Implement EIP-3076 minimal slashing protection, using a filesystem database (#13360)
* `EpochFromString`: Use the already defined `Uint64FromString` function.
* `Test_uint64FromString` ==> `Test_FromString`.
This test function covers functions other than just `Uint64FromString`, hence the more general name.
* Slashing protection history: Remove unreachable code.
The function `NewKVStore` creates, via `kv.UpdatePublicKeysBuckets`,
a new item in the `proposal-history-bucket-interchange` bucket.
IMO there is no real reason to prefer `proposal` over `attestation`
as a prefix for this bucket, but this is the way it is done right now,
and renaming the bucket would probably not be backward compatible.
An `attestedPublicKey` cannot exist without
the corresponding `proposedPublicKey`.
Thus, the `else` branch removed in this commit is unreachable;
we now return an error if it is ever reached.
This is also probably why the removed `else` branch was never tested.
* `NewKVStore`: Switch items in `createBuckets`
so that the order corresponds to `schema.go`.
* `slashableAttestationCheck`: Fix comments and logs.
* `ValidatorClient.db`: Use `iface.ValidatorDB`.
* BoltDB database: Implement `GraffitiFileHash`.
* Filesystem database: Creates `db.go`.
This file defines the following structs:
- `Store`
- `Graffiti`
- `Configuration`
- `ValidatorSlashingProtection`
This file implements the following public functions:
- `NewStore`
- `Close`
- `Backup`
- `DatabasePath`
- `ClearDB`
- `UpdatePublicKeysBuckets`
This file implements the following private functions:
- `slashingProtectionDirPath`
- `configurationFilePath`
- `configuration`
- `saveConfiguration`
- `validatorSlashingProtection`
- `saveValidatorSlashingProtection`
- `publicKeys`
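To make the shape of this store concrete, here is a rough, hypothetical sketch of what a per-validator record in a minimal filesystem database can look like. The type and function names (`ValidatorRecord`, `saveRecord`, `loadRecord`) and the JSON encoding are illustrative assumptions, not the actual Prysm implementation; the real `Store` also tracks the genesis validators root, graffiti, and proposer settings covered below.

```go
package filesystemsketch

import (
	"encoding/hex"
	"encoding/json"
	"os"
	"path/filepath"
)

// ValidatorRecord is a hypothetical per-validator file for a minimal
// EIP-3076 store: only the highest signed epochs and slot are kept.
type ValidatorRecord struct {
	LatestSignedSourceEpoch uint64 `json:"latest_signed_source_epoch"`
	LatestSignedTargetEpoch uint64 `json:"latest_signed_target_epoch"`
	LatestSignedBlockSlot   uint64 `json:"latest_signed_block_slot"`
}

// saveRecord writes the record for a given public key under the store directory.
func saveRecord(storeDir string, pubKey [48]byte, rec ValidatorRecord) error {
	if err := os.MkdirAll(storeDir, 0o700); err != nil {
		return err
	}
	data, err := json.Marshal(rec)
	if err != nil {
		return err
	}
	path := filepath.Join(storeDir, hex.EncodeToString(pubKey[:])+".json")
	return os.WriteFile(path, data, 0o600)
}

// loadRecord reads the record back; a missing file simply means no history yet.
func loadRecord(storeDir string, pubKey [48]byte) (ValidatorRecord, bool, error) {
	path := filepath.Join(storeDir, hex.EncodeToString(pubKey[:])+".json")
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return ValidatorRecord{}, false, nil
	}
	if err != nil {
		return ValidatorRecord{}, false, err
	}
	var rec ValidatorRecord
	if err := json.Unmarshal(data, &rec); err != nil {
		return ValidatorRecord{}, false, err
	}
	return rec, true, nil
}
```

The point of the minimal format is that such a record never grows: new signatures only overwrite these few values.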
* Filesystem database: Creates `genesis.go`.
This file defines the following public functions:
- `GenesisValidatorsRoot`
- `SaveGenesisValidatorsRoot`
* Filesystem database: Creates `graffiti.go`.
This file defines the following public functions:
- `SaveGraffitiOrderedIndex`
- `GraffitiOrderedIndex`
* Filesystem database: Creates `migration.go`.
This file defines the following public functions:
- `RunUpMigrations`
- `RunDownMigrations`
* Filesystem database: Creates `proposer_settings.go`.
This file defines the following public functions:
- `ProposerSettings`
- `ProposerSettingsExists`
- `SaveProposerSettings`
* Filesystem database: Creates `attester_protection.go`.
This file defines the following public functions:
- `EIPImportBlacklistedPublicKeys`
- `SaveEIPImportBlacklistedPublicKeys`
- `SigningRootAtTargetEpoch`
- `LowestSignedTargetEpoch`
- `LowestSignedSourceEpoch`
- `AttestedPublicKeys`
- `CheckSlashableAttestation`
- `SaveAttestationForPubKey`
- `SaveAttestationsForPubKey`
- `AttestationHistoryForPubKey`
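With a minimal database, `CheckSlashableAttestation` no longer scans a full history of signed attestations; it only compares against the single latest signed source/target pair. Below is a simplified sketch of that EIP-3076 minimal rule, using a hypothetical `lastAttestation` type; the real implementation works on the store's own types.

```go
package minimalsketch

import "errors"

// lastAttestation is a hypothetical record of the most recent signed attestation.
type lastAttestation struct {
	SourceEpoch uint64
	TargetEpoch uint64
}

var errSlashableAttestation = errors.New("attestation refused by minimal slashing protection")

// checkAttestation applies the EIP-3076 minimal rule: with only the latest
// signed source/target epochs available, refuse anything whose source epoch
// is lower than the stored source, or whose target epoch is not strictly
// higher than the stored target.
func checkAttestation(last *lastAttestation, source, target uint64) error {
	if last == nil {
		// No history recorded yet for this public key.
		return nil
	}
	if source < last.SourceEpoch || target <= last.TargetEpoch {
		return errSlashableAttestation
	}
	return nil
}
```

`SaveAttestationForPubKey` then essentially overwrites the stored pair with the new source and target epochs.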
* Filesystem database: Creates `proposer_protection.go`.
This file defines the following public functions:
- `HighestSignedProposal`
- `LowestSignedProposal`
- `ProposalHistoryForPubKey`
- `ProposalHistoryForSlot`
- `ProposedPublicKeys`
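The proposer side is the same idea with a single value: only the highest signed slot is kept, and a new block is signed only if its slot is strictly higher. A hypothetical sketch:

```go
package minimalsketch

import "errors"

var errSlashableProposal = errors.New("block proposal refused by minimal slashing protection")

// checkProposal refuses to sign a block unless its slot is strictly higher
// than the highest slot already signed. A nil highestSignedSlot means there
// is no proposal history yet for this public key.
func checkProposal(highestSignedSlot *uint64, slot uint64) error {
	if highestSignedSlot != nil && slot <= *highestSignedSlot {
		return errSlashableProposal
	}
	return nil
}
```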
* Ensure that the filesystem store implements the `ValidatorDB` interface.
* `slashableAttestationCheck`: Check the database type.
* `slashableProposalCheck`: Check the database type.
* `slashableAttestationCheck`: Allow usage of minimal slashing protection.
* `slashableProposalCheck`: Allow usage of minimal slashing protection.
* `ImportStandardProtectionJSON`: Check the database type.
* `ImportStandardProtectionJSON`: Allow usage of minimal slashing protection.
* Implement `RecursiveDirFind`.
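`RecursiveDirFind` mirrors the existing `RecursiveFileFind`: it reports whether a directory with the given name exists anywhere under a root directory, and returns its path. Roughly how the import/export commands use it to locate a minimal database (the data-dir path below is a placeholder):

```go
package main

import (
	"fmt"
	"log"

	"github.com/prysmaticlabs/prysm/v5/io/file"
	"github.com/prysmaticlabs/prysm/v5/validator/db/filesystem"
)

func main() {
	dataDir := "/path/to/prysm-wallet" // placeholder

	// Look for the minimal slashing protection directory under the data dir.
	found, dirPath, err := file.RecursiveDirFind(filesystem.DatabaseDirName, dataDir)
	if err != nil {
		log.Fatalf("error finding validator database at path %s: %v", dataDir, err)
	}
	if !found {
		log.Fatalf("no minimal validator database found under %s", dataDir)
	}
	fmt.Println("found minimal validator database at", dirPath)
}
```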
* Implement minimal<->complete DB conversion.
3 public functions are implemented:
- `IsCompleteDatabaseExisting`
- `IsMinimalDatabaseExisting`
- `ConvertDatabase`
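`ConvertDatabase` is what backs the new `convert-complete-to-minimal` command added below. Conceptually, converting a complete database to a minimal one collapses each validator's full history down to the highest values ever signed. The following is a simplified, hypothetical sketch of that reduction (the stand-in types `fullHistory` and `minimalHistory` are illustrative; the real function works on the actual store types and also migrates data such as the genesis validators root, graffiti, and proposer settings):

```go
package conversionsketch

// fullHistory is a hypothetical stand-in for one validator's complete history.
type fullHistory struct {
	SourceEpochs []uint64 // source epoch of every signed attestation
	TargetEpochs []uint64 // target epoch of every signed attestation
	SignedSlots  []uint64 // slot of every signed block
}

// minimalHistory keeps only what EIP-3076 minimal protection needs.
type minimalHistory struct {
	HighestSourceEpoch uint64
	HighestTargetEpoch uint64
	HighestSignedSlot  uint64
}

// toMinimal collapses a complete history into its minimal equivalent by
// keeping only the highest values ever signed.
func toMinimal(full fullHistory) minimalHistory {
	var m minimalHistory
	for _, e := range full.SourceEpochs {
		if e > m.HighestSourceEpoch {
			m.HighestSourceEpoch = e
		}
	}
	for _, e := range full.TargetEpochs {
		if e > m.HighestTargetEpoch {
			m.HighestTargetEpoch = e
		}
	}
	for _, s := range full.SignedSlots {
		if s > m.HighestSignedSlot {
			m.HighestSignedSlot = s
		}
	}
	return m
}
```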
* `setupDB`: Add `isSlashingProtectionMinimal` argument.
The feature addition is located in `validator/node/node_test.go`.
The rest of this commit consists of minimal slashing protection testing.
* `setupWithKey`: Add `isSlashingProtectionMinimal` argument.
The feature addition is located in `validator/client/propose_test.go`.
The rest of this commit consists of wrapping tests.
* `setup`: Add `isSlashingProtectionMinimal` argument.
The added feature is located in the `validator/client/propose_test.go`
file.
The rest of this commit consists of wrapping tests.
* `initializeFromCLI` and `initializeForWeb`: Factor out the database initialization.
* Add `convert-complete-to-minimal` command.
* Creates `--enable-minimal-slashing-protection` flag.
* `importSlashingProtectionJSON`: Check database type.
* `exportSlashingProtectionJSON`: Check database type.
* `TestClearDB`: Test with minimal slashing protection.
* KeyManager: Test with minimal slashing protection.
* RPC: KeyManager: Test with minimal slashing protection.
* `convert-complete-to-minimal`: Change option names.
Options were:
- `--source` (for source data directory), and
- `--target` (for target data directory)
However, since this command deals with slashing protection, which has
source (epochs) and target (epochs), the initial option names may confuse
the user.
In this commit:
`--source` ==> `--source-data-dir`
`--target` ==> `--target-data-dir`
* Set `SlashableAttestationCheck` as an `iface` method,
and delete `CheckSlashableAttestation` from `iface`.
* Move helper functions into a more general directory.
No functional change.
* Extract common structs out of `kv`.
==> `filesystem` no longer depends on `kv`.
==> `iface` no longer depends on `kv`.
==> `slashing-protection` no longer depends on `kv`.
* Move `ValidateMetadata` into `validator/helpers`.
* `ValidateMetadata`: Test with a mock.
This way, we can:
- Avoid any circular import in tests.
- Implement the `ValidateMetadata` function once for all `iface.ValidatorDB` implementations.
- Have tests (and coverage) of `ValidateMetadata` in its own package.
The ideal solution would have been to implement `ValidateMetadata` as
a method with an `iface.ValidatorDB` receiver.
Unfortunately, Go does not allow that.
* `iface.ValidatorDB`: Implement ImportStandardProtectionJSON.
The whole purpose of this commit is to avoid the `switch validatorDB.(type)`
in `ImportStandardProtectionJSON`.
* `iface.ValidatorDB`: Implement `SlashableProposalCheck`.
* Remove now useless `slashableProposalCheck`.
* Delete useless `ImportStandardProtectionJSON`.
* `file.Exists`: Detect directories and return an error.
Before, `Exists` could only detect whether a regular file exists.
Now, the function takes an extra argument (`Regular` or `Directory`)
and detects whether a file or a directory exists.
Before, if `os.Stat` returned an error, the file was simply
considered non-existent.
Now, such an error is treated as a real error and returned;
call sites are updated accordingly, as sketched below.
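With the new signature, callers get an explicit `(bool, error)` pair and must state whether they expect a regular file or a directory. A small usage sketch (the paths are placeholders):

```go
package main

import (
	"fmt"
	"log"

	"github.com/prysmaticlabs/prysm/v5/io/file"
)

func main() {
	dbPath := "/path/to/validator.db" // placeholder

	// New signature: the caller states whether it expects a regular file or
	// a directory, and Stat errors are no longer silently swallowed.
	exists, err := file.Exists(dbPath, file.Regular)
	if err != nil {
		log.Fatalf("could not check if database file exists: %v", err)
	}
	fmt.Println("database file exists:", exists)

	dirExists, err := file.Exists("/path/to/datadir", file.Directory)
	if err != nil {
		log.Fatalf("could not check if data directory exists: %v", err)
	}
	fmt.Println("data directory exists:", dirExists)
}
```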
* Replace `os.Stat` with `file.Exists`.
* Remove `Is{Complete,Minimal}DatabaseExisting`.
* `publicKeys`: Add a log if an unexpected file is found.
* Move `{Source,Target}DataDirFlag` into `db.go`.
* `failedAttLocalProtectionErr`: `var` ==> `const`.
* `signingRoot`: `32` ==> `fieldparams.RootLength`.
* `validatorClientData` ==> `validator-client-data`,
to be consistent with `slashing-protection`.
* Add progress bars for `import` and `convert`.
* `parseBlocksForUniquePublicKeys`: Move in `db/kv`.
* helpers: Remove unused `initializeProgressBar` function.
@@ -22,7 +22,14 @@ func Restore(cliCtx *cli.Context) error {
|
||||
targetDir := cliCtx.String(cmd.RestoreTargetDirFlag.Name)
|
||||
|
||||
restoreDir := path.Join(targetDir, kv.BeaconNodeDbDirName)
|
||||
if file.Exists(path.Join(restoreDir, kv.DatabaseFileName)) {
|
||||
restoreFile := path.Join(restoreDir, kv.DatabaseFileName)
|
||||
|
||||
dbExists, err := file.Exists(restoreFile, file.Regular)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not check if database exists in %s", restoreFile)
|
||||
}
|
||||
|
||||
if dbExists {
|
||||
resp, err := prompt.ValidatePrompt(
|
||||
os.Stdin, dbExistsYesNoPrompt, prompt.ValidateYesOrNo,
|
||||
)
|
||||
|
||||
@@ -98,7 +98,9 @@ func TestBackupAccounts_Noninteractive_Derived(t *testing.T) {
|
||||
|
||||
// We check a backup.zip file was created at the output path.
|
||||
zipFilePath := filepath.Join(backupDir, accounts.ArchiveFilename)
|
||||
assert.DeepEqual(t, true, file.Exists(zipFilePath))
|
||||
fileExists, err := file.Exists(zipFilePath, file.Regular)
|
||||
require.NoError(t, err, "could not check if backup file exists")
|
||||
assert.Equal(t, true, fileExists, "backup file does not exist")
|
||||
|
||||
// We attempt to unzip the file and verify the keystores do match our accounts.
|
||||
f, err := os.Open(zipFilePath)
|
||||
@@ -189,7 +191,9 @@ func TestBackupAccounts_Noninteractive_Imported(t *testing.T) {
|
||||
|
||||
// We check a backup.zip file was created at the output path.
|
||||
zipFilePath := filepath.Join(backupDir, accounts.ArchiveFilename)
|
||||
assert.DeepEqual(t, true, file.Exists(zipFilePath))
|
||||
exists, err := file.Exists(zipFilePath, file.Regular)
|
||||
require.NoError(t, err, "could not check if backup file exists")
|
||||
assert.Equal(t, true, exists, "backup file does not exist")
|
||||
|
||||
// We attempt to unzip the file and verify the keystores do match our accounts.
|
||||
f, err := os.Open(zipFilePath)
|
||||
|
||||
@@ -395,5 +395,7 @@ func TestExitAccountsCli_WriteJSON_NoBroadcast(t *testing.T) {
|
||||
require.Equal(t, 1, len(formattedExitedKeys))
|
||||
assert.Equal(t, "0x"+keystore.Pubkey[:12], formattedExitedKeys[0])
|
||||
|
||||
require.Equal(t, true, file.Exists(path.Join(out, "validator-exit-1.json")), "Expected file to exist")
|
||||
exists, err := file.Exists(path.Join(out, "validator-exit-1.json"), file.Regular)
|
||||
require.NoError(t, err, "could not check if exit file exists")
|
||||
require.Equal(t, true, exists, "Expected file to exist")
|
||||
}
|
||||
|
||||
@@ -10,6 +10,22 @@ import (
|
||||
|
||||
var log = logrus.WithField("prefix", "db")
|
||||
|
||||
var (
|
||||
// SourceDataDirFlag defines a path on disk where source Prysm databases are stored. Used for conversion.
|
||||
SourceDataDirFlag = &cli.StringFlag{
|
||||
Name: "source-data-dir",
|
||||
Usage: "Source data directory",
|
||||
Required: true,
|
||||
}
|
||||
|
||||
// SourceDataDirFlag defines a path on disk where source Prysm databases are stored. Used for conversion.
|
||||
TargetDataDirFlag = &cli.StringFlag{
|
||||
Name: "target-data-dir",
|
||||
Usage: "Target data directory",
|
||||
Required: true,
|
||||
}
|
||||
)
|
||||
|
||||
// Commands for interacting with the Prysm validator database.
|
||||
var Commands = &cli.Command{
|
||||
Name: "db",
|
||||
@@ -66,5 +82,29 @@ var Commands = &cli.Command{
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
Name: "convert-complete-to-minimal",
|
||||
Category: "db",
|
||||
Usage: "Convert a complete EIP-3076 slashing protection to a minimal one",
|
||||
Flags: []cli.Flag{
|
||||
SourceDataDirFlag,
|
||||
TargetDataDirFlag,
|
||||
},
|
||||
Before: func(cliCtx *cli.Context) error {
|
||||
return cmd.LoadFlagsFromConfig(cliCtx, cliCtx.Command.Flags)
|
||||
},
|
||||
Action: func(cliCtx *cli.Context) error {
|
||||
sourcedDatabasePath := cliCtx.String(SourceDataDirFlag.Name)
|
||||
targetDatabasePath := cliCtx.String(TargetDataDirFlag.Name)
|
||||
|
||||
// Convert the database
|
||||
err := validatordb.ConvertDatabase(cliCtx.Context, sourcedDatabasePath, targetDatabasePath, false)
|
||||
if err != nil {
|
||||
log.WithError(err).Fatal("Could not convert database")
|
||||
}
|
||||
|
||||
return nil
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
@@ -17,8 +17,11 @@ go_library(
|
||||
"//io/file:go_default_library",
|
||||
"//runtime/tos:go_default_library",
|
||||
"//validator/accounts/userprompt:go_default_library",
|
||||
"//validator/db/filesystem:go_default_library",
|
||||
"//validator/db/iface:go_default_library",
|
||||
"//validator/db/kv:go_default_library",
|
||||
"//validator/slashing-protection-history:go_default_library",
|
||||
"//validator/slashing-protection-history/format:go_default_library",
|
||||
"@com_github_pkg_errors//:go_default_library",
|
||||
"@com_github_sirupsen_logrus//:go_default_library",
|
||||
"@com_github_urfave_cli_v2//:go_default_library",
|
||||
@@ -35,7 +38,7 @@ go_test(
|
||||
"//io/file:go_default_library",
|
||||
"//testing/assert:go_default_library",
|
||||
"//testing/require:go_default_library",
|
||||
"//validator/db/kv:go_default_library",
|
||||
"//validator/db/common:go_default_library",
|
||||
"//validator/db/testing:go_default_library",
|
||||
"//validator/slashing-protection-history/format:go_default_library",
|
||||
"//validator/testing:go_default_library",
|
||||
|
||||
@@ -8,10 +8,14 @@ import (
|
||||
"github.com/pkg/errors"
|
||||
"github.com/prysmaticlabs/prysm/v5/cmd"
|
||||
"github.com/prysmaticlabs/prysm/v5/cmd/validator/flags"
|
||||
"github.com/prysmaticlabs/prysm/v5/config/features"
|
||||
"github.com/prysmaticlabs/prysm/v5/io/file"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/accounts/userprompt"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/filesystem"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/iface"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/kv"
|
||||
slashingprotection "github.com/prysmaticlabs/prysm/v5/validator/slashing-protection-history"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/slashing-protection-history/format"
|
||||
"github.com/urfave/cli/v2"
|
||||
)
|
||||
|
||||
@@ -30,11 +34,21 @@ const (
|
||||
// the validator's db into an EIP standard slashing protection format
|
||||
// 4. Format and save the JSON file to a user's specified output directory.
|
||||
func exportSlashingProtectionJSON(cliCtx *cli.Context) error {
|
||||
var (
|
||||
validatorDB iface.ValidatorDB
|
||||
found bool
|
||||
err error
|
||||
)
|
||||
|
||||
log.Info(
|
||||
"This command exports your validator's attestation and proposal history into " +
|
||||
"a file that can then be imported into any other Prysm setup across computers",
|
||||
)
|
||||
var err error
|
||||
|
||||
// Check if a minimal database is requested
|
||||
isDatabaseMinimal := cliCtx.Bool(features.EnableMinimalSlashingProtection.Name)
|
||||
|
||||
// Read the data directory from the CLI context.
|
||||
dataDir := cliCtx.String(cmd.DataDirFlag.Name)
|
||||
if !cliCtx.IsSet(cmd.DataDirFlag.Name) {
|
||||
dataDir, err = userprompt.InputDirectory(cliCtx, userprompt.DataDirDirPromptText, cmd.DataDirFlag)
|
||||
@@ -42,27 +56,45 @@ func exportSlashingProtectionJSON(cliCtx *cli.Context) error {
|
||||
return errors.Wrapf(err, "could not read directory value from input")
|
||||
}
|
||||
}
|
||||
// ensure that the validator.db is found under the specified dir or its subdirectories
|
||||
found, _, err := file.RecursiveFileFind(kv.ProtectionDbFileName, dataDir)
|
||||
|
||||
// Ensure that the database is found under the specified dir or its subdirectories
|
||||
if isDatabaseMinimal {
|
||||
found, _, err = file.RecursiveDirFind(filesystem.DatabaseDirName, dataDir)
|
||||
} else {
|
||||
found, _, err = file.RecursiveFileFind(kv.ProtectionDbFileName, dataDir)
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "error finding validator database at path %s", dataDir)
|
||||
}
|
||||
|
||||
if !found {
|
||||
return fmt.Errorf(
|
||||
"validator.db file (validator database) was not found at path %s, so nothing to export",
|
||||
dataDir,
|
||||
)
|
||||
databaseFileDir := kv.ProtectionDbFileName
|
||||
if isDatabaseMinimal {
|
||||
databaseFileDir = filesystem.DatabaseDirName
|
||||
}
|
||||
return fmt.Errorf("%s (validator database) was not found at path %s, so nothing to export", databaseFileDir, dataDir)
|
||||
}
|
||||
|
||||
// Open the validator database.
|
||||
if isDatabaseMinimal {
|
||||
validatorDB, err = filesystem.NewStore(dataDir, nil)
|
||||
} else {
|
||||
validatorDB, err = kv.NewKVStore(cliCtx.Context, dataDir, nil)
|
||||
}
|
||||
|
||||
validatorDB, err := kv.NewKVStore(cliCtx.Context, dataDir, &kv.Config{})
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not access validator database at path %s", dataDir)
|
||||
}
|
||||
|
||||
// Close the database when we're done.
|
||||
defer func() {
|
||||
if err := validatorDB.Close(); err != nil {
|
||||
log.WithError(err).Errorf("Could not close validator DB")
|
||||
}
|
||||
}()
|
||||
|
||||
// Export the slashing protection history from the validator's database.
|
||||
eipJSON, err := slashingprotection.ExportStandardProtectionJSON(cliCtx.Context, validatorDB)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not export slashing protection history")
|
||||
@@ -79,39 +111,60 @@ func exportSlashingProtectionJSON(cliCtx *cli.Context) error {
|
||||
)
|
||||
}
|
||||
|
||||
// Write the result to the output file
|
||||
if err := writeToOutput(cliCtx, eipJSON); err != nil {
|
||||
return errors.Wrap(err, "could not write slashing protection history to output file")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func writeToOutput(cliCtx *cli.Context, eipJSON *format.EIPSlashingProtectionFormat) error {
|
||||
// Get the output directory where the slashing protection history file will be stored
|
||||
outputDir, err := userprompt.InputDirectory(
|
||||
cliCtx,
|
||||
"Enter your desired output directory for your slashing protection history file",
|
||||
flags.SlashingProtectionExportDirFlag,
|
||||
)
|
||||
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not get slashing protection json file")
|
||||
}
|
||||
|
||||
if outputDir == "" {
|
||||
return errors.New("output directory not specified")
|
||||
}
|
||||
|
||||
// Check is the output directory already exists, if not, create it
|
||||
exists, err := file.HasDir(outputDir)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not check if output directory %s already exists", outputDir)
|
||||
}
|
||||
|
||||
if !exists {
|
||||
if err := file.MkdirAll(outputDir); err != nil {
|
||||
return errors.Wrapf(err, "could not create output directory %s", outputDir)
|
||||
}
|
||||
}
|
||||
|
||||
// Write into the output file
|
||||
outputFilePath := filepath.Join(outputDir, jsonExportFileName)
|
||||
log.Infof("Writing slashing protection export JSON file to %s", outputFilePath)
|
||||
|
||||
encoded, err := json.MarshalIndent(eipJSON, "", "\t")
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not JSON marshal slashing protection history")
|
||||
}
|
||||
|
||||
if err := file.WriteFile(outputFilePath, encoded); err != nil {
|
||||
return errors.Wrapf(err, "could not write file to path %s", outputFilePath)
|
||||
}
|
||||
|
||||
log.Infof(
|
||||
"Successfully wrote %s. You can import this file using Prysm's "+
|
||||
"validator slashing-protection-history import command in another machine",
|
||||
outputFilePath,
|
||||
)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -7,10 +7,12 @@ import (
|
||||
"github.com/pkg/errors"
|
||||
"github.com/prysmaticlabs/prysm/v5/cmd"
|
||||
"github.com/prysmaticlabs/prysm/v5/cmd/validator/flags"
|
||||
"github.com/prysmaticlabs/prysm/v5/config/features"
|
||||
"github.com/prysmaticlabs/prysm/v5/io/file"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/accounts/userprompt"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/filesystem"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/iface"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/kv"
|
||||
slashingprotection "github.com/prysmaticlabs/prysm/v5/validator/slashing-protection-history"
|
||||
"github.com/urfave/cli/v2"
|
||||
)
|
||||
|
||||
@@ -24,7 +26,16 @@ import (
|
||||
// 4. Call the function which actually imports the data from
|
||||
// the standard slashing protection JSON file into our database.
|
||||
func importSlashingProtectionJSON(cliCtx *cli.Context) error {
|
||||
var err error
|
||||
var (
|
||||
valDB iface.ValidatorDB
|
||||
found bool
|
||||
err error
|
||||
)
|
||||
|
||||
// Check if a minimal database is requested
|
||||
isDatabaseMimimal := cliCtx.Bool(features.EnableMinimalSlashingProtection.Name)
|
||||
|
||||
// Get the data directory from the CLI context.
|
||||
dataDir := cliCtx.String(cmd.DataDirFlag.Name)
|
||||
if !cliCtx.IsSet(cmd.DataDirFlag.Name) {
|
||||
dataDir, err = userprompt.InputDirectory(cliCtx, userprompt.DataDirDirPromptText, cmd.DataDirFlag)
|
||||
@@ -32,28 +43,44 @@ func importSlashingProtectionJSON(cliCtx *cli.Context) error {
|
||||
return errors.Wrapf(err, "could not read directory value from input")
|
||||
}
|
||||
}
|
||||
// ensure that the validator.db is found under the specified dir or its subdirectories
|
||||
found, _, err := file.RecursiveFileFind(kv.ProtectionDbFileName, dataDir)
|
||||
|
||||
// Ensure that the database is found under the specified directory or its subdirectories
|
||||
if isDatabaseMimimal {
|
||||
found, _, err = file.RecursiveDirFind(filesystem.DatabaseDirName, dataDir)
|
||||
} else {
|
||||
found, _, err = file.RecursiveFileFind(kv.ProtectionDbFileName, dataDir)
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "error finding validator database at path %s", dataDir)
|
||||
}
|
||||
|
||||
message := "Found existing database inside of %s"
|
||||
if !found {
|
||||
log.Infof(
|
||||
"Did not find existing validator.db inside of %s, creating a new one",
|
||||
dataDir,
|
||||
)
|
||||
} else {
|
||||
log.Infof("Found existing validator.db inside of %s", dataDir)
|
||||
message = "Did not find existing database inside of %s, creating a new one"
|
||||
}
|
||||
valDB, err := kv.NewKVStore(cliCtx.Context, dataDir, &kv.Config{})
|
||||
|
||||
log.Infof(message, dataDir)
|
||||
|
||||
// Open the validator database.
|
||||
if isDatabaseMimimal {
|
||||
valDB, err = filesystem.NewStore(dataDir, nil)
|
||||
} else {
|
||||
valDB, err = kv.NewKVStore(cliCtx.Context, dataDir, nil)
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not access validator database at path: %s", dataDir)
|
||||
}
|
||||
|
||||
// Close the database when we're done.
|
||||
defer func() {
|
||||
if err := valDB.Close(); err != nil {
|
||||
log.WithError(err).Errorf("Could not close validator DB")
|
||||
}
|
||||
}()
|
||||
|
||||
// Get the path to the slashing protection JSON file from the CLI context.
|
||||
protectionFilePath, err := userprompt.InputDirectory(cliCtx, userprompt.SlashingProtectionJSONPromptText, flags.SlashingProtectionJSONFileFlag)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not get slashing protection json file")
|
||||
@@ -65,17 +92,22 @@ func importSlashingProtectionJSON(cliCtx *cli.Context) error {
|
||||
flags.SlashingProtectionJSONFileFlag.Name,
|
||||
)
|
||||
}
|
||||
|
||||
// Read the JSON file from user input.
|
||||
enc, err := file.ReadFileAsBytes(protectionFilePath)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Import the data from the standard slashing protection JSON file into our database.
|
||||
log.Infof("Starting import of slashing protection file %s", protectionFilePath)
|
||||
buf := bytes.NewBuffer(enc)
|
||||
if err := slashingprotection.ImportStandardProtectionJSON(
|
||||
cliCtx.Context, valDB, buf,
|
||||
); err != nil {
|
||||
return err
|
||||
|
||||
if err := valDB.ImportStandardProtectionJSON(cliCtx.Context, buf); err != nil {
|
||||
return errors.Wrapf(err, "could not import slashing protection JSON file %s", protectionFilePath)
|
||||
}
|
||||
|
||||
log.Infof("Slashing protection JSON successfully imported into %s", dataDir)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -11,7 +11,7 @@ import (
|
||||
"github.com/prysmaticlabs/prysm/v5/io/file"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/assert"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/require"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/kv"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/common"
|
||||
dbTest "github.com/prysmaticlabs/prysm/v5/validator/db/testing"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/slashing-protection-history/format"
|
||||
mocks "github.com/prysmaticlabs/prysm/v5/validator/testing"
|
||||
@@ -35,6 +35,11 @@ func setupCliCtx(
|
||||
return cli.NewContext(&app, set, nil)
|
||||
}
|
||||
|
||||
// TestImportExportSlashingProtectionCli_RoundTrip imports a EIP-3076 interchange format JSON file,
|
||||
// and exports it back to disk. It then compare the exported file to the original file.
|
||||
// This test is only suitable for complete slashing protection history database, since minimal
|
||||
// slashing protection history database will keep only the latest signed block slot / attestations,
|
||||
// and thus will not be able to export the same data as the original file.
|
||||
func TestImportExportSlashingProtectionCli_RoundTrip(t *testing.T) {
|
||||
numValidators := 10
|
||||
outputPath := filepath.Join(t.TempDir(), "slashing-exports")
|
||||
@@ -59,7 +64,8 @@ func TestImportExportSlashingProtectionCli_RoundTrip(t *testing.T) {
|
||||
require.NoError(t, err)
|
||||
|
||||
// We create a CLI context with the required values, such as the database datadir and output directory.
|
||||
validatorDB := dbTest.SetupDB(t, pubKeys)
|
||||
isSlashingProtectionMinimal := false
|
||||
validatorDB := dbTest.SetupDB(t, pubKeys, isSlashingProtectionMinimal)
|
||||
dbPath := validatorDB.DatabasePath()
|
||||
require.NoError(t, validatorDB.Close())
|
||||
cliCtx := setupCliCtx(t, dbPath, protectionFilePath, outputPath)
|
||||
@@ -108,6 +114,11 @@ func TestImportExportSlashingProtectionCli_RoundTrip(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
// TestImportExportSlashingProtectionCli_EmptyData imports a EIP-3076 interchange format JSON file,
|
||||
// and exports it back to disk. It then compare the exported file to the original file.
|
||||
// This test is only suitable for complete slashing protection history database, since minimal
|
||||
// slashing protection history database will keep only the latest signed block slot / attestations,
|
||||
// and thus will not be able to export the same data as the original file.
|
||||
func TestImportExportSlashingProtectionCli_EmptyData(t *testing.T) {
|
||||
numValidators := 10
|
||||
outputPath := filepath.Join(t.TempDir(), "slashing-exports")
|
||||
@@ -118,10 +129,10 @@ func TestImportExportSlashingProtectionCli_EmptyData(t *testing.T) {
|
||||
// Create some mock slashing protection history. and JSON file
|
||||
pubKeys, err := mocks.CreateRandomPubKeys(numValidators)
|
||||
require.NoError(t, err)
|
||||
attestingHistory := make([][]*kv.AttestationRecord, 0)
|
||||
proposalHistory := make([]kv.ProposalHistoryForPubkey, len(pubKeys))
|
||||
attestingHistory := make([][]*common.AttestationRecord, 0)
|
||||
proposalHistory := make([]common.ProposalHistoryForPubkey, len(pubKeys))
|
||||
for i := 0; i < len(pubKeys); i++ {
|
||||
proposalHistory[i].Proposals = make([]kv.Proposal, 0)
|
||||
proposalHistory[i].Proposals = make([]common.Proposal, 0)
|
||||
}
|
||||
mockJSON, err := mocks.MockSlashingProtectionJSON(pubKeys, attestingHistory, proposalHistory)
|
||||
require.NoError(t, err)
|
||||
@@ -135,7 +146,8 @@ func TestImportExportSlashingProtectionCli_EmptyData(t *testing.T) {
|
||||
require.NoError(t, err)
|
||||
|
||||
// We create a CLI context with the required values, such as the database datadir and output directory.
|
||||
validatorDB := dbTest.SetupDB(t, pubKeys)
|
||||
isSlashingProtectionMinimal := false
|
||||
validatorDB := dbTest.SetupDB(t, pubKeys, isSlashingProtectionMinimal)
|
||||
dbPath := validatorDB.DatabasePath()
|
||||
require.NoError(t, validatorDB.Close())
|
||||
cliCtx := setupCliCtx(t, dbPath, protectionFilePath, outputPath)
|
||||
|
||||
@@ -25,6 +25,7 @@ var Commands = &cli.Command{
|
||||
features.PraterTestnet,
|
||||
features.SepoliaTestnet,
|
||||
features.HoleskyTestnet,
|
||||
features.EnableMinimalSlashingProtection,
|
||||
cmd.AcceptTosFlag,
|
||||
}),
|
||||
Before: func(cliCtx *cli.Context) error {
|
||||
@@ -53,6 +54,7 @@ var Commands = &cli.Command{
|
||||
features.PraterTestnet,
|
||||
features.SepoliaTestnet,
|
||||
features.HoleskyTestnet,
|
||||
features.EnableMinimalSlashingProtection,
|
||||
cmd.AcceptTosFlag,
|
||||
}),
|
||||
Before: func(cliCtx *cli.Context) error {
|
||||
|
||||
@@ -57,7 +57,8 @@ type Flags struct {
|
||||
AttestTimely bool // AttestTimely fixes #8185. It is gated behind a flag to ensure beacon node's fix can safely roll out first. We'll invert this in v1.1.0.
|
||||
|
||||
EnableSlasher bool // Enable slasher in the beacon node runtime.
|
||||
EnableSlashingProtectionPruning bool // EnableSlashingProtectionPruning for the validator client.
|
||||
EnableSlashingProtectionPruning bool // Enable slashing protection pruning for the validator client.
|
||||
EnableMinimalSlashingProtection bool // Enable minimal slashing protection database for the validator client.
|
||||
|
||||
SaveFullExecutionPayloads bool // Save full beacon blocks with execution payloads in the database.
|
||||
EnableStartOptimistic bool // EnableStartOptimistic treats every block as optimistic at startup.
|
||||
@@ -276,6 +277,10 @@ func ConfigureValidator(ctx *cli.Context) error {
|
||||
logEnabled(enableSlashingProtectionPruning)
|
||||
cfg.EnableSlashingProtectionPruning = true
|
||||
}
|
||||
if ctx.Bool(EnableMinimalSlashingProtection.Name) {
|
||||
logEnabled(EnableMinimalSlashingProtection)
|
||||
cfg.EnableMinimalSlashingProtection = true
|
||||
}
|
||||
if ctx.Bool(enableDoppelGangerProtection.Name) {
|
||||
logEnabled(enableDoppelGangerProtection)
|
||||
cfg.EnableDoppelGanger = true
|
||||
|
||||
@@ -95,6 +95,10 @@ var (
|
||||
Name: "enable-slashing-protection-history-pruning",
|
||||
Usage: "Enables the pruning of the validator client's slashing protection database.",
|
||||
}
|
||||
EnableMinimalSlashingProtection = &cli.BoolFlag{
|
||||
Name: "enable-minimal-slashing-protection",
|
||||
Usage: "Enables the minimal slashing protection. See EIP-3076 for more details.",
|
||||
}
|
||||
enableDoppelGangerProtection = &cli.BoolFlag{
|
||||
Name: "enable-doppelganger",
|
||||
Usage: `Enables the validator to perform a doppelganger check.
|
||||
@@ -177,6 +181,7 @@ var ValidatorFlags = append(deprecatedFlags, []cli.Flag{
|
||||
dynamicKeyReloadDebounceInterval,
|
||||
attestTimely,
|
||||
enableSlashingProtectionPruning,
|
||||
EnableMinimalSlashingProtection,
|
||||
enableDoppelGangerProtection,
|
||||
EnableBeaconRESTApi,
|
||||
}...)
|
||||
|
||||
@@ -820,102 +820,108 @@ func TestProposerSettingsLoader(t *testing.T) {
|
||||
},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
t.Run(fmt.Sprintf("%v-minimal:%v", tt.name, isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
app := cli.App{}
|
||||
set := flag.NewFlagSet("test", 0)
|
||||
if tt.args.proposerSettingsFlagValues.dir != "" {
|
||||
set.String(flags.ProposerSettingsFlag.Name, tt.args.proposerSettingsFlagValues.dir, "")
|
||||
require.NoError(t, set.Set(flags.ProposerSettingsFlag.Name, tt.args.proposerSettingsFlagValues.dir))
|
||||
}
|
||||
if tt.args.proposerSettingsFlagValues.url != "" {
|
||||
content, err := os.ReadFile(tt.args.proposerSettingsFlagValues.url)
|
||||
require.NoError(t, err)
|
||||
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
w.WriteHeader(200)
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
_, err := fmt.Fprintf(w, "%s", content)
|
||||
require.NoError(t, err)
|
||||
}))
|
||||
defer srv.Close()
|
||||
|
||||
set.String(flags.ProposerSettingsURLFlag.Name, tt.args.proposerSettingsFlagValues.url, "")
|
||||
require.NoError(t, set.Set(flags.ProposerSettingsURLFlag.Name, srv.URL))
|
||||
}
|
||||
if tt.args.proposerSettingsFlagValues.defaultfee != "" {
|
||||
set.String(flags.SuggestedFeeRecipientFlag.Name, tt.args.proposerSettingsFlagValues.defaultfee, "")
|
||||
require.NoError(t, set.Set(flags.SuggestedFeeRecipientFlag.Name, tt.args.proposerSettingsFlagValues.defaultfee))
|
||||
}
|
||||
if tt.args.proposerSettingsFlagValues.defaultgas != "" {
|
||||
set.String(flags.BuilderGasLimitFlag.Name, tt.args.proposerSettingsFlagValues.defaultgas, "")
|
||||
require.NoError(t, set.Set(flags.BuilderGasLimitFlag.Name, tt.args.proposerSettingsFlagValues.defaultgas))
|
||||
}
|
||||
if tt.validatorRegistrationEnabled {
|
||||
set.Bool(flags.EnableBuilderFlag.Name, true, "")
|
||||
}
|
||||
cliCtx := cli.NewContext(&app, set, nil)
|
||||
validatorDB := dbTest.SetupDB(t, [][fieldparams.BLSPubkeyLength]byte{}, isSlashingProtectionMinimal)
|
||||
if tt.withdb != nil {
|
||||
err := tt.withdb(validatorDB)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
loader, err := NewProposerSettingsLoader(
|
||||
cliCtx,
|
||||
validatorDB,
|
||||
WithBuilderConfig(),
|
||||
WithGasLimit(),
|
||||
)
|
||||
if tt.wantInitErr != "" {
|
||||
require.ErrorContains(t, tt.wantInitErr, err)
|
||||
return
|
||||
} else {
|
||||
require.NoError(t, err)
|
||||
}
|
||||
got, err := loader.Load(cliCtx)
|
||||
if tt.wantErr != "" {
|
||||
require.ErrorContains(t, tt.wantErr, err)
|
||||
return
|
||||
}
|
||||
if tt.wantLog != "" {
|
||||
assert.LogsContain(t, hook,
|
||||
tt.wantLog,
|
||||
)
|
||||
}
|
||||
w := tt.want()
|
||||
require.DeepEqual(t, w, got)
|
||||
if !tt.skipDBSavedCheck {
|
||||
dbSettings, err := validatorDB.ProposerSettings(cliCtx.Context)
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, w, dbSettings)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func Test_ProposerSettingsLoaderWithOnlyBuilder_DoesNotSaveInDB(t *testing.T) {
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
t.Run(fmt.Sprintf("minimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
app := cli.App{}
|
||||
set := flag.NewFlagSet("test", 0)
|
||||
if tt.args.proposerSettingsFlagValues.dir != "" {
|
||||
set.String(flags.ProposerSettingsFlag.Name, tt.args.proposerSettingsFlagValues.dir, "")
|
||||
require.NoError(t, set.Set(flags.ProposerSettingsFlag.Name, tt.args.proposerSettingsFlagValues.dir))
|
||||
}
|
||||
if tt.args.proposerSettingsFlagValues.url != "" {
|
||||
content, err := os.ReadFile(tt.args.proposerSettingsFlagValues.url)
|
||||
require.NoError(t, err)
|
||||
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
|
||||
w.WriteHeader(200)
|
||||
w.Header().Set("Content-Type", "application/json")
|
||||
_, err := fmt.Fprintf(w, "%s", content)
|
||||
require.NoError(t, err)
|
||||
}))
|
||||
defer srv.Close()
|
||||
|
||||
set.String(flags.ProposerSettingsURLFlag.Name, tt.args.proposerSettingsFlagValues.url, "")
|
||||
require.NoError(t, set.Set(flags.ProposerSettingsURLFlag.Name, srv.URL))
|
||||
}
|
||||
if tt.args.proposerSettingsFlagValues.defaultfee != "" {
|
||||
set.String(flags.SuggestedFeeRecipientFlag.Name, tt.args.proposerSettingsFlagValues.defaultfee, "")
|
||||
require.NoError(t, set.Set(flags.SuggestedFeeRecipientFlag.Name, tt.args.proposerSettingsFlagValues.defaultfee))
|
||||
}
|
||||
if tt.args.proposerSettingsFlagValues.defaultgas != "" {
|
||||
set.String(flags.BuilderGasLimitFlag.Name, tt.args.proposerSettingsFlagValues.defaultgas, "")
|
||||
require.NoError(t, set.Set(flags.BuilderGasLimitFlag.Name, tt.args.proposerSettingsFlagValues.defaultgas))
|
||||
}
|
||||
if tt.validatorRegistrationEnabled {
|
||||
set.Bool(flags.EnableBuilderFlag.Name, true, "")
|
||||
}
|
||||
set.Bool(flags.EnableBuilderFlag.Name, true, "")
|
||||
cliCtx := cli.NewContext(&app, set, nil)
|
||||
validatorDB := dbTest.SetupDB(t, [][fieldparams.BLSPubkeyLength]byte{})
|
||||
if tt.withdb != nil {
|
||||
err := tt.withdb(validatorDB)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
validatorDB := dbTest.SetupDB(t, [][fieldparams.BLSPubkeyLength]byte{}, isSlashingProtectionMinimal)
|
||||
loader, err := NewProposerSettingsLoader(
|
||||
cliCtx,
|
||||
validatorDB,
|
||||
WithBuilderConfig(),
|
||||
WithGasLimit(),
|
||||
)
|
||||
if tt.wantInitErr != "" {
|
||||
require.ErrorContains(t, tt.wantInitErr, err)
|
||||
return
|
||||
} else {
|
||||
require.NoError(t, err)
|
||||
}
|
||||
require.NoError(t, err)
|
||||
got, err := loader.Load(cliCtx)
|
||||
if tt.wantErr != "" {
|
||||
require.ErrorContains(t, tt.wantErr, err)
|
||||
return
|
||||
}
|
||||
if tt.wantLog != "" {
|
||||
assert.LogsContain(t, hook,
|
||||
tt.wantLog,
|
||||
)
|
||||
}
|
||||
w := tt.want()
|
||||
require.DeepEqual(t, w, got)
|
||||
if !tt.skipDBSavedCheck {
|
||||
dbSettings, err := validatorDB.ProposerSettings(cliCtx.Context)
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, w, dbSettings)
|
||||
require.NoError(t, err)
|
||||
_, err = validatorDB.ProposerSettings(cliCtx.Context)
|
||||
require.ErrorContains(t, "no proposer settings found in bucket", err)
|
||||
want := &proposer.Settings{
|
||||
DefaultConfig: &proposer.Option{
|
||||
BuilderConfig: &proposer.BuilderConfig{
|
||||
Enabled: true,
|
||||
GasLimit: validator.Uint64(params.BeaconConfig().DefaultBuilderGasLimit),
|
||||
Relays: nil,
|
||||
},
|
||||
},
|
||||
}
|
||||
require.DeepEqual(t, want, got)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func Test_ProposerSettingsLoaderWithOnlyBuilder_DoesNotSaveInDB(t *testing.T) {
|
||||
app := cli.App{}
|
||||
set := flag.NewFlagSet("test", 0)
|
||||
set.Bool(flags.EnableBuilderFlag.Name, true, "")
|
||||
cliCtx := cli.NewContext(&app, set, nil)
|
||||
validatorDB := dbTest.SetupDB(t, [][fieldparams.BLSPubkeyLength]byte{})
|
||||
loader, err := NewProposerSettingsLoader(
|
||||
cliCtx,
|
||||
validatorDB,
|
||||
WithBuilderConfig(),
|
||||
WithGasLimit(),
|
||||
)
|
||||
require.NoError(t, err)
|
||||
got, err := loader.Load(cliCtx)
|
||||
require.NoError(t, err)
|
||||
_, err = validatorDB.ProposerSettings(cliCtx.Context)
|
||||
require.ErrorContains(t, "no proposer settings found in bucket", err)
|
||||
want := &proposer.Settings{
|
||||
DefaultConfig: &proposer.Option{
|
||||
BuilderConfig: &proposer.BuilderConfig{
|
||||
Enabled: true,
|
||||
GasLimit: validator.Uint64(params.BeaconConfig().DefaultBuilderGasLimit),
|
||||
Relays: nil,
|
||||
},
|
||||
},
|
||||
}
|
||||
require.DeepEqual(t, want, got)
|
||||
}
|
||||
|
||||
@@ -8,7 +8,6 @@ go_library(
|
||||
deps = [
|
||||
"//config/params:go_default_library",
|
||||
"@com_github_pkg_errors//:go_default_library",
|
||||
"@com_github_sirupsen_logrus//:go_default_library",
|
||||
],
|
||||
)
|
||||
|
||||
|
||||
@@ -14,7 +14,13 @@ import (
|
||||
|
||||
"github.com/pkg/errors"
|
||||
"github.com/prysmaticlabs/prysm/v5/config/params"
|
||||
log "github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
type ObjType int
|
||||
|
||||
const (
|
||||
Regular ObjType = iota
|
||||
Directory
|
||||
)
|
||||
|
||||
// ExpandPath given a string which may be a relative path.
|
||||
@@ -85,7 +91,13 @@ func WriteFile(file string, data []byte) error {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if Exists(expanded) {
|
||||
|
||||
exists, err := Exists(expanded, Regular)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not check if file exists at path %s", expanded)
|
||||
}
|
||||
|
||||
if exists {
|
||||
info, err := os.Stat(expanded)
|
||||
if err != nil {
|
||||
return err
|
||||
@@ -136,19 +148,28 @@ func HasReadWritePermissions(itemPath string) (bool, error) {
|
||||
|
||||
// Exists returns true if a file is not a directory and exists
|
||||
// at the specified path.
|
||||
func Exists(filename string) bool {
|
||||
func Exists(filename string, objType ObjType) (bool, error) {
|
||||
filePath, err := ExpandPath(filename)
|
||||
if err != nil {
|
||||
return false
|
||||
return false, errors.Wrapf(err, "could not expend path of file %s", filename)
|
||||
}
|
||||
|
||||
info, err := os.Stat(filePath)
|
||||
if err != nil {
|
||||
if !os.IsNotExist(err) {
|
||||
log.WithError(err).Info("Checking for file existence returned an error")
|
||||
if os.IsNotExist(err) {
|
||||
return false, nil
|
||||
}
|
||||
return false
|
||||
|
||||
return false, errors.Wrapf(err, "could not get file info for file %s", filename)
|
||||
}
|
||||
return info != nil && !info.IsDir()
|
||||
|
||||
if info == nil {
|
||||
return false, errors.New("file info is nil")
|
||||
}
|
||||
|
||||
isDir := info.IsDir()
|
||||
|
||||
return objType == Directory && isDir || objType == Regular && !isDir, nil
|
||||
}
|
||||
|
||||
// RecursiveFileFind returns true, and the path, if a file is not a directory and exists
|
||||
@@ -183,6 +204,40 @@ func RecursiveFileFind(filename, dir string) (bool, string, error) {
|
||||
return found, fpath, nil
|
||||
}
|
||||
|
||||
// RecursiveDirFind searches for directory in a directory and its subdirectories.
|
||||
func RecursiveDirFind(dirname, dir string) (bool, string, error) {
|
||||
var (
|
||||
found bool
|
||||
fpath string
|
||||
)
|
||||
|
||||
dir = filepath.Clean(dir)
|
||||
found = false
|
||||
|
||||
err := filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "error walking directory %s", dir)
|
||||
}
|
||||
|
||||
// Checks if its a file and has the exact name as the dirname
|
||||
// need to break the walk function by using a non-fatal error
|
||||
if info.IsDir() && dirname == info.Name() {
|
||||
found = true
|
||||
fpath = path
|
||||
return errStopWalk
|
||||
}
|
||||
|
||||
// No errors or file found
|
||||
return nil
|
||||
})
|
||||
|
||||
if err != nil && err != errStopWalk {
|
||||
return false, "", errors.Wrapf(err, "error walking directory %s", dir)
|
||||
}
|
||||
|
||||
return found, fpath, nil
|
||||
}
|
||||
|
||||
// ReadFileAsBytes expands a file name's absolute path and reads it as bytes from disk.
|
||||
func ReadFileAsBytes(filename string) ([]byte, error) {
|
||||
filePath, err := ExpandPath(filename)
|
||||
@@ -194,7 +249,12 @@ func ReadFileAsBytes(filename string) ([]byte, error) {
|
||||
|
||||
// CopyFile copy a file from source to destination path.
|
||||
func CopyFile(src, dst string) error {
|
||||
if !Exists(src) {
|
||||
exists, err := Exists(src, Regular)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not check if file exists at path %s", src)
|
||||
}
|
||||
|
||||
if !exists {
|
||||
return errors.New("source file does not exist at provided path")
|
||||
}
|
||||
f, err := os.Open(src) // #nosec G304
|
||||
|
||||
@@ -125,8 +125,9 @@ func TestWriteFile_OK(t *testing.T) {
|
||||
require.NoError(t, err)
|
||||
someFileName := filepath.Join(dirName, "somefile.txt")
|
||||
require.NoError(t, file.WriteFile(someFileName, []byte("hi")))
|
||||
exists := file.Exists(someFileName)
|
||||
assert.Equal(t, true, exists)
|
||||
exists, err := file.Exists(someFileName, file.Regular)
|
||||
require.NoError(t, err, "could not check if file exists")
|
||||
assert.Equal(t, true, exists, "file does not exist")
|
||||
}
|
||||
|
||||
func TestCopyFile(t *testing.T) {
|
||||
@@ -176,8 +177,14 @@ func TestCopyDir(t *testing.T) {
|
||||
require.NoError(t, os.MkdirAll(filepath.Join(tmpDir1, "subfolder2"), 0777))
|
||||
for _, fd := range fds {
|
||||
require.NoError(t, file.WriteFile(filepath.Join(tmpDir1, fd.path), fd.content))
|
||||
assert.Equal(t, true, file.Exists(filepath.Join(tmpDir1, fd.path)))
|
||||
assert.Equal(t, false, file.Exists(filepath.Join(tmpDir2, fd.path)))
|
||||
|
||||
exists, err := file.Exists(filepath.Join(tmpDir1, fd.path), file.Regular)
|
||||
require.NoError(t, err, "could not check if file exists")
|
||||
assert.Equal(t, true, exists, "file does not exist")
|
||||
|
||||
exists, err = file.Exists(filepath.Join(tmpDir2, fd.path), file.Regular)
|
||||
require.NoError(t, err, "could not check if file exists")
|
||||
assert.Equal(t, false, exists, "file does exist")
|
||||
}
|
||||
|
||||
// Make sure that files are copied into non-existent directory only. If directory exists function exits.
|
||||
@@ -186,7 +193,9 @@ func TestCopyDir(t *testing.T) {
|
||||
|
||||
// Now, all files should have been copied.
|
||||
for _, fd := range fds {
|
||||
assert.Equal(t, true, file.Exists(filepath.Join(tmpDir2, fd.path)))
|
||||
exists, err := file.Exists(filepath.Join(tmpDir2, fd.path), file.Regular)
|
||||
require.NoError(t, err, "could not check if file exists")
|
||||
assert.Equal(t, true, exists)
|
||||
assert.Equal(t, true, deepCompare(t, filepath.Join(tmpDir1, fd.path), filepath.Join(tmpDir2, fd.path)))
|
||||
}
|
||||
assert.Equal(t, true, file.DirsEqual(tmpDir1, tmpDir2))
|
||||
@@ -238,6 +247,66 @@ func TestHashDir(t *testing.T) {
|
||||
})
|
||||
}
|
||||
|
||||
func TestExists(t *testing.T) {
|
||||
tmpDir := t.TempDir()
|
||||
tmpFile := filepath.Join(tmpDir, "testfile")
|
||||
nonExistentTmpFile := filepath.Join(tmpDir, "nonexistent")
|
||||
_, err := os.Create(tmpFile)
|
||||
require.NoError(t, err, "could not create test file")
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
itemPath string
|
||||
itemType file.ObjType
|
||||
want bool
|
||||
}{
|
||||
{
|
||||
name: "file exists",
|
||||
itemPath: tmpFile,
|
||||
itemType: file.Regular,
|
||||
want: true,
|
||||
},
|
||||
{
|
||||
name: "dir exists",
|
||||
itemPath: tmpDir,
|
||||
itemType: file.Directory,
|
||||
want: true,
|
||||
},
|
||||
{
|
||||
name: "non-existent file",
|
||||
itemPath: nonExistentTmpFile,
|
||||
itemType: file.Regular,
|
||||
want: false,
|
||||
},
|
||||
{
|
||||
name: "non-existent dir",
|
||||
itemPath: nonExistentTmpFile,
|
||||
itemType: file.Directory,
|
||||
want: false,
|
||||
},
|
||||
{
|
||||
name: "file is dir",
|
||||
itemPath: tmpDir,
|
||||
itemType: file.Regular,
|
||||
want: false,
|
||||
},
|
||||
{
|
||||
name: "dir is file",
|
||||
itemPath: tmpFile,
|
||||
itemType: file.Directory,
|
||||
want: false,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
exists, err := file.Exists(tt.itemPath, tt.itemType)
|
||||
require.NoError(t, err, "could not check if file exists")
|
||||
assert.Equal(t, tt.want, exists)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestHashFile(t *testing.T) {
|
||||
originalData := []byte("test data")
|
||||
originalChecksum := sha256.Sum256(originalData)
|
||||
@@ -290,40 +359,43 @@ func TestDirFiles(t *testing.T) {
|
||||
|
||||
func TestRecursiveFileFind(t *testing.T) {
|
||||
tmpDir, _ := tmpDirWithContentsForRecursiveFind(t)
|
||||
/*
|
||||
tmpDir
|
||||
├── file3
|
||||
├── subfolder1
|
||||
│ └── subfolder11
|
||||
│ └── file1
|
||||
└── subfolder2
|
||||
└── file2
|
||||
*/
|
||||
tests := []struct {
|
||||
name string
|
||||
root string
|
||||
path string
|
||||
found bool
|
||||
}{
|
||||
{
|
||||
name: "file1",
|
||||
root: tmpDir,
|
||||
path: "subfolder1/subfolder11/file1",
|
||||
found: true,
|
||||
},
|
||||
{
|
||||
name: "file2",
|
||||
root: tmpDir,
|
||||
path: "subfolder2/file2",
|
||||
found: true,
|
||||
},
|
||||
{
|
||||
name: "file1",
|
||||
root: tmpDir + "/subfolder1",
|
||||
path: "subfolder11/file1",
|
||||
found: true,
|
||||
},
|
||||
{
|
||||
name: "file3",
|
||||
root: tmpDir,
|
||||
path: "file3",
|
||||
found: true,
|
||||
},
|
||||
{
|
||||
name: "file4",
|
||||
root: tmpDir,
|
||||
path: "",
|
||||
found: false,
|
||||
},
|
||||
}
|
||||
@@ -338,6 +410,61 @@ func TestRecursiveFileFind(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func TestRecursiveDirFind(t *testing.T) {
|
||||
tmpDir, _ := tmpDirWithContentsForRecursiveFind(t)
|
||||
|
||||
/*
|
||||
tmpDir
|
||||
├── file3
|
||||
├── subfolder1
|
||||
│ └── subfolder11
|
||||
│ └── file1
|
||||
└── subfolder2
|
||||
└── file2
|
||||
*/
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
root string
|
||||
found bool
|
||||
}{
|
||||
{
|
||||
name: "subfolder11",
|
||||
root: tmpDir,
|
||||
found: true,
|
||||
},
|
||||
{
|
||||
name: "subfolder2",
|
||||
root: tmpDir,
|
||||
found: true,
|
||||
},
|
||||
{
|
||||
name: "subfolder11",
|
||||
root: tmpDir + "/subfolder1",
|
||||
found: true,
|
||||
},
|
||||
{
|
||||
name: "file3",
|
||||
root: tmpDir,
|
||||
found: false,
|
||||
},
|
||||
{
|
||||
name: "file4",
|
||||
root: tmpDir,
|
||||
found: false,
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
found, _, err := file.RecursiveDirFind(tt.name, tt.root)
|
||||
require.NoError(t, err)
|
||||
|
||||
assert.DeepEqual(t, tt.found, found)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func deepCompare(t *testing.T, file1, file2 string) bool {
|
||||
sf, err := os.Open(file1)
|
||||
assert.NoError(t, err)
|
||||
|
||||
@@ -10,6 +10,7 @@ go_library(
|
||||
"//io/file:go_default_library",
|
||||
"//io/prompt:go_default_library",
|
||||
"@com_github_logrusorgru_aurora//:go_default_library",
|
||||
"@com_github_pkg_errors//:go_default_library",
|
||||
"@com_github_sirupsen_logrus//:go_default_library",
|
||||
"@com_github_urfave_cli_v2//:go_default_library",
|
||||
],
|
||||
|
||||
@@ -1,10 +1,11 @@
|
||||
package tos
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
|
||||
"github.com/logrusorgru/aurora"
|
||||
"github.com/prysmaticlabs/prysm/v5/cmd"
|
||||
"github.com/prysmaticlabs/prysm/v5/io/file"
|
||||
@@ -37,8 +38,13 @@ var (
|
||||
|
||||
// VerifyTosAcceptedOrPrompt checks if Tos was accepted before or asks to accept.
|
||||
func VerifyTosAcceptedOrPrompt(ctx *cli.Context) error {
|
||||
tosFilePath := filepath.Join(ctx.String(cmd.DataDirFlag.Name), acceptTosFilename)
|
||||
if file.Exists(tosFilePath) {
|
||||
acceptTosFilePath := filepath.Join(ctx.String(cmd.DataDirFlag.Name), acceptTosFilename)
|
||||
exists, err := file.Exists(acceptTosFilePath, file.Regular)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not check if file exists: %s", acceptTosFilePath)
|
||||
}
|
||||
|
||||
if exists {
|
||||
return nil
|
||||
}
|
||||
|
||||
|
||||
@@ -156,7 +156,13 @@ func encrypt(cliCtx *cli.Context) error {
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not expand path: %s", outputPath)
|
||||
}
|
||||
if file.Exists(fullPath) {
|
||||
|
||||
exists, err := file.Exists(fullPath, file.Regular)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not check if file exists: %s", fullPath)
|
||||
}
|
||||
|
||||
if exists {
|
||||
response, err := prompt.ValidatePrompt(
|
||||
os.Stdin,
|
||||
fmt.Sprintf("file at path %s already exists, are you sure you want to overwrite it? [y/n]", fullPath),
|
||||
|
||||
@@ -47,7 +47,12 @@ func zipKeystoresToOutputDir(keystoresToBackup []*keymanager.Keystore, outputDir
|
||||
// Marshal and zip all keystore files together and write the zip file
|
||||
// to the specified output directory.
|
||||
archivePath := filepath.Join(outputDir, ArchiveFilename)
|
||||
if file.Exists(archivePath) {
|
||||
exists, err := file.Exists(archivePath, file.Regular)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not check if file exists: %s", archivePath)
|
||||
}
|
||||
|
||||
if exists {
|
||||
return errors.Errorf("Zip file already exists in directory: %s", archivePath)
|
||||
}
|
||||
// We create a new file to store our backup.zip.
|
||||
|
||||
@@ -226,7 +226,13 @@ func importPrivateKeyAsAccount(ctx context.Context, wallet *wallet.Wallet, impor
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not expand file path for %s", privKeyFile)
|
||||
}
|
||||
if !file.Exists(fullPath) {
|
||||
|
||||
exists, err := file.Exists(fullPath, file.Regular)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not check if file exists: %s", fullPath)
|
||||
}
|
||||
|
||||
if !exists {
|
||||
return fmt.Errorf("file %s does not exist", fullPath)
|
||||
}
|
||||
privKeyHex, err := os.ReadFile(fullPath) // #nosec G304
|
||||
|
||||
@@ -378,7 +378,10 @@ func (w *Wallet) WriteFileAtPath(_ context.Context, filePath, fileName string, d
|
||||
}
|
||||
}
|
||||
fullPath := filepath.Join(accountPath, fileName)
|
||||
existedPreviously := file.Exists(fullPath)
|
||||
existedPreviously, err := file.Exists(fullPath, file.Regular)
|
||||
if err != nil {
|
||||
return false, errors.Wrapf(err, "could not check if file exists: %s", fullPath)
|
||||
}
|
||||
if err := file.WriteFile(fullPath, data); err != nil {
|
||||
return false, errors.Wrapf(err, "could not write %s", filePath)
|
||||
}
|
||||
@@ -439,7 +442,12 @@ func (w *Wallet) FileNameAtPath(_ context.Context, filePath, fileName string) (s
|
||||
// for reading if it exists at the wallet path.
|
||||
func (w *Wallet) ReadKeymanagerConfigFromDisk(_ context.Context) (io.ReadCloser, error) {
|
||||
configFilePath := filepath.Join(w.accountsPath, KeymanagerConfigFileName)
|
||||
if !file.Exists(configFilePath) {
|
||||
exists, err := file.Exists(configFilePath, file.Regular)
|
||||
if err != nil {
|
||||
return nil, errors.Wrapf(err, "could not check if file exists: %s", configFilePath)
|
||||
}
|
||||
|
||||
if !exists {
|
||||
return nil, fmt.Errorf("no keymanager config file found at path: %s", w.accountsPath)
|
||||
}
|
||||
w.configFilePath = configFilePath
|
||||
|
||||
@@ -5,13 +5,11 @@ go_library(
|
||||
srcs = [
|
||||
"aggregate.go",
|
||||
"attest.go",
|
||||
"attest_protect.go",
|
||||
"key_reload.go",
|
||||
"log.go",
|
||||
"metrics.go",
|
||||
"multiple_endpoints_grpc_resolver.go",
|
||||
"propose.go",
|
||||
"propose_protect.go",
|
||||
"registration.go",
|
||||
"runner.go",
|
||||
"service.go",
|
||||
@@ -49,7 +47,6 @@ go_library(
|
||||
"//monitoring/tracing:go_default_library",
|
||||
"//network/httputil:go_default_library",
|
||||
"//proto/prysm/v1alpha1:go_default_library",
|
||||
"//proto/prysm/v1alpha1/slashings:go_default_library",
|
||||
"//proto/prysm/v1alpha1/validator-client:go_default_library",
|
||||
"//runtime/version:go_default_library",
|
||||
"//time:go_default_library",
|
||||
@@ -62,7 +59,7 @@ go_library(
|
||||
"//validator/client/node-client-factory:go_default_library",
|
||||
"//validator/client/validator-client-factory:go_default_library",
|
||||
"//validator/db:go_default_library",
|
||||
"//validator/db/kv:go_default_library",
|
||||
"//validator/db/common:go_default_library",
|
||||
"//validator/graffiti:go_default_library",
|
||||
"//validator/helpers:go_default_library",
|
||||
"//validator/keymanager:go_default_library",
|
||||
@@ -98,14 +95,12 @@ go_library(
|
||||
|
||||
go_test(
|
||||
name = "go_default_test",
|
||||
size = "small",
|
||||
size = "medium",
|
||||
srcs = [
|
||||
"aggregate_test.go",
|
||||
"attest_protect_test.go",
|
||||
"attest_test.go",
|
||||
"key_reload_test.go",
|
||||
"metrics_test.go",
|
||||
"propose_protect_test.go",
|
||||
"propose_test.go",
|
||||
"registration_test.go",
|
||||
"runner_test.go",
|
||||
@@ -153,11 +148,11 @@ go_test(
|
||||
"//validator/client/testutil:go_default_library",
|
||||
"//validator/db/testing:go_default_library",
|
||||
"//validator/graffiti:go_default_library",
|
||||
"//validator/helpers:go_default_library",
|
||||
"//validator/keymanager:go_default_library",
|
||||
"//validator/keymanager/derived:go_default_library",
|
||||
"//validator/keymanager/local:go_default_library",
|
||||
"//validator/keymanager/remote-web3signer:go_default_library",
|
||||
"//validator/slashing-protection-history:go_default_library",
|
||||
"//validator/testing:go_default_library",
|
||||
"@com_github_ethereum_go_ethereum//common:go_default_library",
|
||||
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
|
||||
|
||||
@@ -3,6 +3,7 @@ package client
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"fmt"
|
||||
"testing"
|
||||
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/client/iface"
|
||||
@@ -23,208 +24,235 @@ import (
|
||||
)
|
||||
|
||||
func TestSubmitAggregateAndProof_GetDutiesRequestFailure(t *testing.T) {
|
||||
hook := logTest.NewGlobal()
|
||||
validator, _, validatorKey, finish := setup(t)
|
||||
validator.duties = ðpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{}}
|
||||
defer finish()
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
hook := logTest.NewGlobal()
|
||||
validator, _, validatorKey, finish := setup(t, isSlashingProtectionMinimal)
|
||||
validator.duties = ðpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{}}
|
||||
defer finish()
|
||||
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
validator.SubmitAggregateAndProof(context.Background(), 0, pubKey)
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
validator.SubmitAggregateAndProof(context.Background(), 0, pubKey)
|
||||
|
||||
require.LogsContain(t, hook, "Could not fetch validator assignment")
|
||||
require.LogsContain(t, hook, "Could not fetch validator assignment")
|
||||
})
|
||||
}
|
||||
}
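
The remaining tests in this file follow the same transformation shown above: the original body is wrapped in a subtest that runs once against the complete (BoltDB) slashing protection database and once against the minimal (filesystem) one. A rough sketch of the pattern in Go, assuming only the new boolean parameter on the test helper setup that the diff itself introduces:

package client

import (
	"fmt"
	"testing"
)

// TestExample_BothProtectionModes sketches the wrapper applied to each test in
// this file: the original body now runs once per slashing protection flavor.
func TestExample_BothProtectionModes(t *testing.T) {
	for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
		t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
			// setup is the existing test helper; its new boolean argument selects
			// the minimal (filesystem) or complete (BoltDB) database.
			validator, _, _, finish := setup(t, isSlashingProtectionMinimal)
			defer finish()

			_ = validator // the original assertions run here, unchanged
		})
	}
}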
|
||||
|
||||
func TestSubmitAggregateAndProof_SignFails(t *testing.T) {
|
||||
validator, m, validatorKey, finish := setup(t)
|
||||
defer finish()
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
validator.duties = ðpb.DutiesResponse{
|
||||
CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
},
|
||||
},
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
validator, m, validatorKey, finish := setup(t, isSlashingProtectionMinimal)
|
||||
defer finish()
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
validator.duties = ðpb.DutiesResponse{
|
||||
CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
m.validatorClient.EXPECT().DomainData(
|
||||
gomock.Any(), // ctx
|
||||
gomock.Any(), // epoch
|
||||
).Return(ðpb.DomainResponse{SignatureDomain: make([]byte, 32)}, nil /*err*/)
|
||||
|
||||
m.validatorClient.EXPECT().SubmitAggregateSelectionProof(
|
||||
gomock.Any(), // ctx
|
||||
gomock.AssignableToTypeOf(ðpb.AggregateSelectionRequest{}),
|
||||
).Return(ðpb.AggregateSelectionResponse{
|
||||
AggregateAndProof: ðpb.AggregateAttestationAndProof{
|
||||
AggregatorIndex: 0,
|
||||
Aggregate: util.HydrateAttestation(ðpb.Attestation{
|
||||
AggregationBits: make([]byte, 1),
|
||||
}),
|
||||
SelectionProof: make([]byte, 96),
|
||||
},
|
||||
}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().DomainData(
|
||||
gomock.Any(), // ctx
|
||||
gomock.Any(), // epoch
|
||||
).Return(ðpb.DomainResponse{SignatureDomain: nil}, errors.New("bad domain root"))
|
||||
|
||||
validator.SubmitAggregateAndProof(context.Background(), 0, pubKey)
|
||||
})
|
||||
}
|
||||
|
||||
m.validatorClient.EXPECT().DomainData(
|
||||
gomock.Any(), // ctx
|
||||
gomock.Any(), // epoch
|
||||
).Return(ðpb.DomainResponse{SignatureDomain: make([]byte, 32)}, nil /*err*/)
|
||||
|
||||
m.validatorClient.EXPECT().SubmitAggregateSelectionProof(
|
||||
gomock.Any(), // ctx
|
||||
gomock.AssignableToTypeOf(ðpb.AggregateSelectionRequest{}),
|
||||
).Return(ðpb.AggregateSelectionResponse{
|
||||
AggregateAndProof: ðpb.AggregateAttestationAndProof{
|
||||
AggregatorIndex: 0,
|
||||
Aggregate: util.HydrateAttestation(ðpb.Attestation{
|
||||
AggregationBits: make([]byte, 1),
|
||||
}),
|
||||
SelectionProof: make([]byte, 96),
|
||||
},
|
||||
}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().DomainData(
|
||||
gomock.Any(), // ctx
|
||||
gomock.Any(), // epoch
|
||||
).Return(ðpb.DomainResponse{SignatureDomain: nil}, errors.New("bad domain root"))
|
||||
|
||||
validator.SubmitAggregateAndProof(context.Background(), 0, pubKey)
|
||||
}
|
||||
|
||||
func TestSubmitAggregateAndProof_Ok(t *testing.T) {
|
||||
validator, m, validatorKey, finish := setup(t)
|
||||
defer finish()
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
validator.duties = ðpb.DutiesResponse{
|
||||
CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
},
|
||||
},
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
validator, m, validatorKey, finish := setup(t, isSlashingProtectionMinimal)
|
||||
defer finish()
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
validator.duties = ðpb.DutiesResponse{
|
||||
CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
m.validatorClient.EXPECT().DomainData(
|
||||
gomock.Any(), // ctx
|
||||
gomock.Any(), // epoch
|
||||
).Return(ðpb.DomainResponse{SignatureDomain: make([]byte, 32)}, nil /*err*/)
|
||||
|
||||
m.validatorClient.EXPECT().SubmitAggregateSelectionProof(
|
||||
gomock.Any(), // ctx
|
||||
gomock.AssignableToTypeOf(ðpb.AggregateSelectionRequest{}),
|
||||
).Return(ðpb.AggregateSelectionResponse{
|
||||
AggregateAndProof: ðpb.AggregateAttestationAndProof{
|
||||
AggregatorIndex: 0,
|
||||
Aggregate: util.HydrateAttestation(ðpb.Attestation{
|
||||
AggregationBits: make([]byte, 1),
|
||||
}),
|
||||
SelectionProof: make([]byte, 96),
|
||||
},
|
||||
}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().DomainData(
|
||||
gomock.Any(), // ctx
|
||||
gomock.Any(), // epoch
|
||||
).Return(ðpb.DomainResponse{SignatureDomain: make([]byte, 32)}, nil /*err*/)
|
||||
|
||||
m.validatorClient.EXPECT().SubmitSignedAggregateSelectionProof(
|
||||
gomock.Any(), // ctx
|
||||
gomock.AssignableToTypeOf(ðpb.SignedAggregateSubmitRequest{}),
|
||||
).Return(ðpb.SignedAggregateSubmitResponse{AttestationDataRoot: make([]byte, 32)}, nil)
|
||||
|
||||
validator.SubmitAggregateAndProof(context.Background(), 0, pubKey)
|
||||
})
|
||||
}
|
||||
|
||||
m.validatorClient.EXPECT().DomainData(
|
||||
gomock.Any(), // ctx
|
||||
gomock.Any(), // epoch
|
||||
).Return(ðpb.DomainResponse{SignatureDomain: make([]byte, 32)}, nil /*err*/)
|
||||
|
||||
m.validatorClient.EXPECT().SubmitAggregateSelectionProof(
|
||||
gomock.Any(), // ctx
|
||||
gomock.AssignableToTypeOf(ðpb.AggregateSelectionRequest{}),
|
||||
).Return(ðpb.AggregateSelectionResponse{
|
||||
AggregateAndProof: ðpb.AggregateAttestationAndProof{
|
||||
AggregatorIndex: 0,
|
||||
Aggregate: util.HydrateAttestation(ðpb.Attestation{
|
||||
AggregationBits: make([]byte, 1),
|
||||
}),
|
||||
SelectionProof: make([]byte, 96),
|
||||
},
|
||||
}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().DomainData(
|
||||
gomock.Any(), // ctx
|
||||
gomock.Any(), // epoch
|
||||
).Return(ðpb.DomainResponse{SignatureDomain: make([]byte, 32)}, nil /*err*/)
|
||||
|
||||
m.validatorClient.EXPECT().SubmitSignedAggregateSelectionProof(
|
||||
gomock.Any(), // ctx
|
||||
gomock.AssignableToTypeOf(ðpb.SignedAggregateSubmitRequest{}),
|
||||
).Return(ðpb.SignedAggregateSubmitResponse{AttestationDataRoot: make([]byte, 32)}, nil)
|
||||
|
||||
validator.SubmitAggregateAndProof(context.Background(), 0, pubKey)
|
||||
}
|
||||
|
||||
func TestSubmitAggregateAndProof_Distributed(t *testing.T) {
|
||||
validatorIdx := primitives.ValidatorIndex(123)
|
||||
slot := primitives.Slot(456)
|
||||
ctx := context.Background()
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
validator, m, validatorKey, finish := setup(t, isSlashingProtectionMinimal)
|
||||
defer finish()
|
||||
|
||||
validator, m, validatorKey, finish := setup(t)
|
||||
defer finish()
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
validator.duties = ðpb.DutiesResponse{
|
||||
CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
ValidatorIndex: validatorIdx,
|
||||
AttesterSlot: slot,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
validator.duties = ðpb.DutiesResponse{
|
||||
CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
validator.distributed = true
|
||||
validator.attSelections = make(map[attSelectionKey]iface.BeaconCommitteeSelection)
|
||||
validator.attSelections[attSelectionKey{
|
||||
slot: slot,
|
||||
index: 123,
|
||||
}] = iface.BeaconCommitteeSelection{
|
||||
SelectionProof: make([]byte, 96),
|
||||
Slot: slot,
|
||||
ValidatorIndex: validatorIdx,
|
||||
AttesterSlot: slot,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
m.validatorClient.EXPECT().SubmitAggregateSelectionProof(
|
||||
gomock.Any(), // ctx
|
||||
gomock.AssignableToTypeOf(ðpb.AggregateSelectionRequest{}),
|
||||
).Return(ðpb.AggregateSelectionResponse{
|
||||
AggregateAndProof: ðpb.AggregateAttestationAndProof{
|
||||
AggregatorIndex: 0,
|
||||
Aggregate: util.HydrateAttestation(ðpb.Attestation{
|
||||
AggregationBits: make([]byte, 1),
|
||||
}),
|
||||
SelectionProof: make([]byte, 96),
|
||||
},
|
||||
}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().DomainData(
|
||||
gomock.Any(), // ctx
|
||||
gomock.Any(), // epoch
|
||||
).Return(ðpb.DomainResponse{SignatureDomain: make([]byte, 32)}, nil /*err*/)
|
||||
|
||||
m.validatorClient.EXPECT().SubmitSignedAggregateSelectionProof(
|
||||
gomock.Any(), // ctx
|
||||
gomock.AssignableToTypeOf(ðpb.SignedAggregateSubmitRequest{}),
|
||||
).Return(ðpb.SignedAggregateSubmitResponse{AttestationDataRoot: make([]byte, 32)}, nil)
|
||||
|
||||
validator.SubmitAggregateAndProof(ctx, slot, pubKey)
|
||||
})
|
||||
}
|
||||
|
||||
validator.distributed = true
|
||||
validator.attSelections = make(map[attSelectionKey]iface.BeaconCommitteeSelection)
|
||||
validator.attSelections[attSelectionKey{
|
||||
slot: slot,
|
||||
index: 123,
|
||||
}] = iface.BeaconCommitteeSelection{
|
||||
SelectionProof: make([]byte, 96),
|
||||
Slot: slot,
|
||||
ValidatorIndex: validatorIdx,
|
||||
}
|
||||
|
||||
m.validatorClient.EXPECT().SubmitAggregateSelectionProof(
|
||||
gomock.Any(), // ctx
|
||||
gomock.AssignableToTypeOf(ðpb.AggregateSelectionRequest{}),
|
||||
).Return(ðpb.AggregateSelectionResponse{
|
||||
AggregateAndProof: ðpb.AggregateAttestationAndProof{
|
||||
AggregatorIndex: 0,
|
||||
Aggregate: util.HydrateAttestation(ðpb.Attestation{
|
||||
AggregationBits: make([]byte, 1),
|
||||
}),
|
||||
SelectionProof: make([]byte, 96),
|
||||
},
|
||||
}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().DomainData(
|
||||
gomock.Any(), // ctx
|
||||
gomock.Any(), // epoch
|
||||
).Return(ðpb.DomainResponse{SignatureDomain: make([]byte, 32)}, nil /*err*/)
|
||||
|
||||
m.validatorClient.EXPECT().SubmitSignedAggregateSelectionProof(
|
||||
gomock.Any(), // ctx
|
||||
gomock.AssignableToTypeOf(ðpb.SignedAggregateSubmitRequest{}),
|
||||
).Return(ðpb.SignedAggregateSubmitResponse{AttestationDataRoot: make([]byte, 32)}, nil)
|
||||
|
||||
validator.SubmitAggregateAndProof(ctx, slot, pubKey)
|
||||
}
|
||||
|
||||
func TestWaitForSlotTwoThird_WaitCorrectly(t *testing.T) {
|
||||
validator, _, _, finish := setup(t)
|
||||
defer finish()
|
||||
currentTime := time.Now()
|
||||
numOfSlots := primitives.Slot(4)
|
||||
validator.genesisTime = uint64(currentTime.Unix()) - uint64(numOfSlots.Mul(params.BeaconConfig().SecondsPerSlot))
|
||||
oneThird := slots.DivideSlotBy(3 /* one third of slot duration */)
|
||||
timeToSleep := oneThird + oneThird
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
validator, _, _, finish := setup(t, isSlashingProtectionMinimal)
|
||||
defer finish()
|
||||
currentTime := time.Now()
|
||||
numOfSlots := primitives.Slot(4)
|
||||
validator.genesisTime = uint64(currentTime.Unix()) - uint64(numOfSlots.Mul(params.BeaconConfig().SecondsPerSlot))
|
||||
oneThird := slots.DivideSlotBy(3 /* one third of slot duration */)
|
||||
timeToSleep := oneThird + oneThird
|
||||
|
||||
twoThirdTime := currentTime.Add(timeToSleep)
|
||||
validator.waitToSlotTwoThirds(context.Background(), numOfSlots)
|
||||
currentTime = time.Now()
|
||||
assert.Equal(t, twoThirdTime.Unix(), currentTime.Unix())
|
||||
twoThirdTime := currentTime.Add(timeToSleep)
|
||||
validator.waitToSlotTwoThirds(context.Background(), numOfSlots)
|
||||
currentTime = time.Now()
|
||||
assert.Equal(t, twoThirdTime.Unix(), currentTime.Unix())
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestWaitForSlotTwoThird_DoneContext_ReturnsImmediately(t *testing.T) {
|
||||
validator, _, _, finish := setup(t)
|
||||
defer finish()
|
||||
currentTime := time.Now()
|
||||
numOfSlots := primitives.Slot(4)
|
||||
validator.genesisTime = uint64(currentTime.Unix()) - uint64(numOfSlots.Mul(params.BeaconConfig().SecondsPerSlot))
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
validator, _, _, finish := setup(t, isSlashingProtectionMinimal)
|
||||
defer finish()
|
||||
currentTime := time.Now()
|
||||
numOfSlots := primitives.Slot(4)
|
||||
validator.genesisTime = uint64(currentTime.Unix()) - uint64(numOfSlots.Mul(params.BeaconConfig().SecondsPerSlot))
|
||||
|
||||
expectedTime := time.Now()
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
cancel()
|
||||
validator.waitToSlotTwoThirds(ctx, numOfSlots)
|
||||
currentTime = time.Now()
|
||||
assert.Equal(t, expectedTime.Unix(), currentTime.Unix())
|
||||
expectedTime := time.Now()
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
cancel()
|
||||
validator.waitToSlotTwoThirds(ctx, numOfSlots)
|
||||
currentTime = time.Now()
|
||||
assert.Equal(t, expectedTime.Unix(), currentTime.Unix())
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestAggregateAndProofSignature_CanSignValidSignature(t *testing.T) {
|
||||
validator, m, validatorKey, finish := setup(t)
|
||||
defer finish()
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
validator, m, validatorKey, finish := setup(t, isSlashingProtectionMinimal)
|
||||
defer finish()
|
||||
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
m.validatorClient.EXPECT().DomainData(
|
||||
gomock.Any(), // ctx
|
||||
ðpb.DomainRequest{Epoch: 0, Domain: params.BeaconConfig().DomainAggregateAndProof[:]},
|
||||
).Return(ðpb.DomainResponse{SignatureDomain: make([]byte, 32)}, nil /*err*/)
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
m.validatorClient.EXPECT().DomainData(
|
||||
gomock.Any(), // ctx
|
||||
ðpb.DomainRequest{Epoch: 0, Domain: params.BeaconConfig().DomainAggregateAndProof[:]},
|
||||
).Return(ðpb.DomainResponse{SignatureDomain: make([]byte, 32)}, nil /*err*/)
|
||||
|
||||
agg := ðpb.AggregateAttestationAndProof{
|
||||
AggregatorIndex: 0,
|
||||
Aggregate: util.HydrateAttestation(ðpb.Attestation{
|
||||
AggregationBits: bitfield.NewBitlist(1),
|
||||
}),
|
||||
SelectionProof: make([]byte, 96),
|
||||
agg := ðpb.AggregateAttestationAndProof{
|
||||
AggregatorIndex: 0,
|
||||
Aggregate: util.HydrateAttestation(ðpb.Attestation{
|
||||
AggregationBits: bitfield.NewBitlist(1),
|
||||
}),
|
||||
SelectionProof: make([]byte, 96),
|
||||
}
|
||||
sig, err := validator.aggregateAndProofSig(context.Background(), pubKey, agg, 0 /* slot */)
|
||||
require.NoError(t, err)
|
||||
_, err = bls.SignatureFromBytes(sig)
|
||||
require.NoError(t, err)
|
||||
})
|
||||
}
|
||||
sig, err := validator.aggregateAndProofSig(context.Background(), pubKey, agg, 0 /* slot */)
|
||||
require.NoError(t, err)
|
||||
_, err = bls.SignatureFromBytes(sig)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
@@ -26,6 +26,8 @@ import (
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
var failedAttLocalProtectionErr = "attempted to make slashable attestation, rejected by local slashing protection"
|
||||
|
||||
// SubmitAttestation completes the validator client's attester responsibility at a given slot.
|
||||
// It fetches the latest beacon block head along with the latest canonical beacon state
|
||||
// information in order to sign the block and include information about the validator's
|
||||
@@ -135,7 +137,7 @@ func (v *validator) SubmitAttestation(ctx context.Context, slot primitives.Slot,
|
||||
|
||||
// Set the signature of the attestation and send it out to the beacon node.
|
||||
indexedAtt.Signature = sig
|
||||
if err := v.slashableAttestationCheck(ctx, indexedAtt, pubKey, signingRoot); err != nil {
|
||||
if err := v.db.SlashableAttestationCheck(ctx, indexedAtt, pubKey, signingRoot, v.emitAccountMetrics, ValidatorAttestFailVec); err != nil {
|
||||
log.WithError(err).Error("Failed attestation slashing protection check")
|
||||
log.WithFields(
|
||||
attestationLogFields(pubKey, indexedAtt),
|
||||
|
||||
@@ -1,86 +0,0 @@
|
||||
package client
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/hex"
|
||||
"fmt"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/slashings"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/kv"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
var failedAttLocalProtectionErr = "attempted to make slashable attestation, rejected by local slashing protection"
|
||||
|
||||
// Checks if an attestation is slashable by comparing it with the attesting
|
||||
// history for the given public key in our DB. If it is not, we then update the history
|
||||
// with new values and save it to the database.
|
||||
func (v *validator) slashableAttestationCheck(
|
||||
ctx context.Context,
|
||||
indexedAtt *ethpb.IndexedAttestation,
|
||||
pubKey [fieldparams.BLSPubkeyLength]byte,
|
||||
signingRoot32 [32]byte,
|
||||
) error {
|
||||
ctx, span := trace.StartSpan(ctx, "validator.postAttSignUpdate")
|
||||
defer span.End()
|
||||
|
||||
signingRoot := signingRoot32[:]
|
||||
|
||||
// Based on EIP3076, validator should refuse to sign any attestation with source epoch less
|
||||
// than the minimum source epoch present in that signer’s attestations.
|
||||
lowestSourceEpoch, exists, err := v.db.LowestSignedSourceEpoch(ctx, pubKey)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if exists && indexedAtt.Data.Source.Epoch < lowestSourceEpoch {
|
||||
return fmt.Errorf(
|
||||
"could not sign attestation lower than lowest source epoch in db, %d < %d",
|
||||
indexedAtt.Data.Source.Epoch,
|
||||
lowestSourceEpoch,
|
||||
)
|
||||
}
|
||||
existingSigningRoot, err := v.db.SigningRootAtTargetEpoch(ctx, pubKey, indexedAtt.Data.Target.Epoch)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
signingRootsDiffer := slashings.SigningRootsDiffer(existingSigningRoot, signingRoot)
|
||||
|
||||
// Based on EIP3076, validator should refuse to sign any attestation with target epoch less
|
||||
// than or equal to the minimum target epoch present in that signer’s attestations.
|
||||
lowestTargetEpoch, exists, err := v.db.LowestSignedTargetEpoch(ctx, pubKey)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if signingRootsDiffer && exists && indexedAtt.Data.Target.Epoch <= lowestTargetEpoch {
|
||||
return fmt.Errorf(
|
||||
"could not sign attestation lower than or equal to lowest target epoch in db, %d <= %d",
|
||||
indexedAtt.Data.Target.Epoch,
|
||||
lowestTargetEpoch,
|
||||
)
|
||||
}
|
||||
fmtKey := "0x" + hex.EncodeToString(pubKey[:])
|
||||
slashingKind, err := v.db.CheckSlashableAttestation(ctx, pubKey, signingRoot, indexedAtt)
|
||||
if err != nil {
|
||||
if v.emitAccountMetrics {
|
||||
ValidatorAttestFailVec.WithLabelValues(fmtKey).Inc()
|
||||
}
|
||||
switch slashingKind {
|
||||
case kv.DoubleVote:
|
||||
log.Warn("Attestation is slashable as it is a double vote")
|
||||
case kv.SurroundingVote:
|
||||
log.Warn("Attestation is slashable as it is surrounding a previous attestation")
|
||||
case kv.SurroundedVote:
|
||||
log.Warn("Attestation is slashable as it is surrounded by a previous attestation")
|
||||
}
|
||||
return errors.Wrap(err, failedAttLocalProtectionErr)
|
||||
}
|
||||
|
||||
if err := v.db.SaveAttestationForPubKey(ctx, pubKey, signingRoot32, indexedAtt); err != nil {
|
||||
return errors.Wrap(err, "could not save attestation history for validator public key")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
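
The helper above is deleted because the slashability check now lives behind the iface.ValidatorDB interface, so each database flavor can enforce its own rules. For the minimal flavor, EIP-3076 only requires keeping a high watermark per key: the highest signed source and target epochs. The sketch below illustrates that rule with hypothetical type and function names; the filesystem store exposes the analogous check through its CheckSlashableAttestation method.

package client

import "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"

// highWatermark holds the only per-key state a minimal, EIP-3076-style database
// needs on the attester side: the highest signed source and target epochs.
type highWatermark struct {
	sourceEpoch primitives.Epoch
	targetEpoch primitives.Epoch
	exists      bool
}

// attestationIsSafe sketches the minimal rule: the source epoch must not regress
// and the target epoch must strictly increase compared to what was already signed.
func attestationIsSafe(stored highWatermark, source, target primitives.Epoch) bool {
	if !stored.exists {
		return true // nothing signed yet for this key
	}
	return source >= stored.sourceEpoch && target > stored.targetEpoch
}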
|
||||
@@ -1,147 +0,0 @@
|
||||
package client
|
||||
|
||||
import (
|
||||
"context"
|
||||
"testing"
|
||||
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
|
||||
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/require"
|
||||
"go.uber.org/mock/gomock"
|
||||
)
|
||||
|
||||
func Test_slashableAttestationCheck(t *testing.T) {
|
||||
validator, _, validatorKey, finish := setup(t)
|
||||
defer finish()
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
att := ðpb.IndexedAttestation{
|
||||
AttestingIndices: []uint64{1, 2},
|
||||
Data: ðpb.AttestationData{
|
||||
Slot: 5,
|
||||
CommitteeIndex: 2,
|
||||
BeaconBlockRoot: bytesutil.PadTo([]byte("great block"), 32),
|
||||
Source: ðpb.Checkpoint{
|
||||
Epoch: 4,
|
||||
Root: bytesutil.PadTo([]byte("good source"), 32),
|
||||
},
|
||||
Target: ðpb.Checkpoint{
|
||||
Epoch: 10,
|
||||
Root: bytesutil.PadTo([]byte("good target"), 32),
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
err := validator.slashableAttestationCheck(context.Background(), att, pubKey, [32]byte{1})
|
||||
require.NoError(t, err, "Expected allowed attestation not to throw error")
|
||||
}
|
||||
|
||||
func Test_slashableAttestationCheck_UpdatesLowestSignedEpochs(t *testing.T) {
|
||||
validator, m, validatorKey, finish := setup(t)
|
||||
defer finish()
|
||||
ctx := context.Background()
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
att := ðpb.IndexedAttestation{
|
||||
AttestingIndices: []uint64{1, 2},
|
||||
Data: ðpb.AttestationData{
|
||||
Slot: 5,
|
||||
CommitteeIndex: 2,
|
||||
BeaconBlockRoot: bytesutil.PadTo([]byte("great block"), 32),
|
||||
Source: ðpb.Checkpoint{
|
||||
Epoch: 4,
|
||||
Root: bytesutil.PadTo([]byte("good source"), 32),
|
||||
},
|
||||
Target: ðpb.Checkpoint{
|
||||
Epoch: 10,
|
||||
Root: bytesutil.PadTo([]byte("good target"), 32),
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
m.validatorClient.EXPECT().DomainData(
|
||||
gomock.Any(), // ctx
|
||||
ðpb.DomainRequest{Epoch: 10, Domain: []byte{1, 0, 0, 0}},
|
||||
).Return(ðpb.DomainResponse{SignatureDomain: make([]byte, 32)}, nil /*err*/)
|
||||
_, sr, err := validator.getDomainAndSigningRoot(ctx, att.Data)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = validator.slashableAttestationCheck(context.Background(), att, pubKey, sr)
|
||||
require.NoError(t, err)
|
||||
differentSigningRoot := [32]byte{2}
|
||||
|
||||
err = validator.slashableAttestationCheck(context.Background(), att, pubKey, differentSigningRoot)
|
||||
require.ErrorContains(t, "could not sign attestation", err)
|
||||
|
||||
e, exists, err := validator.db.LowestSignedSourceEpoch(context.Background(), pubKey)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, true, exists)
|
||||
require.Equal(t, primitives.Epoch(4), e)
|
||||
e, exists, err = validator.db.LowestSignedTargetEpoch(context.Background(), pubKey)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, true, exists)
|
||||
require.Equal(t, primitives.Epoch(10), e)
|
||||
}
|
||||
|
||||
func Test_slashableAttestationCheck_OK(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
validator, _, _, finish := setup(t)
|
||||
defer finish()
|
||||
att := ðpb.IndexedAttestation{
|
||||
AttestingIndices: []uint64{1, 2},
|
||||
Data: ðpb.AttestationData{
|
||||
Slot: 5,
|
||||
CommitteeIndex: 2,
|
||||
BeaconBlockRoot: []byte("great block"),
|
||||
Source: ðpb.Checkpoint{
|
||||
Epoch: 4,
|
||||
Root: []byte("good source"),
|
||||
},
|
||||
Target: ðpb.Checkpoint{
|
||||
Epoch: 10,
|
||||
Root: []byte("good target"),
|
||||
},
|
||||
},
|
||||
}
|
||||
sr := [32]byte{1}
|
||||
fakePubkey := bytesutil.ToBytes48([]byte("test"))
|
||||
|
||||
err := validator.slashableAttestationCheck(ctx, att, fakePubkey, sr)
|
||||
require.NoError(t, err, "Expected allowed attestation not to throw error")
|
||||
}
|
||||
|
||||
func Test_slashableAttestationCheck_GenesisEpoch(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
validator, _, _, finish := setup(t)
|
||||
defer finish()
|
||||
att := ðpb.IndexedAttestation{
|
||||
AttestingIndices: []uint64{1, 2},
|
||||
Data: ðpb.AttestationData{
|
||||
Slot: 5,
|
||||
CommitteeIndex: 2,
|
||||
BeaconBlockRoot: bytesutil.PadTo([]byte("great block root"), 32),
|
||||
Source: ðpb.Checkpoint{
|
||||
Epoch: 0,
|
||||
Root: bytesutil.PadTo([]byte("great root"), 32),
|
||||
},
|
||||
Target: ðpb.Checkpoint{
|
||||
Epoch: 0,
|
||||
Root: bytesutil.PadTo([]byte("great root"), 32),
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
fakePubkey := bytesutil.ToBytes48([]byte("test"))
|
||||
err := validator.slashableAttestationCheck(ctx, att, fakePubkey, [32]byte{})
|
||||
require.NoError(t, err, "Expected allowed attestation not to throw error")
|
||||
e, exists, err := validator.db.LowestSignedSourceEpoch(context.Background(), fakePubkey)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, true, exists)
|
||||
require.Equal(t, primitives.Epoch(0), e)
|
||||
e, exists, err = validator.db.LowestSignedTargetEpoch(context.Background(), fakePubkey)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, true, exists)
|
||||
require.Equal(t, primitives.Epoch(0), e)
|
||||
}
|
||||
File diff suppressed because it is too large
@@ -28,9 +28,12 @@ import (
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
const domainDataErr = "could not get domain data"
|
||||
const signingRootErr = "could not get signing root"
|
||||
const signExitErr = "could not sign voluntary exit proposal"
|
||||
const (
|
||||
domainDataErr = "could not get domain data"
|
||||
signingRootErr = "could not get signing root"
|
||||
signExitErr = "could not sign voluntary exit proposal"
|
||||
failedBlockSignLocalErr = "block rejected by local protection"
|
||||
)
|
||||
|
||||
// ProposeBlock proposes a new beacon block for a given slot. This method collects the
|
||||
// previous beacon block, any pending deposits, and ETH1 data from the beacon
|
||||
@@ -111,7 +114,7 @@ func (v *validator) ProposeBlock(ctx context.Context, slot primitives.Slot, pubK
|
||||
return
|
||||
}
|
||||
|
||||
if err := v.slashableProposalCheck(ctx, pubKey, blk, signingRoot); err != nil {
|
||||
if err := v.db.SlashableProposalCheck(ctx, pubKey, blk, signingRoot, v.emitAccountMetrics, ValidatorProposeFailVec); err != nil {
|
||||
log.WithFields(
|
||||
blockLogFields(pubKey, wb, nil),
|
||||
).WithError(err).Error("Failed block slashing protection check")
|
||||
@@ -429,3 +432,15 @@ func (v *validator) getGraffiti(ctx context.Context, pubKey [fieldparams.BLSPubk
|
||||
|
||||
return []byte{}, nil
|
||||
}
|
||||
|
||||
func blockLogFields(pubKey [fieldparams.BLSPubkeyLength]byte, blk interfaces.ReadOnlyBeaconBlock, sig []byte) logrus.Fields {
|
||||
fields := logrus.Fields{
|
||||
"proposerPublicKey": fmt.Sprintf("%#x", pubKey),
|
||||
"proposerIndex": blk.ProposerIndex(),
|
||||
"blockSlot": blk.Slot(),
|
||||
}
|
||||
if sig != nil {
|
||||
fields["signature"] = fmt.Sprintf("%#x", sig)
|
||||
}
|
||||
return fields
|
||||
}
|
||||
|
||||
@@ -1,100 +0,0 @@
|
||||
package client
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
|
||||
"github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
var failedBlockSignLocalErr = "attempted to sign a double proposal, block rejected by local protection"
|
||||
|
||||
// slashableProposalCheck checks if a block proposal is slashable by comparing it with the
|
||||
// block proposals history for the given public key in our DB. If it is not, we then update the history
|
||||
// with new values and save it to the database.
|
||||
func (v *validator) slashableProposalCheck(
|
||||
ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte, signedBlock interfaces.ReadOnlySignedBeaconBlock, signingRoot [32]byte,
|
||||
) error {
|
||||
fmtKey := fmt.Sprintf("%#x", pubKey[:])
|
||||
|
||||
blk := signedBlock.Block()
|
||||
prevSigningRoot, proposalAtSlotExists, prevSigningRootExists, err := v.db.ProposalHistoryForSlot(ctx, pubKey, blk.Slot())
|
||||
if err != nil {
|
||||
if v.emitAccountMetrics {
|
||||
ValidatorProposeFailVec.WithLabelValues(fmtKey).Inc()
|
||||
}
|
||||
return errors.Wrap(err, "failed to get proposal history")
|
||||
}
|
||||
|
||||
lowestSignedProposalSlot, lowestProposalExists, err := v.db.LowestSignedProposal(ctx, pubKey)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Based on EIP-3076 - Condition 2
|
||||
// -------------------------------
|
||||
if lowestProposalExists {
|
||||
// If the block slot is (strictly) less than the lowest signed proposal slot in the DB, we consider it slashable.
|
||||
if blk.Slot() < lowestSignedProposalSlot {
|
||||
return fmt.Errorf(
|
||||
"could not sign block with slot < lowest signed slot in db, block slot: %d < lowest signed slot: %d",
|
||||
blk.Slot(),
|
||||
lowestSignedProposalSlot,
|
||||
)
|
||||
}
|
||||
|
||||
// If the block slot is equal to the lowest signed proposal slot and
|
||||
// - condition1: there is no signed proposal in the DB for this slot, or
|
||||
// - condition2: there is a signed proposal in the DB for this slot, but with no associated signing root, or
|
||||
// - condition3: there is a signed proposal in the DB for this slot, but the signing root differs,
|
||||
// ==> we consider it slashable.
|
||||
condition1 := !proposalAtSlotExists
|
||||
condition2 := proposalAtSlotExists && !prevSigningRootExists
|
||||
condition3 := proposalAtSlotExists && prevSigningRootExists && prevSigningRoot != signingRoot
|
||||
if blk.Slot() == lowestSignedProposalSlot && (condition1 || condition2 || condition3) {
|
||||
return fmt.Errorf(
|
||||
"could not sign block with slot == lowest signed slot in db if it is not a repeat signing, block slot: %d == slowest signed slot: %d",
|
||||
blk.Slot(),
|
||||
lowestSignedProposalSlot,
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
// Based on EIP-3076 - Condition 1
|
||||
// -------------------------------
|
||||
// If there is a signed proposal in the DB for this slot and
|
||||
// - there is no associated signing root, or
|
||||
// - the signing root differs,
|
||||
// ==> we consider it slashable.
|
||||
if proposalAtSlotExists && (!prevSigningRootExists || prevSigningRoot != signingRoot) {
|
||||
if v.emitAccountMetrics {
|
||||
ValidatorProposeFailVec.WithLabelValues(fmtKey).Inc()
|
||||
}
|
||||
return errors.New(failedBlockSignLocalErr)
|
||||
}
|
||||
|
||||
// Save the proposal for this slot.
|
||||
if err := v.db.SaveProposalHistoryForSlot(ctx, pubKey, blk.Slot(), signingRoot[:]); err != nil {
|
||||
if v.emitAccountMetrics {
|
||||
ValidatorProposeFailVec.WithLabelValues(fmtKey).Inc()
|
||||
}
|
||||
return errors.Wrap(err, "failed to save updated proposal history")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func blockLogFields(pubKey [fieldparams.BLSPubkeyLength]byte, blk interfaces.ReadOnlyBeaconBlock, sig []byte) logrus.Fields {
|
||||
fields := logrus.Fields{
|
||||
"pubkey": fmt.Sprintf("%#x", pubKey),
|
||||
"proposerIndex": blk.ProposerIndex(),
|
||||
"slot": blk.Slot(),
|
||||
}
|
||||
if sig != nil {
|
||||
fields["signature"] = fmt.Sprintf("%#x", sig)
|
||||
}
|
||||
return fields
|
||||
}
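
The deleted propose_protect.go above implements the complete-database proposer rules (EIP-3076 conditions 1 and 2) against the full proposal history. With the minimal database, the proposer side reduces to a single high-watermark comparison; a rough sketch with illustrative names, not the actual store API:

package client

import "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"

// proposalIsSafe sketches the minimal (EIP-3076) proposer rule: with only the
// highest signed slot stored per key, a block is safe to sign when its slot is
// strictly greater than that watermark.
func proposalIsSafe(highestSignedSlot primitives.Slot, exists bool, blockSlot primitives.Slot) bool {
	if !exists {
		return true // no proposal recorded yet for this key
	}
	return blockSlot > highestSignedSlot
}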
|
||||
@@ -1,155 +0,0 @@
|
||||
package client
|
||||
|
||||
import (
|
||||
"context"
|
||||
"testing"
|
||||
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/config/params"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/require"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/util"
|
||||
)
|
||||
|
||||
func Test_slashableProposalCheck_PreventsLowerThanMinProposal(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
validator, _, validatorKey, finish := setup(t)
|
||||
defer finish()
|
||||
lowestSignedSlot := primitives.Slot(10)
|
||||
var pubKeyBytes [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKeyBytes[:], validatorKey.PublicKey().Marshal())
|
||||
|
||||
// We save a proposal at the lowest signed slot in the DB.
|
||||
err := validator.db.SaveProposalHistoryForSlot(ctx, pubKeyBytes, lowestSignedSlot, []byte{1})
|
||||
require.NoError(t, err)
|
||||
|
||||
// We expect the same block with a slot lower than the lowest
|
||||
// signed slot to fail validation.
|
||||
blk := ðpb.SignedBeaconBlock{
|
||||
Block: ðpb.BeaconBlock{
|
||||
Slot: lowestSignedSlot - 1,
|
||||
ProposerIndex: 0,
|
||||
Body: ðpb.BeaconBlockBody{},
|
||||
},
|
||||
Signature: params.BeaconConfig().EmptySignature[:],
|
||||
}
|
||||
wsb, err := blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
err = validator.slashableProposalCheck(context.Background(), pubKeyBytes, wsb, [32]byte{4})
|
||||
require.ErrorContains(t, "could not sign block with slot < lowest signed", err)
|
||||
|
||||
// We expect the same block with a slot equal to the lowest
|
||||
// signed slot to pass validation if signing roots are equal.
|
||||
blk = ðpb.SignedBeaconBlock{
|
||||
Block: ðpb.BeaconBlock{
|
||||
Slot: lowestSignedSlot,
|
||||
ProposerIndex: 0,
|
||||
Body: ðpb.BeaconBlockBody{},
|
||||
},
|
||||
Signature: params.BeaconConfig().EmptySignature[:],
|
||||
}
|
||||
wsb, err = blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
err = validator.slashableProposalCheck(context.Background(), pubKeyBytes, wsb, [32]byte{1})
|
||||
require.NoError(t, err)
|
||||
|
||||
// We expect the same block with a slot equal to the lowest
|
||||
// signed slot to fail validation if signing roots are different.
|
||||
wsb, err = blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
err = validator.slashableProposalCheck(context.Background(), pubKeyBytes, wsb, [32]byte{4})
|
||||
require.ErrorContains(t, "could not sign block with slot == lowest signed", err)
|
||||
|
||||
// We expect the same block with a slot > than the lowest
|
||||
// signed slot to pass validation.
|
||||
blk = ðpb.SignedBeaconBlock{
|
||||
Block: ðpb.BeaconBlock{
|
||||
Slot: lowestSignedSlot + 1,
|
||||
ProposerIndex: 0,
|
||||
Body: ðpb.BeaconBlockBody{},
|
||||
},
|
||||
Signature: params.BeaconConfig().EmptySignature[:],
|
||||
}
|
||||
|
||||
wsb, err = blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
err = validator.slashableProposalCheck(context.Background(), pubKeyBytes, wsb, [32]byte{3})
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
func Test_slashableProposalCheck(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
validator, _, validatorKey, finish := setup(t)
|
||||
defer finish()
|
||||
|
||||
blk := util.HydrateSignedBeaconBlock(ðpb.SignedBeaconBlock{
|
||||
Block: ðpb.BeaconBlock{
|
||||
Slot: 10,
|
||||
ProposerIndex: 0,
|
||||
Body: ðpb.BeaconBlockBody{},
|
||||
},
|
||||
Signature: params.BeaconConfig().EmptySignature[:],
|
||||
})
|
||||
|
||||
var pubKeyBytes [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKeyBytes[:], validatorKey.PublicKey().Marshal())
|
||||
|
||||
// We save a proposal at slot 1 as our lowest proposal.
|
||||
err := validator.db.SaveProposalHistoryForSlot(ctx, pubKeyBytes, 1, []byte{1})
|
||||
require.NoError(t, err)
|
||||
|
||||
// We save a proposal at slot 10 with a dummy signing root.
|
||||
dummySigningRoot := [32]byte{1}
|
||||
err = validator.db.SaveProposalHistoryForSlot(ctx, pubKeyBytes, 10, dummySigningRoot[:])
|
||||
require.NoError(t, err)
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
sBlock, err := blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
|
||||
// We expect the same block sent out with the same root not to be slashable.
|
||||
err = validator.slashableProposalCheck(context.Background(), pubKey, sBlock, dummySigningRoot)
|
||||
require.NoError(t, err)
|
||||
|
||||
// We expect the same block sent out with a different signing root to be slashable.
|
||||
err = validator.slashableProposalCheck(context.Background(), pubKey, sBlock, [32]byte{2})
|
||||
require.ErrorContains(t, failedBlockSignLocalErr, err)
|
||||
|
||||
// We save a proposal at slot 11 with a nil signing root.
|
||||
blk.Block.Slot = 11
|
||||
sBlock, err = blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
err = validator.db.SaveProposalHistoryForSlot(ctx, pubKeyBytes, blk.Block.Slot, nil)
|
||||
require.NoError(t, err)
|
||||
|
||||
// We expect the same block sent out to return a slashable error even
|
||||
// if we had a nil signing root stored in the database.
|
||||
err = validator.slashableProposalCheck(context.Background(), pubKey, sBlock, [32]byte{2})
|
||||
require.ErrorContains(t, failedBlockSignLocalErr, err)
|
||||
|
||||
// A block with a different slot for which we do not have a proposing history
|
||||
// should not fail validation.
|
||||
blk.Block.Slot = 9
|
||||
sBlock, err = blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
err = validator.slashableProposalCheck(context.Background(), pubKey, sBlock, [32]byte{3})
|
||||
require.NoError(t, err, "Expected allowed block not to throw error")
|
||||
}
|
||||
|
||||
func Test_slashableProposalCheck_RemoteProtection(t *testing.T) {
|
||||
validator, _, validatorKey, finish := setup(t)
|
||||
defer finish()
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
|
||||
blk := util.NewBeaconBlock()
|
||||
blk.Block.Slot = 10
|
||||
sBlock, err := blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = validator.slashableProposalCheck(context.Background(), pubKey, sBlock, [32]byte{2})
|
||||
require.NoError(t, err, "Expected allowed block not to throw error")
|
||||
}
|
||||
File diff suppressed because it is too large
@@ -2,6 +2,7 @@ package client
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
@@ -17,38 +18,67 @@ import (
|
||||
)
|
||||
|
||||
func TestSubmitValidatorRegistrations(t *testing.T) {
|
||||
_, m, validatorKey, finish := setup(t)
|
||||
defer finish()
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
_, m, validatorKey, finish := setup(t, isSlashingProtectionMinimal)
|
||||
defer finish()
|
||||
|
||||
ctx := context.Background()
|
||||
validatorRegsBatchSize := 2
|
||||
require.NoError(t, nil, SubmitValidatorRegistrations(ctx, m.validatorClient, []*ethpb.SignedValidatorRegistrationV1{}, validatorRegsBatchSize))
|
||||
ctx := context.Background()
|
||||
validatorRegsBatchSize := 2
|
||||
require.NoError(t, nil, SubmitValidatorRegistrations(ctx, m.validatorClient, []*ethpb.SignedValidatorRegistrationV1{}, validatorRegsBatchSize))
|
||||
|
||||
regs := [...]*ethpb.ValidatorRegistrationV1{
|
||||
{
|
||||
FeeRecipient: bytesutil.PadTo([]byte("fee"), 20),
|
||||
GasLimit: 123,
|
||||
Timestamp: uint64(time.Now().Unix()),
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
},
|
||||
{
|
||||
FeeRecipient: bytesutil.PadTo([]byte("fee"), 20),
|
||||
GasLimit: 456,
|
||||
Timestamp: uint64(time.Now().Unix()),
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
},
|
||||
{
|
||||
FeeRecipient: bytesutil.PadTo([]byte("fee"), 20),
|
||||
GasLimit: 789,
|
||||
Timestamp: uint64(time.Now().Unix()),
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
},
|
||||
}
|
||||
regs := [...]*ethpb.ValidatorRegistrationV1{
|
||||
{
|
||||
FeeRecipient: bytesutil.PadTo([]byte("fee"), 20),
|
||||
GasLimit: 123,
|
||||
Timestamp: uint64(time.Now().Unix()),
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
},
|
||||
{
|
||||
FeeRecipient: bytesutil.PadTo([]byte("fee"), 20),
|
||||
GasLimit: 456,
|
||||
Timestamp: uint64(time.Now().Unix()),
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
},
|
||||
{
|
||||
FeeRecipient: bytesutil.PadTo([]byte("fee"), 20),
|
||||
GasLimit: 789,
|
||||
Timestamp: uint64(time.Now().Unix()),
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
},
|
||||
}
|
||||
|
||||
gomock.InOrder(
|
||||
m.validatorClient.EXPECT().
|
||||
SubmitValidatorRegistrations(gomock.Any(), ðpb.SignedValidatorRegistrationsV1{
|
||||
Messages: []*ethpb.SignedValidatorRegistrationV1{
|
||||
gomock.InOrder(
|
||||
m.validatorClient.EXPECT().
|
||||
SubmitValidatorRegistrations(gomock.Any(), ðpb.SignedValidatorRegistrationsV1{
|
||||
Messages: []*ethpb.SignedValidatorRegistrationV1{
|
||||
{
|
||||
Message: regs[0],
|
||||
Signature: params.BeaconConfig().ZeroHash[:],
|
||||
},
|
||||
{
|
||||
Message: regs[1],
|
||||
Signature: params.BeaconConfig().ZeroHash[:],
|
||||
},
|
||||
},
|
||||
}).
|
||||
Return(nil, nil),
|
||||
|
||||
m.validatorClient.EXPECT().
|
||||
SubmitValidatorRegistrations(gomock.Any(), ðpb.SignedValidatorRegistrationsV1{
|
||||
Messages: []*ethpb.SignedValidatorRegistrationV1{
|
||||
{
|
||||
Message: regs[2],
|
||||
Signature: params.BeaconConfig().ZeroHash[:],
|
||||
},
|
||||
},
|
||||
}).
|
||||
Return(nil, nil),
|
||||
)
|
||||
|
||||
require.NoError(t, nil, SubmitValidatorRegistrations(
|
||||
ctx, m.validatorClient,
|
||||
[]*ethpb.SignedValidatorRegistrationV1{
|
||||
{
|
||||
Message: regs[0],
|
||||
Signature: params.BeaconConfig().ZeroHash[:],
|
||||
@@ -57,222 +87,206 @@ func TestSubmitValidatorRegistrations(t *testing.T) {
|
||||
Message: regs[1],
|
||||
Signature: params.BeaconConfig().ZeroHash[:],
|
||||
},
|
||||
},
|
||||
}).
|
||||
Return(nil, nil),
|
||||
|
||||
m.validatorClient.EXPECT().
|
||||
SubmitValidatorRegistrations(gomock.Any(), ðpb.SignedValidatorRegistrationsV1{
|
||||
Messages: []*ethpb.SignedValidatorRegistrationV1{
|
||||
{
|
||||
Message: regs[2],
|
||||
Signature: params.BeaconConfig().ZeroHash[:],
|
||||
},
|
||||
},
|
||||
}).
|
||||
Return(nil, nil),
|
||||
)
|
||||
|
||||
require.NoError(t, nil, SubmitValidatorRegistrations(
|
||||
ctx, m.validatorClient,
|
||||
[]*ethpb.SignedValidatorRegistrationV1{
|
||||
{
|
||||
Message: regs[0],
|
||||
Signature: params.BeaconConfig().ZeroHash[:],
|
||||
},
|
||||
{
|
||||
Message: regs[1],
|
||||
Signature: params.BeaconConfig().ZeroHash[:],
|
||||
},
|
||||
{
|
||||
Message: regs[2],
|
||||
Signature: params.BeaconConfig().ZeroHash[:],
|
||||
},
|
||||
},
|
||||
validatorRegsBatchSize,
|
||||
))
|
||||
validatorRegsBatchSize,
|
||||
))
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestSubmitValidatorRegistration_CantSign(t *testing.T) {
|
||||
_, m, validatorKey, finish := setup(t)
|
||||
defer finish()
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
_, m, validatorKey, finish := setup(t, isSlashingProtectionMinimal)
|
||||
defer finish()
|
||||
|
||||
ctx := context.Background()
|
||||
validatorRegsBatchSize := 500
|
||||
reg := ðpb.ValidatorRegistrationV1{
|
||||
FeeRecipient: bytesutil.PadTo([]byte("fee"), 20),
|
||||
GasLimit: 123456,
|
||||
Timestamp: uint64(time.Now().Unix()),
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
}
|
||||
ctx := context.Background()
|
||||
validatorRegsBatchSize := 500
|
||||
reg := ðpb.ValidatorRegistrationV1{
|
||||
FeeRecipient: bytesutil.PadTo([]byte("fee"), 20),
|
||||
GasLimit: 123456,
|
||||
Timestamp: uint64(time.Now().Unix()),
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
}
|
||||
|
||||
m.validatorClient.EXPECT().
|
||||
SubmitValidatorRegistrations(gomock.Any(), ðpb.SignedValidatorRegistrationsV1{
|
||||
Messages: []*ethpb.SignedValidatorRegistrationV1{
|
||||
m.validatorClient.EXPECT().
|
||||
SubmitValidatorRegistrations(gomock.Any(), ðpb.SignedValidatorRegistrationsV1{
|
||||
Messages: []*ethpb.SignedValidatorRegistrationV1{
|
||||
{Message: reg,
|
||||
Signature: params.BeaconConfig().ZeroHash[:]},
|
||||
},
|
||||
}).
|
||||
Return(nil, errors.New("could not sign"))
|
||||
require.ErrorContains(t, "could not sign", SubmitValidatorRegistrations(ctx, m.validatorClient, []*ethpb.SignedValidatorRegistrationV1{
|
||||
{Message: reg,
|
||||
Signature: params.BeaconConfig().ZeroHash[:]},
|
||||
},
|
||||
}).
|
||||
Return(nil, errors.New("could not sign"))
|
||||
require.ErrorContains(t, "could not sign", SubmitValidatorRegistrations(ctx, m.validatorClient, []*ethpb.SignedValidatorRegistrationV1{
|
||||
{Message: reg,
|
||||
Signature: params.BeaconConfig().ZeroHash[:]},
|
||||
}, validatorRegsBatchSize))
|
||||
}, validatorRegsBatchSize))
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func Test_signValidatorRegistration(t *testing.T) {
|
||||
_, m, validatorKey, finish := setup(t)
|
||||
defer finish()
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
_, m, validatorKey, finish := setup(t, isSlashingProtectionMinimal)
|
||||
defer finish()
|
||||
|
||||
ctx := context.Background()
|
||||
reg := ðpb.ValidatorRegistrationV1{
|
||||
FeeRecipient: bytesutil.PadTo([]byte("fee"), 20),
|
||||
GasLimit: 123456,
|
||||
Timestamp: uint64(time.Now().Unix()),
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
ctx := context.Background()
|
||||
reg := ðpb.ValidatorRegistrationV1{
|
||||
FeeRecipient: bytesutil.PadTo([]byte("fee"), 20),
|
||||
GasLimit: 123456,
|
||||
Timestamp: uint64(time.Now().Unix()),
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
}
|
||||
_, err := signValidatorRegistration(ctx, m.signfunc, reg)
|
||||
require.NoError(t, err)
|
||||
})
|
||||
}
|
||||
_, err := signValidatorRegistration(ctx, m.signfunc, reg)
|
||||
require.NoError(t, err)
|
||||
|
||||
}
|
||||
|
||||
func TestValidator_SignValidatorRegistrationRequest(t *testing.T) {
|
||||
_, m, validatorKey, finish := setup(t)
|
||||
defer finish()
|
||||
ctx := context.Background()
|
||||
byteval, err := hexutil.Decode("0x878705ba3f8bc32fcf7f4caa1a35e72af65cf766")
|
||||
require.NoError(t, err)
|
||||
tests := []struct {
|
||||
name string
|
||||
arg *ethpb.ValidatorRegistrationV1
|
||||
validatorSetter func(t *testing.T) *validator
|
||||
isCached bool
|
||||
err string
|
||||
}{
|
||||
{
|
||||
name: " Happy Path cached",
|
||||
arg: ðpb.ValidatorRegistrationV1{
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
|
||||
GasLimit: 30000000,
|
||||
Timestamp: uint64(time.Now().Unix()),
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
_, m, validatorKey, finish := setup(t, isSlashingProtectionMinimal)
|
||||
defer finish()
|
||||
ctx := context.Background()
|
||||
byteval, err := hexutil.Decode("0x878705ba3f8bc32fcf7f4caa1a35e72af65cf766")
|
||||
require.NoError(t, err)
|
||||
tests := []struct {
|
||||
name string
|
||||
arg *ethpb.ValidatorRegistrationV1
|
||||
validatorSetter func(t *testing.T) *validator
|
||||
isCached bool
|
||||
err string
|
||||
}{
|
||||
{
|
||||
name: " Happy Path cached",
|
||||
arg: ðpb.ValidatorRegistrationV1{
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
|
||||
GasLimit: 30000000,
|
||||
Timestamp: uint64(time.Now().Unix()),
|
||||
},
|
||||
validatorSetter: func(t *testing.T) *validator {
|
||||
v := validator{
|
||||
pubkeyToValidatorIndex: make(map[[fieldparams.BLSPubkeyLength]byte]primitives.ValidatorIndex),
|
||||
signedValidatorRegistrations: make(map[[fieldparams.BLSPubkeyLength]byte]*ethpb.SignedValidatorRegistrationV1),
|
||||
useWeb: false,
|
||||
genesisTime: 0,
|
||||
}
|
||||
v.signedValidatorRegistrations[bytesutil.ToBytes48(validatorKey.PublicKey().Marshal())] = ðpb.SignedValidatorRegistrationV1{
|
||||
Message: ðpb.ValidatorRegistrationV1{
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
GasLimit: 30000000,
|
||||
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
|
||||
Timestamp: uint64(time.Now().Unix()),
|
||||
},
|
||||
Signature: make([]byte, 0),
|
||||
}
|
||||
return &v
|
||||
},
|
||||
isCached: true,
|
||||
},
|
||||
validatorSetter: func(t *testing.T) *validator {
|
||||
v := validator{
|
||||
pubkeyToValidatorIndex: make(map[[fieldparams.BLSPubkeyLength]byte]primitives.ValidatorIndex),
|
||||
signedValidatorRegistrations: make(map[[fieldparams.BLSPubkeyLength]byte]*ethpb.SignedValidatorRegistrationV1),
|
||||
useWeb: false,
|
||||
genesisTime: 0,
|
||||
}
|
||||
v.signedValidatorRegistrations[bytesutil.ToBytes48(validatorKey.PublicKey().Marshal())] = ðpb.SignedValidatorRegistrationV1{
|
||||
Message: ðpb.ValidatorRegistrationV1{
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
GasLimit: 30000000,
|
||||
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
|
||||
Timestamp: uint64(time.Now().Unix()),
|
||||
},
|
||||
Signature: make([]byte, 0),
|
||||
}
|
||||
return &v
|
||||
{
|
||||
name: " Happy Path not cached gas updated",
|
||||
arg: ðpb.ValidatorRegistrationV1{
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
|
||||
GasLimit: 30000000,
|
||||
Timestamp: uint64(time.Now().Unix()),
|
||||
},
|
||||
validatorSetter: func(t *testing.T) *validator {
|
||||
v := validator{
|
||||
pubkeyToValidatorIndex: make(map[[fieldparams.BLSPubkeyLength]byte]primitives.ValidatorIndex),
|
||||
signedValidatorRegistrations: make(map[[fieldparams.BLSPubkeyLength]byte]*ethpb.SignedValidatorRegistrationV1),
|
||||
useWeb: false,
|
||||
genesisTime: 0,
|
||||
}
|
||||
v.signedValidatorRegistrations[bytesutil.ToBytes48(validatorKey.PublicKey().Marshal())] = ðpb.SignedValidatorRegistrationV1{
|
||||
Message: ðpb.ValidatorRegistrationV1{
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
GasLimit: 35000000,
|
||||
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
|
||||
Timestamp: uint64(time.Now().Unix() - 1),
|
||||
},
|
||||
Signature: make([]byte, 0),
|
||||
}
|
||||
return &v
|
||||
},
|
||||
isCached: false,
|
||||
},
|
||||
isCached: true,
|
||||
},
|
||||
{
|
||||
name: " Happy Path not cached gas updated",
|
||||
arg: ðpb.ValidatorRegistrationV1{
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
|
||||
GasLimit: 30000000,
|
||||
Timestamp: uint64(time.Now().Unix()),
|
||||
{
|
||||
name: " Happy Path not cached feerecipient updated",
|
||||
arg: ðpb.ValidatorRegistrationV1{
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
FeeRecipient: byteval,
|
||||
GasLimit: 30000000,
|
||||
Timestamp: uint64(time.Now().Unix()),
|
||||
},
|
||||
validatorSetter: func(t *testing.T) *validator {
|
||||
v := validator{
|
||||
pubkeyToValidatorIndex: make(map[[fieldparams.BLSPubkeyLength]byte]primitives.ValidatorIndex),
|
||||
signedValidatorRegistrations: make(map[[fieldparams.BLSPubkeyLength]byte]*ethpb.SignedValidatorRegistrationV1),
|
||||
useWeb: false,
|
||||
genesisTime: 0,
|
||||
}
|
||||
v.signedValidatorRegistrations[bytesutil.ToBytes48(validatorKey.PublicKey().Marshal())] = ðpb.SignedValidatorRegistrationV1{
|
||||
Message: ðpb.ValidatorRegistrationV1{
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
GasLimit: 30000000,
|
||||
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
|
||||
Timestamp: uint64(time.Now().Unix() - 1),
|
||||
},
|
||||
Signature: make([]byte, 0),
|
||||
}
|
||||
return &v
|
||||
},
|
||||
isCached: false,
|
||||
},
|
||||
validatorSetter: func(t *testing.T) *validator {
|
||||
v := validator{
|
||||
pubkeyToValidatorIndex: make(map[[fieldparams.BLSPubkeyLength]byte]primitives.ValidatorIndex),
|
||||
signedValidatorRegistrations: make(map[[fieldparams.BLSPubkeyLength]byte]*ethpb.SignedValidatorRegistrationV1),
|
||||
useWeb: false,
|
||||
genesisTime: 0,
|
||||
}
|
||||
v.signedValidatorRegistrations[bytesutil.ToBytes48(validatorKey.PublicKey().Marshal())] = ðpb.SignedValidatorRegistrationV1{
|
||||
Message: ðpb.ValidatorRegistrationV1{
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
GasLimit: 35000000,
|
||||
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
|
||||
Timestamp: uint64(time.Now().Unix() - 1),
|
||||
},
|
||||
Signature: make([]byte, 0),
|
||||
}
|
||||
return &v
|
||||
{
|
||||
name: " Happy Path not cached first Entry",
|
||||
arg: ðpb.ValidatorRegistrationV1{
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
FeeRecipient: byteval,
|
||||
GasLimit: 30000000,
|
||||
Timestamp: uint64(time.Now().Unix()),
|
||||
},
|
||||
validatorSetter: func(t *testing.T) *validator {
|
||||
v := validator{
|
||||
pubkeyToValidatorIndex: make(map[[fieldparams.BLSPubkeyLength]byte]primitives.ValidatorIndex),
|
||||
signedValidatorRegistrations: make(map[[fieldparams.BLSPubkeyLength]byte]*ethpb.SignedValidatorRegistrationV1),
|
||||
useWeb: false,
|
||||
genesisTime: 0,
|
||||
}
|
||||
return &v
|
||||
},
|
||||
isCached: false,
|
||||
},
|
||||
isCached: false,
|
||||
},
|
||||
{
|
||||
name: " Happy Path not cached feerecipient updated",
|
||||
arg: ðpb.ValidatorRegistrationV1{
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
FeeRecipient: byteval,
|
||||
GasLimit: 30000000,
|
||||
Timestamp: uint64(time.Now().Unix()),
|
||||
},
|
||||
validatorSetter: func(t *testing.T) *validator {
|
||||
v := validator{
|
||||
pubkeyToValidatorIndex: make(map[[fieldparams.BLSPubkeyLength]byte]primitives.ValidatorIndex),
|
||||
signedValidatorRegistrations: make(map[[fieldparams.BLSPubkeyLength]byte]*ethpb.SignedValidatorRegistrationV1),
|
||||
useWeb: false,
|
||||
genesisTime: 0,
|
||||
}
|
||||
v.signedValidatorRegistrations[bytesutil.ToBytes48(validatorKey.PublicKey().Marshal())] = ðpb.SignedValidatorRegistrationV1{
|
||||
Message: ðpb.ValidatorRegistrationV1{
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
GasLimit: 30000000,
|
||||
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
|
||||
Timestamp: uint64(time.Now().Unix() - 1),
|
||||
},
|
||||
Signature: make([]byte, 0),
|
||||
}
|
||||
return &v
|
||||
},
|
||||
isCached: false,
|
||||
},
|
||||
{
|
||||
name: " Happy Path not cached first Entry",
|
||||
arg: ðpb.ValidatorRegistrationV1{
|
||||
Pubkey: validatorKey.PublicKey().Marshal(),
|
||||
FeeRecipient: byteval,
|
||||
GasLimit: 30000000,
|
||||
Timestamp: uint64(time.Now().Unix()),
|
||||
},
|
||||
validatorSetter: func(t *testing.T) *validator {
|
||||
v := validator{
|
||||
pubkeyToValidatorIndex: make(map[[fieldparams.BLSPubkeyLength]byte]primitives.ValidatorIndex),
|
||||
signedValidatorRegistrations: make(map[[fieldparams.BLSPubkeyLength]byte]*ethpb.SignedValidatorRegistrationV1),
|
||||
useWeb: false,
|
||||
genesisTime: 0,
|
||||
}
|
||||
return &v
|
||||
},
|
||||
isCached: false,
|
||||
},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
v := tt.validatorSetter(t)
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
v := tt.validatorSetter(t)
|
||||
|
||||
startingReq, ok := v.signedValidatorRegistrations[bytesutil.ToBytes48(tt.arg.Pubkey)]
|
||||
startingReq, ok := v.signedValidatorRegistrations[bytesutil.ToBytes48(tt.arg.Pubkey)]
|
||||
|
||||
got, err := v.SignValidatorRegistrationRequest(ctx, m.signfunc, tt.arg)
|
||||
require.NoError(t, err)
|
||||
if tt.isCached {
|
||||
require.DeepEqual(t, got, v.signedValidatorRegistrations[bytesutil.ToBytes48(tt.arg.Pubkey)])
|
||||
} else {
|
||||
if ok {
|
||||
require.NotEqual(t, got.Message.Timestamp, startingReq.Message.Timestamp)
|
||||
got, err := v.SignValidatorRegistrationRequest(ctx, m.signfunc, tt.arg)
|
||||
require.NoError(t, err)
|
||||
if tt.isCached {
|
||||
require.DeepEqual(t, got, v.signedValidatorRegistrations[bytesutil.ToBytes48(tt.arg.Pubkey)])
|
||||
} else {
|
||||
if ok {
|
||||
require.NotEqual(t, got.Message.Timestamp, startingReq.Message.Timestamp)
|
||||
}
|
||||
require.Equal(t, got.Message.Timestamp, tt.arg.Timestamp)
|
||||
require.Equal(t, got.Message.GasLimit, tt.arg.GasLimit)
|
||||
require.Equal(t, hexutil.Encode(got.Message.FeeRecipient), hexutil.Encode(tt.arg.FeeRecipient))
|
||||
require.DeepEqual(t, got, v.signedValidatorRegistrations[bytesutil.ToBytes48(tt.arg.Pubkey)])
|
||||
}
|
||||
require.Equal(t, got.Message.Timestamp, tt.arg.Timestamp)
|
||||
require.Equal(t, got.Message.GasLimit, tt.arg.GasLimit)
|
||||
require.Equal(t, hexutil.Encode(got.Message.FeeRecipient), hexutil.Encode(tt.arg.FeeRecipient))
|
||||
require.DeepEqual(t, got, v.signedValidatorRegistrations[bytesutil.ToBytes48(tt.arg.Pubkey)])
|
||||
}
|
||||
})
|
||||
})
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -5,6 +5,7 @@ import (
	"context"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"strings"
	"testing"

@@ -15,7 +16,7 @@ import (
	ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
	"github.com/prysmaticlabs/prysm/v5/testing/require"
	"github.com/prysmaticlabs/prysm/v5/testing/util"
	history "github.com/prysmaticlabs/prysm/v5/validator/slashing-protection-history"
	"github.com/prysmaticlabs/prysm/v5/validator/helpers"
)

type eip3076TestCase struct {
@@ -46,6 +47,7 @@ type eip3076TestCase struct {
		Pubkey                string `json:"pubkey"`
		Slot                  string `json:"slot"`
		SigningRoot           string `json:"signing_root"`
		ShouldSucceedMinimal  bool   `json:"should_succeed"`
		ShouldSucceedComplete bool   `json:"should_succeed_complete"`
	} `json:"blocks"`
	Attestations []struct {
@@ -53,6 +55,7 @@ type eip3076TestCase struct {
		SourceEpoch           string `json:"source_epoch"`
		TargetEpoch           string `json:"target_epoch"`
		SigningRoot           string `json:"signing_root"`
		ShouldSucceedMinimal  bool   `json:"should_succeed"`
		ShouldSucceedComplete bool   `json:"should_succeed_complete"`
	} `json:"attestations"`
} `json:"steps"`
@@ -76,99 +79,115 @@ func setupEIP3076SpecTests(t *testing.T) []*eip3076TestCase {
}

func TestEIP3076SpecTests(t *testing.T) {
	for _, isMinimal := range []bool{false, true} {
		slashingProtectionType := "complete"
		if isMinimal {
			slashingProtectionType = "minimal"
		}

		for _, tt := range setupEIP3076SpecTests(t) {
			t.Run(fmt.Sprintf("%s-%s", slashingProtectionType, tt.Name), func(t *testing.T) {
				if tt.Name == "" {
					t.Skip("Skipping eip3076TestCase with empty name")
				}

				// Set up validator client, one new validator client per eip3076TestCase.
				// This ensures we initialize a new (empty) slashing protection database.
				validator, _, _, _ := setup(t, isMinimal)

				for _, step := range tt.Steps {
					if tt.GenesisValidatorsRoot != "" {
						r, err := helpers.RootFromHex(tt.GenesisValidatorsRoot)
						require.NoError(t, validator.db.SaveGenesisValidatorsRoot(context.Background(), r[:]))
						require.NoError(t, err)
					}

					// The eip3076TestCase config contains the interchange config in json.
					// This loads the interchange data via ImportStandardProtectionJSON.
					interchangeBytes, err := json.Marshal(step.Interchange)
					if err != nil {
						t.Fatal(err)
					}
					b := bytes.NewBuffer(interchangeBytes)
					if err := validator.db.ImportStandardProtectionJSON(context.Background(), b); err != nil {
						if step.ShouldSucceed {
							t.Fatal(err)
						}
					} else if !step.ShouldSucceed {
						require.NotNil(t, err, "import standard protection json should have failed")
					}

					// This loops through a list of block signings to attempt after importing the interchange data above.
					for _, sb := range step.Blocks {
						shouldSucceed := sb.ShouldSucceedComplete
						if isMinimal {
							shouldSucceed = sb.ShouldSucceedMinimal
						}

						bSlot, err := helpers.SlotFromString(sb.Slot)
						require.NoError(t, err)
						pk, err := helpers.PubKeyFromHex(sb.Pubkey)
						require.NoError(t, err)
						b := util.NewBeaconBlock()
						b.Block.Slot = bSlot

						var signingRoot [32]byte
						if sb.SigningRoot != "" {
							signingRootBytes, err := hex.DecodeString(strings.TrimPrefix(sb.SigningRoot, "0x"))
							require.NoError(t, err)
							copy(signingRoot[:], signingRootBytes)
						}

						wsb, err := blocks.NewSignedBeaconBlock(b)
						require.NoError(t, err)
						err = validator.db.SlashableProposalCheck(context.Background(), pk, wsb, signingRoot, validator.emitAccountMetrics, ValidatorProposeFailVec)
						if shouldSucceed {
							require.NoError(t, err)
						} else {
							require.NotEqual(t, nil, err, "pre validation should have failed for block")
						}
					}

					// This loops through a list of attestation signings to attempt after importing the interchange data above.
					for _, sa := range step.Attestations {
						shouldSucceed := sa.ShouldSucceedComplete
						if isMinimal {
							shouldSucceed = sa.ShouldSucceedMinimal
						}

						target, err := helpers.EpochFromString(sa.TargetEpoch)
						require.NoError(t, err)
						source, err := helpers.EpochFromString(sa.SourceEpoch)
						require.NoError(t, err)
						pk, err := helpers.PubKeyFromHex(sa.Pubkey)
						require.NoError(t, err)
						ia := &ethpb.IndexedAttestation{
							Data: &ethpb.AttestationData{
								BeaconBlockRoot: make([]byte, 32),
								Target:          &ethpb.Checkpoint{Epoch: target, Root: make([]byte, 32)},
								Source:          &ethpb.Checkpoint{Epoch: source, Root: make([]byte, 32)},
							},
							Signature: make([]byte, fieldparams.BLSSignatureLength),
						}

						var signingRoot [32]byte
						if sa.SigningRoot != "" {
							signingRootBytes, err := hex.DecodeString(strings.TrimPrefix(sa.SigningRoot, "0x"))
							require.NoError(t, err)
							copy(signingRoot[:], signingRootBytes)
						}

						err = validator.db.SlashableAttestationCheck(context.Background(), ia, pk, signingRoot, false, nil)
						if shouldSucceed {
							require.NoError(t, err)
						} else {
							require.NotNil(t, err, "pre validation should have failed for attestation")
						}
					}
				}

				require.NoError(t, validator.db.Close(), "failed to close slashing protection database")
			})
		}
	}
}

@@ -3,6 +3,7 @@ package client
import (
	"context"
	"encoding/hex"
	"fmt"
	"testing"

	"github.com/pkg/errors"
@@ -20,246 +21,278 @@ import (
)

func TestSubmitSyncCommitteeMessage_ValidatorDutiesRequestFailure(t *testing.T) {
	for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
		t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
			hook := logTest.NewGlobal()
			validator, m, validatorKey, finish := setup(t, isSlashingProtectionMinimal)
			validator.duties = &ethpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{}}
			defer finish()

			m.validatorClient.EXPECT().GetSyncMessageBlockRoot(
				gomock.Any(), // ctx
				&emptypb.Empty{},
			).Return(&ethpb.SyncMessageBlockRootResponse{
				Root: bytesutil.PadTo([]byte{}, 32),
			}, nil)

			var pubKey [fieldparams.BLSPubkeyLength]byte
			copy(pubKey[:], validatorKey.PublicKey().Marshal())
			validator.SubmitSyncCommitteeMessage(context.Background(), 1, pubKey)
			require.LogsContain(t, hook, "Could not fetch validator assignment")
		})
	}
}

func TestSubmitSyncCommitteeMessage_BadDomainData(t *testing.T) {
|
||||
validator, m, validatorKey, finish := setup(t)
|
||||
defer finish()
|
||||
hook := logTest.NewGlobal()
|
||||
validatorIndex := primitives.ValidatorIndex(7)
|
||||
committee := []primitives.ValidatorIndex{0, 3, 4, 2, validatorIndex, 6, 8, 9, 10}
|
||||
validator.duties = ðpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
Committee: committee,
|
||||
ValidatorIndex: validatorIndex,
|
||||
},
|
||||
}}
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
validator, m, validatorKey, finish := setup(t, isSlashingProtectionMinimal)
|
||||
defer finish()
|
||||
hook := logTest.NewGlobal()
|
||||
validatorIndex := primitives.ValidatorIndex(7)
|
||||
committee := []primitives.ValidatorIndex{0, 3, 4, 2, validatorIndex, 6, 8, 9, 10}
|
||||
validator.duties = ðpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
Committee: committee,
|
||||
ValidatorIndex: validatorIndex,
|
||||
},
|
||||
}}
|
||||
|
||||
r := []byte{'a'}
|
||||
m.validatorClient.EXPECT().GetSyncMessageBlockRoot(
|
||||
gomock.Any(), // ctx
|
||||
&emptypb.Empty{},
|
||||
).Return(ðpb.SyncMessageBlockRootResponse{
|
||||
Root: bytesutil.PadTo(r, 32),
|
||||
}, nil)
|
||||
r := []byte{'a'}
|
||||
m.validatorClient.EXPECT().GetSyncMessageBlockRoot(
|
||||
gomock.Any(), // ctx
|
||||
&emptypb.Empty{},
|
||||
).Return(ðpb.SyncMessageBlockRootResponse{
|
||||
Root: bytesutil.PadTo(r, 32),
|
||||
}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().
|
||||
DomainData(gomock.Any(), gomock.Any()).
|
||||
Return(nil, errors.New("uh oh"))
|
||||
m.validatorClient.EXPECT().
|
||||
DomainData(gomock.Any(), gomock.Any()).
|
||||
Return(nil, errors.New("uh oh"))
|
||||
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
validator.SubmitSyncCommitteeMessage(context.Background(), 1, pubKey)
|
||||
require.LogsContain(t, hook, "Could not get sync committee domain data")
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
validator.SubmitSyncCommitteeMessage(context.Background(), 1, pubKey)
|
||||
require.LogsContain(t, hook, "Could not get sync committee domain data")
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestSubmitSyncCommitteeMessage_CouldNotSubmit(t *testing.T) {
|
||||
validator, m, validatorKey, finish := setup(t)
|
||||
defer finish()
|
||||
hook := logTest.NewGlobal()
|
||||
validatorIndex := primitives.ValidatorIndex(7)
|
||||
committee := []primitives.ValidatorIndex{0, 3, 4, 2, validatorIndex, 6, 8, 9, 10}
|
||||
validator.duties = ðpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
Committee: committee,
|
||||
ValidatorIndex: validatorIndex,
|
||||
},
|
||||
}}
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
validator, m, validatorKey, finish := setup(t, isSlashingProtectionMinimal)
|
||||
defer finish()
|
||||
hook := logTest.NewGlobal()
|
||||
validatorIndex := primitives.ValidatorIndex(7)
|
||||
committee := []primitives.ValidatorIndex{0, 3, 4, 2, validatorIndex, 6, 8, 9, 10}
|
||||
validator.duties = ðpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
Committee: committee,
|
||||
ValidatorIndex: validatorIndex,
|
||||
},
|
||||
}}
|
||||
|
||||
r := []byte{'a'}
|
||||
m.validatorClient.EXPECT().GetSyncMessageBlockRoot(
|
||||
gomock.Any(), // ctx
|
||||
&emptypb.Empty{},
|
||||
).Return(ðpb.SyncMessageBlockRootResponse{
|
||||
Root: bytesutil.PadTo(r, 32),
|
||||
}, nil)
|
||||
r := []byte{'a'}
|
||||
m.validatorClient.EXPECT().GetSyncMessageBlockRoot(
|
||||
gomock.Any(), // ctx
|
||||
&emptypb.Empty{},
|
||||
).Return(ðpb.SyncMessageBlockRootResponse{
|
||||
Root: bytesutil.PadTo(r, 32),
|
||||
}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().
|
||||
DomainData(gomock.Any(), // ctx
|
||||
gomock.Any()). // epoch
|
||||
Return(ðpb.DomainResponse{
|
||||
SignatureDomain: make([]byte, 32),
|
||||
}, nil)
|
||||
m.validatorClient.EXPECT().
|
||||
DomainData(gomock.Any(), // ctx
|
||||
gomock.Any()). // epoch
|
||||
Return(ðpb.DomainResponse{
|
||||
SignatureDomain: make([]byte, 32),
|
||||
}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().SubmitSyncMessage(
|
||||
gomock.Any(), // ctx
|
||||
gomock.AssignableToTypeOf(ðpb.SyncCommitteeMessage{}),
|
||||
).Return(&emptypb.Empty{}, errors.New("uh oh") /* error */)
|
||||
m.validatorClient.EXPECT().SubmitSyncMessage(
|
||||
gomock.Any(), // ctx
|
||||
gomock.AssignableToTypeOf(ðpb.SyncCommitteeMessage{}),
|
||||
).Return(&emptypb.Empty{}, errors.New("uh oh") /* error */)
|
||||
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
validator.SubmitSyncCommitteeMessage(context.Background(), 1, pubKey)
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
validator.SubmitSyncCommitteeMessage(context.Background(), 1, pubKey)
|
||||
|
||||
require.LogsContain(t, hook, "Could not submit sync committee message")
|
||||
require.LogsContain(t, hook, "Could not submit sync committee message")
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestSubmitSyncCommitteeMessage_OK(t *testing.T) {
|
||||
validator, m, validatorKey, finish := setup(t)
|
||||
defer finish()
|
||||
hook := logTest.NewGlobal()
|
||||
validatorIndex := primitives.ValidatorIndex(7)
|
||||
committee := []primitives.ValidatorIndex{0, 3, 4, 2, validatorIndex, 6, 8, 9, 10}
|
||||
validator.duties = ðpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
Committee: committee,
|
||||
ValidatorIndex: validatorIndex,
|
||||
},
|
||||
}}
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
validator, m, validatorKey, finish := setup(t, isSlashingProtectionMinimal)
|
||||
defer finish()
|
||||
hook := logTest.NewGlobal()
|
||||
validatorIndex := primitives.ValidatorIndex(7)
|
||||
committee := []primitives.ValidatorIndex{0, 3, 4, 2, validatorIndex, 6, 8, 9, 10}
|
||||
validator.duties = ðpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
Committee: committee,
|
||||
ValidatorIndex: validatorIndex,
|
||||
},
|
||||
}}
|
||||
|
||||
r := []byte{'a'}
|
||||
m.validatorClient.EXPECT().GetSyncMessageBlockRoot(
|
||||
gomock.Any(), // ctx
|
||||
&emptypb.Empty{},
|
||||
).Return(ðpb.SyncMessageBlockRootResponse{
|
||||
Root: bytesutil.PadTo(r, 32),
|
||||
}, nil)
|
||||
r := []byte{'a'}
|
||||
m.validatorClient.EXPECT().GetSyncMessageBlockRoot(
|
||||
gomock.Any(), // ctx
|
||||
&emptypb.Empty{},
|
||||
).Return(ðpb.SyncMessageBlockRootResponse{
|
||||
Root: bytesutil.PadTo(r, 32),
|
||||
}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().
|
||||
DomainData(gomock.Any(), // ctx
|
||||
gomock.Any()). // epoch
|
||||
Return(ðpb.DomainResponse{
|
||||
SignatureDomain: make([]byte, 32),
|
||||
}, nil)
|
||||
m.validatorClient.EXPECT().
|
||||
DomainData(gomock.Any(), // ctx
|
||||
gomock.Any()). // epoch
|
||||
Return(ðpb.DomainResponse{
|
||||
SignatureDomain: make([]byte, 32),
|
||||
}, nil)
|
||||
|
||||
var generatedMsg *ethpb.SyncCommitteeMessage
|
||||
m.validatorClient.EXPECT().SubmitSyncMessage(
|
||||
gomock.Any(), // ctx
|
||||
gomock.AssignableToTypeOf(ðpb.SyncCommitteeMessage{}),
|
||||
).Do(func(_ context.Context, msg *ethpb.SyncCommitteeMessage) {
|
||||
generatedMsg = msg
|
||||
}).Return(&emptypb.Empty{}, nil /* error */)
|
||||
var generatedMsg *ethpb.SyncCommitteeMessage
|
||||
m.validatorClient.EXPECT().SubmitSyncMessage(
|
||||
gomock.Any(), // ctx
|
||||
gomock.AssignableToTypeOf(ðpb.SyncCommitteeMessage{}),
|
||||
).Do(func(_ context.Context, msg *ethpb.SyncCommitteeMessage) {
|
||||
generatedMsg = msg
|
||||
}).Return(&emptypb.Empty{}, nil /* error */)
|
||||
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
validator.SubmitSyncCommitteeMessage(context.Background(), 1, pubKey)
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
validator.SubmitSyncCommitteeMessage(context.Background(), 1, pubKey)
|
||||
|
||||
require.LogsDoNotContain(t, hook, "Could not")
|
||||
require.Equal(t, primitives.Slot(1), generatedMsg.Slot)
|
||||
require.Equal(t, validatorIndex, generatedMsg.ValidatorIndex)
|
||||
require.DeepEqual(t, bytesutil.PadTo(r, 32), generatedMsg.BlockRoot)
|
||||
require.LogsDoNotContain(t, hook, "Could not")
|
||||
require.Equal(t, primitives.Slot(1), generatedMsg.Slot)
|
||||
require.Equal(t, validatorIndex, generatedMsg.ValidatorIndex)
|
||||
require.DeepEqual(t, bytesutil.PadTo(r, 32), generatedMsg.BlockRoot)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestSubmitSignedContributionAndProof_ValidatorDutiesRequestFailure(t *testing.T) {
|
||||
hook := logTest.NewGlobal()
|
||||
validator, _, validatorKey, finish := setup(t)
|
||||
validator.duties = ðpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{}}
|
||||
defer finish()
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
hook := logTest.NewGlobal()
|
||||
validator, _, validatorKey, finish := setup(t, isSlashingProtectionMinimal)
|
||||
validator.duties = ðpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{}}
|
||||
defer finish()
|
||||
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
validator.SubmitSignedContributionAndProof(context.Background(), 1, pubKey)
|
||||
require.LogsContain(t, hook, "Could not fetch validator assignment")
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
validator.SubmitSignedContributionAndProof(context.Background(), 1, pubKey)
|
||||
require.LogsContain(t, hook, "Could not fetch validator assignment")
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestSubmitSignedContributionAndProof_GetSyncSubcommitteeIndexFailure(t *testing.T) {
|
||||
hook := logTest.NewGlobal()
|
||||
validator, m, validatorKey, finish := setup(t)
|
||||
validatorIndex := primitives.ValidatorIndex(7)
|
||||
committee := []primitives.ValidatorIndex{0, 3, 4, 2, validatorIndex, 6, 8, 9, 10}
|
||||
validator.duties = ðpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
Committee: committee,
|
||||
ValidatorIndex: validatorIndex,
|
||||
},
|
||||
}}
|
||||
defer finish()
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
hook := logTest.NewGlobal()
|
||||
validator, m, validatorKey, finish := setup(t, isSlashingProtectionMinimal)
|
||||
validatorIndex := primitives.ValidatorIndex(7)
|
||||
committee := []primitives.ValidatorIndex{0, 3, 4, 2, validatorIndex, 6, 8, 9, 10}
|
||||
validator.duties = ðpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
Committee: committee,
|
||||
ValidatorIndex: validatorIndex,
|
||||
},
|
||||
}}
|
||||
defer finish()
|
||||
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
m.validatorClient.EXPECT().GetSyncSubcommitteeIndex(
|
||||
gomock.Any(), // ctx
|
||||
ðpb.SyncSubcommitteeIndexRequest{
|
||||
Slot: 1,
|
||||
PublicKey: pubKey[:],
|
||||
},
|
||||
).Return(ðpb.SyncSubcommitteeIndexResponse{}, errors.New("Bad index"))
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
m.validatorClient.EXPECT().GetSyncSubcommitteeIndex(
|
||||
gomock.Any(), // ctx
|
||||
ðpb.SyncSubcommitteeIndexRequest{
|
||||
Slot: 1,
|
||||
PublicKey: pubKey[:],
|
||||
},
|
||||
).Return(ðpb.SyncSubcommitteeIndexResponse{}, errors.New("Bad index"))
|
||||
|
||||
validator.SubmitSignedContributionAndProof(context.Background(), 1, pubKey)
|
||||
require.LogsContain(t, hook, "Could not get sync subcommittee index")
|
||||
validator.SubmitSignedContributionAndProof(context.Background(), 1, pubKey)
|
||||
require.LogsContain(t, hook, "Could not get sync subcommittee index")
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestSubmitSignedContributionAndProof_NothingToDo(t *testing.T) {
|
||||
hook := logTest.NewGlobal()
|
||||
validator, m, validatorKey, finish := setup(t)
|
||||
validatorIndex := primitives.ValidatorIndex(7)
|
||||
committee := []primitives.ValidatorIndex{0, 3, 4, 2, validatorIndex, 6, 8, 9, 10}
|
||||
validator.duties = ðpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
Committee: committee,
|
||||
ValidatorIndex: validatorIndex,
|
||||
},
|
||||
}}
|
||||
defer finish()
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
hook := logTest.NewGlobal()
|
||||
validator, m, validatorKey, finish := setup(t, isSlashingProtectionMinimal)
|
||||
validatorIndex := primitives.ValidatorIndex(7)
|
||||
committee := []primitives.ValidatorIndex{0, 3, 4, 2, validatorIndex, 6, 8, 9, 10}
|
||||
validator.duties = ðpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
Committee: committee,
|
||||
ValidatorIndex: validatorIndex,
|
||||
},
|
||||
}}
|
||||
defer finish()
|
||||
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
m.validatorClient.EXPECT().GetSyncSubcommitteeIndex(
|
||||
gomock.Any(), // ctx
|
||||
ðpb.SyncSubcommitteeIndexRequest{
|
||||
Slot: 1,
|
||||
PublicKey: pubKey[:],
|
||||
},
|
||||
).Return(ðpb.SyncSubcommitteeIndexResponse{Indices: []primitives.CommitteeIndex{}}, nil)
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
m.validatorClient.EXPECT().GetSyncSubcommitteeIndex(
|
||||
gomock.Any(), // ctx
|
||||
ðpb.SyncSubcommitteeIndexRequest{
|
||||
Slot: 1,
|
||||
PublicKey: pubKey[:],
|
||||
},
|
||||
).Return(ðpb.SyncSubcommitteeIndexResponse{Indices: []primitives.CommitteeIndex{}}, nil)
|
||||
|
||||
validator.SubmitSignedContributionAndProof(context.Background(), 1, pubKey)
|
||||
require.LogsContain(t, hook, "Empty subcommittee index list, do nothing")
|
||||
validator.SubmitSignedContributionAndProof(context.Background(), 1, pubKey)
|
||||
require.LogsContain(t, hook, "Empty subcommittee index list, do nothing")
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestSubmitSignedContributionAndProof_BadDomain(t *testing.T) {
|
||||
hook := logTest.NewGlobal()
|
||||
validator, m, validatorKey, finish := setup(t)
|
||||
validatorIndex := primitives.ValidatorIndex(7)
|
||||
committee := []primitives.ValidatorIndex{0, 3, 4, 2, validatorIndex, 6, 8, 9, 10}
|
||||
validator.duties = ðpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
Committee: committee,
|
||||
ValidatorIndex: validatorIndex,
|
||||
},
|
||||
}}
|
||||
defer finish()
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
hook := logTest.NewGlobal()
|
||||
validator, m, validatorKey, finish := setup(t, isSlashingProtectionMinimal)
|
||||
validatorIndex := primitives.ValidatorIndex(7)
|
||||
committee := []primitives.ValidatorIndex{0, 3, 4, 2, validatorIndex, 6, 8, 9, 10}
|
||||
validator.duties = ðpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
Committee: committee,
|
||||
ValidatorIndex: validatorIndex,
|
||||
},
|
||||
}}
|
||||
defer finish()
|
||||
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
m.validatorClient.EXPECT().GetSyncSubcommitteeIndex(
|
||||
gomock.Any(), // ctx
|
||||
ðpb.SyncSubcommitteeIndexRequest{
|
||||
Slot: 1,
|
||||
PublicKey: pubKey[:],
|
||||
},
|
||||
).Return(ðpb.SyncSubcommitteeIndexResponse{Indices: []primitives.CommitteeIndex{1}}, nil)
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
m.validatorClient.EXPECT().GetSyncSubcommitteeIndex(
|
||||
gomock.Any(), // ctx
|
||||
ðpb.SyncSubcommitteeIndexRequest{
|
||||
Slot: 1,
|
||||
PublicKey: pubKey[:],
|
||||
},
|
||||
).Return(ðpb.SyncSubcommitteeIndexResponse{Indices: []primitives.CommitteeIndex{1}}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().
|
||||
DomainData(gomock.Any(), // ctx
|
||||
gomock.Any()). // epoch
|
||||
Return(ðpb.DomainResponse{
|
||||
SignatureDomain: make([]byte, 32),
|
||||
}, errors.New("bad domain response"))
|
||||
m.validatorClient.EXPECT().
|
||||
DomainData(gomock.Any(), // ctx
|
||||
gomock.Any()). // epoch
|
||||
Return(ðpb.DomainResponse{
|
||||
SignatureDomain: make([]byte, 32),
|
||||
}, errors.New("bad domain response"))
|
||||
|
||||
validator.SubmitSignedContributionAndProof(context.Background(), 1, pubKey)
|
||||
require.LogsContain(t, hook, "Could not get selection proofs")
|
||||
require.LogsContain(t, hook, "bad domain response")
|
||||
validator.SubmitSignedContributionAndProof(context.Background(), 1, pubKey)
|
||||
require.LogsContain(t, hook, "Could not get selection proofs")
|
||||
require.LogsContain(t, hook, "bad domain response")
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestSubmitSignedContributionAndProof_CouldNotGetContribution(t *testing.T) {
|
||||
@@ -270,46 +303,50 @@ func TestSubmitSignedContributionAndProof_CouldNotGetContribution(t *testing.T)
|
||||
validatorKey, err := bls.SecretKeyFromBytes(rawKey)
|
||||
assert.NoError(t, err)
|
||||
|
||||
validator, m, validatorKey, finish := setupWithKey(t, validatorKey)
|
||||
validatorIndex := primitives.ValidatorIndex(7)
|
||||
committee := []primitives.ValidatorIndex{0, 3, 4, 2, validatorIndex, 6, 8, 9, 10}
|
||||
validator.duties = ðpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
Committee: committee,
|
||||
ValidatorIndex: validatorIndex,
|
||||
},
|
||||
}}
|
||||
defer finish()
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
validator, m, validatorKey, finish := setupWithKey(t, validatorKey, isSlashingProtectionMinimal)
|
||||
validatorIndex := primitives.ValidatorIndex(7)
|
||||
committee := []primitives.ValidatorIndex{0, 3, 4, 2, validatorIndex, 6, 8, 9, 10}
|
||||
validator.duties = ðpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
Committee: committee,
|
||||
ValidatorIndex: validatorIndex,
|
||||
},
|
||||
}}
|
||||
defer finish()
|
||||
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
m.validatorClient.EXPECT().GetSyncSubcommitteeIndex(
|
||||
gomock.Any(), // ctx
|
||||
ðpb.SyncSubcommitteeIndexRequest{
|
||||
Slot: 1,
|
||||
PublicKey: pubKey[:],
|
||||
},
|
||||
).Return(ðpb.SyncSubcommitteeIndexResponse{Indices: []primitives.CommitteeIndex{1}}, nil)
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
m.validatorClient.EXPECT().GetSyncSubcommitteeIndex(
|
||||
gomock.Any(), // ctx
|
||||
ðpb.SyncSubcommitteeIndexRequest{
|
||||
Slot: 1,
|
||||
PublicKey: pubKey[:],
|
||||
},
|
||||
).Return(ðpb.SyncSubcommitteeIndexResponse{Indices: []primitives.CommitteeIndex{1}}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().
|
||||
DomainData(gomock.Any(), // ctx
|
||||
gomock.Any()). // epoch
|
||||
Return(ðpb.DomainResponse{
|
||||
SignatureDomain: make([]byte, 32),
|
||||
}, nil)
|
||||
m.validatorClient.EXPECT().
|
||||
DomainData(gomock.Any(), // ctx
|
||||
gomock.Any()). // epoch
|
||||
Return(ðpb.DomainResponse{
|
||||
SignatureDomain: make([]byte, 32),
|
||||
}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().GetSyncCommitteeContribution(
|
||||
gomock.Any(), // ctx
|
||||
ðpb.SyncCommitteeContributionRequest{
|
||||
Slot: 1,
|
||||
PublicKey: pubKey[:],
|
||||
SubnetId: 0,
|
||||
},
|
||||
).Return(nil, errors.New("Bad contribution"))
|
||||
m.validatorClient.EXPECT().GetSyncCommitteeContribution(
|
||||
gomock.Any(), // ctx
|
||||
ðpb.SyncCommitteeContributionRequest{
|
||||
Slot: 1,
|
||||
PublicKey: pubKey[:],
|
||||
SubnetId: 0,
|
||||
},
|
||||
).Return(nil, errors.New("Bad contribution"))
|
||||
|
||||
validator.SubmitSignedContributionAndProof(context.Background(), 1, pubKey)
|
||||
require.LogsContain(t, hook, "Could not get sync committee contribution")
|
||||
validator.SubmitSignedContributionAndProof(context.Background(), 1, pubKey)
|
||||
require.LogsContain(t, hook, "Could not get sync committee contribution")
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestSubmitSignedContributionAndProof_CouldNotSubmitContribution(t *testing.T) {
|
||||
@@ -320,75 +357,79 @@ func TestSubmitSignedContributionAndProof_CouldNotSubmitContribution(t *testing.
|
||||
validatorKey, err := bls.SecretKeyFromBytes(rawKey)
|
||||
assert.NoError(t, err)
|
||||
|
||||
validator, m, validatorKey, finish := setupWithKey(t, validatorKey)
|
||||
validatorIndex := primitives.ValidatorIndex(7)
|
||||
committee := []primitives.ValidatorIndex{0, 3, 4, 2, validatorIndex, 6, 8, 9, 10}
|
||||
validator.duties = ðpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
Committee: committee,
|
||||
ValidatorIndex: validatorIndex,
|
||||
},
|
||||
}}
|
||||
defer finish()
|
||||
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
m.validatorClient.EXPECT().GetSyncSubcommitteeIndex(
|
||||
gomock.Any(), // ctx
|
||||
ðpb.SyncSubcommitteeIndexRequest{
|
||||
Slot: 1,
|
||||
PublicKey: pubKey[:],
|
||||
},
|
||||
).Return(ðpb.SyncSubcommitteeIndexResponse{Indices: []primitives.CommitteeIndex{1}}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().
|
||||
DomainData(gomock.Any(), // ctx
|
||||
gomock.Any()). // epoch
|
||||
Return(ðpb.DomainResponse{
|
||||
SignatureDomain: make([]byte, 32),
|
||||
}, nil)
|
||||
|
||||
aggBits := bitfield.NewBitvector128()
|
||||
aggBits.SetBitAt(0, true)
|
||||
m.validatorClient.EXPECT().GetSyncCommitteeContribution(
|
||||
gomock.Any(), // ctx
|
||||
ðpb.SyncCommitteeContributionRequest{
|
||||
Slot: 1,
|
||||
PublicKey: pubKey[:],
|
||||
SubnetId: 0,
|
||||
},
|
||||
).Return(ðpb.SyncCommitteeContribution{
|
||||
BlockRoot: make([]byte, fieldparams.RootLength),
|
||||
Signature: make([]byte, 96),
|
||||
AggregationBits: aggBits,
|
||||
}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().
|
||||
DomainData(gomock.Any(), // ctx
|
||||
gomock.Any()). // epoch
|
||||
Return(ðpb.DomainResponse{
|
||||
SignatureDomain: make([]byte, 32),
|
||||
}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().SubmitSignedContributionAndProof(
|
||||
gomock.Any(), // ctx
|
||||
gomock.AssignableToTypeOf(ðpb.SignedContributionAndProof{
|
||||
Message: ðpb.ContributionAndProof{
|
||||
AggregatorIndex: 7,
|
||||
Contribution: ðpb.SyncCommitteeContribution{
|
||||
BlockRoot: make([]byte, fieldparams.RootLength),
|
||||
Signature: make([]byte, 96),
|
||||
AggregationBits: bitfield.NewBitvector128(),
|
||||
Slot: 1,
|
||||
SubcommitteeIndex: 1,
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
validator, m, validatorKey, finish := setupWithKey(t, validatorKey, isSlashingProtectionMinimal)
|
||||
validatorIndex := primitives.ValidatorIndex(7)
|
||||
committee := []primitives.ValidatorIndex{0, 3, 4, 2, validatorIndex, 6, 8, 9, 10}
|
||||
validator.duties = ðpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
Committee: committee,
|
||||
ValidatorIndex: validatorIndex,
|
||||
},
|
||||
},
|
||||
}),
|
||||
).Return(&emptypb.Empty{}, errors.New("Could not submit contribution"))
|
||||
}}
|
||||
defer finish()
|
||||
|
||||
validator.SubmitSignedContributionAndProof(context.Background(), 1, pubKey)
|
||||
require.LogsContain(t, hook, "Could not submit signed contribution and proof")
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
m.validatorClient.EXPECT().GetSyncSubcommitteeIndex(
|
||||
gomock.Any(), // ctx
|
||||
ðpb.SyncSubcommitteeIndexRequest{
|
||||
Slot: 1,
|
||||
PublicKey: pubKey[:],
|
||||
},
|
||||
).Return(ðpb.SyncSubcommitteeIndexResponse{Indices: []primitives.CommitteeIndex{1}}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().
|
||||
DomainData(gomock.Any(), // ctx
|
||||
gomock.Any()). // epoch
|
||||
Return(ðpb.DomainResponse{
|
||||
SignatureDomain: make([]byte, 32),
|
||||
}, nil)
|
||||
|
||||
aggBits := bitfield.NewBitvector128()
|
||||
aggBits.SetBitAt(0, true)
|
||||
m.validatorClient.EXPECT().GetSyncCommitteeContribution(
|
||||
gomock.Any(), // ctx
|
||||
ðpb.SyncCommitteeContributionRequest{
|
||||
Slot: 1,
|
||||
PublicKey: pubKey[:],
|
||||
SubnetId: 0,
|
||||
},
|
||||
).Return(ðpb.SyncCommitteeContribution{
|
||||
BlockRoot: make([]byte, fieldparams.RootLength),
|
||||
Signature: make([]byte, 96),
|
||||
AggregationBits: aggBits,
|
||||
}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().
|
||||
DomainData(gomock.Any(), // ctx
|
||||
gomock.Any()). // epoch
|
||||
Return(ðpb.DomainResponse{
|
||||
SignatureDomain: make([]byte, 32),
|
||||
}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().SubmitSignedContributionAndProof(
|
||||
gomock.Any(), // ctx
|
||||
gomock.AssignableToTypeOf(ðpb.SignedContributionAndProof{
|
||||
Message: ðpb.ContributionAndProof{
|
||||
AggregatorIndex: 7,
|
||||
Contribution: ðpb.SyncCommitteeContribution{
|
||||
BlockRoot: make([]byte, fieldparams.RootLength),
|
||||
Signature: make([]byte, 96),
|
||||
AggregationBits: bitfield.NewBitvector128(),
|
||||
Slot: 1,
|
||||
SubcommitteeIndex: 1,
|
||||
},
|
||||
},
|
||||
}),
|
||||
).Return(&emptypb.Empty{}, errors.New("Could not submit contribution"))
|
||||
|
||||
validator.SubmitSignedContributionAndProof(context.Background(), 1, pubKey)
|
||||
require.LogsContain(t, hook, "Could not submit signed contribution and proof")
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestSubmitSignedContributionAndProof_Ok(t *testing.T) {
|
||||
@@ -398,72 +439,76 @@ func TestSubmitSignedContributionAndProof_Ok(t *testing.T) {
|
||||
validatorKey, err := bls.SecretKeyFromBytes(rawKey)
|
||||
assert.NoError(t, err)
|
||||
|
||||
validator, m, validatorKey, finish := setupWithKey(t, validatorKey)
|
||||
validatorIndex := primitives.ValidatorIndex(7)
|
||||
committee := []primitives.ValidatorIndex{0, 3, 4, 2, validatorIndex, 6, 8, 9, 10}
|
||||
validator.duties = ðpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
Committee: committee,
|
||||
ValidatorIndex: validatorIndex,
|
||||
},
|
||||
}}
|
||||
defer finish()
|
||||
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
m.validatorClient.EXPECT().GetSyncSubcommitteeIndex(
|
||||
gomock.Any(), // ctx
|
||||
ðpb.SyncSubcommitteeIndexRequest{
|
||||
Slot: 1,
|
||||
PublicKey: pubKey[:],
|
||||
},
|
||||
).Return(ðpb.SyncSubcommitteeIndexResponse{Indices: []primitives.CommitteeIndex{1}}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().
|
||||
DomainData(gomock.Any(), // ctx
|
||||
gomock.Any()). // epoch
|
||||
Return(ðpb.DomainResponse{
|
||||
SignatureDomain: make([]byte, 32),
|
||||
}, nil)
|
||||
|
||||
aggBits := bitfield.NewBitvector128()
|
||||
aggBits.SetBitAt(0, true)
|
||||
m.validatorClient.EXPECT().GetSyncCommitteeContribution(
|
||||
gomock.Any(), // ctx
|
||||
ðpb.SyncCommitteeContributionRequest{
|
||||
Slot: 1,
|
||||
PublicKey: pubKey[:],
|
||||
SubnetId: 0,
|
||||
},
|
||||
).Return(ðpb.SyncCommitteeContribution{
|
||||
BlockRoot: make([]byte, fieldparams.RootLength),
|
||||
Signature: make([]byte, 96),
|
||||
AggregationBits: aggBits,
|
||||
}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().
|
||||
DomainData(gomock.Any(), // ctx
|
||||
gomock.Any()). // epoch
|
||||
Return(ðpb.DomainResponse{
|
||||
SignatureDomain: make([]byte, 32),
|
||||
}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().SubmitSignedContributionAndProof(
|
||||
gomock.Any(), // ctx
|
||||
gomock.AssignableToTypeOf(ðpb.SignedContributionAndProof{
|
||||
Message: ðpb.ContributionAndProof{
|
||||
AggregatorIndex: 7,
|
||||
Contribution: ðpb.SyncCommitteeContribution{
|
||||
BlockRoot: make([]byte, 32),
|
||||
Signature: make([]byte, 96),
|
||||
AggregationBits: bitfield.NewBitvector128(),
|
||||
Slot: 1,
|
||||
SubcommitteeIndex: 1,
|
||||
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
|
||||
t.Run(fmt.Sprintf("SlashingProtectionMinimal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
validator, m, validatorKey, finish := setupWithKey(t, validatorKey, isSlashingProtectionMinimal)
|
||||
validatorIndex := primitives.ValidatorIndex(7)
|
||||
committee := []primitives.ValidatorIndex{0, 3, 4, 2, validatorIndex, 6, 8, 9, 10}
|
||||
validator.duties = ðpb.DutiesResponse{CurrentEpochDuties: []*ethpb.DutiesResponse_Duty{
|
||||
{
|
||||
PublicKey: validatorKey.PublicKey().Marshal(),
|
||||
Committee: committee,
|
||||
ValidatorIndex: validatorIndex,
|
||||
},
|
||||
},
|
||||
}),
|
||||
).Return(&emptypb.Empty{}, nil)
|
||||
}}
|
||||
defer finish()
|
||||
|
||||
validator.SubmitSignedContributionAndProof(context.Background(), 1, pubKey)
|
||||
var pubKey [fieldparams.BLSPubkeyLength]byte
|
||||
copy(pubKey[:], validatorKey.PublicKey().Marshal())
|
||||
m.validatorClient.EXPECT().GetSyncSubcommitteeIndex(
|
||||
gomock.Any(), // ctx
|
||||
ðpb.SyncSubcommitteeIndexRequest{
|
||||
Slot: 1,
|
||||
PublicKey: pubKey[:],
|
||||
},
|
||||
).Return(ðpb.SyncSubcommitteeIndexResponse{Indices: []primitives.CommitteeIndex{1}}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().
|
||||
DomainData(gomock.Any(), // ctx
|
||||
gomock.Any()). // epoch
|
||||
Return(ðpb.DomainResponse{
|
||||
SignatureDomain: make([]byte, 32),
|
||||
}, nil)
|
||||
|
||||
aggBits := bitfield.NewBitvector128()
|
||||
aggBits.SetBitAt(0, true)
|
||||
m.validatorClient.EXPECT().GetSyncCommitteeContribution(
|
||||
gomock.Any(), // ctx
|
||||
ðpb.SyncCommitteeContributionRequest{
|
||||
Slot: 1,
|
||||
PublicKey: pubKey[:],
|
||||
SubnetId: 0,
|
||||
},
|
||||
).Return(ðpb.SyncCommitteeContribution{
|
||||
BlockRoot: make([]byte, fieldparams.RootLength),
|
||||
Signature: make([]byte, 96),
|
||||
AggregationBits: aggBits,
|
||||
}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().
|
||||
DomainData(gomock.Any(), // ctx
|
||||
gomock.Any()). // epoch
|
||||
Return(ðpb.DomainResponse{
|
||||
SignatureDomain: make([]byte, 32),
|
||||
}, nil)
|
||||
|
||||
m.validatorClient.EXPECT().SubmitSignedContributionAndProof(
|
||||
gomock.Any(), // ctx
|
||||
gomock.AssignableToTypeOf(ðpb.SignedContributionAndProof{
|
||||
Message: ðpb.ContributionAndProof{
|
||||
AggregatorIndex: 7,
|
||||
Contribution: ðpb.SyncCommitteeContribution{
|
||||
BlockRoot: make([]byte, 32),
|
||||
Signature: make([]byte, 96),
|
||||
AggregationBits: bitfield.NewBitvector128(),
|
||||
Slot: 1,
|
||||
SubcommitteeIndex: 1,
|
||||
},
|
||||
},
|
||||
}),
|
||||
).Return(&emptypb.Empty{}, nil)
|
||||
|
||||
validator.SubmitSignedContributionAndProof(context.Background(), 1, pubKey)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@@ -37,7 +37,7 @@ import (
	beacon_api "github.com/prysmaticlabs/prysm/v5/validator/client/beacon-api"
	"github.com/prysmaticlabs/prysm/v5/validator/client/iface"
	vdb "github.com/prysmaticlabs/prysm/v5/validator/db"
	"github.com/prysmaticlabs/prysm/v5/validator/db/kv"
	dbCommon "github.com/prysmaticlabs/prysm/v5/validator/db/common"
	"github.com/prysmaticlabs/prysm/v5/validator/graffiti"
	"github.com/prysmaticlabs/prysm/v5/validator/keymanager"
	"github.com/prysmaticlabs/prysm/v5/validator/keymanager/local"
@@ -517,7 +517,7 @@ func buildDuplicateError(response []*ethpb.DoppelGangerResponse_ValidatorRespons
}

// Ensures that the latest attestation history is retrieved.
func retrieveLatestRecord(recs []*kv.AttestationRecord) *kv.AttestationRecord {
func retrieveLatestRecord(recs []*dbCommon.AttestationRecord) *dbCommon.AttestationRecord {
	if len(recs) == 0 {
		return nil
	}

File diff suppressed because it is too large
@@ -4,6 +4,7 @@ go_library(
|
||||
name = "go_default_library",
|
||||
srcs = [
|
||||
"alias.go",
|
||||
"convert.go",
|
||||
"log.go",
|
||||
"migrate.go",
|
||||
"restore.go",
|
||||
@@ -15,8 +16,13 @@ go_library(
|
||||
],
|
||||
deps = [
|
||||
"//cmd:go_default_library",
|
||||
"//config/fieldparams:go_default_library",
|
||||
"//consensus-types/primitives:go_default_library",
|
||||
"//io/file:go_default_library",
|
||||
"//io/prompt:go_default_library",
|
||||
"//proto/prysm/v1alpha1:go_default_library",
|
||||
"//validator/db/common:go_default_library",
|
||||
"//validator/db/filesystem:go_default_library",
|
||||
"//validator/db/iface:go_default_library",
|
||||
"//validator/db/kv:go_default_library",
|
||||
"@com_github_pkg_errors//:go_default_library",
|
||||
@@ -28,17 +34,27 @@ go_library(
|
||||
go_test(
|
||||
name = "go_default_test",
|
||||
srcs = [
|
||||
"convert_test.go",
|
||||
"migrate_test.go",
|
||||
"restore_test.go",
|
||||
],
|
||||
embed = [":go_default_library"],
|
||||
deps = [
|
||||
"//cmd:go_default_library",
|
||||
"//config/fieldparams:go_default_library",
|
||||
"//config/params:go_default_library",
|
||||
"//config/proposer:go_default_library",
|
||||
"//consensus-types/primitives:go_default_library",
|
||||
"//io/file:go_default_library",
|
||||
"//proto/prysm/v1alpha1:go_default_library",
|
||||
"//testing/assert:go_default_library",
|
||||
"//testing/require:go_default_library",
|
||||
"//validator/db/common:go_default_library",
|
||||
"//validator/db/filesystem:go_default_library",
|
||||
"//validator/db/iface:go_default_library",
|
||||
"//validator/db/kv:go_default_library",
|
||||
"//validator/db/testing:go_default_library",
|
||||
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
|
||||
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
|
||||
"@com_github_urfave_cli_v2//:go_default_library",
|
||||
],
|
||||
|
||||
17
validator/db/common/BUILD.bazel
Normal file
@@ -0,0 +1,17 @@
load("@prysm//tools/go:def.bzl", "go_library")

go_library(
    name = "go_default_library",
    srcs = [
        "progress.go",
        "structs.go",
    ],
    importpath = "github.com/prysmaticlabs/prysm/v5/validator/db/common",
    visibility = ["//visibility:public"],
    deps = [
        "//config/fieldparams:go_default_library",
        "//consensus-types/primitives:go_default_library",
        "@com_github_k0kubun_go_ansi//:go_default_library",
        "@com_github_schollz_progressbar_v3//:go_default_library",
    ],
)
|
||||
27
validator/db/common/progress.go
Normal file
@@ -0,0 +1,27 @@
package common

import (
	"fmt"

	"github.com/k0kubun/go-ansi"

	"github.com/schollz/progressbar/v3"
)

func InitializeProgressBar(numItems int, msg string) *progressbar.ProgressBar {
	return progressbar.NewOptions(
		numItems,
		progressbar.OptionFullWidth(),
		progressbar.OptionSetWriter(ansi.NewAnsiStdout()),
		progressbar.OptionEnableColorCodes(true),
		progressbar.OptionSetTheme(progressbar.Theme{
			Saucer:        "[green]=[reset]",
			SaucerHead:    "[green]>[reset]",
			SaucerPadding: " ",
			BarStart:      "[",
			BarEnd:        "]",
		}),
		progressbar.OptionOnCompletion(func() { fmt.Println() }),
		progressbar.OptionSetDescription(msg),
	)
}
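
Aside (not part of the diff): a minimal sketch of how this progress bar helper is typically driven by a caller; the item slice and message below are placeholders, and the Add(1)-per-item pattern mirrors the loops in validator/db/convert.go.

// Illustrative usage sketch, assuming a caller outside the common package.
package main

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/v5/validator/db/common"
)

func main() {
	items := []string{"a", "b", "c"} // placeholder work items
	bar := common.InitializeProgressBar(len(items), "Processing items:")
	for range items {
		// Advance the bar once per processed item; errors are only reported, not fatal.
		if err := bar.Add(1); err != nil {
			fmt.Println("could not increase progress bar:", err)
		}
	}
}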
|
||||
28
validator/db/common/structs.go
Normal file
@@ -0,0 +1,28 @@
package common

import (
	fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)

const FailedBlockSignLocalErr = "block rejected by local protection"

// Proposal representation for a validator public key.
type Proposal struct {
	Slot        primitives.Slot `json:"slot"`
	SigningRoot []byte          `json:"signing_root"`
}

// ProposalHistoryForPubkey for a validator public key.
type ProposalHistoryForPubkey struct {
	Proposals []Proposal
}

// AttestationRecord which can be represented by these simple values
// for manipulation by database methods.
type AttestationRecord struct {
	PubKey      [fieldparams.BLSPubkeyLength]byte
	Source      primitives.Epoch
	Target      primitives.Epoch
	SigningRoot []byte
}
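
Aside (not part of the diff): the minimal (EIP-3076) filesystem database only needs the highest source and target epochs per key, so a full attestation history collapses to a single pair. Below is an illustrative sketch of that reduction, written as if it lived alongside structs.go; the helper name is hypothetical, and the same logic appears inline in ConvertDatabase further down.

// highestEpochs is an illustrative helper (assumed name): reduce a complete
// attestation history to the highest source/target epochs seen so far.
func highestEpochs(records []*AttestationRecord) (source, target primitives.Epoch) {
	for _, record := range records {
		// Skip nil records defensively, as the conversion code does.
		if record == nil {
			continue
		}
		if record.Source > source {
			source = record.Source
		}
		if record.Target > target {
			target = record.Target
		}
	}
	return source, target
}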
|
||||
257
validator/db/convert.go
Normal file
@@ -0,0 +1,257 @@
|
||||
package db
|
||||
|
||||
import (
|
||||
"context"
|
||||
"path/filepath"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
|
||||
"github.com/prysmaticlabs/prysm/v5/io/file"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/common"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/filesystem"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/iface"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/kv"
|
||||
)
|
||||
|
||||
// ConvertDatabase converts a minimal database to a complete database or a complete database to a minimal database.
|
||||
// Delete the source database after conversion.
|
||||
func ConvertDatabase(ctx context.Context, sourceDataDir string, targetDataDir string, minimalToComplete bool) error {
|
||||
// Check if the source database exists.
|
||||
var (
|
||||
sourceDatabaseExists bool
|
||||
err error
|
||||
)
|
||||
|
||||
if minimalToComplete {
|
||||
sourceDataBasePath := filepath.Join(sourceDataDir, filesystem.DatabaseDirName)
|
||||
sourceDatabaseExists, err = file.Exists(sourceDataBasePath, file.Directory)
|
||||
} else {
|
||||
sourceDataBasePath := filepath.Join(sourceDataDir, kv.ProtectionDbFileName)
|
||||
sourceDatabaseExists, err = file.Exists(sourceDataBasePath, file.Regular)
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not check if source database exists")
|
||||
}
|
||||
|
||||
// If the source database does not exist, there is nothing to convert.
|
||||
if !sourceDatabaseExists {
|
||||
return errors.New("source database does not exist")
|
||||
}
|
||||
|
||||
// Get the source database.
|
||||
var sourceDatabase iface.ValidatorDB
|
||||
|
||||
if minimalToComplete {
|
||||
sourceDatabase, err = filesystem.NewStore(sourceDataDir, nil)
|
||||
} else {
|
||||
sourceDatabase, err = kv.NewKVStore(ctx, sourceDataDir, nil)
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not get source database")
|
||||
}
|
||||
|
||||
// Close the source database.
|
||||
defer func() {
|
||||
if err := sourceDatabase.Close(); err != nil {
|
||||
log.WithError(err).Error("Failed to close source database")
|
||||
}
|
||||
}()
|
||||
|
||||
// Create the target database.
|
||||
var targetDatabase iface.ValidatorDB
|
||||
|
||||
if minimalToComplete {
|
||||
targetDatabase, err = kv.NewKVStore(ctx, targetDataDir, nil)
|
||||
} else {
|
||||
targetDatabase, err = filesystem.NewStore(targetDataDir, nil)
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not create target database")
|
||||
}
|
||||
|
||||
// Close the target database.
|
||||
defer func() {
|
||||
if err := targetDatabase.Close(); err != nil {
|
||||
log.WithError(err).Error("Failed to close target database")
|
||||
}
|
||||
}()
|
||||
|
||||
// Genesis
|
||||
// -------
|
||||
// Get the genesis validators root.
|
||||
genesisValidatorRoot, err := sourceDatabase.GenesisValidatorsRoot(ctx)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not get genesis validators root from source database")
|
||||
}
|
||||
|
||||
// Save the genesis validators root.
|
||||
if err := targetDatabase.SaveGenesisValidatorsRoot(ctx, genesisValidatorRoot); err != nil {
|
||||
return errors.Wrap(err, "could not save genesis validators root")
|
||||
}
|
||||
|
||||
// Graffiti
|
||||
// --------
|
||||
// Get the graffiti file hash.
|
||||
graffitiFileHash, exists, err := sourceDatabase.GraffitiFileHash()
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not get graffiti file hash from source database")
|
||||
}
|
||||
|
||||
if exists {
|
||||
// Calling GraffitiOrderedIndex will save the graffiti file hash.
|
||||
if _, err := targetDatabase.GraffitiOrderedIndex(ctx, graffitiFileHash); err != nil {
|
||||
return errors.Wrap(err, "could get graffiti ordered index")
|
||||
}
|
||||
}
|
||||
|
||||
// Get the graffiti ordered index.
|
||||
graffitiOrderedIndex, err := sourceDatabase.GraffitiOrderedIndex(ctx, graffitiFileHash)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not get graffiti ordered index from source database")
|
||||
}
|
||||
|
||||
// Save the graffiti ordered index.
|
||||
if err := targetDatabase.SaveGraffitiOrderedIndex(ctx, graffitiOrderedIndex); err != nil {
|
||||
return errors.Wrap(err, "could not save graffiti ordered index")
|
||||
}
|
||||
|
||||
// Proposer settings
|
||||
// -----------------
|
||||
// Get the proposer settings.
|
||||
proposerSettings, err := sourceDatabase.ProposerSettings(ctx)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not get proposer settings from source database")
|
||||
}
|
||||
|
||||
// Save the proposer settings.
|
||||
if err := targetDatabase.SaveProposerSettings(ctx, proposerSettings); err != nil {
|
||||
return errors.Wrap(err, "could not save proposer settings")
|
||||
}
|
||||
|
||||
// Attestations
|
||||
// ------------
|
||||
// Get all public keys that have attested.
|
||||
attestedPublicKeys, err := sourceDatabase.AttestedPublicKeys(ctx)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not get attested public keys from source database")
|
||||
}
|
||||
|
||||
// Initialize the progress bar.
|
||||
bar := common.InitializeProgressBar(
|
||||
len(attestedPublicKeys),
|
||||
"Processing attestations:",
|
||||
)
|
||||
|
||||
for _, pubkey := range attestedPublicKeys {
|
||||
// Update the progress bar.
|
||||
if err := bar.Add(1); err != nil {
|
||||
log.WithError(err).Debug("Could not increase progress bar")
|
||||
}
|
||||
|
||||
// Get the attestation records.
|
||||
attestationRecords, err := sourceDatabase.AttestationHistoryForPubKey(ctx, pubkey)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not get attestation history for public key")
|
||||
}
|
||||
|
||||
// If there are no attestation records, skip this public key.
|
||||
if len(attestationRecords) == 0 {
|
||||
continue
|
||||
}
|
||||
|
||||
highestSource, highestTarget := primitives.Epoch(0), primitives.Epoch(0)
|
||||
for _, record := range attestationRecords {
|
||||
// If the record is nil, skip it.
|
||||
if record == nil {
|
||||
continue
|
||||
}
|
||||
|
||||
// Get the highest source and target epoch.
|
||||
if record.Source > highestSource {
|
||||
highestSource = record.Source
|
||||
}
|
||||
|
||||
if record.Target > highestTarget {
|
||||
highestTarget = record.Target
|
||||
}
|
||||
}
|
||||
|
||||
// Create the indexed attestation with the highest source and target epoch.
|
||||
indexedAttestation := &ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{
Epoch: highestSource,
},
Target: &ethpb.Checkpoint{
Epoch: highestTarget,
},
},
}
|
||||
|
||||
if err := targetDatabase.SaveAttestationForPubKey(ctx, pubkey, [fieldparams.RootLength]byte{}, indexedAttestation); err != nil {
|
||||
return errors.Wrap(err, "could not save attestation for public key")
|
||||
}
|
||||
}
|
||||
|
||||
// Proposals
|
||||
// ---------
|
||||
// Get all pubkeys in database.
|
||||
proposedPublicKeys, err := sourceDatabase.ProposedPublicKeys(ctx)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not get proposed public keys from source database")
|
||||
}
|
||||
|
||||
// Initialize the progress bar.
|
||||
bar = common.InitializeProgressBar(
|
||||
len(proposedPublicKeys),
|
||||
"Processing proposals:",
|
||||
)
|
||||
|
||||
for _, pubkey := range proposedPublicKeys {
|
||||
// Update the progress bar.
|
||||
if err := bar.Add(1); err != nil {
|
||||
log.WithError(err).Debug("Could not increase progress bar")
|
||||
}
|
||||
|
||||
// Get the proposal history.
|
||||
proposals, err := sourceDatabase.ProposalHistoryForPubKey(ctx, pubkey)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not get proposal history for public key")
|
||||
}
|
||||
|
||||
// If there are no proposals, skip this public key.
|
||||
if len(proposals) == 0 {
|
||||
continue
|
||||
}
|
||||
|
||||
highestSlot := primitives.Slot(0)
|
||||
for _, proposal := range proposals {
|
||||
// If proposal is nil, skip it.
|
||||
if proposal == nil {
|
||||
continue
|
||||
}
|
||||
|
||||
// Get the highest slot.
|
||||
if proposal.Slot > highestSlot {
|
||||
highestSlot = proposal.Slot
|
||||
}
|
||||
}
|
||||
|
||||
// Save the proposal history for the highest slot.
|
||||
if err := targetDatabase.SaveProposalHistoryForSlot(ctx, pubkey, highestSlot, nil); err != nil {
|
||||
return errors.Wrap(err, "could not save proposal history for public key")
|
||||
}
|
||||
}
|
||||
|
||||
// Delete the source database.
|
||||
if err := sourceDatabase.ClearDB(); err != nil {
|
||||
return errors.Wrap(err, "could not delete source database")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
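In short, `ConvertDatabase` copies the genesis validators root, graffiti data, proposer settings, and the highest attestation and proposal per key into the target database, then deletes the source. A minimal usage sketch, assuming the `github.com/prysmaticlabs/prysm/v5/validator/db` import path for this package and an illustrative data directory:

package main

import (
	"context"
	"log"

	"github.com/prysmaticlabs/prysm/v5/validator/db"
)

func main() {
	ctx := context.Background()
	datadir := "/path/to/validator-datadir" // illustrative path

	// minimalToComplete=false converts a complete (BoltDB) database into a
	// minimal (filesystem) one, in place, and removes the source database.
	if err := db.ConvertDatabase(ctx, datadir, datadir, false); err != nil {
		log.Fatalf("could not convert database: %v", err)
	}
}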
validator/db/convert_test.go (new file, 265 lines)
@@ -0,0 +1,265 @@
|
||||
package db
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"path/filepath"
|
||||
"testing"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common/hexutil"
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/config/proposer"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
|
||||
"github.com/prysmaticlabs/prysm/v5/io/file"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/require"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/common"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/filesystem"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/iface"
|
||||
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/kv"
|
||||
)
|
||||
|
||||
func getPubkeyFromString(t *testing.T, pubkeyString string) [fieldparams.BLSPubkeyLength]byte {
|
||||
var pubkey [fieldparams.BLSPubkeyLength]byte
|
||||
pubkeyBytes, err := hexutil.Decode(pubkeyString)
|
||||
require.NoError(t, err, "hexutil.Decode should not return an error")
|
||||
copy(pubkey[:], pubkeyBytes)
|
||||
return pubkey
|
||||
}
|
||||
|
||||
func getFeeRecipientFromString(t *testing.T, feeRecipientString string) [fieldparams.FeeRecipientLength]byte {
|
||||
var feeRecipient [fieldparams.FeeRecipientLength]byte
|
||||
feeRecipientBytes, err := hexutil.Decode(feeRecipientString)
|
||||
require.NoError(t, err, "hexutil.Decode should not return an error")
|
||||
copy(feeRecipient[:], feeRecipientBytes)
|
||||
return feeRecipient
|
||||
}
|
||||
|
||||
func TestDB_ConvertDatabase(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
pubKeyString1 := "0x80000060606fa05c7339dd7bcd0d3e4d8b573fa30dea2fdb4997031a703e3300326e3c054be682f92d9c367cd647bbea"
|
||||
pubKeyString2 := "0x81000060606fa05c7339dd7bcd0d3e4d8b573fa30dea2fdb4997031a703e3300326e3c054be682f92d9c367cd647bbea"
|
||||
defaultFeeRecipientString := "0xe688b84b23f322a994A53dbF8E15FA82CDB71127"
|
||||
customFeeRecipientString := "0xeD33259a056F4fb449FFB7B7E2eCB43a9B5685Bf"
|
||||
|
||||
pubkey1 := getPubkeyFromString(t, pubKeyString1)
|
||||
pubkey2 := getPubkeyFromString(t, pubKeyString2)
|
||||
defaultFeeRecipient := getFeeRecipientFromString(t, defaultFeeRecipientString)
|
||||
customFeeRecipient := getFeeRecipientFromString(t, customFeeRecipientString)
|
||||
|
||||
for _, minimalToComplete := range []bool{false, true} {
|
||||
t.Run(fmt.Sprintf("minimalToComplete=%v", minimalToComplete), func(t *testing.T) {
|
||||
// Create signing root
|
||||
signingRoot := [fieldparams.RootLength]byte{}
|
||||
var signingRootBytes []byte
|
||||
if minimalToComplete {
|
||||
signingRootBytes = signingRoot[:]
|
||||
}
|
||||
|
||||
// Create the database directory path.
|
||||
datadir := t.TempDir()
|
||||
|
||||
// Run source DB preparation.
|
||||
// --------------------------
|
||||
// Create the source database.
|
||||
var (
|
||||
sourceDatabase, targetDatabase iface.ValidatorDB
|
||||
err error
|
||||
)
|
||||
|
||||
if minimalToComplete {
|
||||
sourceDatabase, err = filesystem.NewStore(datadir, &filesystem.Config{
|
||||
PubKeys: [][fieldparams.BLSPubkeyLength]byte{pubkey1, pubkey2},
|
||||
})
|
||||
} else {
|
||||
sourceDatabase, err = kv.NewKVStore(ctx, datadir, &kv.Config{
|
||||
PubKeys: [][fieldparams.BLSPubkeyLength]byte{pubkey1, pubkey2},
|
||||
})
|
||||
}
|
||||
|
||||
require.NoError(t, err, "could not create source database")
|
||||
|
||||
// Save the genesis validator root.
|
||||
expectedGenesisValidatorRoot := []byte("genesis-validator-root")
|
||||
err = sourceDatabase.SaveGenesisValidatorsRoot(ctx, expectedGenesisValidatorRoot)
|
||||
require.NoError(t, err, "could not save genesis validator root")
|
||||
|
||||
// Save the graffiti file hash.
|
||||
// (Getting the graffiti ordered index will set the graffiti file hash)
|
||||
expectedGraffitiFileHash := [32]byte{1}
|
||||
_, err = sourceDatabase.GraffitiOrderedIndex(ctx, expectedGraffitiFileHash)
|
||||
require.NoError(t, err, "could not get graffiti ordered index")
|
||||
|
||||
// Save the graffiti ordered index.
|
||||
expectedGraffitiOrderedIndex := uint64(1)
|
||||
err = sourceDatabase.SaveGraffitiOrderedIndex(ctx, expectedGraffitiOrderedIndex)
|
||||
require.NoError(t, err, "could not save graffiti ordered index")
|
||||
|
||||
// Save the proposer settings.
|
||||
var relays []string = nil
|
||||
|
||||
expectedProposerSettings := &proposer.Settings{
|
||||
ProposeConfig: map[[fieldparams.BLSPubkeyLength]byte]*proposer.Option{
|
||||
pubkey1: {
|
||||
FeeRecipientConfig: &proposer.FeeRecipientConfig{
|
||||
FeeRecipient: customFeeRecipient,
|
||||
},
|
||||
BuilderConfig: &proposer.BuilderConfig{
|
||||
Enabled: true,
|
||||
GasLimit: 42,
|
||||
Relays: relays,
|
||||
},
|
||||
},
|
||||
},
|
||||
DefaultConfig: &proposer.Option{
|
||||
FeeRecipientConfig: &proposer.FeeRecipientConfig{
|
||||
FeeRecipient: defaultFeeRecipient,
|
||||
},
|
||||
BuilderConfig: &proposer.BuilderConfig{
|
||||
Enabled: false,
|
||||
GasLimit: 43,
|
||||
Relays: relays,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
err = sourceDatabase.SaveProposerSettings(ctx, expectedProposerSettings)
|
||||
require.NoError(t, err, "could not save proposer settings")
|
||||
|
||||
// Save some attestations.
|
||||
completeAttestations := []*ethpb.IndexedAttestation{
{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{
Epoch: 1,
},
Target: &ethpb.Checkpoint{
Epoch: 2,
},
},
},
{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{
Epoch: 2,
},
Target: &ethpb.Checkpoint{
Epoch: 3,
},
},
},
}
|
||||
|
||||
expectedAttestationRecords1 := []*common.AttestationRecord{
|
||||
{
|
||||
PubKey: pubkey1,
|
||||
Source: primitives.Epoch(2),
|
||||
Target: primitives.Epoch(3),
|
||||
SigningRoot: signingRootBytes,
|
||||
},
|
||||
}
|
||||
|
||||
expectedAttestationRecords2 := []*common.AttestationRecord{
|
||||
{
|
||||
PubKey: pubkey2,
|
||||
Source: primitives.Epoch(2),
|
||||
Target: primitives.Epoch(3),
|
||||
SigningRoot: signingRootBytes,
|
||||
},
|
||||
}
|
||||
|
||||
err = sourceDatabase.SaveAttestationsForPubKey(ctx, pubkey1, [][]byte{{1}, {2}}, completeAttestations)
|
||||
require.NoError(t, err, "could not save attestations")
|
||||
|
||||
err = sourceDatabase.SaveAttestationsForPubKey(ctx, pubkey2, [][]byte{{1}, {2}}, completeAttestations)
|
||||
require.NoError(t, err, "could not save attestations")
|
||||
|
||||
// Save some block proposals.
|
||||
err = sourceDatabase.SaveProposalHistoryForSlot(ctx, pubkey1, 42, []byte{})
|
||||
require.NoError(t, err, "could not save block proposal")
|
||||
|
||||
err = sourceDatabase.SaveProposalHistoryForSlot(ctx, pubkey1, 43, []byte{})
|
||||
require.NoError(t, err, "could not save block proposal")
|
||||
|
||||
expectedProposals := []*common.Proposal{
|
||||
{
|
||||
Slot: 43,
|
||||
SigningRoot: signingRootBytes,
|
||||
},
|
||||
}
|
||||
|
||||
// Close the source database.
|
||||
err = sourceDatabase.Close()
|
||||
require.NoError(t, err, "could not close source database")
|
||||
|
||||
// Source to target DB conversion.
|
||||
// ----------------------------------------
|
||||
err = ConvertDatabase(ctx, datadir, datadir, minimalToComplete)
|
||||
require.NoError(t, err, "could not convert source to target database")
|
||||
|
||||
// Check the target database.
|
||||
// --------------------------
|
||||
if minimalToComplete {
|
||||
targetDatabase, err = kv.NewKVStore(ctx, datadir, nil)
|
||||
} else {
|
||||
targetDatabase, err = filesystem.NewStore(datadir, nil)
|
||||
}
|
||||
require.NoError(t, err, "could not get minimal database")
|
||||
|
||||
// Check the genesis validator root.
|
||||
actualGenesisValidatorRoot, err := targetDatabase.GenesisValidatorsRoot(ctx)
|
||||
require.NoError(t, err, "could not get genesis validator root from target database")
|
||||
require.DeepSSZEqual(t, expectedGenesisValidatorRoot, actualGenesisValidatorRoot, "genesis validator root should match")
|
||||
|
||||
// Check the graffiti file hash.
|
||||
actualGraffitiFileHash, exists, err := targetDatabase.GraffitiFileHash()
|
||||
require.NoError(t, err, "could not get graffiti file hash from target database")
|
||||
require.Equal(t, true, exists, "graffiti file hash should exist")
|
||||
require.Equal(t, expectedGraffitiFileHash, actualGraffitiFileHash, "graffiti file hash should match")
|
||||
|
||||
// Check the graffiti ordered index.
|
||||
actualGraffitiOrderedIndex, err := targetDatabase.GraffitiOrderedIndex(ctx, expectedGraffitiFileHash)
|
||||
require.NoError(t, err, "could not get graffiti ordered index from target database")
|
||||
require.Equal(t, expectedGraffitiOrderedIndex, actualGraffitiOrderedIndex, "graffiti ordered index should match")
|
||||
|
||||
// Check the proposer settings.
|
||||
actualProposerSettings, err := targetDatabase.ProposerSettings(ctx)
|
||||
require.NoError(t, err, "could not get proposer settings from target database")
|
||||
require.DeepEqual(t, expectedProposerSettings, actualProposerSettings, "proposer settings should match")
|
||||
|
||||
// Check the attestations.
|
||||
actualAttestationRecords, err := targetDatabase.AttestationHistoryForPubKey(ctx, pubkey1)
|
||||
require.NoError(t, err, "could not get attestations from target database")
|
||||
require.DeepEqual(t, expectedAttestationRecords1, actualAttestationRecords, "attestations should match")
|
||||
|
||||
actualAttestationRecords, err = targetDatabase.AttestationHistoryForPubKey(ctx, pubkey2)
|
||||
require.NoError(t, err, "could not get attestations from target database")
|
||||
require.DeepEqual(t, expectedAttestationRecords2, actualAttestationRecords, "attestations should match")
|
||||
|
||||
// Check the block proposals.
|
||||
actualProposals, err := targetDatabase.ProposalHistoryForPubKey(ctx, pubkey1)
|
||||
require.NoError(t, err, "could not get block proposals from target database")
|
||||
require.DeepEqual(t, expectedProposals, actualProposals, "block proposals should match")
|
||||
|
||||
// Close the target database.
|
||||
err = targetDatabase.Close()
|
||||
require.NoError(t, err, "could not close target database")
|
||||
|
||||
// Check the source database does not exist anymore.
|
||||
var existing bool
|
||||
|
||||
if minimalToComplete {
|
||||
databasePath := filepath.Join(datadir, filesystem.DatabaseDirName)
|
||||
existing, err = file.Exists(databasePath, file.Directory)
|
||||
} else {
|
||||
databasePath := filepath.Join(datadir, kv.ProtectionDbFileName)
|
||||
existing, err = file.Exists(databasePath, file.Regular)
|
||||
}
|
||||
|
||||
require.NoError(t, err, "could not check if source database exists")
|
||||
require.Equal(t, false, existing, "source database should not exist")
|
||||
})
|
||||
}
|
||||
}
|
||||
validator/db/filesystem/BUILD.bazel (new file, 70 lines)
@@ -0,0 +1,70 @@
|
||||
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
|
||||
|
||||
go_library(
|
||||
name = "go_default_library",
|
||||
srcs = [
|
||||
"attester_protection.go",
|
||||
"db.go",
|
||||
"genesis.go",
|
||||
"graffiti.go",
|
||||
"import.go",
|
||||
"migration.go",
|
||||
"proposer_protection.go",
|
||||
"proposer_settings.go",
|
||||
],
|
||||
importpath = "github.com/prysmaticlabs/prysm/v5/validator/db/filesystem",
|
||||
visibility = ["//visibility:public"],
|
||||
deps = [
|
||||
"//config/fieldparams:go_default_library",
|
||||
"//config/proposer:go_default_library",
|
||||
"//consensus-types/interfaces:go_default_library",
|
||||
"//consensus-types/primitives:go_default_library",
|
||||
"//io/file:go_default_library",
|
||||
"//proto/prysm/v1alpha1:go_default_library",
|
||||
"//proto/prysm/v1alpha1/validator-client:go_default_library",
|
||||
"//validator/db/common:go_default_library",
|
||||
"//validator/db/iface:go_default_library",
|
||||
"//validator/helpers:go_default_library",
|
||||
"//validator/slashing-protection-history/format:go_default_library",
|
||||
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
|
||||
"@com_github_pkg_errors//:go_default_library",
|
||||
"@com_github_prometheus_client_golang//prometheus:go_default_library",
|
||||
"@com_github_sirupsen_logrus//:go_default_library",
|
||||
"@in_gopkg_yaml_v3//:go_default_library",
|
||||
"@io_opencensus_go//trace:go_default_library",
|
||||
],
|
||||
)
|
||||
|
||||
go_test(
|
||||
name = "go_default_test",
|
||||
srcs = [
|
||||
"attester_protection_test.go",
|
||||
"db_test.go",
|
||||
"genesis_test.go",
|
||||
"graffiti_test.go",
|
||||
"import_test.go",
|
||||
"migration_test.go",
|
||||
"proposer_protection_test.go",
|
||||
"proposer_settings_test.go",
|
||||
],
|
||||
embed = [":go_default_library"],
|
||||
deps = [
|
||||
"//config/fieldparams:go_default_library",
|
||||
"//config/params:go_default_library",
|
||||
"//config/proposer:go_default_library",
|
||||
"//consensus-types/blocks:go_default_library",
|
||||
"//consensus-types/primitives:go_default_library",
|
||||
"//crypto/bls:go_default_library",
|
||||
"//io/file:go_default_library",
|
||||
"//proto/prysm/v1alpha1:go_default_library",
|
||||
"//proto/prysm/v1alpha1/validator-client:go_default_library",
|
||||
"//testing/assert:go_default_library",
|
||||
"//testing/require:go_default_library",
|
||||
"//testing/util:go_default_library",
|
||||
"//validator/db/common:go_default_library",
|
||||
"//validator/slashing-protection-history/format:go_default_library",
|
||||
"//validator/testing:go_default_library",
|
||||
"@com_github_ethereum_go_ethereum//common:go_default_library",
|
||||
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
|
||||
],
|
||||
)
|
||||
315
validator/db/filesystem/attester_protection.go
Normal file
315
validator/db/filesystem/attester_protection.go
Normal file
@@ -0,0 +1,315 @@
|
||||
package filesystem
|
||||
|
||||
import (
|
||||
"context"
|
||||
"strings"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/common"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
const failedAttLocalProtectionErr = "attempted to make slashable attestation, rejected by local slashing protection"
|
||||
|
||||
// EIPImportBlacklistedPublicKeys is implemented only to satisfy the interface.
|
||||
func (*Store) EIPImportBlacklistedPublicKeys(_ context.Context) ([][fieldparams.BLSPubkeyLength]byte, error) {
|
||||
return [][fieldparams.BLSPubkeyLength]byte{}, nil
|
||||
}
|
||||
|
||||
// SaveEIPImportBlacklistedPublicKeys is implemented only to satisfy the interface.
|
||||
func (*Store) SaveEIPImportBlacklistedPublicKeys(_ context.Context, _ [][fieldparams.BLSPubkeyLength]byte) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// SigningRootAtTargetEpoch is implemented only to satisfy the interface.
|
||||
func (*Store) SigningRootAtTargetEpoch(_ context.Context, _ [fieldparams.BLSPubkeyLength]byte, _ primitives.Epoch) ([]byte, error) {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
// LowestSignedTargetEpoch returns the lowest signed target epoch for a public key, a boolean indicating if it exists and an error.
|
||||
func (s *Store) LowestSignedTargetEpoch(_ context.Context, pubKey [fieldparams.BLSPubkeyLength]byte) (primitives.Epoch, bool, error) {
|
||||
// Get validator slashing protection.
|
||||
validatorSlashingProtection, err := s.validatorSlashingProtection(pubKey)
|
||||
if err != nil {
|
||||
return 0, false, errors.Wrap(err, "could not get validator slashing protection")
|
||||
}
|
||||
|
||||
// If there is no validator slashing protection, return early.
|
||||
if validatorSlashingProtection == nil || validatorSlashingProtection.LastSignedAttestationTargetEpoch == nil {
|
||||
return 0, false, nil
|
||||
}
|
||||
|
||||
// Return the lowest (and unique) signed target epoch.
|
||||
return primitives.Epoch(*validatorSlashingProtection.LastSignedAttestationTargetEpoch), true, nil
|
||||
}
|
||||
|
||||
// LowestSignedSourceEpoch returns the lowest signed source epoch for a public key, a boolean indicating if it exists and an error.
|
||||
func (s *Store) LowestSignedSourceEpoch(_ context.Context, pubKey [fieldparams.BLSPubkeyLength]byte) (primitives.Epoch, bool, error) {
|
||||
// Get validator slashing protection.
|
||||
validatorSlashingProtection, err := s.validatorSlashingProtection(pubKey)
|
||||
if err != nil {
|
||||
return 0, false, errors.Wrap(err, "could not get validator slashing protection")
|
||||
}
|
||||
|
||||
// If there is no validator slashing protection, return early.
|
||||
if validatorSlashingProtection == nil {
|
||||
return 0, false, nil
|
||||
}
|
||||
|
||||
// Return the lowest (and unique) signed source epoch.
|
||||
return primitives.Epoch(validatorSlashingProtection.LastSignedAttestationSourceEpoch), true, nil
|
||||
}
|
||||
|
||||
// AttestedPublicKeys returns the list of public keys in the database.
|
||||
func (s *Store) AttestedPublicKeys(_ context.Context) ([][fieldparams.BLSPubkeyLength]byte, error) {
|
||||
// Retrieve all public keys in database.
|
||||
pubkeys, err := s.publicKeys()
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not get public keys")
|
||||
}
|
||||
|
||||
// Filter public keys which have already attested.
|
||||
attestedPublicKeys := make([][fieldparams.BLSPubkeyLength]byte, 0, len(pubkeys))
|
||||
for _, pubkey := range pubkeys {
|
||||
// Get validator slashing protection.
|
||||
validatorSlashingProtection, err := s.validatorSlashingProtection(pubkey)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not get validator slashing protection")
|
||||
}
|
||||
|
||||
// If there is no target epoch, skip this public key.
|
||||
if validatorSlashingProtection == nil || validatorSlashingProtection.LastSignedAttestationTargetEpoch == nil {
|
||||
continue
|
||||
}
|
||||
|
||||
// Append the attested public key.
|
||||
attestedPublicKeys = append(attestedPublicKeys, pubkey)
|
||||
}
|
||||
|
||||
// Return the attested public keys.
|
||||
return attestedPublicKeys, nil
|
||||
}
|
||||
|
||||
// SlashableAttestationCheck checks if an attestation is slashable by comparing it with the attesting
|
||||
// history for the given public key in our minimal slashing protection database defined by EIP-3076.
|
||||
// If it is not, it updates the database.
|
||||
func (s *Store) SlashableAttestationCheck(
|
||||
ctx context.Context,
|
||||
indexedAtt *ethpb.IndexedAttestation,
|
||||
pubKey [fieldparams.BLSPubkeyLength]byte,
|
||||
signingRoot32 [32]byte,
|
||||
_ bool,
|
||||
_ *prometheus.CounterVec,
|
||||
) error {
|
||||
ctx, span := trace.StartSpan(ctx, "validator.postAttSignUpdate")
|
||||
defer span.End()
|
||||
|
||||
// Check if the attestation is potentially slashable regarding EIP-3076 minimal conditions.
|
||||
// If not, save the new attestation into the database.
|
||||
if err := s.SaveAttestationForPubKey(ctx, pubKey, signingRoot32, indexedAtt); err != nil {
|
||||
if strings.Contains(err.Error(), "could not sign attestation") {
|
||||
return errors.Wrap(err, failedAttLocalProtectionErr)
|
||||
}
|
||||
|
||||
return errors.Wrap(err, "could not save attestation history for validator public key")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
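A hedged sketch of how a signing path might use `SlashableAttestationCheck`; the helper name is illustrative, and `store`, the public key, the attestation, and the signing root are assumed to come from the caller:

// signIfNotSlashable is a sketch: it refuses to sign when the minimal
// slashing protection rejects the attestation.
func signIfNotSlashable(ctx context.Context, store *Store, pubKey [fieldparams.BLSPubkeyLength]byte, att *ethpb.IndexedAttestation, signingRoot [32]byte) error {
	// The trailing bool and counter are ignored by the filesystem store,
	// so zero values are fine here.
	if err := store.SlashableAttestationCheck(ctx, att, pubKey, signingRoot, false, nil); err != nil {
		// The attestation would violate EIP-3076 minimal protection: do not sign.
		return err
	}

	// Safe to sign: the store has already recorded the new source and target epochs.
	return nil
}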
|
||||
// SaveAttestationForPubKey checks if the incoming attestation is valid regarding EIP-3076 minimal slashing protection.
|
||||
// If so, it updates the database with the incoming source and target, and returns nil.
|
||||
// If not, it does not modify the database and returns an error.
|
||||
func (s *Store) SaveAttestationForPubKey(
|
||||
_ context.Context,
|
||||
pubkey [fieldparams.BLSPubkeyLength]byte,
|
||||
_ [32]byte,
|
||||
att *ethpb.IndexedAttestation,
|
||||
) error {
|
||||
// If the attestation is missing its source and/or target epoch, return an error.
|
||||
if att == nil || att.Data == nil || att.Data.Source == nil || att.Data.Target == nil {
|
||||
return errors.New("incoming attestation does not contain source and/or target epoch")
|
||||
}
|
||||
|
||||
// Get validator slashing protection.
|
||||
validatorSlashingProtection, err := s.validatorSlashingProtection(pubkey)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not get validator slashing protection")
|
||||
}
|
||||
|
||||
incomingSourceEpochUInt64 := uint64(att.Data.Source.Epoch)
|
||||
incomingTargetEpochUInt64 := uint64(att.Data.Target.Epoch)
|
||||
|
||||
if validatorSlashingProtection == nil {
|
||||
// If there is no validator slashing protection, create one.
|
||||
validatorSlashingProtection = &ValidatorSlashingProtection{
|
||||
LastSignedAttestationSourceEpoch: incomingSourceEpochUInt64,
|
||||
LastSignedAttestationTargetEpoch: &incomingTargetEpochUInt64,
|
||||
}
|
||||
|
||||
// Save the validator slashing protection.
|
||||
if err := s.saveValidatorSlashingProtection(pubkey, validatorSlashingProtection); err != nil {
|
||||
return errors.Wrap(err, "could not save validator slashing protection")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
savedSourceEpoch := validatorSlashingProtection.LastSignedAttestationSourceEpoch
|
||||
savedTargetEpoch := validatorSlashingProtection.LastSignedAttestationTargetEpoch
|
||||
|
||||
// Based on EIP-3076 (minimal database), validator should refuse to sign any attestation
|
||||
// with source epoch less than the recorded source epoch.
|
||||
if incomingSourceEpochUInt64 < savedSourceEpoch {
|
||||
return errors.Errorf(
|
||||
"could not sign attestation with source lower than recorded source epoch, %d < %d",
|
||||
att.Data.Source.Epoch,
|
||||
validatorSlashingProtection.LastSignedAttestationSourceEpoch,
|
||||
)
|
||||
}
|
||||
|
||||
// Based on EIP-3076 (minimal database), validator should refuse to sign any attestation
|
||||
// with target epoch less than or equal to the recorded target epoch.
|
||||
if savedTargetEpoch != nil && incomingTargetEpochUInt64 <= *savedTargetEpoch {
|
||||
return errors.Errorf(
|
||||
"could not sign attestation with target lower than or equal to recorded target epoch, %d <= %d",
|
||||
att.Data.Target.Epoch,
|
||||
*savedTargetEpoch,
|
||||
)
|
||||
}
|
||||
|
||||
// Update the latest signed source and target epoch.
|
||||
validatorSlashingProtection.LastSignedAttestationSourceEpoch = incomingSourceEpochUInt64
|
||||
validatorSlashingProtection.LastSignedAttestationTargetEpoch = &incomingTargetEpochUInt64
|
||||
|
||||
// Save the validator slashing protection.
|
||||
if err := s.saveValidatorSlashingProtection(pubkey, validatorSlashingProtection); err != nil {
|
||||
return errors.Wrap(err, "could not save validator slashing protection")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
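A short worked example of the two rules above, with values taken from the test cases later in this diff. Assume the store already recorded source epoch 42 and target epoch 43 for `pubkey`, and that `store`, `ctx`, and `pubkey` are in scope:

// att builds an illustrative attestation with the given source/target epochs.
att := func(source, target primitives.Epoch) *ethpb.IndexedAttestation {
	return &ethpb.IndexedAttestation{
		Data: &ethpb.AttestationData{
			Source: &ethpb.Checkpoint{Epoch: source},
			Target: &ethpb.Checkpoint{Epoch: target},
		},
	}
}

_ = store.SaveAttestationForPubKey(ctx, pubkey, [32]byte{}, att(41, 45)) // rejected: source 41 < recorded 42
_ = store.SaveAttestationForPubKey(ctx, pubkey, [32]byte{}, att(42, 43)) // rejected: target 43 <= recorded 43
_ = store.SaveAttestationForPubKey(ctx, pubkey, [32]byte{}, att(43, 44)) // accepted: record becomes source=43, target=44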
|
||||
// SaveAttestationsForPubKey saves the attestation history for a public key WITHOUT checking if the incoming
// attestations are valid regarding EIP-3076 minimal slashing protection.
// The maximum incoming source and target epochs are compared with the
// recorded source and target epochs, and the maximums are saved.
|
||||
func (s *Store) SaveAttestationsForPubKey(
|
||||
_ context.Context,
|
||||
pubkey [fieldparams.BLSPubkeyLength]byte,
|
||||
_ [][]byte,
|
||||
atts []*ethpb.IndexedAttestation,
|
||||
) error {
|
||||
// If there is no attestation, return early.
|
||||
if len(atts) == 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Retrieve maximum source and target epoch.
|
||||
maxIncomingSourceEpoch, maxIncomingTargetEpoch, err := maxSourceTargetEpoch(atts)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not get maximum source and target epoch")
|
||||
}
|
||||
|
||||
// Convert epochs to uint64.
|
||||
maxIncomingSourceEpochUInt64 := uint64(maxIncomingSourceEpoch)
|
||||
maxIncomingTargetEpochUInt64 := uint64(maxIncomingTargetEpoch)
|
||||
|
||||
// Get validator slashing protection.
|
||||
validatorSlashingProtection, err := s.validatorSlashingProtection(pubkey)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not get validator slashing protection")
|
||||
}
|
||||
|
||||
if validatorSlashingProtection == nil {
|
||||
// If there is no validator slashing protection, create one.
|
||||
validatorSlashingProtection = &ValidatorSlashingProtection{
|
||||
LastSignedAttestationSourceEpoch: maxIncomingSourceEpochUInt64,
|
||||
LastSignedAttestationTargetEpoch: &maxIncomingTargetEpochUInt64,
|
||||
}
|
||||
|
||||
// Save the validator slashing protection.
|
||||
if err := s.saveValidatorSlashingProtection(pubkey, validatorSlashingProtection); err != nil {
|
||||
return errors.Wrap(err, "could not save validator slashing protection")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
savedSourceEpochUInt64 := validatorSlashingProtection.LastSignedAttestationSourceEpoch
|
||||
savedTargetEpochUInt64 := validatorSlashingProtection.LastSignedAttestationTargetEpoch
|
||||
|
||||
maxSourceEpochUInt64 := maxIncomingSourceEpochUInt64
|
||||
maxTargetEpochUInt64 := maxIncomingTargetEpochUInt64
|
||||
|
||||
// Compare the maximum incoming source and target epochs with what we have recorded.
|
||||
if savedSourceEpochUInt64 > maxSourceEpochUInt64 {
|
||||
maxSourceEpochUInt64 = savedSourceEpochUInt64
|
||||
}
|
||||
|
||||
if savedTargetEpochUInt64 != nil && *savedTargetEpochUInt64 > maxTargetEpochUInt64 {
|
||||
maxTargetEpochUInt64 = *savedTargetEpochUInt64
|
||||
}
|
||||
|
||||
// Update the validator slashing protection.
|
||||
validatorSlashingProtection.LastSignedAttestationSourceEpoch = maxSourceEpochUInt64
|
||||
validatorSlashingProtection.LastSignedAttestationTargetEpoch = &maxTargetEpochUInt64
|
||||
|
||||
// Save the validator slashing protection.
|
||||
if err := s.saveValidatorSlashingProtection(pubkey, validatorSlashingProtection); err != nil {
|
||||
return errors.Wrap(err, "could not save validator slashing protection")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
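To illustrate the merge behaviour with values from the "surrounding incoming attestation" test case later in this diff: with (source=42, target=45) already recorded, importing a batch whose maxima are (source=40, target=50) leaves (source=42, target=50) in the file. A sketch, with `store`, `ctx`, and `pubkey` assumed in scope:

batch := []*ethpb.IndexedAttestation{
	{
		Data: &ethpb.AttestationData{
			Source: &ethpb.Checkpoint{Epoch: 40},
			Target: &ethpb.Checkpoint{Epoch: 50},
		},
	},
}

// Recorded before the import: source=42, target=45.
if err := store.SaveAttestationsForPubKey(ctx, pubkey, nil, batch); err != nil {
	log.WithError(err).Error("Could not import attestations")
}
// Recorded after the import: source=42 (kept), target=50 (raised).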
|
||||
// AttestationHistoryForPubKey returns the attestation history for a public key.
|
||||
func (s *Store) AttestationHistoryForPubKey(
|
||||
_ context.Context,
|
||||
pubKey [fieldparams.BLSPubkeyLength]byte,
|
||||
) ([]*common.AttestationRecord, error) {
|
||||
// Get validator slashing protection
|
||||
validatorSlashingProtection, err := s.validatorSlashingProtection(pubKey)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not get validator slashing protection")
|
||||
}
|
||||
|
||||
// If there is no validator slashing protection or no target epoch, return an empty slice.
|
||||
if validatorSlashingProtection == nil || validatorSlashingProtection.LastSignedAttestationTargetEpoch == nil {
|
||||
return []*common.AttestationRecord{}, nil
|
||||
}
|
||||
|
||||
// Return the (unique) attestation record.
|
||||
return []*common.AttestationRecord{
|
||||
{
|
||||
PubKey: pubKey,
|
||||
Source: primitives.Epoch(validatorSlashingProtection.LastSignedAttestationSourceEpoch),
|
||||
Target: primitives.Epoch(*validatorSlashingProtection.LastSignedAttestationTargetEpoch),
|
||||
},
|
||||
}, nil
|
||||
}
|
||||
|
||||
// maxSourceTargetEpoch gets the maximum source and target epoch from atts.
|
||||
func maxSourceTargetEpoch(atts []*ethpb.IndexedAttestation) (primitives.Epoch, primitives.Epoch, error) {
|
||||
maxSourceEpoch := primitives.Epoch(0)
|
||||
maxTargetEpoch := primitives.Epoch(0)
|
||||
|
||||
for _, att := range atts {
|
||||
if att == nil || att.Data == nil || att.Data.Source == nil || att.Data.Target == nil {
|
||||
return 0, 0, errors.New("incoming attestation does not contain source and/or target epoch")
|
||||
}
|
||||
|
||||
if att.Data.Source.Epoch > maxSourceEpoch {
|
||||
maxSourceEpoch = att.Data.Source.Epoch
|
||||
}
|
||||
|
||||
if att.Data.Target.Epoch > maxTargetEpoch {
|
||||
maxTargetEpoch = att.Data.Target.Epoch
|
||||
}
|
||||
}
|
||||
return maxSourceEpoch, maxTargetEpoch, nil
|
||||
}
|
||||
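Since the minimal database keeps only the last signed source/target pair per key, `AttestationHistoryForPubKey` returns at most one synthetic record. A reading sketch with an illustrative helper name:

// printAttestationHistory is a sketch of reading the minimal history back.
func printAttestationHistory(ctx context.Context, store *Store, pubkey [fieldparams.BLSPubkeyLength]byte) error {
	records, err := store.AttestationHistoryForPubKey(ctx, pubkey)
	if err != nil {
		return err
	}

	// With a recorded pair source=42, target=43 this yields exactly one record
	// (with a nil signing root); with no recorded target epoch it yields an
	// empty slice.
	for _, record := range records {
		log.Infof("pubkey=%#x source=%d target=%d", record.PubKey, record.Source, record.Target)
	}

	return nil
}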
validator/db/filesystem/attester_protection_test.go (new file, 515 lines)
@@ -0,0 +1,515 @@
|
||||
package filesystem
|
||||
|
||||
import (
|
||||
"context"
|
||||
"sync"
|
||||
"testing"
|
||||
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
|
||||
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/require"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/common"
|
||||
)
|
||||
|
||||
func TestStore_EIPImportBlacklistedPublicKeys(t *testing.T) {
|
||||
// Create a new store.
|
||||
store, err := NewStore(t.TempDir(), nil)
|
||||
require.NoError(t, err, "could not create store")
|
||||
|
||||
var expected = [][fieldparams.BLSPubkeyLength]byte{}
|
||||
actual, err := store.EIPImportBlacklistedPublicKeys(context.Background())
|
||||
require.NoError(t, err, "could not get blacklisted public keys")
|
||||
require.DeepSSZEqual(t, expected, actual, "blacklisted public keys do not match")
|
||||
}
|
||||
|
||||
func TestStore_SaveEIPImportBlacklistedPublicKeys(t *testing.T) {
|
||||
// Create a new store.
|
||||
store, err := NewStore(t.TempDir(), nil)
|
||||
require.NoError(t, err, "could not create store")
|
||||
|
||||
// Save blacklisted public keys.
|
||||
err = store.SaveEIPImportBlacklistedPublicKeys(context.Background(), [][fieldparams.BLSPubkeyLength]byte{})
|
||||
require.NoError(t, err, "could not save blacklisted public keys")
|
||||
}
|
||||
|
||||
func TestStore_LowestSignedTargetEpoch(t *testing.T) {
|
||||
// Define some saved source and target epoch.
|
||||
savedSourceEpoch, savedTargetEpoch := 42, 43
|
||||
|
||||
// Create a pubkey.
|
||||
pubkey := getPubKeys(t, 1)[0]
|
||||
|
||||
// Create a new store.
|
||||
store, err := NewStore(t.TempDir(), nil)
|
||||
require.NoError(t, err, "could not create store")
|
||||
|
||||
// Get the lowest signed target epoch.
|
||||
_, exists, err := store.LowestSignedTargetEpoch(context.Background(), [fieldparams.BLSPubkeyLength]byte{})
|
||||
require.NoError(t, err, "could not get lowest signed target epoch")
|
||||
require.Equal(t, false, exists, "lowest signed target epoch should not exist")
|
||||
|
||||
// Create an attestation with both source and target epoch
|
||||
attestation := &ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: primitives.Epoch(savedSourceEpoch)},
Target: &ethpb.Checkpoint{Epoch: primitives.Epoch(savedTargetEpoch)},
},
}
|
||||
|
||||
// Save the attestation.
|
||||
err = store.SaveAttestationForPubKey(context.Background(), pubkey, [32]byte{}, attestation)
|
||||
require.NoError(t, err, "SaveAttestationForPubKey should not return an error")
|
||||
|
||||
// Get the lowest signed target epoch.
|
||||
expected := primitives.Epoch(savedTargetEpoch)
|
||||
actual, exists, err := store.LowestSignedTargetEpoch(context.Background(), pubkey)
|
||||
require.NoError(t, err, "could not get lowest signed target epoch")
|
||||
require.Equal(t, true, exists, "lowest signed target epoch should not exist")
|
||||
require.Equal(t, expected, actual, "lowest signed target epoch should match")
|
||||
}
|
||||
|
||||
func TestStore_LowestSignedSourceEpoch(t *testing.T) {
|
||||
// Create a pubkey.
|
||||
pubkey := getPubKeys(t, 1)[0]
|
||||
|
||||
// Create a new store.
|
||||
store, err := NewStore(t.TempDir(), nil)
|
||||
require.NoError(t, err, "could not create store")
|
||||
|
||||
// Get the lowest signed source epoch.
|
||||
_, exists, err := store.LowestSignedSourceEpoch(context.Background(), [fieldparams.BLSPubkeyLength]byte{})
|
||||
require.NoError(t, err, "could not get lowest signed source epoch")
|
||||
require.Equal(t, false, exists, "lowest signed source epoch should not exist")
|
||||
|
||||
// Create an attestation.
|
||||
savedSourceEpoch, savedTargetEpoch := 42, 43
|
||||
attestation := &ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: primitives.Epoch(savedSourceEpoch)},
Target: &ethpb.Checkpoint{Epoch: primitives.Epoch(savedTargetEpoch)},
},
}
|
||||
|
||||
// Save the attestation.
|
||||
err = store.SaveAttestationForPubKey(context.Background(), pubkey, [32]byte{}, attestation)
|
||||
require.NoError(t, err, "SaveAttestationForPubKey should not return an error")
|
||||
|
||||
// Get the lowest signed source epoch.
|
||||
expected := primitives.Epoch(savedSourceEpoch)
|
||||
actual, exists, err := store.LowestSignedSourceEpoch(context.Background(), pubkey)
|
||||
require.NoError(t, err, "could not get lowest signed target epoch")
|
||||
require.Equal(t, true, exists, "lowest signed target epoch should exist")
|
||||
require.Equal(t, expected, actual, "lowest signed target epoch should match")
|
||||
}
|
||||
|
||||
func TestStore_AttestedPublicKeys(t *testing.T) {
|
||||
// Create a database path.
|
||||
databasePath := t.TempDir()
|
||||
|
||||
// Create some pubkeys.
|
||||
pubkeys := getPubKeys(t, 5)
|
||||
|
||||
// Create a new store.
|
||||
s, err := NewStore(databasePath, &Config{PubKeys: pubkeys})
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// Attest for some pubkeys.
|
||||
attestedPubkeys := pubkeys[1:3]
|
||||
for _, pubkey := range attestedPubkeys {
|
||||
err = s.SaveAttestationForPubKey(context.Background(), pubkey, [32]byte{}, &ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 42},
Target: &ethpb.Checkpoint{Epoch: 43},
},
})
|
||||
require.NoError(t, err, "SaveAttestationForPubKey should not return an error")
|
||||
}
|
||||
|
||||
// Check the public keys.
|
||||
actual, err := s.AttestedPublicKeys(context.Background())
|
||||
require.NoError(t, err, "publicKeys should not return an error")
|
||||
|
||||
// We cannot compare the slices directly because the order is not guaranteed,
|
||||
// so we compare sets instead.
|
||||
expectedSet := make(map[[fieldparams.BLSPubkeyLength]byte]bool)
|
||||
for _, pubkey := range attestedPubkeys {
|
||||
expectedSet[pubkey] = true
|
||||
}
|
||||
|
||||
actualSet := make(map[[fieldparams.BLSPubkeyLength]byte]bool)
|
||||
for _, pubkey := range actual {
|
||||
actualSet[pubkey] = true
|
||||
}
|
||||
|
||||
require.DeepEqual(t, expectedSet, actualSet)
|
||||
}
|
||||
|
||||
func TestStore_SaveAttestationForPubKey(t *testing.T) {
|
||||
// Create a public key.
|
||||
pubkey := getPubKeys(t, 1)[0]
|
||||
|
||||
for _, tt := range []struct {
|
||||
name string
|
||||
existingAttInDB *ethpb.IndexedAttestation
|
||||
incomingAtt *ethpb.IndexedAttestation
|
||||
expectedErr string
|
||||
}{
|
||||
{
|
||||
name: "att is nil",
|
||||
existingAttInDB: nil,
|
||||
incomingAtt: nil,
|
||||
expectedErr: "incoming attestation does not contain source and/or target epoch",
|
||||
},
|
||||
{
|
||||
name: "att.Data is nil",
|
||||
existingAttInDB: nil,
|
||||
incomingAtt: &ethpb.IndexedAttestation{Data: nil},
|
||||
expectedErr: "incoming attestation does not contain source and/or target epoch",
|
||||
},
|
||||
{
|
||||
name: "att.Data.Source is nil",
|
||||
existingAttInDB: nil,
|
||||
incomingAtt: &ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: nil,
Target: &ethpb.Checkpoint{Epoch: 43},
},
},
|
||||
expectedErr: "incoming attestation does not contain source and/or target epoch",
|
||||
},
|
||||
{
|
||||
name: "att.Data.Target is nil",
|
||||
existingAttInDB: nil,
|
||||
incomingAtt: &ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 42},
Target: nil,
},
},
|
||||
expectedErr: "incoming attestation does not contain source and/or target epoch",
|
||||
},
|
||||
{
|
||||
name: "no pre-existing slashing protection",
|
||||
existingAttInDB: nil,
|
||||
incomingAtt: &ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 42},
Target: &ethpb.Checkpoint{Epoch: 43},
},
},
|
||||
expectedErr: "",
|
||||
},
|
||||
{
|
||||
name: "incoming source epoch lower than saved source epoch",
|
||||
existingAttInDB: &ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 42},
Target: &ethpb.Checkpoint{Epoch: 43},
},
},
incomingAtt: &ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 41},
Target: &ethpb.Checkpoint{Epoch: 45},
},
},
|
||||
expectedErr: "could not sign attestation with source lower than recorded source epoch",
|
||||
},
|
||||
{
|
||||
name: "incoming target epoch lower than saved target epoch",
|
||||
existingAttInDB: &ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 42},
Target: &ethpb.Checkpoint{Epoch: 43},
},
},
incomingAtt: &ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 42},
Target: &ethpb.Checkpoint{Epoch: 42},
},
},
|
||||
expectedErr: "could not sign attestation with target lower than or equal to recorded target epoch",
|
||||
},
|
||||
{
|
||||
name: "incoming target epoch equal to saved target epoch",
|
||||
existingAttInDB: &ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 42},
Target: &ethpb.Checkpoint{Epoch: 43},
},
},
incomingAtt: &ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 42},
Target: &ethpb.Checkpoint{Epoch: 43},
},
},
|
||||
expectedErr: "could not sign attestation with target lower than or equal to recorded target epoch",
|
||||
},
|
||||
{
|
||||
name: "nominal",
|
||||
existingAttInDB: &ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 42},
Target: &ethpb.Checkpoint{Epoch: 43},
},
},
incomingAtt: &ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: 43},
Target: &ethpb.Checkpoint{Epoch: 44},
},
},
|
||||
expectedErr: "",
|
||||
},
|
||||
} {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
// Create a database path.
|
||||
databasePath := t.TempDir()
|
||||
|
||||
// Create a new store.
|
||||
store, err := NewStore(databasePath, nil)
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
if tt.existingAttInDB != nil {
|
||||
// Simulate an already existing slashing protection.
|
||||
err = store.SaveAttestationForPubKey(context.Background(), pubkey, [32]byte{}, tt.existingAttInDB)
|
||||
require.NoError(t, err, "failed to save attestation when simulating an already existing slashing protection")
|
||||
}
|
||||
|
||||
if tt.incomingAtt != nil {
|
||||
// Attempt to save a new attestation.
|
||||
err = store.SaveAttestationForPubKey(context.Background(), pubkey, [32]byte{}, tt.incomingAtt)
|
||||
if len(tt.expectedErr) > 0 {
|
||||
require.ErrorContains(t, tt.expectedErr, err)
|
||||
} else {
|
||||
require.NoError(t, err, "call to SaveAttestationForPubKey should not return an error")
|
||||
}
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func pointerFromInt(i uint64) *uint64 {
|
||||
return &i
|
||||
}
|
||||
|
||||
func TestStore_SaveAttestationsForPubKey2(t *testing.T) {
|
||||
// Get the context.
|
||||
ctx := context.Background()
|
||||
|
||||
// Create a public key.
|
||||
pubkey := getPubKeys(t, 1)[0]
|
||||
|
||||
for _, tt := range []struct {
|
||||
name string
|
||||
existingAttInDB *ethpb.IndexedAttestation
|
||||
incomingAtts []*ethpb.IndexedAttestation
|
||||
expectedSavedSlashingProtection *ValidatorSlashingProtection
|
||||
}{
|
||||
{
|
||||
name: "no atts",
|
||||
existingAttInDB: nil,
|
||||
incomingAtts: nil,
|
||||
expectedSavedSlashingProtection: nil,
|
||||
},
|
||||
{
|
||||
// 40 ==========> 45 <----- Will be recorded into DB
|
||||
// 30 ==========> 40
|
||||
name: "no pre-existing slashing protection",
|
||||
existingAttInDB: nil,
|
||||
incomingAtts: []*ethpb.IndexedAttestation{
{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: primitives.Epoch(40)},
Target: &ethpb.Checkpoint{Epoch: primitives.Epoch(45)},
},
},
{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: primitives.Epoch(30)},
Target: &ethpb.Checkpoint{Epoch: primitives.Epoch(40)},
},
},
},
|
||||
expectedSavedSlashingProtection: &ValidatorSlashingProtection{
|
||||
LastSignedAttestationSourceEpoch: 40,
|
||||
LastSignedAttestationTargetEpoch: pointerFromInt(45),
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "surrounded incoming attestation",
|
||||
// 40 ==========> 45 <----- Already recorded into DB
|
||||
// 42 => 43 <----- Incoming attestation
|
||||
// ------------------------------------------------------------------------------------------------
|
||||
// 42 ======> 45 <----- Will be recorded into DB (max source and target epochs)
|
||||
existingAttInDB: &ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: primitives.Epoch(40)},
Target: &ethpb.Checkpoint{Epoch: primitives.Epoch(45)},
},
},
incomingAtts: []*ethpb.IndexedAttestation{
{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: primitives.Epoch(42)},
Target: &ethpb.Checkpoint{Epoch: primitives.Epoch(43)},
},
},
},
|
||||
expectedSavedSlashingProtection: &ValidatorSlashingProtection{
|
||||
LastSignedAttestationSourceEpoch: 42,
|
||||
LastSignedAttestationTargetEpoch: pointerFromInt(45),
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "surrounding incoming attestation",
|
||||
// We create a surrounding attestation
|
||||
// 42 ======> 45 <----- Already recorded into DB
|
||||
// 40 ==================> 50 <----- Incoming attestation
|
||||
// ------------------------------------------------------------------------------------------------------
|
||||
// 42 =============> 50 <----- Will be recorded into DB (max source and target epochs)
|
||||
existingAttInDB: &ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: primitives.Epoch(42)},
Target: &ethpb.Checkpoint{Epoch: primitives.Epoch(45)},
},
},
incomingAtts: []*ethpb.IndexedAttestation{
{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: primitives.Epoch(40)},
Target: &ethpb.Checkpoint{Epoch: primitives.Epoch(50)},
},
},
},
|
||||
expectedSavedSlashingProtection: &ValidatorSlashingProtection{
|
||||
LastSignedAttestationSourceEpoch: 42,
|
||||
LastSignedAttestationTargetEpoch: pointerFromInt(50),
|
||||
},
|
||||
},
|
||||
} {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
// Create a database path.
|
||||
databasePath := t.TempDir()
|
||||
|
||||
// Create a new store.
|
||||
store, err := NewStore(databasePath, nil)
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// Simulate an already existing slashing protection.
|
||||
if tt.existingAttInDB != nil {
|
||||
err = store.SaveAttestationForPubKey(ctx, pubkey, [32]byte{}, tt.existingAttInDB)
|
||||
require.NoError(t, err, "failed to save attestation when simulating an already existing slashing protection")
|
||||
}
|
||||
|
||||
// Save attestations.
|
||||
err = store.SaveAttestationsForPubKey(ctx, pubkey, [][]byte{}, tt.incomingAtts)
|
||||
require.NoError(t, err, "SaveAttestationsForPubKey should not return an error")
|
||||
|
||||
// Check the correct source / target epochs are saved.
|
||||
actualValidatorSlashingProtection, err := store.validatorSlashingProtection(pubkey)
|
||||
require.NoError(t, err, "validatorSlashingProtection should not return an error")
|
||||
require.DeepEqual(t, tt.expectedSavedSlashingProtection, actualValidatorSlashingProtection)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestStore_AttestationHistoryForPubKey(t *testing.T) {
|
||||
// Get a database path.
|
||||
databasePath := t.TempDir()
|
||||
|
||||
// Create a public key.
|
||||
pubkey := getPubKeys(t, 1)[0]
|
||||
|
||||
// Create a new store.
|
||||
store, err := NewStore(databasePath, nil)
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// Get the attestation history.
|
||||
actual, err := store.AttestationHistoryForPubKey(context.Background(), pubkey)
|
||||
require.NoError(t, err, "AttestationHistoryForPubKey should not return an error")
|
||||
require.DeepEqual(t, []*common.AttestationRecord{}, actual)
|
||||
|
||||
// Create an attestation.
|
||||
savedSourceEpoch, savedTargetEpoch := 42, 43
|
||||
attestation := &ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{Epoch: primitives.Epoch(savedSourceEpoch)},
Target: &ethpb.Checkpoint{Epoch: primitives.Epoch(savedTargetEpoch)},
},
}
|
||||
|
||||
// Save the attestation.
|
||||
err = store.SaveAttestationForPubKey(context.Background(), pubkey, [32]byte{}, attestation)
|
||||
require.NoError(t, err, "SaveAttestationForPubKey should not return an error")
|
||||
|
||||
// Get the attestation history.
|
||||
expected := []*common.AttestationRecord{
|
||||
{
|
||||
PubKey: pubkey,
|
||||
Source: primitives.Epoch(savedSourceEpoch),
|
||||
Target: primitives.Epoch(savedTargetEpoch),
|
||||
},
|
||||
}
|
||||
|
||||
actual, err = store.AttestationHistoryForPubKey(context.Background(), pubkey)
|
||||
require.NoError(t, err, "AttestationHistoryForPubKey should not return an error")
|
||||
require.DeepEqual(t, expected, actual)
|
||||
}
|
||||
|
||||
func BenchmarkStore_SaveAttestationForPubKey(b *testing.B) {
|
||||
var wg sync.WaitGroup
|
||||
ctx := context.Background()
|
||||
|
||||
// Create pubkeys
|
||||
pubkeys := make([][fieldparams.BLSPubkeyLength]byte, 2000)
|
||||
for i := range pubkeys {
|
||||
validatorKey, err := bls.RandKey()
|
||||
require.NoError(b, err, "RandKey should not return an error")
|
||||
|
||||
copy(pubkeys[i][:], validatorKey.PublicKey().Marshal())
|
||||
}
|
||||
|
||||
signingRoot := [32]byte{1}
|
||||
attestation := &ethpb.IndexedAttestation{
Data: &ethpb.AttestationData{
Source: &ethpb.Checkpoint{
Epoch: 42,
},
Target: &ethpb.Checkpoint{
Epoch: 43,
},
},
}
|
||||
|
||||
validatorDB, err := NewStore(b.TempDir(), &Config{PubKeys: pubkeys})
|
||||
require.NoError(b, err)
|
||||
|
||||
b.ResetTimer()
|
||||
for i := 0; i < b.N; i++ {
|
||||
b.StopTimer()
|
||||
err := validatorDB.ClearDB()
|
||||
require.NoError(b, err)
|
||||
|
||||
for _, pubkey := range pubkeys {
|
||||
wg.Add(1)
|
||||
|
||||
go func(pk [fieldparams.BLSPubkeyLength]byte) {
|
||||
defer wg.Done()
|
||||
|
||||
err := validatorDB.SaveAttestationForPubKey(ctx, pk, signingRoot, attestation)
|
||||
require.NoError(b, err)
|
||||
}(pubkey)
|
||||
}
|
||||
|
||||
b.StartTimer()
|
||||
wg.Wait()
|
||||
}
|
||||
|
||||
err = validatorDB.Close()
|
||||
require.NoError(b, err)
|
||||
}
|
||||
validator/db/filesystem/db.go (new file, 443 lines)
@@ -0,0 +1,443 @@
|
||||
package filesystem
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"path"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common/hexutil"
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/io/file"
|
||||
validatorpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/validator-client"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/iface"
|
||||
"gopkg.in/yaml.v3"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
"github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
const (
|
||||
backupsDirectoryName = "backups"
|
||||
configurationFileName = "configuration.yaml"
|
||||
slashingProtectionDirName = "slashing-protection"
|
||||
|
||||
DatabaseDirName = "validator-client-data"
|
||||
)
|
||||
|
||||
type (
|
||||
// Store is a filesystem implementation of the validator client database.
|
||||
Store struct {
|
||||
configurationMu sync.RWMutex
|
||||
pkToSlashingMu map[[fieldparams.BLSPubkeyLength]byte]*sync.RWMutex
|
||||
slashingMuMapMu sync.Mutex
|
||||
databaseParentPath string
|
||||
databasePath string
|
||||
}
|
||||
|
||||
// Graffiti contains the graffiti information.
|
||||
Graffiti struct {
|
||||
// In the BoltDB implementation, calling GraffitiOrderedIndex with
// the file hash stored in the DB, but without an OrderedIndex already
// stored in the DB, returns 0.
// ==> Using the default value of uint64 is OK.
|
||||
OrderedIndex uint64
|
||||
FileHash *string
|
||||
}
|
||||
|
||||
// Configuration contains the genesis information, the proposer settings and the graffiti.
|
||||
Configuration struct {
|
||||
GenesisValidatorsRoot *string `yaml:"genesisValidatorsRoot,omitempty"`
|
||||
ProposerSettings *validatorpb.ProposerSettingsPayload `yaml:"proposerSettings,omitempty"`
|
||||
Graffiti *Graffiti `yaml:"graffiti,omitempty"`
|
||||
}
|
||||
|
||||
// ValidatorSlashingProtection contains the latest signed block slot and the last signed attestation source and target epochs.
|
||||
// It is used to protect against validator slashing, implementing the EIP-3076 minimal slashing protection database.
|
||||
// https://eips.ethereum.org/EIPS/eip-3076
|
||||
ValidatorSlashingProtection struct {
|
||||
LatestSignedBlockSlot *uint64 `yaml:"latestSignedBlockSlot,omitempty"`
|
||||
LastSignedAttestationSourceEpoch uint64 `yaml:"lastSignedAttestationSourceEpoch"`
|
||||
LastSignedAttestationTargetEpoch *uint64 `yaml:"lastSignedAttestationTargetEpoch,omitempty"`
|
||||
}
|
||||
|
||||
// Config represents store's config object.
|
||||
Config struct {
|
||||
PubKeys [][fieldparams.BLSPubkeyLength]byte
|
||||
}
|
||||
)
|
||||
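For a rough picture of what this store writes to disk, marshalling a populated `ValidatorSlashingProtection` with the YAML tags above gives the per-pubkey file content. The helper name and values here are illustrative; the expected output is shown in the comment:

// exampleSlashingProtectionYAML sketches the content of one <pubkey>.yaml file
// under the slashing-protection directory.
func exampleSlashingProtectionYAML() ([]byte, error) {
	target := uint64(43)
	vsp := ValidatorSlashingProtection{
		LastSignedAttestationSourceEpoch: 42,
		LastSignedAttestationTargetEpoch: &target,
	}

	// Expected output (latestSignedBlockSlot is omitted because it is nil):
	//   lastSignedAttestationSourceEpoch: 42
	//   lastSignedAttestationTargetEpoch: 43
	return yaml.Marshal(vsp)
}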
|
||||
// Ensure the filesystem store implements the interface.
|
||||
var _ = iface.ValidatorDB(&Store{})
|
||||
|
||||
// Logging.
|
||||
var log = logrus.WithField("prefix", "db")
|
||||
|
||||
// NewStore creates a new filesystem store.
|
||||
func NewStore(databaseParentPath string, config *Config) (*Store, error) {
|
||||
s := &Store{
|
||||
databaseParentPath: databaseParentPath,
|
||||
databasePath: path.Join(databaseParentPath, DatabaseDirName),
|
||||
pkToSlashingMu: make(map[[fieldparams.BLSPubkeyLength]byte]*sync.RWMutex),
|
||||
}
|
||||
|
||||
// Initialize the configured public keys in the DB by creating their slashing protection files if they do not exist yet.
|
||||
if config != nil {
|
||||
if err := s.UpdatePublicKeysBuckets(config.PubKeys); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
return s, nil
|
||||
}
|
||||
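A small sketch of opening the store for a known set of keys; the helper name is illustrative, while `NewStore` and `Config` come from this file:

// openMinimalDB opens (or creates) the filesystem database under datadir and
// pre-creates one slashing protection file per configured public key.
func openMinimalDB(datadir string, pubKeys [][fieldparams.BLSPubkeyLength]byte) (*Store, error) {
	store, err := NewStore(datadir, &Config{PubKeys: pubKeys})
	if err != nil {
		return nil, errors.Wrap(err, "could not create filesystem store")
	}

	return store, nil
}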
|
||||
// Close only exists to satisfy the interface.
|
||||
func (*Store) Close() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// DatabasePath returns the path at which this database writes files.
|
||||
func (s *Store) DatabasePath() string {
|
||||
// The returned path is actually the parent path, to be consistent with the BoltDB implementation.
|
||||
return s.databaseParentPath
|
||||
}
|
||||
|
||||
// ClearDB removes any previously stored data at the configured data directory.
|
||||
func (s *Store) ClearDB() error {
|
||||
if err := os.RemoveAll(s.databasePath); err != nil {
|
||||
return errors.Wrapf(err, "cannot remove database at path %s", s.databasePath)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Backup creates a backup of the database.
|
||||
func (s *Store) Backup(_ context.Context, outputDir string, permissionOverride bool) error {
|
||||
// Get backups directory path.
|
||||
backupsDir := path.Join(outputDir, backupsDirectoryName)
|
||||
if len(outputDir) != 0 {
|
||||
expandedBackupsDir, err := file.ExpandPath(backupsDir)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not expand path %s", backupsDir)
|
||||
}
backupsDir = expandedBackupsDir
|
||||
}
|
||||
|
||||
// Ensure the backups directory exists, else create it.
|
||||
if err := file.HandleBackupDir(backupsDir, permissionOverride); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Get the path of this specific backup directory.
|
||||
backupPath := path.Join(backupsDir, fmt.Sprintf("prysm_validatordb_%d.backup", time.Now().Unix()), DatabaseDirName)
|
||||
log.WithField("backup", backupPath).Info("Writing backup database")
|
||||
|
||||
// Create this specific backup directory.
|
||||
if err := file.MkdirAll(backupPath); err != nil {
|
||||
return errors.Wrapf(err, "could not create directory %s", backupPath)
|
||||
}
|
||||
|
||||
// Copy the configuration file to the backup directory.
|
||||
if err := file.CopyFile(s.configurationFilePath(), path.Join(backupPath, configurationFileName)); err != nil {
|
||||
return errors.Wrap(err, "could not copy configuration file")
|
||||
}
|
||||
|
||||
// Copy the slashing protection directory to the backup directory.
|
||||
if err := file.CopyDir(s.slashingProtectionDirPath(), path.Join(backupPath, slashingProtectionDirName)); err != nil {
|
||||
return errors.Wrap(err, "could not copy slashing protection directory")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// UpdatePublicKeysBuckets creates a file for each public key in the database directory if needed.
|
||||
func (s *Store) UpdatePublicKeysBuckets(pubKeys [][fieldparams.BLSPubkeyLength]byte) error {
|
||||
validatorSlashingProtection := ValidatorSlashingProtection{}
|
||||
|
||||
// Marshal the ValidatorSlashingProtection struct.
|
||||
yfile, err := yaml.Marshal(validatorSlashingProtection)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not marshal validator slashing protection")
|
||||
}
|
||||
|
||||
// Create the directory if needed.
|
||||
slashingProtectionDirPath := s.slashingProtectionDirPath()
|
||||
if err := file.MkdirAll(slashingProtectionDirPath); err != nil {
|
||||
return errors.Wrapf(err, "could not create directory %s", s.databasePath)
|
||||
}
|
||||
|
||||
for _, pubKey := range pubKeys {
|
||||
// Get the file path for the public key.
|
||||
path := s.pubkeySlashingProtectionFilePath(pubKey)
|
||||
|
||||
// Check if the public key has a file in the database.
|
||||
exists, err := file.Exists(path, file.Regular)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not check if %s exists", path)
|
||||
}
|
||||
|
||||
if exists {
|
||||
continue
|
||||
}
|
||||
|
||||
// Write the ValidatorSlashingProtection struct to the file.
|
||||
if err := file.WriteFile(path, yfile); err != nil {
|
||||
return errors.Wrapf(err, "could not write into %s.yaml", path)
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// slashingProtectionDirPath returns the path of the slashing protection directory.
|
||||
func (s *Store) slashingProtectionDirPath() string {
|
||||
return path.Join(s.databasePath, slashingProtectionDirName)
|
||||
}
|
||||
|
||||
// pubkeySlashingProtectionFilePath returns the path of the slashing protection file for a public key.
|
||||
func (s *Store) pubkeySlashingProtectionFilePath(pubKey [fieldparams.BLSPubkeyLength]byte) string {
|
||||
slashingProtectionDirPath := s.slashingProtectionDirPath()
|
||||
pubkeyFileName := fmt.Sprintf("%s.yaml", hexutil.Encode(pubKey[:]))
|
||||
|
||||
return path.Join(slashingProtectionDirPath, pubkeyFileName)
|
||||
}
|
||||
|
||||
// configurationFilePath returns the path of the configuration file.
|
||||
func (s *Store) configurationFilePath() string {
|
||||
return path.Join(s.databasePath, configurationFileName)
|
||||
}
|
||||
|
||||
// configuration returns the configuration.
|
||||
func (s *Store) configuration() (*Configuration, error) {
|
||||
config := &Configuration{}
|
||||
|
||||
// Get the path of config file.
|
||||
configFilePath := s.configurationFilePath()
|
||||
cleanedConfigFilePath := filepath.Clean(configFilePath)
|
||||
|
||||
// Read lock the mutex.
|
||||
s.configurationMu.RLock()
|
||||
defer s.configurationMu.RUnlock()
|
||||
|
||||
// Check if config file exists.
|
||||
exists, err := file.Exists(configFilePath, file.Regular)
|
||||
if err != nil {
|
||||
return nil, errors.Wrapf(err, "could not check if %s exists", cleanedConfigFilePath)
|
||||
}
|
||||
|
||||
if !exists {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// Read the config file.
|
||||
yfile, err := os.ReadFile(cleanedConfigFilePath)
|
||||
if err != nil {
|
||||
return nil, errors.Wrapf(err, "could not read %s", cleanedConfigFilePath)
|
||||
}
|
||||
|
||||
// Unmarshal the config file into Config struct.
|
||||
if err := yaml.Unmarshal(yfile, &config); err != nil {
|
||||
return nil, errors.Wrapf(err, "could not unmarshal %s", cleanedConfigFilePath)
|
||||
}
|
||||
|
||||
// yaml.Unmarshal converts nil array to empty array.
|
||||
// To get the same behavior as the BoltDB implementation, we need to convert empty array to nil.
|
||||
if config.ProposerSettings != nil &&
|
||||
config.ProposerSettings.DefaultConfig != nil &&
|
||||
config.ProposerSettings.DefaultConfig.Builder != nil &&
|
||||
len(config.ProposerSettings.DefaultConfig.Builder.Relays) == 0 {
|
||||
config.ProposerSettings.DefaultConfig.Builder.Relays = nil
|
||||
}
|
||||
|
||||
if config.ProposerSettings != nil && config.ProposerSettings.ProposerConfig != nil {
|
||||
for _, option := range config.ProposerSettings.ProposerConfig {
|
||||
if option.Builder != nil && len(option.Builder.Relays) == 0 {
|
||||
option.Builder.Relays = nil
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return config, nil
|
||||
}
|
||||
|
||||
// saveConfiguration saves the configuration.
|
||||
func (s *Store) saveConfiguration(config *Configuration) error {
|
||||
// If config is nil, return
|
||||
if config == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Create the directory if needed.
|
||||
if err := file.MkdirAll(s.databasePath); err != nil {
|
||||
return errors.Wrapf(err, "could not create directory %s", s.databasePath)
|
||||
}
|
||||
|
||||
// Get the path of config file.
|
||||
configFilePath := s.configurationFilePath()
|
||||
|
||||
// Marshal config into yaml.
|
||||
data, err := yaml.Marshal(config)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not marshal config.yaml")
|
||||
}
|
||||
|
||||
// Write lock the mutex.
|
||||
s.configurationMu.Lock()
|
||||
defer s.configurationMu.Unlock()
|
||||
|
||||
// Write the data to config.yaml.
|
||||
if err := file.WriteFile(configFilePath, data); err != nil {
|
||||
return errors.Wrap(err, "could not write genesis info into config.yaml")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// validatorSlashingProtection returns the slashing protection for a public key.
|
||||
func (s *Store) validatorSlashingProtection(publicKey [fieldparams.BLSPubkeyLength]byte) (*ValidatorSlashingProtection, error) {
|
||||
var mu *sync.RWMutex
|
||||
validatorSlashingProtection := &ValidatorSlashingProtection{}
|
||||
|
||||
// Get the slashing protection file path.
|
||||
path := s.pubkeySlashingProtectionFilePath(publicKey)
|
||||
cleanedPath := filepath.Clean(path)
|
||||
|
||||
// Check if the public key has a file in the database.
|
||||
exists, err := file.Exists(path, file.Regular)
|
||||
if err != nil {
|
||||
return nil, errors.Wrapf(err, "could not check if %s exists", cleanedPath)
|
||||
}
|
||||
|
||||
if !exists {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// Lock the mutex protecting the map of public keys to slashing protection mutexes.
|
||||
s.slashingMuMapMu.Lock()
|
||||
|
||||
// Get / create the mutex for the public key.
|
||||
mu, ok := s.pkToSlashingMu[publicKey]
|
||||
if !ok {
|
||||
mu = &sync.RWMutex{}
|
||||
s.pkToSlashingMu[publicKey] = mu
|
||||
}
|
||||
|
||||
// Release the mutex protecting the map of public keys to slashing protection mutexes.
|
||||
s.slashingMuMapMu.Unlock()
|
||||
|
||||
// Read lock the mutex for the public key.
|
||||
mu.RLock()
|
||||
defer mu.RUnlock()
|
||||
|
||||
// Read the file and unmarshal it into ValidatorSlashingProtection struct.
|
||||
yfile, err := os.ReadFile(cleanedPath)
|
||||
if err != nil {
|
||||
return nil, errors.Wrapf(err, "could not read %s", cleanedPath)
|
||||
}
|
||||
|
||||
if err := yaml.Unmarshal(yfile, validatorSlashingProtection); err != nil {
|
||||
return nil, errors.Wrapf(err, "could not unmarshal %s", cleanedPath)
|
||||
}
|
||||
|
||||
return validatorSlashingProtection, nil
|
||||
}
|
||||
|
||||
// saveValidatorSlashingProtection saves the slashing protection for a public key.
|
||||
func (s *Store) saveValidatorSlashingProtection(
|
||||
publicKey [fieldparams.BLSPubkeyLength]byte,
|
||||
validatorSlashingProtection *ValidatorSlashingProtection,
|
||||
) error {
|
||||
// If the ValidatorSlashingProtection struct is nil, return.
|
||||
if validatorSlashingProtection == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Create the directory if needed.
|
||||
slashingProtectionDirPath := s.slashingProtectionDirPath()
|
||||
if err := file.MkdirAll(slashingProtectionDirPath); err != nil {
|
||||
return errors.Wrapf(err, "could not create directory %s", s.databasePath)
|
||||
}
|
||||
|
||||
// Get the file path for the public key.
|
||||
path := s.pubkeySlashingProtectionFilePath(publicKey)
|
||||
|
||||
// Lock the mutex protecting the map of public keys to slashing protection mutexes.
|
||||
s.slashingMuMapMu.Lock()
|
||||
|
||||
// Get / create the mutex for the public key.
|
||||
mu, ok := s.pkToSlashingMu[publicKey]
|
||||
if !ok {
|
||||
mu = &sync.RWMutex{}
|
||||
s.pkToSlashingMu[publicKey] = mu
|
||||
}
|
||||
|
||||
// Release the mutex protecting the map of public keys to slashing protection mutexes.
|
||||
s.slashingMuMapMu.Unlock()
|
||||
|
||||
// Write lock the mutex.
|
||||
mu.Lock()
|
||||
defer mu.Unlock()
|
||||
|
||||
// Marshal the ValidatorSlashingProtection struct.
|
||||
yfile, err := yaml.Marshal(validatorSlashingProtection)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not marshal validator slashing protection")
|
||||
}
|
||||
|
||||
// Write the ValidatorSlashingProtection struct to the file.
|
||||
if err := file.WriteFile(path, yfile); err != nil {
|
||||
return errors.Wrapf(err, "could not write into %s.yaml", path)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// publicKeys returns the public keys existing in the database directory.
|
||||
func (s *Store) publicKeys() ([][fieldparams.BLSPubkeyLength]byte, error) {
|
||||
// Get the slashing protection directory path.
|
||||
slashingProtectionDirPath := s.slashingProtectionDirPath()
|
||||
|
||||
// If the slashing protection directory does not exist, return an empty slice.
|
||||
exists, err := file.Exists(slashingProtectionDirPath, file.Directory)
|
||||
if err != nil {
|
||||
return nil, errors.Wrapf(err, "could not check if %s exists", slashingProtectionDirPath)
|
||||
}
|
||||
|
||||
if !exists {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// Get all entries in the slashing protection directory.
|
||||
entries, err := os.ReadDir(slashingProtectionDirPath)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not read database directory")
|
||||
}
|
||||
|
||||
// Collect public keys.
|
||||
publicKeys := make([][fieldparams.BLSPubkeyLength]byte, 0, len(entries))
|
||||
for _, entry := range entries {
|
||||
if !(entry.Type().IsRegular() && strings.HasPrefix(entry.Name(), "0x")) {
|
||||
log.WithFields(logrus.Fields{
|
||||
"file": entry.Name(),
|
||||
}).Warn("Unexpected file in slashing protection directory")
|
||||
continue
|
||||
}
|
||||
|
||||
// Convert the file name to a public key.
|
||||
publicKeyHex := strings.TrimSuffix(entry.Name(), ".yaml")
|
||||
publicKeyBytes, err := hexutil.Decode(publicKeyHex)
|
||||
if err != nil {
|
||||
return nil, errors.Wrapf(err, "could not decode %s", publicKeyHex)
|
||||
}
|
||||
|
||||
publicKey := [fieldparams.BLSPubkeyLength]byte{}
|
||||
copy(publicKey[:], publicKeyBytes)
|
||||
|
||||
publicKeys = append(publicKeys, publicKey)
|
||||
}
|
||||
|
||||
return publicKeys, nil
|
||||
}
|
||||
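// Example (not part of the original diff): a minimal usage sketch of the filesystem store
// defined in db.go above. The import path and the literal parent directory are assumptions;
// only the exported names introduced above (NewStore, Config, DatabasePath, Close) are used.
//
//	package main
//
//	import (
//		"fmt"
//		"log"
//
//		fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
//		"github.com/prysmaticlabs/prysm/v5/validator/db/filesystem"
//	)
//
//	func main() {
//		pubKeys := [][fieldparams.BLSPubkeyLength]byte{{0x01}, {0x02}}
//
//		// NewStore creates one <pubkey>.yaml file per key under
//		// <parent>/<DatabaseDirName>/<slashingProtectionDirName>/.
//		store, err := filesystem.NewStore("/tmp/validator", &filesystem.Config{PubKeys: pubKeys})
//		if err != nil {
//			log.Fatal(err)
//		}
//		defer func() {
//			_ = store.Close() // Close is a no-op for the filesystem store.
//		}()
//
//		fmt.Println(store.DatabasePath()) // prints the parent path, "/tmp/validator"
//	}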
validator/db/filesystem/db_test.go (new file, 310 lines)
@@ -0,0 +1,310 @@
|
||||
package filesystem
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"path"
|
||||
"testing"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common"
|
||||
"github.com/ethereum/go-ethereum/common/hexutil"
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/config/proposer"
|
||||
|
||||
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
|
||||
"github.com/prysmaticlabs/prysm/v5/io/file"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/require"
|
||||
)
|
||||
|
||||
func getPubKeys(t *testing.T, count int) [][fieldparams.BLSPubkeyLength]byte {
|
||||
pubKeys := make([][fieldparams.BLSPubkeyLength]byte, count)
|
||||
|
||||
for i := range pubKeys {
|
||||
validatorKey, err := bls.RandKey()
|
||||
require.NoError(t, err, "RandKey should not return an error")
|
||||
|
||||
copy(pubKeys[i][:], validatorKey.PublicKey().Marshal())
|
||||
}
|
||||
|
||||
return pubKeys
|
||||
}
|
||||
|
||||
func TestStore_NewStore(t *testing.T) {
|
||||
// Create some pubkeys.
|
||||
pubkeys := getPubKeys(t, 5)
|
||||
|
||||
// Just check `NewStore` does not return an error.
|
||||
_, err := NewStore(t.TempDir(), &Config{PubKeys: pubkeys})
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
}
|
||||
|
||||
func TestStore_Close(t *testing.T) {
|
||||
// Create a new store.
|
||||
s, err := NewStore(t.TempDir(), nil)
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// Close the DB.
|
||||
require.NoError(t, s.Close(), "Close should not return an error")
|
||||
}
|
||||
|
||||
func TestStore_DatabasePath(t *testing.T) {
|
||||
// Get a database parent path.
|
||||
databaseParentPath := t.TempDir()
|
||||
|
||||
// Create a new store.
|
||||
s, err := NewStore(databaseParentPath, nil)
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
expected := databaseParentPath
|
||||
actual := s.DatabasePath()
|
||||
|
||||
require.Equal(t, expected, actual)
|
||||
}
|
||||
|
||||
func TestStore_ClearDB(t *testing.T) {
|
||||
// Get a database parent path.
|
||||
databaseParentPath := t.TempDir()
|
||||
|
||||
// Compute slashing protection directory and configuration file paths.
|
||||
databasePath := path.Join(databaseParentPath, DatabaseDirName)
|
||||
|
||||
// Create some pubkeys.
|
||||
pubkeys := getPubKeys(t, 5)
|
||||
|
||||
// Create a new store.
|
||||
s, err := NewStore(databaseParentPath, &Config{PubKeys: pubkeys})
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// Check the presence of the database directory.
|
||||
exists, err := file.Exists(databasePath, file.Directory)
|
||||
require.NoError(t, err, "file.Exists should not return an error")
|
||||
require.Equal(t, true, exists, "file.Exists should return true")
|
||||
|
||||
// Clear the DB.
|
||||
err = s.ClearDB()
|
||||
require.NoError(t, err, "ClearDB should not return an error")
|
||||
|
||||
// Check the absence of the database directory.
|
||||
exists, err = file.Exists(databasePath, file.Directory)
|
||||
require.NoError(t, err, "file.Exists should not return an error")
|
||||
require.Equal(t, false, exists, "file.Exists should return false")
|
||||
}
|
||||
|
||||
func TestStore_Backup(t *testing.T) {
|
||||
// Get a database parent path.
|
||||
databaseParentPath := t.TempDir()
|
||||
originalDatabaseDirPath := path.Join(databaseParentPath, DatabaseDirName)
|
||||
|
||||
// Get a backups directory path.
|
||||
backupsPath := t.TempDir()
|
||||
|
||||
// Create some pubkeys.
|
||||
pubkeys := getPubKeys(t, 5)
|
||||
|
||||
// Create a new store.
|
||||
s, err := NewStore(databaseParentPath, &Config{PubKeys: pubkeys})
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// Update the proposer settings.
|
||||
err = s.SaveProposerSettings(context.Background(), &proposer.Settings{
|
||||
DefaultConfig: &proposer.Option{
|
||||
FeeRecipientConfig: &proposer.FeeRecipientConfig{
|
||||
FeeRecipient: common.Address{},
|
||||
},
|
||||
},
|
||||
})
|
||||
require.NoError(t, err, "SaveProposerSettings should not return an error")
|
||||
|
||||
// Backup the DB.
|
||||
require.NoError(t, s.Backup(context.Background(), backupsPath, true), "Backup should not return an error")
|
||||
|
||||
// Get the directory path of the backup.
|
||||
files, err := os.ReadDir(path.Join(backupsPath, backupsDirectoryName))
|
||||
require.NoError(t, err, "os.ReadDir should not return an error")
|
||||
require.Equal(t, 1, len(files), "os.ReadDir should return one file")
|
||||
backupDirEntry := files[0]
|
||||
require.Equal(t, true, backupDirEntry.IsDir(), "os.ReadDir should return a directory")
|
||||
backupDirPath := path.Join(backupsPath, backupsDirectoryName, backupDirEntry.Name())
|
||||
|
||||
// Get the path database directory.
|
||||
backupDatabaseDirPath := path.Join(backupDirPath, DatabaseDirName)
|
||||
|
||||
// Compare the content of the slashing protection directory.
|
||||
require.Equal(t, true, file.DirsEqual(originalDatabaseDirPath, backupDatabaseDirPath))
|
||||
}
|
||||
|
||||
func TestStore_UpdatePublicKeysBuckets(t *testing.T) {
|
||||
// Get a database path.
|
||||
databasePath := t.TempDir()
|
||||
|
||||
// Create some pubkeys.
|
||||
pubkeys := getPubKeys(t, 5)
|
||||
|
||||
// Create a new store.
|
||||
s, err := NewStore(databasePath, &Config{PubKeys: pubkeys})
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// Update the public keys.
|
||||
err = s.UpdatePublicKeysBuckets(pubkeys)
|
||||
require.NoError(t, err, "UpdatePublicKeysBuckets should not return an error")
|
||||
|
||||
// Check if the public keys files have been created.
|
||||
for i := range pubkeys {
|
||||
pubkeyHex := hexutil.Encode(pubkeys[i][:])
|
||||
pubkeyFile := path.Join(databasePath, DatabaseDirName, slashingProtectionDirName, fmt.Sprintf("%s.yaml", pubkeyHex))
|
||||
|
||||
exists, err := file.Exists(pubkeyFile, file.Regular)
|
||||
require.NoError(t, err, "file.Exists should not return an error")
|
||||
require.Equal(t, true, exists, "file.Exists should return true")
|
||||
}
|
||||
}
|
||||
|
||||
func TestStore_slashingProtectionDirPath(t *testing.T) {
|
||||
// Get a database path.
|
||||
databasePath := t.TempDir()
|
||||
|
||||
// Create a new store.
|
||||
s, err := NewStore(databasePath, nil)
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// Check the slashing protection directory path.
|
||||
expected := path.Join(databasePath, DatabaseDirName, slashingProtectionDirName)
|
||||
actual := s.slashingProtectionDirPath()
|
||||
require.Equal(t, expected, actual)
|
||||
}
|
||||
|
||||
func TestStore_pubkeySlashingProtectionFilePath(t *testing.T) {
|
||||
// Get a database path.
|
||||
databasePath := t.TempDir()
|
||||
|
||||
// Create a new store.
|
||||
s, err := NewStore(databasePath, nil)
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// Create a pubkey.
|
||||
pubkey := getPubKeys(t, 1)[0]
|
||||
|
||||
// Check the pubkey slashing protection file path.
|
||||
expected := path.Join(databasePath, DatabaseDirName, slashingProtectionDirName, hexutil.Encode(pubkey[:])+".yaml")
actual := s.pubkeySlashingProtectionFilePath(pubkey)
require.Equal(t, expected, actual)
|
||||
}
|
||||
|
||||
func TestStore_configurationFilePath(t *testing.T) {
|
||||
// Get a database path.
|
||||
databasePath := t.TempDir()
|
||||
|
||||
// Create a new store.
|
||||
s, err := NewStore(databasePath, nil)
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// Check the configuration file path.
|
||||
expected := path.Join(databasePath, DatabaseDirName, configurationFileName)
|
||||
actual := s.configurationFilePath()
|
||||
require.Equal(t, expected, actual)
|
||||
}
|
||||
|
||||
func TestStore_configuration_saveConfiguration(t *testing.T) {
|
||||
for _, tt := range []struct {
|
||||
name string
|
||||
expectedConfiguration *Configuration
|
||||
}{
|
||||
{
|
||||
name: "nil configuration",
|
||||
expectedConfiguration: nil,
|
||||
},
|
||||
{
|
||||
name: "some configuration",
|
||||
expectedConfiguration: &Configuration{},
|
||||
},
|
||||
} {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
// Create a database path.
|
||||
databasePath := t.TempDir()
|
||||
|
||||
// Create a new store.
|
||||
s, err := NewStore(databasePath, nil)
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// Save the configuration.
|
||||
err = s.saveConfiguration(tt.expectedConfiguration)
|
||||
require.NoError(t, err, "saveConfiguration should not return an error")
|
||||
|
||||
// Retrieve the configuration.
|
||||
actualConfiguration, err := s.configuration()
|
||||
require.NoError(t, err, "configuration should not return an error")
|
||||
|
||||
// Compare the configurations.
|
||||
require.DeepEqual(t, tt.expectedConfiguration, actualConfiguration)
|
||||
})
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
func TestStore_validatorSlashingProtection_saveValidatorSlashingProtection(t *testing.T) {
|
||||
// We get a database path
|
||||
databasePath := t.TempDir()
|
||||
|
||||
// We create a new store
|
||||
s, err := NewStore(databasePath, nil)
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// We create a pubkey
|
||||
pubkey := getPubKeys(t, 1)[0]
|
||||
|
||||
// We save an empty validator slashing protection for the pubkey
|
||||
err = s.saveValidatorSlashingProtection(pubkey, nil)
|
||||
require.NoError(t, err, "saveValidatorSlashingProtection should not return an error")
|
||||
|
||||
// We check the validator slashing protection for the pubkey
|
||||
var expected *ValidatorSlashingProtection
|
||||
actual, err := s.validatorSlashingProtection(pubkey)
|
||||
require.NoError(t, err, "validatorSlashingProtection should not return an error")
|
||||
require.Equal(t, expected, actual)
|
||||
|
||||
// We update the validator slashing protection for the pubkey
|
||||
epoch := uint64(1)
|
||||
validatorSlashingProtection := &ValidatorSlashingProtection{LatestSignedBlockSlot: &epoch}
|
||||
err = s.saveValidatorSlashingProtection(pubkey, validatorSlashingProtection)
|
||||
require.NoError(t, err, "saveValidatorSlashingProtection should not return an error")
|
||||
|
||||
// We check the validator slashing protection for the pubkey
|
||||
expected = &ValidatorSlashingProtection{LatestSignedBlockSlot: &epoch}
|
||||
actual, err = s.validatorSlashingProtection(pubkey)
|
||||
require.NoError(t, err, "validatorSlashingProtection should not return an error")
|
||||
require.DeepEqual(t, expected, actual)
|
||||
}
|
||||
|
||||
func TestStore_publicKeys(t *testing.T) {
|
||||
// We get a database path
|
||||
databasePath := t.TempDir()
|
||||
|
||||
// We create some pubkeys
|
||||
pubkeys := getPubKeys(t, 5)
|
||||
|
||||
// We create a new store
|
||||
s, err := NewStore(databasePath, &Config{PubKeys: pubkeys})
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// We check the public keys
|
||||
expected := pubkeys
|
||||
actual, err := s.publicKeys()
|
||||
require.NoError(t, err, "publicKeys should not return an error")
|
||||
|
||||
// We cannot compare the slices directly because the order is not guaranteed,
|
||||
// so we compare sets instead.
|
||||
|
||||
expectedSet := make(map[[fieldparams.BLSPubkeyLength]byte]bool)
|
||||
for _, pubkey := range expected {
|
||||
expectedSet[pubkey] = true
|
||||
}
|
||||
|
||||
actualSet := make(map[[fieldparams.BLSPubkeyLength]byte]bool)
|
||||
for _, pubkey := range actual {
|
||||
actualSet[pubkey] = true
|
||||
}
|
||||
|
||||
require.DeepEqual(t, expectedSet, actualSet)
|
||||
}
|
||||
validator/db/filesystem/genesis.go (new file, 75 lines)
@@ -0,0 +1,75 @@
|
||||
package filesystem
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common/hexutil"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
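// GenesisValidatorsRoot returns the genesis validators root stored in the configuration file,
// or nil if no configuration or no root has been saved yet.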
func (s *Store) GenesisValidatorsRoot(_ context.Context) ([]byte, error) {
|
||||
// Get configuration.
|
||||
configuration, err := s.configuration()
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not get config")
|
||||
}
|
||||
|
||||
// Return nil if config file does not exist.
|
||||
if configuration == nil {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// Return nil if genesis validators root is empty.
|
||||
if configuration.GenesisValidatorsRoot == nil {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// Convert genValRoot to bytes.
|
||||
genValRootBytes, err := hexutil.Decode(*configuration.GenesisValidatorsRoot)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not decode genesis validators root")
|
||||
}
|
||||
|
||||
return genValRootBytes, nil
|
||||
}
|
||||
|
||||
// SaveGenesisValidatorsRoot saves the genesis validators root to db.
|
||||
func (s *Store) SaveGenesisValidatorsRoot(_ context.Context, genValRoot []byte) error {
|
||||
// Return nil if genesis validators root is empty.
|
||||
if genValRoot == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Convert genValRoot to hex.
|
||||
genValRootHex := hexutil.Encode(genValRoot)
|
||||
|
||||
// Get configuration.
|
||||
configuration, err := s.configuration()
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not get config")
|
||||
}
|
||||
|
||||
if configuration == nil {
|
||||
// Create new config.
|
||||
configuration = &Configuration{
|
||||
GenesisValidatorsRoot: &genValRootHex,
|
||||
}
|
||||
|
||||
// Save the config.
|
||||
if err := s.saveConfiguration(configuration); err != nil {
|
||||
return errors.Wrap(err, "could not save config")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Modify the value of genesis validators root.
|
||||
configuration.GenesisValidatorsRoot = &genValRootHex
|
||||
|
||||
// Save the config.
|
||||
if err := s.saveConfiguration(configuration); err != nil {
|
||||
return errors.Wrap(err, "could not save config")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
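// Example (not part of the original diff): a sketch of round-tripping the genesis validators
// root through config.yaml using only the two functions above. The helper name and the store
// argument are assumptions; the store is created with NewStore as in db.go, and the usual
// context/filesystem imports are assumed.
//
//	func saveAndReadGenesisRoot(ctx context.Context, store *filesystem.Store) ([]byte, error) {
//		root := make([]byte, 32)
//		root[0] = 0xde
//
//		// Persist the root as a hex string in config.yaml.
//		if err := store.SaveGenesisValidatorsRoot(ctx, root); err != nil {
//			return nil, err
//		}
//
//		// Returns the same 32 bytes; a nil input to SaveGenesisValidatorsRoot is a no-op.
//		return store.GenesisValidatorsRoot(ctx)
//	}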
validator/db/filesystem/genesis_test.go (new file, 102 lines)
@@ -0,0 +1,102 @@
|
||||
package filesystem
|
||||
|
||||
import (
|
||||
"context"
|
||||
"testing"
|
||||
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/require"
|
||||
)
|
||||
|
||||
func TestStore_GenesisValidatorsRoot(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
genesisValidatorRootString := "0x0100"
|
||||
genesisValidatorRootBytes := []byte{1, 0}
|
||||
|
||||
for _, tt := range []struct {
|
||||
name string
|
||||
savedConfiguration *Configuration
|
||||
expectedGenesisValidatorRoot []byte
|
||||
}{
|
||||
{
|
||||
name: "configuration is nil",
|
||||
savedConfiguration: nil,
|
||||
expectedGenesisValidatorRoot: nil,
|
||||
},
|
||||
{
|
||||
name: "configuration.GenesisValidatorsRoot is nil",
|
||||
savedConfiguration: &Configuration{GenesisValidatorsRoot: nil},
|
||||
expectedGenesisValidatorRoot: nil,
|
||||
},
|
||||
{
|
||||
name: "configuration.GenesisValidatorsRoot is something",
|
||||
savedConfiguration: &Configuration{GenesisValidatorsRoot: &genesisValidatorRootString},
|
||||
expectedGenesisValidatorRoot: genesisValidatorRootBytes,
|
||||
},
|
||||
} {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
// Create a new store.
|
||||
store, err := NewStore(t.TempDir(), nil)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Save the configuration.
|
||||
err = store.saveConfiguration(tt.savedConfiguration)
|
||||
require.NoError(t, err, "save configuration should not error")
|
||||
|
||||
// Get genesis validators root.
|
||||
actualGenesisValidatorRoot, err := store.GenesisValidatorsRoot(ctx)
|
||||
require.NoError(t, err, "get genesis validators root should not error")
|
||||
require.DeepEqual(t, tt.expectedGenesisValidatorRoot, actualGenesisValidatorRoot, "genesis validators root should be equal")
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestStore_SaveGenesisValidatorsRoot(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
genesisValidatorRootString := "0x0100"
|
||||
|
||||
for _, tt := range []struct {
|
||||
name string
|
||||
initialConfiguration *Configuration
|
||||
genesisValidatorRoot []byte
|
||||
expectedConfiguration *Configuration
|
||||
}{
|
||||
{
|
||||
name: "genValRoot is nil",
|
||||
initialConfiguration: nil,
|
||||
genesisValidatorRoot: nil,
|
||||
expectedConfiguration: nil,
|
||||
},
|
||||
{
|
||||
name: "initial configuration is nil",
|
||||
initialConfiguration: nil,
|
||||
genesisValidatorRoot: []byte{1, 0},
|
||||
expectedConfiguration: &Configuration{GenesisValidatorsRoot: &genesisValidatorRootString},
|
||||
},
|
||||
{
|
||||
name: "initial configuration exists",
|
||||
initialConfiguration: &Configuration{},
|
||||
genesisValidatorRoot: []byte{1, 0},
|
||||
expectedConfiguration: &Configuration{GenesisValidatorsRoot: &genesisValidatorRootString},
|
||||
},
|
||||
} {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
// Create a new store.
|
||||
store, err := NewStore(t.TempDir(), nil)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Save the initial configuration.
|
||||
err = store.saveConfiguration(tt.initialConfiguration)
|
||||
require.NoError(t, err, "save configuration should not error")
|
||||
|
||||
// Save genesis validators root.
|
||||
err = store.SaveGenesisValidatorsRoot(ctx, tt.genesisValidatorRoot)
|
||||
require.NoError(t, err, "save genesis validators root should not error")
|
||||
|
||||
// Get configuration.
|
||||
actualConfiguration, err := store.configuration()
|
||||
require.NoError(t, err, "get configuration should not error")
|
||||
require.DeepEqual(t, tt.expectedConfiguration, actualConfiguration, "configuration should be equal")
|
||||
})
|
||||
}
|
||||
}
|
||||
validator/db/filesystem/graffiti.go (new file, 146 lines)
@@ -0,0 +1,146 @@
|
||||
package filesystem
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common/hexutil"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
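// SaveGraffitiOrderedIndex saves the graffiti ordered index into the configuration file,
// creating the configuration and graffiti sections if needed.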
func (s *Store) SaveGraffitiOrderedIndex(_ context.Context, index uint64) error {
|
||||
// Get the configuration.
|
||||
configuration, err := s.configuration()
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not get configuration")
|
||||
}
|
||||
|
||||
if configuration == nil {
|
||||
// Create a new configuration.
|
||||
configuration = &Configuration{
|
||||
Graffiti: &Graffiti{
|
||||
OrderedIndex: index,
|
||||
},
|
||||
}
|
||||
|
||||
// Save the configuration.
|
||||
if err := s.saveConfiguration(configuration); err != nil {
|
||||
return errors.Wrapf(err, "could not save configuration")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
if configuration.Graffiti == nil {
|
||||
// Create a new graffiti.
|
||||
configuration.Graffiti = &Graffiti{
|
||||
OrderedIndex: index,
|
||||
}
|
||||
|
||||
// Save the configuration.
|
||||
if err := s.saveConfiguration(configuration); err != nil {
|
||||
return errors.Wrapf(err, "could not save configuration")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Modify the value of ordered index.
|
||||
configuration.Graffiti.OrderedIndex = index
|
||||
|
||||
// Save the configuration.
|
||||
if err := s.saveConfiguration(configuration); err != nil {
|
||||
return errors.Wrapf(err, "could not save configuration")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
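// GraffitiOrderedIndex returns the graffiti ordered index recorded for the given graffiti file
// hash. If the stored file hash is missing or differs, the index is reset to 0 and the new hash
// is recorded.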
func (s *Store) GraffitiOrderedIndex(_ context.Context, fileHash [32]byte) (uint64, error) {
|
||||
// Encode the file hash to string.
|
||||
fileHashHex := hexutil.Encode(fileHash[:])
|
||||
|
||||
// Get the configuration.
|
||||
configuration, err := s.configuration()
|
||||
if err != nil {
|
||||
return 0, errors.Wrapf(err, "could not get configuration")
|
||||
}
|
||||
|
||||
if configuration == nil {
|
||||
// Create a new configuration.
|
||||
configuration = &Configuration{
|
||||
Graffiti: &Graffiti{
|
||||
OrderedIndex: 0,
|
||||
FileHash: &fileHashHex,
|
||||
},
|
||||
}
|
||||
|
||||
// Save the configuration.
|
||||
if err := s.saveConfiguration(configuration); err != nil {
|
||||
return 0, errors.Wrapf(err, "could not save configuration")
|
||||
}
|
||||
|
||||
return 0, nil
|
||||
}
|
||||
|
||||
if configuration.Graffiti == nil {
|
||||
// Create a new graffiti.
|
||||
configuration.Graffiti = &Graffiti{
|
||||
OrderedIndex: 0,
|
||||
FileHash: &fileHashHex,
|
||||
}
|
||||
|
||||
// Save the configuration.
|
||||
if err := s.saveConfiguration(configuration); err != nil {
|
||||
return 0, errors.Wrapf(err, "could not save configuration")
|
||||
}
|
||||
|
||||
return 0, nil
|
||||
}
|
||||
|
||||
// Check if file hash does not exist or is not equal to the file hash in configuration.
|
||||
if configuration.Graffiti.FileHash == nil || *configuration.Graffiti.FileHash != fileHashHex {
|
||||
// Modify the value of ordered index.
|
||||
configuration.Graffiti.OrderedIndex = 0
|
||||
|
||||
// Modify the value of file hash.
|
||||
configuration.Graffiti.FileHash = &fileHashHex
|
||||
|
||||
// Save the configuration.
|
||||
if err := s.saveConfiguration(configuration); err != nil {
|
||||
return 0, errors.Wrapf(err, "could not save configuration")
|
||||
}
|
||||
|
||||
return 0, nil
|
||||
}
|
||||
|
||||
return configuration.Graffiti.OrderedIndex, nil
|
||||
}
|
||||
|
||||
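// GraffitiFileHash returns the graffiti file hash recorded in the configuration file,
// along with a boolean indicating whether it exists.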
func (s *Store) GraffitiFileHash() ([32]byte, bool, error) {
|
||||
// Get configuration.
|
||||
configuration, err := s.configuration()
|
||||
if err != nil {
|
||||
return [32]byte{}, false, errors.Wrapf(err, "could not get configuration")
|
||||
}
|
||||
|
||||
// If configuration is nil or graffiti is nil or file hash is nil, set graffiti file hash as not existing.
|
||||
if configuration == nil || configuration.Graffiti == nil || configuration.Graffiti.FileHash == nil {
|
||||
return [32]byte{}, false, nil
|
||||
}
|
||||
|
||||
// Convert the graffiti file hash to [32]byte.
|
||||
fileHashBytes, err := hexutil.Decode(*configuration.Graffiti.FileHash)
|
||||
if err != nil {
|
||||
return [32]byte{}, false, errors.Wrapf(err, "could not decode graffiti file hash")
|
||||
}
|
||||
|
||||
if len(fileHashBytes) != 32 {
|
||||
return [32]byte{}, false, errors.Errorf("invalid graffiti file hash length %d", len(fileHashBytes))
|
||||
}
|
||||
|
||||
var fileHash [32]byte
|
||||
copy(fileHash[:], fileHashBytes)
|
||||
|
||||
// Return the graffiti file hash.
|
||||
return fileHash, true, nil
|
||||
}
|
||||
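// Example (not part of the original diff): a sketch of the reset semantics of
// GraffitiOrderedIndex. The helper name, store and hashes are assumptions; only the functions
// defined above are used.
//
//	func demoGraffitiIndex(ctx context.Context, store *filesystem.Store) error {
//		hashA := [32]byte{1}
//		hashB := [32]byte{2}
//
//		// First read with an unknown file hash records the hash and yields index 0.
//		if _, err := store.GraffitiOrderedIndex(ctx, hashA); err != nil {
//			return err
//		}
//
//		// Persist an index for the currently recorded hash.
//		if err := store.SaveGraffitiOrderedIndex(ctx, 42); err != nil {
//			return err
//		}
//
//		// Same hash: the saved index (42) is returned.
//		// A different hash (hashB) resets the index to 0 and records hashB instead.
//		_, err := store.GraffitiOrderedIndex(ctx, hashB)
//		return err
//	}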
validator/db/filesystem/graffiti_test.go (new file, 150 lines)
@@ -0,0 +1,150 @@
|
||||
package filesystem
|
||||
|
||||
import (
|
||||
"context"
|
||||
"testing"
|
||||
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/require"
|
||||
)
|
||||
|
||||
func TestStore_SaveGraffitiOrderedIndex(t *testing.T) {
|
||||
graffitiOrderedIndex := uint64(42)
|
||||
|
||||
for _, tt := range []struct {
|
||||
name string
|
||||
configuration *Configuration
|
||||
}{
|
||||
{name: "nil configuration", configuration: nil},
|
||||
{name: "configuration without graffiti", configuration: &Configuration{}},
|
||||
{name: "configuration with graffiti", configuration: &Configuration{Graffiti: &Graffiti{}}},
|
||||
} {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
// Create a new store.
|
||||
store, err := NewStore(t.TempDir(), nil)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Save configuration.
|
||||
err = store.saveConfiguration(tt.configuration)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Save graffiti ordered index.
|
||||
err = store.SaveGraffitiOrderedIndex(context.Background(), graffitiOrderedIndex)
|
||||
require.NoError(t, err)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestStore_GraffitiOrderedIndex(t *testing.T) {
|
||||
FileHash1 := [fieldparams.RootLength]byte{1}
|
||||
FileHash1Str := "0x0100000000000000000000000000000000000000000000000000000000000000"
|
||||
FileHash2Str := "0x0200000000000000000000000000000000000000000000000000000000000000"
|
||||
|
||||
for _, tt := range []struct {
|
||||
name string
|
||||
configuration *Configuration
|
||||
fileHash [fieldparams.RootLength]byte
|
||||
expectedGraffitiOrderedIndex uint64
|
||||
}{
|
||||
{
|
||||
name: "nil configuration saved",
|
||||
configuration: nil,
|
||||
fileHash: FileHash1,
|
||||
expectedGraffitiOrderedIndex: 0,
|
||||
},
|
||||
{
|
||||
name: "configuration without graffiti saved",
|
||||
configuration: &Configuration{},
|
||||
fileHash: FileHash1,
|
||||
expectedGraffitiOrderedIndex: 0,
|
||||
},
|
||||
{
|
||||
name: "graffiti without graffiti file hash saved",
|
||||
configuration: &Configuration{Graffiti: &Graffiti{FileHash: nil}},
|
||||
fileHash: FileHash1,
|
||||
expectedGraffitiOrderedIndex: 0,
|
||||
},
|
||||
{
|
||||
name: "graffiti with different graffiti file hash saved",
|
||||
configuration: &Configuration{Graffiti: &Graffiti{OrderedIndex: 42, FileHash: &FileHash2Str}},
|
||||
fileHash: FileHash1,
|
||||
expectedGraffitiOrderedIndex: 0,
|
||||
},
|
||||
{
|
||||
name: "graffiti with same graffiti file hash saved",
|
||||
configuration: &Configuration{Graffiti: &Graffiti{OrderedIndex: 42, FileHash: &FileHash1Str}},
|
||||
fileHash: FileHash1,
|
||||
expectedGraffitiOrderedIndex: 42,
|
||||
},
|
||||
} {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
// Create a new store.
|
||||
store, err := NewStore(t.TempDir(), nil)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Save configuration.
|
||||
err = store.saveConfiguration(tt.configuration)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Get graffiti ordered index.
|
||||
actualGraffitiOrderedIndex, err := store.GraffitiOrderedIndex(context.Background(), tt.fileHash)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, tt.expectedGraffitiOrderedIndex, actualGraffitiOrderedIndex)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestStore_GraffitiFileHash(t *testing.T) {
|
||||
fileHashStr := "0x0100000000000000000000000000000000000000000000000000000000000000"
|
||||
|
||||
for _, tt := range []struct {
|
||||
name string
|
||||
configuration *Configuration
|
||||
expectedExists bool
|
||||
expectedFileHash [fieldparams.RootLength]byte
|
||||
}{
|
||||
{
|
||||
name: "nil configuration saved",
|
||||
configuration: nil,
|
||||
expectedExists: false,
|
||||
expectedFileHash: [fieldparams.RootLength]byte{0},
|
||||
},
|
||||
{
|
||||
name: "configuration without graffiti saved",
|
||||
configuration: &Configuration{},
|
||||
expectedExists: false,
|
||||
expectedFileHash: [fieldparams.RootLength]byte{0},
|
||||
},
|
||||
{
|
||||
name: "graffiti without graffiti file hash saved",
|
||||
configuration: &Configuration{Graffiti: &Graffiti{FileHash: nil}},
|
||||
expectedExists: false,
|
||||
expectedFileHash: [fieldparams.RootLength]byte{0},
|
||||
},
|
||||
{
|
||||
name: "graffiti with graffiti file hash saved",
|
||||
configuration: &Configuration{Graffiti: &Graffiti{FileHash: &fileHashStr}},
|
||||
expectedExists: true,
|
||||
expectedFileHash: [fieldparams.RootLength]byte{1},
|
||||
},
|
||||
} {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
// Create a new store.
|
||||
store, err := NewStore(t.TempDir(), nil)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Save configuration.
|
||||
err = store.saveConfiguration(tt.configuration)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Get graffiti file hash.
|
||||
actualFileHash, actualExists, err := store.GraffitiFileHash()
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, tt.expectedExists, actualExists)
|
||||
|
||||
if tt.expectedExists {
|
||||
require.Equal(t, tt.expectedFileHash, actualFileHash)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
validator/db/filesystem/import.go (new file, 141 lines)
@@ -0,0 +1,141 @@
|
||||
package filesystem
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"io"
|
||||
"strings"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common/hexutil"
|
||||
"github.com/pkg/errors"
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/common"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/iface"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/helpers"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/slashing-protection-history/format"
|
||||
)
|
||||
|
||||
// ImportStandardProtectionJSON takes in an EIP-3076 compliant JSON file used for slashing protection
|
||||
// by Ethereum validators and imports its data into Prysm's internal minimal representation of slashing
|
||||
// protection in the validator client's database.
|
||||
func (s *Store) ImportStandardProtectionJSON(ctx context.Context, r io.Reader) error {
|
||||
// Read the JSON file
|
||||
encodedJSON, err := io.ReadAll(r)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not read slashing protection JSON file")
|
||||
}
|
||||
|
||||
// Unmarshal the JSON file
|
||||
interchangeJSON := &format.EIPSlashingProtectionFormat{}
|
||||
if err := json.Unmarshal(encodedJSON, interchangeJSON); err != nil {
|
||||
return errors.Wrap(err, "could not unmarshal slashing protection JSON file")
|
||||
}
|
||||
|
||||
// If there is no data in the JSON file, we can return early.
|
||||
if interchangeJSON.Data == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
// We validate the `MetadataV0` field of the slashing protection JSON file.
|
||||
if err := helpers.ValidateMetadata(ctx, s, interchangeJSON); err != nil {
|
||||
return errors.Wrap(err, "slashing protection JSON metadata was incorrect")
|
||||
}
|
||||
|
||||
// Save block proposals and attestations into the database.
|
||||
bar := common.InitializeProgressBar(len(interchangeJSON.Data), "Save blocks proposals and attestations:")
|
||||
for _, item := range interchangeJSON.Data {
|
||||
// Update progress bar
|
||||
if err := bar.Add(1); err != nil {
|
||||
return errors.Wrap(err, "could not update progress bar")
|
||||
}
|
||||
|
||||
// If item is nil, skip
|
||||
if item == nil {
|
||||
continue
|
||||
}
|
||||
|
||||
// Convert pubkey to bytes array
|
||||
pubkeyBytes, err := hexutil.Decode(item.Pubkey)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not decode public key from hex")
|
||||
}
|
||||
|
||||
pubkey := ([fieldparams.BLSPubkeyLength]byte)(pubkeyBytes)
|
||||
|
||||
// Block proposals
|
||||
if err := importBlockProposals(ctx, pubkey, item, s); err != nil {
|
||||
return errors.Wrap(err, "could not import block proposals")
|
||||
}
|
||||
|
||||
// Attestations
|
||||
if err := importAttestations(ctx, pubkey, item, s); err != nil {
|
||||
return errors.Wrap(err, "could not import attestations")
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func importBlockProposals(ctx context.Context, pubkey [fieldparams.BLSPubkeyLength]byte, item *format.ProtectionData, validatorDB iface.ValidatorDB) error {
|
||||
for _, sb := range item.SignedBlocks {
|
||||
// If signing block is nil, return early
|
||||
if sb == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Convert slot to primitives.Slot
|
||||
slot, err := helpers.SlotFromString(sb.Slot)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not convert slot to primitives.Slot")
|
||||
}
|
||||
|
||||
// Save proposal if not slashable regarding EIP-3076 (minimal database)
|
||||
if err := validatorDB.SaveProposalHistoryForSlot(ctx, pubkey, slot, []byte{}); err != nil && !strings.Contains(err.Error(), "could not sign proposal") {
|
||||
return errors.Wrap(err, "could not save proposal history from imported JSON to database")
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func importAttestations(ctx context.Context, pubkey [fieldparams.BLSPubkeyLength]byte, item *format.ProtectionData, validatorDB iface.ValidatorDB) error {
|
||||
atts := make([]*ethpb.IndexedAttestation, len(item.SignedAttestations))
|
||||
for i := range item.SignedAttestations {
|
||||
// Get signed attestation
|
||||
sa := item.SignedAttestations[i]
|
||||
|
||||
// Convert source epoch to primitives.Epoch
|
||||
source, err := helpers.EpochFromString(sa.SourceEpoch)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not convert source epoch to primitives.Epoch")
|
||||
}
|
||||
|
||||
// Convert target epoch to primitives.Epoch
|
||||
target, err := helpers.EpochFromString(sa.TargetEpoch)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not convert target epoch to primitives.Epoch")
|
||||
}
|
||||
|
||||
// Create indexed attestation
|
||||
att := ðpb.IndexedAttestation{
|
||||
Data: ðpb.AttestationData{
|
||||
Source: ðpb.Checkpoint{
|
||||
Epoch: source,
|
||||
},
|
||||
Target: ðpb.Checkpoint{
|
||||
Epoch: target,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
atts[i] = att
|
||||
}
|
||||
|
||||
// Save attestations
|
||||
if err := validatorDB.SaveAttestationsForPubKey(ctx, pubkey, [][]byte{}, atts); err != nil && !strings.Contains(err.Error(), "could not sign attestation") {
|
||||
return errors.Wrap(err, "could not save attestation record from imported JSON to database")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
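// Example (not part of the original diff): feeding an EIP-3076 interchange file from disk into
// the minimal database. The helper name and the JSON path are assumptions; the store comes from
// NewStore as in db.go, and the usual context/os/filesystem imports are assumed.
//
//	func importInterchange(ctx context.Context, store *filesystem.Store, jsonPath string) error {
//		f, err := os.Open(jsonPath) // e.g. a file exported by another client
//		if err != nil {
//			return err
//		}
//		defer func() { _ = f.Close() }()
//
//		// Only the highest source/target epochs and the highest proposed slot per public key
//		// survive the import, since the filesystem store keeps EIP-3076 minimal data only.
//		return store.ImportStandardProtectionJSON(ctx, f)
//	}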
validator/db/filesystem/import_test.go (new file, 145 lines)
@@ -0,0 +1,145 @@
|
||||
package filesystem
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/json"
|
||||
"testing"
|
||||
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/assert"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/require"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/common"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/slashing-protection-history/format"
|
||||
valtest "github.com/prysmaticlabs/prysm/v5/validator/testing"
|
||||
)
|
||||
|
||||
func TestStore_ImportInterchangeData_BadJSON(t *testing.T) {
|
||||
// Create a database path.
|
||||
databaseParentPath := t.TempDir()
|
||||
|
||||
// Create a new store.
|
||||
s, err := NewStore(databaseParentPath, nil)
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
buf := bytes.NewBuffer([]byte("helloworld"))
|
||||
err = s.ImportStandardProtectionJSON(context.Background(), buf)
|
||||
require.ErrorContains(t, "could not unmarshal slashing protection JSON file", err)
|
||||
}
|
||||
|
||||
func TestStore_ImportInterchangeData_NilData_FailsSilently(t *testing.T) {
|
||||
// Create a database path.
|
||||
databaseParentPath := t.TempDir()
|
||||
|
||||
// Create a new store.
|
||||
s, err := NewStore(databaseParentPath, nil)
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
interchangeJSON := &format.EIPSlashingProtectionFormat{}
|
||||
encoded, err := json.Marshal(interchangeJSON)
|
||||
require.NoError(t, err)
|
||||
|
||||
buf := bytes.NewBuffer(encoded)
|
||||
err = s.ImportStandardProtectionJSON(context.Background(), buf)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
func TestStore_ImportInterchangeData_BadFormat_PreventsDBWrites(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
numValidators := 10
|
||||
publicKeys, err := valtest.CreateRandomPubKeys(numValidators)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Create a database path.
|
||||
databaseParentPath := t.TempDir()
|
||||
|
||||
// Create a new store.
|
||||
s, err := NewStore(databaseParentPath, &Config{PubKeys: publicKeys})
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// First we setup some mock attesting and proposal histories and create a mock
|
||||
// standard slashing protection format JSON struct.
|
||||
attestingHistory, proposalHistory := valtest.MockAttestingAndProposalHistories(publicKeys)
|
||||
standardProtectionFormat, err := valtest.MockSlashingProtectionJSON(publicKeys, attestingHistory, proposalHistory)
|
||||
require.NoError(t, err)
|
||||
|
||||
// We replace a slot of one of the blocks with junk data.
|
||||
standardProtectionFormat.Data[0].SignedBlocks[0].Slot = "BadSlot"
|
||||
|
||||
// We encode the standard slashing protection struct into a JSON format.
|
||||
blob, err := json.Marshal(standardProtectionFormat)
|
||||
require.NoError(t, err)
|
||||
buf := bytes.NewBuffer(blob)
|
||||
|
||||
// Next, we attempt to import it into our validator database and check that
|
||||
// we obtain an error during the import process.
|
||||
err = s.ImportStandardProtectionJSON(ctx, buf)
|
||||
assert.NotNil(t, err)
|
||||
|
||||
// Next, we attempt to retrieve the attesting and proposals histories from our database and
|
||||
// verify nothing was saved to the DB. If there is an error in the import process, we need to make
|
||||
// sure writing is an atomic operation: either the import succeeds and saves the slashing protection
|
||||
// data to our DB, or it does not.
|
||||
for i := 0; i < len(publicKeys); i++ {
|
||||
receivedHistory, err := s.ProposalHistoryForPubKey(ctx, publicKeys[i])
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(
|
||||
t,
|
||||
make([]*common.Proposal, 0),
|
||||
receivedHistory,
|
||||
"Imported proposal signing root is different than the empty default",
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
func TestStore_ImportInterchangeData_OK(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
numValidators := 10
|
||||
publicKeys, err := valtest.CreateRandomPubKeys(numValidators)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Create a database path.
|
||||
databaseParentPath := t.TempDir()
|
||||
|
||||
// Create a new store.
|
||||
s, err := NewStore(databaseParentPath, &Config{PubKeys: publicKeys})
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// First we setup some mock attesting and proposal histories and create a mock
|
||||
// standard slashing protection format JSON struct.
|
||||
attestingHistory, proposalHistory := valtest.MockAttestingAndProposalHistories(publicKeys)
|
||||
standardProtectionFormat, err := valtest.MockSlashingProtectionJSON(publicKeys, attestingHistory, proposalHistory)
|
||||
require.NoError(t, err)
|
||||
|
||||
// We encode the standard slashing protection struct into a JSON format.
|
||||
blob, err := json.Marshal(standardProtectionFormat)
|
||||
require.NoError(t, err)
|
||||
buf := bytes.NewBuffer(blob)
|
||||
|
||||
// Next, we attempt to import it into our validator database.
|
||||
err = s.ImportStandardProtectionJSON(ctx, buf)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Next, we attempt to retrieve the attesting and proposals histories from our database and
|
||||
// verify those indeed match the originally generated mock histories.
|
||||
for i := 0; i < len(publicKeys); i++ {
|
||||
for _, att := range attestingHistory[i] {
|
||||
indexedAtt := ðpb.IndexedAttestation{
|
||||
Data: ðpb.AttestationData{
|
||||
Source: ðpb.Checkpoint{
|
||||
Epoch: att.Source,
|
||||
},
|
||||
Target: ðpb.Checkpoint{
|
||||
Epoch: att.Target,
|
||||
},
|
||||
},
|
||||
}
|
||||
// We expect we have an attesting history for the attestation and when
|
||||
// attempting to verify the same att is slashable with a different signing root,
|
||||
// we expect to receive a double vote slashing kind.
|
||||
err := s.SaveAttestationForPubKey(ctx, publicKeys[i], [fieldparams.RootLength]byte{}, indexedAtt)
|
||||
require.ErrorContains(t, "could not sign attestation", err)
|
||||
}
|
||||
}
|
||||
}
|
||||
validator/db/filesystem/migration.go (new file, 13 lines)
@@ -0,0 +1,13 @@
|
||||
package filesystem
|
||||
|
||||
import "context"
|
||||
|
||||
// RunUpMigrations only exists to satisfy the interface.
|
||||
func (*Store) RunUpMigrations(_ context.Context) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// RunDownMigrations only exists to satisfy the interface.
|
||||
func (*Store) RunDownMigrations(_ context.Context) error {
|
||||
return nil
|
||||
}
|
||||
validator/db/filesystem/migration_test.go (new file, 28 lines)
@@ -0,0 +1,28 @@
|
||||
package filesystem
|
||||
|
||||
import (
|
||||
"context"
|
||||
"testing"
|
||||
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/require"
|
||||
)
|
||||
|
||||
func TestStore_RunUpMigrations(t *testing.T) {
|
||||
// Just check `NewStore` does not return an error.
|
||||
store, err := NewStore(t.TempDir(), nil)
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// Just check `RunUpMigrations` does not return an error.
|
||||
err = store.RunUpMigrations(context.Background())
|
||||
require.NoError(t, err, "RunUpMigrations should not return an error")
|
||||
}
|
||||
|
||||
func TestStore_RunDownMigrations(t *testing.T) {
|
||||
// Just check `NewStore` does not return an error.
|
||||
store, err := NewStore(t.TempDir(), nil)
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// Just check `RunDownMigrations` does not return an error.
|
||||
err = store.RunDownMigrations(context.Background())
|
||||
require.NoError(t, err, "RunUpMigrations should not return an error")
|
||||
}
|
||||
validator/db/filesystem/proposer_protection.go (new file, 145 lines)
@@ -0,0 +1,145 @@
|
||||
package filesystem
|
||||
|
||||
import (
|
||||
"context"
|
||||
"strings"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/common"
|
||||
)
|
||||
|
||||
// HighestSignedProposal is implemented only to satisfy the interface.
|
||||
func (*Store) HighestSignedProposal(_ context.Context, _ [fieldparams.BLSPubkeyLength]byte) (primitives.Slot, bool, error) {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
// LowestSignedProposal is implemented only to satisfy the interface.
|
||||
func (*Store) LowestSignedProposal(_ context.Context, _ [fieldparams.BLSPubkeyLength]byte) (primitives.Slot, bool, error) {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
// ProposalHistoryForPubKey returns the proposal history for a given public key.
|
||||
func (s *Store) ProposalHistoryForPubKey(_ context.Context, publicKey [fieldparams.BLSPubkeyLength]byte) ([]*common.Proposal, error) {
|
||||
// Get validator slashing protection.
|
||||
validatorSlashingProtection, err := s.validatorSlashingProtection(publicKey)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not get validator slashing protection")
|
||||
}
|
||||
|
||||
// If there is no validator slashing protection or proposed block, return an empty slice.
|
||||
if validatorSlashingProtection == nil || validatorSlashingProtection.LatestSignedBlockSlot == nil {
|
||||
return []*common.Proposal{}, nil
|
||||
}
|
||||
|
||||
// Return the (unique) proposal history.
|
||||
return []*common.Proposal{
|
||||
{
|
||||
Slot: primitives.Slot(*validatorSlashingProtection.LatestSignedBlockSlot),
|
||||
},
|
||||
}, nil
|
||||
}
|
||||
|
||||
// ProposalHistoryForSlot is implemented only to satisfy the interface.
|
||||
func (*Store) ProposalHistoryForSlot(_ context.Context, _ [fieldparams.BLSPubkeyLength]byte, _ primitives.Slot) ([fieldparams.RootLength]byte, bool, bool, error) {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
// SaveProposalHistoryForSlot checks if the incoming proposal is valid regarding EIP-3076 minimal slashing protection.
|
||||
// If so, it updates the database with the incoming slot, and returns nil.
|
||||
// If not, it does not modify the database and returns an error.
|
||||
func (s *Store) SaveProposalHistoryForSlot(
|
||||
_ context.Context,
|
||||
pubKey [fieldparams.BLSPubkeyLength]byte,
|
||||
slot primitives.Slot,
|
||||
_ []byte,
|
||||
) error {
|
||||
// Get validator slashing protection.
|
||||
validatorSlashingProtection, err := s.validatorSlashingProtection(pubKey)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not get validator slashing protection")
|
||||
}
|
||||
|
||||
// Convert the slot to uint64.
|
||||
slotUInt64 := uint64(slot)
|
||||
|
||||
if validatorSlashingProtection == nil {
|
||||
// If there is no validator slashing protection, create one
|
||||
validatorSlashingProtection = &ValidatorSlashingProtection{
|
||||
LatestSignedBlockSlot: &slotUInt64,
|
||||
}
|
||||
|
||||
// Save the validator slashing protection.
|
||||
if err := s.saveValidatorSlashingProtection(pubKey, validatorSlashingProtection); err != nil {
|
||||
return errors.Wrap(err, "could not save validator slashing protection")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
if validatorSlashingProtection.LatestSignedBlockSlot == nil {
|
||||
// If there is no latest signed block slot, update it.
|
||||
validatorSlashingProtection.LatestSignedBlockSlot = &slotUInt64
|
||||
|
||||
// Save the validator slashing protection.
|
||||
if err := s.saveValidatorSlashingProtection(pubKey, validatorSlashingProtection); err != nil {
|
||||
return errors.Wrap(err, "could not save validator slashing protection")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Based on EIP-3076 (minimal database), validator should refuse to sign any proposal
|
||||
// with slot less than or equal to the latest signed block slot in the DB.
|
||||
if slotUInt64 <= *validatorSlashingProtection.LatestSignedBlockSlot {
|
||||
return errors.Errorf(
|
||||
"could not sign proposal with slot lower than or equal to recorded slot, %d <= %d",
|
||||
slot,
|
||||
*validatorSlashingProtection.LatestSignedBlockSlot,
|
||||
)
|
||||
}
|
||||
|
||||
// Update the latest signed block slot.
|
||||
validatorSlashingProtection.LatestSignedBlockSlot = &slotUInt64
|
||||
|
||||
// Save the validator slashing protection.
|
||||
if err := s.saveValidatorSlashingProtection(pubKey, validatorSlashingProtection); err != nil {
|
||||
return errors.Wrap(err, "could not save validator slashing protection")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// ProposedPublicKeys returns the list of public keys we have in the database.
// To be consistent with the complete BoltDB implementation, public keys returned by
// this function have not necessarily proposed a block.
|
||||
func (s *Store) ProposedPublicKeys(_ context.Context) ([][fieldparams.BLSPubkeyLength]byte, error) {
|
||||
return s.publicKeys()
|
||||
}
|
||||
|
||||
// SlashableProposalCheck checks if a block proposal is slashable by comparing it with the
// block proposal history for the given public key in our minimal slashing protection database defined by EIP-3076.
// If it is not, it updates the database.
|
||||
func (s *Store) SlashableProposalCheck(
|
||||
ctx context.Context,
|
||||
pubKey [fieldparams.BLSPubkeyLength]byte,
|
||||
signedBlock interfaces.ReadOnlySignedBeaconBlock,
|
||||
signingRoot [fieldparams.RootLength]byte,
|
||||
emitAccountMetrics bool,
|
||||
validatorProposeFailVec *prometheus.CounterVec,
|
||||
) error {
|
||||
// Check if the proposal is potentially slashable regarding EIP-3076 minimal conditions.
|
||||
// If not, save the new proposal into the database.
|
||||
if err := s.SaveProposalHistoryForSlot(ctx, pubKey, signedBlock.Block().Slot(), signingRoot[:]); err != nil {
|
||||
if strings.Contains(err.Error(), "could not sign proposal") {
|
||||
return errors.Wrapf(err, common.FailedBlockSignLocalErr)
|
||||
}
|
||||
|
||||
return errors.Wrap(err, "failed to save updated proposal history")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
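Editorial aside (not part of the change set): the minimal protection rule implemented above can be exercised directly. The sketch below assumes only what this file shows: NewStore accepts a directory and an optional config, SaveProposalHistoryForSlot rejects any slot lower than or equal to the recorded one, and the package import path follows the file location validator/db/filesystem. The public key and directory are placeholders.

package main

import (
	"context"
	"fmt"
	"os"

	fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
	"github.com/prysmaticlabs/prysm/v5/validator/db/filesystem"
)

func main() {
	ctx := context.Background()

	// Throwaway directory and zero public key, for illustration only.
	dir, err := os.MkdirTemp("", "minimal-protection")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)
	var pubkey [fieldparams.BLSPubkeyLength]byte

	store, err := filesystem.NewStore(dir, nil)
	if err != nil {
		panic(err)
	}

	// Signing slot 42 succeeds and records it as the latest signed block slot.
	if err := store.SaveProposalHistoryForSlot(ctx, pubkey, primitives.Slot(42), nil); err != nil {
		panic(err)
	}

	// Re-signing slot 42 (or anything lower) is rejected: the minimal database only
	// keeps the latest signed slot and refuses any slot lower than or equal to it.
	err = store.SaveProposalHistoryForSlot(ctx, pubkey, primitives.Slot(42), nil)
	fmt.Println(err) // could not sign proposal with slot lower than or equal to recorded slot, 42 <= 42

	// Signing slot 43 advances the watermark and succeeds.
	if err := store.SaveProposalHistoryForSlot(ctx, pubkey, primitives.Slot(43), nil); err != nil {
		panic(err)
	}
}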
|
||||
validator/db/filesystem/proposer_protection_test.go (new file, 341 lines)
@@ -0,0 +1,341 @@
|
||||
package filesystem
|
||||
|
||||
import (
|
||||
"context"
|
||||
"testing"
|
||||
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/config/params"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/require"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/util"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/common"
|
||||
)
|
||||
|
||||
func TestStore_ProposalHistoryForPubKey(t *testing.T) {
|
||||
var slot uint64 = 42
|
||||
ctx := context.Background()
|
||||
|
||||
for _, tt := range []struct {
|
||||
name string
|
||||
validatorSlashingProtection *ValidatorSlashingProtection
|
||||
expectedProposals []*common.Proposal
|
||||
}{
|
||||
{
|
||||
name: "validatorSlashingProtection is nil",
|
||||
validatorSlashingProtection: nil,
|
||||
expectedProposals: []*common.Proposal{},
|
||||
},
|
||||
{
|
||||
name: "validatorSlashingProtection.LatestSignedBlockSlot is nil",
|
||||
validatorSlashingProtection: &ValidatorSlashingProtection{LatestSignedBlockSlot: nil},
|
||||
expectedProposals: []*common.Proposal{},
|
||||
},
|
||||
{
|
||||
name: "validatorSlashingProtection.LatestSignedBlockSlot is something",
|
||||
validatorSlashingProtection: &ValidatorSlashingProtection{LatestSignedBlockSlot: &slot},
|
||||
expectedProposals: []*common.Proposal{
|
||||
{
|
||||
Slot: primitives.Slot(slot),
|
||||
},
|
||||
},
|
||||
},
|
||||
} {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
// Get a database path.
|
||||
databasePath := t.TempDir()
|
||||
|
||||
// Create a public key.
|
||||
pubkey := getPubKeys(t, 1)[0]
|
||||
|
||||
// Create a new store.
|
||||
store, err := NewStore(databasePath, nil)
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// Set the validator slashing protection.
|
||||
err = store.saveValidatorSlashingProtection(pubkey, tt.validatorSlashingProtection)
|
||||
require.NoError(t, err, "saveValidatorSlashingProtection should not return an error")
|
||||
|
||||
// Get the proposal history for the public key.
|
||||
actualProposals, err := store.ProposalHistoryForPubKey(ctx, pubkey)
|
||||
require.NoError(t, err, "ProposalHistoryForPubKey should not return an error")
|
||||
require.DeepEqual(t, tt.expectedProposals, actualProposals, "ProposalHistoryForPubKey should return the expected proposals")
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestStore_SaveProposalHistoryForSlot(t *testing.T) {
|
||||
var (
|
||||
slot41 uint64 = 41
|
||||
slot42 uint64 = 42
|
||||
slot43 uint64 = 43
|
||||
)
|
||||
|
||||
ctx := context.Background()
|
||||
|
||||
for _, tt := range []struct {
|
||||
name string
|
||||
initialValidatorSlashingProtection *ValidatorSlashingProtection
|
||||
slot uint64
|
||||
expectedValidatorSlashingProtection ValidatorSlashingProtection
|
||||
expectedError string
|
||||
}{
|
||||
{
|
||||
name: "validatorSlashingProtection is nil",
|
||||
initialValidatorSlashingProtection: nil,
|
||||
slot: slot42,
|
||||
expectedValidatorSlashingProtection: ValidatorSlashingProtection{LatestSignedBlockSlot: &slot42},
|
||||
expectedError: "",
|
||||
},
|
||||
{
|
||||
name: "validatorSlashingProtection.LatestSignedBlockSlot is nil",
|
||||
initialValidatorSlashingProtection: &ValidatorSlashingProtection{LatestSignedBlockSlot: nil},
|
||||
slot: slot42,
|
||||
expectedValidatorSlashingProtection: ValidatorSlashingProtection{LatestSignedBlockSlot: &slot42},
|
||||
expectedError: "",
|
||||
},
|
||||
{
|
||||
name: "validatorSlashingProtection.LatestSignedBlockSlot is higher than the incoming slot",
|
||||
initialValidatorSlashingProtection: &ValidatorSlashingProtection{LatestSignedBlockSlot: &slot42},
|
||||
slot: slot41,
|
||||
expectedValidatorSlashingProtection: ValidatorSlashingProtection{LatestSignedBlockSlot: &slot42},
|
||||
expectedError: "could not sign proposal with slot lower than or equal to recorded slot",
|
||||
},
|
||||
{
|
||||
name: "validatorSlashingProtection.LatestSignedBlockSlot is equal to the incoming slot",
|
||||
initialValidatorSlashingProtection: &ValidatorSlashingProtection{LatestSignedBlockSlot: &slot42},
|
||||
slot: slot42,
|
||||
expectedValidatorSlashingProtection: ValidatorSlashingProtection{LatestSignedBlockSlot: &slot42},
|
||||
expectedError: "could not sign proposal with slot lower than or equal to recorded slot",
|
||||
},
|
||||
{
|
||||
name: "validatorSlashingProtection.LatestSignedBlockSlot is lower than the incoming slot",
|
||||
initialValidatorSlashingProtection: &ValidatorSlashingProtection{LatestSignedBlockSlot: &slot42},
|
||||
slot: slot43,
|
||||
expectedValidatorSlashingProtection: ValidatorSlashingProtection{LatestSignedBlockSlot: &slot43},
|
||||
expectedError: "",
|
||||
},
|
||||
} {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
// Get a database path.
|
||||
databasePath := t.TempDir()
|
||||
|
||||
// Create a public key.
|
||||
pubkey := getPubKeys(t, 1)[0]
|
||||
|
||||
// Create a new store.
|
||||
store, err := NewStore(databasePath, nil)
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// Set the initial validator slashing protection.
|
||||
err = store.saveValidatorSlashingProtection(pubkey, tt.initialValidatorSlashingProtection)
|
||||
require.NoError(t, err, "saveValidatorSlashingProtection should not return an error")
|
||||
|
||||
// Attempt to save the proposal history for the public key.
|
||||
err = store.SaveProposalHistoryForSlot(ctx, pubkey, primitives.Slot(tt.slot), nil)
|
||||
if len(tt.expectedError) > 0 {
|
||||
require.ErrorContains(t, tt.expectedError, err, "SaveProposalHistoryForSlot should return the expected error")
|
||||
} else {
|
||||
require.NoError(t, err, "SaveProposalHistoryForSlot should not return an error")
|
||||
}
|
||||
|
||||
// Get the final validator slashing protection.
|
||||
actualValidatorSlashingProtection, err := store.validatorSlashingProtection(pubkey)
|
||||
require.NoError(t, err, "validatorSlashingProtection should not return an error")
|
||||
|
||||
// Check the proposal history.
|
||||
require.DeepEqual(t, tt.expectedValidatorSlashingProtection, *actualValidatorSlashingProtection, "validatorSlashingProtection should be the expected one")
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestStore_ProposedPublicKeys(t *testing.T) {
|
||||
// We get a database path
|
||||
databasePath := t.TempDir()
|
||||
|
||||
// We create some pubkeys
|
||||
pubkeys := getPubKeys(t, 5)
|
||||
|
||||
// We create a new store
|
||||
s, err := NewStore(databasePath, &Config{PubKeys: pubkeys})
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// We check the public keys
|
||||
expected := pubkeys
|
||||
actual, err := s.ProposedPublicKeys(context.Background())
|
||||
require.NoError(t, err, "publicKeys should not return an error")
|
||||
|
||||
// We cannot compare the slices directly because the order is not guaranteed,
|
||||
// so we compare sets instead.
|
||||
expectedSet := make(map[[fieldparams.BLSPubkeyLength]byte]bool)
|
||||
for _, pubkey := range expected {
|
||||
expectedSet[pubkey] = true
|
||||
}
|
||||
|
||||
actualSet := make(map[[fieldparams.BLSPubkeyLength]byte]bool)
|
||||
for _, pubkey := range actual {
|
||||
actualSet[pubkey] = true
|
||||
}
|
||||
|
||||
require.DeepEqual(t, expectedSet, actualSet)
|
||||
}
|
||||
|
||||
func Test_slashableProposalCheck_PreventsLowerThanMinProposal(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
// We get a database path
|
||||
databasePath := t.TempDir()
|
||||
|
||||
// We create some pubkeys
|
||||
pubkeys := getPubKeys(t, 1)
|
||||
pubkey := pubkeys[0]
|
||||
|
||||
// We create a new store
|
||||
s, err := NewStore(databasePath, &Config{PubKeys: pubkeys})
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
lowestSignedSlot := primitives.Slot(10)
|
||||
|
||||
// We save a proposal at the lowest signed slot in the DB.
|
||||
err = s.SaveProposalHistoryForSlot(ctx, pubkey, lowestSignedSlot, []byte{1})
|
||||
require.NoError(t, err)
|
||||
|
||||
// We expect the same block with a slot lower than the lowest
|
||||
// signed slot to fail validation.
|
||||
blk := ðpb.SignedBeaconBlock{
|
||||
Block: ðpb.BeaconBlock{
|
||||
Slot: lowestSignedSlot - 1,
|
||||
ProposerIndex: 0,
|
||||
Body: ðpb.BeaconBlockBody{},
|
||||
},
|
||||
Signature: params.BeaconConfig().EmptySignature[:],
|
||||
}
|
||||
wsb, err := blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
err = s.SlashableProposalCheck(context.Background(), pubkey, wsb, [32]byte{4}, false, nil)
|
||||
require.ErrorContains(t, common.FailedBlockSignLocalErr, err)
|
||||
|
||||
// With minimal slashing protection, we expect the same block with a slot equal to the
// lowest signed slot to fail validation, even if signing roots are equal.
|
||||
blk = ðpb.SignedBeaconBlock{
|
||||
Block: ðpb.BeaconBlock{
|
||||
Slot: lowestSignedSlot,
|
||||
ProposerIndex: 0,
|
||||
Body: ðpb.BeaconBlockBody{},
|
||||
},
|
||||
Signature: params.BeaconConfig().EmptySignature[:],
|
||||
}
|
||||
wsb, err = blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
err = s.SlashableProposalCheck(context.Background(), pubkey, wsb, [32]byte{1}, false, nil)
|
||||
require.ErrorContains(t, common.FailedBlockSignLocalErr, err)
|
||||
|
||||
// We expect the same block with a slot equal to the lowest
|
||||
// signed slot to fail validation if signing roots are different.
|
||||
wsb, err = blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
err = s.SlashableProposalCheck(context.Background(), pubkey, wsb, [32]byte{4}, false, nil)
|
||||
require.ErrorContains(t, common.FailedBlockSignLocalErr, err)
|
||||
|
||||
// We expect the same block with a slot > than the lowest
|
||||
// signed slot to pass validation.
|
||||
blk = ðpb.SignedBeaconBlock{
|
||||
Block: ðpb.BeaconBlock{
|
||||
Slot: lowestSignedSlot + 1,
|
||||
ProposerIndex: 0,
|
||||
Body: ðpb.BeaconBlockBody{},
|
||||
},
|
||||
Signature: params.BeaconConfig().EmptySignature[:],
|
||||
}
|
||||
|
||||
wsb, err = blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
err = s.SlashableProposalCheck(context.Background(), pubkey, wsb, [32]byte{3}, false, nil)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
func Test_slashableProposalCheck(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
// We get a database path
|
||||
databasePath := t.TempDir()
|
||||
|
||||
// We create some pubkeys
|
||||
pubkeys := getPubKeys(t, 1)
|
||||
pubkey := pubkeys[0]
|
||||
|
||||
// We create a new store
|
||||
s, err := NewStore(databasePath, &Config{PubKeys: pubkeys})
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
blk := util.HydrateSignedBeaconBlock(ðpb.SignedBeaconBlock{
|
||||
Block: ðpb.BeaconBlock{
|
||||
Slot: 10,
|
||||
ProposerIndex: 0,
|
||||
Body: ðpb.BeaconBlockBody{},
|
||||
},
|
||||
Signature: params.BeaconConfig().EmptySignature[:],
|
||||
})
|
||||
|
||||
// We save a proposal at slot 1 as our lowest proposal.
|
||||
err = s.SaveProposalHistoryForSlot(ctx, pubkey, 1, []byte{1})
|
||||
require.NoError(t, err)
|
||||
|
||||
// We save a proposal at slot 10 with a dummy signing root.
|
||||
dummySigningRoot := [32]byte{1}
|
||||
err = s.SaveProposalHistoryForSlot(ctx, pubkey, 10, dummySigningRoot[:])
|
||||
require.NoError(t, err)
|
||||
sBlock, err := blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
|
||||
// We expect the same block sent out to be slashable.
|
||||
err = s.SlashableProposalCheck(context.Background(), pubkey, sBlock, dummySigningRoot, false, nil)
|
||||
require.ErrorContains(t, common.FailedBlockSignLocalErr, err)
|
||||
|
||||
// We expect the same block sent out with a different signing root to be slashable.
|
||||
err = s.SlashableProposalCheck(context.Background(), pubkey, sBlock, [32]byte{2}, false, nil)
|
||||
require.ErrorContains(t, common.FailedBlockSignLocalErr, err)
|
||||
|
||||
// We save a proposal at slot 11 with a nil signing root.
|
||||
blk.Block.Slot = 11
|
||||
sBlock, err = blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
err = s.SaveProposalHistoryForSlot(ctx, pubkey, blk.Block.Slot, nil)
|
||||
require.NoError(t, err)
|
||||
|
||||
// We expect the same block sent out to return a slashable error even
// if we had a nil signing root stored in the database.
|
||||
err = s.SlashableProposalCheck(context.Background(), pubkey, sBlock, [32]byte{2}, false, nil)
|
||||
require.ErrorContains(t, common.FailedBlockSignLocalErr, err)
|
||||
|
||||
// With the minimal slashing protection database, a block with a slot lower than the
// latest signed slot fails validation even if we have no proposal history for that slot.
|
||||
blk.Block.Slot = 9
|
||||
sBlock, err = blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
err = s.SlashableProposalCheck(context.Background(), pubkey, sBlock, [32]byte{3}, false, nil)
|
||||
require.ErrorContains(t, common.FailedBlockSignLocalErr, err)
|
||||
}
|
||||
|
||||
func Test_slashableProposalCheck_RemoteProtection(t *testing.T) {
|
||||
// We get a database path
|
||||
databasePath := t.TempDir()
|
||||
|
||||
// We create some pubkeys
|
||||
pubkeys := getPubKeys(t, 1)
|
||||
pubkey := pubkeys[0]
|
||||
|
||||
// We create a new store
|
||||
s, err := NewStore(databasePath, &Config{PubKeys: pubkeys})
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
blk := util.NewBeaconBlock()
|
||||
blk.Block.Slot = 10
|
||||
sBlock, err := blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = s.SlashableProposalCheck(context.Background(), pubkey, sBlock, [32]byte{2}, false, nil)
|
||||
require.NoError(t, err, "Expected allowed block not to throw error")
|
||||
}
|
||||
validator/db/filesystem/proposer_settings.go (new file, 94 lines)
@@ -0,0 +1,94 @@
|
||||
package filesystem
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
|
||||
"github.com/prysmaticlabs/prysm/v5/config/proposer"
|
||||
)
|
||||
|
||||
// ErrNoProposerSettingsFound is an error thrown when no settings are found.
|
||||
var ErrNoProposerSettingsFound = errors.New("no proposer settings found in bucket")
|
||||
|
||||
// ProposerSettings returns the proposer settings.
|
||||
func (s *Store) ProposerSettings(_ context.Context) (*proposer.Settings, error) {
|
||||
// Get configuration
|
||||
configuration, err := s.configuration()
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not get configuration")
|
||||
}
|
||||
|
||||
// Return an error if no proposer settings are stored.
|
||||
if configuration == nil || configuration.ProposerSettings == nil {
|
||||
return nil, ErrNoProposerSettingsFound
|
||||
}
|
||||
|
||||
// Convert proposer settings to validator service config.
|
||||
proposerSettings, err := proposer.SettingFromConsensus(configuration.ProposerSettings)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not convert proposer settings")
|
||||
}
|
||||
|
||||
return proposerSettings, nil
|
||||
}
|
||||
|
||||
// ProposerSettingsExists returns true if proposer settings exist, false otherwise.
|
||||
func (s *Store) ProposerSettingsExists(_ context.Context) (bool, error) {
|
||||
// Get configuration.
|
||||
configuration, err := s.configuration()
|
||||
if err != nil {
|
||||
return false, errors.Wrap(err, "could not get configuration")
|
||||
}
|
||||
|
||||
// If configuration is nil, return false.
|
||||
if configuration == nil {
|
||||
return false, nil
|
||||
}
|
||||
|
||||
// Return true if proposer settings exist, false otherwise.
|
||||
exists := configuration.ProposerSettings != nil
|
||||
return exists, nil
|
||||
}
|
||||
|
||||
// SaveProposerSettings saves the proposer settings.
|
||||
func (s *Store) SaveProposerSettings(_ context.Context, proposerSettings *proposer.Settings) error {
|
||||
// Check if there is something to save.
|
||||
if !proposerSettings.ShouldBeSaved() {
|
||||
log.Warn("proposer settings are empty, nothing has been saved")
|
||||
return nil
|
||||
}
|
||||
|
||||
// Convert proposer settings to payload.
|
||||
proposerSettingsPayload := proposerSettings.ToConsensus()
|
||||
|
||||
// Get configuration.
|
||||
configuration, err := s.configuration()
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not get configuration")
|
||||
}
|
||||
|
||||
if configuration == nil {
|
||||
// If configuration is nil, create new config.
|
||||
configuration = &Configuration{
|
||||
ProposerSettings: proposerSettingsPayload,
|
||||
}
|
||||
|
||||
// Save the configuration.
|
||||
if err := s.saveConfiguration(configuration); err != nil {
|
||||
return errors.Wrap(err, "could not save configuration")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Modify the value of proposer settings.
|
||||
configuration.ProposerSettings = proposerSettingsPayload
|
||||
|
||||
// Save the configuration.
|
||||
if err := s.saveConfiguration(configuration); err != nil {
|
||||
return errors.Wrap(err, "could not save configuration")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
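Editorial aside (not part of the change set): a sketch of round-tripping proposer settings through this store. It relies only on APIs shown in this file and on the proposer.Settings shape used in the test file below; the fee recipient value, directory, and filesystem package import path are illustrative assumptions.

package main

import (
	"context"
	"errors"
	"fmt"
	"os"

	"github.com/ethereum/go-ethereum/common/hexutil"
	fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
	"github.com/prysmaticlabs/prysm/v5/config/proposer"
	"github.com/prysmaticlabs/prysm/v5/validator/db/filesystem"
)

func main() {
	ctx := context.Background()
	dir, err := os.MkdirTemp("", "proposer-settings")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	store, err := filesystem.NewStore(dir, nil)
	if err != nil {
		panic(err)
	}

	// Before anything is saved, ProposerSettings returns ErrNoProposerSettingsFound.
	if _, err := store.ProposerSettings(ctx); errors.Is(err, filesystem.ErrNoProposerSettingsFound) {
		fmt.Println("no proposer settings stored yet")
	}

	// Build a minimal settings object containing only a default fee recipient.
	var feeRecipient [fieldparams.FeeRecipientLength]byte
	recipientBytes, err := hexutil.Decode("0xC771172AE08B5FC37B3AC3D445225928DE883876")
	if err != nil {
		panic(err)
	}
	copy(feeRecipient[:], recipientBytes)

	settings := &proposer.Settings{
		DefaultConfig: &proposer.Option{
			FeeRecipientConfig: &proposer.FeeRecipientConfig{FeeRecipient: feeRecipient},
		},
	}
	if err := store.SaveProposerSettings(ctx, settings); err != nil {
		panic(err)
	}

	// The settings are persisted in the configuration file and can be read back.
	exists, err := store.ProposerSettingsExists(ctx)
	if err != nil {
		panic(err)
	}
	fmt.Println("proposer settings exist:", exists) // true
}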
|
||||
validator/db/filesystem/proposer_settings_test.go (new file, 229 lines)
@@ -0,0 +1,229 @@
|
||||
package filesystem
|
||||
|
||||
import (
|
||||
"context"
|
||||
"testing"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common/hexutil"
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/config/proposer"
|
||||
validatorpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/validator-client"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/require"
|
||||
)
|
||||
|
||||
func getPubkeyFromString(t *testing.T, pubkeyString string) [fieldparams.BLSPubkeyLength]byte {
|
||||
var pubkey [fieldparams.BLSPubkeyLength]byte
|
||||
pubkeyBytes, err := hexutil.Decode(pubkeyString)
|
||||
require.NoError(t, err, "hexutil.Decode should not return an error")
|
||||
copy(pubkey[:], pubkeyBytes)
|
||||
return pubkey
|
||||
}
|
||||
|
||||
func getFeeRecipientFromString(t *testing.T, feeRecipientString string) [fieldparams.FeeRecipientLength]byte {
|
||||
var feeRecipient [fieldparams.FeeRecipientLength]byte
|
||||
feeRecipientBytes, err := hexutil.Decode(feeRecipientString)
|
||||
require.NoError(t, err, "hexutil.Decode should not return an error")
|
||||
copy(feeRecipient[:], feeRecipientBytes)
|
||||
return feeRecipient
|
||||
}
|
||||
|
||||
func TestStore_ProposerSettings(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
pubkeyString := "0xb3533c600c6c22aa5177f295667deacffde243980d3c04da4057ab0941dcca1dff83ae8e6534bedd2d23d83446e604d6"
|
||||
customFeeRecipientString := "0xd4E96eF8eee8678dBFf4d535E033Ed1a4F7605b7"
|
||||
defaultFeeRecipientString := "0xC771172AE08B5FC37B3AC3D445225928DE883876"
|
||||
|
||||
pubkey := getPubkeyFromString(t, pubkeyString)
|
||||
customFeeRecipient := getFeeRecipientFromString(t, customFeeRecipientString)
|
||||
defaultFeeRecipient := getFeeRecipientFromString(t, defaultFeeRecipientString)
|
||||
|
||||
for _, tt := range []struct {
|
||||
name string
|
||||
configuration *Configuration
|
||||
expectedProposerSettings *proposer.Settings
|
||||
expectedError error
|
||||
}{
|
||||
{
|
||||
name: "configuration is nil",
|
||||
configuration: nil,
|
||||
expectedProposerSettings: nil,
|
||||
expectedError: ErrNoProposerSettingsFound,
|
||||
},
|
||||
{
|
||||
name: "configuration.ProposerSettings is nil",
|
||||
configuration: &Configuration{ProposerSettings: nil},
|
||||
expectedProposerSettings: nil,
|
||||
expectedError: ErrNoProposerSettingsFound,
|
||||
},
|
||||
{
|
||||
name: "configuration.ProposerSettings is something",
|
||||
configuration: &Configuration{
|
||||
ProposerSettings: &validatorpb.ProposerSettingsPayload{
|
||||
ProposerConfig: map[string]*validatorpb.ProposerOptionPayload{
|
||||
pubkeyString: &validatorpb.ProposerOptionPayload{
|
||||
FeeRecipient: customFeeRecipientString,
|
||||
},
|
||||
},
|
||||
DefaultConfig: &validatorpb.ProposerOptionPayload{
|
||||
FeeRecipient: defaultFeeRecipientString,
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedProposerSettings: &proposer.Settings{
|
||||
ProposeConfig: map[[fieldparams.BLSPubkeyLength]byte]*proposer.Option{
|
||||
pubkey: &proposer.Option{
|
||||
FeeRecipientConfig: &proposer.FeeRecipientConfig{
|
||||
FeeRecipient: customFeeRecipient,
|
||||
},
|
||||
},
|
||||
},
|
||||
DefaultConfig: &proposer.Option{
|
||||
FeeRecipientConfig: &proposer.FeeRecipientConfig{
|
||||
FeeRecipient: defaultFeeRecipient,
|
||||
},
|
||||
},
|
||||
},
|
||||
expectedError: nil,
|
||||
},
|
||||
} {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
// Create a new store.
|
||||
store, err := NewStore(t.TempDir(), nil)
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// Save configuration.
|
||||
err = store.saveConfiguration(tt.configuration)
|
||||
require.NoError(t, err, "saveConfiguration should not return an error")
|
||||
|
||||
// Get proposer settings.
|
||||
actualProposerSettings, err := store.ProposerSettings(ctx)
|
||||
if tt.expectedError != nil {
|
||||
require.ErrorIs(t, err, tt.expectedError, "ProposerSettings should return expected error")
|
||||
} else {
|
||||
require.NoError(t, err, "ProposerSettings should not return an error")
|
||||
}
|
||||
|
||||
require.DeepEqual(t, tt.expectedProposerSettings, actualProposerSettings, "ProposerSettings should return expected")
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestStore_ProposerSettingsExists(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
for _, tt := range []struct {
|
||||
name string
|
||||
configuration *Configuration
|
||||
expectedExists bool
|
||||
}{
|
||||
{
|
||||
name: "configuration is nil",
|
||||
configuration: nil,
|
||||
expectedExists: false,
|
||||
},
|
||||
{
|
||||
name: "configuration.ProposerSettings is nil",
|
||||
configuration: &Configuration{ProposerSettings: nil},
|
||||
expectedExists: false,
|
||||
},
|
||||
{
|
||||
name: "configuration.ProposerSettings is something",
|
||||
configuration: &Configuration{ProposerSettings: &validatorpb.ProposerSettingsPayload{}},
|
||||
expectedExists: true,
|
||||
},
|
||||
} {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
// Create a new store.
|
||||
store, err := NewStore(t.TempDir(), nil)
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// Save configuration.
|
||||
err = store.saveConfiguration(tt.configuration)
|
||||
require.NoError(t, err, "saveConfiguration should not return an error")
|
||||
|
||||
// Get proposer settings.
|
||||
actualExists, err := store.ProposerSettingsExists(ctx)
|
||||
require.NoError(t, err, "ProposerSettingsExists should not return an error")
|
||||
require.Equal(t, tt.expectedExists, actualExists, "ProposerSettingsExists should return the expected value")
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestStore_SaveProposerSettings(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
preExistingFeeRecipientString := "0xD871172AE08B5FC37B3AC3D445225928DE883876"
|
||||
incomingFeeRecipientString := "0xC771172AE08B5FC37B3AC3D445225928DE883876"
|
||||
|
||||
incomingFeeRecipient := getFeeRecipientFromString(t, incomingFeeRecipientString)
|
||||
|
||||
incomingProposerSettings := &proposer.Settings{
|
||||
DefaultConfig: &proposer.Option{
|
||||
FeeRecipientConfig: &proposer.FeeRecipientConfig{
|
||||
FeeRecipient: incomingFeeRecipient,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
expectedConfiguration := &Configuration{
|
||||
ProposerSettings: &validatorpb.ProposerSettingsPayload{
|
||||
ProposerConfig: map[string]*validatorpb.ProposerOptionPayload{},
|
||||
DefaultConfig: &validatorpb.ProposerOptionPayload{
|
||||
FeeRecipient: incomingFeeRecipientString,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for _, tt := range []struct {
|
||||
name string
|
||||
preExistingConfiguration *Configuration
|
||||
proposerSettings *proposer.Settings
|
||||
expectedConfiguration *Configuration
|
||||
}{
|
||||
{
|
||||
name: "proposerSettings is nil",
|
||||
preExistingConfiguration: nil,
|
||||
proposerSettings: nil,
|
||||
expectedConfiguration: nil,
|
||||
},
|
||||
{
|
||||
name: "configuration is nil",
|
||||
preExistingConfiguration: nil,
|
||||
proposerSettings: incomingProposerSettings,
|
||||
expectedConfiguration: expectedConfiguration,
|
||||
},
|
||||
{
|
||||
name: "configuration is something",
|
||||
preExistingConfiguration: &Configuration{
|
||||
ProposerSettings: &validatorpb.ProposerSettingsPayload{
|
||||
ProposerConfig: map[string]*validatorpb.ProposerOptionPayload{},
|
||||
DefaultConfig: &validatorpb.ProposerOptionPayload{
|
||||
FeeRecipient: preExistingFeeRecipientString,
|
||||
},
|
||||
},
|
||||
},
|
||||
proposerSettings: incomingProposerSettings,
|
||||
expectedConfiguration: expectedConfiguration,
|
||||
},
|
||||
} {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
// Create a new store.
|
||||
store, err := NewStore(t.TempDir(), nil)
|
||||
require.NoError(t, err, "NewStore should not return an error")
|
||||
|
||||
// Save pre-existing configuration.
|
||||
err = store.saveConfiguration(tt.preExistingConfiguration)
|
||||
require.NoError(t, err, "saveConfiguration should not return an error")
|
||||
|
||||
// Update proposer settings.
|
||||
err = store.SaveProposerSettings(ctx, tt.proposerSettings)
|
||||
require.NoError(t, err, "UpdateProposerSettingsDefault should not return an error")
|
||||
|
||||
// Get configuration.
|
||||
actualConfiguration, err := store.configuration()
|
||||
require.NoError(t, err, "configuration should not return an error")
|
||||
require.DeepEqual(t, tt.expectedConfiguration, actualConfiguration, "configuration should return expected")
|
||||
})
|
||||
}
|
||||
}
|
||||
@@ -4,17 +4,19 @@ go_library(
|
||||
name = "go_default_library",
|
||||
srcs = ["interface.go"],
|
||||
importpath = "github.com/prysmaticlabs/prysm/v5/validator/db/iface",
|
||||
# Other packages must use github.com/prysmaticlabs/prysm/v5/validator/db.Database alias.
|
||||
visibility = [
|
||||
"//cmd/validator/slashing-protection:__subpackages__",
|
||||
"//config:__subpackages__",
|
||||
"//validator:__subpackages__",
|
||||
],
|
||||
deps = [
|
||||
"//config/fieldparams:go_default_library",
|
||||
"//config/proposer:go_default_library",
|
||||
"//consensus-types/interfaces:go_default_library",
|
||||
"//consensus-types/primitives:go_default_library",
|
||||
"//monitoring/backup:go_default_library",
|
||||
"//proto/prysm/v1alpha1:go_default_library",
|
||||
"//validator/db/kv:go_default_library",
|
||||
"//validator/db/common:go_default_library",
|
||||
"@com_github_prometheus_client_golang//prometheus:go_default_library",
|
||||
],
|
||||
)
|
||||
|
||||
@@ -5,17 +5,16 @@ import (
|
||||
"context"
|
||||
"io"
|
||||
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/config/proposer"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
|
||||
"github.com/prysmaticlabs/prysm/v5/monitoring/backup"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/kv"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/common"
|
||||
)
|
||||
|
||||
// Ensure the kv store implements the interface.
|
||||
var _ = ValidatorDB(&kv.Store{})
|
||||
|
||||
// ValidatorDB defines the necessary methods for a Prysm validator DB.
|
||||
type ValidatorDB interface {
|
||||
io.Closer
|
||||
@@ -33,10 +32,18 @@ type ValidatorDB interface {
|
||||
// Proposer protection related methods.
|
||||
HighestSignedProposal(ctx context.Context, publicKey [fieldparams.BLSPubkeyLength]byte) (primitives.Slot, bool, error)
|
||||
LowestSignedProposal(ctx context.Context, publicKey [fieldparams.BLSPubkeyLength]byte) (primitives.Slot, bool, error)
|
||||
ProposalHistoryForPubKey(ctx context.Context, publicKey [fieldparams.BLSPubkeyLength]byte) ([]*kv.Proposal, error)
|
||||
ProposalHistoryForPubKey(ctx context.Context, publicKey [fieldparams.BLSPubkeyLength]byte) ([]*common.Proposal, error)
|
||||
ProposalHistoryForSlot(ctx context.Context, publicKey [fieldparams.BLSPubkeyLength]byte, slot primitives.Slot) ([32]byte, bool, bool, error)
|
||||
SaveProposalHistoryForSlot(ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte, slot primitives.Slot, signingRoot []byte) error
|
||||
ProposedPublicKeys(ctx context.Context) ([][fieldparams.BLSPubkeyLength]byte, error)
|
||||
SlashableProposalCheck(
|
||||
ctx context.Context,
|
||||
pubKey [fieldparams.BLSPubkeyLength]byte,
|
||||
signedBlock interfaces.ReadOnlySignedBeaconBlock,
|
||||
signingRoot [fieldparams.RootLength]byte,
|
||||
emitAccountMetrics bool,
|
||||
validatorProposeFailVec *prometheus.CounterVec,
|
||||
) error
|
||||
|
||||
// Attester protection related methods.
|
||||
// Methods to store and read blacklisted public keys from EIP-3076
|
||||
@@ -47,25 +54,32 @@ type ValidatorDB interface {
|
||||
LowestSignedTargetEpoch(ctx context.Context, publicKey [fieldparams.BLSPubkeyLength]byte) (primitives.Epoch, bool, error)
|
||||
LowestSignedSourceEpoch(ctx context.Context, publicKey [fieldparams.BLSPubkeyLength]byte) (primitives.Epoch, bool, error)
|
||||
AttestedPublicKeys(ctx context.Context) ([][fieldparams.BLSPubkeyLength]byte, error)
|
||||
CheckSlashableAttestation(
|
||||
ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte, signingRoot []byte, att *ethpb.IndexedAttestation,
|
||||
) (kv.SlashingKind, error)
|
||||
SlashableAttestationCheck(
|
||||
ctx context.Context, indexedAtt *ethpb.IndexedAttestation, pubKey [fieldparams.BLSPubkeyLength]byte,
|
||||
signingRoot32 [32]byte,
|
||||
emitAccountMetrics bool,
|
||||
validatorAttestFailVec *prometheus.CounterVec,
|
||||
) error
|
||||
SaveAttestationForPubKey(
|
||||
ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte, signingRoot [32]byte, att *ethpb.IndexedAttestation,
|
||||
ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte, signingRoot [fieldparams.RootLength]byte, att *ethpb.IndexedAttestation,
|
||||
) error
|
||||
SaveAttestationsForPubKey(
|
||||
ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte, signingRoots [][]byte, atts []*ethpb.IndexedAttestation,
|
||||
) error
|
||||
AttestationHistoryForPubKey(
|
||||
ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte,
|
||||
) ([]*kv.AttestationRecord, error)
|
||||
) ([]*common.AttestationRecord, error)
|
||||
|
||||
// Graffiti ordered index related methods
|
||||
SaveGraffitiOrderedIndex(ctx context.Context, index uint64) error
|
||||
GraffitiOrderedIndex(ctx context.Context, fileHash [32]byte) (uint64, error)
|
||||
GraffitiFileHash() ([32]byte, bool, error)
|
||||
|
||||
// ProposerSettings related methods
|
||||
ProposerSettings(context.Context) (*proposer.Settings, error)
|
||||
ProposerSettingsExists(ctx context.Context) (bool, error)
|
||||
SaveProposerSettings(ctx context.Context, settings *proposer.Settings) error
|
||||
|
||||
// EIP-3076 slashing protection related methods
|
||||
ImportStandardProtectionJSON(ctx context.Context, r io.Reader) error
|
||||
}
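Editorial aside (not part of the change set): because both the complete BoltDB store and the minimal filesystem store satisfy this interface, calling code can stay backend-agnostic. A minimal sketch, assuming only the interface above; the package and function names are placeholders.

package protection

import (
	"context"

	fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
	"github.com/prysmaticlabs/prysm/v5/validator/db/iface"
)

// checkAndRecordProposal validates a proposal against whichever slashing protection
// backend is plugged in and, when it is safe, records it. Metrics emission is disabled.
func checkAndRecordProposal(
	ctx context.Context,
	db iface.ValidatorDB,
	pubKey [fieldparams.BLSPubkeyLength]byte,
	block interfaces.ReadOnlySignedBeaconBlock,
	signingRoot [fieldparams.RootLength]byte,
) error {
	return db.SlashableProposalCheck(ctx, pubKey, block, signingRoot, false, nil)
}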
|
||||
|
||||
@@ -10,6 +10,7 @@ go_library(
|
||||
"eip_blacklisted_keys.go",
|
||||
"genesis.go",
|
||||
"graffiti.go",
|
||||
"import.go",
|
||||
"log.go",
|
||||
"migration.go",
|
||||
"migration_optimal_attester_protection.go",
|
||||
@@ -31,6 +32,7 @@ go_library(
|
||||
"//config/fieldparams:go_default_library",
|
||||
"//config/params:go_default_library",
|
||||
"//config/proposer:go_default_library",
|
||||
"//consensus-types/interfaces:go_default_library",
|
||||
"//consensus-types/primitives:go_default_library",
|
||||
"//encoding/bytesutil:go_default_library",
|
||||
"//io/file:go_default_library",
|
||||
@@ -40,6 +42,10 @@ go_library(
|
||||
"//proto/prysm/v1alpha1/slashings:go_default_library",
|
||||
"//proto/prysm/v1alpha1/validator-client:go_default_library",
|
||||
"//time/slots:go_default_library",
|
||||
"//validator/db/common:go_default_library",
|
||||
"//validator/db/iface:go_default_library",
|
||||
"//validator/helpers:go_default_library",
|
||||
"//validator/slashing-protection-history/format:go_default_library",
|
||||
"@com_github_pkg_errors//:go_default_library",
|
||||
"@com_github_prometheus_client_golang//prometheus:go_default_library",
|
||||
"@com_github_prysmaticlabs_prombbolt//:go_default_library",
|
||||
@@ -59,6 +65,7 @@ go_test(
|
||||
"eip_blacklisted_keys_test.go",
|
||||
"genesis_test.go",
|
||||
"graffiti_test.go",
|
||||
"import_test.go",
|
||||
"kv_test.go",
|
||||
"migration_optimal_attester_protection_test.go",
|
||||
"migration_source_target_epochs_bucket_test.go",
|
||||
@@ -71,13 +78,19 @@ go_test(
|
||||
"//config/fieldparams:go_default_library",
|
||||
"//config/params:go_default_library",
|
||||
"//config/proposer:go_default_library",
|
||||
"//consensus-types/blocks:go_default_library",
|
||||
"//consensus-types/primitives:go_default_library",
|
||||
"//consensus-types/validator:go_default_library",
|
||||
"//crypto/bls:go_default_library",
|
||||
"//crypto/hash:go_default_library",
|
||||
"//encoding/bytesutil:go_default_library",
|
||||
"//proto/prysm/v1alpha1:go_default_library",
|
||||
"//testing/assert:go_default_library",
|
||||
"//testing/require:go_default_library",
|
||||
"//testing/util:go_default_library",
|
||||
"//validator/db/common:go_default_library",
|
||||
"//validator/slashing-protection-history/format:go_default_library",
|
||||
"//validator/testing:go_default_library",
|
||||
"@com_github_ethereum_go_ethereum//common:go_default_library",
|
||||
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
|
||||
"@com_github_sirupsen_logrus//:go_default_library",
|
||||
|
||||
@@ -2,17 +2,20 @@ package kv
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/hex"
|
||||
"fmt"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
|
||||
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
|
||||
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/slashings"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/common"
|
||||
bolt "go.etcd.io/bbolt"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
@@ -24,35 +27,26 @@ type SlashingKind int
|
||||
// with the appropriate call context.
|
||||
type AttestationRecordSaveRequest struct {
|
||||
ctx context.Context
|
||||
record *AttestationRecord
|
||||
}
|
||||
|
||||
// AttestationRecord which can be represented by these simple values
|
||||
// for manipulation by database methods.
|
||||
type AttestationRecord struct {
|
||||
PubKey [fieldparams.BLSPubkeyLength]byte
|
||||
Source primitives.Epoch
|
||||
Target primitives.Epoch
|
||||
SigningRoot []byte
|
||||
record *common.AttestationRecord
|
||||
}
|
||||
|
||||
// NewQueuedAttestationRecords constructor allocates the underlying slice and
|
||||
// required attributes for managing pending attestation records.
|
||||
func NewQueuedAttestationRecords() *QueuedAttestationRecords {
|
||||
return &QueuedAttestationRecords{
|
||||
records: make([]*AttestationRecord, 0, attestationBatchCapacity),
|
||||
records: make([]*common.AttestationRecord, 0, attestationBatchCapacity),
|
||||
}
|
||||
}
|
||||
|
||||
// QueuedAttestationRecords is a thread-safe struct for managing a queue of
|
||||
// attestation records to save to validator database.
|
||||
type QueuedAttestationRecords struct {
|
||||
records []*AttestationRecord
|
||||
records []*common.AttestationRecord
|
||||
lock sync.RWMutex
|
||||
}
|
||||
|
||||
// Append a new attestation record to the queue.
|
||||
func (p *QueuedAttestationRecords) Append(ar *AttestationRecord) {
|
||||
func (p *QueuedAttestationRecords) Append(ar *common.AttestationRecord) {
|
||||
p.lock.Lock()
|
||||
defer p.lock.Unlock()
|
||||
p.records = append(p.records, ar)
|
||||
@@ -60,11 +54,11 @@ func (p *QueuedAttestationRecords) Append(ar *AttestationRecord) {
|
||||
|
||||
// Flush all records. This method returns the current pending records and resets
|
||||
// the pending records slice.
|
||||
func (p *QueuedAttestationRecords) Flush() []*AttestationRecord {
|
||||
func (p *QueuedAttestationRecords) Flush() []*common.AttestationRecord {
|
||||
p.lock.Lock()
|
||||
defer p.lock.Unlock()
|
||||
recs := p.records
|
||||
p.records = make([]*AttestationRecord, 0, attestationBatchCapacity)
|
||||
p.records = make([]*common.AttestationRecord, 0, attestationBatchCapacity)
|
||||
return recs
|
||||
}
|
||||
|
||||
@@ -93,15 +87,16 @@ const (
|
||||
)
|
||||
|
||||
var (
|
||||
doubleVoteMessage = "double vote found, existing attestation at target epoch %d with conflicting signing root %#x"
|
||||
surroundingVoteMessage = "attestation with (source %d, target %d) surrounds another with (source %d, target %d)"
|
||||
surroundedVoteMessage = "attestation with (source %d, target %d) is surrounded by another with (source %d, target %d)"
|
||||
doubleVoteMessage = "double vote found, existing attestation at target epoch %d with conflicting signing root %#x"
|
||||
surroundingVoteMessage = "attestation with (source %d, target %d) surrounds another with (source %d, target %d)"
|
||||
surroundedVoteMessage = "attestation with (source %d, target %d) is surrounded by another with (source %d, target %d)"
|
||||
failedAttLocalProtectionErr = "attempted to make slashable attestation, rejected by local slashing protection"
|
||||
)
|
||||
|
||||
// AttestationHistoryForPubKey retrieves a list of attestation records for data
|
||||
// we have stored in the database for the given validator public key.
|
||||
func (s *Store) AttestationHistoryForPubKey(ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte) ([]*AttestationRecord, error) {
|
||||
records := make([]*AttestationRecord, 0)
|
||||
func (s *Store) AttestationHistoryForPubKey(ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte) ([]*common.AttestationRecord, error) {
|
||||
records := make([]*common.AttestationRecord, 0)
|
||||
_, span := trace.StartSpan(ctx, "Validator.AttestationHistoryForPubKey")
|
||||
defer span.End()
|
||||
err := s.view(func(tx *bolt.Tx) error {
|
||||
@@ -121,7 +116,7 @@ func (s *Store) AttestationHistoryForPubKey(ctx context.Context, pubKey [fieldpa
|
||||
}
|
||||
sourceEpoch := bytesutil.BytesToEpochBigEndian(sourceBytes)
|
||||
for _, targetEpoch := range targetEpochs {
|
||||
record := &AttestationRecord{
|
||||
record := &common.AttestationRecord{
|
||||
PubKey: pubKey,
|
||||
Source: sourceEpoch,
|
||||
Target: targetEpoch,
|
||||
@@ -139,6 +134,79 @@ func (s *Store) AttestationHistoryForPubKey(ctx context.Context, pubKey [fieldpa
|
||||
return records, err
|
||||
}
|
||||
|
||||
// SlashableAttestationCheck checks if an attestation is slashable by comparing it with the attesting
|
||||
// history for the given public key in our complete slashing protection database defined by EIP-3076.
|
||||
// If it is not, it updates the database.
|
||||
func (s *Store) SlashableAttestationCheck(
|
||||
ctx context.Context,
|
||||
indexedAtt *ethpb.IndexedAttestation,
|
||||
pubKey [fieldparams.BLSPubkeyLength]byte,
|
||||
signingRoot32 [32]byte,
|
||||
emitAccountMetrics bool,
|
||||
validatorAttestFailVec *prometheus.CounterVec,
|
||||
) error {
|
||||
ctx, span := trace.StartSpan(ctx, "validator.postAttSignUpdate")
|
||||
defer span.End()
|
||||
|
||||
signingRoot := signingRoot32[:]
|
||||
|
||||
// Based on EIP-3076, validator should refuse to sign any attestation with source epoch less
|
||||
// than the minimum source epoch present in that signer’s attestations.
|
||||
lowestSourceEpoch, exists, err := s.LowestSignedSourceEpoch(ctx, pubKey)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if exists && indexedAtt.Data.Source.Epoch < lowestSourceEpoch {
|
||||
return fmt.Errorf(
|
||||
"could not sign attestation lower than lowest source epoch in db, %d < %d",
|
||||
indexedAtt.Data.Source.Epoch,
|
||||
lowestSourceEpoch,
|
||||
)
|
||||
}
|
||||
existingSigningRoot, err := s.SigningRootAtTargetEpoch(ctx, pubKey, indexedAtt.Data.Target.Epoch)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
signingRootsDiffer := slashings.SigningRootsDiffer(existingSigningRoot, signingRoot)
|
||||
|
||||
// Based on EIP-3076, validator should refuse to sign any attestation with target epoch less
|
||||
// than or equal to the minimum target epoch present in that signer’s attestations, except
|
||||
// if it is a repeat signing as determined by the signingRoot.
|
||||
lowestTargetEpoch, exists, err := s.LowestSignedTargetEpoch(ctx, pubKey)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if signingRootsDiffer && exists && indexedAtt.Data.Target.Epoch <= lowestTargetEpoch {
|
||||
return fmt.Errorf(
|
||||
"could not sign attestation lower than or equal to lowest target epoch in db if signing roots differ, %d <= %d",
|
||||
indexedAtt.Data.Target.Epoch,
|
||||
lowestTargetEpoch,
|
||||
)
|
||||
}
|
||||
fmtKey := "0x" + hex.EncodeToString(pubKey[:])
|
||||
slashingKind, err := s.CheckSlashableAttestation(ctx, pubKey, signingRoot, indexedAtt)
|
||||
if err != nil {
|
||||
if emitAccountMetrics {
|
||||
validatorAttestFailVec.WithLabelValues(fmtKey).Inc()
|
||||
}
|
||||
switch slashingKind {
|
||||
case DoubleVote:
|
||||
log.Warn("Attestation is slashable as it is a double vote")
|
||||
case SurroundingVote:
|
||||
log.Warn("Attestation is slashable as it is surrounding a previous attestation")
|
||||
case SurroundedVote:
|
||||
log.Warn("Attestation is slashable as it is surrounded by a previous attestation")
|
||||
}
|
||||
return errors.Wrap(err, failedAttLocalProtectionErr)
|
||||
}
|
||||
|
||||
if err := s.SaveAttestationForPubKey(ctx, pubKey, signingRoot32, indexedAtt); err != nil {
|
||||
return errors.Wrap(err, "could not save attestation history for validator public key")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
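// Editorial aside (not part of the change set): a worked example of the two EIP-3076
// bounds enforced above. Suppose the database currently holds a lowest signed source
// epoch of 10 and a lowest signed target epoch of 11. Then:
//   - an attestation with source epoch 9 is rejected, since 9 < 10;
//   - an attestation with source 10 and target 11 is rejected unless its signing root
//     matches the one already stored for target epoch 11 (a repeat signing);
//   - an attestation with source 10 and target 12 passes both bounds and is then
//     checked for double and surround votes by CheckSlashableAttestation below.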
|
||||
|
||||
// CheckSlashableAttestation verifies an incoming attestation is
|
||||
// not a double vote for a validator public key nor a surround vote.
|
||||
func (s *Store) CheckSlashableAttestation(
|
||||
@@ -200,7 +268,7 @@ func (s *Store) CheckSlashableAttestation(
|
||||
}
|
||||
|
||||
// Iterate from the back of the bucket since we are looking for target_epoch > att.target_epoch
|
||||
func (_ *Store) checkSurroundedVote(
|
||||
func (*Store) checkSurroundedVote(
|
||||
targetEpochsBucket *bolt.Bucket, att *ethpb.IndexedAttestation,
|
||||
) (SlashingKind, error) {
|
||||
c := targetEpochsBucket.Cursor()
|
||||
@@ -240,7 +308,7 @@ func (_ *Store) checkSurroundedVote(
|
||||
}
|
||||
|
||||
// Iterate from the back of the bucket since we are looking for source_epoch > att.source_epoch
|
||||
func (_ *Store) checkSurroundingVote(
|
||||
func (*Store) checkSurroundingVote(
|
||||
sourceEpochsBucket *bolt.Bucket, att *ethpb.IndexedAttestation,
|
||||
) (SlashingKind, error) {
|
||||
c := sourceEpochsBucket.Cursor()
|
||||
@@ -292,9 +360,9 @@ func (s *Store) SaveAttestationsForPubKey(
|
||||
len(atts),
|
||||
)
|
||||
}
|
||||
records := make([]*AttestationRecord, len(atts))
|
||||
records := make([]*common.AttestationRecord, len(atts))
|
||||
for i, a := range atts {
|
||||
records[i] = &AttestationRecord{
|
||||
records[i] = &common.AttestationRecord{
|
||||
PubKey: pubKey,
|
||||
Source: a.Data.Source.Epoch,
|
||||
Target: a.Data.Target.Epoch,
|
||||
@@ -307,13 +375,13 @@ func (s *Store) SaveAttestationsForPubKey(
|
||||
// SaveAttestationForPubKey saves an attestation for a validator public
|
||||
// key for local validator slashing protection.
|
||||
func (s *Store) SaveAttestationForPubKey(
|
||||
ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte, signingRoot [32]byte, att *ethpb.IndexedAttestation,
|
||||
ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte, signingRoot [fieldparams.RootLength]byte, att *ethpb.IndexedAttestation,
|
||||
) error {
|
||||
ctx, span := trace.StartSpan(ctx, "Validator.SaveAttestationForPubKey")
|
||||
defer span.End()
|
||||
s.batchedAttestationsChan <- &AttestationRecordSaveRequest{
|
||||
ctx: ctx,
|
||||
record: &AttestationRecord{
|
||||
record: &common.AttestationRecord{
|
||||
PubKey: pubKey,
|
||||
Source: att.Data.Source.Epoch,
|
||||
Target: att.Data.Target.Epoch,
|
||||
@@ -385,7 +453,7 @@ func (s *Store) batchAttestationWrites(ctx context.Context) {
|
||||
// and resets the list of batched attestations for future writes.
|
||||
// This function notifies all subscribers for flushed attestations
|
||||
// of the result of the save operation.
|
||||
func (s *Store) flushAttestationRecords(ctx context.Context, records []*AttestationRecord) {
|
||||
func (s *Store) flushAttestationRecords(ctx context.Context, records []*common.AttestationRecord) {
|
||||
ctx, span := trace.StartSpan(ctx, "validatorDB.flushAttestationRecords")
|
||||
defer span.End()
|
||||
|
||||
@@ -422,7 +490,7 @@ func (s *Store) flushAttestationRecords(ctx context.Context, records []*Attestat
|
||||
// Saves a list of attestation records to the database in a single boltDB
|
||||
// transaction to minimize write lock contention compared to doing them
|
||||
// all in individual, isolated boltDB transactions.
|
||||
func (s *Store) saveAttestationRecords(ctx context.Context, atts []*AttestationRecord) error {
|
||||
func (s *Store) saveAttestationRecords(ctx context.Context, atts []*common.AttestationRecord) error {
|
||||
_, span := trace.StartSpan(ctx, "Validator.saveAttestationRecords")
|
||||
defer span.End()
|
||||
return s.update(func(tx *bolt.Tx) error {
|
||||
|
||||
@@ -9,10 +9,12 @@ import (
|
||||
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
|
||||
"github.com/prysmaticlabs/prysm/v5/crypto/bls"
|
||||
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/assert"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/require"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/common"
|
||||
logTest "github.com/sirupsen/logrus/hooks/test"
|
||||
bolt "go.etcd.io/bbolt"
|
||||
)
|
||||
@@ -23,7 +25,7 @@ func TestPendingAttestationRecords_Flush(t *testing.T) {
|
||||
// Add 5 atts
|
||||
num := 5
|
||||
for i := 0; i < num; i++ {
|
||||
queue.Append(&AttestationRecord{
|
||||
queue.Append(&common.AttestationRecord{
|
||||
Target: primitives.Epoch(i),
|
||||
})
|
||||
}
|
||||
@@ -36,7 +38,7 @@ func TestPendingAttestationRecords_Flush(t *testing.T) {
|
||||
func TestPendingAttestationRecords_Len(t *testing.T) {
|
||||
queue := NewQueuedAttestationRecords()
|
||||
assert.Equal(t, queue.Len(), 0)
|
||||
queue.Append(&AttestationRecord{})
|
||||
queue.Append(&common.AttestationRecord{})
|
||||
assert.Equal(t, queue.Len(), 1)
|
||||
queue.Flush()
|
||||
assert.Equal(t, queue.Len(), 0)
|
||||
@@ -555,19 +557,6 @@ func benchCheckSurroundVote(
|
||||
}
|
||||
}
|
||||
|
||||
func createAttestation(source, target primitives.Epoch) *ethpb.IndexedAttestation {
|
||||
return ðpb.IndexedAttestation{
|
||||
Data: ðpb.AttestationData{
|
||||
Source: ðpb.Checkpoint{
|
||||
Epoch: source,
|
||||
},
|
||||
Target: ðpb.Checkpoint{
|
||||
Epoch: target,
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func TestStore_flushAttestationRecords_InProgress(t *testing.T) {
|
||||
s := &Store{}
|
||||
s.batchedAttestationsFlushInProgress.Set()
|
||||
@@ -576,3 +565,55 @@ func TestStore_flushAttestationRecords_InProgress(t *testing.T) {
|
||||
s.flushAttestationRecords(context.Background(), nil)
|
||||
assert.LogsContain(t, hook, "Attempted to flush attestation records when already in progress")
|
||||
}
|
||||
|
||||
func BenchmarkStore_SaveAttestationForPubKey(b *testing.B) {
|
||||
var wg sync.WaitGroup
|
||||
ctx := context.Background()
|
||||
|
||||
// Create pubkeys
|
||||
pubkeys := make([][fieldparams.BLSPubkeyLength]byte, 10)
|
||||
for i := range pubkeys {
|
||||
validatorKey, err := bls.RandKey()
|
||||
require.NoError(b, err, "RandKey should not return an error")
|
||||
|
||||
copy(pubkeys[i][:], validatorKey.PublicKey().Marshal())
|
||||
}
|
||||
|
||||
signingRoot := [32]byte{1}
|
||||
attestation := ðpb.IndexedAttestation{
|
||||
Data: ðpb.AttestationData{
|
||||
Source: ðpb.Checkpoint{
|
||||
Epoch: 42,
|
||||
},
|
||||
Target: ðpb.Checkpoint{
|
||||
Epoch: 43,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
validatorDB, err := NewKVStore(ctx, b.TempDir(), &Config{PubKeys: pubkeys})
|
||||
require.NoError(b, err)
|
||||
|
||||
for i := 0; i < b.N; i++ {
|
||||
b.StopTimer()
|
||||
err := validatorDB.ClearDB()
|
||||
require.NoError(b, err)
|
||||
|
||||
for _, pubkey := range pubkeys {
|
||||
wg.Add(1)
|
||||
|
||||
go func(pk [fieldparams.BLSPubkeyLength]byte) {
|
||||
defer wg.Done()
|
||||
|
||||
err := validatorDB.SaveAttestationForPubKey(ctx, pk, signingRoot, attestation)
|
||||
require.NoError(b, err)
|
||||
}(pubkey)
|
||||
}
|
||||
|
||||
b.StartTimer()
|
||||
wg.Wait()
|
||||
}
|
||||
|
||||
err = validatorDB.Close()
|
||||
require.NoError(b, err)
|
||||
}
|
||||
|
||||
@@ -9,6 +9,7 @@ import (
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/require"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/common"
|
||||
)
|
||||
|
||||
func TestStore_Backup(t *testing.T) {
|
||||
@@ -92,7 +93,7 @@ func TestStore_NestedBackup(t *testing.T) {
|
||||
|
||||
hist, err := backedDB.AttestationHistoryForPubKey(context.Background(), keys[0])
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, &AttestationRecord{
|
||||
require.DeepEqual(t, &common.AttestationRecord{
|
||||
PubKey: keys[0],
|
||||
Source: 10,
|
||||
Target: 0,
|
||||
@@ -101,7 +102,7 @@ func TestStore_NestedBackup(t *testing.T) {
|
||||
|
||||
hist, err = backedDB.AttestationHistoryForPubKey(context.Background(), keys[1])
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(t, &AttestationRecord{
|
||||
require.DeepEqual(t, &common.AttestationRecord{
|
||||
PubKey: keys[1],
|
||||
Source: 10,
|
||||
Target: 0,
|
||||
|
||||
@@ -17,6 +17,7 @@ import (
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/config/params"
|
||||
"github.com/prysmaticlabs/prysm/v5/io/file"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/iface"
|
||||
bolt "go.etcd.io/bbolt"
|
||||
)
|
||||
|
||||
@@ -108,6 +109,9 @@ func createBuckets(tx *bolt.Tx, buckets ...[]byte) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Ensure the kv store implements the interface.
|
||||
var _ = iface.ValidatorDB(&Store{})
|
||||
|
||||
// NewKVStore initializes a new boltDB key-value store at the directory
|
||||
// path specified, creates the kv-buckets based on the schema, and stores
|
||||
// an open connection db object as a property of the Store struct.
|
||||
@@ -145,8 +149,8 @@ func NewKVStore(ctx context.Context, dirPath string, config *Config) (*Store, er
|
||||
return createBuckets(
|
||||
tx,
|
||||
genesisInfoBucket,
|
||||
deprecatedAttestationHistoryBucket,
|
||||
historicProposalsBucket,
|
||||
deprecatedAttestationHistoryBucket,
|
||||
lowestSignedSourceBucket,
|
||||
lowestSignedTargetBucket,
|
||||
lowestSignedProposalsBucket,
|
||||
|
||||
@@ -37,3 +37,32 @@ func (s *Store) GraffitiOrderedIndex(_ context.Context, fileHash [32]byte) (uint
|
||||
})
|
||||
return orderedIndex, err
|
||||
}
|
||||
|
||||
// GraffitiFileHash fetches the graffiti file hash.
|
||||
func (s *Store) GraffitiFileHash() ([32]byte, bool, error) {
|
||||
// Define a default file hash.
|
||||
var fileHash [32]byte
|
||||
|
||||
exists := false
|
||||
|
||||
err := s.db.View(func(tx *bolt.Tx) error {
|
||||
// Get the graffiti bucket.
|
||||
bkt := tx.Bucket(graffitiBucket)
|
||||
|
||||
// Get the file hash.
|
||||
dbFileHash := bkt.Get(graffitiFileHashKey)
|
||||
|
||||
if dbFileHash == nil {
|
||||
// If the file hash is nil, return early.
|
||||
return nil
|
||||
}
|
||||
|
||||
// A DB file hash exists.
|
||||
exists = true
|
||||
copy(fileHash[:], dbFileHash)
|
||||
return nil
|
||||
})
|
||||
|
||||
// Return the file hash.
|
||||
return fileHash, exists, err
|
||||
}
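// Editorial aside (not part of the change set): GraffitiFileHash pairs with
// GraffitiOrderedIndex, which stores the file hash it is queried with, so a caller
// can detect whether the graffiti file changed since the last run. A sketch, where
// validatorDB and currentFileHash are placeholders:
//
//	storedHash, exists, err := validatorDB.GraffitiFileHash()
//	if err != nil {
//		return err
//	}
//	if !exists || storedHash != currentFileHash {
//		// The graffiti file is new or has changed; start from a fresh ordered index.
//	}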
|
||||
|
||||
@@ -58,3 +58,48 @@ func TestStore_GraffitiOrderedIndex_ReadAndWrite(t *testing.T) {
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestStore_GraffitiFileHash(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
// Creates database
|
||||
db := setupDB(t, [][fieldparams.BLSPubkeyLength]byte{})
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
write *[32]byte
|
||||
expectedExists bool
|
||||
expectedFileHash [32]byte
|
||||
}{
|
||||
{
|
||||
name: "empty",
|
||||
write: nil,
|
||||
expectedExists: false,
|
||||
expectedFileHash: [32]byte{0},
|
||||
},
|
||||
{
|
||||
name: "existing",
|
||||
write: &[32]byte{1},
|
||||
expectedExists: true,
|
||||
expectedFileHash: [32]byte{1},
|
||||
},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
if tt.write != nil {
|
||||
// Call to GraffitiOrderedIndex set a graffiti file hash.
|
||||
_, err := db.GraffitiOrderedIndex(ctx, *tt.write)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
// Retrieve the graffiti file hash.
|
||||
actualFileHash, actualExists, err := db.GraffitiFileHash()
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, tt.expectedExists, actualExists)
|
||||
|
||||
if tt.expectedExists {
|
||||
require.Equal(t, tt.expectedFileHash, actualFileHash)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
package history
|
||||
package kv
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
@@ -13,16 +13,16 @@ import (
|
||||
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1/slashings"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/kv"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/common"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/iface"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/helpers"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/slashing-protection-history/format"
|
||||
)
|
||||
|
||||
// ImportStandardProtectionJSON takes in EIP-3076 compliant JSON file used for slashing protection
|
||||
// by Ethereum validators and imports its data into Prysm's internal representation of slashing
|
||||
// protection in the validator client's database. For more information, see the EIP document here:
|
||||
// https://eips.ethereum.org/EIPS/eip-3076.
|
||||
func ImportStandardProtectionJSON(ctx context.Context, validatorDB db.Database, r io.Reader) error {
|
||||
// ImportStandardProtectionJSON takes in an EIP-3076 compliant JSON file used for slashing protection
|
||||
// by Ethereum validators and imports its data into Prysm's internal complete representation of slashing
|
||||
// protection in the validator client's database.
|
||||
func (s *Store) ImportStandardProtectionJSON(ctx context.Context, r io.Reader) error {
|
||||
encodedJSON, err := io.ReadAll(r)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not read slashing protection JSON file")
|
||||
@@ -39,7 +39,7 @@ func ImportStandardProtectionJSON(ctx context.Context, validatorDB db.Database,
|
||||
}
|
||||
|
||||
// We validate the `MetadataV0` field of the slashing protection JSON file.
|
||||
if err := validateMetadata(ctx, validatorDB, interchangeJSON); err != nil {
|
||||
if err := helpers.ValidateMetadata(ctx, s, interchangeJSON); err != nil {
|
||||
return errors.Wrap(err, "slashing protection JSON metadata was incorrect")
|
||||
}
|
||||
|
||||
@@ -55,12 +55,18 @@ func ImportStandardProtectionJSON(ctx context.Context, validatorDB db.Database,
|
||||
return errors.Wrap(err, "could not parse unique entries for attestations by public key")
|
||||
}
|
||||
|
||||
attestingHistoryByPubKey := make(map[[fieldparams.BLSPubkeyLength]byte][]*kv.AttestationRecord)
|
||||
proposalHistoryByPubKey := make(map[[fieldparams.BLSPubkeyLength]byte]kv.ProposalHistoryForPubkey)
|
||||
attestingHistoryByPubKey := make(map[[fieldparams.BLSPubkeyLength]byte][]*common.AttestationRecord)
|
||||
proposalHistoryByPubKey := make(map[[fieldparams.BLSPubkeyLength]byte]common.ProposalHistoryForPubkey)
|
||||
|
||||
bar := common.InitializeProgressBar(len(signedBlocksByPubKey), "Transform signed blocks:")
|
||||
|
||||
for pubKey, signedBlocks := range signedBlocksByPubKey {
|
||||
// Transform the processed signed blocks data from the JSON
|
||||
// Transform the processed signed blocks data from the JSON.
|
||||
// file into the internal Prysm representation of proposal history.
|
||||
if err := bar.Add(1); err != nil {
|
||||
log.WithError(err).Debug("Could not increase progress bar")
|
||||
}
|
||||
|
||||
proposalHistory, err := transformSignedBlocks(ctx, signedBlocks)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not parse signed blocks in JSON file for key %#x", pubKey)
|
||||
@@ -69,9 +75,14 @@ func ImportStandardProtectionJSON(ctx context.Context, validatorDB db.Database,
|
||||
proposalHistoryByPubKey[pubKey] = *proposalHistory
|
||||
}
|
||||
|
||||
bar = common.InitializeProgressBar(len(signedAttsByPubKey), "Transform signed attestations:")
|
||||
for pubKey, signedAtts := range signedAttsByPubKey {
|
||||
// Transform the processed signed attestation data from the JSON
|
||||
// Transform the processed signed attestation data from the JSON.
|
||||
// file into the internal Prysm representation of attesting history.
|
||||
if err := bar.Add(1); err != nil {
|
||||
log.WithError(err).Debug("Could not increase progress bar")
|
||||
}
|
||||
|
||||
historicalAtt, err := transformSignedAttestations(pubKey, signedAtts)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not parse signed attestations in JSON file for key %#x", pubKey)
|
||||
@@ -83,8 +94,7 @@ func ImportStandardProtectionJSON(ctx context.Context, validatorDB db.Database,
|
||||
// We validate and filter out public keys parsed from JSON to ensure we are
|
||||
// not importing those which are slashable with respect to other data within the same JSON.
|
||||
slashableProposerKeys := filterSlashablePubKeysFromBlocks(ctx, proposalHistoryByPubKey)
|
||||
|
||||
slashableAttesterKeys, err := filterSlashablePubKeysFromAttestations(ctx, validatorDB, attestingHistoryByPubKey)
|
||||
slashableAttesterKeys, err := filterSlashablePubKeysFromAttestations(ctx, s, attestingHistoryByPubKey)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not filter slashable attester public keys from JSON data")
|
||||
}
|
||||
@@ -94,107 +104,24 @@ func ImportStandardProtectionJSON(ctx context.Context, validatorDB db.Database,
|
||||
slashablePublicKeys = append(slashablePublicKeys, slashableProposerKeys...)
|
||||
slashablePublicKeys = append(slashablePublicKeys, slashableAttesterKeys...)
|
||||
|
||||
if err := validatorDB.SaveEIPImportBlacklistedPublicKeys(ctx, slashablePublicKeys); err != nil {
|
||||
if err := s.SaveEIPImportBlacklistedPublicKeys(ctx, slashablePublicKeys); err != nil {
|
||||
return errors.Wrap(err, "could not save slashable public keys to database")
|
||||
}
|
||||
|
||||
// We save the histories to disk as atomic operations, ensuring that this only occurs
|
||||
// until after we successfully parse all data from the JSON file. If there is any error
|
||||
// in parsing the JSON proposal and attesting histories, we will not reach this point.
|
||||
if err := saveProposals(ctx, proposalHistoryByPubKey, validatorDB); err != nil {
|
||||
if err := saveProposals(ctx, proposalHistoryByPubKey, s); err != nil {
|
||||
return errors.Wrap(err, "could not save proposals")
|
||||
}
|
||||
|
||||
if err := saveAttestations(ctx, attestingHistoryByPubKey, validatorDB); err != nil {
|
||||
if err := saveAttestations(ctx, attestingHistoryByPubKey, s); err != nil {
|
||||
return errors.Wrap(err, "could not save attestations")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
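// Hedged usage sketch (assumption, not part of this diff): feeding an
// interchange file on disk into the new method form. The file name is
// hypothetical; any io.Reader works. `validatorDB` stands for a *Store (or
// any iface.ValidatorDB exposing this method); assumes an `os` import.
f, err := os.Open("slashing-protection.json")
if err != nil {
	return errors.Wrap(err, "could not open interchange file")
}
defer func() {
	if closeErr := f.Close(); closeErr != nil {
		log.WithError(closeErr).Debug("Could not close interchange file")
	}
}()
if err := validatorDB.ImportStandardProtectionJSON(ctx, f); err != nil {
	return errors.Wrap(err, "could not import slashing protection data")
}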
|
||||
func saveProposals(ctx context.Context, proposalHistoryByPubKey map[[fieldparams.BLSPubkeyLength]byte]kv.ProposalHistoryForPubkey, validatorDB db.Database) error {
|
||||
for pubKey, proposalHistory := range proposalHistoryByPubKey {
|
||||
bar := initializeProgressBar(
|
||||
len(proposalHistory.Proposals),
|
||||
fmt.Sprintf("Importing proposals for validator public key %#x", bytesutil.Trunc(pubKey[:])),
|
||||
)
|
||||
|
||||
for _, proposal := range proposalHistory.Proposals {
|
||||
if err := bar.Add(1); err != nil {
|
||||
log.WithError(err).Debug("Could not increase progress bar")
|
||||
}
|
||||
|
||||
if err := validatorDB.SaveProposalHistoryForSlot(ctx, pubKey, proposal.Slot, proposal.SigningRoot); err != nil {
|
||||
return errors.Wrap(err, "could not save proposal history from imported JSON to database")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func saveAttestations(ctx context.Context, attestingHistoryByPubKey map[[fieldparams.BLSPubkeyLength]byte][]*kv.AttestationRecord, validatorDB db.Database) error {
|
||||
bar := initializeProgressBar(
|
||||
len(attestingHistoryByPubKey),
|
||||
"Importing attesting history for validator public keys",
|
||||
)
|
||||
|
||||
for pubKey, attestations := range attestingHistoryByPubKey {
|
||||
if err := bar.Add(1); err != nil {
|
||||
log.WithError(err).Debug("Could not increase progress bar")
|
||||
}
|
||||
|
||||
indexedAtts := make([]*ethpb.IndexedAttestation, len(attestations))
|
||||
signingRoots := make([][]byte, len(attestations))
|
||||
|
||||
for i, att := range attestations {
|
||||
indexedAtt := createAttestation(att.Source, att.Target)
|
||||
indexedAtts[i] = indexedAtt
|
||||
signingRoots[i] = att.SigningRoot
|
||||
}
|
||||
|
||||
if err := validatorDB.SaveAttestationsForPubKey(ctx, pubKey, signingRoots, indexedAtts); err != nil {
|
||||
return errors.Wrap(err, "could not save attestations from imported JSON to database")
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func validateMetadata(ctx context.Context, validatorDB db.Database, interchangeJSON *format.EIPSlashingProtectionFormat) error {
|
||||
// We need to ensure the version in the metadata field matches the one we support.
|
||||
version := interchangeJSON.Metadata.InterchangeFormatVersion
|
||||
if version != format.InterchangeFormatVersion {
|
||||
return fmt.Errorf(
|
||||
"slashing protection JSON version '%s' is not supported, wanted '%s'",
|
||||
version,
|
||||
format.InterchangeFormatVersion,
|
||||
)
|
||||
}
|
||||
|
||||
// We need to verify the genesis validators root matches that of our chain data, otherwise
|
||||
// the imported slashing protection JSON was created on a different chain.
|
||||
gvr, err := RootFromHex(interchangeJSON.Metadata.GenesisValidatorsRoot)
|
||||
if err != nil {
|
||||
return fmt.Errorf("%#x is not a valid root: %w", interchangeJSON.Metadata.GenesisValidatorsRoot, err)
|
||||
}
|
||||
dbGvr, err := validatorDB.GenesisValidatorsRoot(ctx)
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not retrieve genesis validators root to db")
|
||||
}
|
||||
if dbGvr == nil {
|
||||
if err = validatorDB.SaveGenesisValidatorsRoot(ctx, gvr[:]); err != nil {
|
||||
return errors.Wrap(err, "could not save genesis validators root to db")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
if !bytes.Equal(dbGvr, gvr[:]) {
|
||||
return errors.New("genesis validators root doesn't match the one that is stored in slashing protection db. " +
|
||||
"Please make sure you import the protection data that is relevant to the chain you are on")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// We create a map of pubKey -> []*SignedBlock. Then, for each public key we observe,
|
||||
// we append to this map. This allows us to handle valid input JSON data such as:
|
||||
//
|
||||
@@ -212,9 +139,18 @@ func validateMetadata(ctx context.Context, validatorDB db.Database, interchangeJ
|
||||
// SignedBlocks: [Slot: 5, Slot: 5, Slot: 6, Slot: 7, Slot: 10, Slot: 11],
|
||||
// }
|
||||
func parseBlocksForUniquePublicKeys(data []*format.ProtectionData) (map[[fieldparams.BLSPubkeyLength]byte][]*format.SignedBlock, error) {
|
||||
bar := common.InitializeProgressBar(
|
||||
len(data),
|
||||
"Parsing blocks for unique public keys:",
|
||||
)
|
||||
|
||||
signedBlocksByPubKey := make(map[[fieldparams.BLSPubkeyLength]byte][]*format.SignedBlock)
|
||||
for _, validatorData := range data {
|
||||
pubKey, err := PubKeyFromHex(validatorData.Pubkey)
|
||||
if err := bar.Add(1); err != nil {
|
||||
return nil, errors.Wrap(err, "could not increase progress bar")
|
||||
}
|
||||
|
||||
pubKey, err := helpers.PubKeyFromHex(validatorData.Pubkey)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("%s is not a valid public key: %w", validatorData.Pubkey, err)
|
||||
}
|
||||
@@ -245,9 +181,18 @@ func parseBlocksForUniquePublicKeys(data []*format.ProtectionData) (map[[fieldpa
|
||||
// SignedAttestations: [{Source: 5, Target: 6}, {Source: 5, Target: 6}, {Source: 6, Target: 7}],
|
||||
// }
|
||||
func parseAttestationsForUniquePublicKeys(data []*format.ProtectionData) (map[[fieldparams.BLSPubkeyLength]byte][]*format.SignedAttestation, error) {
|
||||
bar := common.InitializeProgressBar(
|
||||
len(data),
|
||||
"Parsing attestations for unique public keys:",
|
||||
)
|
||||
|
||||
signedAttestationsByPubKey := make(map[[fieldparams.BLSPubkeyLength]byte][]*format.SignedAttestation)
|
||||
for _, validatorData := range data {
|
||||
pubKey, err := PubKeyFromHex(validatorData.Pubkey)
|
||||
if err := bar.Add(1); err != nil {
|
||||
return nil, errors.Wrap(err, "could not increase progress bar")
|
||||
}
|
||||
|
||||
pubKey, err := helpers.PubKeyFromHex(validatorData.Pubkey)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("%s is not a valid public key: %w", validatorData.Pubkey, err)
|
||||
}
|
||||
@@ -261,15 +206,84 @@ func parseAttestationsForUniquePublicKeys(data []*format.ProtectionData) (map[[f
|
||||
return signedAttestationsByPubKey, nil
|
||||
}
|
||||
|
||||
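// Illustrative input (field names assumed from the parsing code above, and a
// `strings` import is assumed): two ProtectionData entries that share a public
// key end up merged under a single map key, mirroring the doc comment's example.
pubKeyHex := "0x" + strings.Repeat("ab", 48) // hypothetical 48-byte BLS public key
data := []*format.ProtectionData{
	{Pubkey: pubKeyHex, SignedAttestations: []*format.SignedAttestation{{SourceEpoch: "5", TargetEpoch: "6"}}},
	{Pubkey: pubKeyHex, SignedAttestations: []*format.SignedAttestation{{SourceEpoch: "6", TargetEpoch: "7"}}},
}
signedAttsByPubKey, err := parseAttestationsForUniquePublicKeys(data)
// On success, signedAttsByPubKey holds one entry whose slice contains both attestations.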
func filterSlashablePubKeysFromBlocks(_ context.Context, historyByPubKey map[[fieldparams.BLSPubkeyLength]byte]kv.ProposalHistoryForPubkey) [][fieldparams.BLSPubkeyLength]byte {
|
||||
func transformSignedBlocks(_ context.Context, signedBlocks []*format.SignedBlock) (*common.ProposalHistoryForPubkey, error) {
|
||||
proposals := make([]common.Proposal, len(signedBlocks))
|
||||
for i, proposal := range signedBlocks {
|
||||
slot, err := helpers.SlotFromString(proposal.Slot)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("%s is not a valid slot: %w", proposal.Slot, err)
|
||||
}
|
||||
|
||||
// Signing roots are optional in the standard JSON file.
|
||||
// If the signing root is not provided, we use a default value which is a zero-length byte slice.
|
||||
signingRoot := make([]byte, 0, fieldparams.RootLength)
|
||||
|
||||
if proposal.SigningRoot != "" {
|
||||
signingRoot32, err := helpers.RootFromHex(proposal.SigningRoot)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("%s is not a valid root: %w", proposal.SigningRoot, err)
|
||||
}
|
||||
signingRoot = signingRoot32[:]
|
||||
}
|
||||
|
||||
proposals[i] = common.Proposal{
|
||||
Slot: slot,
|
||||
SigningRoot: signingRoot,
|
||||
}
|
||||
}
|
||||
|
||||
return &common.ProposalHistoryForPubkey{
|
||||
Proposals: proposals,
|
||||
}, nil
|
||||
}
|
||||
|
||||
func transformSignedAttestations(pubKey [fieldparams.BLSPubkeyLength]byte, atts []*format.SignedAttestation) ([]*common.AttestationRecord, error) {
|
||||
historicalAtts := make([]*common.AttestationRecord, 0)
|
||||
|
||||
for _, attestation := range atts {
|
||||
target, err := helpers.EpochFromString(attestation.TargetEpoch)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("%s is not a valid epoch: %w", attestation.TargetEpoch, err)
|
||||
}
|
||||
source, err := helpers.EpochFromString(attestation.SourceEpoch)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("%s is not a valid epoch: %w", attestation.SourceEpoch, err)
|
||||
}
|
||||
|
||||
// Signing roots are optional in the standard JSON file.
|
||||
// If the signing root is not provided, we use a default value which is a zero-length byte slice.
|
||||
signingRoot := make([]byte, 0, fieldparams.RootLength)
|
||||
|
||||
if attestation.SigningRoot != "" {
|
||||
signingRoot32, err := helpers.RootFromHex(attestation.SigningRoot)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("%s is not a valid root: %w", attestation.SigningRoot, err)
|
||||
}
|
||||
signingRoot = signingRoot32[:]
|
||||
}
|
||||
historicalAtts = append(historicalAtts, &common.AttestationRecord{
|
||||
PubKey: pubKey,
|
||||
Source: source,
|
||||
Target: target,
|
||||
SigningRoot: signingRoot,
|
||||
})
|
||||
}
|
||||
return historicalAtts, nil
|
||||
}
|
||||
|
||||
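// Illustrative call (assumption): a signed attestation without a signing root
// is still accepted; its record simply carries a zero-length signing root,
// matching the optionality described in the comments above. `pubKey` and
// `rootHex` are hypothetical caller values.
atts := []*format.SignedAttestation{
	{SourceEpoch: "5", TargetEpoch: "6"},                       // no signing root provided
	{SourceEpoch: "6", TargetEpoch: "7", SigningRoot: rootHex}, // 0x-prefixed 32-byte root
}
records, err := transformSignedAttestations(pubKey, atts)
// On success, records[0].SigningRoot has length 0 and records[1].SigningRoot has length 32.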
func filterSlashablePubKeysFromBlocks(_ context.Context, historyByPubKey map[[fieldparams.BLSPubkeyLength]byte]common.ProposalHistoryForPubkey) [][fieldparams.BLSPubkeyLength]byte {
|
||||
// Given signing roots are optional in the EIP standard, we behave as follows:
|
||||
// For a given block:
|
||||
// If we have a previous block with the same slot in our history:
|
||||
// If signing root is nil, we consider that proposer public key as slashable
|
||||
// If signing root is not nil, then we compare signing roots. If they are different,
|
||||
// then we consider that proposer public key as slashable.
|
||||
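// Illustrative case (not taken from the diff): if the JSON lists two blocks at
// slot 5 for the same key, one with signing root 0xaa..aa and the other with
// 0xbb..bb, the roots differ and that key is flagged as slashable.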
bar := common.InitializeProgressBar(len(historyByPubKey), "Filter slashable pubkeys from blocks:")
|
||||
slashablePubKeys := make([][fieldparams.BLSPubkeyLength]byte, 0)
|
||||
for pubKey, proposals := range historyByPubKey {
|
||||
if err := bar.Add(1); err != nil {
|
||||
log.WithError(err).Debug("Could not increase progress bar")
|
||||
}
|
||||
seenSigningRootsBySlot := make(map[primitives.Slot][]byte)
|
||||
for _, blk := range proposals.Proposals {
|
||||
if signingRoot, ok := seenSigningRootsBySlot[blk.Slot]; ok {
|
||||
@@ -286,8 +300,8 @@ func filterSlashablePubKeysFromBlocks(_ context.Context, historyByPubKey map[[fi
|
||||
|
||||
func filterSlashablePubKeysFromAttestations(
|
||||
ctx context.Context,
|
||||
validatorDB db.Database,
|
||||
signedAttsByPubKey map[[fieldparams.BLSPubkeyLength]byte][]*kv.AttestationRecord,
|
||||
validatorDB *Store,
|
||||
signedAttsByPubKey map[[fieldparams.BLSPubkeyLength]byte][]*common.AttestationRecord,
|
||||
) ([][fieldparams.BLSPubkeyLength]byte, error) {
|
||||
slashablePubKeys := make([][fieldparams.BLSPubkeyLength]byte, 0)
|
||||
// First we need to find attestations that are slashable with respect to other
|
||||
@@ -295,8 +309,17 @@ func filterSlashablePubKeysFromAttestations(
|
||||
for pubKey, signedAtts := range signedAttsByPubKey {
|
||||
signingRootsByTarget := make(map[primitives.Epoch][]byte)
|
||||
targetEpochsBySource := make(map[primitives.Epoch][]primitives.Epoch)
|
||||
|
||||
bar := common.InitializeProgressBar(
|
||||
len(signedAtts),
|
||||
fmt.Sprintf("Pubkey %#x - Filter attestations wrt. JSON file:", pubKey),
|
||||
)
|
||||
|
||||
Loop:
|
||||
for _, att := range signedAtts {
|
||||
if err := bar.Add(1); err != nil {
|
||||
log.WithError(err).Debug("Could not increase progress bar")
|
||||
}
|
||||
// Check for double votes.
|
||||
if sr, ok := signingRootsByTarget[att.Target]; ok {
|
||||
if slashings.SigningRootsDiffer(sr, att.SigningRoot) {
|
||||
@@ -304,6 +327,7 @@ func filterSlashablePubKeysFromAttestations(
|
||||
break Loop
|
||||
}
|
||||
}
|
||||
|
||||
// Check for surround voting.
|
||||
for source, targets := range targetEpochsBySource {
|
||||
for _, target := range targets {
|
||||
@@ -319,19 +343,28 @@ func filterSlashablePubKeysFromAttestations(
|
||||
targetEpochsBySource[att.Source] = append(targetEpochsBySource[att.Source], att.Target)
|
||||
}
|
||||
}
|
||||
|
||||
// Then, we need to find attestations that are slashable with respect to our database.
|
||||
for pubKey, signedAtts := range signedAttsByPubKey {
|
||||
bar := common.InitializeProgressBar(
|
||||
len(signedAtts),
|
||||
fmt.Sprintf("Pubkey %#x - Filter attestations wrt. database file:", pubKey),
|
||||
)
|
||||
for _, att := range signedAtts {
|
||||
if err := bar.Add(1); err != nil {
|
||||
log.WithError(err).Debug("Could not increase progress bar")
|
||||
}
|
||||
|
||||
indexedAtt := createAttestation(att.Source, att.Target)
|
||||
|
||||
// If slashable == NotSlashable and err != nil, then CheckSlashableAttestation failed.
|
||||
// If slashable != NotSlashable, then err contains the reason why the attestation is slashable.
|
||||
slashable, err := validatorDB.CheckSlashableAttestation(ctx, pubKey, att.SigningRoot, indexedAtt)
|
||||
if err != nil && slashable == kv.NotSlashable {
|
||||
if err != nil && slashable == NotSlashable {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if slashable != kv.NotSlashable {
|
||||
if slashable != NotSlashable {
|
||||
slashablePubKeys = append(slashablePubKeys, pubKey)
|
||||
break
|
||||
}
|
||||
@@ -340,68 +373,53 @@ func filterSlashablePubKeysFromAttestations(
|
||||
return slashablePubKeys, nil
|
||||
}
|
||||
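// Minimal sketch of the surround rule enforced by the loop above (assumption:
// this helper does not exist in the codebase, it only restates the check).
// A pair of votes is slashable when one strictly surrounds the other.
func isSurroundVote(prevSource, prevTarget, newSource, newTarget primitives.Epoch) bool {
	surrounding := newSource < prevSource && newTarget > prevTarget
	surrounded := newSource > prevSource && newTarget < prevTarget
	return surrounding || surrounded
}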
|
||||
func transformSignedBlocks(_ context.Context, signedBlocks []*format.SignedBlock) (*kv.ProposalHistoryForPubkey, error) {
|
||||
proposals := make([]kv.Proposal, len(signedBlocks))
|
||||
for i, proposal := range signedBlocks {
|
||||
slot, err := SlotFromString(proposal.Slot)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("%s is not a valid slot: %w", proposal.Slot, err)
|
||||
}
|
||||
func saveProposals(ctx context.Context, proposalHistoryByPubKey map[[fieldparams.BLSPubkeyLength]byte]common.ProposalHistoryForPubkey, validatorDB iface.ValidatorDB) error {
|
||||
for pubKey, proposalHistory := range proposalHistoryByPubKey {
|
||||
bar := common.InitializeProgressBar(
|
||||
len(proposalHistory.Proposals),
|
||||
fmt.Sprintf("Importing proposals for validator public key %#x", bytesutil.Trunc(pubKey[:])),
|
||||
)
|
||||
|
||||
// Signing roots are optional in the standard JSON file.
|
||||
// If the signing root is not provided, we use a default value which is a zero-length byte slice.
|
||||
signingRoot := make([]byte, 0, fieldparams.RootLength)
|
||||
|
||||
if proposal.SigningRoot != "" {
|
||||
signingRoot32, err := RootFromHex(proposal.SigningRoot)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("%s is not a valid root: %w", proposal.SigningRoot, err)
|
||||
for _, proposal := range proposalHistory.Proposals {
|
||||
if err := bar.Add(1); err != nil {
|
||||
log.WithError(err).Debug("Could not increase progress bar")
|
||||
}
|
||||
signingRoot = signingRoot32[:]
|
||||
}
|
||||
|
||||
proposals[i] = kv.Proposal{
|
||||
Slot: slot,
|
||||
SigningRoot: signingRoot,
|
||||
if err := validatorDB.SaveProposalHistoryForSlot(ctx, pubKey, proposal.Slot, proposal.SigningRoot); err != nil {
|
||||
return errors.Wrap(err, "could not save proposal history from imported JSON to database")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return &kv.ProposalHistoryForPubkey{
|
||||
Proposals: proposals,
|
||||
}, nil
|
||||
return nil
|
||||
}
|
||||
|
||||
func transformSignedAttestations(pubKey [fieldparams.BLSPubkeyLength]byte, atts []*format.SignedAttestation) ([]*kv.AttestationRecord, error) {
|
||||
historicalAtts := make([]*kv.AttestationRecord, 0)
|
||||
for _, attestation := range atts {
|
||||
target, err := EpochFromString(attestation.TargetEpoch)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("%s is not a valid epoch: %w", attestation.TargetEpoch, err)
|
||||
}
|
||||
source, err := EpochFromString(attestation.SourceEpoch)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("%s is not a valid epoch: %w", attestation.SourceEpoch, err)
|
||||
func saveAttestations(ctx context.Context, attestingHistoryByPubKey map[[fieldparams.BLSPubkeyLength]byte][]*common.AttestationRecord, validatorDB iface.ValidatorDB) error {
|
||||
bar := common.InitializeProgressBar(
|
||||
len(attestingHistoryByPubKey),
|
||||
"Importing attesting history for validator public keys",
|
||||
)
|
||||
|
||||
for pubKey, attestations := range attestingHistoryByPubKey {
|
||||
if err := bar.Add(1); err != nil {
|
||||
log.WithError(err).Debug("Could not increase progress bar")
|
||||
}
|
||||
|
||||
// Signing roots are optional in the standard JSON file.
|
||||
// If the signing root is not provided, we use a default value which is a zero-length byte slice.
|
||||
signingRoot := make([]byte, 0, fieldparams.RootLength)
|
||||
indexedAtts := make([]*ethpb.IndexedAttestation, len(attestations))
|
||||
signingRoots := make([][]byte, len(attestations))
|
||||
|
||||
if attestation.SigningRoot != "" {
|
||||
signingRoot32, err := RootFromHex(attestation.SigningRoot)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("%s is not a valid root: %w", attestation.SigningRoot, err)
|
||||
}
|
||||
signingRoot = signingRoot32[:]
|
||||
for i, att := range attestations {
|
||||
indexedAtt := createAttestation(att.Source, att.Target)
|
||||
indexedAtts[i] = indexedAtt
|
||||
signingRoots[i] = att.SigningRoot
|
||||
}
|
||||
|
||||
if err := validatorDB.SaveAttestationsForPubKey(ctx, pubKey, signingRoots, indexedAtts); err != nil {
|
||||
return errors.Wrap(err, "could not save attestations from imported JSON to database")
|
||||
}
|
||||
historicalAtts = append(historicalAtts, &kv.AttestationRecord{
|
||||
PubKey: pubKey,
|
||||
Source: source,
|
||||
Target: target,
|
||||
SigningRoot: signingRoot,
|
||||
})
|
||||
}
|
||||
return historicalAtts, nil
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func createAttestation(source, target primitives.Epoch) *ethpb.IndexedAttestation {
|
||||
@@ -1,9 +1,8 @@
|
||||
package history
|
||||
package kv
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/hex"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"reflect"
|
||||
@@ -14,8 +13,7 @@ import (
|
||||
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/assert"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/require"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/kv"
|
||||
dbtest "github.com/prysmaticlabs/prysm/v5/validator/db/testing"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/common"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/slashing-protection-history/format"
|
||||
valtest "github.com/prysmaticlabs/prysm/v5/validator/testing"
|
||||
logTest "github.com/sirupsen/logrus/hooks/test"
|
||||
@@ -23,24 +21,24 @@ import (
|
||||
|
||||
func TestStore_ImportInterchangeData_BadJSON(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
validatorDB := dbtest.SetupDB(t, nil)
|
||||
validatorDB := setupDB(t, nil)
|
||||
|
||||
buf := bytes.NewBuffer([]byte("helloworld"))
|
||||
err := ImportStandardProtectionJSON(ctx, validatorDB, buf)
|
||||
err := validatorDB.ImportStandardProtectionJSON(ctx, buf)
|
||||
require.ErrorContains(t, "could not unmarshal slashing protection JSON file", err)
|
||||
}
|
||||
|
||||
func TestStore_ImportInterchangeData_NilData_FailsSilently(t *testing.T) {
|
||||
hook := logTest.NewGlobal()
|
||||
ctx := context.Background()
|
||||
validatorDB := dbtest.SetupDB(t, nil)
|
||||
validatorDB := setupDB(t, nil)
|
||||
|
||||
interchangeJSON := &format.EIPSlashingProtectionFormat{}
|
||||
encoded, err := json.Marshal(interchangeJSON)
|
||||
require.NoError(t, err)
|
||||
|
||||
buf := bytes.NewBuffer(encoded)
|
||||
err = ImportStandardProtectionJSON(ctx, validatorDB, buf)
|
||||
err = validatorDB.ImportStandardProtectionJSON(ctx, buf)
|
||||
require.NoError(t, err)
|
||||
require.LogsContain(t, hook, "No slashing protection data to import")
|
||||
}
|
||||
@@ -50,7 +48,7 @@ func TestStore_ImportInterchangeData_BadFormat_PreventsDBWrites(t *testing.T) {
|
||||
numValidators := 10
|
||||
publicKeys, err := valtest.CreateRandomPubKeys(numValidators)
|
||||
require.NoError(t, err)
|
||||
validatorDB := dbtest.SetupDB(t, publicKeys)
|
||||
validatorDB := setupDB(t, publicKeys)
|
||||
|
||||
// First we setup some mock attesting and proposal histories and create a mock
|
||||
// standard slashing protection format JSON struct.
|
||||
@@ -68,7 +66,7 @@ func TestStore_ImportInterchangeData_BadFormat_PreventsDBWrites(t *testing.T) {
|
||||
|
||||
// Next, we attempt to import it into our validator database and check that
|
||||
// we obtain an error during the import process.
|
||||
err = ImportStandardProtectionJSON(ctx, validatorDB, buf)
|
||||
err = validatorDB.ImportStandardProtectionJSON(ctx, buf)
|
||||
assert.NotNil(t, err)
|
||||
|
||||
// Next, we attempt to retrieve the attesting and proposals histories from our database and
|
||||
@@ -87,16 +85,18 @@ func TestStore_ImportInterchangeData_BadFormat_PreventsDBWrites(t *testing.T) {
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
slashingKind, err := validatorDB.CheckSlashableAttestation(ctx, publicKeys[i], []byte{}, indexedAtt)
|
||||
// We expect we do not have an attesting history for each attestation
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, kv.NotSlashable, slashingKind)
|
||||
require.Equal(t, NotSlashable, slashingKind)
|
||||
}
|
||||
|
||||
receivedHistory, err := validatorDB.ProposalHistoryForPubKey(ctx, publicKeys[i])
|
||||
require.NoError(t, err)
|
||||
require.DeepEqual(
|
||||
t,
|
||||
make([]*kv.Proposal, 0),
|
||||
make([]*common.Proposal, 0),
|
||||
receivedHistory,
|
||||
"Imported proposal signing root is different than the empty default",
|
||||
)
|
||||
@@ -108,7 +108,7 @@ func TestStore_ImportInterchangeData_OK(t *testing.T) {
|
||||
numValidators := 10
|
||||
publicKeys, err := valtest.CreateRandomPubKeys(numValidators)
|
||||
require.NoError(t, err)
|
||||
validatorDB := dbtest.SetupDB(t, publicKeys)
|
||||
validatorDB := setupDB(t, publicKeys)
|
||||
|
||||
// First we setup some mock attesting and proposal histories and create a mock
|
||||
// standard slashing protection format JSON struct.
|
||||
@@ -122,7 +122,7 @@ func TestStore_ImportInterchangeData_OK(t *testing.T) {
|
||||
buf := bytes.NewBuffer(blob)
|
||||
|
||||
// Next, we attempt to import it into our validator database.
|
||||
err = ImportStandardProtectionJSON(ctx, validatorDB, buf)
|
||||
err = validatorDB.ImportStandardProtectionJSON(ctx, buf)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Next, we attempt to retrieve the attesting and proposals histories from our database and
|
||||
@@ -139,12 +139,12 @@ func TestStore_ImportInterchangeData_OK(t *testing.T) {
|
||||
},
|
||||
},
|
||||
}
|
||||
slashingKind, err := validatorDB.CheckSlashableAttestation(ctx, publicKeys[i], []byte{}, indexedAtt)
|
||||
// We expect we have an attesting history for the attestation and when
|
||||
// attempting to verify the same att is slashable with a different signing root,
|
||||
// we expect to receive a double vote slashing kind.
|
||||
slashingKind, err := validatorDB.CheckSlashableAttestation(ctx, publicKeys[i], []byte{}, indexedAtt)
|
||||
require.NotNil(t, err)
|
||||
require.Equal(t, kv.DoubleVote, slashingKind)
|
||||
require.Equal(t, DoubleVote, slashingKind)
|
||||
}
|
||||
|
||||
proposals := proposalHistory[i].Proposals
|
||||
@@ -168,128 +168,6 @@ func TestStore_ImportInterchangeData_OK(t *testing.T) {
|
||||
}
|
||||
}
|
||||
|
||||
func Test_validateMetadata(t *testing.T) {
|
||||
goodRoot := [32]byte{1}
|
||||
goodStr := make([]byte, hex.EncodedLen(len(goodRoot)))
|
||||
hex.Encode(goodStr, goodRoot[:])
|
||||
tests := []struct {
|
||||
name string
|
||||
interchangeJSON *format.EIPSlashingProtectionFormat
|
||||
dbGenesisValidatorsRoot []byte
|
||||
wantErr bool
|
||||
wantFatal string
|
||||
}{
|
||||
{
|
||||
name: "Incorrect version for EIP format should fail",
|
||||
interchangeJSON: &format.EIPSlashingProtectionFormat{
|
||||
Metadata: struct {
|
||||
InterchangeFormatVersion string `json:"interchange_format_version"`
|
||||
GenesisValidatorsRoot string `json:"genesis_validators_root"`
|
||||
}{
|
||||
InterchangeFormatVersion: "1",
|
||||
GenesisValidatorsRoot: string(goodStr),
|
||||
},
|
||||
},
|
||||
wantErr: true,
|
||||
},
|
||||
{
|
||||
name: "Junk data for version should fail",
|
||||
interchangeJSON: &format.EIPSlashingProtectionFormat{
|
||||
Metadata: struct {
|
||||
InterchangeFormatVersion string `json:"interchange_format_version"`
|
||||
GenesisValidatorsRoot string `json:"genesis_validators_root"`
|
||||
}{
|
||||
InterchangeFormatVersion: "asdljas$d",
|
||||
GenesisValidatorsRoot: string(goodStr),
|
||||
},
|
||||
},
|
||||
wantErr: true,
|
||||
},
|
||||
{
|
||||
name: "Proper version field should pass",
|
||||
interchangeJSON: &format.EIPSlashingProtectionFormat{
|
||||
Metadata: struct {
|
||||
InterchangeFormatVersion string `json:"interchange_format_version"`
|
||||
GenesisValidatorsRoot string `json:"genesis_validators_root"`
|
||||
}{
|
||||
InterchangeFormatVersion: format.InterchangeFormatVersion,
|
||||
GenesisValidatorsRoot: string(goodStr),
|
||||
},
|
||||
},
|
||||
wantErr: false,
|
||||
},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
validatorDB := dbtest.SetupDB(t, nil)
|
||||
ctx := context.Background()
|
||||
if err := validateMetadata(ctx, validatorDB, tt.interchangeJSON); (err != nil) != tt.wantErr {
|
||||
t.Errorf("validateMetadata() error = %v, wantErr %v", err, tt.wantErr)
|
||||
}
|
||||
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func Test_validateMetadataGenesisValidatorsRoot(t *testing.T) {
|
||||
goodRoot := [32]byte{1}
|
||||
goodStr := make([]byte, hex.EncodedLen(len(goodRoot)))
|
||||
hex.Encode(goodStr, goodRoot[:])
|
||||
secondRoot := [32]byte{2}
|
||||
secondStr := make([]byte, hex.EncodedLen(len(secondRoot)))
|
||||
hex.Encode(secondStr, secondRoot[:])
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
interchangeJSON *format.EIPSlashingProtectionFormat
|
||||
dbGenesisValidatorsRoot []byte
|
||||
wantErr bool
|
||||
}{
|
||||
{
|
||||
name: "Same genesis roots should not fail",
|
||||
interchangeJSON: &format.EIPSlashingProtectionFormat{
|
||||
Metadata: struct {
|
||||
InterchangeFormatVersion string `json:"interchange_format_version"`
|
||||
GenesisValidatorsRoot string `json:"genesis_validators_root"`
|
||||
}{
|
||||
InterchangeFormatVersion: format.InterchangeFormatVersion,
|
||||
GenesisValidatorsRoot: string(goodStr),
|
||||
},
|
||||
},
|
||||
dbGenesisValidatorsRoot: goodRoot[:],
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "Different genesis roots should not fail",
|
||||
interchangeJSON: &format.EIPSlashingProtectionFormat{
|
||||
Metadata: struct {
|
||||
InterchangeFormatVersion string `json:"interchange_format_version"`
|
||||
GenesisValidatorsRoot string `json:"genesis_validators_root"`
|
||||
}{
|
||||
InterchangeFormatVersion: format.InterchangeFormatVersion,
|
||||
GenesisValidatorsRoot: string(secondStr),
|
||||
},
|
||||
},
|
||||
dbGenesisValidatorsRoot: goodRoot[:],
|
||||
wantErr: true,
|
||||
},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
validatorDB := dbtest.SetupDB(t, nil)
|
||||
ctx := context.Background()
|
||||
require.NoError(t, validatorDB.SaveGenesisValidatorsRoot(ctx, tt.dbGenesisValidatorsRoot))
|
||||
err := validateMetadata(ctx, validatorDB, tt.interchangeJSON)
|
||||
if tt.wantErr {
|
||||
require.ErrorContains(t, "genesis validators root doesn't match the one that is stored", err)
|
||||
} else {
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func Test_parseUniqueSignedBlocksByPubKey(t *testing.T) {
|
||||
numValidators := 4
|
||||
publicKeys, err := valtest.CreateRandomPubKeys(numValidators)
|
||||
@@ -896,10 +774,9 @@ func Test_filterSlashablePubKeysFromBlocks(t *testing.T) {
|
||||
},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
tt := tt
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
historyByPubKey := make(map[[fieldparams.BLSPubkeyLength]byte]kv.ProposalHistoryForPubkey)
|
||||
historyByPubKey := make(map[[fieldparams.BLSPubkeyLength]byte]common.ProposalHistoryForPubkey)
|
||||
for pubKey, signedBlocks := range tt.given {
|
||||
proposalHistory, err := transformSignedBlocks(ctx, signedBlocks)
|
||||
require.NoError(t, err)
|
||||
@@ -920,6 +797,7 @@ func Test_filterSlashablePubKeysFromBlocks(t *testing.T) {
|
||||
}
|
||||
|
||||
func Test_filterSlashablePubKeysFromAttestations(t *testing.T) {
|
||||
// filterSlashablePubKeysFromAttestations is used only for complete slashing protection.
|
||||
ctx := context.Background()
|
||||
tests := []struct {
|
||||
name string
|
||||
@@ -1041,12 +919,12 @@ func Test_filterSlashablePubKeysFromAttestations(t *testing.T) {
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
attestingHistoriesByPubKey := make(map[[fieldparams.BLSPubkeyLength]byte][]*kv.AttestationRecord)
|
||||
attestingHistoriesByPubKey := make(map[[fieldparams.BLSPubkeyLength]byte][]*common.AttestationRecord)
|
||||
pubKeys := make([][fieldparams.BLSPubkeyLength]byte, 0)
|
||||
for pubKey := range tt.incomingAttsByPubKey {
|
||||
pubKeys = append(pubKeys, pubKey)
|
||||
}
|
||||
validatorDB := dbtest.SetupDB(t, pubKeys)
|
||||
validatorDB := setupDB(t, pubKeys)
|
||||
for pubKey, signedAtts := range tt.incomingAttsByPubKey {
|
||||
attestingHistory, err := transformSignedAttestations(pubKey, signedAtts)
|
||||
require.NoError(t, err)
|
||||
@@ -5,27 +5,20 @@ import (
|
||||
"fmt"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/config/params"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
|
||||
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
|
||||
"github.com/prysmaticlabs/prysm/v5/time/slots"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/common"
|
||||
bolt "go.etcd.io/bbolt"
|
||||
"go.opencensus.io/trace"
|
||||
)
|
||||
|
||||
// ProposalHistoryForPubkey for a validator public key.
|
||||
type ProposalHistoryForPubkey struct {
|
||||
Proposals []Proposal
|
||||
}
|
||||
|
||||
// Proposal representation for a validator public key.
|
||||
type Proposal struct {
|
||||
Slot primitives.Slot `json:"slot"`
|
||||
SigningRoot []byte `json:"signing_root"`
|
||||
}
|
||||
|
||||
// ProposedPublicKeys retrieves all public keys in our proposals history bucket.
|
||||
// Warning: A public key in this bucket does not necessarily mean it has signed a block.
|
||||
func (s *Store) ProposedPublicKeys(ctx context.Context) ([][fieldparams.BLSPubkeyLength]byte, error) {
|
||||
_, span := trace.StartSpan(ctx, "Validator.ProposedPublicKeys")
|
||||
defer span.End()
|
||||
@@ -83,11 +76,11 @@ func (s *Store) ProposalHistoryForSlot(ctx context.Context, publicKey [fieldpara
|
||||
}
|
||||
|
||||
// ProposalHistoryForPubKey returns the entire proposal history for a given public key.
|
||||
func (s *Store) ProposalHistoryForPubKey(ctx context.Context, publicKey [fieldparams.BLSPubkeyLength]byte) ([]*Proposal, error) {
|
||||
func (s *Store) ProposalHistoryForPubKey(ctx context.Context, publicKey [fieldparams.BLSPubkeyLength]byte) ([]*common.Proposal, error) {
|
||||
_, span := trace.StartSpan(ctx, "Validator.ProposalHistoryForPubKey")
|
||||
defer span.End()
|
||||
|
||||
proposals := make([]*Proposal, 0)
|
||||
proposals := make([]*common.Proposal, 0)
|
||||
err := s.view(func(tx *bolt.Tx) error {
|
||||
bucket := tx.Bucket(historicProposalsBucket)
|
||||
valBucket := bucket.Bucket(publicKey[:])
|
||||
@@ -98,7 +91,7 @@ func (s *Store) ProposalHistoryForPubKey(ctx context.Context, publicKey [fieldpa
|
||||
slot := bytesutil.BytesToSlotBigEndian(slotKey)
|
||||
sr := make([]byte, fieldparams.RootLength)
|
||||
copy(sr, signingRootBytes)
|
||||
proposals = append(proposals, &Proposal{
|
||||
proposals = append(proposals, &common.Proposal{
|
||||
Slot: slot,
|
||||
SigningRoot: sr,
|
||||
})
|
||||
@@ -202,6 +195,86 @@ func (s *Store) HighestSignedProposal(ctx context.Context, publicKey [fieldparam
|
||||
return highestSignedProposalSlot, exists, err
|
||||
}
|
||||
|
||||
// SlashableProposalCheck checks if a block proposal is slashable by comparing it with the
|
||||
// block proposals history for the given public key in our complete slashing protection database defined by EIP-3076.
|
||||
// If it is not, we then update the history.
|
||||
func (s *Store) SlashableProposalCheck(
|
||||
ctx context.Context,
|
||||
pubKey [fieldparams.BLSPubkeyLength]byte,
|
||||
signedBlock interfaces.ReadOnlySignedBeaconBlock,
|
||||
signingRoot [fieldparams.RootLength]byte,
|
||||
emitAccountMetrics bool,
|
||||
validatorProposeFailVec *prometheus.CounterVec,
|
||||
) error {
|
||||
fmtKey := fmt.Sprintf("%#x", pubKey[:])
|
||||
|
||||
blk := signedBlock.Block()
|
||||
prevSigningRoot, proposalAtSlotExists, prevSigningRootExists, err := s.ProposalHistoryForSlot(ctx, pubKey, blk.Slot())
|
||||
if err != nil {
|
||||
if emitAccountMetrics {
|
||||
validatorProposeFailVec.WithLabelValues(fmtKey).Inc()
|
||||
}
|
||||
return errors.Wrap(err, "failed to get proposal history")
|
||||
}
|
||||
|
||||
lowestSignedProposalSlot, lowestProposalExists, err := s.LowestSignedProposal(ctx, pubKey)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Based on EIP-3076 - Condition 2
|
||||
// -------------------------------
|
||||
if lowestProposalExists {
|
||||
// If the block slot is (strictly) less than the lowest signed proposal slot in the DB, we consider it slashable.
|
||||
if blk.Slot() < lowestSignedProposalSlot {
|
||||
return fmt.Errorf(
|
||||
"could not sign block with slot < lowest signed slot in db, block slot: %d < lowest signed slot: %d",
|
||||
blk.Slot(),
|
||||
lowestSignedProposalSlot,
|
||||
)
|
||||
}
|
||||
|
||||
// If the block slot is equal to the lowest signed proposal slot and
|
||||
// - condition1: there is no signed proposal in the DB for this slot, or
|
||||
// - condition2: there is a signed proposal in the DB for this slot, but with no associated signing root, or
|
||||
// - condition3: there is a signed proposal in the DB for this slot, but the signing root differs,
|
||||
// ==> we consider it slashable.
|
||||
condition1 := !proposalAtSlotExists
|
||||
condition2 := proposalAtSlotExists && !prevSigningRootExists
|
||||
condition3 := proposalAtSlotExists && prevSigningRootExists && prevSigningRoot != signingRoot
|
||||
if blk.Slot() == lowestSignedProposalSlot && (condition1 || condition2 || condition3) {
|
||||
return fmt.Errorf(
|
||||
"could not sign block with slot == lowest signed slot in db if it is not a repeat signing, block slot: %d == slowest signed slot: %d",
|
||||
blk.Slot(),
|
||||
lowestSignedProposalSlot,
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
// Based on EIP-3076 - Condition 1
|
||||
// -------------------------------
|
||||
// If there is a signed proposal in the DB for this slot and
|
||||
// - there is no associated signing root, or
|
||||
// - the signing root differs,
|
||||
// ==> we consider it slashable.
|
||||
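// Example (illustrative): a proposal for slot 10 was stored with signing root
// 0x01; attempting to sign a different block at slot 10 whose root is 0x02
// falls into the "signing root differs" branch below and is refused.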
if proposalAtSlotExists && (!prevSigningRootExists || prevSigningRoot != signingRoot) {
|
||||
if emitAccountMetrics {
|
||||
validatorProposeFailVec.WithLabelValues(fmtKey).Inc()
|
||||
}
|
||||
return errors.New(common.FailedBlockSignLocalErr)
|
||||
}
|
||||
|
||||
// Save the proposal for this slot.
|
||||
if err := s.SaveProposalHistoryForSlot(ctx, pubKey, blk.Slot(), signingRoot[:]); err != nil {
|
||||
if emitAccountMetrics {
|
||||
validatorProposeFailVec.WithLabelValues(fmtKey).Inc()
|
||||
}
|
||||
return errors.Wrap(err, "failed to save updated proposal history")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
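// Hedged usage sketch (assumption): gating a signature on the check above.
// `store` is a hypothetical *Store, `wsb` a ReadOnlySignedBeaconBlock and
// `signingRoot` its computed signing root; metrics are disabled here, so the
// counter vector may be nil.
if err := store.SlashableProposalCheck(ctx, pubKey, wsb, signingRoot, false /* emitAccountMetrics */, nil); err != nil {
	return errors.Wrap(err, "proposal rejected by local slashing protection")
}
// Reaching this point means the proposal was considered safe and recorded.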
func pruneProposalHistoryBySlot(valBucket *bolt.Bucket, newestSlot primitives.Slot) error {
|
||||
c := valBucket.Cursor()
|
||||
for k, _ := c.First(); k != nil; k, _ = c.First() {
|
||||
|
||||
@@ -4,12 +4,17 @@ import (
|
||||
"context"
|
||||
"testing"
|
||||
|
||||
"github.com/ethereum/go-ethereum/common/hexutil"
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/config/params"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
|
||||
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/assert"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/require"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/util"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/common"
|
||||
)
|
||||
|
||||
func TestNewProposalHistoryForSlot_ReturnsNilIfNoHistory(t *testing.T) {
|
||||
@@ -72,7 +77,7 @@ func TestNewProposalHistoryForPubKey_ReturnsEmptyIfNoHistory(t *testing.T) {
|
||||
|
||||
proposalHistory, err := db.ProposalHistoryForPubKey(context.Background(), valPubkey)
|
||||
require.NoError(t, err)
|
||||
assert.DeepEqual(t, make([]*Proposal, 0), proposalHistory)
|
||||
assert.DeepEqual(t, make([]*common.Proposal, 0), proposalHistory)
|
||||
}
|
||||
|
||||
func TestSaveProposalHistoryForPubKey_OK(t *testing.T) {
|
||||
@@ -88,7 +93,7 @@ func TestSaveProposalHistoryForPubKey_OK(t *testing.T) {
|
||||
require.NoError(t, err, "Failed to get proposal history")
|
||||
|
||||
require.NotNil(t, proposalHistory)
|
||||
want := []*Proposal{
|
||||
want := []*common.Proposal{
|
||||
{
|
||||
Slot: slot,
|
||||
SigningRoot: root[:],
|
||||
@@ -300,3 +305,151 @@ func TestStore_HighestSignedProposal(t *testing.T) {
|
||||
require.Equal(t, true, exists)
|
||||
assert.Equal(t, primitives.Slot(3), slot)
|
||||
}
|
||||
|
||||
func Test_slashableProposalCheck_PreventsLowerThanMinProposal(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
lowestSignedSlot := primitives.Slot(10)
|
||||
|
||||
var pubkey [fieldparams.BLSPubkeyLength]byte
|
||||
pubkeyBytes, err := hexutil.Decode("0xa057816155ad77931185101128655c0191bd0214c201ca48ed887f6c4c6adf334070efcd75140eada5ac83a92506dd7a")
|
||||
require.NoError(t, err, "Failed to decode pubkey")
|
||||
copy(pubkey[:], pubkeyBytes)
|
||||
|
||||
db := setupDB(t, [][fieldparams.BLSPubkeyLength]byte{pubkey})
|
||||
require.NoError(t, err)
|
||||
|
||||
// We save a proposal at the lowest signed slot in the DB.
|
||||
err = db.SaveProposalHistoryForSlot(ctx, pubkey, lowestSignedSlot, []byte{1})
|
||||
require.NoError(t, err)
|
||||
|
||||
// We expect the same block with a slot lower than the lowest
|
||||
// signed slot to fail validation.
|
||||
blk := ðpb.SignedBeaconBlock{
|
||||
Block: ðpb.BeaconBlock{
|
||||
Slot: lowestSignedSlot - 1,
|
||||
ProposerIndex: 0,
|
||||
Body: ðpb.BeaconBlockBody{},
|
||||
},
|
||||
Signature: params.BeaconConfig().EmptySignature[:],
|
||||
}
|
||||
wsb, err := blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
err = db.SlashableProposalCheck(context.Background(), pubkey, wsb, [32]byte{4}, false, nil)
|
||||
require.ErrorContains(t, "could not sign block with slot < lowest signed", err)
|
||||
|
||||
// We expect the same block with a slot equal to the lowest
|
||||
// signed slot to pass validation if signing roots are equal.
|
||||
blk = ðpb.SignedBeaconBlock{
|
||||
Block: ðpb.BeaconBlock{
|
||||
Slot: lowestSignedSlot,
|
||||
ProposerIndex: 0,
|
||||
Body: ðpb.BeaconBlockBody{},
|
||||
},
|
||||
Signature: params.BeaconConfig().EmptySignature[:],
|
||||
}
|
||||
wsb, err = blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
err = db.SlashableProposalCheck(ctx, pubkey, wsb, [32]byte{1}, false, nil)
|
||||
require.NoError(t, err)
|
||||
|
||||
// We expect the same block with a slot equal to the lowest
|
||||
// signed slot to fail validation if signing roots are different.
|
||||
wsb, err = blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
err = db.SlashableProposalCheck(ctx, pubkey, wsb, [32]byte{4}, false, nil)
|
||||
require.ErrorContains(t, "could not sign block with slot == lowest signed", err)
|
||||
|
||||
// We expect the same block with a slot > than the lowest
|
||||
// signed slot to pass validation.
|
||||
blk = ðpb.SignedBeaconBlock{
|
||||
Block: ðpb.BeaconBlock{
|
||||
Slot: lowestSignedSlot + 1,
|
||||
ProposerIndex: 0,
|
||||
Body: ðpb.BeaconBlockBody{},
|
||||
},
|
||||
Signature: params.BeaconConfig().EmptySignature[:],
|
||||
}
|
||||
|
||||
wsb, err = blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
err = db.SlashableProposalCheck(ctx, pubkey, wsb, [32]byte{3}, false, nil)
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
func Test_slashableProposalCheck(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
|
||||
var pubkey [fieldparams.BLSPubkeyLength]byte
|
||||
pubkeyBytes, err := hexutil.Decode("0xa057816155ad77931185101128655c0191bd0214c201ca48ed887f6c4c6adf334070efcd75140eada5ac83a92506dd7a")
|
||||
require.NoError(t, err, "Failed to decode pubkey")
|
||||
copy(pubkey[:], pubkeyBytes)
|
||||
|
||||
db := setupDB(t, [][fieldparams.BLSPubkeyLength]byte{pubkey})
|
||||
require.NoError(t, err)
|
||||
|
||||
blk := util.HydrateSignedBeaconBlock(ðpb.SignedBeaconBlock{
|
||||
Block: ðpb.BeaconBlock{
|
||||
Slot: 10,
|
||||
ProposerIndex: 0,
|
||||
Body: ðpb.BeaconBlockBody{},
|
||||
},
|
||||
Signature: params.BeaconConfig().EmptySignature[:],
|
||||
})
|
||||
|
||||
// We save a proposal at slot 1 as our lowest proposal.
|
||||
err = db.SaveProposalHistoryForSlot(ctx, pubkey, 1, []byte{1})
|
||||
require.NoError(t, err)
|
||||
|
||||
// We save a proposal at slot 10 with a dummy signing root.
|
||||
dummySigningRoot := [32]byte{1}
|
||||
err = db.SaveProposalHistoryForSlot(ctx, pubkey, 10, dummySigningRoot[:])
|
||||
require.NoError(t, err)
|
||||
sBlock, err := blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = db.SlashableProposalCheck(ctx, pubkey, sBlock, dummySigningRoot, false, nil)
|
||||
// We expect the same block sent out with the same root should not be slashable.
|
||||
require.NoError(t, err)
|
||||
|
||||
// We expect the same block sent out with a different signing root should be slashable.
|
||||
err = db.SlashableProposalCheck(ctx, pubkey, sBlock, [32]byte{2}, false, nil)
|
||||
require.ErrorContains(t, common.FailedBlockSignLocalErr, err)
|
||||
|
||||
// We save a proposal at slot 11 with a nil signing root.
|
||||
blk.Block.Slot = 11
|
||||
sBlock, err = blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
err = db.SaveProposalHistoryForSlot(ctx, pubkey, blk.Block.Slot, nil)
|
||||
require.NoError(t, err)
|
||||
|
||||
// We expect the same block sent out should return slashable error even
|
||||
// if we had a nil signing root stored in the database.
|
||||
err = db.SlashableProposalCheck(ctx, pubkey, sBlock, [32]byte{2}, false, nil)
|
||||
require.ErrorContains(t, common.FailedBlockSignLocalErr, err)
|
||||
|
||||
// A block with a different slot for which we do not have a proposing history
|
||||
// should not be failing validation.
|
||||
blk.Block.Slot = 9
|
||||
sBlock, err = blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
err = db.SlashableProposalCheck(ctx, pubkey, sBlock, [32]byte{3}, false, nil)
|
||||
require.NoError(t, err, "Expected allowed block not to throw error")
|
||||
}
|
||||
|
||||
func Test_slashableProposalCheck_RemoteProtection(t *testing.T) {
|
||||
var pubkey [fieldparams.BLSPubkeyLength]byte
|
||||
pubkeyBytes, err := hexutil.Decode("0xa057816155ad77931185101128655c0191bd0214c201ca48ed887f6c4c6adf334070efcd75140eada5ac83a92506dd7a")
|
||||
require.NoError(t, err, "Failed to decode pubkey")
|
||||
copy(pubkey[:], pubkeyBytes)
|
||||
|
||||
db := setupDB(t, [][fieldparams.BLSPubkeyLength]byte{pubkey})
|
||||
require.NoError(t, err)
|
||||
|
||||
blk := util.NewBeaconBlock()
|
||||
blk.Block.Slot = 10
|
||||
sBlock, err := blocks.NewSignedBeaconBlock(blk)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = db.SlashableProposalCheck(context.Background(), pubkey, sBlock, [32]byte{2}, false, nil)
|
||||
require.NoError(t, err, "Expected allowed block not to throw error")
|
||||
}
|
||||
|
||||
@@ -42,3 +42,16 @@ var (
|
||||
proposerSettingsBucket = []byte("proposer-settings-bucket")
|
||||
proposerSettingsKey = []byte("proposer-settings")
|
||||
)
|
||||
|
||||
// Attestations:
|
||||
// -------------
|
||||
// lowest-signed-source-bucket --> <pubkey> --> <epoch>
|
||||
// lowest-signed-target-bucket --> <pubkey> --> <epoch>
|
||||
//
|
||||
// pubkeys-bucket --> <pubkey> --> att-signing-roots-bucket --> <target epoch> --> <signing root>
|
||||
// |-> att-source-epochs-bucket --> <source epoch> --> []<target epoch>
|
||||
// |-> att-target-epochs-bucket --> <target epoch> --> []<source epoch>
|
||||
|
||||
// Proposals:
|
||||
// ----------
|
||||
// proposal-history-bucket-interchange -> <pubkey> --> <slot> --> <signing root>
|
||||
|
||||
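// Hedged sketch (assumption) of a read against the layout documented above:
// fetching the lowest signed source epoch stored for a public key. The real
// accessors live on the kv Store; this only illustrates the bucket shape and
// assumes the epoch is stored as a big-endian uint64 (encoding/binary import).
var (
	lowestSource primitives.Epoch
	exists       bool
)
err := s.db.View(func(tx *bolt.Tx) error {
	bkt := tx.Bucket(lowestSignedSourceBucket)
	if bkt == nil {
		return nil
	}
	raw := bkt.Get(pubKey[:])
	if raw == nil {
		return nil
	}
	lowestSource = primitives.Epoch(binary.BigEndian.Uint64(raw))
	exists = true
	return nil
})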
@@ -15,7 +15,13 @@ import (
|
||||
func MigrateUp(cliCtx *cli.Context) error {
|
||||
dataDir := cliCtx.String(cmd.DataDirFlag.Name)
|
||||
|
||||
if !file.Exists(path.Join(dataDir, kv.ProtectionDbFileName)) {
|
||||
dbFilePath := path.Join(dataDir, kv.ProtectionDbFileName)
|
||||
exists, err := file.Exists(dbFilePath, file.Regular)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not check if file exists: %s", dbFilePath)
|
||||
}
|
||||
|
||||
if !exists {
|
||||
return errors.New("No validator db found at path, nothing to migrate")
|
||||
}
|
||||
|
||||
@@ -33,7 +39,13 @@ func MigrateUp(cliCtx *cli.Context) error {
|
||||
func MigrateDown(cliCtx *cli.Context) error {
|
||||
dataDir := cliCtx.String(cmd.DataDirFlag.Name)
|
||||
|
||||
if !file.Exists(path.Join(dataDir, kv.ProtectionDbFileName)) {
|
||||
dbFilePath := path.Join(dataDir, kv.ProtectionDbFileName)
|
||||
exists, err := file.Exists(dbFilePath, file.Regular)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not check if file exists: %s", dbFilePath)
|
||||
}
|
||||
|
||||
if !exists {
|
||||
return errors.New("No validator db found at path, nothing to rollback")
|
||||
}
|
||||
|
||||
|
||||
@@ -21,8 +21,12 @@ func TestMigrateUp_NoDBFound(t *testing.T) {
|
||||
assert.ErrorContains(t, "No validator db found at path", err)
|
||||
}
|
||||
|
||||
// TestMigrateUp_OK tests that a migration up is successful.
|
||||
// Migration is neither needed nor supported for the minimal slashing protection database.
|
||||
// Thus, it is tested only for the complete slashing protection database.
|
||||
func TestMigrateUp_OK(t *testing.T) {
|
||||
validatorDB := dbtest.SetupDB(t, nil)
|
||||
isSlashingProtectionMinimal := false
|
||||
validatorDB := dbtest.SetupDB(t, nil, isSlashingProtectionMinimal)
|
||||
dbPath := validatorDB.DatabasePath()
|
||||
require.NoError(t, validatorDB.Close())
|
||||
app := cli.App{}
|
||||
@@ -43,8 +47,12 @@ func TestMigrateDown_NoDBFound(t *testing.T) {
|
||||
assert.ErrorContains(t, "No validator db found at path", err)
|
||||
}
|
||||
|
||||
// TestMigrateDown_OK tests that a migration down is successful.
|
||||
// Migration is neither needed nor supported for the minimal slashing protection database.
|
||||
// Thus, it is tested only for the complete slashing protection database.
|
||||
func TestMigrateDown_OK(t *testing.T) {
|
||||
validatorDB := dbtest.SetupDB(t, nil)
|
||||
isSlashingProtectionMinimal := false
|
||||
validatorDB := dbtest.SetupDB(t, nil, isSlashingProtectionMinimal)
|
||||
dbPath := validatorDB.DatabasePath()
|
||||
require.NoError(t, validatorDB.Close())
|
||||
app := cli.App{}
|
||||
|
||||
@@ -21,7 +21,13 @@ func Restore(cliCtx *cli.Context) error {
|
||||
sourceFile := cliCtx.String(cmd.RestoreSourceFileFlag.Name)
|
||||
targetDir := cliCtx.String(cmd.RestoreTargetDirFlag.Name)
|
||||
|
||||
if file.Exists(path.Join(targetDir, kv.ProtectionDbFileName)) {
|
||||
dbFilePath := path.Join(targetDir, kv.ProtectionDbFileName)
|
||||
exists, err := file.Exists(dbFilePath, file.Regular)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not check if file exists at %s", dbFilePath)
|
||||
}
|
||||
|
||||
if exists {
|
||||
resp, err := prompt.ValidatePrompt(
|
||||
os.Stdin, dbExistsYesNoPrompt, prompt.ValidateYesOrNo,
|
||||
)
|
||||
|
||||
@@ -11,6 +11,7 @@ go_library(
|
||||
],
|
||||
deps = [
|
||||
"//config/fieldparams:go_default_library",
|
||||
"//validator/db/filesystem:go_default_library",
|
||||
"//validator/db/iface:go_default_library",
|
||||
"//validator/db/kv:go_default_library",
|
||||
],
|
||||
@@ -21,7 +22,10 @@ go_test(
|
||||
srcs = ["setup_db_test.go"],
|
||||
embed = [":go_default_library"],
|
||||
deps = [
|
||||
"//io/file:go_default_library",
|
||||
"//testing/require:go_default_library",
|
||||
"//validator/db/filesystem:go_default_library",
|
||||
"//validator/db/iface:go_default_library",
|
||||
"//validator/db/kv:go_default_library",
|
||||
],
|
||||
)
|
||||
|
||||
@@ -5,22 +5,39 @@ import (
|
||||
"testing"
|
||||
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/filesystem"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/iface"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/kv"
|
||||
)
|
||||
|
||||
// SetupDB instantiates and returns a DB instance for the validator client.
|
||||
func SetupDB(t testing.TB, pubkeys [][fieldparams.BLSPubkeyLength]byte) iface.ValidatorDB {
|
||||
db, err := kv.NewKVStore(context.Background(), t.TempDir(), &kv.Config{
|
||||
PubKeys: pubkeys,
|
||||
})
|
||||
// The `minimal` flag indicates whether the DB should be instantiated with the minimal, filesystem-based
|
||||
// slashing protection database.
|
||||
func SetupDB(t testing.TB, pubkeys [][fieldparams.BLSPubkeyLength]byte, minimal bool) iface.ValidatorDB {
|
||||
var (
|
||||
db iface.ValidatorDB
|
||||
err error
|
||||
)
|
||||
|
||||
// Create a new DB instance.
|
||||
if mimimal {
|
||||
config := &filesystem.Config{PubKeys: pubkeys}
|
||||
db, err = filesystem.NewStore(t.TempDir(), config)
|
||||
} else {
|
||||
config := &kv.Config{PubKeys: pubkeys}
|
||||
db, err = kv.NewKVStore(context.Background(), t.TempDir(), config)
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to instantiate DB: %v", err)
|
||||
}
|
||||
|
||||
// Cleanup the DB after the test.
|
||||
t.Cleanup(func() {
|
||||
if err := db.ClearDB(); err != nil {
|
||||
t.Fatalf("Failed to clear database: %v", err)
|
||||
}
|
||||
})
|
||||
|
||||
return db
|
||||
}
|
||||
|
||||
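A hedged usage sketch of the updated helper; the package and test name are hypothetical, but the call shape matches how the other tests in this change iterate over both backends.

package dbtestingexample // hypothetical package, for illustration only

import (
	"fmt"
	"testing"

	"github.com/prysmaticlabs/prysm/v5/testing/require"
	dbtest "github.com/prysmaticlabs/prysm/v5/validator/db/testing"
)

// TestSetupDB_BothBackends shows the intended call shape of the new SetupDB
// signature: the extra boolean selects the minimal (filesystem) backend.
func TestSetupDB_BothBackends(t *testing.T) {
	for _, isSlashingProtectionMinimal := range []bool{false, true} {
		t.Run(fmt.Sprintf("minimal: %v", isSlashingProtectionMinimal), func(t *testing.T) {
			validatorDB := dbtest.SetupDB(t, nil, isSlashingProtectionMinimal)
			require.NotNil(t, validatorDB, "SetupDB should return a usable ValidatorDB")
		})
	}
}
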
@@ -2,23 +2,48 @@ package testing

import (
	"context"
	"os"
	"fmt"
	"path/filepath"
	"testing"

	"github.com/prysmaticlabs/prysm/v5/io/file"
	"github.com/prysmaticlabs/prysm/v5/testing/require"
	"github.com/prysmaticlabs/prysm/v5/validator/db/filesystem"
	"github.com/prysmaticlabs/prysm/v5/validator/db/iface"
	"github.com/prysmaticlabs/prysm/v5/validator/db/kv"
)

func TestClearDB(t *testing.T) {
	// Setting up manually is required, since SetupDB() will also register a teardown procedure.
	testDB, err := kv.NewKVStore(context.Background(), t.TempDir(), &kv.Config{
		PubKeys: nil,
	})
	require.NoError(t, err, "Failed to instantiate DB")
	require.NoError(t, testDB.ClearDB())
	for _, isSlashingProtectionMinimal := range []bool{false, true} {
		t.Run(fmt.Sprintf("slashing protection minimal: %v", isSlashingProtectionMinimal), func(t *testing.T) {
			// Setting up manually is required, since SetupDB() will also register a teardown procedure.
			var (
				testDB iface.ValidatorDB
				err    error
			)

	if _, err := os.Stat(filepath.Join(testDB.DatabasePath(), "validator.db")); !os.IsNotExist(err) {
		t.Fatalf("DB was not cleared: %v", err)
			if isSlashingProtectionMinimal {
				testDB, err = filesystem.NewStore(t.TempDir(), &filesystem.Config{
					PubKeys: nil,
				})
			} else {
				testDB, err = kv.NewKVStore(context.Background(), t.TempDir(), &kv.Config{
					PubKeys: nil,
				})
			}

			require.NoError(t, err, "Failed to instantiate DB")
			require.NoError(t, testDB.ClearDB())

			databaseName := kv.ProtectionDbFileName
			if isSlashingProtectionMinimal {
				databaseName = filesystem.DatabaseDirName
			}

			databasePath := filepath.Join(testDB.DatabasePath(), databaseName)
			exists, err := file.Exists(databasePath, file.Regular)
			require.NoError(t, err, "Failed to check if DB exists")
			require.Equal(t, false, exists, "DB was not cleared")
		})
	}
}

@@ -1,11 +1,41 @@
load("@prysm//tools/go:def.bzl", "go_library")
load("@prysm//tools/go:def.bzl", "go_library", "go_test")

go_library(
    name = "go_default_library",
    srcs = ["node_connection.go"],
    srcs = [
        "converts.go",
        "metadata.go",
        "node_connection.go",
    ],
    importpath = "github.com/prysmaticlabs/prysm/v5/validator/helpers",
    visibility = ["//visibility:public"],
    deps = [
        "//config/fieldparams:go_default_library",
        "//consensus-types/primitives:go_default_library",
        "//validator/db/iface:go_default_library",
        "//validator/slashing-protection-history/format:go_default_library",
        "@com_github_pkg_errors//:go_default_library",
        "@org_golang_google_grpc//:go_default_library",
    ],
)

go_test(
    name = "go_default_test",
    srcs = [
        "converts_test.go",
        "metadata_test.go",
    ],
    embed = [":go_default_library"],
    deps = [
        "//config/fieldparams:go_default_library",
        "//config/proposer:go_default_library",
        "//consensus-types/interfaces:go_default_library",
        "//consensus-types/primitives:go_default_library",
        "//proto/prysm/v1alpha1:go_default_library",
        "//testing/require:go_default_library",
        "//validator/db/common:go_default_library",
        "//validator/db/iface:go_default_library",
        "//validator/slashing-protection-history/format:go_default_library",
        "@com_github_prometheus_client_golang//prometheus:go_default_library",
    ],
)

@@ -1,4 +1,4 @@
package history
package helpers

import (
	"encoding/hex"
@@ -6,30 +6,10 @@ import (
	"strconv"
	"strings"

	"github.com/k0kubun/go-ansi"
	fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
	"github.com/schollz/progressbar/v3"
)

func initializeProgressBar(numItems int, msg string) *progressbar.ProgressBar {
	return progressbar.NewOptions(
		numItems,
		progressbar.OptionFullWidth(),
		progressbar.OptionSetWriter(ansi.NewAnsiStdout()),
		progressbar.OptionEnableColorCodes(true),
		progressbar.OptionSetTheme(progressbar.Theme{
			Saucer:        "[green]=[reset]",
			SaucerHead:    "[green]>[reset]",
			SaucerPadding: " ",
			BarStart:      "[",
			BarEnd:        "]",
		}),
		progressbar.OptionOnCompletion(func() { fmt.Println() }),
		progressbar.OptionSetDescription(msg),
	)
}

// Uint64FromString converts a string into a uint64 representation.
func Uint64FromString(str string) (uint64, error) {
	return strconv.ParseUint(str, 10, 64)
@@ -37,7 +17,7 @@ func Uint64FromString(str string) (uint64, error) {

// EpochFromString converts a string into Epoch.
func EpochFromString(str string) (primitives.Epoch, error) {
	e, err := strconv.ParseUint(str, 10, 64)
	e, err := Uint64FromString(str)
	if err != nil {
		return primitives.Epoch(e), err
	}
@@ -46,7 +26,7 @@ func EpochFromString(str string) (primitives.Epoch, error) {

// SlotFromString converts a string into Slot.
func SlotFromString(str string) (primitives.Slot, error) {
	s, err := strconv.ParseUint(str, 10, 64)
	s, err := Uint64FromString(str)
	if err != nil {
		return primitives.Slot(s), err
	}
@@ -81,7 +61,7 @@ func RootFromHex(str string) ([32]byte, error) {
	return root, nil
}

func rootToHexString(root []byte) (string, error) {
func RootToHexString(root []byte) (string, error) {
	// Nil signing roots are allowed in EIP-3076.
	if len(root) == 0 {
		return "", nil
@@ -92,7 +72,7 @@ func rootToHexString(root []byte) (string, error) {
	return fmt.Sprintf("%#x", root), nil
}

func pubKeyToHexString(pubKey []byte) (string, error) {
func PubKeyToHexString(pubKey []byte) (string, error) {
	if len(pubKey) != 48 {
		return "", fmt.Errorf("wanted length 48, received %d", len(pubKey))
	}

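With the file moved into the exported helpers package, the string parsers all funnel through Uint64FromString and the hex helpers become usable from other packages. A small stand-alone sketch of how they compose (the package and function names below are illustrative, not code from the PR):

package helpersexample // hypothetical package, for illustration only

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/v5/validator/helpers"
)

// demoConversions parses an epoch and a slot from strings and renders a
// zeroed 32-byte root the way EIP-3076 interchange export does.
func demoConversions() error {
	epoch, err := helpers.EpochFromString("12345")
	if err != nil {
		return err
	}
	slot, err := helpers.SlotFromString("67890")
	if err != nil {
		return err
	}
	rootHex, err := helpers.RootToHexString(make([]byte, 32))
	if err != nil {
		return err
	}
	fmt.Println(epoch, slot, rootHex)
	return nil
}
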
@@ -1,4 +1,4 @@
package history
package helpers

import (
	"fmt"
@@ -10,7 +10,7 @@ import (
	"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
)

func Test_uint64FromString(t *testing.T) {
func Test_fromString(t *testing.T) {
	tests := []struct {
		name string
		str  string
@@ -223,7 +223,7 @@ func Test_rootToHexString(t *testing.T) {
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got, err := rootToHexString(tt.root)
			got, err := RootToHexString(tt.root)
			if (err != nil) != tt.wantErr {
				t.Errorf("rootToHexString() error = %v, wantErr %v", err, tt.wantErr)
				return
@@ -270,7 +270,7 @@ func Test_pubKeyToHexString(t *testing.T) {
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got, err := pubKeyToHexString(tt.pubKey)
			got, err := PubKeyToHexString(tt.pubKey)
			if (err != nil) != tt.wantErr {
				t.Errorf("pubKeyToHexString() error = %v, wantErr %v", err, tt.wantErr)
				return

validator/helpers/metadata.go (new file, 45 lines)
@@ -0,0 +1,45 @@
package helpers

import (
	"bytes"
	"context"
	"fmt"

	"github.com/pkg/errors"
	"github.com/prysmaticlabs/prysm/v5/validator/db/iface"
	"github.com/prysmaticlabs/prysm/v5/validator/slashing-protection-history/format"
)

func ValidateMetadata(ctx context.Context, validatorDB iface.ValidatorDB, interchangeJSON *format.EIPSlashingProtectionFormat) error {
	// We need to ensure the version in the metadata field matches the one we support.
	version := interchangeJSON.Metadata.InterchangeFormatVersion
	if version != format.InterchangeFormatVersion {
		return fmt.Errorf(
			"slashing protection JSON version '%s' is not supported, wanted '%s'",
			version,
			format.InterchangeFormatVersion,
		)
	}

	// We need to verify the genesis validators root matches that of our chain data, otherwise
	// the imported slashing protection JSON was created on a different chain.
	gvr, err := RootFromHex(interchangeJSON.Metadata.GenesisValidatorsRoot)
	if err != nil {
		return fmt.Errorf("%#x is not a valid root: %w", interchangeJSON.Metadata.GenesisValidatorsRoot, err)
	}
	dbGvr, err := validatorDB.GenesisValidatorsRoot(ctx)
	if err != nil {
		return errors.Wrap(err, "could not retrieve genesis validators root from db")
	}
	if dbGvr == nil {
		if err = validatorDB.SaveGenesisValidatorsRoot(ctx, gvr[:]); err != nil {
			return errors.Wrap(err, "could not save genesis validators root to db")
		}
		return nil
	}
	if !bytes.Equal(dbGvr, gvr[:]) {
		return errors.New("genesis validators root doesn't match the one that is stored in slashing protection db. " +
			"Please make sure you import the protection data that is relevant to the chain you are on")
	}
	return nil
}

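ValidateMetadata is the gate an importer is expected to pass before touching any slashing protection records. A hedged sketch of that call site (the package and function names are hypothetical; only ValidateMetadata and the interface come from this PR):

package importexample // hypothetical package, for illustration only

import (
	"context"

	"github.com/pkg/errors"
	"github.com/prysmaticlabs/prysm/v5/validator/db/iface"
	"github.com/prysmaticlabs/prysm/v5/validator/helpers"
	"github.com/prysmaticlabs/prysm/v5/validator/slashing-protection-history/format"
)

// checkInterchange validates the interchange version and genesis validators
// root against the configured database before any records are imported.
func checkInterchange(ctx context.Context, validatorDB iface.ValidatorDB, interchange *format.EIPSlashingProtectionFormat) error {
	if err := helpers.ValidateMetadata(ctx, validatorDB, interchange); err != nil {
		return errors.Wrap(err, "interchange metadata is invalid for this database")
	}
	// Safe to import attestations and proposals from the interchange at this point.
	return nil
}
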
validator/helpers/metadata_test.go (new file, 283 lines)
@@ -0,0 +1,283 @@
|
||||
package helpers
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/hex"
|
||||
"io"
|
||||
"testing"
|
||||
|
||||
"github.com/prometheus/client_golang/prometheus"
|
||||
fieldparams "github.com/prysmaticlabs/prysm/v5/config/fieldparams"
|
||||
"github.com/prysmaticlabs/prysm/v5/config/proposer"
|
||||
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
|
||||
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
|
||||
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
|
||||
"github.com/prysmaticlabs/prysm/v5/testing/require"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/common"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/iface"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/slashing-protection-history/format"
|
||||
)
|
||||
|
||||
type ValidatorDBMock struct {
|
||||
genesisValidatorsRoot []byte
|
||||
}
|
||||
|
||||
func NewValidatorDBMock() *ValidatorDBMock {
|
||||
return &ValidatorDBMock{}
|
||||
}
|
||||
|
||||
var _ iface.ValidatorDB = (*ValidatorDBMock)(nil)
|
||||
|
||||
func (db *ValidatorDBMock) Backup(ctx context.Context, outputPath string, permissionOverride bool) error {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
func (db *ValidatorDBMock) Close() error { panic("not implemented") }
|
||||
|
||||
func (db *ValidatorDBMock) DatabasePath() string { panic("not implemented") }
|
||||
func (db *ValidatorDBMock) ClearDB() error { panic("not implemented") }
|
||||
func (db *ValidatorDBMock) RunUpMigrations(ctx context.Context) error { panic("not implemented") }
|
||||
func (db *ValidatorDBMock) RunDownMigrations(ctx context.Context) error { panic("not implemented") }
|
||||
func (db *ValidatorDBMock) UpdatePublicKeysBuckets(publicKeys [][fieldparams.BLSPubkeyLength]byte) error {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
// Genesis information related methods.
|
||||
func (db *ValidatorDBMock) GenesisValidatorsRoot(ctx context.Context) ([]byte, error) {
|
||||
return db.genesisValidatorsRoot, nil
|
||||
}
|
||||
func (db *ValidatorDBMock) SaveGenesisValidatorsRoot(ctx context.Context, genValRoot []byte) error {
|
||||
db.genesisValidatorsRoot = genValRoot
|
||||
return nil
|
||||
}
|
||||
|
||||
// Proposer protection related methods.
|
||||
func (db *ValidatorDBMock) HighestSignedProposal(ctx context.Context, publicKey [fieldparams.BLSPubkeyLength]byte) (primitives.Slot, bool, error) {
|
||||
panic("not implemented")
|
||||
}
|
||||
func (db *ValidatorDBMock) LowestSignedProposal(ctx context.Context, publicKey [fieldparams.BLSPubkeyLength]byte) (primitives.Slot, bool, error) {
|
||||
panic("not implemented")
|
||||
}
|
||||
func (db *ValidatorDBMock) ProposalHistoryForPubKey(ctx context.Context, publicKey [fieldparams.BLSPubkeyLength]byte) ([]*common.Proposal, error) {
|
||||
panic("not implemented")
|
||||
}
|
||||
func (db *ValidatorDBMock) ProposalHistoryForSlot(ctx context.Context, publicKey [fieldparams.BLSPubkeyLength]byte, slot primitives.Slot) ([32]byte, bool, bool, error) {
|
||||
panic("not implemented")
|
||||
}
|
||||
func (db *ValidatorDBMock) SaveProposalHistoryForSlot(ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte, slot primitives.Slot, signingRoot []byte) error {
|
||||
panic("not implemented")
|
||||
}
|
||||
func (db *ValidatorDBMock) ProposedPublicKeys(ctx context.Context) ([][fieldparams.BLSPubkeyLength]byte, error) {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
func (db *ValidatorDBMock) SlashableProposalCheck(
|
||||
ctx context.Context,
|
||||
pubKey [fieldparams.BLSPubkeyLength]byte,
|
||||
signedBlock interfaces.ReadOnlySignedBeaconBlock,
|
||||
signingRoot [fieldparams.RootLength]byte,
|
||||
emitAccountMetrics bool,
|
||||
validatorProposeFailVec *prometheus.CounterVec,
|
||||
) error {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
// Attester protection related methods.
|
||||
// Methods to store and read blacklisted public keys from EIP-3076
|
||||
// slashing protection imports.
|
||||
func (db *ValidatorDBMock) EIPImportBlacklistedPublicKeys(ctx context.Context) ([][fieldparams.BLSPubkeyLength]byte, error) {
|
||||
panic("not implemented")
|
||||
}
|
||||
func (db *ValidatorDBMock) SaveEIPImportBlacklistedPublicKeys(ctx context.Context, publicKeys [][fieldparams.BLSPubkeyLength]byte) error {
|
||||
panic("not implemented")
|
||||
}
|
||||
func (db *ValidatorDBMock) SigningRootAtTargetEpoch(ctx context.Context, publicKey [fieldparams.BLSPubkeyLength]byte, target primitives.Epoch) ([]byte, error) {
|
||||
panic("not implemented")
|
||||
}
|
||||
func (db *ValidatorDBMock) LowestSignedTargetEpoch(ctx context.Context, publicKey [fieldparams.BLSPubkeyLength]byte) (primitives.Epoch, bool, error) {
|
||||
panic("not implemented")
|
||||
}
|
||||
func (db *ValidatorDBMock) LowestSignedSourceEpoch(ctx context.Context, publicKey [fieldparams.BLSPubkeyLength]byte) (primitives.Epoch, bool, error) {
|
||||
panic("not implemented")
|
||||
}
|
||||
func (db *ValidatorDBMock) AttestedPublicKeys(ctx context.Context) ([][fieldparams.BLSPubkeyLength]byte, error) {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
func (db *ValidatorDBMock) SlashableAttestationCheck(
|
||||
ctx context.Context, indexedAtt *ethpb.IndexedAttestation, pubKey [fieldparams.BLSPubkeyLength]byte,
|
||||
signingRoot32 [32]byte,
|
||||
emitAccountMetrics bool,
|
||||
validatorAttestFailVec *prometheus.CounterVec,
|
||||
) error {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
func (db *ValidatorDBMock) SaveAttestationForPubKey(
|
||||
ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte, signingRoot [fieldparams.RootLength]byte, att *ethpb.IndexedAttestation,
|
||||
) error {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
func (db *ValidatorDBMock) SaveAttestationsForPubKey(
|
||||
ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte, signingRoots [][]byte, atts []*ethpb.IndexedAttestation,
|
||||
) error {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
func (db *ValidatorDBMock) AttestationHistoryForPubKey(
|
||||
ctx context.Context, pubKey [fieldparams.BLSPubkeyLength]byte,
|
||||
) ([]*common.AttestationRecord, error) {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
// Graffiti ordered index related methods
|
||||
func (db *ValidatorDBMock) SaveGraffitiOrderedIndex(ctx context.Context, index uint64) error {
|
||||
panic("not implemented")
|
||||
}
|
||||
func (db *ValidatorDBMock) GraffitiOrderedIndex(ctx context.Context, fileHash [32]byte) (uint64, error) {
|
||||
panic("not implemented")
|
||||
}
|
||||
func (db *ValidatorDBMock) GraffitiFileHash() ([32]byte, bool, error) { panic("not implemented") }
|
||||
|
||||
// ProposerSettings related methods
|
||||
func (db *ValidatorDBMock) ProposerSettings(context.Context) (*proposer.Settings, error) {
|
||||
panic("not implemented")
|
||||
}
|
||||
func (db *ValidatorDBMock) ProposerSettingsExists(ctx context.Context) (bool, error) {
|
||||
panic("not implemented")
|
||||
}
|
||||
func (db *ValidatorDBMock) UpdateProposerSettingsDefault(context.Context, *proposer.Option) error {
|
||||
panic("not implemented")
|
||||
}
|
||||
func (db *ValidatorDBMock) UpdateProposerSettingsForPubkey(context.Context, [fieldparams.BLSPubkeyLength]byte, *proposer.Option) error {
|
||||
panic("not implemented")
|
||||
}
|
||||
func (db *ValidatorDBMock) SaveProposerSettings(ctx context.Context, settings *proposer.Settings) error {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
// EIP-3076 slashing protection related methods
|
||||
func (db *ValidatorDBMock) ImportStandardProtectionJSON(ctx context.Context, r io.Reader) error {
|
||||
panic("not implemented")
|
||||
}
|
||||
|
||||
func Test_validateMetadata(t *testing.T) {
|
||||
goodRoot := [32]byte{1}
|
||||
goodStr := make([]byte, hex.EncodedLen(len(goodRoot)))
|
||||
hex.Encode(goodStr, goodRoot[:])
|
||||
tests := []struct {
|
||||
name string
|
||||
interchangeJSON *format.EIPSlashingProtectionFormat
|
||||
dbGenesisValidatorsRoot []byte
|
||||
wantErr bool
|
||||
wantFatal string
|
||||
}{
|
||||
{
|
||||
name: "Incorrect version for EIP format should fail",
|
||||
interchangeJSON: &format.EIPSlashingProtectionFormat{
|
||||
Metadata: struct {
|
||||
InterchangeFormatVersion string `json:"interchange_format_version"`
|
||||
GenesisValidatorsRoot string `json:"genesis_validators_root"`
|
||||
}{
|
||||
InterchangeFormatVersion: "1",
|
||||
GenesisValidatorsRoot: string(goodStr),
|
||||
},
|
||||
},
|
||||
wantErr: true,
|
||||
},
|
||||
{
|
||||
name: "Junk data for version should fail",
|
||||
interchangeJSON: &format.EIPSlashingProtectionFormat{
|
||||
Metadata: struct {
|
||||
InterchangeFormatVersion string `json:"interchange_format_version"`
|
||||
GenesisValidatorsRoot string `json:"genesis_validators_root"`
|
||||
}{
|
||||
InterchangeFormatVersion: "asdljas$d",
|
||||
GenesisValidatorsRoot: string(goodStr),
|
||||
},
|
||||
},
|
||||
wantErr: true,
|
||||
},
|
||||
{
|
||||
name: "Proper version field should pass",
|
||||
interchangeJSON: &format.EIPSlashingProtectionFormat{
|
||||
Metadata: struct {
|
||||
InterchangeFormatVersion string `json:"interchange_format_version"`
|
||||
GenesisValidatorsRoot string `json:"genesis_validators_root"`
|
||||
}{
|
||||
InterchangeFormatVersion: format.InterchangeFormatVersion,
|
||||
GenesisValidatorsRoot: string(goodStr),
|
||||
},
|
||||
},
|
||||
wantErr: false,
|
||||
},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
if err := ValidateMetadata(context.Background(), NewValidatorDBMock(), tt.interchangeJSON); (err != nil) != tt.wantErr {
|
||||
t.Errorf("validateMetadata() error = %v, wantErr %v", err, tt.wantErr)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func Test_validateMetadataGenesisValidatorsRoot(t *testing.T) {
|
||||
goodRoot := [32]byte{1}
|
||||
goodStr := make([]byte, hex.EncodedLen(len(goodRoot)))
|
||||
hex.Encode(goodStr, goodRoot[:])
|
||||
secondRoot := [32]byte{2}
|
||||
secondStr := make([]byte, hex.EncodedLen(len(secondRoot)))
|
||||
hex.Encode(secondStr, secondRoot[:])
|
||||
|
||||
tests := []struct {
|
||||
name string
|
||||
interchangeJSON *format.EIPSlashingProtectionFormat
|
||||
dbGenesisValidatorsRoot []byte
|
||||
wantErr bool
|
||||
}{
|
||||
{
|
||||
name: "Same genesis roots should not fail",
|
||||
interchangeJSON: &format.EIPSlashingProtectionFormat{
|
||||
Metadata: struct {
|
||||
InterchangeFormatVersion string `json:"interchange_format_version"`
|
||||
GenesisValidatorsRoot string `json:"genesis_validators_root"`
|
||||
}{
|
||||
InterchangeFormatVersion: format.InterchangeFormatVersion,
|
||||
GenesisValidatorsRoot: string(goodStr),
|
||||
},
|
||||
},
|
||||
dbGenesisValidatorsRoot: goodRoot[:],
|
||||
wantErr: false,
|
||||
},
|
||||
{
|
||||
name: "Different genesis roots should fail",
|
||||
interchangeJSON: &format.EIPSlashingProtectionFormat{
|
||||
Metadata: struct {
|
||||
InterchangeFormatVersion string `json:"interchange_format_version"`
|
||||
GenesisValidatorsRoot string `json:"genesis_validators_root"`
|
||||
}{
|
||||
InterchangeFormatVersion: format.InterchangeFormatVersion,
|
||||
GenesisValidatorsRoot: string(secondStr),
|
||||
},
|
||||
},
|
||||
dbGenesisValidatorsRoot: goodRoot[:],
|
||||
wantErr: true,
|
||||
},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
validatorDB := NewValidatorDBMock()
|
||||
require.NoError(t, validatorDB.SaveGenesisValidatorsRoot(ctx, tt.dbGenesisValidatorsRoot))
|
||||
err := ValidateMetadata(ctx, validatorDB, tt.interchangeJSON)
|
||||
if tt.wantErr {
|
||||
require.ErrorContains(t, "genesis validators root doesn't match the one that is stored", err)
|
||||
} else {
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
})
|
||||
}
|
||||
}
|
||||
@@ -1,4 +1,4 @@
package validator_helpers
package helpers

import (
	"time"

@@ -25,10 +25,18 @@ import (
func (km *Keymanager) listenForAccountChanges(ctx context.Context) {
	debounceFileChangesInterval := features.Get().KeystoreImportDebounceInterval
	accountsFilePath := filepath.Join(km.wallet.AccountsDir(), AccountsPath, AccountsKeystoreFileName)
	if !file.Exists(accountsFilePath) {
	exists, err := file.Exists(accountsFilePath, file.Regular)

	if err != nil {
		log.WithError(err).Errorf("Could not check if file exists: %s", accountsFilePath)
		return
	}

	if !exists {
		log.Warnf("Starting without accounts located in wallet at %s", accountsFilePath)
		return
	}

	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.WithError(err).Error("Could not initialize file watcher")

@@ -58,6 +58,8 @@ go_library(
        "//runtime/version:go_default_library",
        "//validator/accounts/wallet:go_default_library",
        "//validator/client:go_default_library",
        "//validator/db:go_default_library",
        "//validator/db/filesystem:go_default_library",
        "//validator/db/iface:go_default_library",
        "//validator/db/kv:go_default_library",
        "//validator/graffiti:go_default_library",

@@ -11,6 +11,7 @@ import (
	"net/url"
	"os"
	"os/signal"
	"path"
	"path/filepath"
	"strings"
	"sync"
@@ -45,6 +46,8 @@ import (
	"github.com/prysmaticlabs/prysm/v5/runtime/version"
	"github.com/prysmaticlabs/prysm/v5/validator/accounts/wallet"
	"github.com/prysmaticlabs/prysm/v5/validator/client"
	"github.com/prysmaticlabs/prysm/v5/validator/db"
	"github.com/prysmaticlabs/prysm/v5/validator/db/filesystem"
	"github.com/prysmaticlabs/prysm/v5/validator/db/iface"
	"github.com/prysmaticlabs/prysm/v5/validator/db/kv"
	g "github.com/prysmaticlabs/prysm/v5/validator/graffiti"
@@ -63,7 +66,7 @@ type ValidatorClient struct {
	cliCtx   *cli.Context
	ctx      context.Context
	cancel   context.CancelFunc
	db       *kv.Store
	db       iface.ValidatorDB
	services *runtime.ServiceRegistry // Lifecycle and service store.
	lock     sync.RWMutex
	wallet   *wallet.Wallet

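Since the client now stores the interface type rather than *kv.Store, either backend can be assigned to the field. A compile-time restatement of that assumption (illustrative fragment; per this PR both stores are expected to satisfy iface.ValidatorDB):

// Illustrative only: both backends satisfy the interface now stored in ValidatorClient.db.
var _ iface.ValidatorDB = (*kv.Store)(nil)
var _ iface.ValidatorDB = (*filesystem.Store)(nil)
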
@@ -211,9 +214,14 @@ func (c *ValidatorClient) getLegacyDatabaseLocation(
	dataDir string,
	dataFile string,
	walletDir string,
) (string, string) {
	if isInteropNumValidatorsSet || dataDir != cmd.DefaultDataDir() || file.Exists(dataFile) || c.wallet == nil {
		return dataDir, dataFile
) (string, string, error) {
	exists, err := file.Exists(dataFile, file.Regular)
	if err != nil {
		return "", "", errors.Wrapf(err, "could not check if file exists: %s", dataFile)
	}

	if isInteropNumValidatorsSet || dataDir != cmd.DefaultDataDir() || exists || c.wallet == nil {
		return dataDir, dataFile, nil
	}

	// We look in the previous, legacy directories.
@@ -225,7 +233,12 @@ func (c *ValidatorClient) getLegacyDatabaseLocation(

	legacyDataFile := filepath.Join(legacyDataDir, kv.ProtectionDbFileName)

	if file.Exists(legacyDataFile) {
	legacyDataFileExists, err := file.Exists(legacyDataFile, file.Regular)
	if err != nil {
		return "", "", errors.Wrapf(err, "could not check if file exists: %s", legacyDataFile)
	}

	if legacyDataFileExists {
		log.Infof(`Database not found in the --datadir directory (%s)
but found in the --wallet-dir directory (%s),
which was the legacy default.
@@ -239,13 +252,10 @@ func (c *ValidatorClient) getLegacyDatabaseLocation(
		dataFile = legacyDataFile
	}

	return dataDir, dataFile
	return dataDir, dataFile, nil
}

func (c *ValidatorClient) initializeFromCLI(cliCtx *cli.Context, router *mux.Router) error {
|
||||
dataDir := cliCtx.String(cmd.DataDirFlag.Name)
|
||||
dataFile := filepath.Join(dataDir, kv.ProtectionDbFileName)
|
||||
walletDir := cliCtx.String(flags.WalletDirFlag.Name)
|
||||
isInteropNumValidatorsSet := cliCtx.IsSet(flags.InteropNumValidators.Name)
|
||||
isWeb3SignerURLFlagSet := cliCtx.IsSet(flags.Web3SignerURLFlag.Name)
|
||||
|
||||
@@ -269,39 +279,8 @@ func (c *ValidatorClient) initializeFromCLI(cliCtx *cli.Context, router *mux.Rou
|
||||
}
|
||||
}
|
||||
|
||||
// Workaround for https://github.com/prysmaticlabs/prysm/issues/13391
|
||||
dataDir, dataFile = c.getLegacyDatabaseLocation(
|
||||
isInteropNumValidatorsSet,
|
||||
isWeb3SignerURLFlagSet,
|
||||
dataDir,
|
||||
dataFile,
|
||||
walletDir,
|
||||
)
|
||||
|
||||
clearFlag := cliCtx.Bool(cmd.ClearDB.Name)
|
||||
forceClearFlag := cliCtx.Bool(cmd.ForceClearDB.Name)
|
||||
if clearFlag || forceClearFlag {
|
||||
if err := clearDB(cliCtx.Context, dataDir, forceClearFlag); err != nil {
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
if !file.Exists(dataFile) {
|
||||
log.Warnf("Slashing protection file %s is missing.\n"+
|
||||
"If you changed your --datadir, please copy your previous \"validator.db\" file into your current --datadir.\n"+
|
||||
"Disregard this warning if this is the first time you are running this set of keys.", dataFile)
|
||||
}
|
||||
}
|
||||
log.WithField("databasePath", dataDir).Info("Checking DB")
|
||||
|
||||
valDB, err := kv.NewKVStore(cliCtx.Context, dataDir, &kv.Config{
|
||||
PubKeys: nil,
|
||||
})
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not initialize db")
|
||||
}
|
||||
c.db = valDB
|
||||
if err := valDB.RunUpMigrations(cliCtx.Context); err != nil {
|
||||
return errors.Wrap(err, "could not run database migration")
|
||||
if err := c.initializeDB(cliCtx); err != nil {
|
||||
return errors.Wrapf(err, "could not initialize database")
|
||||
}
|
||||
|
||||
if !cliCtx.Bool(cmd.DisableMonitoringFlag.Name) {
|
||||
@@ -324,12 +303,6 @@ func (c *ValidatorClient) initializeFromCLI(cliCtx *cli.Context, router *mux.Rou
|
||||
}
|
||||
|
||||
func (c *ValidatorClient) initializeForWeb(cliCtx *cli.Context, router *mux.Router) error {
|
||||
dataDir := cliCtx.String(cmd.DataDirFlag.Name)
|
||||
dataFile := filepath.Join(dataDir, kv.ProtectionDbFileName)
|
||||
walletDir := cliCtx.String(flags.WalletDirFlag.Name)
|
||||
isInteropNumValidatorsSet := cliCtx.IsSet(flags.InteropNumValidators.Name)
|
||||
isWeb3SignerURLFlagSet := cliCtx.IsSet(flags.Web3SignerURLFlag.Name)
|
||||
|
||||
if cliCtx.IsSet(flags.Web3SignerURLFlag.Name) {
|
||||
// Custom Check For Web3Signer
|
||||
c.wallet = wallet.NewWalletForWeb3Signer()
|
||||
@@ -349,33 +322,8 @@ func (c *ValidatorClient) initializeForWeb(cliCtx *cli.Context, router *mux.Rout
|
||||
c.wallet = w
|
||||
}
|
||||
|
||||
// Workaround for https://github.com/prysmaticlabs/prysm/issues/13391
|
||||
dataDir, _ = c.getLegacyDatabaseLocation(
|
||||
isInteropNumValidatorsSet,
|
||||
isWeb3SignerURLFlagSet,
|
||||
dataDir,
|
||||
dataFile,
|
||||
walletDir,
|
||||
)
|
||||
|
||||
clearFlag := cliCtx.Bool(cmd.ClearDB.Name)
|
||||
forceClearFlag := cliCtx.Bool(cmd.ForceClearDB.Name)
|
||||
|
||||
if clearFlag || forceClearFlag {
|
||||
if err := clearDB(cliCtx.Context, dataDir, forceClearFlag); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
log.WithField("databasePath", dataDir).Info("Checking DB")
|
||||
valDB, err := kv.NewKVStore(cliCtx.Context, dataDir, &kv.Config{
|
||||
PubKeys: nil,
|
||||
})
|
||||
if err != nil {
|
||||
return errors.Wrap(err, "could not initialize db")
|
||||
}
|
||||
c.db = valDB
|
||||
if err := valDB.RunUpMigrations(cliCtx.Context); err != nil {
|
||||
return errors.Wrap(err, "could not run database migration")
|
||||
if err := c.initializeDB(cliCtx); err != nil {
|
||||
return errors.Wrapf(err, "could not initialize database")
|
||||
}
|
||||
|
||||
if !cliCtx.Bool(cmd.DisableMonitoringFlag.Name) {
|
||||
@@ -402,6 +350,119 @@ func (c *ValidatorClient) initializeForWeb(cliCtx *cli.Context, router *mux.Rout
	return nil
}

func (c *ValidatorClient) initializeDB(cliCtx *cli.Context) error {
	fileSystemDataDir := cliCtx.String(cmd.DataDirFlag.Name)
	kvDataDir := cliCtx.String(cmd.DataDirFlag.Name)
	kvDataFile := filepath.Join(kvDataDir, kv.ProtectionDbFileName)
	walletDir := cliCtx.String(flags.WalletDirFlag.Name)
	isInteropNumValidatorsSet := cliCtx.IsSet(flags.InteropNumValidators.Name)
	isWeb3SignerURLFlagSet := cliCtx.IsSet(flags.Web3SignerURLFlag.Name)
	clearFlag := cliCtx.Bool(cmd.ClearDB.Name)
	forceClearFlag := cliCtx.Bool(cmd.ForceClearDB.Name)

	// Workaround for https://github.com/prysmaticlabs/prysm/issues/13391
	kvDataDir, _, err := c.getLegacyDatabaseLocation(
		isInteropNumValidatorsSet,
		isWeb3SignerURLFlagSet,
		kvDataDir,
		kvDataFile,
		walletDir,
	)

	if err != nil {
		return errors.Wrap(err, "could not get legacy database location")
	}

	// Check if minimal slashing protection is requested.
	isMinimalSlashingProtectionRequested := cliCtx.Bool(features.EnableMinimalSlashingProtection.Name)

	if clearFlag || forceClearFlag {
		var err error

		if isMinimalSlashingProtectionRequested {
			err = clearDB(cliCtx.Context, fileSystemDataDir, forceClearFlag, true)
		} else {
			err = clearDB(cliCtx.Context, kvDataDir, forceClearFlag, false)
			// Reset the BoltDB datadir to the requested location, so the new database is no longer created in the legacy location.
			kvDataDir = cliCtx.String(cmd.DataDirFlag.Name)
		}

		if err != nil {
			return errors.Wrap(err, "could not clear database")
		}
	}

	// Check if a minimal database exists.
	minimalDatabasePath := path.Join(fileSystemDataDir, filesystem.DatabaseDirName)
	minimalDatabaseExists, err := file.Exists(minimalDatabasePath, file.Directory)
	if err != nil {
		return errors.Wrapf(err, "could not check if minimal slashing protection database exists")
	}

	// Check if a complete database exists.
	completeDatabasePath := path.Join(kvDataDir, kv.ProtectionDbFileName)
	completeDatabaseExists, err := file.Exists(completeDatabasePath, file.Regular)
	if err != nil {
		return errors.Wrapf(err, "could not check if complete slashing protection database exists")
	}

	// If both a complete and a minimal database exist, exit with a fatal error.
	if completeDatabaseExists && minimalDatabaseExists {
		log.Fatalf(
			"Both complete (%s) and minimal (%s) slashing protection databases exist. Please delete one of them.",
			path.Join(kvDataDir, kv.ProtectionDbFileName),
			path.Join(fileSystemDataDir, filesystem.DatabaseDirName),
		)
		return nil
	}

	// If a minimal database exists AND complete slashing protection is requested, convert the minimal
	// database to a complete one and use the complete database.
	if !isMinimalSlashingProtectionRequested && minimalDatabaseExists {
		log.Warning("Complete slashing protection database requested, while minimal slashing protection database currently used. Converting.")

		if err := db.ConvertDatabase(cliCtx.Context, fileSystemDataDir, kvDataDir, true); err != nil {
			return errors.Wrapf(err, "could not convert minimal slashing protection database to complete slashing protection database")
		}
	}

	// If a complete database exists AND minimal slashing protection is requested, use the complete database.
	useMinimalSlashingProtection := isMinimalSlashingProtectionRequested
	if isMinimalSlashingProtectionRequested && completeDatabaseExists {
		log.Warningf(`Minimal slashing protection database requested, while complete slashing protection database currently used.
Will continue to use complete slashing protection database.
Please convert your database by using 'validator db convert-complete-to-minimal --source-data-dir %s --target-data-dir %s'`,
			kvDataDir, fileSystemDataDir,
		)

		useMinimalSlashingProtection = false
	}

	// Create / get the database.
	var valDB iface.ValidatorDB
	if useMinimalSlashingProtection {
		log.WithField("databasePath", fileSystemDataDir).Info("Checking DB")
		valDB, err = filesystem.NewStore(fileSystemDataDir, nil)
	} else {
		log.WithField("databasePath", kvDataDir).Info("Checking DB")
		valDB, err = kv.NewKVStore(cliCtx.Context, kvDataDir, nil)
	}

	if err != nil {
		return errors.Wrap(err, "could not create validator database")
	}

	// Assign the database to the validator client.
	c.db = valDB

	// Migrate the database.
	if err := valDB.RunUpMigrations(cliCtx.Context); err != nil {
		return errors.Wrap(err, "could not run database migration")
	}

	return nil
}

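A hypothetical, stand-alone restatement of the backend-selection rule initializeDB applies above (not code from the PR; function and package names are illustrative):

package nodeexample // hypothetical package, for illustration only

// selectBackend summarizes initializeDB's decision matrix:
//   both DBs on disk                         -> fatal error
//   complete requested, minimal DB on disk   -> convert minimal to complete, use complete
//   minimal requested, complete DB on disk   -> keep using complete (warn, suggest conversion)
//   otherwise                                -> use whatever was requested
func selectBackend(minimalRequested, minimalExists, completeExists bool) (useMinimal, convertToComplete, fatal bool) {
	switch {
	case minimalExists && completeExists:
		return false, false, true
	case !minimalRequested && minimalExists:
		return false, true, false
	case minimalRequested && completeExists:
		return false, false, false
	default:
		return minimalRequested, false, false
	}
}
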
func (c *ValidatorClient) registerPrometheusService(cliCtx *cli.Context) error {
	var additionalHandlers []prometheus.Handler
	if cliCtx.IsSet(cmd.EnableBackupWebhookFlag.Name) {
@@ -683,7 +744,12 @@ func (c *ValidatorClient) registerRPCGatewayService(router *mux.Router) error {
func setWalletPasswordFilePath(cliCtx *cli.Context) error {
	walletDir := cliCtx.String(flags.WalletDirFlag.Name)
	defaultWalletPasswordFilePath := filepath.Join(walletDir, wallet.DefaultWalletPasswordFile)
	if file.Exists(defaultWalletPasswordFilePath) {
	exists, err := file.Exists(defaultWalletPasswordFilePath, file.Regular)
	if err != nil {
		return errors.Wrap(err, "could not check if default wallet password file exists")
	}

	if exists {
		// Ensure file has proper permissions.
		hasPerms, err := file.HasReadWritePermissions(defaultWalletPasswordFilePath)
		if err != nil {
@@ -704,8 +770,12 @@ func setWalletPasswordFilePath(cliCtx *cli.Context) error {
	return nil
}

func clearDB(ctx context.Context, dataDir string, force bool) error {
	var err error
func clearDB(ctx context.Context, dataDir string, force bool, isDatabaseMinimal bool) error {
	var (
		valDB iface.ValidatorDB
		err   error
	)

	clearDBConfirmed := force

	if !force {
@@ -719,10 +789,16 @@ func clearDB(ctx context.Context, dataDir string, force bool) error {
	}

	if clearDBConfirmed {
		valDB, err := kv.NewKVStore(ctx, dataDir, &kv.Config{})
		if err != nil {
			return errors.Wrapf(err, "Could not create DB in dir %s", dataDir)
		if isDatabaseMinimal {
			valDB, err = filesystem.NewStore(dataDir, nil)
		} else {
			valDB, err = kv.NewKVStore(ctx, dataDir, nil)
		}

		if err != nil {
			return errors.Wrap(err, "could not create validator database")
		}

		if err := valDB.Close(); err != nil {
			return errors.Wrapf(err, "could not close DB in dir %s", dataDir)
		}

@@ -3,6 +3,7 @@ package node
|
||||
import (
|
||||
"context"
|
||||
"flag"
|
||||
"fmt"
|
||||
"os"
|
||||
"path"
|
||||
"path/filepath"
|
||||
@@ -178,7 +179,7 @@ func TestGetLegacyDatabaseLocation(t *testing.T) {
|
||||
for _, tt := range testCases {
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
validatorClient := &ValidatorClient{wallet: tt.wallet}
|
||||
actualDataDir, actualDataFile := validatorClient.getLegacyDatabaseLocation(
|
||||
actualDataDir, actualDataFile, err := validatorClient.getLegacyDatabaseLocation(
|
||||
tt.isInteropNumValidatorsSet,
|
||||
tt.isWeb3SignerURLFlagSet,
|
||||
tt.dataDir,
|
||||
@@ -186,6 +187,8 @@ func TestGetLegacyDatabaseLocation(t *testing.T) {
|
||||
tt.walletDir,
|
||||
)
|
||||
|
||||
require.NoError(t, err, "Failed to get legacy database location")
|
||||
|
||||
assert.Equal(t, tt.expectedDataDir, actualDataDir, "data dir should be equal")
|
||||
assert.Equal(t, tt.expectedDataFile, actualDataFile, "data file should be equal")
|
||||
})
|
||||
@@ -196,10 +199,14 @@ func TestGetLegacyDatabaseLocation(t *testing.T) {
|
||||
|
||||
// TestClearDB tests clearing the database
|
||||
func TestClearDB(t *testing.T) {
|
||||
hook := logtest.NewGlobal()
|
||||
tmp := filepath.Join(t.TempDir(), "datadirtest")
|
||||
require.NoError(t, clearDB(context.Background(), tmp, true))
|
||||
require.LogsContain(t, hook, "Removing database")
|
||||
for _, isMinimalDatabase := range []bool{false, true} {
|
||||
t.Run(fmt.Sprintf("isMinimalDatabase=%v", isMinimalDatabase), func(t *testing.T) {
|
||||
hook := logtest.NewGlobal()
|
||||
tmp := filepath.Join(t.TempDir(), "datadirtest")
|
||||
require.NoError(t, clearDB(context.Background(), tmp, true, isMinimalDatabase))
|
||||
require.LogsContain(t, hook, "Removing database")
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
// TestWeb3SignerConfig tests the web3 signer config returns the correct values.
|
||||
|
||||
@@ -129,6 +129,9 @@ go_test(
        "//validator/accounts/testing:go_default_library",
        "//validator/accounts/wallet:go_default_library",
        "//validator/client:go_default_library",
        "//validator/db/common:go_default_library",
        "//validator/db/filesystem:go_default_library",
        "//validator/db/iface:go_default_library",
        "//validator/db/kv:go_default_library",
        "//validator/db/testing:go_default_library",
        "//validator/keymanager:go_default_library",

@@ -51,7 +51,12 @@ func CreateAuthToken(walletDirPath, validatorWebAddr string) error {
// of the URL. This token is then used as the bearer token for jwt auth.
func (s *Server) initializeAuthToken(walletDir string) (string, error) {
	authTokenFile := filepath.Join(walletDir, AuthTokenFileName)
	if file.Exists(authTokenFile) {
	exists, err := file.Exists(authTokenFile, file.Regular)
	if err != nil {
		return "", errors.Wrapf(err, "could not check if file exists: %s", authTokenFile)
	}

	if exists {
		// #nosec G304
		f, err := os.Open(authTokenFile)
		if err != nil {

@@ -366,7 +366,12 @@ func writeWalletPasswordToDisk(walletDir, password string) error {
		return nil
	}
	passwordFilePath := filepath.Join(walletDir, wallet.DefaultWalletPasswordFile)
	if file.Exists(passwordFilePath) {
	exists, err := file.Exists(passwordFilePath, file.Regular)
	if err != nil {
		return errors.Wrapf(err, "could not check if file exists: %s", passwordFilePath)
	}

	if exists {
		return fmt.Errorf("cannot write wallet password file as it already exists %s", passwordFilePath)
	}
	return file.WriteFile(passwordFilePath, []byte(password))

@@ -268,7 +268,9 @@ func TestServer_RecoverWallet_Derived(t *testing.T) {
|
||||
|
||||
// Password File should have been written.
|
||||
passwordFilePath := filepath.Join(localWalletDir, wallet.DefaultWalletPasswordFile)
|
||||
assert.Equal(t, true, file.Exists(passwordFilePath))
|
||||
exists, err := file.Exists(passwordFilePath, file.Regular)
|
||||
require.NoError(t, err, "could not check if password file exists")
|
||||
assert.Equal(t, true, exists)
|
||||
|
||||
// Attempting to write again should trigger an error.
|
||||
err = writeWalletPasswordToDisk(localWalletDir, "somepassword")
|
||||
@@ -474,7 +476,9 @@ func Test_writeWalletPasswordToDisk(t *testing.T) {
|
||||
|
||||
// Expected a silent failure if the feature flag is not enabled.
|
||||
passwordFilePath := filepath.Join(walletDir, wallet.DefaultWalletPasswordFile)
|
||||
assert.Equal(t, false, file.Exists(passwordFilePath))
|
||||
exists, err := file.Exists(passwordFilePath, file.Regular)
|
||||
require.NoError(t, err, "could not check if password file exists")
|
||||
assert.Equal(t, false, exists, "password file should not exist")
|
||||
resetCfg = features.InitWithReset(&features.Flags{
|
||||
WriteWalletPasswordOnWebOnboarding: true,
|
||||
})
|
||||
@@ -483,7 +487,9 @@ func Test_writeWalletPasswordToDisk(t *testing.T) {
|
||||
require.NoError(t, err)
|
||||
|
||||
// File should have been written.
|
||||
assert.Equal(t, true, file.Exists(passwordFilePath))
|
||||
exists, err = file.Exists(passwordFilePath, file.Regular)
|
||||
require.NoError(t, err, "could not check if password file exists")
|
||||
assert.Equal(t, true, exists, "password file should exist")
|
||||
|
||||
// Attempting to write again should trigger an error.
|
||||
err = writeWalletPasswordToDisk(walletDir, "somepassword")
|
||||
|
||||
@@ -21,8 +21,13 @@ func (s *Server) Initialize(w http.ResponseWriter, r *http.Request) {
		return
	}
	authTokenPath := filepath.Join(s.walletDir, AuthTokenFileName)
	exists, err := file.Exists(authTokenPath, file.Regular)
	if err != nil {
		httputil.HandleError(w, errors.Wrap(err, "Could not check if auth token exists").Error(), http.StatusInternalServerError)
		return
	}
	httputil.WriteJson(w, &InitializeAuthResponse{
		HasSignedUp: file.Exists(authTokenPath),
		HasSignedUp: exists,
		HasWallet:   walletExists,
	})
}

@@ -131,9 +131,7 @@ func (s *Server) ImportKeystores(w http.ResponseWriter, r *http.Request) {
		keystores[i] = k
	}
	if req.SlashingProtection != "" {
		if err := slashingprotection.ImportStandardProtectionJSON(
			ctx, s.valDB, bytes.NewBufferString(req.SlashingProtection),
		); err != nil {
		if s.valDB == nil || s.valDB.ImportStandardProtectionJSON(ctx, bytes.NewBufferString(req.SlashingProtection)) != nil {
			statuses := make([]*keymanager.KeyStatus, len(req.Keystores))
			for i := 0; i < len(req.Keystores); i++ {
				statuses[i] = &keymanager.KeyStatus{

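The handler now delegates the EIP-3076 import to whichever ValidatorDB implementation is configured, instead of going through the slashing-protection-history package. A hedged sketch of that call path (package and function names are hypothetical; only the interface method comes from this PR):

package rpcexample // hypothetical package, for illustration only

import (
	"bytes"
	"context"

	"github.com/pkg/errors"
	"github.com/prysmaticlabs/prysm/v5/validator/db/iface"
)

// importProtection feeds the interchange JSON to the configured database.
// The real handler treats a nil database or a failed import as an error and
// marks every requested key with an error status.
func importProtection(ctx context.Context, valDB iface.ValidatorDB, slashingProtectionJSON string) error {
	if valDB == nil {
		return errors.New("no validator database configured")
	}
	return valDB.ImportStandardProtectionJSON(ctx, bytes.NewBufferString(slashingProtectionJSON))
}
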
@@ -29,6 +29,9 @@ import (
|
||||
mock "github.com/prysmaticlabs/prysm/v5/validator/accounts/testing"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/accounts/wallet"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/client"
|
||||
dbCommon "github.com/prysmaticlabs/prysm/v5/validator/db/common"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/filesystem"
|
||||
DBIface "github.com/prysmaticlabs/prysm/v5/validator/db/iface"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/db/kv"
|
||||
dbtest "github.com/prysmaticlabs/prysm/v5/validator/db/testing"
|
||||
"github.com/prysmaticlabs/prysm/v5/validator/keymanager"
|
||||
@@ -257,73 +260,83 @@ func TestServer_ImportKeystores(t *testing.T) {
|
||||
require.Equal(t, keymanager.StatusError, st.Status)
|
||||
}
|
||||
})
|
||||
t.Run("returns proper statuses for keystores in request", func(t *testing.T) {
|
||||
numKeystores := 5
|
||||
password := "12345678"
|
||||
keystores := make([]*keymanager.Keystore, numKeystores)
|
||||
passwords := make([]string, numKeystores)
|
||||
publicKeys := make([][fieldparams.BLSPubkeyLength]byte, numKeystores)
|
||||
for i := 0; i < numKeystores; i++ {
|
||||
keystores[i] = createRandomKeystore(t, password)
|
||||
pubKey, err := hexutil.Decode("0x" + keystores[i].Pubkey)
|
||||
require.NoError(t, err)
|
||||
publicKeys[i] = bytesutil.ToBytes48(pubKey)
|
||||
passwords[i] = password
|
||||
}
|
||||
|
||||
// Create a validator database.
|
||||
validatorDB, err := kv.NewKVStore(ctx, defaultWalletPath, &kv.Config{
|
||||
PubKeys: publicKeys,
|
||||
for _, isSlashingProtectionMinimal := range []bool{false, true} {
|
||||
t.Run(fmt.Sprintf("returns proper statuses for keystores in request/isSlashingProtectionMininal:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
numKeystores := 5
|
||||
password := "12345678"
|
||||
keystores := make([]*keymanager.Keystore, numKeystores)
|
||||
passwords := make([]string, numKeystores)
|
||||
publicKeys := make([][fieldparams.BLSPubkeyLength]byte, numKeystores)
|
||||
for i := 0; i < numKeystores; i++ {
|
||||
keystores[i] = createRandomKeystore(t, password)
|
||||
pubKey, err := hexutil.Decode("0x" + keystores[i].Pubkey)
|
||||
require.NoError(t, err)
|
||||
publicKeys[i] = bytesutil.ToBytes48(pubKey)
|
||||
passwords[i] = password
|
||||
}
|
||||
|
||||
// Create a validator database.
|
||||
var validatorDB DBIface.ValidatorDB
|
||||
if isSlashingProtectionMinimal {
|
||||
validatorDB, err = filesystem.NewStore(defaultWalletPath, &filesystem.Config{
|
||||
PubKeys: publicKeys,
|
||||
})
|
||||
} else {
|
||||
validatorDB, err = kv.NewKVStore(ctx, defaultWalletPath, &kv.Config{
|
||||
PubKeys: publicKeys,
|
||||
})
|
||||
}
|
||||
require.NoError(t, err)
|
||||
s.valDB = validatorDB
|
||||
|
||||
// Have to close it after import is done otherwise it complains db is not open.
|
||||
defer func() {
|
||||
require.NoError(t, validatorDB.Close())
|
||||
}()
|
||||
encodedKeystores := make([]string, numKeystores)
|
||||
for i := 0; i < numKeystores; i++ {
|
||||
enc, err := json.Marshal(keystores[i])
|
||||
require.NoError(t, err)
|
||||
encodedKeystores[i] = string(enc)
|
||||
}
|
||||
|
||||
// Generate mock slashing history.
|
||||
attestingHistory := make([][]*dbCommon.AttestationRecord, 0)
|
||||
proposalHistory := make([]dbCommon.ProposalHistoryForPubkey, len(publicKeys))
|
||||
for i := 0; i < len(publicKeys); i++ {
|
||||
proposalHistory[i].Proposals = make([]dbCommon.Proposal, 0)
|
||||
}
|
||||
mockJSON, err := mocks.MockSlashingProtectionJSON(publicKeys, attestingHistory, proposalHistory)
|
||||
require.NoError(t, err)
|
||||
|
||||
// JSON encode the protection JSON and save it.
|
||||
encodedSlashingProtection, err := json.Marshal(mockJSON)
|
||||
require.NoError(t, err)
|
||||
|
||||
request := &ImportKeystoresRequest{
|
||||
Keystores: encodedKeystores,
|
||||
Passwords: passwords,
|
||||
SlashingProtection: string(encodedSlashingProtection),
|
||||
}
|
||||
|
||||
var buf bytes.Buffer
|
||||
err = json.NewEncoder(&buf).Encode(request)
|
||||
require.NoError(t, err)
|
||||
|
||||
req := httptest.NewRequest(http.MethodPost, fmt.Sprintf("/eth/v1/keystores"), &buf)
|
||||
wr := httptest.NewRecorder()
|
||||
wr.Body = &bytes.Buffer{}
|
||||
s.ImportKeystores(wr, req)
|
||||
require.Equal(t, http.StatusOK, wr.Code)
|
||||
resp := &ImportKeystoresResponse{}
|
||||
require.NoError(t, json.Unmarshal(wr.Body.Bytes(), resp))
|
||||
require.Equal(t, numKeystores, len(resp.Data))
|
||||
for _, st := range resp.Data {
|
||||
require.Equal(t, keymanager.StatusImported, st.Status)
|
||||
}
|
||||
})
|
||||
require.NoError(t, err)
|
||||
s.valDB = validatorDB
|
||||
|
||||
// Have to close it after import is done otherwise it complains db is not open.
|
||||
defer func() {
|
||||
require.NoError(t, validatorDB.Close())
|
||||
}()
|
||||
encodedKeystores := make([]string, numKeystores)
|
||||
for i := 0; i < numKeystores; i++ {
|
||||
enc, err := json.Marshal(keystores[i])
|
||||
require.NoError(t, err)
|
||||
encodedKeystores[i] = string(enc)
|
||||
}
|
||||
|
||||
// Generate mock slashing history.
|
||||
attestingHistory := make([][]*kv.AttestationRecord, 0)
|
||||
proposalHistory := make([]kv.ProposalHistoryForPubkey, len(publicKeys))
|
||||
for i := 0; i < len(publicKeys); i++ {
|
||||
proposalHistory[i].Proposals = make([]kv.Proposal, 0)
|
||||
}
|
||||
mockJSON, err := mocks.MockSlashingProtectionJSON(publicKeys, attestingHistory, proposalHistory)
|
||||
require.NoError(t, err)
|
||||
|
||||
// JSON encode the protection JSON and save it.
|
||||
encodedSlashingProtection, err := json.Marshal(mockJSON)
|
||||
require.NoError(t, err)
|
||||
|
||||
request := &ImportKeystoresRequest{
|
||||
Keystores: encodedKeystores,
|
||||
Passwords: passwords,
|
||||
SlashingProtection: string(encodedSlashingProtection),
|
||||
}
|
||||
|
||||
var buf bytes.Buffer
|
||||
err = json.NewEncoder(&buf).Encode(request)
|
||||
require.NoError(t, err)
|
||||
|
||||
req := httptest.NewRequest(http.MethodPost, fmt.Sprintf("/eth/v1/keystores"), &buf)
|
||||
wr := httptest.NewRecorder()
|
||||
wr.Body = &bytes.Buffer{}
|
||||
s.ImportKeystores(wr, req)
|
||||
require.Equal(t, http.StatusOK, wr.Code)
|
||||
resp := &ImportKeystoresResponse{}
|
||||
require.NoError(t, json.Unmarshal(wr.Body.Bytes(), resp))
|
||||
require.Equal(t, numKeystores, len(resp.Data))
|
||||
for _, st := range resp.Data {
|
||||
require.Equal(t, keymanager.StatusImported, st.Status)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestServer_ImportKeystores_WrongKeymanagerKind(t *testing.T) {
|
||||
@@ -372,215 +385,236 @@ func TestServer_ImportKeystores_WrongKeymanagerKind(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestServer_DeleteKeystores(t *testing.T) {
|
||||
ctx := context.Background()
|
||||
srv := setupServerWithWallet(t)
|
||||
for _, isSlashingProtectionMinimal := range []bool{false, true} {
|
||||
ctx := context.Background()
|
||||
srv := setupServerWithWallet(t)
|
||||
|
||||
// We recover 3 accounts from a test mnemonic.
|
||||
numAccounts := 3
|
||||
km, er := srv.validatorService.Keymanager()
|
||||
require.NoError(t, er)
|
||||
dr, ok := km.(*derived.Keymanager)
|
||||
require.Equal(t, true, ok)
|
||||
err := dr.RecoverAccountsFromMnemonic(ctx, mocks.TestMnemonic, derived.DefaultMnemonicLanguage, "", numAccounts)
|
||||
require.NoError(t, err)
|
||||
publicKeys, err := dr.FetchValidatingPublicKeys(ctx)
|
||||
require.NoError(t, err)
|
||||
// We recover 3 accounts from a test mnemonic.
|
||||
numAccounts := 3
|
||||
km, er := srv.validatorService.Keymanager()
|
||||
require.NoError(t, er)
|
||||
dr, ok := km.(*derived.Keymanager)
|
||||
require.Equal(t, true, ok)
|
||||
err := dr.RecoverAccountsFromMnemonic(ctx, mocks.TestMnemonic, derived.DefaultMnemonicLanguage, "", numAccounts)
|
||||
require.NoError(t, err)
|
||||
publicKeys, err := dr.FetchValidatingPublicKeys(ctx)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Create a validator database.
|
||||
validatorDB, err := kv.NewKVStore(ctx, defaultWalletPath, &kv.Config{
|
||||
PubKeys: publicKeys,
|
||||
})
|
||||
require.NoError(t, err)
|
||||
srv.valDB = validatorDB
|
||||
|
||||
// Have to close it after import is done otherwise it complains db is not open.
|
||||
defer func() {
|
||||
require.NoError(t, validatorDB.Close())
|
||||
}()
|
||||
|
||||
// Generate mock slashing history.
|
||||
attestingHistory := make([][]*kv.AttestationRecord, 0)
|
||||
proposalHistory := make([]kv.ProposalHistoryForPubkey, len(publicKeys))
|
||||
for i := 0; i < len(publicKeys); i++ {
|
||||
proposalHistory[i].Proposals = make([]kv.Proposal, 0)
|
||||
}
|
||||
mockJSON, err := mocks.MockSlashingProtectionJSON(publicKeys, attestingHistory, proposalHistory)
|
||||
require.NoError(t, err)
|
||||
|
||||
// JSON encode the protection JSON and save it.
|
||||
encoded, err := json.Marshal(mockJSON)
|
||||
require.NoError(t, err)
|
||||
request := &ImportSlashingProtectionRequest{
|
||||
SlashingProtectionJson: string(encoded),
|
||||
}
|
||||
var buf bytes.Buffer
|
||||
err = json.NewEncoder(&buf).Encode(request)
|
||||
require.NoError(t, err)
|
||||
|
||||
req := httptest.NewRequest(http.MethodPost, "/v2/validator/slashing-protection/import", &buf)
|
||||
wr := httptest.NewRecorder()
|
||||
srv.ImportSlashingProtection(wr, req)
|
||||
require.Equal(t, http.StatusOK, wr.Code)
|
||||
t.Run("no slashing protection response if no keys in request even if we have a history in DB", func(t *testing.T) {
|
||||
request := &DeleteKeystoresRequest{
|
||||
Pubkeys: nil,
|
||||
// Create a validator database.
|
||||
var validatorDB DBIface.ValidatorDB
|
||||
if isSlashingProtectionMinimal {
|
||||
validatorDB, err = filesystem.NewStore(defaultWalletPath, &filesystem.Config{
|
||||
PubKeys: publicKeys,
|
||||
})
|
||||
} else {
|
||||
validatorDB, err = kv.NewKVStore(ctx, defaultWalletPath, &kv.Config{
|
||||
PubKeys: publicKeys,
|
||||
})
|
||||
}
|
||||
require.NoError(t, err)
|
||||
srv.valDB = validatorDB
|
||||
|
||||
// Have to close it after import is done otherwise it complains db is not open.
|
||||
defer func() {
|
||||
require.NoError(t, validatorDB.Close())
|
||||
}()
|
||||
|
||||
// Generate mock slashing history.
|
||||
attestingHistory := make([][]*dbCommon.AttestationRecord, 0)
|
||||
proposalHistory := make([]dbCommon.ProposalHistoryForPubkey, len(publicKeys))
|
||||
for i := 0; i < len(publicKeys); i++ {
|
||||
proposalHistory[i].Proposals = make([]dbCommon.Proposal, 0)
|
||||
}
|
||||
mockJSON, err := mocks.MockSlashingProtectionJSON(publicKeys, attestingHistory, proposalHistory)
|
||||
require.NoError(t, err)
|
||||
|
||||
// JSON encode the protection JSON and save it.
|
||||
encoded, err := json.Marshal(mockJSON)
|
||||
require.NoError(t, err)
|
||||
request := &ImportSlashingProtectionRequest{
|
||||
SlashingProtectionJson: string(encoded),
|
||||
}
|
||||
var buf bytes.Buffer
|
||||
err = json.NewEncoder(&buf).Encode(request)
|
||||
require.NoError(t, err)
|
||||
req := httptest.NewRequest(http.MethodDelete, fmt.Sprintf("/eth/v1/keystores"), &buf)
|
||||
|
||||
req := httptest.NewRequest(http.MethodPost, "/v2/validator/slashing-protection/import", &buf)
|
||||
wr := httptest.NewRecorder()
|
||||
wr.Body = &bytes.Buffer{}
|
||||
srv.DeleteKeystores(wr, req)
|
||||
srv.ImportSlashingProtection(wr, req)
|
||||
require.Equal(t, http.StatusOK, wr.Code)
|
||||
resp := &DeleteKeystoresResponse{}
|
||||
require.NoError(t, json.Unmarshal(wr.Body.Bytes(), resp))
|
||||
require.Equal(t, "", resp.SlashingProtection)
|
||||
})
|
||||
t.Run(fmt.Sprintf("no slashing protection response if no keys in request even if we have a history in DB/mininalSlaghinProtection:%v", isSlashingProtectionMinimal), func(t *testing.T) {
|
||||
request := &DeleteKeystoresRequest{
|
||||
Pubkeys: nil,
|
||||
}
|
||||
|
||||
// For ease of test setup, we'll give each public key a string identifier.
publicKeysWithId := map[string][fieldparams.BLSPubkeyLength]byte{
"a": publicKeys[0],
"b": publicKeys[1],
"c": publicKeys[2],
}
var buf bytes.Buffer
err = json.NewEncoder(&buf).Encode(request)
require.NoError(t, err)
req := httptest.NewRequest(http.MethodDelete, fmt.Sprintf("/eth/v1/keystores"), &buf)
wr := httptest.NewRecorder()
wr.Body = &bytes.Buffer{}
srv.DeleteKeystores(wr, req)
require.Equal(t, http.StatusOK, wr.Code)
resp := &DeleteKeystoresResponse{}
require.NoError(t, json.Unmarshal(wr.Body.Bytes(), resp))
require.Equal(t, "", resp.SlashingProtection)
})

type keyCase struct {
id string
wantProtectionData bool
}
tests := []struct {
keys []*keyCase
wantStatuses []keymanager.KeyStatusType
}{
{
keys: []*keyCase{
{id: "a", wantProtectionData: true},
{id: "a", wantProtectionData: true},
{id: "d"},
{id: "c", wantProtectionData: true},
},
wantStatuses: []keymanager.KeyStatusType{
keymanager.StatusDeleted,
keymanager.StatusNotActive,
keymanager.StatusNotFound,
keymanager.StatusDeleted,
},
},
{
keys: []*keyCase{
{id: "a", wantProtectionData: true},
{id: "c", wantProtectionData: true},
},
wantStatuses: []keymanager.KeyStatusType{
keymanager.StatusNotActive,
keymanager.StatusNotActive,
},
},
{
keys: []*keyCase{
{id: "x"},
},
wantStatuses: []keymanager.KeyStatusType{
keymanager.StatusNotFound,
},
},
}
for _, tc := range tests {
keys := make([]string, len(tc.keys))
for i := 0; i < len(tc.keys); i++ {
pk := publicKeysWithId[tc.keys[i].id]
keys[i] = hexutil.Encode(pk[:])
}
request := &DeleteKeystoresRequest{
Pubkeys: keys,
// For ease of test setup, we'll give each public key a string identifier.
publicKeysWithId := map[string][fieldparams.BLSPubkeyLength]byte{
"a": publicKeys[0],
"b": publicKeys[1],
"c": publicKeys[2],
}

var buf bytes.Buffer
err = json.NewEncoder(&buf).Encode(request)
require.NoError(t, err)
req := httptest.NewRequest(http.MethodDelete, fmt.Sprintf("/eth/v1/keystores"), &buf)
wr := httptest.NewRecorder()
wr.Body = &bytes.Buffer{}
srv.DeleteKeystores(wr, req)
require.Equal(t, http.StatusOK, wr.Code)
resp := &DeleteKeystoresResponse{}
require.NoError(t, json.Unmarshal(wr.Body.Bytes(), resp))
require.Equal(t, len(keys), len(resp.Data))
slashingProtectionData := &format.EIPSlashingProtectionFormat{}
require.NoError(t, json.Unmarshal([]byte(resp.SlashingProtection), slashingProtectionData))
require.Equal(t, true, len(slashingProtectionData.Data) > 0)
type keyCase struct {
id string
wantProtectionData bool
}
tests := []struct {
keys []*keyCase
wantStatuses []keymanager.KeyStatusType
}{
{
keys: []*keyCase{
{id: "a", wantProtectionData: true},
{id: "a", wantProtectionData: true},
{id: "d"},
{id: "c", wantProtectionData: true},
},
wantStatuses: []keymanager.KeyStatusType{
keymanager.StatusDeleted,
keymanager.StatusNotActive,
keymanager.StatusNotFound,
keymanager.StatusDeleted,
},
},
{
keys: []*keyCase{
{id: "a", wantProtectionData: true},
{id: "c", wantProtectionData: true},
},
wantStatuses: []keymanager.KeyStatusType{
keymanager.StatusNotActive,
keymanager.StatusNotActive,
},
},
{
keys: []*keyCase{
{id: "x"},
},
wantStatuses: []keymanager.KeyStatusType{
keymanager.StatusNotFound,
},
},
}
for _, tc := range tests {
keys := make([]string, len(tc.keys))
for i := 0; i < len(tc.keys); i++ {
pk := publicKeysWithId[tc.keys[i].id]
keys[i] = hexutil.Encode(pk[:])
}
request := &DeleteKeystoresRequest{
Pubkeys: keys,
}

for i := 0; i < len(tc.keys); i++ {
require.Equal(
t,
tc.wantStatuses[i],
resp.Data[i].Status,
fmt.Sprintf("Checking status for key %s", tc.keys[i].id),
)
if tc.keys[i].wantProtectionData {
// We check that we can find the key in the slashing protection data.
var found bool
for _, dt := range slashingProtectionData.Data {
if dt.Pubkey == keys[i] {
found = true
break
var buf bytes.Buffer
err = json.NewEncoder(&buf).Encode(request)
require.NoError(t, err)
req := httptest.NewRequest(http.MethodDelete, fmt.Sprintf("/eth/v1/keystores"), &buf)
wr := httptest.NewRecorder()
wr.Body = &bytes.Buffer{}
srv.DeleteKeystores(wr, req)
require.Equal(t, http.StatusOK, wr.Code)
resp := &DeleteKeystoresResponse{}
require.NoError(t, json.Unmarshal(wr.Body.Bytes(), resp))
require.Equal(t, len(keys), len(resp.Data))
slashingProtectionData := &format.EIPSlashingProtectionFormat{}
require.NoError(t, json.Unmarshal([]byte(resp.SlashingProtection), slashingProtectionData))
require.Equal(t, true, len(slashingProtectionData.Data) > 0)

for i := 0; i < len(tc.keys); i++ {
require.Equal(
t,
tc.wantStatuses[i],
resp.Data[i].Status,
fmt.Sprintf("Checking status for key %s", tc.keys[i].id),
)
if tc.keys[i].wantProtectionData {
// We check that we can find the key in the slashing protection data.
var found bool
for _, dt := range slashingProtectionData.Data {
if dt.Pubkey == keys[i] {
found = true
break
}
}
require.Equal(t, true, found)
}
require.Equal(t, true, found)
}
}
}
}
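The assertions above treat the slashing_protection field returned by DeleteKeystores as an EIP-3076 interchange document. A minimal sketch of that decoding step, assuming the same test package and the json, format and require imports already used in this file (the helper name is illustrative only):

// decodeExportedProtection unmarshals the exported slashing protection history
// that DeleteKeystores attaches to its response.
func decodeExportedProtection(t *testing.T, resp *DeleteKeystoresResponse) *format.EIPSlashingProtectionFormat {
    protection := &format.EIPSlashingProtectionFormat{}
    require.NoError(t, json.Unmarshal([]byte(resp.SlashingProtection), protection))
    return protection
}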

func TestServer_DeleteKeystores_FailedSlashingProtectionExport(t *testing.T) {
ctx := context.Background()
srv := setupServerWithWallet(t)
for _, isSlashingProtectionMinimal := range []bool{false, true} {
t.Run(fmt.Sprintf("minimalSlashingProtection:%v", isSlashingProtectionMinimal), func(t *testing.T) {
ctx := context.Background()
srv := setupServerWithWallet(t)

// We recover 3 accounts from a test mnemonic.
numAccounts := 3
km, er := srv.validatorService.Keymanager()
require.NoError(t, er)
dr, ok := km.(*derived.Keymanager)
require.Equal(t, true, ok)
err := dr.RecoverAccountsFromMnemonic(ctx, mocks.TestMnemonic, derived.DefaultMnemonicLanguage, "", numAccounts)
require.NoError(t, err)
publicKeys, err := dr.FetchValidatingPublicKeys(ctx)
require.NoError(t, err)
// We recover 3 accounts from a test mnemonic.
numAccounts := 3
km, er := srv.validatorService.Keymanager()
require.NoError(t, er)
dr, ok := km.(*derived.Keymanager)
require.Equal(t, true, ok)
err := dr.RecoverAccountsFromMnemonic(ctx, mocks.TestMnemonic, derived.DefaultMnemonicLanguage, "", numAccounts)
require.NoError(t, err)
publicKeys, err := dr.FetchValidatingPublicKeys(ctx)
require.NoError(t, err)

// Create a validator database.
validatorDB, err := kv.NewKVStore(ctx, defaultWalletPath, &kv.Config{
PubKeys: publicKeys,
})
require.NoError(t, err)
err = validatorDB.SaveGenesisValidatorsRoot(ctx, make([]byte, fieldparams.RootLength))
require.NoError(t, err)
srv.valDB = validatorDB
// Create a validator database.
var validatorDB DBIface.ValidatorDB
if isSlashingProtectionMinimal {
validatorDB, err = filesystem.NewStore(defaultWalletPath, &filesystem.Config{
PubKeys: publicKeys,
})
} else {
validatorDB, err = kv.NewKVStore(ctx, defaultWalletPath, &kv.Config{
PubKeys: publicKeys,
})
}

// Have to close it after import is done otherwise it complains db is not open.
defer func() {
require.NoError(t, validatorDB.Close())
}()
require.NoError(t, err)
err = validatorDB.SaveGenesisValidatorsRoot(ctx, make([]byte, fieldparams.RootLength))
require.NoError(t, err)
srv.valDB = validatorDB

request := &DeleteKeystoresRequest{
Pubkeys: []string{"0xaf2e7ba294e03438ea819bd4033c6c1bf6b04320ee2075b77273c08d02f8a61bcc303c2c06bd3713cb442072ae591494"},
// Have to close it after import is done otherwise it complains db is not open.
defer func() {
require.NoError(t, validatorDB.Close())
}()

request := &DeleteKeystoresRequest{
Pubkeys: []string{"0xaf2e7ba294e03438ea819bd4033c6c1bf6b04320ee2075b77273c08d02f8a61bcc303c2c06bd3713cb442072ae591494"},
}
var buf bytes.Buffer
err = json.NewEncoder(&buf).Encode(request)
require.NoError(t, err)
req := httptest.NewRequest(http.MethodDelete, fmt.Sprintf("/eth/v1/keystores"), &buf)
wr := httptest.NewRecorder()
wr.Body = &bytes.Buffer{}
srv.DeleteKeystores(wr, req)
require.Equal(t, http.StatusOK, wr.Code)
resp := &DeleteKeystoresResponse{}
require.NoError(t, json.Unmarshal(wr.Body.Bytes(), resp))
require.Equal(t, 1, len(resp.Data))
require.Equal(t, keymanager.StatusError, resp.Data[0].Status)
require.Equal(t, "Could not export slashing protection history as existing non duplicate keys were deleted",
resp.Data[0].Message,
)
})
}
var buf bytes.Buffer
err = json.NewEncoder(&buf).Encode(request)
require.NoError(t, err)
req := httptest.NewRequest(http.MethodDelete, fmt.Sprintf("/eth/v1/keystores"), &buf)
wr := httptest.NewRecorder()
wr.Body = &bytes.Buffer{}
srv.DeleteKeystores(wr, req)
require.Equal(t, http.StatusOK, wr.Code)
resp := &DeleteKeystoresResponse{}
require.NoError(t, json.Unmarshal(wr.Body.Bytes(), resp))
require.Equal(t, 1, len(resp.Data))
require.Equal(t, keymanager.StatusError, resp.Data[0].Status)
require.Equal(t, "Could not export slashing protection history as existing non duplicate keys were deleted",
resp.Data[0].Message,
)
}

func TestServer_DeleteKeystores_WrongKeymanagerKind(t *testing.T) {
@@ -1047,56 +1081,58 @@ func TestServer_SetGasLimit(t *testing.T) {
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
m := &mock.Validator{}
err := m.SetProposerSettings(ctx, tt.proposerSettings)
require.NoError(t, err)
validatorDB := dbtest.SetupDB(t, [][fieldparams.BLSPubkeyLength]byte{})
vs, err := client.NewValidatorService(ctx, &client.Config{
Validator: m,
ValDB: validatorDB,
})
require.NoError(t, err)
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
for _, tt := range tests {
t.Run(fmt.Sprintf("%s/isSlashingProtectionMinimal:%v", tt.name, isSlashingProtectionMinimal), func(t *testing.T) {
m := &mock.Validator{}
err := m.SetProposerSettings(ctx, tt.proposerSettings)
require.NoError(t, err)
validatorDB := dbtest.SetupDB(t, [][fieldparams.BLSPubkeyLength]byte{}, isSlashingProtectionMinimal)
vs, err := client.NewValidatorService(ctx, &client.Config{
Validator: m,
ValDB: validatorDB,
})
require.NoError(t, err)

s := &Server{
validatorService: vs,
beaconNodeValidatorClient: beaconClient,
valDB: validatorDB,
}

if tt.beaconReturn != nil {
beaconClient.EXPECT().GetFeeRecipientByPubKey(
gomock.Any(),
gomock.Any(),
).Return(tt.beaconReturn.resp, tt.beaconReturn.error)
}

request := &SetGasLimitRequest{
GasLimit: fmt.Sprintf("%d", tt.newGasLimit),
}

var buf bytes.Buffer
err = json.NewEncoder(&buf).Encode(request)
require.NoError(t, err)

req := httptest.NewRequest(http.MethodPost, fmt.Sprintf("/eth/v1/validator/{pubkey}/gas_limit"), &buf)
req = mux.SetURLVars(req, map[string]string{"pubkey": hexutil.Encode(tt.pubkey)})
w := httptest.NewRecorder()
w.Body = &bytes.Buffer{}

s.SetGasLimit(w, req)

if tt.wantErr != "" {
assert.NotEqual(t, http.StatusOK, w.Code)
require.StringContains(t, tt.wantErr, w.Body.String())
} else {
assert.Equal(t, http.StatusAccepted, w.Code)
for _, wantObj := range tt.w {
assert.Equal(t, wantObj.gaslimit, uint64(s.validatorService.ProposerSettings().ProposeConfig[bytesutil.ToBytes48(wantObj.pubkey)].BuilderConfig.GasLimit))
s := &Server{
validatorService: vs,
beaconNodeValidatorClient: beaconClient,
valDB: validatorDB,
}
}
})

if tt.beaconReturn != nil {
beaconClient.EXPECT().GetFeeRecipientByPubKey(
gomock.Any(),
gomock.Any(),
).Return(tt.beaconReturn.resp, tt.beaconReturn.error)
}

request := &SetGasLimitRequest{
GasLimit: fmt.Sprintf("%d", tt.newGasLimit),
}

var buf bytes.Buffer
err = json.NewEncoder(&buf).Encode(request)
require.NoError(t, err)

req := httptest.NewRequest(http.MethodPost, fmt.Sprintf("/eth/v1/validator/{pubkey}/gas_limit"), &buf)
req = mux.SetURLVars(req, map[string]string{"pubkey": hexutil.Encode(tt.pubkey)})
w := httptest.NewRecorder()
w.Body = &bytes.Buffer{}

s.SetGasLimit(w, req)

if tt.wantErr != "" {
assert.NotEqual(t, http.StatusOK, w.Code)
require.StringContains(t, tt.wantErr, w.Body.String())
} else {
assert.Equal(t, http.StatusAccepted, w.Code)
for _, wantObj := range tt.w {
assert.Equal(t, wantObj.gaslimit, uint64(s.validatorService.ProposerSettings().ProposeConfig[bytesutil.ToBytes48(wantObj.pubkey)].BuilderConfig.GasLimit))
}
}
})
}
}
}

@@ -1234,40 +1270,42 @@ func TestServer_DeleteGasLimit(t *testing.T) {
w: []want{},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
m := &mock.Validator{}
err := m.SetProposerSettings(ctx, tt.proposerSettings)
require.NoError(t, err)
validatorDB := dbtest.SetupDB(t, [][fieldparams.BLSPubkeyLength]byte{})
vs, err := client.NewValidatorService(ctx, &client.Config{
Validator: m,
ValDB: validatorDB,
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
for _, tt := range tests {
t.Run(fmt.Sprintf("%s/isSlashingProtectionMinimal:%v", tt.name, isSlashingProtectionMinimal), func(t *testing.T) {
m := &mock.Validator{}
err := m.SetProposerSettings(ctx, tt.proposerSettings)
require.NoError(t, err)
validatorDB := dbtest.SetupDB(t, [][fieldparams.BLSPubkeyLength]byte{}, isSlashingProtectionMinimal)
vs, err := client.NewValidatorService(ctx, &client.Config{
Validator: m,
ValDB: validatorDB,
})
require.NoError(t, err)
s := &Server{
validatorService: vs,
valDB: validatorDB,
}
// Set up global default value for builder gas limit.
params.BeaconConfig().DefaultBuilderGasLimit = uint64(globalDefaultGasLimit)

req := httptest.NewRequest(http.MethodDelete, fmt.Sprintf("/eth/v1/validator/{pubkey}/gas_limit"), nil)
req = mux.SetURLVars(req, map[string]string{"pubkey": hexutil.Encode(tt.pubkey)})
w := httptest.NewRecorder()
w.Body = &bytes.Buffer{}

s.DeleteGasLimit(w, req)

if tt.wantError != nil {
assert.StringContains(t, tt.wantError.Error(), w.Body.String())
} else {
assert.Equal(t, http.StatusNoContent, w.Code)
}
for _, wantedObj := range tt.w {
assert.Equal(t, wantedObj.gaslimit, s.validatorService.ProposerSettings().ProposeConfig[bytesutil.ToBytes48(wantedObj.pubkey)].BuilderConfig.GasLimit)
}
})
require.NoError(t, err)
s := &Server{
validatorService: vs,
valDB: validatorDB,
}
// Set up global default value for builder gas limit.
params.BeaconConfig().DefaultBuilderGasLimit = uint64(globalDefaultGasLimit)

req := httptest.NewRequest(http.MethodDelete, fmt.Sprintf("/eth/v1/validator/{pubkey}/gas_limit"), nil)
req = mux.SetURLVars(req, map[string]string{"pubkey": hexutil.Encode(tt.pubkey)})
w := httptest.NewRecorder()
w.Body = &bytes.Buffer{}

s.DeleteGasLimit(w, req)

if tt.wantError != nil {
assert.StringContains(t, tt.wantError.Error(), w.Body.String())
} else {
assert.Equal(t, http.StatusNoContent, w.Code)
}
for _, wantedObj := range tt.w {
assert.Equal(t, wantedObj.gaslimit, s.validatorService.ProposerSettings().ProposeConfig[bytesutil.ToBytes48(wantedObj.pubkey)].BuilderConfig.GasLimit)
}
})
}
}
}

@@ -1693,41 +1731,43 @@ func TestServer_FeeRecipientByPubkey(t *testing.T) {
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
m := &mock.Validator{}
err := m.SetProposerSettings(ctx, tt.proposerSettings)
require.NoError(t, err)
validatorDB := dbtest.SetupDB(t, [][fieldparams.BLSPubkeyLength]byte{})
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
for _, tt := range tests {
t.Run(fmt.Sprintf("%s/isSlashingProtectionMinimal:%v", tt.name, isSlashingProtectionMinimal), func(t *testing.T) {
m := &mock.Validator{}
err := m.SetProposerSettings(ctx, tt.proposerSettings)
require.NoError(t, err)
validatorDB := dbtest.SetupDB(t, [][fieldparams.BLSPubkeyLength]byte{}, isSlashingProtectionMinimal)

// save a default here
vs, err := client.NewValidatorService(ctx, &client.Config{
Validator: m,
ValDB: validatorDB,
// save a default here
vs, err := client.NewValidatorService(ctx, &client.Config{
Validator: m,
ValDB: validatorDB,
})
require.NoError(t, err)
s := &Server{
validatorService: vs,
beaconNodeValidatorClient: beaconClient,
valDB: validatorDB,
}
request := &SetFeeRecipientByPubkeyRequest{
Ethaddress: tt.args,
}

var buf bytes.Buffer
err = json.NewEncoder(&buf).Encode(request)
require.NoError(t, err)

req := httptest.NewRequest(http.MethodPost, fmt.Sprintf("/eth/v1/validator/{pubkey}/feerecipient"), &buf)
req = mux.SetURLVars(req, map[string]string{"pubkey": pubkey})
w := httptest.NewRecorder()
w.Body = &bytes.Buffer{}
s.SetFeeRecipientByPubkey(w, req)
assert.Equal(t, http.StatusAccepted, w.Code)

assert.Equal(t, tt.want.valEthAddress, s.validatorService.ProposerSettings().ProposeConfig[bytesutil.ToBytes48(byteval)].FeeRecipientConfig.FeeRecipient.Hex())
})
require.NoError(t, err)
s := &Server{
validatorService: vs,
beaconNodeValidatorClient: beaconClient,
valDB: validatorDB,
}
request := &SetFeeRecipientByPubkeyRequest{
Ethaddress: tt.args,
}

var buf bytes.Buffer
err = json.NewEncoder(&buf).Encode(request)
require.NoError(t, err)

req := httptest.NewRequest(http.MethodPost, fmt.Sprintf("/eth/v1/validator/{pubkey}/feerecipient"), &buf)
req = mux.SetURLVars(req, map[string]string{"pubkey": pubkey})
w := httptest.NewRecorder()
w.Body = &bytes.Buffer{}
s.SetFeeRecipientByPubkey(w, req)
assert.Equal(t, http.StatusAccepted, w.Code)

assert.Equal(t, tt.want.valEthAddress, s.validatorService.ProposerSettings().ProposeConfig[bytesutil.ToBytes48(byteval)].FeeRecipientConfig.FeeRecipient.Hex())
})
}
}
}

@@ -1803,29 +1843,31 @@ func TestServer_DeleteFeeRecipientByPubkey(t *testing.T) {
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
m := &mock.Validator{}
err := m.SetProposerSettings(ctx, tt.proposerSettings)
require.NoError(t, err)
validatorDB := dbtest.SetupDB(t, [][fieldparams.BLSPubkeyLength]byte{})
vs, err := client.NewValidatorService(ctx, &client.Config{
Validator: m,
ValDB: validatorDB,
for _, isSlashingProtectionMinimal := range [...]bool{false, true} {
for _, tt := range tests {
t.Run(fmt.Sprintf("%s/isSlashingProtectionMinimal:%v", tt.name, isSlashingProtectionMinimal), func(t *testing.T) {
m := &mock.Validator{}
err := m.SetProposerSettings(ctx, tt.proposerSettings)
require.NoError(t, err)
validatorDB := dbtest.SetupDB(t, [][fieldparams.BLSPubkeyLength]byte{}, isSlashingProtectionMinimal)
vs, err := client.NewValidatorService(ctx, &client.Config{
Validator: m,
ValDB: validatorDB,
})
require.NoError(t, err)
s := &Server{
validatorService: vs,
valDB: validatorDB,
}
req := httptest.NewRequest(http.MethodDelete, fmt.Sprintf("/eth/v1/validator/{pubkey}/feerecipient"), nil)
req = mux.SetURLVars(req, map[string]string{"pubkey": pubkey})
w := httptest.NewRecorder()
w.Body = &bytes.Buffer{}
s.DeleteFeeRecipientByPubkey(w, req)
assert.Equal(t, http.StatusNoContent, w.Code)
assert.Equal(t, true, s.validatorService.ProposerSettings().ProposeConfig[bytesutil.ToBytes48(byteval)].FeeRecipientConfig == nil)
})
require.NoError(t, err)
s := &Server{
validatorService: vs,
valDB: validatorDB,
}
req := httptest.NewRequest(http.MethodDelete, fmt.Sprintf("/eth/v1/validator/{pubkey}/feerecipient"), nil)
req = mux.SetURLVars(req, map[string]string{"pubkey": pubkey})
w := httptest.NewRecorder()
w.Body = &bytes.Buffer{}
s.DeleteFeeRecipientByPubkey(w, req)
assert.Equal(t, http.StatusNoContent, w.Code)
assert.Equal(t, true, s.validatorService.ProposerSettings().ProposeConfig[bytesutil.ToBytes48(byteval)].FeeRecipientConfig == nil)
})
}
}
}

@@ -76,7 +76,7 @@ func (s *Server) ImportSlashingProtection(w http.ResponseWriter, r *http.Request
}
enc := []byte(req.SlashingProtectionJson)
buf := bytes.NewBuffer(enc)
if err := slashing.ImportStandardProtectionJSON(ctx, s.valDB, buf); err != nil {
if err := s.valDB.ImportStandardProtectionJSON(ctx, buf); err != nil {
httputil.HandleError(w, errors.Wrap(err, "could not import slashing protection history").Error(), http.StatusInternalServerError)
return
}
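The change above makes the handler delegate to the validator database itself: the EIP-3076 JSON in the request body is handed to the database's own ImportStandardProtectionJSON implementation, so the same endpoint serves both the complete (BoltDB) and the minimal (filesystem) backends. A rough usage sketch, mirroring the tests in this diff and assuming the same package, with srv already carrying a wallet and a valDB:

// importProtectionJSON posts an EIP-3076 document to the import endpoint and
// expects the handler to accept it. The helper name is illustrative, not part of the PR.
func importProtectionJSON(t *testing.T, srv *Server, protectionJSON []byte) {
    request := &ImportSlashingProtectionRequest{SlashingProtectionJson: string(protectionJSON)}
    var buf bytes.Buffer
    require.NoError(t, json.NewEncoder(&buf).Encode(request))

    req := httptest.NewRequest(http.MethodPost, "/v2/validator/slashing-protection/import", &buf)
    wr := httptest.NewRecorder()
    wr.Body = &bytes.Buffer{}
    srv.ImportSlashingProtection(wr, req)
    require.Equal(t, http.StatusOK, wr.Code)
}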

@@ -4,12 +4,16 @@ import (
"bytes"
"context"
"encoding/json"
"fmt"
"net/http"
"net/http/httptest"
"testing"

"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/validator/accounts"
"github.com/prysmaticlabs/prysm/v5/validator/db/common"
"github.com/prysmaticlabs/prysm/v5/validator/db/filesystem"
"github.com/prysmaticlabs/prysm/v5/validator/db/iface"
"github.com/prysmaticlabs/prysm/v5/validator/db/kv"
"github.com/prysmaticlabs/prysm/v5/validator/keymanager"
"github.com/prysmaticlabs/prysm/v5/validator/slashing-protection-history/format"
@@ -17,132 +21,156 @@ import (
)

func TestImportSlashingProtection_Preconditions(t *testing.T) {
ctx := context.Background()
localWalletDir := setupWalletDir(t)
defaultWalletPath = localWalletDir
for _, isSlashingProtectionMinimal := range []bool{false, true} {
t.Run(fmt.Sprintf("slashing protection minimal: %v", isSlashingProtectionMinimal), func(t *testing.T) {
ctx := context.Background()
localWalletDir := setupWalletDir(t)
defaultWalletPath = localWalletDir

// Empty JSON.
s := &Server{
walletDir: defaultWalletPath,
// Empty JSON.
s := &Server{
walletDir: defaultWalletPath,
}

request := &ImportSlashingProtectionRequest{
SlashingProtectionJson: "",
}
var buf bytes.Buffer
err := json.NewEncoder(&buf).Encode(request)
require.NoError(t, err)

req := httptest.NewRequest(http.MethodPost, "/v2/validator/slashing-protection/import", &buf)
wr := httptest.NewRecorder()
wr.Body = &bytes.Buffer{}
// No validator DB provided.
s.ImportSlashingProtection(wr, req)
require.Equal(t, http.StatusInternalServerError, wr.Code)
require.StringContains(t, "could not find validator database", wr.Body.String())

// Create Wallet and add to server for more realistic testing.
opts := []accounts.Option{
accounts.WithWalletDir(defaultWalletPath),
accounts.WithKeymanagerType(keymanager.Local),
accounts.WithWalletPassword(strongPass),
accounts.WithSkipMnemonicConfirm(true),
}
acc, err := accounts.NewCLIManager(opts...)
require.NoError(t, err)
w, err := acc.WalletCreate(ctx)
require.NoError(t, err)
s.wallet = w

numValidators := 1
// Create public keys for the mock validator DB.
pubKeys, err := mocks.CreateRandomPubKeys(numValidators)
require.NoError(t, err)

// Create a validator database.
var validatorDB iface.ValidatorDB
if isSlashingProtectionMinimal {
validatorDB, err = filesystem.NewStore(defaultWalletPath, &filesystem.Config{
PubKeys: pubKeys,
})
} else {
validatorDB, err = kv.NewKVStore(ctx, defaultWalletPath, &kv.Config{
PubKeys: pubKeys,
})
}
require.NoError(t, err)
s.valDB = validatorDB

// Have to close it after import is done otherwise it complains db is not open.
defer func() {
require.NoError(t, validatorDB.Close())
}()

// Test empty JSON.
wr = httptest.NewRecorder()
wr.Body = &bytes.Buffer{}
s.ImportSlashingProtection(wr, req)
require.Equal(t, http.StatusBadRequest, wr.Code)
require.StringContains(t, "empty slashing_protection_json specified", wr.Body.String())

// Generate mock slashing history.
attestingHistory := make([][]*common.AttestationRecord, 0)
proposalHistory := make([]common.ProposalHistoryForPubkey, len(pubKeys))
for i := 0; i < len(pubKeys); i++ {
proposalHistory[i].Proposals = make([]common.Proposal, 0)
}
mockJSON, err := mocks.MockSlashingProtectionJSON(pubKeys, attestingHistory, proposalHistory)
require.NoError(t, err)

// JSON encode the protection JSON and save it in rpc req.
encoded, err := json.Marshal(mockJSON)
require.NoError(t, err)
request.SlashingProtectionJson = string(encoded)
err = json.NewEncoder(&buf).Encode(request)
require.NoError(t, err)

req = httptest.NewRequest(http.MethodPost, "/v2/validator/slashing-protection/import", &buf)
wr = httptest.NewRecorder()
wr.Body = &bytes.Buffer{}
s.ImportSlashingProtection(wr, req)
require.Equal(t, http.StatusOK, wr.Code)
})
}

request := &ImportSlashingProtectionRequest{
SlashingProtectionJson: "",
}
var buf bytes.Buffer
err := json.NewEncoder(&buf).Encode(request)
require.NoError(t, err)

req := httptest.NewRequest(http.MethodPost, "/v2/validator/slashing-protection/import", &buf)
wr := httptest.NewRecorder()
wr.Body = &bytes.Buffer{}
// No validator DB provided.
s.ImportSlashingProtection(wr, req)
require.Equal(t, http.StatusInternalServerError, wr.Code)
require.StringContains(t, "could not find validator database", wr.Body.String())

// Create Wallet and add to server for more realistic testing.
opts := []accounts.Option{
accounts.WithWalletDir(defaultWalletPath),
accounts.WithKeymanagerType(keymanager.Local),
accounts.WithWalletPassword(strongPass),
accounts.WithSkipMnemonicConfirm(true),
}
acc, err := accounts.NewCLIManager(opts...)
require.NoError(t, err)
w, err := acc.WalletCreate(ctx)
require.NoError(t, err)
s.wallet = w

numValidators := 1
// Create public keys for the mock validator DB.
pubKeys, err := mocks.CreateRandomPubKeys(numValidators)
require.NoError(t, err)

// Create a validator database.
validatorDB, err := kv.NewKVStore(ctx, defaultWalletPath, &kv.Config{
PubKeys: pubKeys,
})
require.NoError(t, err)
s.valDB = validatorDB

// Have to close it after import is done otherwise it complains db is not open.
defer func() {
require.NoError(t, validatorDB.Close())
}()

// Test empty JSON.
wr = httptest.NewRecorder()
wr.Body = &bytes.Buffer{}
s.ImportSlashingProtection(wr, req)
require.Equal(t, http.StatusBadRequest, wr.Code)
require.StringContains(t, "empty slashing_protection_json specified", wr.Body.String())

// Generate mock slashing history.
attestingHistory := make([][]*kv.AttestationRecord, 0)
proposalHistory := make([]kv.ProposalHistoryForPubkey, len(pubKeys))
for i := 0; i < len(pubKeys); i++ {
proposalHistory[i].Proposals = make([]kv.Proposal, 0)
}
mockJSON, err := mocks.MockSlashingProtectionJSON(pubKeys, attestingHistory, proposalHistory)
require.NoError(t, err)

// JSON encode the protection JSON and save it in rpc req.
encoded, err := json.Marshal(mockJSON)
require.NoError(t, err)
request.SlashingProtectionJson = string(encoded)
err = json.NewEncoder(&buf).Encode(request)
require.NoError(t, err)

req = httptest.NewRequest(http.MethodPost, "/v2/validator/slashing-protection/import", &buf)
wr = httptest.NewRecorder()
wr.Body = &bytes.Buffer{}
s.ImportSlashingProtection(wr, req)
require.Equal(t, http.StatusOK, wr.Code)
}

func TestExportSlashingProtection_Preconditions(t *testing.T) {
ctx := context.Background()
localWalletDir := setupWalletDir(t)
defaultWalletPath = localWalletDir
for _, isSlashingProtectionMinimal := range []bool{false, true} {
t.Run(fmt.Sprintf("slashing protection minimal: %v", isSlashingProtectionMinimal), func(t *testing.T) {
ctx := context.Background()
localWalletDir := setupWalletDir(t)
defaultWalletPath = localWalletDir

s := &Server{
walletDir: defaultWalletPath,
s := &Server{
walletDir: defaultWalletPath,
}
req := httptest.NewRequest(http.MethodGet, "/v2/validator/slashing-protection/export", nil)
wr := httptest.NewRecorder()
wr.Body = &bytes.Buffer{}
// No validator DB provided.
s.ExportSlashingProtection(wr, req)
require.Equal(t, http.StatusInternalServerError, wr.Code)
require.StringContains(t, "could not find validator database", wr.Body.String())

numValidators := 10
// Create public keys for the mock validator DB.
pubKeys, err := mocks.CreateRandomPubKeys(numValidators)
require.NoError(t, err)

// We create a validator database.
var validatorDB iface.ValidatorDB
if isSlashingProtectionMinimal {
validatorDB, err = filesystem.NewStore(t.TempDir(), &filesystem.Config{
PubKeys: pubKeys,
})
} else {
validatorDB, err = kv.NewKVStore(context.Background(), t.TempDir(), &kv.Config{
PubKeys: pubKeys,
})
}
require.NoError(t, err)
s.valDB = validatorDB

// Have to close it after export is done otherwise it complains db is not open.
defer func() {
require.NoError(t, validatorDB.Close())
}()
genesisValidatorsRoot := [32]byte{1}
err = validatorDB.SaveGenesisValidatorsRoot(ctx, genesisValidatorsRoot[:])
require.NoError(t, err)
wr = httptest.NewRecorder()
wr.Body = &bytes.Buffer{}
s.ExportSlashingProtection(wr, req)
require.Equal(t, http.StatusOK, wr.Code)
})
}
req := httptest.NewRequest(http.MethodGet, "/v2/validator/slashing-protection/export", nil)
wr := httptest.NewRecorder()
wr.Body = &bytes.Buffer{}
// No validator DB provided.
s.ExportSlashingProtection(wr, req)
require.Equal(t, http.StatusInternalServerError, wr.Code)
require.StringContains(t, "could not find validator database", wr.Body.String())

numValidators := 10
// Create public keys for the mock validator DB.
pubKeys, err := mocks.CreateRandomPubKeys(numValidators)
require.NoError(t, err)

// We create a validator database.
validatorDB, err := kv.NewKVStore(ctx, defaultWalletPath, &kv.Config{
PubKeys: pubKeys,
})
require.NoError(t, err)
s.valDB = validatorDB

// Have to close it after export is done otherwise it complains db is not open.
defer func() {
require.NoError(t, validatorDB.Close())
}()
genesisValidatorsRoot := [32]byte{1}
err = validatorDB.SaveGenesisValidatorsRoot(ctx, genesisValidatorsRoot[:])
require.NoError(t, err)
wr = httptest.NewRecorder()
wr.Body = &bytes.Buffer{}
s.ExportSlashingProtection(wr, req)
require.Equal(t, http.StatusOK, wr.Code)
}
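Both precondition tests build the validator database twice, once per backend. A hypothetical helper (not part of this PR) could factor that choice out, assuming the iface, filesystem, kv and fieldparams packages imported in this test file:

// newTestValidatorDB returns a minimal (filesystem) or complete (BoltDB) slashing
// protection database for the given public keys. Illustrative only.
func newTestValidatorDB(ctx context.Context, path string, pubKeys [][fieldparams.BLSPubkeyLength]byte, minimal bool) (iface.ValidatorDB, error) {
    if minimal {
        // EIP-3076 minimal slashing protection, backed by the filesystem store.
        return filesystem.NewStore(path, &filesystem.Config{PubKeys: pubKeys})
    }
    // Complete slashing protection, backed by the BoltDB kv store.
    return kv.NewKVStore(ctx, path, &kv.Config{PubKeys: pubKeys})
}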

func TestImportExportSlashingProtection_RoundTrip(t *testing.T) {
// Round trip is only suitable with complete slashing protection, since
// minimal slashing protections only keep latest attestation and proposal.
ctx := context.Background()
localWalletDir := setupWalletDir(t)
defaultWalletPath = localWalletDir
@@ -169,10 +197,10 @@ func TestImportExportSlashingProtection_RoundTrip(t *testing.T) {
}()

// Generate mock slashing history.
attestingHistory := make([][]*kv.AttestationRecord, 0)
proposalHistory := make([]kv.ProposalHistoryForPubkey, len(pubKeys))
attestingHistory := make([][]*common.AttestationRecord, 0)
proposalHistory := make([]common.ProposalHistoryForPubkey, len(pubKeys))
for i := 0; i < len(pubKeys); i++ {
proposalHistory[i].Proposals = make([]kv.Proposal, 0)
proposalHistory[i].Proposals = make([]common.Proposal, 0)
}
mockJSON, err := mocks.MockSlashingProtectionJSON(pubKeys, attestingHistory, proposalHistory)
require.NoError(t, err)

@@ -5,9 +5,6 @@ go_library(
srcs = [
"doc.go",
"export.go",
"helpers.go",
"import.go",
"log.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/validator/slashing-protection-history",
visibility = [
@@ -16,18 +13,12 @@ go_library(
],
deps = [
"//config/fieldparams:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//monitoring/progress:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//proto/prysm/v1alpha1/slashings:go_default_library",
"//validator/db:go_default_library",
"//validator/db/kv:go_default_library",
"//validator/helpers:go_default_library",
"//validator/slashing-protection-history/format:go_default_library",
"@com_github_k0kubun_go_ansi//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_schollz_progressbar_v3//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)

@@ -35,8 +26,6 @@ go_test(
name = "go_default_test",
srcs = [
"export_test.go",
"helpers_test.go",
"import_test.go",
"round_trip_test.go",
],
embed = [":go_default_library"],
@@ -46,10 +35,9 @@ go_test(
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//validator/db/kv:go_default_library",
"//validator/db/common:go_default_library",
"//validator/db/testing:go_default_library",
"//validator/slashing-protection-history/format:go_default_library",
"//validator/testing:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
],
)

@@ -11,6 +11,7 @@ import (
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/monitoring/progress"
"github.com/prysmaticlabs/prysm/v5/validator/db"
"github.com/prysmaticlabs/prysm/v5/validator/helpers"
"github.com/prysmaticlabs/prysm/v5/validator/slashing-protection-history/format"
)

@@ -31,7 +32,7 @@ func ExportStandardProtectionJSON(
"genesis validators root is empty, perhaps you are not connected to your beacon node",
)
}
genesisRootHex, err := rootToHexString(genesisValidatorsRoot)
genesisRootHex, err := helpers.RootToHexString(genesisValidatorsRoot)
if err != nil {
return nil, errors.Wrap(err, "could not convert genesis validators root to hex string")
}
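helpers.RootToHexString and helpers.PubKeyToHexString replace the package-local hex helpers used before this refactor. Purely as an illustration of the expected shape (the real implementations live in validator/helpers and may differ), converting a 32-byte root to the 0x-prefixed hex string the interchange format expects looks roughly like:

// rootToHexStringSketch is a hypothetical stand-in, not the actual helper.
func rootToHexStringSketch(root []byte) (string, error) {
    if len(root) != 32 {
        return "", fmt.Errorf("expected a 32-byte root, got %d bytes", len(root))
    }
    // %#x renders a byte slice as 0x-prefixed lowercase hex.
    return fmt.Sprintf("%#x", root), nil
}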
@@ -63,7 +64,7 @@ func ExportStandardProtectionJSON(
if _, ok := filteredKeysMap[string(pubKey[:])]; len(filteredKeys) > 0 && !ok {
continue
}
pubKeyHex, err := pubKeyToHexString(pubKey[:])
pubKeyHex, err := helpers.PubKeyToHexString(pubKey[:])
if err != nil {
return nil, errors.Wrap(err, "could not convert public key to hex string")
}
@@ -89,7 +90,7 @@ func ExportStandardProtectionJSON(
if _, ok := filteredKeysMap[string(pubKey[:])]; len(filteredKeys) > 0 && !ok {
continue
}
pubKeyHex, err := pubKeyToHexString(pubKey[:])
pubKeyHex, err := helpers.PubKeyToHexString(pubKey[:])
if err != nil {
return nil, errors.Wrap(err, "could not convert public key to hex string")
}
@@ -97,15 +98,12 @@ func ExportStandardProtectionJSON(
if err != nil {
return nil, errors.Wrapf(err, "could not retrieve signed attestations for public key %s", pubKeyHex)
}
if _, ok := dataByPubKey[pubKey]; ok {
dataByPubKey[pubKey].SignedAttestations = signedAttestations
} else {
dataByPubKey[pubKey] = &format.ProtectionData{
Pubkey: pubKeyHex,
SignedBlocks: nil,
SignedAttestations: signedAttestations,
}
if _, ok := dataByPubKey[pubKey]; !ok {
// This should never happen
return nil, errors.Wrapf(err, "could not retrieve proposer public key from array")
}
dataByPubKey[pubKey].SignedAttestations = signedAttestations

if err := bar.Add(1); err != nil {
return nil, err
}
@@ -157,7 +155,7 @@ func signedAttestationsByPubKey(ctx context.Context, validatorDB db.Database, pu
}
var root string
if len(att.SigningRoot) != 0 {
root, err = rootToHexString(att.SigningRoot)
root, err = helpers.RootToHexString(att.SigningRoot)
if err != nil {
return nil, errors.Wrap(err, "could not convert signing root to hex string")
}
@@ -173,8 +171,8 @@ func signedAttestationsByPubKey(ctx context.Context, validatorDB db.Database, pu

func signedBlocksByPubKey(ctx context.Context, validatorDB db.Database, pubKey [fieldparams.BLSPubkeyLength]byte) ([]*format.SignedBlock, error) {
// If a key does not have a lowest or highest signed proposal history
// in our database, we return nil. This way, a user will be able to export their
// slashing protection history even if one of their keys does not have a history
// in our database, we return an empty list. This way, a user will be able to export
// their slashing protection history even if one of their keys does not have a history
// of signed blocks.
proposalHistory, err := validatorDB.ProposalHistoryForPubKey(ctx, pubKey)
if err != nil {
@@ -185,7 +183,7 @@ func signedBlocksByPubKey(ctx context.Context, validatorDB db.Database, pubKey [
if ctx.Err() != nil {
return nil, errors.Wrap(err, "context canceled")
}
signingRootHex, err := rootToHexString(proposal.SigningRoot)
signingRootHex, err := helpers.RootToHexString(proposal.SigningRoot)
if err != nil {
return nil, errors.Wrap(err, "could not convert signing root to hex string")
}

Some files were not shown because too many files have changed in this diff.