mirror of
https://github.com/vacp2p/status-linea-besu.git
synced 2026-01-07 21:13:56 -05:00
Merge branch 'main' into zkbesu
This commit is contained in:
31
CHANGELOG.md
@@ -1,35 +1,56 @@

# Changelog

## 24.1.1-SNAPSHOT
## 24.1.2-SNAPSHOT

### Breaking Changes

### Deprecations

### Additions and Improvements

### Bug fixes
- Fix the way an advertised host configured with `--p2p-host` is treated when communicating with the originator of a PING packet [#6225](https://github.com/hyperledger/besu/pull/6225)

### Download Links

## 24.1.1

### Breaking Changes
- New `EXECUTION_HALTED` error returned if there is an error executing or simulating a transaction, with the reason for execution being halted. Replaces the generic `INTERNAL_ERROR` return code in certain cases which some applications may be checking for [#6343](https://github.com/hyperledger/besu/pull/6343)
- The Besu Docker images with `openjdk-latest` tags since 23.10.3 were incorrectly using UID 1001 instead of 1000 for the container's `besu` user. The user now uses 1000 again. Containers created from or migrated to images using UID 1001 will need to chown their persistent database files to UID 1000 [#6360](https://github.com/hyperledger/besu/pull/6360)
- The deprecated `--privacy-onchain-groups-enabled` option has now been removed. Use the `--privacy-flexible-groups-enabled` option instead. [#6411](https://github.com/hyperledger/besu/pull/6411)
- Requesting the Ethereum Node Record (ENR) to acquire the fork id from bonded peers is now enabled by default, so the following change has been made [#5628](https://github.com/hyperledger/besu/pull/5628):
- `--Xfilter-on-enr-fork-id` has been removed. To disable the feature use `--filter-on-enr-fork-id=false`.
- The time that can be spent selecting transactions during block creation is now capped at 5 seconds for PoS and PoW networks, and at 75% of the block period specified in the genesis for PoA networks. This prevents a possible DoS when a single transaction takes too long to execute, and keeps the block production rate stable. It could be a breaking change if an existing network has transactions that take longer to execute than the newly introduced limit; if it is mandatory for such a network to keep processing these long-running transactions, the default value of `block-txs-selection-max-time` or `poa-block-txs-selection-max-time` needs to be tuned accordingly.

### Deprecations

### Additions and Improvements
- Optimize RocksDB WAL files, allows for faster restart and a more linear disk space utilization [#6328](https://github.com/hyperledger/besu/pull/6328)
- Disable transaction handling when the node is not in sync, to avoid unnecessary transaction validation work [#6302](https://github.com/hyperledger/besu/pull/6302)
- Introduce TransactionEvaluationContext to pass data between transaction selectors and plugin, during block creation [#6381](https://github.com/hyperledger/besu/pull/6381)
- Upgrade dependencies [#6377](https://github.com/hyperledger/besu/pull/6377)
- Upgrade `com.fasterxml.jackson` dependencies [#6378](https://github.com/hyperledger/besu/pull/6378)
- Upgrade Guava dependency [#6396](https://github.com/hyperledger/besu/pull/6396)
- Upgrade Mockito [#6397](https://github.com/hyperledger/besu/pull/6397)
- Upgrade `tech.pegasys.discovery:discovery` [#6414](https://github.com/hyperledger/besu/pull/6414)
- Options to tune the max allowed time that can be spent selecting transactions during block creation are now stable [#6423](https://github.com/hyperledger/besu/pull/6423)

### Bug fixes
- INTERNAL_ERROR from `eth_estimateGas` JSON/RPC calls [#6344](https://github.com/hyperledger/besu/issues/6344)
- Fix Besu Docker images with `openjdk-latest` tags since 23.10.3 using UID 1001 instead of 1000 for the `besu` user [#6360](https://github.com/hyperledger/besu/pull/6360)
- Fluent EVM API definition for Tangerine Whistle had incorrect code size validation configured [#6382](https://github.com/hyperledger/besu/pull/6382)
- Correct mining beneficiary for Clique networks in TraceServiceImpl [#6390](https://github.com/hyperledger/besu/pull/6390)
- Fix to gas limit delta calculations used in block production. Besu should now increment or decrement the block gas limit towards its target correctly (thanks @arbora) #6425

### Download Links

## 24.1.0

### Breaking Changes

### Deprecations
- Forest pruning (`pruning-enabled` options) is deprecated and will be removed soon. To save disk space consider switching to Bonsai data storage format [#6230](https://github.com/hyperledger/besu/pull/6230)
- Forest pruning (`pruning-enabled` option) is deprecated and will be removed soon. To save disk space consider switching to Bonsai data storage format [#6230](https://github.com/hyperledger/besu/pull/6230)

### Additions and Improvements
- Add error messages on authentication failures with username and password [#6212](https://github.com/hyperledger/besu/pull/6212)
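The PoA limit above is a percentage of the genesis block period rather than a fixed number of milliseconds. A rough arithmetic sketch (not Besu's actual implementation; the 5000 ms default and the 75% figure come from the changelog entry, and all names here are illustrative):

```java
public class SelectionTimeSketch {
    // Default cap for PoS/PoW networks, in milliseconds (per the changelog entry).
    static final long NON_POA_MAX_TIME_MS = 5000;
    // Default cap for PoA networks: 75% of the genesis block period.
    static final int POA_MAX_TIME_PERCENT = 75;

    // Effective transaction-selection budget for a PoA network with the given block period.
    static long poaSelectionTimeMs(final long blockPeriodSeconds) {
        return blockPeriodSeconds * 1000 * POA_MAX_TIME_PERCENT / 100;
    }

    public static void main(String[] args) {
        // A 4-second Clique block period leaves 3000 ms for transaction selection.
        if (poaSelectionTimeMs(4) != 3000) throw new AssertionError();
        if (NON_POA_MAX_TIME_MS != 5000) throw new AssertionError();
        System.out.println("ok");
    }
}
```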
@@ -7,7 +7,7 @@ Welcome to the Besu repository! The following links are a set of guidelines for

Having Github, Discord, and Linux Foundation accounts is necessary for obtaining support for Besu through the community channels, wiki and issue management.
* If you want to raise an issue, you can do so [on the github issue tab](https://github.com/hyperledger/besu/issues).
* Hyperledger Discord requires a [Discord account].
* The Hyperlegder wiki also requires a [Linux Foundation (LF) account] in order to edit pages.
* The Hyperledger wiki also requires a [Linux Foundation (LF) account] in order to edit pages.

### Useful support links
@@ -10,7 +10,7 @@
"stateRoot" : "0x8d9115d9211932d4a3a1f068fb8fe262b0b2ab0bfd74eaece1a572efe6336677",
"logsBloom" : "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
"prevRandao" : "0xc13da06dc53836ca0766057413b9683eb9a8773bbb8fcc5691e41c25b56dda1d",
"gasLimit" : "0x2ff3d8",
"gasLimit" : "0x2ffbd2",
"gasUsed" : "0xf618",
"timestamp" : "0x1236",
"extraData" : "0x",
@@ -70,7 +70,7 @@
"amount" : "0x64"
} ],
"blockNumber" : "0x1",
"blockHash" : "0xf1e35607932349e87f29e1053a4fb2666782e09fde21ded74c1f7e4a57d3fa2b",
"blockHash" : "0x736bdddc2eca36fe8ed4ed515e5d295a08d7eaddc0d0fda2a35408127eb890d0",
"receiptsRoot" : "0x9af165447e5b3193e9ac8389418648ee6d6cb1d37459fe65cfc245fc358721bd",
"blobGasUsed" : "0x60000"
},
@@ -22,7 +22,6 @@ import static java.util.Collections.singletonList;
import static org.hyperledger.besu.cli.DefaultCommandValues.getDefaultBesuDataPath;
import static org.hyperledger.besu.cli.config.NetworkName.MAINNET;
import static org.hyperledger.besu.cli.util.CommandLineUtils.DEPENDENCY_WARNING_MSG;
import static org.hyperledger.besu.cli.util.CommandLineUtils.DEPRECATION_WARNING_MSG;
import static org.hyperledger.besu.cli.util.CommandLineUtils.isOptionSet;
import static org.hyperledger.besu.controller.BesuController.DATABASE_PATH;
import static org.hyperledger.besu.ethereum.api.graphql.GraphQLConfiguration.DEFAULT_GRAPHQL_HTTP_PORT;
@@ -148,6 +147,7 @@ import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueSegmentIdentifier;
import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueStorageProvider;
import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueStorageProviderBuilder;
import org.hyperledger.besu.ethereum.trie.forest.pruner.PrunerConfiguration;
import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
import org.hyperledger.besu.evm.precompile.AbstractAltBnPrecompiledContract;
import org.hyperledger.besu.evm.precompile.BigIntegerModularExponentiationPrecompiledContract;
import org.hyperledger.besu.evm.precompile.KZGPointEvalPrecompiledContract;
@@ -953,13 +953,6 @@ public class BesuCommand implements DefaultCommandValues, Runnable {
names = {"--privacy-flexible-groups-enabled"},
description = "Enable flexible privacy groups (default: ${DEFAULT-VALUE})")
private final Boolean isFlexiblePrivacyGroupsEnabled = false;

@Option(
hidden = true,
names = {"--privacy-onchain-groups-enabled"},
description =
"!!DEPRECATED!! Use `--privacy-flexible-groups-enabled` instead. Enable flexible (onchain) privacy groups (default: ${DEFAULT-VALUE})")
private final Boolean isOnchainPrivacyGroupsEnabled = false;
}

// Metrics Option Group
@@ -1716,8 +1709,7 @@ public class BesuCommand implements DefaultCommandValues, Runnable {
}

if (unstablePrivacyPluginOptions.isPrivacyPluginEnabled()
&& (privacyOptionGroup.isFlexiblePrivacyGroupsEnabled
|| privacyOptionGroup.isOnchainPrivacyGroupsEnabled)) {
&& privacyOptionGroup.isFlexiblePrivacyGroupsEnabled) {
throw new ParameterException(
commandLine, "Privacy Plugin can not be used with flexible privacy groups");
}
@@ -2056,16 +2048,16 @@ public class BesuCommand implements DefaultCommandValues, Runnable {
"--security-module=" + DEFAULT_SECURITY_MODULE);
}

if (Boolean.TRUE.equals(privacyOptionGroup.isOnchainPrivacyGroupsEnabled)) {
logger.warn(
DEPRECATION_WARNING_MSG,
"--privacy-onchain-groups-enabled",
"--privacy-flexible-groups-enabled");
}

if (isPruningEnabled()) {
logger.warn(
"Forest pruning is deprecated and will be removed soon. To save disk space consider switching to Bonsai data storage format.");
if (dataStorageOptions
.toDomainObject()
.getDataStorageFormat()
.equals(DataStorageFormat.BONSAI)) {
logger.warn("Forest pruning is ignored with Bonsai data storage format.");
} else {
logger.warn(
"Forest pruning is deprecated and will be removed soon. To save disk space consider switching to Bonsai data storage format.");
}
}
}

@@ -2743,8 +2735,7 @@ public class BesuCommand implements DefaultCommandValues, Runnable {
privacyParametersBuilder.setMultiTenancyEnabled(
privacyOptionGroup.isPrivacyMultiTenancyEnabled);
privacyParametersBuilder.setFlexiblePrivacyGroupsEnabled(
privacyOptionGroup.isFlexiblePrivacyGroupsEnabled
|| privacyOptionGroup.isOnchainPrivacyGroupsEnabled);
privacyOptionGroup.isFlexiblePrivacyGroupsEnabled);
privacyParametersBuilder.setPrivacyPluginEnabled(
unstablePrivacyPluginOptions.isPrivacyPluginEnabled());

@@ -2917,17 +2908,15 @@ public class BesuCommand implements DefaultCommandValues, Runnable {
ImmutableMiningParameters.builder().from(miningOptions.toDomainObject());
final var actualGenesisOptions = getActualGenesisConfigOptions();
if (actualGenesisOptions.isPoa()) {
miningParametersBuilder.unstable(
ImmutableMiningParameters.Unstable.builder()
.minBlockTime(getMinBlockTime(actualGenesisOptions))
.build());
miningParametersBuilder.genesisBlockPeriodSeconds(
getGenesisBlockPeriodSeconds(actualGenesisOptions));
}
miningParameters = miningParametersBuilder.build();
}
return miningParameters;
}

private int getMinBlockTime(final GenesisConfigOptions genesisConfigOptions) {
private int getGenesisBlockPeriodSeconds(final GenesisConfigOptions genesisConfigOptions) {
if (genesisConfigOptions.isClique()) {
return genesisConfigOptions.getCliqueConfigOptions().getBlockPeriodSeconds();
}
@@ -0,0 +1,33 @@
/*
* Copyright Hyperledger Besu Contributors.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*
* SPDX-License-Identifier: Apache-2.0
*/
package org.hyperledger.besu.cli.converter;

import org.hyperledger.besu.cli.converter.exception.PercentageConversionException;
import org.hyperledger.besu.util.number.PositiveNumber;

import picocli.CommandLine;

/** The PositiveNumber Cli type converter. */
public class PositiveNumberConverter implements CommandLine.ITypeConverter<PositiveNumber> {

@Override
public PositiveNumber convert(final String value) throws PercentageConversionException {
try {
return PositiveNumber.fromString(value);
} catch (NullPointerException | IllegalArgumentException e) {
throw new PercentageConversionException(value);
}
}
}
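For context, picocli calls a registered `ITypeConverter` to turn the raw option string into the target type, and the converter above wraps parse failures in a domain exception. A dependency-free sketch of the conversion semantics (here `parsePositive` is only a stand-in for `PositiveNumber.fromString`, assumed to accept integers greater than zero; all names are illustrative):

```java
public class PositiveNumberSketch {
    // Stand-in for PositiveNumber.fromString: accepts only integers > 0.
    static int parsePositive(final String value) {
        // NumberFormatException (an IllegalArgumentException) on non-numeric input,
        // mirroring what the converter above catches and rewraps.
        final int n = Integer.parseInt(value);
        if (n <= 0) {
            throw new IllegalArgumentException("not positive: " + value);
        }
        return n;
    }

    public static void main(String[] args) {
        if (parsePositive("250") != 250) throw new AssertionError();
        boolean rejected = false;
        try {
            parsePositive("0"); // zero is not a positive number
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        if (!rejected) throw new AssertionError("0 should be rejected");
        System.out.println("ok");
    }
}
```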
@@ -0,0 +1,30 @@
/*
* Copyright Hyperledger Besu Contributors.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*
* SPDX-License-Identifier: Apache-2.0
*/
package org.hyperledger.besu.cli.converter.exception;

import static java.lang.String.format;

/** The custom PositiveNumber conversion exception. */
public final class PositiveNumberConversionException extends Exception {

/**
* Instantiates a new PositiveNumber conversion exception.
*
* @param value the invalid value to add in exception message
*/
public PositiveNumberConversionException(final String value) {
super(format("Invalid value: %s, should be a positive number >0.", value));
}
}
@@ -16,20 +16,20 @@ package org.hyperledger.besu.cli.options;

import static java.util.Arrays.asList;
import static java.util.Collections.singletonList;
import static org.hyperledger.besu.ethereum.core.MiningParameters.DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;
import static org.hyperledger.besu.ethereum.core.MiningParameters.DEFAULT_POA_BLOCK_TXS_SELECTION_MAX_TIME;
import static org.hyperledger.besu.ethereum.core.MiningParameters.MutableInitValues.DEFAULT_EXTRA_DATA;
import static org.hyperledger.besu.ethereum.core.MiningParameters.MutableInitValues.DEFAULT_MIN_BLOCK_OCCUPANCY_RATIO;
import static org.hyperledger.besu.ethereum.core.MiningParameters.MutableInitValues.DEFAULT_MIN_PRIORITY_FEE_PER_GAS;
import static org.hyperledger.besu.ethereum.core.MiningParameters.MutableInitValues.DEFAULT_MIN_TRANSACTION_GAS_PRICE;
import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_MAX_OMMERS_DEPTH;
import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;
import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_POA_BLOCK_TXS_SELECTION_MAX_TIME;
import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_POS_BLOCK_CREATION_MAX_TIME;
import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_POS_BLOCK_CREATION_REPETITION_MIN_DURATION;
import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_POW_JOB_TTL;
import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_REMOTE_SEALERS_LIMIT;
import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_REMOTE_SEALERS_TTL;

import org.hyperledger.besu.cli.converter.PercentageConverter;
import org.hyperledger.besu.cli.converter.PositiveNumberConverter;
import org.hyperledger.besu.cli.util.CommandLineUtils;
import org.hyperledger.besu.config.GenesisConfigOptions;
import org.hyperledger.besu.datatypes.Address;
@@ -37,7 +37,7 @@ import org.hyperledger.besu.datatypes.Wei;
import org.hyperledger.besu.ethereum.core.ImmutableMiningParameters;
import org.hyperledger.besu.ethereum.core.ImmutableMiningParameters.MutableInitValues;
import org.hyperledger.besu.ethereum.core.MiningParameters;
import org.hyperledger.besu.util.number.Percentage;
import org.hyperledger.besu.util.number.PositiveNumber;

import java.util.List;

@@ -115,6 +115,24 @@ public class MiningOptions implements CLIOptions<MiningParameters> {
+ " If set, each block's gas limit will approach this setting over time.")
private Long targetGasLimit = null;

@Option(
names = {"--block-txs-selection-max-time"},
converter = PositiveNumberConverter.class,
description =
"Specifies the maximum time, in milliseconds, that could be spent selecting transactions to be included in the block."
+ " Not compatible with PoA networks, see poa-block-txs-selection-max-time. (default: ${DEFAULT-VALUE})")
private PositiveNumber nonPoaBlockTxsSelectionMaxTime =
DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;

@Option(
names = {"--poa-block-txs-selection-max-time"},
converter = PositiveNumberConverter.class,
description =
"Specifies the maximum time that could be spent selecting transactions to be included in the block, as a percentage of the fixed block time of the PoA network."
+ " To be only used on PoA networks, for other networks see block-txs-selection-max-time."
+ " (default: ${DEFAULT-VALUE})")
private PositiveNumber poaBlockTxsSelectionMaxTime = DEFAULT_POA_BLOCK_TXS_SELECTION_MAX_TIME;

@CommandLine.ArgGroup(validate = false)
private final Unstable unstableOptions = new Unstable();

@@ -168,25 +186,6 @@ public class MiningOptions implements CLIOptions<MiningParameters> {
+ " then it waits before next repetition. Must be positive and ≤ 2000 (default: ${DEFAULT-VALUE} milliseconds)")
private Long posBlockCreationRepetitionMinDuration =
DEFAULT_POS_BLOCK_CREATION_REPETITION_MIN_DURATION;

@CommandLine.Option(
hidden = true,
names = {"--Xblock-txs-selection-max-time"},
description =
"Specifies the maximum time, in milliseconds, that could be spent selecting transactions to be included in the block."
+ " Not compatible with PoA networks, see Xpoa-block-txs-selection-max-time."
+ " Must be positive and ≤ (default: ${DEFAULT-VALUE})")
private Long nonPoaBlockTxsSelectionMaxTime = DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;

@CommandLine.Option(
hidden = true,
names = {"--Xpoa-block-txs-selection-max-time"},
converter = PercentageConverter.class,
description =
"Specifies the maximum time that could be spent selecting transactions to be included in the block, as a percentage of the fixed block time of the PoA network."
+ " To be only used on PoA networks, for other networks see Xblock-txs-selection-max-time."
+ " (default: ${DEFAULT-VALUE})")
private Percentage poaBlockTxsSelectionMaxTime = DEFAULT_POA_BLOCK_TXS_SELECTION_MAX_TIME;
}

private MiningOptions() {}
@@ -270,26 +269,17 @@ public class MiningOptions implements CLIOptions<MiningParameters> {
if (genesisConfigOptions.isPoa()) {
CommandLineUtils.failIfOptionDoesntMeetRequirement(
commandLine,
"--Xblock-txs-selection-max-time can't be used with PoA networks,"
+ " see Xpoa-block-txs-selection-max-time instead",
"--block-txs-selection-max-time can't be used with PoA networks,"
+ " see poa-block-txs-selection-max-time instead",
false,
singletonList("--Xblock-txs-selection-max-time"));
singletonList("--block-txs-selection-max-time"));
} else {
CommandLineUtils.failIfOptionDoesntMeetRequirement(
commandLine,
"--Xpoa-block-txs-selection-max-time can be only used with PoA networks,"
+ " see --Xblock-txs-selection-max-time instead",
"--poa-block-txs-selection-max-time can be only used with PoA networks,"
+ " see --block-txs-selection-max-time instead",
false,
singletonList("--Xpoa-block-txs-selection-max-time"));

if (unstableOptions.nonPoaBlockTxsSelectionMaxTime <= 0
|| unstableOptions.nonPoaBlockTxsSelectionMaxTime
> DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME) {
throw new ParameterException(
commandLine,
"--Xblock-txs-selection-max-time must be positive and ≤ "
+ DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME);
}
singletonList("--poa-block-txs-selection-max-time"));
}
}

@@ -303,6 +293,10 @@ public class MiningOptions implements CLIOptions<MiningParameters> {
miningOptions.minTransactionGasPrice = miningParameters.getMinTransactionGasPrice();
miningOptions.minPriorityFeePerGas = miningParameters.getMinPriorityFeePerGas();
miningOptions.minBlockOccupancyRatio = miningParameters.getMinBlockOccupancyRatio();
miningOptions.nonPoaBlockTxsSelectionMaxTime =
miningParameters.getNonPoaBlockTxsSelectionMaxTime();
miningOptions.poaBlockTxsSelectionMaxTime = miningParameters.getPoaBlockTxsSelectionMaxTime();

miningOptions.unstableOptions.remoteSealersLimit =
miningParameters.getUnstable().getRemoteSealersLimit();
miningOptions.unstableOptions.remoteSealersTimeToLive =
@@ -317,10 +311,6 @@ public class MiningOptions implements CLIOptions<MiningParameters> {
miningParameters.getUnstable().getPosBlockCreationMaxTime();
miningOptions.unstableOptions.posBlockCreationRepetitionMinDuration =
miningParameters.getUnstable().getPosBlockCreationRepetitionMinDuration();
miningOptions.unstableOptions.nonPoaBlockTxsSelectionMaxTime =
miningParameters.getUnstable().getBlockTxsSelectionMaxTime();
miningOptions.unstableOptions.poaBlockTxsSelectionMaxTime =
miningParameters.getUnstable().getPoaBlockTxsSelectionMaxTime();

miningParameters.getCoinbase().ifPresent(coinbase -> miningOptions.coinbase = coinbase);
miningParameters.getTargetGasLimit().ifPresent(tgl -> miningOptions.targetGasLimit = tgl);
@@ -350,6 +340,8 @@ public class MiningOptions implements CLIOptions<MiningParameters> {
.isStratumMiningEnabled(iStratumMiningEnabled)
.stratumNetworkInterface(stratumNetworkInterface)
.stratumPort(stratumPort)
.nonPoaBlockTxsSelectionMaxTime(nonPoaBlockTxsSelectionMaxTime)
.poaBlockTxsSelectionMaxTime(poaBlockTxsSelectionMaxTime)
.unstable(
ImmutableMiningParameters.Unstable.builder()
.remoteSealersLimit(unstableOptions.remoteSealersLimit)
@@ -360,8 +352,6 @@ public class MiningOptions implements CLIOptions<MiningParameters> {
.posBlockCreationMaxTime(unstableOptions.posBlockCreationMaxTime)
.posBlockCreationRepetitionMinDuration(
unstableOptions.posBlockCreationRepetitionMinDuration)
.nonPoaBlockTxsSelectionMaxTime(unstableOptions.nonPoaBlockTxsSelectionMaxTime)
.poaBlockTxsSelectionMaxTime(unstableOptions.poaBlockTxsSelectionMaxTime)
.build());

return miningParametersBuilder.build();
@@ -62,23 +62,28 @@ public class DataStorageOptions implements CLIOptions<DataStorageConfiguration>
private final DataStorageOptions.Unstable unstableOptions = new Unstable();

static class Unstable {
private static final String BONSAI_LIMIT_TRIE_LOGS_ENABLED =
"--Xbonsai-limit-trie-logs-enabled";
private static final String BONSAI_TRIE_LOGS_RETENTION_THRESHOLD =
"--Xbonsai-trie-logs-retention-threshold";
private static final String BONSAI_TRIE_LOG_PRUNING_LIMIT = "--Xbonsai-trie-logs-pruning-limit";

@CommandLine.Option(
hidden = true,
names = {"--Xbonsai-trie-log-pruning-enabled"},
names = {BONSAI_LIMIT_TRIE_LOGS_ENABLED},
description = "Enable trie log pruning. (default: ${DEFAULT-VALUE})")
private boolean bonsaiTrieLogPruningEnabled = DEFAULT_BONSAI_TRIE_LOG_PRUNING_ENABLED;

@CommandLine.Option(
hidden = true,
names = {"--Xbonsai-trie-log-retention-threshold"},
names = {BONSAI_TRIE_LOGS_RETENTION_THRESHOLD},
description =
"The number of blocks for which to retain trie logs. (default: ${DEFAULT-VALUE})")
private long bonsaiTrieLogRetentionThreshold = DEFAULT_BONSAI_TRIE_LOG_RETENTION_THRESHOLD;

@CommandLine.Option(
hidden = true,
names = {"--Xbonsai-trie-log-pruning-limit"},
names = {BONSAI_TRIE_LOG_PRUNING_LIMIT},
description =
"The max number of blocks to load and prune trie logs for at startup. (default: ${DEFAULT-VALUE})")
private int bonsaiTrieLogPruningLimit = DEFAULT_BONSAI_TRIE_LOG_PRUNING_LIMIT;
@@ -37,7 +37,7 @@ public class NetworkingOptions implements CLIOptions<NetworkingConfiguration> {
private final String DNS_DISCOVERY_SERVER_OVERRIDE_FLAG = "--Xp2p-dns-discovery-server";
private final String DISCOVERY_PROTOCOL_V5_ENABLED = "--Xv5-discovery-enabled";
/** The constant FILTER_ON_ENR_FORK_ID. */
public static final String FILTER_ON_ENR_FORK_ID = "--Xfilter-on-enr-fork-id";
public static final String FILTER_ON_ENR_FORK_ID = "--filter-on-enr-fork-id";

@CommandLine.Option(
names = INITIATE_CONNECTIONS_FREQUENCY_FLAG,
@@ -76,9 +76,9 @@ public class NetworkingOptions implements CLIOptions<NetworkingConfiguration> {
@CommandLine.Option(
names = FILTER_ON_ENR_FORK_ID,
hidden = true,
defaultValue = "false",
defaultValue = "true",
description = "Whether to enable filtering of peers based on the ENR field ForkId)")
private final Boolean filterOnEnrForkId = false;
private final Boolean filterOnEnrForkId = NetworkingConfiguration.DEFAULT_FILTER_ON_ENR_FORK_ID;

@CommandLine.Option(
hidden = true,
@@ -22,7 +22,11 @@ import org.hyperledger.besu.datatypes.Hash;
import org.hyperledger.besu.ethereum.chain.Blockchain;
import org.hyperledger.besu.ethereum.chain.MutableBlockchain;
import org.hyperledger.besu.ethereum.core.BlockHeader;
import org.hyperledger.besu.ethereum.rlp.BytesValueRLPInput;
import org.hyperledger.besu.ethereum.rlp.RLP;
import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogFactoryImpl;
import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogLayer;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;

import java.io.File;
@@ -32,6 +36,7 @@ import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.IdentityHashMap;
@@ -39,6 +44,7 @@ import java.util.List;
import java.util.Optional;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.tuweni.bytes.Bytes;
import org.apache.tuweni.bytes.Bytes32;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -97,16 +103,15 @@ public class TrieLogHelper {
final String batchFileNameBase) {

for (long batchNumber = 1; batchNumber <= numberOfBatches; batchNumber++) {

final String batchFileName = batchFileNameBase + "-" + batchNumber;
final long firstBlockOfBatch = chainHeight - ((batchNumber - 1) * BATCH_SIZE);

final long lastBlockOfBatch =
Math.max(chainHeight - (batchNumber * BATCH_SIZE), lastBlockNumberToRetainTrieLogsFor);

final List<Hash> trieLogKeys =
getTrieLogKeysForBlocks(blockchain, firstBlockOfBatch, lastBlockOfBatch);

saveTrieLogBatches(batchFileNameBase, rootWorldStateStorage, batchNumber, trieLogKeys);
LOG.info("Saving trie logs to retain in file (batch {})...", batchNumber);
saveTrieLogBatches(batchFileName, rootWorldStateStorage, trieLogKeys);
}

LOG.info("Clear trie logs...");
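The loop above walks backwards from the chain head in `BATCH_SIZE` steps, clamping the last batch at the retention boundary. A small self-contained sketch of that arithmetic (the `BATCH_SIZE` value here is illustrative; the real constant lives in `TrieLogHelper`):

```java
public class BatchBoundsSketch {
    static final long BATCH_SIZE = 10_000; // illustrative; not the real TrieLogHelper constant

    // First (highest) block of the batch, counting down from the chain head.
    static long firstBlockOfBatch(final long chainHeight, final long batchNumber) {
        return chainHeight - ((batchNumber - 1) * BATCH_SIZE);
    }

    // Last (lowest) block, clamped so we never go below the retention boundary.
    static long lastBlockOfBatch(
            final long chainHeight, final long batchNumber, final long lastBlockToRetain) {
        return Math.max(chainHeight - (batchNumber * BATCH_SIZE), lastBlockToRetain);
    }

    public static void main(String[] args) {
        // Head at 25_000, retaining down to block 4_000: batch 1 covers 25_000 down to 15_000.
        if (firstBlockOfBatch(25_000, 1) != 25_000) throw new AssertionError();
        if (lastBlockOfBatch(25_000, 1, 4_000) != 15_000) throw new AssertionError();
        // Batch 3 would dip below the retention boundary, so it is clamped at 4_000.
        if (lastBlockOfBatch(25_000, 3, 4_000) != 4_000) throw new AssertionError();
        System.out.println("ok");
    }
}
```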
@@ -118,15 +123,12 @@ public class TrieLogHelper {
|
||||
}
|
||||
|
||||
private static void saveTrieLogBatches(
|
||||
final String batchFileNameBase,
|
||||
final String batchFileName,
|
||||
final BonsaiWorldStateKeyValueStorage rootWorldStateStorage,
|
||||
final long batchNumber,
|
||||
final List<Hash> trieLogKeys) {
|
||||
|
||||
LOG.info("Saving trie logs to retain in file (batch {})...", batchNumber);
|
||||
|
||||
try {
|
||||
saveTrieLogsInFile(trieLogKeys, rootWorldStateStorage, batchNumber, batchFileNameBase);
|
||||
saveTrieLogsInFile(trieLogKeys, rootWorldStateStorage, batchFileName);
|
||||
} catch (IOException e) {
|
||||
LOG.error("Error saving trie logs to file: {}", e.getMessage());
|
||||
throw new RuntimeException(e);
|
||||
@@ -210,9 +212,8 @@ public class TrieLogHelper {
|
||||
final String batchFileNameBase)
|
||||
throws IOException {
|
||||
// process in chunk to avoid OOM
|
||||
|
||||
IdentityHashMap<byte[], byte[]> trieLogsToRetain =
|
||||
readTrieLogsFromFile(batchFileNameBase, batchNumber);
|
||||
final String batchFileName = batchFileNameBase + "-" + batchNumber;
|
||||
IdentityHashMap<byte[], byte[]> trieLogsToRetain = readTrieLogsFromFile(batchFileName);
|
||||
final int chunkSize = ROCKSDB_MAX_INSERTS_PER_TRANSACTION;
|
||||
List<byte[]> keys = new ArrayList<>(trieLogsToRetain.keySet());
|
||||
|
||||
@@ -265,11 +266,10 @@ public class TrieLogHelper {
private static void saveTrieLogsInFile(
final List<Hash> trieLogsKeys,
final BonsaiWorldStateKeyValueStorage rootWorldStateStorage,
final long batchNumber,
final String batchFileNameBase)
final String batchFileName)
throws IOException {

File file = new File(batchFileNameBase + "-" + batchNumber);
File file = new File(batchFileName);
if (file.exists()) {
LOG.error("File already exists, skipping file creation");
return;
@@ -285,17 +285,14 @@ public class TrieLogHelper {
}

@SuppressWarnings("unchecked")
private static IdentityHashMap<byte[], byte[]> readTrieLogsFromFile(
final String batchFileNameBase, final long batchNumber) {
static IdentityHashMap<byte[], byte[]> readTrieLogsFromFile(final String batchFileName) {

IdentityHashMap<byte[], byte[]> trieLogs;
try (FileInputStream fis = new FileInputStream(batchFileNameBase + "-" + batchNumber);
try (FileInputStream fis = new FileInputStream(batchFileName);
ObjectInputStream ois = new ObjectInputStream(fis)) {

trieLogs = (IdentityHashMap<byte[], byte[]>) ois.readObject();

} catch (IOException | ClassNotFoundException e) {

LOG.error(e.getMessage());
throw new RuntimeException(e);
}
@@ -303,6 +300,52 @@ public class TrieLogHelper {
return trieLogs;
}

private static void saveTrieLogsAsRlpInFile(
final List<Hash> trieLogsKeys,
final BonsaiWorldStateKeyValueStorage rootWorldStateStorage,
final String batchFileName) {
File file = new File(batchFileName);
if (file.exists()) {
LOG.error("File already exists, skipping file creation");
return;
}

final IdentityHashMap<byte[], byte[]> trieLogs =
getTrieLogs(trieLogsKeys, rootWorldStateStorage);
final Bytes rlp =
RLP.encode(
o ->
o.writeList(
trieLogs.entrySet(), (val, out) -> out.writeRaw(Bytes.wrap(val.getValue()))));
try {
Files.write(file.toPath(), rlp.toArrayUnsafe());
} catch (IOException e) {
LOG.error(e.getMessage());
throw new RuntimeException(e);
}
}

static IdentityHashMap<byte[], byte[]> readTrieLogsAsRlpFromFile(final String batchFileName) {
try {
final Bytes file = Bytes.wrap(Files.readAllBytes(Path.of(batchFileName)));
final BytesValueRLPInput input = new BytesValueRLPInput(file, false);

input.enterList();
final IdentityHashMap<byte[], byte[]> trieLogs = new IdentityHashMap<>();
while (!input.isEndOfCurrentList()) {
final Bytes trieLogBytes = input.currentListAsBytes();
TrieLogLayer trieLogLayer =
TrieLogFactoryImpl.readFrom(new BytesValueRLPInput(Bytes.wrap(trieLogBytes), false));
trieLogs.put(trieLogLayer.getBlockHash().toArrayUnsafe(), trieLogBytes.toArrayUnsafe());
}
input.leaveList();

return trieLogs;
} catch (IOException e) {
throw new RuntimeException(e);
}
}

private static IdentityHashMap<byte[], byte[]> getTrieLogs(
final List<Hash> trieLogKeys, final BonsaiWorldStateKeyValueStorage rootWorldStateStorage) {
IdentityHashMap<byte[], byte[]> trieLogsToRetain = new IdentityHashMap<>();
@@ -357,5 +400,25 @@ public class TrieLogHelper {
count.total, count.canonicalCount, count.forkCount, count.orphanCount);
}

static void importTrieLog(
final BonsaiWorldStateKeyValueStorage rootWorldStateStorage, final Path trieLogFilePath) {

var trieLog = readTrieLogsAsRlpFromFile(trieLogFilePath.toString());

var updater = rootWorldStateStorage.updater();
trieLog.forEach((key, value) -> updater.getTrieLogStorageTransaction().put(key, value));
updater.getTrieLogStorageTransaction().commit();
}

static void exportTrieLog(
final BonsaiWorldStateKeyValueStorage rootWorldStateStorage,
final List<Hash> trieLogHash,
final Path directoryPath)
throws IOException {
final String trieLogFile = directoryPath.toString();

saveTrieLogsAsRlpInFile(trieLogHash, rootWorldStateStorage, trieLogFile);
}

record TrieLogCount(int total, int canonicalCount, int forkCount, int orphanCount) {}
}
@@ -19,6 +19,7 @@ import static com.google.common.base.Preconditions.checkNotNull;

import org.hyperledger.besu.cli.util.VersionProvider;
import org.hyperledger.besu.controller.BesuController;
import org.hyperledger.besu.datatypes.Hash;
import org.hyperledger.besu.ethereum.chain.MutableBlockchain;
import org.hyperledger.besu.ethereum.storage.StorageProvider;
import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
@@ -26,9 +27,11 @@ import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogPruner;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;

import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.config.Configurator;
@@ -43,7 +46,12 @@ import picocli.CommandLine.ParentCommand;
description = "Manipulate trie logs",
mixinStandardHelpOptions = true,
versionProvider = VersionProvider.class,
subcommands = {TrieLogSubCommand.CountTrieLog.class, TrieLogSubCommand.PruneTrieLog.class})
subcommands = {
TrieLogSubCommand.CountTrieLog.class,
TrieLogSubCommand.PruneTrieLog.class,
TrieLogSubCommand.ExportTrieLog.class,
TrieLogSubCommand.ImportTrieLog.class
})
public class TrieLogSubCommand implements Runnable {

@SuppressWarnings("UnusedVariable")
@@ -123,6 +131,102 @@ public class TrieLogSubCommand implements Runnable {
}
}
@Command(
name = "export",
description = "This command exports the trie log of a determined block to a binary file",
mixinStandardHelpOptions = true,
versionProvider = VersionProvider.class)
static class ExportTrieLog implements Runnable {

@SuppressWarnings("unused")
@ParentCommand
private TrieLogSubCommand parentCommand;

@SuppressWarnings("unused")
@CommandLine.Spec
private CommandLine.Model.CommandSpec spec; // Picocli injects reference to command spec

@CommandLine.Option(
names = "--trie-log-block-hash",
description =
"Comma separated list of hashes from the blocks you want to export the trie logs of",
split = " {0,1}, {0,1}",
arity = "1..*")
private List<String> trieLogBlockHashList;

@CommandLine.Option(
names = "--trie-log-file-path",
description = "The file you want to export the trie logs to",
arity = "1..1")
private Path trieLogFilePath = null;

@Override
public void run() {
if (trieLogFilePath == null) {
trieLogFilePath =
Paths.get(
TrieLogSubCommand.parentCommand
.parentCommand
.dataDir()
.resolve("trie-logs.bin")
.toAbsolutePath()
.toString());
}

TrieLogContext context = getTrieLogContext();

final List<Hash> listOfBlockHashes =
trieLogBlockHashList.stream().map(Hash::fromHexString).toList();

try {
TrieLogHelper.exportTrieLog(
context.rootWorldStateStorage(), listOfBlockHashes, trieLogFilePath);
} catch (IOException e) {
throw new RuntimeException(e);
}
}
}

@Command(
name = "import",
description = "This command imports a trie log exported by another besu node",
mixinStandardHelpOptions = true,
versionProvider = VersionProvider.class)
static class ImportTrieLog implements Runnable {

@SuppressWarnings("unused")
@ParentCommand
private TrieLogSubCommand parentCommand;

@SuppressWarnings("unused")
@CommandLine.Spec
private CommandLine.Model.CommandSpec spec; // Picocli injects reference to command spec

@CommandLine.Option(
names = "--trie-log-file-path",
description = "The file you want to import the trie logs from",
arity = "1..1")
private Path trieLogFilePath = null;

@Override
public void run() {
if (trieLogFilePath == null) {
trieLogFilePath =
Paths.get(
TrieLogSubCommand.parentCommand
.parentCommand
.dataDir()
.resolve("trie-logs.bin")
.toAbsolutePath()
.toString());
}

TrieLogContext context = getTrieLogContext();

TrieLogHelper.importTrieLog(context.rootWorldStateStorage(), trieLogFilePath);
}
}

record TrieLogContext(
DataStorageConfiguration config,
BonsaiWorldStateKeyValueStorage rootWorldStateStorage,
@@ -139,8 +243,7 @@ public class TrieLogSubCommand implements Runnable {

final StorageProvider storageProvider = besuController.getStorageProvider();
final BonsaiWorldStateKeyValueStorage rootWorldStateStorage =
(BonsaiWorldStateKeyValueStorage)
storageProvider.createWorldStateStorage(DataStorageFormat.BONSAI);
(BonsaiWorldStateKeyValueStorage) storageProvider.createWorldStateStorage(config);
final MutableBlockchain blockchain = besuController.getProtocolContext().getBlockchain();
return new TrieLogContext(config, rootWorldStateStorage, blockchain);
}
@@ -55,7 +55,6 @@ public class ConfigOptionSearchAndRunHandler extends CommandLine.RunLast {
public List<Object> handle(final ParseResult parseResult) throws ParameterException {
final CommandLine commandLine = parseResult.commandSpec().commandLine();
final Optional<File> configFile = findConfigFile(parseResult, commandLine);
validatePrivacyOptions(parseResult, commandLine);
commandLine.setDefaultValueProvider(createDefaultValueProvider(commandLine, configFile));
commandLine.setExecutionStrategy(resultHandler);
commandLine.setParameterExceptionHandler(parameterExceptionHandler);
@@ -64,16 +63,6 @@
return new ArrayList<>();
}

private void validatePrivacyOptions(
final ParseResult parseResult, final CommandLine commandLine) {
if (parseResult.hasMatchedOption("--privacy-onchain-groups-enabled")
&& parseResult.hasMatchedOption("--privacy-flexible-groups-enabled")) {
throw new ParameterException(
commandLine,
"The `--privacy-onchain-groups-enabled` option is deprecated and you should only use `--privacy-flexible-groups-enabled`");
}
}

private Optional<File> findConfigFile(
final ParseResult parseResult, final CommandLine commandLine) {
if (parseResult.hasMatchedOption("--config-file")
@@ -591,12 +591,14 @@ public abstract class BesuControllerBuilder implements MiningParameterOverrides
prepForBuild();

final ProtocolSchedule protocolSchedule = createProtocolSchedule();
final GenesisState genesisState = GenesisState.fromConfig(genesisConfig, protocolSchedule);
final GenesisState genesisState =
GenesisState.fromConfig(
dataStorageConfiguration.getDataStorageFormat(), genesisConfig, protocolSchedule);

final VariablesStorage variablesStorage = storageProvider.createVariablesStorage();

final WorldStateStorage worldStateStorage =
storageProvider.createWorldStateStorage(dataStorageConfiguration.getDataStorageFormat());
storageProvider.createWorldStateStorage(dataStorageConfiguration);

final BlockchainStorage blockchainStorage =
storageProvider.createBlockchainStorage(protocolSchedule, variablesStorage);
@@ -1086,7 +1088,6 @@ public abstract class BesuControllerBuilder implements MiningParameterOverrides
blockchain,
Optional.of(dataStorageConfiguration.getBonsaiMaxLayersToLoad()),
cachedMerkleTrieLoader,
metricsSystem,
besuComponent.map(BesuComponent::getBesuPluginContext).orElse(null),
evmConfiguration,
trieLogPruner);
@@ -29,7 +29,6 @@ import static org.hyperledger.besu.cli.config.NetworkName.MAINNET;
import static org.hyperledger.besu.cli.config.NetworkName.MORDOR;
import static org.hyperledger.besu.cli.config.NetworkName.SEPOLIA;
import static org.hyperledger.besu.cli.util.CommandLineUtils.DEPENDENCY_WARNING_MSG;
import static org.hyperledger.besu.cli.util.CommandLineUtils.DEPRECATION_WARNING_MSG;
import static org.hyperledger.besu.ethereum.api.jsonrpc.RpcApis.ENGINE;
import static org.hyperledger.besu.ethereum.api.jsonrpc.RpcApis.ETH;
import static org.hyperledger.besu.ethereum.api.jsonrpc.RpcApis.NET;
@@ -96,6 +95,7 @@ import org.hyperledger.besu.plugin.services.privacy.PrivateMarkerTransactionFact
import org.hyperledger.besu.plugin.services.rpc.PluginRpcRequest;
import org.hyperledger.besu.util.number.Fraction;
import org.hyperledger.besu.util.number.Percentage;
import org.hyperledger.besu.util.number.PositiveNumber;
import org.hyperledger.besu.util.platform.PlatformDetector;

import java.io.File;
@@ -847,6 +847,8 @@ public class BesuCommandTest extends CommandTestAbstract {
tomlResult.getDouble(tomlKey);
} else if (Percentage.class.isAssignableFrom(optionSpec.type())) {
tomlResult.getLong(tomlKey);
} else if (PositiveNumber.class.isAssignableFrom(optionSpec.type())) {
tomlResult.getLong(tomlKey);
} else {
tomlResult.getString(tomlKey);
}
@@ -1977,16 +1979,6 @@ public class BesuCommandTest extends CommandTestAbstract {
"The `--ethstats-contact` requires ethstats server URL to be provided. Either remove --ethstats-contact or provide a URL (via --ethstats=nodename:secret@host:port)");
}

@Test
public void privacyOnchainGroupsEnabledCannotBeUsedWithPrivacyFlexibleGroupsEnabled() {
parseCommand("--privacy-onchain-groups-enabled", "--privacy-flexible-groups-enabled");
Mockito.verifyNoInteractions(mockRunnerBuilder);
assertThat(commandOutput.toString(UTF_8)).isEmpty();
assertThat(commandErrorOutput.toString(UTF_8))
.contains(
"The `--privacy-onchain-groups-enabled` option is deprecated and you should only use `--privacy-flexible-groups-enabled`");
}

@Test
public void parsesValidBonsaiTrieLimitBackLayersOption() {
parseCommand("--data-storage-format", "BONSAI", "--bonsai-historical-block-limit", "11");
@@ -3840,8 +3832,8 @@
}

@Test
public void pruningLogsDeprecationWarning() {
parseCommand("--pruning-enabled");
public void pruningLogsDeprecationWarningWithForest() {
parseCommand("--pruning-enabled", "--data-storage-format=FOREST");

verify(mockControllerBuilder).isPruningEnabled(true);

@@ -3854,6 +3846,17 @@
+ " To save disk space consider switching to Bonsai data storage format."));
}

@Test
public void pruningLogsIgnoredWarningWithBonsai() {
parseCommand("--pruning-enabled", "--data-storage-format=BONSAI");

verify(mockControllerBuilder).isPruningEnabled(true);

assertThat(commandOutput.toString(UTF_8)).isEmpty();
assertThat(commandErrorOutput.toString(UTF_8)).isEmpty();
verify(mockLogger).warn(contains("Forest pruning is ignored with Bonsai data storage format."));
}

@Test
public void devModeOptionMustBeUsed() throws Exception {
parseCommand("--network", "dev");
@@ -4192,46 +4195,6 @@
assertThat(privacyParameters.isFlexiblePrivacyGroupsEnabled()).isEqualTo(false);
}

@Test
public void onchainPrivacyGroupEnabledFlagValueIsSet() {
parseCommand(
"--privacy-enabled",
"--privacy-public-key-file",
ENCLAVE_PUBLIC_KEY_PATH,
"--privacy-onchain-groups-enabled",
"--min-gas-price",
"0");

final ArgumentCaptor<PrivacyParameters> privacyParametersArgumentCaptor =
ArgumentCaptor.forClass(PrivacyParameters.class);

verify(mockControllerBuilder).privacyParameters(privacyParametersArgumentCaptor.capture());
verify(mockControllerBuilder).build();

assertThat(commandOutput.toString(UTF_8)).isEmpty();
assertThat(commandErrorOutput.toString(UTF_8)).isEmpty();

final PrivacyParameters privacyParameters = privacyParametersArgumentCaptor.getValue();
assertThat(privacyParameters.isFlexiblePrivacyGroupsEnabled()).isEqualTo(true);
}

@Test
public void onchainPrivacyGroupEnabledOptionIsDeprecated() {
parseCommand(
"--privacy-enabled",
"--privacy-public-key-file",
ENCLAVE_PUBLIC_KEY_PATH,
"--privacy-onchain-groups-enabled",
"--min-gas-price",
"0");

verify(mockLogger)
.warn(
DEPRECATION_WARNING_MSG,
"--privacy-onchain-groups-enabled",
"--privacy-flexible-groups-enabled");
}

@Test
public void flexiblePrivacyGroupEnabledFlagValueIsSet() {
parseCommand(
@@ -15,8 +15,8 @@
package org.hyperledger.besu.cli.options;

import static org.assertj.core.api.Assertions.assertThat;
import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;
import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_POA_BLOCK_TXS_SELECTION_MAX_TIME;
import static org.hyperledger.besu.ethereum.core.MiningParameters.DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;
import static org.hyperledger.besu.ethereum.core.MiningParameters.DEFAULT_POA_BLOCK_TXS_SELECTION_MAX_TIME;
import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_POS_BLOCK_CREATION_MAX_TIME;
import static org.mockito.Mockito.atMost;
import static org.mockito.Mockito.verify;
@@ -28,7 +28,7 @@ import org.hyperledger.besu.ethereum.core.ImmutableMiningParameters;
import org.hyperledger.besu.ethereum.core.ImmutableMiningParameters.MutableInitValues;
import org.hyperledger.besu.ethereum.core.ImmutableMiningParameters.Unstable;
import org.hyperledger.besu.ethereum.core.MiningParameters;
import org.hyperledger.besu.util.number.Percentage;
import org.hyperledger.besu.util.number.PositiveNumber;

import java.io.IOException;
import java.nio.file.Path;
@@ -315,35 +315,26 @@ public class MiningOptionsTest extends AbstractCLIOptionsTest<MiningParameters,
public void blockTxsSelectionMaxTimeDefaultValue() {
internalTestSuccess(
miningParams ->
assertThat(miningParams.getUnstable().getBlockTxsSelectionMaxTime())
assertThat(miningParams.getNonPoaBlockTxsSelectionMaxTime())
.isEqualTo(DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME));
}

@Test
public void blockTxsSelectionMaxTimeOption() {
internalTestSuccess(
miningParams ->
assertThat(miningParams.getUnstable().getBlockTxsSelectionMaxTime()).isEqualTo(1700L),
"--Xblock-txs-selection-max-time",
miningParams -> assertThat(miningParams.getBlockTxsSelectionMaxTime()).isEqualTo(1700L),
"--block-txs-selection-max-time",
"1700");
}

@Test
public void blockTxsSelectionMaxTimeOutOfAllowedRange() {
internalTestFailure(
"--Xblock-txs-selection-max-time must be positive and ≤ 5000",
"--Xblock-txs-selection-max-time",
"6000");
}

@Test
public void blockTxsSelectionMaxTimeIncompatibleWithPoaNetworks() throws IOException {
final Path genesisFileIBFT2 = createFakeGenesisFile(VALID_GENESIS_IBFT2_POST_LONDON);
internalTestFailure(
"--Xblock-txs-selection-max-time can't be used with PoA networks, see Xpoa-block-txs-selection-max-time instead",
"--block-txs-selection-max-time can't be used with PoA networks, see poa-block-txs-selection-max-time instead",
"--genesis-file",
genesisFileIBFT2.toString(),
"--Xblock-txs-selection-max-time",
"--block-txs-selection-max-time",
"2");
}

@@ -351,7 +342,7 @@
public void poaBlockTxsSelectionMaxTimeDefaultValue() {
internalTestSuccess(
miningParams ->
assertThat(miningParams.getUnstable().getPoaBlockTxsSelectionMaxTime())
assertThat(miningParams.getPoaBlockTxsSelectionMaxTime())
.isEqualTo(DEFAULT_POA_BLOCK_TXS_SELECTION_MAX_TIME));
}

@@ -360,27 +351,32 @@
final Path genesisFileIBFT2 = createFakeGenesisFile(VALID_GENESIS_IBFT2_POST_LONDON);
internalTestSuccess(
miningParams ->
assertThat(miningParams.getUnstable().getPoaBlockTxsSelectionMaxTime())
.isEqualTo(Percentage.fromInt(80)),
assertThat(miningParams.getPoaBlockTxsSelectionMaxTime())
.isEqualTo(PositiveNumber.fromInt(80)),
"--genesis-file",
genesisFileIBFT2.toString(),
"--Xpoa-block-txs-selection-max-time",
"--poa-block-txs-selection-max-time",
"80");
}

@Test
public void poaBlockTxsSelectionMaxTimeOutOfAllowedRange() {
internalTestFailure(
"Invalid value for option '--Xpoa-block-txs-selection-max-time': cannot convert '110' to Percentage",
"--Xpoa-block-txs-selection-max-time",
"110");
public void poaBlockTxsSelectionMaxTimeOptionOver100Percent() throws IOException {
final Path genesisFileIBFT2 = createFakeGenesisFile(VALID_GENESIS_IBFT2_POST_LONDON);
internalTestSuccess(
miningParams ->
assertThat(miningParams.getPoaBlockTxsSelectionMaxTime())
.isEqualTo(PositiveNumber.fromInt(200)),
"--genesis-file",
genesisFileIBFT2.toString(),
"--poa-block-txs-selection-max-time",
"200");
}

@Test
public void poaBlockTxsSelectionMaxTimeOnlyCompatibleWithPoaNetworks() {
internalTestFailure(
"--Xpoa-block-txs-selection-max-time can be only used with PoA networks, see --Xblock-txs-selection-max-time instead",
"--Xpoa-block-txs-selection-max-time",
"--poa-block-txs-selection-max-time can be only used with PoA networks, see --block-txs-selection-max-time instead",
"--poa-block-txs-selection-max-time",
"90");
}
@@ -134,7 +134,7 @@ public class NetworkingOptionsTest

final NetworkingOptions options = cmd.getNetworkingOptions();
final NetworkingConfiguration networkingConfig = options.toDomainObject();
assertThat(networkingConfig.getDiscovery().isFilterOnEnrForkIdEnabled()).isEqualTo(false);
assertThat(networkingConfig.getDiscovery().isFilterOnEnrForkIdEnabled()).isEqualTo(true);

assertThat(commandErrorOutput.toString(UTF_8)).isEmpty();
assertThat(commandOutput.toString(UTF_8)).isEmpty();
@@ -34,8 +34,8 @@ public class DataStorageOptionsTest
dataStorageConfiguration ->
assertThat(dataStorageConfiguration.getUnstable().getBonsaiTrieLogPruningLimit())
.isEqualTo(1),
"--Xbonsai-trie-log-pruning-enabled",
"--Xbonsai-trie-log-pruning-limit",
"--Xbonsai-limit-trie-logs-enabled",
"--Xbonsai-trie-logs-pruning-limit",
"1");
}

@@ -43,8 +43,8 @@
public void bonsaiTrieLogPruningLimitShouldBePositive() {
internalTestFailure(
"--Xbonsai-trie-log-pruning-limit=0 must be greater than 0",
"--Xbonsai-trie-log-pruning-enabled",
"--Xbonsai-trie-log-pruning-limit",
"--Xbonsai-limit-trie-logs-enabled",
"--Xbonsai-trie-logs-pruning-limit",
"0");
}

@@ -54,8 +54,8 @@
dataStorageConfiguration ->
assertThat(dataStorageConfiguration.getUnstable().getBonsaiTrieLogRetentionThreshold())
.isEqualTo(MINIMUM_BONSAI_TRIE_LOG_RETENTION_THRESHOLD + 1),
"--Xbonsai-trie-log-pruning-enabled",
"--Xbonsai-trie-log-retention-threshold",
"--Xbonsai-limit-trie-logs-enabled",
"--Xbonsai-trie-logs-retention-threshold",
"513");
}

@@ -65,8 +65,8 @@
dataStorageConfiguration ->
assertThat(dataStorageConfiguration.getUnstable().getBonsaiTrieLogRetentionThreshold())
.isEqualTo(MINIMUM_BONSAI_TRIE_LOG_RETENTION_THRESHOLD),
"--Xbonsai-trie-log-pruning-enabled",
"--Xbonsai-trie-log-retention-threshold",
"--Xbonsai-limit-trie-logs-enabled",
"--Xbonsai-trie-logs-retention-threshold",
"512");
}

@@ -74,8 +74,8 @@
public void bonsaiTrieLogRetentionThresholdShouldBeAboveMinimum() {
internalTestFailure(
"--Xbonsai-trie-log-retention-threshold minimum value is 512",
"--Xbonsai-trie-log-pruning-enabled",
"--Xbonsai-trie-log-retention-threshold",
"--Xbonsai-limit-trie-logs-enabled",
"--Xbonsai-trie-logs-retention-threshold",
"511");
}
@@ -15,6 +15,7 @@
|
||||
|
||||
package org.hyperledger.besu.cli.subcommands.storage;
|
||||
|
||||
import static java.util.Collections.singletonList;
|
||||
import static org.hyperledger.besu.ethereum.worldstate.DataStorageFormat.BONSAI;
|
||||
import static org.junit.jupiter.api.Assertions.assertArrayEquals;
|
||||
import static org.junit.jupiter.api.Assertions.assertEquals;
|
||||
@@ -27,8 +28,11 @@ import org.hyperledger.besu.ethereum.chain.MutableBlockchain;
|
||||
import org.hyperledger.besu.ethereum.core.BlockHeader;
|
||||
import org.hyperledger.besu.ethereum.core.BlockHeaderTestFixture;
|
||||
import org.hyperledger.besu.ethereum.core.InMemoryKeyValueStorageProvider;
|
||||
import org.hyperledger.besu.ethereum.rlp.BytesValueRLPOutput;
|
||||
import org.hyperledger.besu.ethereum.storage.StorageProvider;
|
||||
import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
|
||||
import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogFactoryImpl;
|
||||
import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogLayer;
|
||||
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
|
||||
import org.hyperledger.besu.ethereum.worldstate.ImmutableDataStorageConfiguration;
|
||||
import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
|
||||
@@ -36,11 +40,12 @@ import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
|
||||
import java.io.IOException;
|
||||
import java.nio.file.Files;
|
||||
import java.nio.file.Path;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
import java.util.Optional;
|
||||
import java.util.stream.Collectors;
|
||||
|
||||
import org.apache.tuweni.bytes.Bytes;
|
||||
import org.junit.jupiter.api.AfterEach;
|
||||
import org.junit.jupiter.api.BeforeAll;
|
||||
import org.junit.jupiter.api.BeforeEach;
|
||||
import org.junit.jupiter.api.Test;
|
||||
import org.junit.jupiter.api.extension.ExtendWith;
|
||||
@@ -56,17 +61,14 @@ class TrieLogHelperTest {
|
||||
|
||||
@Mock private MutableBlockchain blockchain;
|
||||
|
||||
@TempDir static Path dataDir;
|
||||
|
||||
Path test;
|
||||
static BlockHeader blockHeader1;
|
||||
static BlockHeader blockHeader2;
|
||||
static BlockHeader blockHeader3;
|
||||
static BlockHeader blockHeader4;
|
||||
static BlockHeader blockHeader5;
|
||||
|
||||
@BeforeAll
|
||||
public static void setup() throws IOException {
|
||||
@BeforeEach
|
||||
public void setup() throws IOException {
|
||||
|
||||
blockHeader1 = new BlockHeaderTestFixture().number(1).buildHeader();
|
||||
blockHeader2 = new BlockHeaderTestFixture().number(2).buildHeader();
|
||||
@@ -75,35 +77,36 @@ class TrieLogHelperTest {
|
||||
blockHeader5 = new BlockHeaderTestFixture().number(5).buildHeader();
|
||||
|
||||
inMemoryWorldState =
|
||||
new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem());
|
||||
new BonsaiWorldStateKeyValueStorage(
|
||||
storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
|
||||
|
||||
createTrieLog(blockHeader1);
|
||||
|
||||
var updater = inMemoryWorldState.updater();
|
||||
updater
|
||||
.getTrieLogStorageTransaction()
|
||||
.put(blockHeader1.getHash().toArrayUnsafe(), Bytes.fromHexString("0x01").toArrayUnsafe());
|
||||
.put(blockHeader1.getHash().toArrayUnsafe(), createTrieLog(blockHeader1));
|
||||
updater
|
||||
.getTrieLogStorageTransaction()
|
||||
.put(blockHeader2.getHash().toArrayUnsafe(), Bytes.fromHexString("0x02").toArrayUnsafe());
|
||||
.put(blockHeader2.getHash().toArrayUnsafe(), createTrieLog(blockHeader2));
|
||||
updater
|
||||
.getTrieLogStorageTransaction()
|
||||
.put(blockHeader3.getHash().toArrayUnsafe(), Bytes.fromHexString("0x03").toArrayUnsafe());
|
||||
.put(blockHeader3.getHash().toArrayUnsafe(), createTrieLog(blockHeader3));
|
||||
updater
|
||||
.getTrieLogStorageTransaction()
|
||||
.put(blockHeader4.getHash().toArrayUnsafe(), Bytes.fromHexString("0x04").toArrayUnsafe());
|
||||
.put(blockHeader4.getHash().toArrayUnsafe(), createTrieLog(blockHeader4));
|
||||
updater
|
||||
.getTrieLogStorageTransaction()
|
||||
.put(blockHeader5.getHash().toArrayUnsafe(), Bytes.fromHexString("0x05").toArrayUnsafe());
|
||||
.put(blockHeader5.getHash().toArrayUnsafe(), createTrieLog(blockHeader5));
|
||||
updater.getTrieLogStorageTransaction().commit();
|
||||
}
|
||||
|
||||
@BeforeEach
|
||||
void createDirectory() throws IOException {
|
||||
Files.createDirectories(dataDir.resolve("database"));
|
||||
}
|
||||
|
||||
@AfterEach
|
||||
void deleteDirectory() throws IOException {
|
||||
Files.deleteIfExists(dataDir.resolve("database"));
|
||||
private static byte[] createTrieLog(final BlockHeader blockHeader) {
|
||||
TrieLogLayer trieLogLayer = new TrieLogLayer();
|
||||
trieLogLayer.setBlockHash(blockHeader.getBlockHash());
|
||||
final BytesValueRLPOutput rlpLog = new BytesValueRLPOutput();
|
||||
TrieLogFactoryImpl.writeTo(trieLogLayer, rlpLog);
|
||||
return rlpLog.encoded().toArrayUnsafe();
|
||||
}
|
||||
|
||||
void mockBlockchainBase() {
|
||||
@@ -113,7 +116,8 @@ class TrieLogHelperTest {
   }
 
   @Test
-  public void prune() {
+  public void prune(final @TempDir Path dataDir) throws IOException {
+    Files.createDirectories(dataDir.resolve("database"));
 
     DataStorageConfiguration dataStorageConfiguration =
         ImmutableDataStorageConfiguration.builder()
@@ -133,14 +137,11 @@ class TrieLogHelperTest {
 
     // assert trie logs that will be pruned exist before prune call
     assertArrayEquals(
-        inMemoryWorldState.getTrieLog(blockHeader1.getHash()).get(),
-        Bytes.fromHexString("0x01").toArrayUnsafe());
+        inMemoryWorldState.getTrieLog(blockHeader1.getHash()).get(), createTrieLog(blockHeader1));
     assertArrayEquals(
-        inMemoryWorldState.getTrieLog(blockHeader2.getHash()).get(),
-        Bytes.fromHexString("0x02").toArrayUnsafe());
+        inMemoryWorldState.getTrieLog(blockHeader2.getHash()).get(), createTrieLog(blockHeader2));
     assertArrayEquals(
-        inMemoryWorldState.getTrieLog(blockHeader3.getHash()).get(),
-        Bytes.fromHexString("0x03").toArrayUnsafe());
+        inMemoryWorldState.getTrieLog(blockHeader3.getHash()).get(), createTrieLog(blockHeader3));
 
     TrieLogHelper.prune(dataStorageConfiguration, inMemoryWorldState, blockchain, dataDir);
 
@@ -150,18 +151,15 @@ class TrieLogHelperTest {
 
     // assert retained trie logs are in the DB
     assertArrayEquals(
-        inMemoryWorldState.getTrieLog(blockHeader3.getHash()).get(),
-        Bytes.fromHexString("0x03").toArrayUnsafe());
+        inMemoryWorldState.getTrieLog(blockHeader3.getHash()).get(), createTrieLog(blockHeader3));
     assertArrayEquals(
-        inMemoryWorldState.getTrieLog(blockHeader4.getHash()).get(),
-        Bytes.fromHexString("0x04").toArrayUnsafe());
+        inMemoryWorldState.getTrieLog(blockHeader4.getHash()).get(), createTrieLog(blockHeader4));
     assertArrayEquals(
-        inMemoryWorldState.getTrieLog(blockHeader5.getHash()).get(),
-        Bytes.fromHexString("0x05").toArrayUnsafe());
+        inMemoryWorldState.getTrieLog(blockHeader5.getHash()).get(), createTrieLog(blockHeader5));
   }
 
   @Test
-  public void cantPruneIfNoFinalizedIsFound() {
+  public void cantPruneIfNoFinalizedIsFound(final @TempDir Path dataDir) {
     DataStorageConfiguration dataStorageConfiguration =
         ImmutableDataStorageConfiguration.builder()
             .dataStorageFormat(BONSAI)
@@ -183,7 +181,7 @@ class TrieLogHelperTest {
   }
 
   @Test
-  public void cantPruneIfUserRetainsMoreLayerThanExistingChainLength() {
+  public void cantPruneIfUserRetainsMoreLayerThanExistingChainLength(final @TempDir Path dataDir) {
     DataStorageConfiguration dataStorageConfiguration =
         ImmutableDataStorageConfiguration.builder()
             .dataStorageFormat(BONSAI)
@@ -204,7 +202,7 @@ class TrieLogHelperTest {
   }
 
   @Test
-  public void cantPruneIfUserRequiredFurtherThanFinalized() {
+  public void cantPruneIfUserRequiredFurtherThanFinalized(final @TempDir Path dataDir) {
 
     DataStorageConfiguration dataStorageConfiguration =
         ImmutableDataStorageConfiguration.builder()
@@ -226,8 +224,7 @@ class TrieLogHelperTest {
   }
 
   @Test
-  public void exceptionWhileSavingFileStopsPruneProcess() throws IOException {
-    Files.delete(dataDir.resolve("database"));
+  public void exceptionWhileSavingFileStopsPruneProcess(final @TempDir Path dataDir) {
 
     DataStorageConfiguration dataStorageConfiguration =
         ImmutableDataStorageConfiguration.builder()
@@ -243,23 +240,121 @@ class TrieLogHelperTest {
     assertThrows(
         RuntimeException.class,
         () ->
-            TrieLogHelper.prune(dataStorageConfiguration, inMemoryWorldState, blockchain, dataDir));
+            TrieLogHelper.prune(
+                dataStorageConfiguration,
+                inMemoryWorldState,
+                blockchain,
+                dataDir.resolve("unknownPath")));
 
     // assert all trie logs are still in the DB
     assertArrayEquals(
-        inMemoryWorldState.getTrieLog(blockHeader1.getHash()).get(),
-        Bytes.fromHexString("0x01").toArrayUnsafe());
+        inMemoryWorldState.getTrieLog(blockHeader1.getHash()).get(), createTrieLog(blockHeader1));
     assertArrayEquals(
-        inMemoryWorldState.getTrieLog(blockHeader2.getHash()).get(),
-        Bytes.fromHexString("0x02").toArrayUnsafe());
+        inMemoryWorldState.getTrieLog(blockHeader2.getHash()).get(), createTrieLog(blockHeader2));
     assertArrayEquals(
-        inMemoryWorldState.getTrieLog(blockHeader3.getHash()).get(),
-        Bytes.fromHexString("0x03").toArrayUnsafe());
+        inMemoryWorldState.getTrieLog(blockHeader3.getHash()).get(), createTrieLog(blockHeader3));
     assertArrayEquals(
-        inMemoryWorldState.getTrieLog(blockHeader4.getHash()).get(),
-        Bytes.fromHexString("0x04").toArrayUnsafe());
+        inMemoryWorldState.getTrieLog(blockHeader4.getHash()).get(), createTrieLog(blockHeader4));
     assertArrayEquals(
-        inMemoryWorldState.getTrieLog(blockHeader5.getHash()).get(),
-        Bytes.fromHexString("0x05").toArrayUnsafe());
+        inMemoryWorldState.getTrieLog(blockHeader5.getHash()).get(), createTrieLog(blockHeader5));
   }
 
+  @Test
+  public void exportedTrieMatchesDbTrieLog(final @TempDir Path dataDir) throws IOException {
+    TrieLogHelper.exportTrieLog(
+        inMemoryWorldState,
+        singletonList(blockHeader1.getHash()),
+        dataDir.resolve("trie-log-dump"));
+
+    var trieLog =
+        TrieLogHelper.readTrieLogsAsRlpFromFile(dataDir.resolve("trie-log-dump").toString())
+            .entrySet()
+            .stream()
+            .findFirst()
+            .get();
+
+    assertArrayEquals(trieLog.getKey(), blockHeader1.getHash().toArrayUnsafe());
+    assertArrayEquals(
+        trieLog.getValue(), inMemoryWorldState.getTrieLog(blockHeader1.getHash()).get());
+  }
+
+  @Test
+  public void exportedMultipleTriesMatchDbTrieLogs(final @TempDir Path dataDir) throws IOException {
+    TrieLogHelper.exportTrieLog(
+        inMemoryWorldState,
+        List.of(blockHeader1.getHash(), blockHeader2.getHash(), blockHeader3.getHash()),
+        dataDir.resolve("trie-log-dump"));
+
+    var trieLogs =
+        TrieLogHelper.readTrieLogsAsRlpFromFile(dataDir.resolve("trie-log-dump").toString())
+            .entrySet()
+            .stream()
+            .collect(Collectors.toMap(e -> Bytes.wrap(e.getKey()), Map.Entry::getValue));
+
+    assertArrayEquals(
+        trieLogs.get(blockHeader1.getHash()),
+        inMemoryWorldState.getTrieLog(blockHeader1.getHash()).get());
+    assertArrayEquals(
+        trieLogs.get(blockHeader2.getHash()),
+        inMemoryWorldState.getTrieLog(blockHeader2.getHash()).get());
+    assertArrayEquals(
+        trieLogs.get(blockHeader3.getHash()),
+        inMemoryWorldState.getTrieLog(blockHeader3.getHash()).get());
+  }
+
+  @Test
+  public void importedTrieLogMatchesDbTrieLog(final @TempDir Path dataDir) throws IOException {
+    StorageProvider tempStorageProvider = new InMemoryKeyValueStorageProvider();
+    BonsaiWorldStateKeyValueStorage inMemoryWorldState2 =
+        new BonsaiWorldStateKeyValueStorage(
+            tempStorageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
+
+    TrieLogHelper.exportTrieLog(
+        inMemoryWorldState,
+        singletonList(blockHeader1.getHash()),
+        dataDir.resolve("trie-log-dump"));
+
+    var trieLog =
+        TrieLogHelper.readTrieLogsAsRlpFromFile(dataDir.resolve("trie-log-dump").toString());
+    var updater = inMemoryWorldState2.updater();
+
+    trieLog.forEach((k, v) -> updater.getTrieLogStorageTransaction().put(k, v));
+
+    updater.getTrieLogStorageTransaction().commit();
+
+    assertArrayEquals(
+        inMemoryWorldState2.getTrieLog(blockHeader1.getHash()).get(),
+        inMemoryWorldState.getTrieLog(blockHeader1.getHash()).get());
+  }
+
+  @Test
+  public void importedMultipleTriesMatchDbTrieLogs(final @TempDir Path dataDir) throws IOException {
+    StorageProvider tempStorageProvider = new InMemoryKeyValueStorageProvider();
+    BonsaiWorldStateKeyValueStorage inMemoryWorldState2 =
+        new BonsaiWorldStateKeyValueStorage(
+            tempStorageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
+
+    TrieLogHelper.exportTrieLog(
+        inMemoryWorldState,
+        List.of(blockHeader1.getHash(), blockHeader2.getHash(), blockHeader3.getHash()),
+        dataDir.resolve("trie-log-dump"));
+
+    var trieLog =
+        TrieLogHelper.readTrieLogsAsRlpFromFile(dataDir.resolve("trie-log-dump").toString());
+    var updater = inMemoryWorldState2.updater();
+
+    trieLog.forEach((k, v) -> updater.getTrieLogStorageTransaction().put(k, v));
+
+    updater.getTrieLogStorageTransaction().commit();
+
+    assertArrayEquals(
+        inMemoryWorldState2.getTrieLog(blockHeader1.getHash()).get(),
+        inMemoryWorldState.getTrieLog(blockHeader1.getHash()).get());
+    assertArrayEquals(
+        inMemoryWorldState2.getTrieLog(blockHeader2.getHash()).get(),
+        inMemoryWorldState.getTrieLog(blockHeader2.getHash()).get());
+    assertArrayEquals(
+        inMemoryWorldState2.getTrieLog(blockHeader3.getHash()).get(),
+        inMemoryWorldState.getTrieLog(blockHeader3.getHash()).get());
+  }
 }
@@ -131,7 +131,7 @@ public class BesuControllerBuilderTest {
     when(synchronizerConfiguration.getBlockPropagationRange()).thenReturn(Range.closed(1L, 2L));
 
     lenient()
-        .when(storageProvider.createWorldStateStorage(DataStorageFormat.FOREST))
+        .when(storageProvider.createWorldStateStorage(DataStorageConfiguration.DEFAULT_CONFIG))
         .thenReturn(worldStateStorage);
     lenient()
         .when(storageProvider.createWorldStatePreimageStorage())
@@ -166,6 +166,11 @@ public class BesuControllerBuilderTest {
 
   @Test
   public void shouldDisablePruningIfBonsaiIsEnabled() {
+    DataStorageConfiguration dataStorageConfiguration =
+        ImmutableDataStorageConfiguration.builder()
+            .dataStorageFormat(DataStorageFormat.BONSAI)
+            .bonsaiMaxLayersToLoad(DataStorageConfiguration.DEFAULT_BONSAI_MAX_LAYERS_TO_LOAD)
+            .build();
     BonsaiWorldState mockWorldState = mock(BonsaiWorldState.class, Answers.RETURNS_DEEP_STUBS);
     doReturn(worldStateArchive)
         .when(besuControllerBuilder)
@@ -173,15 +178,9 @@ public class BesuControllerBuilderTest {
             any(WorldStateStorage.class), any(Blockchain.class), any(CachedMerkleTrieLoader.class));
     doReturn(mockWorldState).when(worldStateArchive).getMutable();
 
-    when(storageProvider.createWorldStateStorage(DataStorageFormat.BONSAI))
+    when(storageProvider.createWorldStateStorage(dataStorageConfiguration))
         .thenReturn(bonsaiWorldStateStorage);
-    besuControllerBuilder
-        .isPruningEnabled(true)
-        .dataStorageConfiguration(
-            ImmutableDataStorageConfiguration.builder()
-                .dataStorageFormat(DataStorageFormat.BONSAI)
-                .bonsaiMaxLayersToLoad(DataStorageConfiguration.DEFAULT_BONSAI_MAX_LAYERS_TO_LOAD)
-                .build());
+    besuControllerBuilder.isPruningEnabled(true).dataStorageConfiguration(dataStorageConfiguration);
     besuControllerBuilder.build();
 
     verify(storageProvider, never())
@@ -52,7 +52,7 @@ import org.hyperledger.besu.ethereum.p2p.config.NetworkingConfiguration;
 import org.hyperledger.besu.ethereum.storage.StorageProvider;
 import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueStoragePrefixedKeyBlockchainStorage;
 import org.hyperledger.besu.ethereum.storage.keyvalue.VariablesKeyValueStorage;
-import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateArchive;
 import org.hyperledger.besu.ethereum.worldstate.WorldStatePreimageStorage;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
@@ -145,7 +145,7 @@ public class MergeBesuControllerBuilderTest {
         .thenReturn(Range.closed(1L, 2L));
 
     lenient()
-        .when(storageProvider.createWorldStateStorage(DataStorageFormat.FOREST))
+        .when(storageProvider.createWorldStateStorage(DataStorageConfiguration.DEFAULT_CONFIG))
         .thenReturn(worldStateStorage);
     lenient()
         .when(storageProvider.createWorldStatePreimageStorage())
@@ -48,7 +48,7 @@ import org.hyperledger.besu.ethereum.p2p.config.NetworkingConfiguration;
 import org.hyperledger.besu.ethereum.storage.StorageProvider;
 import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueStoragePrefixedKeyBlockchainStorage;
 import org.hyperledger.besu.ethereum.storage.keyvalue.VariablesKeyValueStorage;
-import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.WorldStatePreimageStorage;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
 import org.hyperledger.besu.evm.internal.EvmConfiguration;
@@ -114,7 +114,7 @@ public class QbftBesuControllerBuilderTest {
             new VariablesKeyValueStorage(new InMemoryKeyValueStorage()),
             new MainnetBlockHeaderFunctions()));
     lenient()
-        .when(storageProvider.createWorldStateStorage(DataStorageFormat.FOREST))
+        .when(storageProvider.createWorldStateStorage(DataStorageConfiguration.DEFAULT_CONFIG))
         .thenReturn(worldStateStorage);
     lenient().when(worldStateStorage.isWorldStateAvailable(any(), any())).thenReturn(true);
     lenient().when(worldStateStorage.updater()).thenReturn(mock(WorldStateStorage.Updater.class));
@@ -142,6 +142,8 @@ min-priority-fee=0
 min-block-occupancy-ratio=0.7
 miner-stratum-host="0.0.0.0"
 miner-stratum-port=8008
+block-txs-selection-max-time=5000
+poa-block-txs-selection-max-time=75
 Xminer-remote-sealers-limit=1000
 Xminer-remote-sealers-hashrate-ttl=10
 Xpos-block-creation-max-time=5
@@ -169,7 +171,6 @@ privacy-enabled=false
 privacy-multi-tenancy-enabled=true
 privacy-marker-transaction-signing-key-file="./signerKey"
 privacy-enable-database-migration=false
-privacy-onchain-groups-enabled=false
 privacy-flexible-groups-enabled=false
 
 # Transaction Pool
@@ -22,6 +22,7 @@ import org.hyperledger.besu.ethereum.api.jsonrpc.internal.methods.JsonRpcMethod;
 import java.util.Collection;
 import java.util.Map;
 import java.util.Optional;
+import java.util.function.Function;
 import java.util.stream.Collectors;
 
 import io.opentelemetry.api.trace.Tracer;
@@ -35,7 +36,8 @@ public class HandlerFactory {
     assert methods != null && globalOptions != null;
     return TimeoutHandler.handler(
         Optional.of(globalOptions),
-        methods.keySet().stream().collect(Collectors.toMap(String::new, ignored -> globalOptions)));
+        methods.keySet().stream()
+            .collect(Collectors.toMap(Function.identity(), ignored -> globalOptions)));
   }
 
   public static Handler<RoutingContext> authentication(
@@ -46,15 +46,15 @@ public class DebugTraceBlock implements JsonRpcMethod {
   private static final Logger LOG = LoggerFactory.getLogger(DebugTraceBlock.class);
   private final Supplier<BlockTracer> blockTracerSupplier;
   private final BlockHeaderFunctions blockHeaderFunctions;
-  private final BlockchainQueries blockchain;
+  private final BlockchainQueries blockchainQueries;
 
   public DebugTraceBlock(
       final Supplier<BlockTracer> blockTracerSupplier,
       final BlockHeaderFunctions blockHeaderFunctions,
-      final BlockchainQueries blockchain) {
+      final BlockchainQueries blockchainQueries) {
     this.blockTracerSupplier = blockTracerSupplier;
     this.blockHeaderFunctions = blockHeaderFunctions;
-    this.blockchain = blockchain;
+    this.blockchainQueries = blockchainQueries;
   }
 
   @Override
@@ -79,18 +79,17 @@ public class DebugTraceBlock implements JsonRpcMethod {
             .map(TransactionTraceParams::traceOptions)
             .orElse(TraceOptions.DEFAULT);
 
-    if (this.blockchain.blockByHash(block.getHeader().getParentHash()).isPresent()) {
+    if (this.blockchainQueries.blockByHash(block.getHeader().getParentHash()).isPresent()) {
       final Collection<DebugTraceTransactionResult> results =
           Tracer.processTracing(
-                  blockchain,
+                  blockchainQueries,
                   Optional.of(block.getHeader()),
-                  mutableWorldState -> {
-                    return blockTracerSupplier
-                        .get()
-                        .trace(mutableWorldState, block, new DebugOperationTracer(traceOptions))
-                        .map(BlockTrace::getTransactionTraces)
-                        .map(DebugTraceTransactionResult::of);
-                  })
+                  mutableWorldState ->
+                      blockTracerSupplier
+                          .get()
+                          .trace(mutableWorldState, block, new DebugOperationTracer(traceOptions))
+                          .map(BlockTrace::getTransactionTraces)
+                          .map(DebugTraceTransactionResult::of))
              .orElse(null);
       return new JsonRpcSuccessResponse(requestContext.getRequest().getId(), results);
     } else {
@@ -17,7 +17,6 @@ package org.hyperledger.besu.ethereum.api.jsonrpc;
 import static org.assertj.core.api.Assertions.assertThat;
 import static org.hyperledger.besu.ethereum.api.jsonrpc.RpcApis.DEFAULT_RPC_APIS;
 import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.spy;
 
 import org.hyperledger.besu.config.StubGenesisConfigOptions;
 import org.hyperledger.besu.ethereum.ProtocolContext;
@@ -98,38 +97,37 @@ public class JsonRpcHttpServiceHostAllowlistTest {
     supportedCapabilities.add(EthProtocol.ETH63);
 
     rpcMethods =
-        spy(
-            new JsonRpcMethodsFactory()
-                .methods(
-                    CLIENT_VERSION,
-                    CHAIN_ID,
-                    new StubGenesisConfigOptions(),
-                    peerDiscoveryMock,
-                    blockchainQueries,
-                    synchronizer,
-                    MainnetProtocolSchedule.fromConfig(
-                        new StubGenesisConfigOptions().constantinopleBlock(0).chainId(CHAIN_ID)),
-                    mock(ProtocolContext.class),
-                    mock(FilterManager.class),
-                    mock(TransactionPool.class),
-                    mock(MiningParameters.class),
-                    mock(PoWMiningCoordinator.class),
-                    new NoOpMetricsSystem(),
-                    supportedCapabilities,
-                    Optional.of(mock(AccountLocalConfigPermissioningController.class)),
-                    Optional.of(mock(NodeLocalConfigPermissioningController.class)),
-                    DEFAULT_RPC_APIS,
-                    mock(PrivacyParameters.class),
-                    mock(JsonRpcConfiguration.class),
-                    mock(WebSocketConfiguration.class),
-                    mock(MetricsConfiguration.class),
-                    natService,
-                    new HashMap<>(),
-                    folder,
-                    mock(EthPeers.class),
-                    vertx,
-                    mock(ApiConfiguration.class),
-                    Optional.empty()));
+        new JsonRpcMethodsFactory()
+            .methods(
+                CLIENT_VERSION,
+                CHAIN_ID,
+                new StubGenesisConfigOptions(),
+                peerDiscoveryMock,
+                blockchainQueries,
+                synchronizer,
+                MainnetProtocolSchedule.fromConfig(
+                    new StubGenesisConfigOptions().constantinopleBlock(0).chainId(CHAIN_ID)),
+                mock(ProtocolContext.class),
+                mock(FilterManager.class),
+                mock(TransactionPool.class),
+                mock(MiningParameters.class),
+                mock(PoWMiningCoordinator.class),
+                new NoOpMetricsSystem(),
+                supportedCapabilities,
+                Optional.of(mock(AccountLocalConfigPermissioningController.class)),
+                Optional.of(mock(NodeLocalConfigPermissioningController.class)),
+                DEFAULT_RPC_APIS,
+                mock(PrivacyParameters.class),
+                mock(JsonRpcConfiguration.class),
+                mock(WebSocketConfiguration.class),
+                mock(MetricsConfiguration.class),
+                natService,
+                new HashMap<>(),
+                folder,
+                mock(EthPeers.class),
+                vertx,
+                mock(ApiConfiguration.class),
+                Optional.empty());
     service = createJsonRpcHttpService();
     service.start().join();
@@ -19,7 +19,6 @@ import static java.util.concurrent.TimeUnit.MINUTES;
 import static org.assertj.core.api.Assertions.assertThat;
 import static org.assertj.core.util.Lists.list;
 import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.spy;
 
 import org.hyperledger.besu.config.StubGenesisConfigOptions;
 import org.hyperledger.besu.ethereum.ProtocolContext;
@@ -129,37 +128,36 @@ public class JsonRpcHttpServiceLoginTest {
         new StubGenesisConfigOptions().constantinopleBlock(0).chainId(CHAIN_ID);
 
     rpcMethods =
-        spy(
-            new JsonRpcMethodsFactory()
-                .methods(
-                    CLIENT_VERSION,
-                    CHAIN_ID,
-                    genesisConfigOptions,
-                    peerDiscoveryMock,
-                    blockchainQueries,
-                    synchronizer,
-                    MainnetProtocolSchedule.fromConfig(genesisConfigOptions),
-                    mock(ProtocolContext.class),
-                    mock(FilterManager.class),
-                    mock(TransactionPool.class),
-                    mock(MiningParameters.class),
-                    mock(PoWMiningCoordinator.class),
-                    new NoOpMetricsSystem(),
-                    supportedCapabilities,
-                    Optional.empty(),
-                    Optional.empty(),
-                    JSON_RPC_APIS,
-                    mock(PrivacyParameters.class),
-                    mock(JsonRpcConfiguration.class),
-                    mock(WebSocketConfiguration.class),
-                    mock(MetricsConfiguration.class),
-                    natService,
-                    new HashMap<>(),
-                    folder,
-                    mock(EthPeers.class),
-                    vertx,
-                    mock(ApiConfiguration.class),
-                    Optional.empty()));
+        new JsonRpcMethodsFactory()
+            .methods(
+                CLIENT_VERSION,
+                CHAIN_ID,
+                genesisConfigOptions,
+                peerDiscoveryMock,
+                blockchainQueries,
+                synchronizer,
+                MainnetProtocolSchedule.fromConfig(genesisConfigOptions),
+                mock(ProtocolContext.class),
+                mock(FilterManager.class),
+                mock(TransactionPool.class),
+                mock(MiningParameters.class),
+                mock(PoWMiningCoordinator.class),
+                new NoOpMetricsSystem(),
+                supportedCapabilities,
+                Optional.empty(),
+                Optional.empty(),
+                JSON_RPC_APIS,
+                mock(PrivacyParameters.class),
+                mock(JsonRpcConfiguration.class),
+                mock(WebSocketConfiguration.class),
+                mock(MetricsConfiguration.class),
+                natService,
+                new HashMap<>(),
+                folder,
+                mock(EthPeers.class),
+                vertx,
+                mock(ApiConfiguration.class),
+                Optional.empty());
     service = createJsonRpcHttpService();
     jwtAuth = service.authenticationService.get().getJwtAuthProvider();
     service.start().join();
@@ -17,7 +17,6 @@ package org.hyperledger.besu.ethereum.api.jsonrpc;
 import static java.util.Collections.singletonList;
 import static org.assertj.core.api.Assertions.assertThat;
 import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.spy;
 import static org.mockito.Mockito.when;
 
 import org.hyperledger.besu.config.StubGenesisConfigOptions;
@@ -201,37 +200,36 @@ public class JsonRpcHttpServiceRpcApisTest {
     supportedCapabilities.add(EthProtocol.ETH63);
 
     final Map<String, JsonRpcMethod> rpcMethods =
-        spy(
-            new JsonRpcMethodsFactory()
-                .methods(
-                    CLIENT_VERSION,
-                    NETWORK_ID,
-                    new StubGenesisConfigOptions(),
-                    mock(P2PNetwork.class),
-                    blockchainQueries,
-                    mock(Synchronizer.class),
-                    ProtocolScheduleFixture.MAINNET,
-                    mock(ProtocolContext.class),
-                    mock(FilterManager.class),
-                    mock(TransactionPool.class),
-                    mock(MiningParameters.class),
-                    mock(PoWMiningCoordinator.class),
-                    new NoOpMetricsSystem(),
-                    supportedCapabilities,
-                    Optional.of(mock(AccountLocalConfigPermissioningController.class)),
-                    Optional.of(mock(NodeLocalConfigPermissioningController.class)),
-                    config.getRpcApis(),
-                    mock(PrivacyParameters.class),
-                    mock(JsonRpcConfiguration.class),
-                    mock(WebSocketConfiguration.class),
-                    mock(MetricsConfiguration.class),
-                    natService,
-                    new HashMap<>(),
-                    folder,
-                    mock(EthPeers.class),
-                    vertx,
-                    mock(ApiConfiguration.class),
-                    Optional.empty()));
+        new JsonRpcMethodsFactory()
+            .methods(
+                CLIENT_VERSION,
+                NETWORK_ID,
+                new StubGenesisConfigOptions(),
+                mock(P2PNetwork.class),
+                blockchainQueries,
+                mock(Synchronizer.class),
+                ProtocolScheduleFixture.MAINNET,
+                mock(ProtocolContext.class),
+                mock(FilterManager.class),
+                mock(TransactionPool.class),
+                mock(MiningParameters.class),
+                mock(PoWMiningCoordinator.class),
+                new NoOpMetricsSystem(),
+                supportedCapabilities,
+                Optional.of(mock(AccountLocalConfigPermissioningController.class)),
+                Optional.of(mock(NodeLocalConfigPermissioningController.class)),
+                config.getRpcApis(),
+                mock(PrivacyParameters.class),
+                mock(JsonRpcConfiguration.class),
+                mock(WebSocketConfiguration.class),
+                mock(MetricsConfiguration.class),
+                natService,
+                new HashMap<>(),
+                folder,
+                mock(EthPeers.class),
+                vertx,
+                mock(ApiConfiguration.class),
+                Optional.empty());
     final JsonRpcHttpService jsonRpcHttpService =
         new JsonRpcHttpService(
             vertx,
@@ -302,8 +300,7 @@ public class JsonRpcHttpServiceRpcApisTest {
       final WebSocketConfiguration webSocketConfiguration,
       final P2PNetwork p2pNetwork,
       final MetricsConfiguration metricsConfiguration,
-      final NatService natService)
-      throws Exception {
+      final NatService natService) {
     final Set<Capability> supportedCapabilities = new HashSet<>();
     supportedCapabilities.add(EthProtocol.ETH62);
     supportedCapabilities.add(EthProtocol.ETH63);
@@ -311,37 +308,36 @@ public class JsonRpcHttpServiceRpcApisTest {
     webSocketConfiguration.setPort(0);
 
     final Map<String, JsonRpcMethod> rpcMethods =
-        spy(
-            new JsonRpcMethodsFactory()
-                .methods(
-                    CLIENT_VERSION,
-                    NETWORK_ID,
-                    new StubGenesisConfigOptions(),
-                    p2pNetwork,
-                    blockchainQueries,
-                    mock(Synchronizer.class),
-                    ProtocolScheduleFixture.MAINNET,
-                    mock(ProtocolContext.class),
-                    mock(FilterManager.class),
-                    mock(TransactionPool.class),
-                    mock(MiningParameters.class),
-                    mock(PoWMiningCoordinator.class),
-                    new NoOpMetricsSystem(),
-                    supportedCapabilities,
-                    Optional.of(mock(AccountLocalConfigPermissioningController.class)),
-                    Optional.of(mock(NodeLocalConfigPermissioningController.class)),
-                    jsonRpcConfiguration.getRpcApis(),
-                    mock(PrivacyParameters.class),
-                    jsonRpcConfiguration,
-                    webSocketConfiguration,
-                    metricsConfiguration,
-                    natService,
-                    new HashMap<>(),
-                    folder,
-                    mock(EthPeers.class),
-                    vertx,
-                    mock(ApiConfiguration.class),
-                    Optional.empty()));
+        new JsonRpcMethodsFactory()
+            .methods(
+                CLIENT_VERSION,
+                NETWORK_ID,
+                new StubGenesisConfigOptions(),
+                p2pNetwork,
+                blockchainQueries,
+                mock(Synchronizer.class),
+                ProtocolScheduleFixture.MAINNET,
+                mock(ProtocolContext.class),
+                mock(FilterManager.class),
+                mock(TransactionPool.class),
+                mock(MiningParameters.class),
+                mock(PoWMiningCoordinator.class),
+                new NoOpMetricsSystem(),
+                supportedCapabilities,
+                Optional.of(mock(AccountLocalConfigPermissioningController.class)),
+                Optional.of(mock(NodeLocalConfigPermissioningController.class)),
+                jsonRpcConfiguration.getRpcApis(),
+                mock(PrivacyParameters.class),
+                jsonRpcConfiguration,
+                webSocketConfiguration,
+                metricsConfiguration,
+                natService,
+                new HashMap<>(),
+                folder,
+                mock(EthPeers.class),
+                vertx,
+                mock(ApiConfiguration.class),
+                Optional.empty());
     final JsonRpcHttpService jsonRpcHttpService =
         new JsonRpcHttpService(
             vertx,
@@ -425,8 +421,7 @@ public class JsonRpcHttpServiceRpcApisTest {
         "{\"jsonrpc\":\"2.0\",\"id\":" + Json.encode(id) + ",\"method\":\"net_services\"}", JSON);
   }
 
-  public JsonRpcHttpService getJsonRpcHttpService(final boolean[] enabledNetServices)
-      throws Exception {
+  public JsonRpcHttpService getJsonRpcHttpService(final boolean[] enabledNetServices) {
 
     JsonRpcConfiguration jsonRpcConfiguration = JsonRpcConfiguration.createDefault();
     WebSocketConfiguration webSocketConfiguration = WebSocketConfiguration.createDefault();
@@ -17,10 +17,7 @@ package org.hyperledger.besu.ethereum.api.jsonrpc;
 import static org.assertj.core.api.Assertions.assertThat;
 import static org.mockito.ArgumentMatchers.any;
-import static org.mockito.ArgumentMatchers.eq;
-import static org.mockito.Mockito.doReturn;
 import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.reset;
 import static org.mockito.Mockito.verify;
 import static org.mockito.Mockito.when;
 
 import org.hyperledger.besu.datatypes.Address;
@@ -1389,65 +1386,68 @@ public class JsonRpcHttpServiceTest extends JsonRpcHttpServiceTestBase {
                 + "\"}",
             JSON);
 
-    when(rpcMethods.get(any(String.class))).thenReturn(null);
-    when(rpcMethods.containsKey(any(String.class))).thenReturn(false);
+    try (var unused = disableRpcMethod(methodName)) {
 
-    try (final Response resp = client.newCall(buildPostRequest(body)).execute()) {
-      assertThat(resp.code()).isEqualTo(200);
-      final JsonObject json = new JsonObject(resp.body().string());
-      final RpcErrorType expectedError = RpcErrorType.METHOD_NOT_ENABLED;
-      testHelper.assertValidJsonRpcError(
-          json, id, expectedError.getCode(), expectedError.getMessage());
-    }
-
-    verify(rpcMethods).containsKey(methodName);
-    verify(rpcMethods).get(methodName);
-
-    reset(rpcMethods);
+      try (final Response resp = client.newCall(buildPostRequest(body)).execute()) {
+        assertThat(resp.code()).isEqualTo(200);
+        final JsonObject json = new JsonObject(resp.body().string());
+        final RpcErrorType expectedError = RpcErrorType.METHOD_NOT_ENABLED;
+        testHelper.assertValidJsonRpcError(
+            json, id, expectedError.getCode(), expectedError.getMessage());
+      }
+    }
   }
 
   @Test
   public void exceptionallyHandleJsonSingleRequest() throws Exception {
+    final String methodName = "foo";
     final JsonRpcMethod jsonRpcMethod = mock(JsonRpcMethod.class);
-    when(jsonRpcMethod.getName()).thenReturn("foo");
+    when(jsonRpcMethod.getName()).thenReturn(methodName);
     when(jsonRpcMethod.response(any())).thenThrow(new RuntimeException("test exception"));
 
-    doReturn(jsonRpcMethod).when(rpcMethods).get("foo");
+    try (var unused = addRpcMethod(methodName, jsonRpcMethod)) {
 
-    final RequestBody body =
-        RequestBody.create("{\"jsonrpc\":\"2.0\",\"id\":\"666\",\"method\":\"foo\"}", JSON);
+      final RequestBody body =
+          RequestBody.create(
+              "{\"jsonrpc\":\"2.0\",\"id\":\"666\",\"method\":\"" + methodName + "\"}", JSON);
 
-    try (final Response resp = client.newCall(buildPostRequest(body)).execute()) {
-      assertThat(resp.code()).isEqualTo(200);
-      final JsonObject json = new JsonObject(resp.body().string());
-      final RpcErrorType expectedError = RpcErrorType.INTERNAL_ERROR;
-      testHelper.assertValidJsonRpcError(
-          json, "666", expectedError.getCode(), expectedError.getMessage());
+      try (final Response resp = client.newCall(buildPostRequest(body)).execute()) {
+        assertThat(resp.code()).isEqualTo(200);
+        final JsonObject json = new JsonObject(resp.body().string());
+        final RpcErrorType expectedError = RpcErrorType.INTERNAL_ERROR;
+        testHelper.assertValidJsonRpcError(
+            json, "666", expectedError.getCode(), expectedError.getMessage());
+      }
     }
   }
 
   @Test
   public void exceptionallyHandleJsonBatchRequest() throws Exception {
+    final String methodName = "foo";
     final JsonRpcMethod jsonRpcMethod = mock(JsonRpcMethod.class);
-    when(jsonRpcMethod.getName()).thenReturn("foo");
+    when(jsonRpcMethod.getName()).thenReturn(methodName);
    when(jsonRpcMethod.response(any())).thenThrow(new RuntimeException("test exception"));
-    doReturn(jsonRpcMethod).when(rpcMethods).get("foo");
+    try (var unused = addRpcMethod(methodName, jsonRpcMethod)) {
 
-    final RequestBody body =
-        RequestBody.create(
-            "[{\"jsonrpc\":\"2.0\",\"id\":\"000\",\"method\":\"web3_clientVersion\"},"
-                + "{\"jsonrpc\":\"2.0\",\"id\":\"111\",\"method\":\"foo\"},"
-                + "{\"jsonrpc\":\"2.0\",\"id\":\"222\",\"method\":\"net_version\"}]",
-            JSON);
-
-    try (final Response resp = client.newCall(buildPostRequest(body)).execute()) {
-      assertThat(resp.code()).isEqualTo(200);
-      final JsonArray array = new JsonArray(resp.body().string());
-      testHelper.assertValidJsonRpcResult(array.getJsonObject(0), "000");
-      final RpcErrorType expectedError = RpcErrorType.INTERNAL_ERROR;
-      testHelper.assertValidJsonRpcError(
-          array.getJsonObject(1), "111", expectedError.getCode(), expectedError.getMessage());
-      testHelper.assertValidJsonRpcResult(array.getJsonObject(2), "222");
+      final RequestBody body =
+          RequestBody.create(
+              "[{\"jsonrpc\":\"2.0\",\"id\":\"000\",\"method\":\"web3_clientVersion\"},"
+                  + "{\"jsonrpc\":\"2.0\",\"id\":\"111\",\"method\":\""
+                  + methodName
+                  + "\"},"
+                  + "{\"jsonrpc\":\"2.0\",\"id\":\"222\",\"method\":\"net_version\"}]",
|
||||
JSON);
|
||||
|
||||
try (final Response resp = client.newCall(buildPostRequest(body)).execute()) {
|
||||
assertThat(resp.code()).isEqualTo(200);
|
||||
final JsonArray array = new JsonArray(resp.body().string());
|
||||
testHelper.assertValidJsonRpcResult(array.getJsonObject(0), "000");
|
||||
final RpcErrorType expectedError = RpcErrorType.INTERNAL_ERROR;
|
||||
testHelper.assertValidJsonRpcError(
|
||||
array.getJsonObject(1), "111", expectedError.getCode(), expectedError.getMessage());
|
||||
testHelper.assertValidJsonRpcResult(array.getJsonObject(2), "222");
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -16,7 +16,6 @@
package org.hyperledger.besu.ethereum.api.jsonrpc;

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.spy;

import org.hyperledger.besu.config.StubGenesisConfigOptions;
import org.hyperledger.besu.ethereum.ProtocolContext;
@@ -72,8 +71,9 @@ public class JsonRpcHttpServiceTestBase {
  protected final JsonRpcTestHelper testHelper = new JsonRpcTestHelper();

  private static final Vertx vertx = Vertx.vertx();

  protected static Map<String, JsonRpcMethod> rpcMethods;
  private static Map<String, JsonRpcMethod> disabledRpcMethods;
  private static Set<String> addedRpcMethods;
  protected static JsonRpcHttpService service;
  protected static OkHttpClient client;
  protected static String baseUrl;
@@ -106,39 +106,41 @@ public class JsonRpcHttpServiceTestBase {
    supportedCapabilities.add(EthProtocol.ETH63);

    rpcMethods =
        spy(
            new JsonRpcMethodsFactory()
                .methods(
                    CLIENT_VERSION,
                    CHAIN_ID,
                    new StubGenesisConfigOptions(),
                    peerDiscoveryMock,
                    blockchainQueries,
                    synchronizer,
                    MainnetProtocolSchedule.fromConfig(
                        new StubGenesisConfigOptions().constantinopleBlock(0).chainId(CHAIN_ID),
                        EvmConfiguration.DEFAULT),
                    mock(ProtocolContext.class),
                    mock(FilterManager.class),
                    mock(TransactionPool.class),
                    mock(MiningParameters.class),
                    mock(PoWMiningCoordinator.class),
                    new NoOpMetricsSystem(),
                    supportedCapabilities,
                    Optional.of(mock(AccountLocalConfigPermissioningController.class)),
                    Optional.of(mock(NodeLocalConfigPermissioningController.class)),
                    JSON_RPC_APIS,
                    mock(PrivacyParameters.class),
                    mock(JsonRpcConfiguration.class),
                    mock(WebSocketConfiguration.class),
                    mock(MetricsConfiguration.class),
                    natService,
                    new HashMap<>(),
                    folder,
                    ethPeersMock,
                    vertx,
                    mock(ApiConfiguration.class),
                    Optional.empty()));
        new JsonRpcMethodsFactory()
            .methods(
                CLIENT_VERSION,
                CHAIN_ID,
                new StubGenesisConfigOptions(),
                peerDiscoveryMock,
                blockchainQueries,
                synchronizer,
                MainnetProtocolSchedule.fromConfig(
                    new StubGenesisConfigOptions().constantinopleBlock(0).chainId(CHAIN_ID),
                    EvmConfiguration.DEFAULT),
                mock(ProtocolContext.class),
                mock(FilterManager.class),
                mock(TransactionPool.class),
                mock(MiningParameters.class),
                mock(PoWMiningCoordinator.class),
                new NoOpMetricsSystem(),
                supportedCapabilities,
                Optional.of(mock(AccountLocalConfigPermissioningController.class)),
                Optional.of(mock(NodeLocalConfigPermissioningController.class)),
                JSON_RPC_APIS,
                mock(PrivacyParameters.class),
                mock(JsonRpcConfiguration.class),
                mock(WebSocketConfiguration.class),
                mock(MetricsConfiguration.class),
                natService,
                new HashMap<>(),
                folder,
                ethPeersMock,
                vertx,
                mock(ApiConfiguration.class),
                Optional.empty());
    disabledRpcMethods = new HashMap<>();
    addedRpcMethods = new HashSet<>();

    service = createJsonRpcHttpService(createLimitedJsonRpcConfig());
    service.start().join();

@@ -189,6 +191,22 @@ public class JsonRpcHttpServiceTestBase {
    return new Request.Builder().get().url(baseUrl + path).build();
  }

  protected AutoCloseable disableRpcMethod(final String methodName) {
    disabledRpcMethods.put(methodName, rpcMethods.remove(methodName));
    return () -> resetRpcMethods();
  }

  protected AutoCloseable addRpcMethod(final String methodName, final JsonRpcMethod method) {
    rpcMethods.put(methodName, method);
    addedRpcMethods.add(methodName);
    return () -> resetRpcMethods();
  }

  protected void resetRpcMethods() {
    disabledRpcMethods.forEach(rpcMethods::put);
    addedRpcMethods.forEach(rpcMethods::remove);
  }

  /** Tears down the HTTP server. */
  @AfterAll
  public static void shutdownServer() {

@@ -21,7 +21,6 @@ import static org.hyperledger.besu.ethereum.api.tls.KnownClientFileUtil.writeToK
import static org.hyperledger.besu.ethereum.api.tls.TlsClientAuthConfiguration.Builder.aTlsClientAuthConfiguration;
import static org.hyperledger.besu.ethereum.api.tls.TlsConfiguration.Builder.aTlsConfiguration;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.spy;

import org.hyperledger.besu.config.StubGenesisConfigOptions;
import org.hyperledger.besu.ethereum.ProtocolContext;
@@ -112,38 +111,37 @@ public class JsonRpcHttpServiceTlsClientAuthTest {
    supportedCapabilities.add(EthProtocol.ETH63);

    rpcMethods =
        spy(
            new JsonRpcMethodsFactory()
                .methods(
                    CLIENT_VERSION,
                    CHAIN_ID,
                    new StubGenesisConfigOptions(),
                    peerDiscoveryMock,
                    blockchainQueries,
                    synchronizer,
                    MainnetProtocolSchedule.fromConfig(
                        new StubGenesisConfigOptions().constantinopleBlock(0).chainId(CHAIN_ID)),
                    mock(ProtocolContext.class),
                    mock(FilterManager.class),
                    mock(TransactionPool.class),
                    mock(MiningParameters.class),
                    mock(PoWMiningCoordinator.class),
                    new NoOpMetricsSystem(),
                    supportedCapabilities,
                    Optional.of(mock(AccountLocalConfigPermissioningController.class)),
                    Optional.of(mock(NodeLocalConfigPermissioningController.class)),
                    DEFAULT_RPC_APIS,
                    mock(PrivacyParameters.class),
                    mock(JsonRpcConfiguration.class),
                    mock(WebSocketConfiguration.class),
                    mock(MetricsConfiguration.class),
                    natService,
                    Collections.emptyMap(),
                    folder,
                    mock(EthPeers.class),
                    vertx,
                    mock(ApiConfiguration.class),
                    Optional.empty()));
        new JsonRpcMethodsFactory()
            .methods(
                CLIENT_VERSION,
                CHAIN_ID,
                new StubGenesisConfigOptions(),
                peerDiscoveryMock,
                blockchainQueries,
                synchronizer,
                MainnetProtocolSchedule.fromConfig(
                    new StubGenesisConfigOptions().constantinopleBlock(0).chainId(CHAIN_ID)),
                mock(ProtocolContext.class),
                mock(FilterManager.class),
                mock(TransactionPool.class),
                mock(MiningParameters.class),
                mock(PoWMiningCoordinator.class),
                new NoOpMetricsSystem(),
                supportedCapabilities,
                Optional.of(mock(AccountLocalConfigPermissioningController.class)),
                Optional.of(mock(NodeLocalConfigPermissioningController.class)),
                DEFAULT_RPC_APIS,
                mock(PrivacyParameters.class),
                mock(JsonRpcConfiguration.class),
                mock(WebSocketConfiguration.class),
                mock(MetricsConfiguration.class),
                natService,
                Collections.emptyMap(),
                folder,
                mock(EthPeers.class),
                vertx,
                mock(ApiConfiguration.class),
                Optional.empty());

    System.setProperty("javax.net.ssl.trustStore", CLIENT_AS_CA_CERT.getKeyStoreFile().toString());
    System.setProperty(

@@ -20,7 +20,6 @@ import static org.hyperledger.besu.ethereum.api.tls.KnownClientFileUtil.writeToK
import static org.hyperledger.besu.ethereum.api.tls.TlsClientAuthConfiguration.Builder.aTlsClientAuthConfiguration;
import static org.hyperledger.besu.ethereum.api.tls.TlsConfiguration.Builder.aTlsConfiguration;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.spy;

import org.hyperledger.besu.config.StubGenesisConfigOptions;
import org.hyperledger.besu.ethereum.ProtocolContext;
@@ -100,38 +99,37 @@ class JsonRpcHttpServiceTlsMisconfigurationTest {
    supportedCapabilities.add(EthProtocol.ETH63);

    rpcMethods =
        spy(
            new JsonRpcMethodsFactory()
                .methods(
                    CLIENT_VERSION,
                    CHAIN_ID,
                    new StubGenesisConfigOptions(),
                    peerDiscoveryMock,
                    blockchainQueries,
                    synchronizer,
                    MainnetProtocolSchedule.fromConfig(
                        new StubGenesisConfigOptions().constantinopleBlock(0).chainId(CHAIN_ID)),
                    mock(ProtocolContext.class),
                    mock(FilterManager.class),
                    mock(TransactionPool.class),
                    mock(MiningParameters.class),
                    mock(PoWMiningCoordinator.class),
                    new NoOpMetricsSystem(),
                    supportedCapabilities,
                    Optional.of(mock(AccountLocalConfigPermissioningController.class)),
                    Optional.of(mock(NodeLocalConfigPermissioningController.class)),
                    DEFAULT_RPC_APIS,
                    mock(PrivacyParameters.class),
                    mock(JsonRpcConfiguration.class),
                    mock(WebSocketConfiguration.class),
                    mock(MetricsConfiguration.class),
                    natService,
                    Collections.emptyMap(),
                    tempDir.getRoot(),
                    mock(EthPeers.class),
                    vertx,
                    mock(ApiConfiguration.class),
                    Optional.empty()));
        new JsonRpcMethodsFactory()
            .methods(
                CLIENT_VERSION,
                CHAIN_ID,
                new StubGenesisConfigOptions(),
                peerDiscoveryMock,
                blockchainQueries,
                synchronizer,
                MainnetProtocolSchedule.fromConfig(
                    new StubGenesisConfigOptions().constantinopleBlock(0).chainId(CHAIN_ID)),
                mock(ProtocolContext.class),
                mock(FilterManager.class),
                mock(TransactionPool.class),
                mock(MiningParameters.class),
                mock(PoWMiningCoordinator.class),
                new NoOpMetricsSystem(),
                supportedCapabilities,
                Optional.of(mock(AccountLocalConfigPermissioningController.class)),
                Optional.of(mock(NodeLocalConfigPermissioningController.class)),
                DEFAULT_RPC_APIS,
                mock(PrivacyParameters.class),
                mock(JsonRpcConfiguration.class),
                mock(WebSocketConfiguration.class),
                mock(MetricsConfiguration.class),
                natService,
                Collections.emptyMap(),
                tempDir.getRoot(),
                mock(EthPeers.class),
                vertx,
                mock(ApiConfiguration.class),
                Optional.empty());
  }

  @AfterEach

@@ -20,7 +20,6 @@ import static org.assertj.core.api.Assertions.assertThat;
import static org.hyperledger.besu.ethereum.api.jsonrpc.RpcApis.DEFAULT_RPC_APIS;
import static org.hyperledger.besu.ethereum.api.tls.TlsConfiguration.Builder.aTlsConfiguration;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.spy;

import org.hyperledger.besu.config.StubGenesisConfigOptions;
import org.hyperledger.besu.ethereum.ProtocolContext;
@@ -101,38 +100,37 @@ public class JsonRpcHttpServiceTlsTest {
    supportedCapabilities.add(EthProtocol.ETH63);

    rpcMethods =
        spy(
            new JsonRpcMethodsFactory()
                .methods(
                    CLIENT_VERSION,
                    CHAIN_ID,
                    new StubGenesisConfigOptions(),
                    peerDiscoveryMock,
                    blockchainQueries,
                    synchronizer,
                    MainnetProtocolSchedule.fromConfig(
                        new StubGenesisConfigOptions().constantinopleBlock(0).chainId(CHAIN_ID)),
                    mock(ProtocolContext.class),
                    mock(FilterManager.class),
                    mock(TransactionPool.class),
                    mock(MiningParameters.class),
                    mock(PoWMiningCoordinator.class),
                    new NoOpMetricsSystem(),
                    supportedCapabilities,
                    Optional.of(mock(AccountLocalConfigPermissioningController.class)),
                    Optional.of(mock(NodeLocalConfigPermissioningController.class)),
                    DEFAULT_RPC_APIS,
                    mock(PrivacyParameters.class),
                    mock(JsonRpcConfiguration.class),
                    mock(WebSocketConfiguration.class),
                    mock(MetricsConfiguration.class),
                    natService,
                    Collections.emptyMap(),
                    folder,
                    mock(EthPeers.class),
                    vertx,
                    mock(ApiConfiguration.class),
                    Optional.empty()));
        new JsonRpcMethodsFactory()
            .methods(
                CLIENT_VERSION,
                CHAIN_ID,
                new StubGenesisConfigOptions(),
                peerDiscoveryMock,
                blockchainQueries,
                synchronizer,
                MainnetProtocolSchedule.fromConfig(
                    new StubGenesisConfigOptions().constantinopleBlock(0).chainId(CHAIN_ID)),
                mock(ProtocolContext.class),
                mock(FilterManager.class),
                mock(TransactionPool.class),
                mock(MiningParameters.class),
                mock(PoWMiningCoordinator.class),
                new NoOpMetricsSystem(),
                supportedCapabilities,
                Optional.of(mock(AccountLocalConfigPermissioningController.class)),
                Optional.of(mock(NodeLocalConfigPermissioningController.class)),
                DEFAULT_RPC_APIS,
                mock(PrivacyParameters.class),
                mock(JsonRpcConfiguration.class),
                mock(WebSocketConfiguration.class),
                mock(MetricsConfiguration.class),
                natService,
                Collections.emptyMap(),
                folder,
                mock(EthPeers.class),
                vertx,
                mock(ApiConfiguration.class),
                Optional.empty());
    service = createJsonRpcHttpService(createJsonRpcConfig());
    service.start().join();
    baseUrl = service.url();

@@ -18,11 +18,12 @@ import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.when;

import org.hyperledger.besu.ethereum.api.jsonrpc.internal.JsonRpcRequest;
import org.hyperledger.besu.ethereum.api.jsonrpc.internal.JsonRpcRequestContext;
import org.hyperledger.besu.ethereum.api.jsonrpc.internal.processor.BlockTracer;
import org.hyperledger.besu.ethereum.api.jsonrpc.internal.processor.Tracer;
import org.hyperledger.besu.ethereum.api.jsonrpc.internal.processor.TransactionTracer;
import org.hyperledger.besu.ethereum.api.jsonrpc.internal.response.JsonRpcSuccessResponse;
import org.hyperledger.besu.ethereum.api.query.BlockchainQueries;
@@ -30,28 +31,23 @@ import org.hyperledger.besu.ethereum.chain.Blockchain;
import org.hyperledger.besu.ethereum.core.Block;
import org.hyperledger.besu.ethereum.core.BlockDataGenerator;
import org.hyperledger.besu.ethereum.core.MutableWorldState;
import org.hyperledger.besu.ethereum.worldstate.WorldStateArchive;

import java.nio.file.Path;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.io.TempDir;
import org.mockito.Answers;

public class DebugStandardTraceBlockToFileTest {

  // this tempDir is deliberately static
  @TempDir private static Path folder;

  private final WorldStateArchive archive =
      mock(WorldStateArchive.class, Answers.RETURNS_DEEP_STUBS);
  private final Blockchain blockchain = mock(Blockchain.class);
  private final BlockchainQueries blockchainQueries =
      spy(new BlockchainQueries(blockchain, archive));
  private final BlockchainQueries blockchainQueries = mock(BlockchainQueries.class);
  private final TransactionTracer transactionTracer = mock(TransactionTracer.class);
  private final DebugStandardTraceBlockToFile debugStandardTraceBlockToFile =
      new DebugStandardTraceBlockToFile(() -> transactionTracer, blockchainQueries, folder);
@@ -76,20 +72,26 @@ public class DebugStandardTraceBlockToFileTest {
        new JsonRpcRequestContext(
            new JsonRpcRequest("2.0", "debug_standardTraceBlockToFile", params));

    final List<String> paths = new ArrayList<>();
    paths.add("path-1");

    when(blockchainQueries.getBlockchain()).thenReturn(blockchain);
    final List<String> paths = List.of("path-1");

    when(blockchain.getBlockByHash(block.getHash())).thenReturn(Optional.of(block));
    when(blockchain.getBlockHeader(genesis.getHash())).thenReturn(Optional.of(genesis.getHeader()));
    when(blockchainQueries.getBlockchain()).thenReturn(blockchain);

    when(blockchainQueries.getAndMapWorldState(any(), any()))
        .thenAnswer(
            invocationOnMock -> {
              Function<Tracer.TraceableState, ? extends Optional<BlockTracer>> mapper =
                  invocationOnMock.getArgument(1);
              return mapper.apply(mock(Tracer.TraceableState.class));
            });

    when(transactionTracer.traceTransactionToFile(
            any(MutableWorldState.class), eq(block.getHash()), any(), any()))
        .thenReturn(paths);
    final JsonRpcSuccessResponse response =
        (JsonRpcSuccessResponse) debugStandardTraceBlockToFile.response(request);
    final List result = (ArrayList) response.getResult();
    final List result = (List) response.getResult();

    assertThat(result.size()).isEqualTo(1);
  }

@@ -18,9 +18,8 @@ import static java.util.Arrays.asList;
import static java.util.Collections.singletonList;
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.doAnswer;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.when;

import org.hyperledger.besu.datatypes.Wei;
@@ -35,32 +34,25 @@ import org.hyperledger.besu.ethereum.api.jsonrpc.internal.response.JsonRpcSucces
import org.hyperledger.besu.ethereum.api.jsonrpc.internal.response.RpcErrorType;
import org.hyperledger.besu.ethereum.api.query.BlockWithMetadata;
import org.hyperledger.besu.ethereum.api.query.BlockchainQueries;
import org.hyperledger.besu.ethereum.chain.Blockchain;
import org.hyperledger.besu.ethereum.core.Block;
import org.hyperledger.besu.ethereum.core.BlockDataGenerator;
import org.hyperledger.besu.ethereum.debug.TraceFrame;
import org.hyperledger.besu.ethereum.mainnet.MainnetBlockHeaderFunctions;
import org.hyperledger.besu.ethereum.processing.TransactionProcessingResult;
import org.hyperledger.besu.ethereum.worldstate.WorldStateArchive;

import java.util.Collection;
import java.util.Collections;
import java.util.Optional;
import java.util.OptionalLong;
import java.util.function.Function;

import org.apache.tuweni.bytes.Bytes;
import org.junit.jupiter.api.Test;
import org.mockito.Answers;
import org.mockito.Mockito;

public class DebugTraceBlockTest {

  private final BlockTracer blockTracer = mock(BlockTracer.class);
  private final WorldStateArchive archive =
      mock(WorldStateArchive.class, Answers.RETURNS_DEEP_STUBS);
  private final Blockchain blockchain = mock(Blockchain.class);
  private final BlockchainQueries blockchainQueries =
      spy(new BlockchainQueries(blockchain, archive));
  private final BlockchainQueries blockchainQueries = mock(BlockchainQueries.class);
  private final DebugTraceBlock debugTraceBlock =
      new DebugTraceBlock(() -> blockTracer, new MainnetBlockHeaderFunctions(), blockchainQueries);

@@ -127,22 +119,25 @@ public class DebugTraceBlockTest {
    when(transaction2Trace.getResult()).thenReturn(transaction2Result);
    when(transaction1Result.getOutput()).thenReturn(Bytes.fromHexString("1234"));
    when(transaction2Result.getOutput()).thenReturn(Bytes.fromHexString("1234"));
    when(blockTracer.trace(any(Tracer.TraceableState.class), Mockito.eq(block), any()))
    when(blockTracer.trace(any(Tracer.TraceableState.class), eq(block), any()))
        .thenReturn(Optional.of(blockTrace));

    when(blockchain.getBlockHeader(parentBlock.getHash()))
        .thenReturn(Optional.of(parentBlock.getHeader()));
    doAnswer(
            invocation ->
                Optional.of(
                    new BlockWithMetadata<>(
                        parentBlock.getHeader(),
                        Collections.emptyList(),
                        Collections.emptyList(),
                        parentBlock.getHeader().getDifficulty(),
                        parentBlock.calculateSize())))
        .when(blockchainQueries)
        .blockByHash(parentBlock.getHash());
    when(blockchainQueries.blockByHash(parentBlock.getHash()))
        .thenReturn(
            Optional.of(
                new BlockWithMetadata<>(
                    parentBlock.getHeader(),
                    Collections.emptyList(),
                    Collections.emptyList(),
                    parentBlock.getHeader().getDifficulty(),
                    parentBlock.calculateSize())));
    when(blockchainQueries.getAndMapWorldState(eq(parentBlock.getHash()), any()))
        .thenAnswer(
            invocationOnMock -> {
              Function<Tracer.TraceableState, ? extends Optional<BlockTracer>> mapper =
                  invocationOnMock.getArgument(1);
              return mapper.apply(mock(Tracer.TraceableState.class));
            });

    final JsonRpcSuccessResponse response =
        (JsonRpcSuccessResponse) debugTraceBlock.response(request);

@@ -136,7 +136,7 @@ public class BlockTransactionSelector {
    this.pluginTransactionSelector = pluginTransactionSelector;
    this.pluginOperationTracer = pluginTransactionSelector.getOperationTracer();
    blockWorldStateUpdater = worldState.updater();
    blockTxsSelectionMaxTime = miningParameters.getUnstable().getBlockTxsSelectionMaxTime();
    blockTxsSelectionMaxTime = miningParameters.getBlockTxsSelectionMaxTime();
  }

  private List<AbstractTransactionSelector> createTransactionSelectors(

@@ -17,7 +17,7 @@ package org.hyperledger.besu.ethereum.blockcreation;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.entry;
import static org.awaitility.Awaitility.await;
import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;
import static org.hyperledger.besu.ethereum.core.MiningParameters.DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;
import static org.hyperledger.besu.plugin.data.TransactionSelectionResult.BLOCK_SELECTION_TIMEOUT;
import static org.hyperledger.besu.plugin.data.TransactionSelectionResult.PRIORITY_FEE_PER_GAS_BELOW_CURRENT_MIN;
import static org.hyperledger.besu.plugin.data.TransactionSelectionResult.SELECTED;
@@ -54,7 +54,6 @@ import org.hyperledger.besu.ethereum.core.BlockHeaderTestFixture;
import org.hyperledger.besu.ethereum.core.Difficulty;
import org.hyperledger.besu.ethereum.core.ImmutableMiningParameters;
import org.hyperledger.besu.ethereum.core.ImmutableMiningParameters.MutableInitValues;
import org.hyperledger.besu.ethereum.core.ImmutableMiningParameters.Unstable;
import org.hyperledger.besu.ethereum.core.InMemoryKeyValueStorageProvider;
import org.hyperledger.besu.ethereum.core.MiningParameters;
import org.hyperledger.besu.ethereum.core.MutableWorldState;
@@ -85,7 +84,7 @@ import org.hyperledger.besu.plugin.services.txselection.PluginTransactionSelecto
import org.hyperledger.besu.plugin.services.txselection.PluginTransactionSelectorFactory;
import org.hyperledger.besu.plugin.services.txselection.TransactionEvaluationContext;
import org.hyperledger.besu.services.kvstore.InMemoryKeyValueStorage;
import org.hyperledger.besu.util.number.Percentage;
import org.hyperledger.besu.util.number.PositiveNumber;

import java.math.BigInteger;
import java.time.Instant;
@@ -960,8 +959,8 @@ public abstract class AbstractBlockTransactionSelectorTest {

    final ProcessableBlockHeader blockHeader = createBlock(301_000);
    final Address miningBeneficiary = AddressHelpers.ofValue(1);
    final int poaMinBlockTime = 1;
    final long blockTxsSelectionMaxTime = 750;
    final int poaGenesisBlockPeriod = 1;
    final int blockTxsSelectionMaxTime = 750;

    final List<Transaction> transactionsToInject = new ArrayList<>(3);
    for (int i = 0; i < 2; i++) {
@@ -991,9 +990,14 @@ public abstract class AbstractBlockTransactionSelectorTest {
        createBlockSelectorAndSetupTxPool(
            isPoa
                ? createMiningParameters(
                    Wei.ZERO, MIN_OCCUPANCY_100_PERCENT, poaMinBlockTime, Percentage.fromInt(75))
                    Wei.ZERO,
                    MIN_OCCUPANCY_100_PERCENT,
                    poaGenesisBlockPeriod,
                    PositiveNumber.fromInt(75))
                : createMiningParameters(
                    Wei.ZERO, MIN_OCCUPANCY_100_PERCENT, blockTxsSelectionMaxTime),
                    Wei.ZERO,
                    MIN_OCCUPANCY_100_PERCENT,
                    PositiveNumber.fromInt(blockTxsSelectionMaxTime)),
            transactionProcessor,
            blockHeader,
            miningBeneficiary,
@@ -1180,33 +1184,32 @@
  }

  protected MiningParameters createMiningParameters(
      final Wei minGasPrice, final double minBlockOccupancyRatio, final long txsSelectionMaxTime) {
      final Wei minGasPrice,
      final double minBlockOccupancyRatio,
      final PositiveNumber txsSelectionMaxTime) {
    return ImmutableMiningParameters.builder()
        .mutableInitValues(
            MutableInitValues.builder()
                .minTransactionGasPrice(minGasPrice)
                .minBlockOccupancyRatio(minBlockOccupancyRatio)
                .build())
        .unstable(Unstable.builder().nonPoaBlockTxsSelectionMaxTime(txsSelectionMaxTime).build())
        .nonPoaBlockTxsSelectionMaxTime(txsSelectionMaxTime)
        .build();
  }

  protected MiningParameters createMiningParameters(
      final Wei minGasPrice,
      final double minBlockOccupancyRatio,
      final int minBlockTime,
      final Percentage minBlockTimePercentage) {
      final int genesisBlockPeriodSeconds,
      final PositiveNumber minBlockTimePercentage) {
    return ImmutableMiningParameters.builder()
        .mutableInitValues(
            MutableInitValues.builder()
                .minTransactionGasPrice(minGasPrice)
                .minBlockOccupancyRatio(minBlockOccupancyRatio)
                .build())
        .unstable(
            Unstable.builder()
                .minBlockTime(minBlockTime)
                .poaBlockTxsSelectionMaxTime(minBlockTimePercentage)
                .build())
        .genesisBlockPeriodSeconds(genesisBlockPeriodSeconds)
        .poaBlockTxsSelectionMaxTime(minBlockTimePercentage)
        .build();
  }


@@ -16,7 +16,7 @@ package org.hyperledger.besu.ethereum.blockcreation;

import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.entry;
import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;
import static org.hyperledger.besu.ethereum.core.MiningParameters.DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;
import static org.mockito.Mockito.mock;

import org.hyperledger.besu.config.GenesisConfigFile;

@@ -15,6 +15,7 @@
package org.hyperledger.besu.ethereum.chain;

import static java.util.Collections.emptyList;
import static org.hyperledger.besu.ethereum.trie.common.GenesisWorldStateProvider.createGenesisWorldState;

import org.hyperledger.besu.config.GenesisAllocation;
import org.hyperledger.besu.config.GenesisConfigFile;
@@ -32,14 +33,10 @@ import org.hyperledger.besu.ethereum.core.MutableWorldState;
import org.hyperledger.besu.ethereum.core.Withdrawal;
import org.hyperledger.besu.ethereum.mainnet.ProtocolSchedule;
import org.hyperledger.besu.ethereum.mainnet.ScheduleBasedBlockHeaderFunctions;
import org.hyperledger.besu.ethereum.storage.keyvalue.WorldStatePreimageKeyValueStorage;
import org.hyperledger.besu.ethereum.trie.forest.storage.ForestWorldStateKeyValueStorage;
import org.hyperledger.besu.ethereum.trie.forest.worldview.ForestMutableWorldState;
import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
import org.hyperledger.besu.evm.account.MutableAccount;
import org.hyperledger.besu.evm.internal.EvmConfiguration;
import org.hyperledger.besu.evm.log.LogsBloomFilter;
import org.hyperledger.besu.evm.worldstate.WorldUpdater;
import org.hyperledger.besu.services.kvstore.InMemoryKeyValueStorage;

import java.math.BigInteger;
import java.util.HashMap;
@@ -77,6 +74,21 @@ public final class GenesisState {
    return fromConfig(GenesisConfigFile.fromConfig(json), protocolSchedule);
  }

  /**
   * Construct a {@link GenesisState} from a JSON string.
   *
   * @param dataStorageFormat A {@link DataStorageFormat} describing the storage format to use
   * @param json A JSON string describing the genesis block
   * @param protocolSchedule A protocol Schedule associated with
   * @return A new {@link GenesisState}.
   */
  public static GenesisState fromJson(
      final DataStorageFormat dataStorageFormat,
      final String json,
      final ProtocolSchedule protocolSchedule) {
    return fromConfig(dataStorageFormat, GenesisConfigFile.fromConfig(json), protocolSchedule);
  }

  /**
   * Construct a {@link GenesisState} from a JSON object.
   *
@@ -86,10 +98,28 @@
   */
  public static GenesisState fromConfig(
      final GenesisConfigFile config, final ProtocolSchedule protocolSchedule) {
    return fromConfig(DataStorageFormat.FOREST, config, protocolSchedule);
  }

  /**
   * Construct a {@link GenesisState} from a JSON object.
   *
   * @param dataStorageFormat A {@link DataStorageFormat} describing the storage format to use
   * @param config A {@link GenesisConfigFile} describing the genesis block.
   * @param protocolSchedule A protocol Schedule associated with
   * @return A new {@link GenesisState}.
   */
  public static GenesisState fromConfig(
      final DataStorageFormat dataStorageFormat,
      final GenesisConfigFile config,
      final ProtocolSchedule protocolSchedule) {
    final List<GenesisAccount> genesisAccounts = parseAllocations(config).toList();
    final Block block =
        new Block(
            buildHeader(config, calculateGenesisStateHash(genesisAccounts), protocolSchedule),
            buildHeader(
                config,
                calculateGenesisStateHash(dataStorageFormat, genesisAccounts),
                protocolSchedule),
            buildBody(config));
    return new GenesisState(block, genesisAccounts);
  }
@@ -133,15 +163,14 @@
    target.persist(rootHeader);
  }

  private static Hash calculateGenesisStateHash(final List<GenesisAccount> genesisAccounts) {
    final ForestWorldStateKeyValueStorage stateStorage =
        new ForestWorldStateKeyValueStorage(new InMemoryKeyValueStorage());
    final WorldStatePreimageKeyValueStorage preimageStorage =
|
||||
new WorldStatePreimageKeyValueStorage(new InMemoryKeyValueStorage());
|
||||
final MutableWorldState worldState =
|
||||
new ForestMutableWorldState(stateStorage, preimageStorage, EvmConfiguration.DEFAULT);
|
||||
writeAccountsTo(worldState, genesisAccounts, null);
|
||||
return worldState.rootHash();
|
||||
private static Hash calculateGenesisStateHash(
|
||||
final DataStorageFormat dataStorageFormat, final List<GenesisAccount> genesisAccounts) {
|
||||
try (var worldState = createGenesisWorldState(dataStorageFormat)) {
|
||||
writeAccountsTo(worldState, genesisAccounts, null);
|
||||
return worldState.rootHash();
|
||||
} catch (Exception e) {
|
||||
throw new RuntimeException(e);
|
||||
}
|
||||
}
|
||||
|
||||
private static BlockHeader buildHeader(
|
||||
|
||||
@@ -16,7 +16,7 @@ package org.hyperledger.besu.ethereum.core;
 
 import org.hyperledger.besu.datatypes.Address;
 import org.hyperledger.besu.datatypes.Wei;
-import org.hyperledger.besu.util.number.Percentage;
+import org.hyperledger.besu.util.number.PositiveNumber;
 
 import java.time.Duration;
 import java.util.Objects;
@@ -32,6 +32,10 @@ import org.immutables.value.Value;
 @Value.Immutable
 @Value.Enclosing
 public abstract class MiningParameters {
+  public static final PositiveNumber DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME =
+      PositiveNumber.fromInt((int) Duration.ofSeconds(5).toMillis());
+  public static final PositiveNumber DEFAULT_POA_BLOCK_TXS_SELECTION_MAX_TIME =
+      PositiveNumber.fromInt(75);
   public static final MiningParameters MINING_DISABLED =
       ImmutableMiningParameters.builder()
           .mutableInitValues(
@@ -130,6 +134,28 @@ public abstract class MiningParameters {
     return 8008;
   }
 
+  @Value.Default
+  public PositiveNumber getNonPoaBlockTxsSelectionMaxTime() {
+    return DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;
+  }
+
+  @Value.Default
+  public PositiveNumber getPoaBlockTxsSelectionMaxTime() {
+    return DEFAULT_POA_BLOCK_TXS_SELECTION_MAX_TIME;
+  }
+
+  public abstract OptionalInt getGenesisBlockPeriodSeconds();
+
+  @Value.Derived
+  public long getBlockTxsSelectionMaxTime() {
+    if (getGenesisBlockPeriodSeconds().isPresent()) {
+      return (TimeUnit.SECONDS.toMillis(getGenesisBlockPeriodSeconds().getAsInt())
+              * getPoaBlockTxsSelectionMaxTime().getValue())
+          / 100;
+    }
+    return getNonPoaBlockTxsSelectionMaxTime().getValue();
+  }
+
   @Value.Default
   protected MutableRuntimeValues getMutableRuntimeValues() {
     return new MutableRuntimeValues(getMutableInitValues());
@@ -266,8 +292,6 @@ public abstract class MiningParameters {
     int DEFAULT_MAX_OMMERS_DEPTH = 8;
     long DEFAULT_POS_BLOCK_CREATION_MAX_TIME = Duration.ofSeconds(12).toMillis();
     long DEFAULT_POS_BLOCK_CREATION_REPETITION_MIN_DURATION = Duration.ofMillis(500).toMillis();
-    long DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME = Duration.ofSeconds(5).toMillis();
-    Percentage DEFAULT_POA_BLOCK_TXS_SELECTION_MAX_TIME = Percentage.fromInt(75);
 
     MiningParameters.Unstable DEFAULT = ImmutableMiningParameters.Unstable.builder().build();
 
@@ -305,27 +329,5 @@ public abstract class MiningParameters {
     default String getStratumExtranonce() {
       return "080c";
     }
-
-    @Value.Default
-    default long getNonPoaBlockTxsSelectionMaxTime() {
-      return DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;
-    }
-
-    @Value.Default
-    default Percentage getPoaBlockTxsSelectionMaxTime() {
-      return DEFAULT_POA_BLOCK_TXS_SELECTION_MAX_TIME;
-    }
-
-    OptionalInt getMinBlockTime();
-
-    @Value.Derived
-    default long getBlockTxsSelectionMaxTime() {
-      if (getMinBlockTime().isPresent()) {
-        return (TimeUnit.SECONDS.toMillis(getMinBlockTime().getAsInt())
-                * getPoaBlockTxsSelectionMaxTime().getValue())
-            / 100;
-      }
-      return getNonPoaBlockTxsSelectionMaxTime();
-    }
   }
 }
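The MiningParameters hunks above move the transaction-selection time cap to a stable API: a fixed 5-second budget for PoS/PoW chains, and 75% of the genesis block period for PoA chains. A standalone sketch of that rule follows; the class and method names here are illustrative for this sketch only, not the Besu API.

```java
import java.util.OptionalInt;
import java.util.concurrent.TimeUnit;

// Illustrative re-implementation of getBlockTxsSelectionMaxTime from the diff above.
public class SelectionTimeSketch {
    static final long DEFAULT_NON_POA_MAX_TIME_MILLIS = 5_000L; // 5 s for PoS/PoW
    static final int DEFAULT_POA_PERCENTAGE = 75;               // 75% of the block period

    static long blockTxsSelectionMaxTime(final OptionalInt genesisBlockPeriodSeconds) {
        if (genesisBlockPeriodSeconds.isPresent()) {
            // PoA: a percentage of the block period defined in the genesis file
            return TimeUnit.SECONDS.toMillis(genesisBlockPeriodSeconds.getAsInt())
                    * DEFAULT_POA_PERCENTAGE
                    / 100;
        }
        // PoS/PoW: fixed 5-second budget
        return DEFAULT_NON_POA_MAX_TIME_MILLIS;
    }

    public static void main(String[] args) {
        // A 4-second PoA block period yields a 3000 ms selection budget.
        System.out.println(blockTxsSelectionMaxTime(OptionalInt.of(4)));   // 3000
        System.out.println(blockTxsSelectionMaxTime(OptionalInt.empty())); // 5000
    }
}
```

This matches the breaking-change note in the changelog: networks whose transactions routinely exceed these budgets must tune `block-txs-selection-max-time` or `poa-block-txs-selection-max-time`.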
@@ -19,7 +19,6 @@ import static com.google.common.base.Preconditions.checkArgument;
|
||||
/** Specification for the block gasLimit. */
|
||||
public abstract class AbstractGasLimitSpecification {
|
||||
|
||||
public static final long DEFAULT_MAX_CONSTANT_ADMUSTMENT_INCREMENT = 1024L;
|
||||
public static final long DEFAULT_MIN_GAS_LIMIT = 5000L;
|
||||
public static final long DEFAULT_MAX_GAS_LIMIT = Long.MAX_VALUE;
|
||||
|
||||
|
||||
@@ -23,16 +23,13 @@ public class FrontierTargetingGasLimitCalculator extends AbstractGasLimitSpecifi
     implements GasLimitCalculator {
   private static final Logger LOG =
       LoggerFactory.getLogger(FrontierTargetingGasLimitCalculator.class);
-  private final long maxConstantAdjustmentIncrement;
 
   public FrontierTargetingGasLimitCalculator() {
-    this(DEFAULT_MAX_CONSTANT_ADMUSTMENT_INCREMENT, DEFAULT_MIN_GAS_LIMIT, DEFAULT_MAX_GAS_LIMIT);
+    this(DEFAULT_MIN_GAS_LIMIT, DEFAULT_MAX_GAS_LIMIT);
   }
 
-  public FrontierTargetingGasLimitCalculator(
-      final long maxConstantAdjustmentIncrement, final long minGasLimit, final long maxGasLimit) {
+  public FrontierTargetingGasLimitCalculator(final long minGasLimit, final long maxGasLimit) {
     super(minGasLimit, maxGasLimit);
-    this.maxConstantAdjustmentIncrement = maxConstantAdjustmentIncrement;
   }
 
   @Override
@@ -55,8 +52,7 @@ public class FrontierTargetingGasLimitCalculator extends AbstractGasLimitSpecifi
   }
 
   private long adjustAmount(final long currentGasLimit) {
-    final long maxProportionalAdjustmentLimit = Math.max(deltaBound(currentGasLimit) - 1, 0);
-    return Math.min(maxConstantAdjustmentIncrement, maxProportionalAdjustmentLimit);
+    return Math.max(deltaBound(currentGasLimit) - 1, 0);
   }
 
   protected long safeAddAtMost(final long gasLimit) {
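The hunk above drops the fixed `DEFAULT_MAX_CONSTANT_ADMUSTMENT_INCREMENT` (1024L) cap, so the per-block adjustment is now bounded only by the proportional limit. A minimal sketch of the simplified rule; the class name and the `gasLimit / 1024` body of `deltaBound` are assumptions for illustration (the diff does not show `deltaBound` itself), matching the classic proportional bound.

```java
// Hypothetical sketch of adjustAmount after the FrontierTargetingGasLimitCalculator change.
public class GasLimitAdjustSketch {
    // Assumed proportional bound: one 1024th of the current gas limit.
    static long deltaBound(final long currentGasLimit) {
        return currentGasLimit / 1024;
    }

    // After the diff: only the proportional limit applies; the fixed 1024L
    // constant cap removed from AbstractGasLimitSpecification no longer clamps it.
    static long adjustAmount(final long currentGasLimit) {
        return Math.max(deltaBound(currentGasLimit) - 1, 0);
    }

    public static void main(String[] args) {
        System.out.println(adjustAmount(30_000_000L)); // 29295
        System.out.println(adjustAmount(1_024L));      // 0
    }
}
```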
@@ -27,21 +27,15 @@ public class LondonTargetingGasLimitCalculator extends FrontierTargetingGasLimit
 
   public LondonTargetingGasLimitCalculator(
       final long londonForkBlock, final BaseFeeMarket feeMarket) {
-    this(
-        DEFAULT_MAX_CONSTANT_ADMUSTMENT_INCREMENT,
-        DEFAULT_MIN_GAS_LIMIT,
-        DEFAULT_MAX_GAS_LIMIT,
-        londonForkBlock,
-        feeMarket);
+    this(DEFAULT_MIN_GAS_LIMIT, DEFAULT_MAX_GAS_LIMIT, londonForkBlock, feeMarket);
   }
 
   public LondonTargetingGasLimitCalculator(
-      final long maxConstantAdjustmentIncrement,
       final long minGasLimit,
       final long maxGasLimit,
       final long londonForkBlock,
       final BaseFeeMarket feeMarket) {
-    super(maxConstantAdjustmentIncrement, minGasLimit, maxGasLimit);
+    super(minGasLimit, maxGasLimit);
     this.londonForkBlock = londonForkBlock;
     this.feeMarket = feeMarket;
   }
@@ -17,7 +17,7 @@ package org.hyperledger.besu.ethereum.storage;
 import org.hyperledger.besu.ethereum.chain.BlockchainStorage;
 import org.hyperledger.besu.ethereum.chain.VariablesStorage;
 import org.hyperledger.besu.ethereum.mainnet.ProtocolSchedule;
-import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.WorldStatePreimageStorage;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
 import org.hyperledger.besu.plugin.services.storage.KeyValueStorage;
@@ -34,7 +34,7 @@ public interface StorageProvider extends Closeable {
   BlockchainStorage createBlockchainStorage(
       ProtocolSchedule protocolSchedule, VariablesStorage variablesStorage);
 
-  WorldStateStorage createWorldStateStorage(DataStorageFormat dataStorageFormat);
+  WorldStateStorage createWorldStateStorage(DataStorageConfiguration dataStorageFormat);
 
   WorldStatePreimageStorage createWorldStatePreimageStorage();
 
@@ -21,6 +21,7 @@ import org.hyperledger.besu.ethereum.mainnet.ScheduleBasedBlockHeaderFunctions;
 import org.hyperledger.besu.ethereum.storage.StorageProvider;
 import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
 import org.hyperledger.besu.ethereum.trie.forest.storage.ForestWorldStateKeyValueStorage;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
 import org.hyperledger.besu.ethereum.worldstate.WorldStatePreimageStorage;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
@@ -75,9 +76,10 @@ public class KeyValueStorageProvider implements StorageProvider {
   }
 
   @Override
-  public WorldStateStorage createWorldStateStorage(final DataStorageFormat dataStorageFormat) {
-    if (dataStorageFormat.equals(DataStorageFormat.BONSAI)) {
-      return new BonsaiWorldStateKeyValueStorage(this, metricsSystem);
+  public WorldStateStorage createWorldStateStorage(
+      final DataStorageConfiguration dataStorageConfiguration) {
+    if (dataStorageConfiguration.getDataStorageFormat().equals(DataStorageFormat.BONSAI)) {
+      return new BonsaiWorldStateKeyValueStorage(this, metricsSystem, dataStorageConfiguration);
     } else {
       return new ForestWorldStateKeyValueStorage(
           getStorageBySegmentIdentifier(KeyValueSegmentIdentifier.WORLD_STATE));
@@ -39,7 +39,6 @@ import org.hyperledger.besu.ethereum.worldstate.StateTrieAccountValue;
|
||||
import org.hyperledger.besu.ethereum.worldstate.WorldStateArchive;
|
||||
import org.hyperledger.besu.evm.internal.EvmConfiguration;
|
||||
import org.hyperledger.besu.evm.worldstate.WorldState;
|
||||
import org.hyperledger.besu.metrics.ObservableMetricsSystem;
|
||||
import org.hyperledger.besu.plugin.BesuContext;
|
||||
import org.hyperledger.besu.plugin.services.trielogs.TrieLog;
|
||||
|
||||
@@ -73,13 +72,11 @@ public class BonsaiWorldStateProvider implements WorldStateArchive {
|
||||
final Blockchain blockchain,
|
||||
final Optional<Long> maxLayersToLoad,
|
||||
final CachedMerkleTrieLoader cachedMerkleTrieLoader,
|
||||
final ObservableMetricsSystem metricsSystem,
|
||||
final BesuContext pluginContext,
|
||||
final EvmConfiguration evmConfiguration,
|
||||
final TrieLogPruner trieLogPruner) {
|
||||
|
||||
this.cachedWorldStorageManager =
|
||||
new CachedWorldStorageManager(this, worldStateStorage, metricsSystem);
|
||||
this.cachedWorldStorageManager = new CachedWorldStorageManager(this, worldStateStorage);
|
||||
// TODO: de-dup constructors
|
||||
this.trieLogManager =
|
||||
new TrieLogManager(
|
||||
|
||||
@@ -22,7 +22,6 @@ import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValu
|
||||
import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateLayerStorage;
|
||||
import org.hyperledger.besu.ethereum.trie.bonsai.worldview.BonsaiWorldState;
|
||||
import org.hyperledger.besu.evm.internal.EvmConfiguration;
|
||||
import org.hyperledger.besu.metrics.ObservableMetricsSystem;
|
||||
|
||||
import java.util.ArrayList;
|
||||
import java.util.Comparator;
|
||||
@@ -41,7 +40,6 @@ public class CachedWorldStorageManager
|
||||
public static final long RETAINED_LAYERS = 512; // at least 256 + typical rollbacks
|
||||
private static final Logger LOG = LoggerFactory.getLogger(CachedWorldStorageManager.class);
|
||||
private final BonsaiWorldStateProvider archive;
|
||||
private final ObservableMetricsSystem metricsSystem;
|
||||
private final EvmConfiguration evmConfiguration;
|
||||
|
||||
private final BonsaiWorldStateKeyValueStorage rootWorldStateStorage;
|
||||
@@ -51,26 +49,18 @@ public class CachedWorldStorageManager
|
||||
final BonsaiWorldStateProvider archive,
|
||||
final BonsaiWorldStateKeyValueStorage worldStateStorage,
|
||||
final Map<Bytes32, CachedBonsaiWorldView> cachedWorldStatesByHash,
|
||||
final ObservableMetricsSystem metricsSystem,
|
||||
final EvmConfiguration evmConfiguration) {
|
||||
worldStateStorage.subscribe(this);
|
||||
this.rootWorldStateStorage = worldStateStorage;
|
||||
this.cachedWorldStatesByHash = cachedWorldStatesByHash;
|
||||
this.archive = archive;
|
||||
this.metricsSystem = metricsSystem;
|
||||
this.evmConfiguration = evmConfiguration;
|
||||
}
|
||||
|
||||
public CachedWorldStorageManager(
|
||||
final BonsaiWorldStateProvider archive,
|
||||
final BonsaiWorldStateKeyValueStorage worldStateStorage,
|
||||
final ObservableMetricsSystem metricsSystem) {
|
||||
this(
|
||||
archive,
|
||||
worldStateStorage,
|
||||
new ConcurrentHashMap<>(),
|
||||
metricsSystem,
|
||||
EvmConfiguration.DEFAULT);
|
||||
final BonsaiWorldStateKeyValueStorage worldStateStorage) {
|
||||
this(archive, worldStateStorage, new ConcurrentHashMap<>(), EvmConfiguration.DEFAULT);
|
||||
}
|
||||
|
||||
public synchronized void addCachedLayer(
|
||||
@@ -92,8 +82,7 @@ public class CachedWorldStorageManager
|
||||
cachedBonsaiWorldView
|
||||
.get()
|
||||
.updateWorldStateStorage(
|
||||
new BonsaiSnapshotWorldStateKeyValueStorage(
|
||||
forWorldState.getWorldStateStorage(), metricsSystem));
|
||||
new BonsaiSnapshotWorldStateKeyValueStorage(forWorldState.getWorldStateStorage()));
|
||||
}
|
||||
} else {
|
||||
LOG.atDebug()
|
||||
@@ -106,8 +95,7 @@ public class CachedWorldStorageManager
|
||||
blockHeader.getHash(),
|
||||
new CachedBonsaiWorldView(
|
||||
blockHeader,
|
||||
new BonsaiSnapshotWorldStateKeyValueStorage(
|
||||
forWorldState.getWorldStateStorage(), metricsSystem)));
|
||||
new BonsaiSnapshotWorldStateKeyValueStorage(forWorldState.getWorldStateStorage())));
|
||||
} else {
|
||||
// otherwise, add the layer to the cache
|
||||
cachedWorldStatesByHash.put(
|
||||
|
||||
@@ -0,0 +1,65 @@
+/*
+ * Copyright Hyperledger Besu Contributors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+ * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations under the License.
+ *
+ * SPDX-License-Identifier: Apache-2.0
+ */
+package org.hyperledger.besu.ethereum.trie.bonsai.cache;
+
+import org.hyperledger.besu.datatypes.Hash;
+import org.hyperledger.besu.ethereum.core.BlockHeader;
+import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
+import org.hyperledger.besu.ethereum.trie.bonsai.worldview.BonsaiWorldState;
+
+import java.util.Optional;
+import java.util.function.Function;
+
+public class NoOpCachedWorldStorageManager extends CachedWorldStorageManager {
+
+  public NoOpCachedWorldStorageManager(
+      final BonsaiWorldStateKeyValueStorage bonsaiWorldStateKeyValueStorage) {
+    super(null, bonsaiWorldStateKeyValueStorage);
+  }
+
+  @Override
+  public synchronized void addCachedLayer(
+      final BlockHeader blockHeader,
+      final Hash worldStateRootHash,
+      final BonsaiWorldState forWorldState) {
+    // no cache
+  }
+
+  @Override
+  public boolean containWorldStateStorage(final Hash blockHash) {
+    return false;
+  }
+
+  @Override
+  public Optional<BonsaiWorldState> getWorldState(final Hash blockHash) {
+    return Optional.empty();
+  }
+
+  @Override
+  public Optional<BonsaiWorldState> getNearestWorldState(final BlockHeader blockHeader) {
+    return Optional.empty();
+  }
+
+  @Override
+  public Optional<BonsaiWorldState> getHeadWorldState(
+      final Function<Hash, Optional<BlockHeader>> hashBlockHeaderFunction) {
+    return Optional.empty();
+  }
+
+  @Override
+  public void reset() {
+    // world states are not re-used
+  }
+}
@@ -18,7 +18,6 @@ package org.hyperledger.besu.ethereum.trie.bonsai.storage;
|
||||
import org.hyperledger.besu.datatypes.Hash;
|
||||
import org.hyperledger.besu.datatypes.StorageSlotKey;
|
||||
import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage.BonsaiStorageSubscriber;
|
||||
import org.hyperledger.besu.metrics.ObservableMetricsSystem;
|
||||
import org.hyperledger.besu.plugin.services.exception.StorageException;
|
||||
import org.hyperledger.besu.plugin.services.storage.KeyValueStorage;
|
||||
import org.hyperledger.besu.plugin.services.storage.SnappableKeyValueStorage;
|
||||
@@ -43,26 +42,19 @@ public class BonsaiSnapshotWorldStateKeyValueStorage extends BonsaiWorldStateKey
|
||||
public BonsaiSnapshotWorldStateKeyValueStorage(
|
||||
final BonsaiWorldStateKeyValueStorage parentWorldStateStorage,
|
||||
final SnappedKeyValueStorage segmentedWorldStateStorage,
|
||||
final KeyValueStorage trieLogStorage,
|
||||
final ObservableMetricsSystem metricsSystem) {
|
||||
final KeyValueStorage trieLogStorage) {
|
||||
super(
|
||||
parentWorldStateStorage.flatDbMode,
|
||||
parentWorldStateStorage.flatDbStrategy,
|
||||
segmentedWorldStateStorage,
|
||||
trieLogStorage,
|
||||
metricsSystem);
|
||||
parentWorldStateStorage.flatDbStrategyProvider, segmentedWorldStateStorage, trieLogStorage);
|
||||
this.parentWorldStateStorage = parentWorldStateStorage;
|
||||
this.subscribeParentId = parentWorldStateStorage.subscribe(this);
|
||||
}
|
||||
|
||||
public BonsaiSnapshotWorldStateKeyValueStorage(
|
||||
final BonsaiWorldStateKeyValueStorage worldStateStorage,
|
||||
final ObservableMetricsSystem metricsSystem) {
|
||||
final BonsaiWorldStateKeyValueStorage worldStateStorage) {
|
||||
this(
|
||||
worldStateStorage,
|
||||
((SnappableKeyValueStorage) worldStateStorage.composedWorldStateStorage).takeSnapshot(),
|
||||
worldStateStorage.trieLogStorage,
|
||||
metricsSystem);
|
||||
worldStateStorage.trieLogStorage);
|
||||
}
|
||||
|
||||
private boolean isClosedGet() {
|
||||
@@ -78,7 +70,7 @@ public class BonsaiSnapshotWorldStateKeyValueStorage extends BonsaiWorldStateKey
|
||||
return new Updater(
|
||||
((SnappedKeyValueStorage) composedWorldStateStorage).getSnapshotTransaction(),
|
||||
trieLogStorage.startTransaction(),
|
||||
flatDbStrategy);
|
||||
getFlatDbStrategy());
|
||||
}
|
||||
|
||||
@Override
|
||||
|
||||
@@ -25,14 +25,14 @@ import org.hyperledger.besu.ethereum.storage.StorageProvider;
 import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueSegmentIdentifier;
 import org.hyperledger.besu.ethereum.trie.MerkleTrie;
 import org.hyperledger.besu.ethereum.trie.bonsai.storage.flat.FlatDbStrategy;
-import org.hyperledger.besu.ethereum.trie.bonsai.storage.flat.FullFlatDbStrategy;
-import org.hyperledger.besu.ethereum.trie.bonsai.storage.flat.PartialFlatDbStrategy;
+import org.hyperledger.besu.ethereum.trie.bonsai.storage.flat.FlatDbStrategyProvider;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
 import org.hyperledger.besu.ethereum.worldstate.FlatDbMode;
 import org.hyperledger.besu.ethereum.worldstate.StateTrieAccountValue;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
 import org.hyperledger.besu.evm.account.AccountStorageEntry;
-import org.hyperledger.besu.metrics.ObservableMetricsSystem;
+import org.hyperledger.besu.plugin.services.MetricsSystem;
 import org.hyperledger.besu.plugin.services.storage.KeyValueStorage;
 import org.hyperledger.besu.plugin.services.storage.KeyValueStorageTransaction;
 import org.hyperledger.besu.plugin.services.storage.SegmentedKeyValueStorage;
@@ -64,17 +64,11 @@ public class BonsaiWorldStateKeyValueStorage implements WorldStateStorage, AutoC
   public static final byte[] WORLD_BLOCK_HASH_KEY =
       "worldBlockHash".getBytes(StandardCharsets.UTF_8);
 
-  // 0x666C61744462537461747573
-  public static final byte[] FLAT_DB_MODE = "flatDbStatus".getBytes(StandardCharsets.UTF_8);
-
-  protected FlatDbMode flatDbMode;
-  protected FlatDbStrategy flatDbStrategy;
+  protected final FlatDbStrategyProvider flatDbStrategyProvider;
 
   protected final SegmentedKeyValueStorage composedWorldStateStorage;
   protected final KeyValueStorage trieLogStorage;
 
-  protected final ObservableMetricsSystem metricsSystem;
-
   private final AtomicBoolean shouldClose = new AtomicBoolean(false);
 
   protected final AtomicBoolean isClosed = new AtomicBoolean(false);
@@ -82,62 +76,27 @@ public class BonsaiWorldStateKeyValueStorage implements WorldStateStorage, AutoC
   protected final Subscribers<BonsaiStorageSubscriber> subscribers = Subscribers.create();
 
   public BonsaiWorldStateKeyValueStorage(
-      final StorageProvider provider, final ObservableMetricsSystem metricsSystem) {
+      final StorageProvider provider,
+      final MetricsSystem metricsSystem,
+      final DataStorageConfiguration dataStorageConfiguration) {
     this.composedWorldStateStorage =
         provider.getStorageBySegmentIdentifiers(
            List.of(
               ACCOUNT_INFO_STATE, CODE_STORAGE, ACCOUNT_STORAGE_STORAGE, TRIE_BRANCH_STORAGE));
     this.trieLogStorage =
         provider.getStorageBySegmentIdentifier(KeyValueSegmentIdentifier.TRIE_LOG_STORAGE);
-    this.metricsSystem = metricsSystem;
-    loadFlatDbStrategy();
+    this.flatDbStrategyProvider =
+        new FlatDbStrategyProvider(metricsSystem, dataStorageConfiguration);
+    flatDbStrategyProvider.loadFlatDbStrategy(composedWorldStateStorage);
   }
 
   public BonsaiWorldStateKeyValueStorage(
-      final FlatDbMode flatDbMode,
-      final FlatDbStrategy flatDbStrategy,
+      final FlatDbStrategyProvider flatDbStrategyProvider,
       final SegmentedKeyValueStorage composedWorldStateStorage,
-      final KeyValueStorage trieLogStorage,
-      final ObservableMetricsSystem metricsSystem) {
-    this.flatDbMode = flatDbMode;
-    this.flatDbStrategy = flatDbStrategy;
+      final KeyValueStorage trieLogStorage) {
+    this.flatDbStrategyProvider = flatDbStrategyProvider;
     this.composedWorldStateStorage = composedWorldStateStorage;
     this.trieLogStorage = trieLogStorage;
-    this.metricsSystem = metricsSystem;
   }
-
-  private void loadFlatDbStrategy() {
-    // derive our flatdb strategy from db or default:
-    var newFlatDbMode = deriveFlatDbStrategy();
-
-    // if flatDbMode is not loaded or has changed, reload flatDbStrategy
-    if (this.flatDbMode == null || !this.flatDbMode.equals(newFlatDbMode)) {
-      this.flatDbMode = newFlatDbMode;
-      if (flatDbMode == FlatDbMode.FULL) {
-        this.flatDbStrategy = new FullFlatDbStrategy(metricsSystem);
-      } else {
-        this.flatDbStrategy = new PartialFlatDbStrategy(metricsSystem);
-      }
-    }
-  }
-
-  public FlatDbMode deriveFlatDbStrategy() {
-    var flatDbMode =
-        FlatDbMode.fromVersion(
-            composedWorldStateStorage
-                .get(TRIE_BRANCH_STORAGE, FLAT_DB_MODE)
-                .map(Bytes::wrap)
-                .orElse(FlatDbMode.PARTIAL.getVersion()));
-    LOG.info("Bonsai flat db mode found {}", flatDbMode);
-
-    return flatDbMode;
-  }
-
-  public FlatDbStrategy getFlatDbStrategy() {
-    if (flatDbStrategy == null) {
-      loadFlatDbStrategy();
-    }
-    return flatDbStrategy;
-  }
 
   @Override
@@ -147,7 +106,7 @@ public class BonsaiWorldStateKeyValueStorage implements WorldStateStorage, AutoC
 
   @Override
   public FlatDbMode getFlatDbMode() {
-    return flatDbMode;
+    return flatDbStrategyProvider.getFlatDbMode();
   }
 
   @Override
@@ -155,12 +114,15 @@ public class BonsaiWorldStateKeyValueStorage implements WorldStateStorage, AutoC
     if (codeHash.equals(Hash.EMPTY)) {
       return Optional.of(Bytes.EMPTY);
     } else {
-      return getFlatDbStrategy().getFlatCode(codeHash, accountHash, composedWorldStateStorage);
+      return flatDbStrategyProvider
+          .getFlatDbStrategy(composedWorldStateStorage)
+          .getFlatCode(codeHash, accountHash, composedWorldStateStorage);
     }
   }
 
   public Optional<Bytes> getAccount(final Hash accountHash) {
-    return getFlatDbStrategy()
+    return flatDbStrategyProvider
+        .getFlatDbStrategy(composedWorldStateStorage)
         .getFlatAccount(
             this::getWorldStateRootHash,
             this::getAccountStateTrieNode,
@@ -243,7 +205,8 @@ public class BonsaiWorldStateKeyValueStorage implements WorldStateStorage, AutoC
       final Supplier<Optional<Hash>> storageRootSupplier,
       final Hash accountHash,
       final StorageSlotKey storageSlotKey) {
-    return getFlatDbStrategy()
+    return flatDbStrategyProvider
+        .getFlatDbStrategy(composedWorldStateStorage)
         .getFlatStorageValueByStorageSlotKey(
            this::getWorldStateRootHash,
            storageRootSupplier,
@@ -256,14 +219,16 @@ public class BonsaiWorldStateKeyValueStorage implements WorldStateStorage, AutoC
   @Override
   public Map<Bytes32, Bytes> streamFlatAccounts(
       final Bytes startKeyHash, final Bytes32 endKeyHash, final long max) {
-    return getFlatDbStrategy()
+    return flatDbStrategyProvider
+        .getFlatDbStrategy(composedWorldStateStorage)
        .streamAccountFlatDatabase(composedWorldStateStorage, startKeyHash, endKeyHash, max);
   }
 
   @Override
   public Map<Bytes32, Bytes> streamFlatStorages(
       final Hash accountHash, final Bytes startKeyHash, final Bytes32 endKeyHash, final long max) {
-    return getFlatDbStrategy()
+    return flatDbStrategyProvider
+        .getFlatDbStrategy(composedWorldStateStorage)
        .streamStorageFlatDatabase(
            composedWorldStateStorage, accountHash, startKeyHash, endKeyHash, max);
   }
@@ -288,31 +253,23 @@ public class BonsaiWorldStateKeyValueStorage implements WorldStateStorage, AutoC
   }
 
   public void upgradeToFullFlatDbMode() {
-    final SegmentedKeyValueStorageTransaction transaction =
-        composedWorldStateStorage.startTransaction();
-    // TODO: consider ARCHIVE mode
-    transaction.put(
-        TRIE_BRANCH_STORAGE, FLAT_DB_MODE, FlatDbMode.FULL.getVersion().toArrayUnsafe());
-    transaction.commit();
-    loadFlatDbStrategy(); // force reload of flat db reader strategy
+    flatDbStrategyProvider.upgradeToFullFlatDbMode(composedWorldStateStorage);
   }
 
   public void downgradeToPartialFlatDbMode() {
-    final SegmentedKeyValueStorageTransaction transaction =
-        composedWorldStateStorage.startTransaction();
-    transaction.put(
-        TRIE_BRANCH_STORAGE, FLAT_DB_MODE, FlatDbMode.PARTIAL.getVersion().toArrayUnsafe());
-    transaction.commit();
-    loadFlatDbStrategy(); // force reload of flat db reader strategy
+    flatDbStrategyProvider.downgradeToPartialFlatDbMode(composedWorldStateStorage);
   }
 
   @Override
   public void clear() {
     subscribers.forEach(BonsaiStorageSubscriber::onClearStorage);
-    getFlatDbStrategy().clearAll(composedWorldStateStorage);
+    flatDbStrategyProvider
+        .getFlatDbStrategy(composedWorldStateStorage)
+        .clearAll(composedWorldStateStorage);
     composedWorldStateStorage.clear(TRIE_BRANCH_STORAGE);
     trieLogStorage.clear();
-    loadFlatDbStrategy(); // force reload of flat db reader strategy
+    flatDbStrategyProvider.loadFlatDbStrategy(
+        composedWorldStateStorage); // force reload of flat db reader strategy
   }
 
   @Override
@@ -324,7 +281,9 @@ public class BonsaiWorldStateKeyValueStorage implements WorldStateStorage, AutoC
   @Override
   public void clearFlatDatabase() {
     subscribers.forEach(BonsaiStorageSubscriber::onClearFlatDatabaseStorage);
-    getFlatDbStrategy().resetOnResync(composedWorldStateStorage);
+    flatDbStrategyProvider
+        .getFlatDbStrategy(composedWorldStateStorage)
+        .resetOnResync(composedWorldStateStorage);
   }
 
   @Override
@@ -332,7 +291,7 @@ public class BonsaiWorldStateKeyValueStorage implements WorldStateStorage, AutoC
     return new Updater(
         composedWorldStateStorage.startTransaction(),
         trieLogStorage.startTransaction(),
-        flatDbStrategy);
+        flatDbStrategyProvider.getFlatDbStrategy(composedWorldStateStorage));
   }
 
   @Override
@@ -359,6 +318,10 @@ public class BonsaiWorldStateKeyValueStorage implements WorldStateStorage, AutoC
     throw new RuntimeException("removeNodeAddedListener not available");
   }
 
+  public FlatDbStrategy getFlatDbStrategy() {
+    return flatDbStrategyProvider.getFlatDbStrategy(composedWorldStateStorage);
+  }
+
   public interface BonsaiUpdater extends WorldStateStorage.Updater {
     BonsaiUpdater removeCode(final Hash accountHash);
@@ -17,7 +17,6 @@ package org.hyperledger.besu.ethereum.trie.bonsai.storage;
|
||||
|
||||
import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage.BonsaiStorageSubscriber;
|
||||
import org.hyperledger.besu.ethereum.worldstate.FlatDbMode;
|
||||
import org.hyperledger.besu.metrics.ObservableMetricsSystem;
|
||||
import org.hyperledger.besu.plugin.services.storage.KeyValueStorage;
|
||||
import org.hyperledger.besu.plugin.services.storage.SnappedKeyValueStorage;
|
||||
import org.hyperledger.besu.services.kvstore.LayeredKeyValueStorage;
|
||||
@@ -29,16 +28,14 @@ public class BonsaiWorldStateLayerStorage extends BonsaiSnapshotWorldStateKeyVal
|
||||
this(
|
||||
new LayeredKeyValueStorage(parent.composedWorldStateStorage),
|
||||
parent.trieLogStorage,
|
||||
parent,
|
||||
parent.metricsSystem);
|
||||
parent);
|
||||
}
|
||||
|
||||
public BonsaiWorldStateLayerStorage(
|
||||
final SnappedKeyValueStorage composedWorldStateStorage,
|
||||
final KeyValueStorage trieLogStorage,
|
||||
final BonsaiWorldStateKeyValueStorage parent,
|
||||
final ObservableMetricsSystem metricsSystem) {
|
||||
super(parent, composedWorldStateStorage, trieLogStorage, metricsSystem);
|
||||
final BonsaiWorldStateKeyValueStorage parent) {
|
||||
super(parent, composedWorldStateStorage, trieLogStorage);
|
||||
}
|
||||
|
||||
@Override
|
||||
@@ -51,7 +48,6 @@ public class BonsaiWorldStateLayerStorage extends BonsaiSnapshotWorldStateKeyVal
|
||||
return new BonsaiWorldStateLayerStorage(
|
||||
((LayeredKeyValueStorage) composedWorldStateStorage).clone(),
|
||||
trieLogStorage,
|
||||
parentWorldStateStorage,
|
||||
metricsSystem);
|
||||
parentWorldStateStorage);
|
||||
}
|
||||
}
|
||||
|
||||
@@ -0,0 +1,105 @@
/*
 * Copyright Hyperledger Besu Contributors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
 * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations under the License.
 *
 * SPDX-License-Identifier: Apache-2.0
 */

package org.hyperledger.besu.ethereum.trie.bonsai.storage.flat;

import static org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueSegmentIdentifier.TRIE_BRANCH_STORAGE;

import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.ethereum.worldstate.FlatDbMode;
import org.hyperledger.besu.plugin.services.MetricsSystem;
import org.hyperledger.besu.plugin.services.storage.SegmentedKeyValueStorage;
import org.hyperledger.besu.plugin.services.storage.SegmentedKeyValueStorageTransaction;

import java.nio.charset.StandardCharsets;

import org.apache.tuweni.bytes.Bytes;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class FlatDbStrategyProvider {
  private static final Logger LOG = LoggerFactory.getLogger(FlatDbStrategyProvider.class);

  // 0x666C61744462537461747573
  public static final byte[] FLAT_DB_MODE = "flatDbStatus".getBytes(StandardCharsets.UTF_8);
  private final MetricsSystem metricsSystem;
  protected FlatDbMode flatDbMode;
  protected FlatDbStrategy flatDbStrategy;

  public FlatDbStrategyProvider(
      final MetricsSystem metricsSystem, final DataStorageConfiguration dataStorageConfiguration) {
    this.metricsSystem = metricsSystem;
  }

  public void loadFlatDbStrategy(final SegmentedKeyValueStorage composedWorldStateStorage) {
    // derive our flatdb strategy from db or default:
    var newFlatDbMode = deriveFlatDbStrategy(composedWorldStateStorage);

    // if flatDbMode is not loaded or has changed, reload flatDbStrategy
    if (this.flatDbMode == null || !this.flatDbMode.equals(newFlatDbMode)) {
      this.flatDbMode = newFlatDbMode;
      if (flatDbMode == FlatDbMode.FULL) {
        this.flatDbStrategy = new FullFlatDbStrategy(metricsSystem);
      } else {
        this.flatDbStrategy = new PartialFlatDbStrategy(metricsSystem);
      }
    }
  }

  private FlatDbMode deriveFlatDbStrategy(
      final SegmentedKeyValueStorage composedWorldStateStorage) {
    var flatDbMode =
        FlatDbMode.fromVersion(
            composedWorldStateStorage
                .get(TRIE_BRANCH_STORAGE, FLAT_DB_MODE)
                .map(Bytes::wrap)
                .orElse(FlatDbMode.PARTIAL.getVersion()));
    LOG.info("Bonsai flat db mode found {}", flatDbMode);

    return flatDbMode;
  }

  public FlatDbStrategy getFlatDbStrategy(
      final SegmentedKeyValueStorage composedWorldStateStorage) {
    if (flatDbStrategy == null) {
      loadFlatDbStrategy(composedWorldStateStorage);
    }
    return flatDbStrategy;
  }

  public void upgradeToFullFlatDbMode(final SegmentedKeyValueStorage composedWorldStateStorage) {
    final SegmentedKeyValueStorageTransaction transaction =
        composedWorldStateStorage.startTransaction();
    // TODO: consider ARCHIVE mode
    transaction.put(
        TRIE_BRANCH_STORAGE, FLAT_DB_MODE, FlatDbMode.FULL.getVersion().toArrayUnsafe());
    transaction.commit();
    loadFlatDbStrategy(composedWorldStateStorage); // force reload of flat db reader strategy
  }

  public void downgradeToPartialFlatDbMode(
      final SegmentedKeyValueStorage composedWorldStateStorage) {
    final SegmentedKeyValueStorageTransaction transaction =
        composedWorldStateStorage.startTransaction();
    transaction.put(
        TRIE_BRANCH_STORAGE, FLAT_DB_MODE, FlatDbMode.PARTIAL.getVersion().toArrayUnsafe());
    transaction.commit();
    loadFlatDbStrategy(composedWorldStateStorage); // force reload of flat db reader strategy
  }

  public FlatDbMode getFlatDbMode() {
    return flatDbMode;
  }
}

@@ -0,0 +1,51 @@
/*
 * Copyright Hyperledger Besu Contributors.
 *
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
 * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations under the License.
 *
 * SPDX-License-Identifier: Apache-2.0
 */
package org.hyperledger.besu.ethereum.trie.bonsai.trielog;

import org.hyperledger.besu.datatypes.Hash;
import org.hyperledger.besu.ethereum.core.BlockHeader;
import org.hyperledger.besu.ethereum.trie.bonsai.worldview.BonsaiWorldState;
import org.hyperledger.besu.ethereum.trie.bonsai.worldview.BonsaiWorldStateUpdateAccumulator;
import org.hyperledger.besu.plugin.services.trielogs.TrieLog;

import java.util.Optional;

public class NoOpTrieLogManager extends TrieLogManager {

  public NoOpTrieLogManager() {
    super(null, null, 0, null, TrieLogPruner.noOpTrieLogPruner());
  }

  @Override
  public synchronized void saveTrieLog(
      final BonsaiWorldStateUpdateAccumulator localUpdater,
      final Hash forWorldStateRootHash,
      final BlockHeader forBlockHeader,
      final BonsaiWorldState forWorldState) {
    // notify trie log added observers, synchronously
    TrieLog trieLog = trieLogFactory.create(localUpdater, forBlockHeader);
    trieLogObservers.forEach(o -> o.onTrieLogAdded(new TrieLogAddedEvent(trieLog)));
  }

  @Override
  public long getMaxLayersToLoad() {
    return 0;
  }

  @Override
  public Optional<TrieLog> getTrieLogLayer(final Hash blockHash) {
    return Optional.empty();
  }
}
@@ -92,7 +92,7 @@ public class BonsaiWorldState
evmConfiguration);
}

protected BonsaiWorldState(
public BonsaiWorldState(
final BonsaiWorldStateKeyValueStorage worldStateStorage,
final CachedMerkleTrieLoader cachedMerkleTrieLoader,
final CachedWorldStorageManager cachedWorldStorageManager,

@@ -0,0 +1,91 @@
/*
 * Copyright ConsenSys AG.
 *
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
 * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations under the License.
 *
 * SPDX-License-Identifier: Apache-2.0
 *
 */

package org.hyperledger.besu.ethereum.trie.common;

import org.hyperledger.besu.ethereum.core.MutableWorldState;
import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueStorageProvider;
import org.hyperledger.besu.ethereum.storage.keyvalue.WorldStatePreimageKeyValueStorage;
import org.hyperledger.besu.ethereum.trie.bonsai.cache.CachedMerkleTrieLoader;
import org.hyperledger.besu.ethereum.trie.bonsai.cache.NoOpCachedWorldStorageManager;
import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
import org.hyperledger.besu.ethereum.trie.bonsai.trielog.NoOpTrieLogManager;
import org.hyperledger.besu.ethereum.trie.bonsai.worldview.BonsaiWorldState;
import org.hyperledger.besu.ethereum.trie.forest.storage.ForestWorldStateKeyValueStorage;
import org.hyperledger.besu.ethereum.trie.forest.worldview.ForestMutableWorldState;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
import org.hyperledger.besu.evm.internal.EvmConfiguration;
import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
import org.hyperledger.besu.services.kvstore.InMemoryKeyValueStorage;
import org.hyperledger.besu.services.kvstore.SegmentedInMemoryKeyValueStorage;

import java.util.Objects;

public class GenesisWorldStateProvider {

  /**
   * Creates a Genesis world state based on the provided data storage format.
   *
   * @param dataStorageFormat the data storage format to use
   * @return a mutable world state for the Genesis block
   */
  public static MutableWorldState createGenesisWorldState(
      final DataStorageFormat dataStorageFormat) {
    if (Objects.requireNonNull(dataStorageFormat) == DataStorageFormat.BONSAI) {
      return createGenesisBonsaiWorldState();
    } else {
      return createGenesisForestWorldState();
    }
  }

  /**
   * Creates a Genesis world state using the Bonsai data storage format.
   *
   * @return a mutable world state for the Genesis block
   */
  private static MutableWorldState createGenesisBonsaiWorldState() {
    final CachedMerkleTrieLoader cachedMerkleTrieLoader =
        new CachedMerkleTrieLoader(new NoOpMetricsSystem());
    final BonsaiWorldStateKeyValueStorage bonsaiWorldStateKeyValueStorage =
        new BonsaiWorldStateKeyValueStorage(
            new KeyValueStorageProvider(
                segmentIdentifiers -> new SegmentedInMemoryKeyValueStorage(),
                new InMemoryKeyValueStorage(),
                new NoOpMetricsSystem()),
            new NoOpMetricsSystem(),
            DataStorageConfiguration.DEFAULT_CONFIG);
    return new BonsaiWorldState(
        bonsaiWorldStateKeyValueStorage,
        cachedMerkleTrieLoader,
        new NoOpCachedWorldStorageManager(bonsaiWorldStateKeyValueStorage),
        new NoOpTrieLogManager(),
        EvmConfiguration.DEFAULT);
  }

  /**
   * Creates a Genesis world state using the Forest data storage format.
   *
   * @return a mutable world state for the Genesis block
   */
  private static MutableWorldState createGenesisForestWorldState() {
    final ForestWorldStateKeyValueStorage stateStorage =
        new ForestWorldStateKeyValueStorage(new InMemoryKeyValueStorage());
    final WorldStatePreimageKeyValueStorage preimageStorage =
        new WorldStatePreimageKeyValueStorage(new InMemoryKeyValueStorage());
    return new ForestMutableWorldState(stateStorage, preimageStorage, EvmConfiguration.DEFAULT);
  }
}
@@ -14,6 +14,8 @@
*/
package org.hyperledger.besu.ethereum.core;

import static org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration.DEFAULT_BONSAI_MAX_LAYERS_TO_LOAD;

import org.hyperledger.besu.ethereum.chain.Blockchain;
import org.hyperledger.besu.ethereum.chain.DefaultBlockchain;
import org.hyperledger.besu.ethereum.chain.MutableBlockchain;
@@ -32,7 +34,9 @@ import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogPruner;
import org.hyperledger.besu.ethereum.trie.forest.ForestWorldStateArchive;
import org.hyperledger.besu.ethereum.trie.forest.storage.ForestWorldStateKeyValueStorage;
import org.hyperledger.besu.ethereum.trie.forest.worldview.ForestMutableWorldState;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
import org.hyperledger.besu.ethereum.worldstate.ImmutableDataStorageConfiguration;
import org.hyperledger.besu.evm.internal.EvmConfiguration;
import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
import org.hyperledger.besu.services.kvstore.InMemoryKeyValueStorage;
@@ -96,13 +100,18 @@ public class InMemoryKeyValueStorageProvider extends KeyValueStorageProvider {
new InMemoryKeyValueStorageProvider();
final CachedMerkleTrieLoader cachedMerkleTrieLoader =
new CachedMerkleTrieLoader(new NoOpMetricsSystem());
final DataStorageConfiguration bonsaiDataStorageConfig =
ImmutableDataStorageConfiguration.builder()
.dataStorageFormat(DataStorageFormat.BONSAI)
.bonsaiMaxLayersToLoad(DEFAULT_BONSAI_MAX_LAYERS_TO_LOAD)
.unstable(DataStorageConfiguration.Unstable.DEFAULT)
.build();
return new BonsaiWorldStateProvider(
(BonsaiWorldStateKeyValueStorage)
inMemoryKeyValueStorageProvider.createWorldStateStorage(DataStorageFormat.BONSAI),
inMemoryKeyValueStorageProvider.createWorldStateStorage(bonsaiDataStorageConfig),
blockchain,
Optional.empty(),
cachedMerkleTrieLoader,
new NoOpMetricsSystem(),
null,
evmConfiguration,
TrieLogPruner.noOpTrieLogPruner());
@@ -111,7 +120,7 @@ public class InMemoryKeyValueStorageProvider extends KeyValueStorageProvider {
public static MutableWorldState createInMemoryWorldState() {
final InMemoryKeyValueStorageProvider provider = new InMemoryKeyValueStorageProvider();
return new ForestMutableWorldState(
provider.createWorldStateStorage(DataStorageFormat.FOREST),
provider.createWorldStateStorage(DataStorageConfiguration.DEFAULT_CONFIG),
provider.createWorldStatePreimageStorage(),
EvmConfiguration.DEFAULT);
}

@@ -44,6 +44,7 @@ import org.hyperledger.besu.ethereum.storage.StorageProvider;
import org.hyperledger.besu.ethereum.trie.bonsai.BonsaiWorldStateProvider;
import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
import org.hyperledger.besu.ethereum.trie.bonsai.worldview.BonsaiWorldState;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.ethereum.worldstate.WorldStateArchive;
import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
import org.hyperledger.besu.evm.internal.EvmConfiguration;
@@ -80,7 +81,8 @@ class BlockImportExceptionHandlingTest {
private final StorageProvider storageProvider = new InMemoryKeyValueStorageProvider();

private final WorldStateStorage worldStateStorage =
new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem());
new BonsaiWorldStateKeyValueStorage(
storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);

private final WorldStateArchive worldStateArchive =
// contains a BonsaiWorldState which we need to spy on.

File diff suppressed because one or more lines are too long
@@ -27,22 +27,6 @@ import org.junit.jupiter.api.Test;
public class TargetingGasLimitCalculatorTest {
private static final long ADJUSTMENT_FACTOR = 1024L;

@Test
public void verifyGasLimitIsIncreasedWithinLimits() {
FrontierTargetingGasLimitCalculator targetingGasLimitCalculator =
new FrontierTargetingGasLimitCalculator();
assertThat(targetingGasLimitCalculator.nextGasLimit(8_000_000L, 10_000_000L, 1L))
.isEqualTo(8_000_000L + ADJUSTMENT_FACTOR);
}

@Test
public void verifyGasLimitIsDecreasedWithinLimits() {
FrontierTargetingGasLimitCalculator targetingGasLimitCalculator =
new FrontierTargetingGasLimitCalculator();
assertThat(targetingGasLimitCalculator.nextGasLimit(12_000_000L, 10_000_000L, 1L))
.isEqualTo(12_000_000L - ADJUSTMENT_FACTOR);
}

@Test
public void verifyGasLimitReachesTarget() {
final long target = 10_000_000L;
@@ -55,6 +39,33 @@ public class TargetingGasLimitCalculatorTest {
.isEqualTo(target);
}

@Test
public void verifyAdjustmentDeltas() {
assertDeltas(20000000L, 20019530L, 19980470L);
assertDeltas(40000000L, 40039061L, 39960939L);
}

private void assertDeltas(
final long gasLimit, final long expectedIncrease, final long expectedDecrease) {
FrontierTargetingGasLimitCalculator targetingGasLimitCalculator =
new FrontierTargetingGasLimitCalculator();
// increase
assertThat(targetingGasLimitCalculator.nextGasLimit(gasLimit, gasLimit * 2, 1L))
.isEqualTo(expectedIncrease);
// decrease
assertThat(targetingGasLimitCalculator.nextGasLimit(gasLimit, 0, 1L))
.isEqualTo(expectedDecrease);
// small decrease
assertThat(targetingGasLimitCalculator.nextGasLimit(gasLimit, gasLimit - 1, 1L))
.isEqualTo(gasLimit - 1);
// small increase
assertThat(targetingGasLimitCalculator.nextGasLimit(gasLimit, gasLimit + 1, 1L))
.isEqualTo(gasLimit + 1);
// no change
assertThat(targetingGasLimitCalculator.nextGasLimit(gasLimit, gasLimit, 1L))
.isEqualTo(gasLimit);
}

@Test
public void verifyMinGasLimit() {
assertThat(AbstractGasLimitSpecification.isValidTargetGasLimit(DEFAULT_MIN_GAS_LIMIT - 1))

@@ -68,7 +68,9 @@ import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueStorageProviderBui
import org.hyperledger.besu.ethereum.trie.bonsai.cache.CachedMerkleTrieLoader;
import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogPruner;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
import org.hyperledger.besu.ethereum.worldstate.ImmutableDataStorageConfiguration;
import org.hyperledger.besu.evm.internal.EvmConfiguration;
import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
import org.hyperledger.besu.plugin.services.BesuConfiguration;
@@ -147,14 +149,19 @@ public abstract class AbstractIsolationTests {
public void createStorage() {
bonsaiWorldStateStorage =
(BonsaiWorldStateKeyValueStorage)
createKeyValueStorageProvider().createWorldStateStorage(DataStorageFormat.BONSAI);
createKeyValueStorageProvider()
.createWorldStateStorage(
ImmutableDataStorageConfiguration.builder()
.dataStorageFormat(DataStorageFormat.BONSAI)
.bonsaiMaxLayersToLoad(
DataStorageConfiguration.DEFAULT_BONSAI_MAX_LAYERS_TO_LOAD)
.build());
archive =
new BonsaiWorldStateProvider(
bonsaiWorldStateStorage,
blockchain,
Optional.of(16L),
new CachedMerkleTrieLoader(new NoOpMetricsSystem()),
new NoOpMetricsSystem(),
null,
EvmConfiguration.DEFAULT,
TrieLogPruner.noOpTrieLogPruner());

@@ -44,6 +44,7 @@ import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogLayer;
import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogManager;
import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogPruner;
import org.hyperledger.besu.ethereum.trie.bonsai.worldview.BonsaiWorldState;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.evm.internal.EvmConfiguration;
import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
import org.hyperledger.besu.plugin.services.storage.KeyValueStorage;
@@ -106,7 +107,8 @@ class BonsaiWorldStateArchiveTest {
new BonsaiWorldStateProvider(
cachedWorldStorageManager,
trieLogManager,
new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem()),
new BonsaiWorldStateKeyValueStorage(
storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG),
blockchain,
new CachedMerkleTrieLoader(new NoOpMetricsSystem()),
EvmConfiguration.DEFAULT);
@@ -119,11 +121,11 @@ class BonsaiWorldStateArchiveTest {
void testGetMutableReturnEmptyWhenLoadMoreThanLimitLayersBack() {
bonsaiWorldStateArchive =
new BonsaiWorldStateProvider(
new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem()),
new BonsaiWorldStateKeyValueStorage(
storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG),
blockchain,
Optional.of(512L),
new CachedMerkleTrieLoader(new NoOpMetricsSystem()),
new NoOpMetricsSystem(),
null,
EvmConfiguration.DEFAULT,
TrieLogPruner.noOpTrieLogPruner());
@@ -141,7 +143,8 @@ class BonsaiWorldStateArchiveTest {
new BonsaiWorldStateProvider(
cachedWorldStorageManager,
trieLogManager,
new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem()),
new BonsaiWorldStateKeyValueStorage(
storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG),
blockchain,
new CachedMerkleTrieLoader(new NoOpMetricsSystem()),
EvmConfiguration.DEFAULT);
@@ -167,7 +170,8 @@ class BonsaiWorldStateArchiveTest {
.getTrieLogLayer(any(Hash.class));

var worldStateStorage =
new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem());
new BonsaiWorldStateKeyValueStorage(
storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
bonsaiWorldStateArchive =
spy(
new BonsaiWorldStateProvider(
@@ -193,7 +197,8 @@ class BonsaiWorldStateArchiveTest {
void testGetMutableWithStorageConsistencyNotRollbackTheState() {

var worldStateStorage =
new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem());
new BonsaiWorldStateKeyValueStorage(
storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
bonsaiWorldStateArchive =
spy(
new BonsaiWorldStateProvider(
@@ -229,7 +234,8 @@ class BonsaiWorldStateArchiveTest {
.getTrieLogLayer(any(Hash.class));

var worldStateStorage =
new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem());
new BonsaiWorldStateKeyValueStorage(
storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);

bonsaiWorldStateArchive =
spy(
@@ -276,7 +282,10 @@ class BonsaiWorldStateArchiveTest {
new BonsaiWorldStateProvider(
cachedWorldStorageManager,
trieLogManager,
new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem()),
new BonsaiWorldStateKeyValueStorage(
storageProvider,
new NoOpMetricsSystem(),
DataStorageConfiguration.DEFAULT_CONFIG),
blockchain,
new CachedMerkleTrieLoader(new NoOpMetricsSystem()),
EvmConfiguration.DEFAULT));

@@ -29,6 +29,7 @@ import org.hyperledger.besu.ethereum.trie.TrieIterator;
import org.hyperledger.besu.ethereum.trie.bonsai.cache.CachedMerkleTrieLoader;
import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
import org.hyperledger.besu.ethereum.trie.patricia.StoredMerklePatriciaTrie;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.ethereum.worldstate.StateTrieAccountValue;
import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;

@@ -48,7 +49,9 @@ class CachedMerkleTrieLoaderTest {
private CachedMerkleTrieLoader merkleTrieLoader;
private final StorageProvider storageProvider = new InMemoryKeyValueStorageProvider();
private final BonsaiWorldStateKeyValueStorage inMemoryWorldState =
Mockito.spy(new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem()));
Mockito.spy(
new BonsaiWorldStateKeyValueStorage(
storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG));

final List<Address> accounts =
List.of(Address.fromHexString("0xdeadbeef"), Address.fromHexString("0xdeadbeee"));
@@ -71,7 +74,9 @@ class CachedMerkleTrieLoaderTest {

final BonsaiWorldStateKeyValueStorage emptyStorage =
new BonsaiWorldStateKeyValueStorage(
new InMemoryKeyValueStorageProvider(), new NoOpMetricsSystem());
new InMemoryKeyValueStorageProvider(),
new NoOpMetricsSystem(),
DataStorageConfiguration.DEFAULT_CONFIG);
StoredMerklePatriciaTrie<Bytes, Bytes> cachedTrie =
new StoredMerklePatriciaTrie<>(
(location, hash) ->
@@ -110,7 +115,9 @@ class CachedMerkleTrieLoaderTest {
final List<Bytes> cachedSlots = new ArrayList<>();
final BonsaiWorldStateKeyValueStorage emptyStorage =
new BonsaiWorldStateKeyValueStorage(
new InMemoryKeyValueStorageProvider(), new NoOpMetricsSystem());
new InMemoryKeyValueStorageProvider(),
new NoOpMetricsSystem(),
DataStorageConfiguration.DEFAULT_CONFIG);
final StoredMerklePatriciaTrie<Bytes, Bytes> cachedTrie =
new StoredMerklePatriciaTrie<>(
(location, hash) ->

@@ -34,6 +34,7 @@ import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogFactoryImpl;
import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogLayer;
import org.hyperledger.besu.ethereum.trie.bonsai.worldview.BonsaiWorldState;
import org.hyperledger.besu.ethereum.trie.bonsai.worldview.BonsaiWorldStateUpdateAccumulator;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.evm.account.MutableAccount;
import org.hyperledger.besu.evm.internal.EvmConfiguration;
import org.hyperledger.besu.evm.log.LogsBloomFilter;
@@ -161,7 +162,8 @@ class LogRollingTests {
final BonsaiWorldState worldState =
new BonsaiWorldState(
archive,
new BonsaiWorldStateKeyValueStorage(provider, new NoOpMetricsSystem()),
new BonsaiWorldStateKeyValueStorage(
provider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG),
EvmConfiguration.DEFAULT);
final WorldUpdater updater = worldState.updater();

@@ -174,7 +176,8 @@ class LogRollingTests {
final BonsaiWorldState secondWorldState =
new BonsaiWorldState(
secondArchive,
new BonsaiWorldStateKeyValueStorage(secondProvider, new NoOpMetricsSystem()),
new BonsaiWorldStateKeyValueStorage(
secondProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG),
EvmConfiguration.DEFAULT);
final BonsaiWorldStateUpdateAccumulator secondUpdater =
(BonsaiWorldStateUpdateAccumulator) secondWorldState.updater();
@@ -205,7 +208,8 @@ class LogRollingTests {
final BonsaiWorldState worldState =
new BonsaiWorldState(
archive,
new BonsaiWorldStateKeyValueStorage(provider, new NoOpMetricsSystem()),
new BonsaiWorldStateKeyValueStorage(
provider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG),
EvmConfiguration.DEFAULT);

final WorldUpdater updater = worldState.updater();
@@ -226,7 +230,8 @@ class LogRollingTests {
final BonsaiWorldState secondWorldState =
new BonsaiWorldState(
secondArchive,
new BonsaiWorldStateKeyValueStorage(secondProvider, new NoOpMetricsSystem()),
new BonsaiWorldStateKeyValueStorage(
secondProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG),
EvmConfiguration.DEFAULT);
final BonsaiWorldStateUpdateAccumulator secondUpdater =
(BonsaiWorldStateUpdateAccumulator) secondWorldState.updater();
@@ -258,7 +263,8 @@ class LogRollingTests {
final BonsaiWorldState worldState =
new BonsaiWorldState(
archive,
new BonsaiWorldStateKeyValueStorage(provider, new NoOpMetricsSystem()),
new BonsaiWorldStateKeyValueStorage(
provider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG),
EvmConfiguration.DEFAULT);

final WorldUpdater updater = worldState.updater();
@@ -286,7 +292,8 @@ class LogRollingTests {
final BonsaiWorldState secondWorldState =
new BonsaiWorldState(
secondArchive,
new BonsaiWorldStateKeyValueStorage(secondProvider, new NoOpMetricsSystem()),
new BonsaiWorldStateKeyValueStorage(
secondProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG),
EvmConfiguration.DEFAULT);

final WorldUpdater secondUpdater = secondWorldState.updater();

@@ -30,6 +30,7 @@ import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogFactoryImpl;
import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogLayer;
import org.hyperledger.besu.ethereum.trie.bonsai.worldview.BonsaiWorldState;
import org.hyperledger.besu.ethereum.trie.bonsai.worldview.BonsaiWorldStateUpdateAccumulator;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.evm.internal.EvmConfiguration;
import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
import org.hyperledger.besu.services.kvstore.InMemoryKeyValueStorage;
@@ -56,7 +57,8 @@ public class RollingImport {
final BonsaiWorldState bonsaiState =
new BonsaiWorldState(
archive,
new BonsaiWorldStateKeyValueStorage(provider, new NoOpMetricsSystem()),
new BonsaiWorldStateKeyValueStorage(
provider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG),
EvmConfiguration.DEFAULT);
final SegmentedInMemoryKeyValueStorage worldStateStorage =
(SegmentedInMemoryKeyValueStorage)

@@ -36,6 +36,7 @@ import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueSegmentIdentifier;
|
||||
import org.hyperledger.besu.ethereum.trie.MerkleTrie;
|
||||
import org.hyperledger.besu.ethereum.trie.StorageEntriesCollector;
|
||||
import org.hyperledger.besu.ethereum.trie.patricia.StoredMerklePatriciaTrie;
|
||||
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
|
||||
import org.hyperledger.besu.ethereum.worldstate.FlatDbMode;
|
||||
import org.hyperledger.besu.ethereum.worldstate.StateTrieAccountValue;
|
||||
import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
|
||||
@@ -452,7 +453,9 @@ public class BonsaiWorldStateKeyValueStorageTest {
|
||||
|
||||
private BonsaiWorldStateKeyValueStorage emptyStorage() {
|
||||
return new BonsaiWorldStateKeyValueStorage(
|
||||
new InMemoryKeyValueStorageProvider(), new NoOpMetricsSystem());
|
||||
new InMemoryKeyValueStorageProvider(),
|
||||
new NoOpMetricsSystem(),
|
||||
DataStorageConfiguration.DEFAULT_CONFIG);
|
||||
}
|
||||
|
||||
@Test
|
||||
@@ -487,6 +490,7 @@ public class BonsaiWorldStateKeyValueStorageTest {
|
||||
.thenReturn(mockTrieLogStorage);
|
||||
when(mockStorageProvider.getStorageBySegmentIdentifiers(any()))
|
||||
.thenReturn(mock(SegmentedKeyValueStorage.class));
|
||||
return new BonsaiWorldStateKeyValueStorage(mockStorageProvider, new NoOpMetricsSystem());
|
||||
return new BonsaiWorldStateKeyValueStorage(
|
||||
mockStorageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
|
||||
}
|
||||
}
|
||||
|
||||
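The hunks above all thread a `DataStorageConfiguration` parameter through the storage constructors, with call sites passing an explicit `DEFAULT_CONFIG` constant. A minimal JDK-only sketch of that migration pattern (all names here are illustrative, not the real Besu types):

```java
// Sketch of the pattern: a storage class gains a configuration parameter, and
// every caller now states its configuration explicitly via a DEFAULT_CONFIG
// constant instead of relying on an implicit default.
final class DataStorageConfig {
    static final DataStorageConfig DEFAULT_CONFIG = new DataStorageConfig(false);
    final boolean flatDbEnabled;

    DataStorageConfig(final boolean flatDbEnabled) {
        this.flatDbEnabled = flatDbEnabled;
    }
}

final class WorldStateStorageSketch {
    private final DataStorageConfig config;

    WorldStateStorageSketch(final DataStorageConfig config) {
        this.config = config;
    }

    boolean isFlatDbEnabled() {
        return config.flatDbEnabled;
    }
}

public class ConfigMigrationSketch {
    public static void main(String[] args) {
        // Call sites that previously used a shorter constructor now append the
        // explicit default, mirroring the diff above.
        WorldStateStorageSketch storage =
            new WorldStateStorageSketch(DataStorageConfig.DEFAULT_CONFIG);
        System.out.println(storage.isFlatDbEnabled());
    }
}
```

Making the configuration an explicit constructor argument keeps test fixtures honest: each test states which storage configuration it runs under rather than inheriting a hidden default.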
@@ -0,0 +1,89 @@
/*
* Copyright Hyperledger Besu Contributors.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
* an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
* specific language governing permissions and limitations under the License.
*
* SPDX-License-Identifier: Apache-2.0
*/

package org.hyperledger.besu.ethereum.trie.bonsai.storage.flat;

import static org.assertj.core.api.Assertions.assertThat;

import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueSegmentIdentifier;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.ethereum.worldstate.FlatDbMode;
import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
import org.hyperledger.besu.plugin.services.storage.SegmentedKeyValueStorage;
import org.hyperledger.besu.plugin.services.storage.SegmentedKeyValueStorageTransaction;
import org.hyperledger.besu.services.kvstore.SegmentedInMemoryKeyValueStorage;

import java.util.List;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.EnumSource;
import org.mockito.junit.jupiter.MockitoExtension;

@ExtendWith(MockitoExtension.class)
class FlatDbStrategyProviderTest {
private final FlatDbStrategyProvider flatDbStrategyProvider =
new FlatDbStrategyProvider(new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
private final SegmentedKeyValueStorage composedWorldStateStorage =
new SegmentedInMemoryKeyValueStorage(List.of(KeyValueSegmentIdentifier.TRIE_BRANCH_STORAGE));

@ParameterizedTest
@EnumSource(FlatDbMode.class)
void loadsFlatDbStrategyForStoredFlatDbMode(final FlatDbMode flatDbMode) {
updateFlatDbMode(flatDbMode);

flatDbStrategyProvider.loadFlatDbStrategy(composedWorldStateStorage);
assertThat(flatDbStrategyProvider.getFlatDbMode()).isEqualTo(flatDbMode);
}

@Test
void loadsPartialFlatDbStrategyWhenNoFlatDbModeStored() {
flatDbStrategyProvider.loadFlatDbStrategy(composedWorldStateStorage);
assertThat(flatDbStrategyProvider.getFlatDbMode()).isEqualTo(FlatDbMode.PARTIAL);
}

@Test
void upgradesFlatDbStrategyToFullFlatDbMode() {
updateFlatDbMode(FlatDbMode.PARTIAL);

flatDbStrategyProvider.upgradeToFullFlatDbMode(composedWorldStateStorage);
assertThat(flatDbStrategyProvider.flatDbMode).isEqualTo(FlatDbMode.FULL);
assertThat(flatDbStrategyProvider.flatDbStrategy).isNotNull();
assertThat(flatDbStrategyProvider.getFlatDbStrategy(composedWorldStateStorage))
.isInstanceOf(FullFlatDbStrategy.class);
}

@Test
void downgradesFlatDbStrategyToPartiallyFlatDbMode() {
updateFlatDbMode(FlatDbMode.FULL);

flatDbStrategyProvider.downgradeToPartialFlatDbMode(composedWorldStateStorage);
assertThat(flatDbStrategyProvider.flatDbMode).isEqualTo(FlatDbMode.PARTIAL);
assertThat(flatDbStrategyProvider.flatDbStrategy).isNotNull();
assertThat(flatDbStrategyProvider.getFlatDbStrategy(composedWorldStateStorage))
.isInstanceOf(PartialFlatDbStrategy.class);
}

private void updateFlatDbMode(final FlatDbMode flatDbMode) {
final SegmentedKeyValueStorageTransaction transaction =
composedWorldStateStorage.startTransaction();
transaction.put(
KeyValueSegmentIdentifier.TRIE_BRANCH_STORAGE,
FlatDbStrategyProvider.FLAT_DB_MODE,
flatDbMode.getVersion().toArrayUnsafe());
transaction.commit();
}
}
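The new test file above exercises a persisted mode flag: the flat-DB mode is stored as a small marker value in the world-state key-value store, loading falls back to `PARTIAL` when no marker is present, and upgrade/downgrade rewrite the marker. A JDK-only sketch of that behaviour, with a plain map standing in for the segmented key-value storage (names illustrative, not the real `FlatDbStrategyProvider`):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Sketch: a provider that persists its flat-DB mode under a well-known key and
// reloads it on startup, defaulting to PARTIAL when nothing was stored yet.
public class FlatDbModeSketch {
    enum FlatDbMode { PARTIAL, FULL }

    static final String FLAT_DB_MODE_KEY = "FLAT_DB_MODE";
    private final Map<String, FlatDbMode> store = new HashMap<>();
    private FlatDbMode flatDbMode;

    void loadFlatDbStrategy() {
        // Fall back to PARTIAL when no marker has been written.
        flatDbMode =
            Optional.ofNullable(store.get(FLAT_DB_MODE_KEY)).orElse(FlatDbMode.PARTIAL);
    }

    void upgradeToFullFlatDbMode() {
        store.put(FLAT_DB_MODE_KEY, FlatDbMode.FULL);
        flatDbMode = FlatDbMode.FULL;
    }

    void downgradeToPartialFlatDbMode() {
        store.put(FLAT_DB_MODE_KEY, FlatDbMode.PARTIAL);
        flatDbMode = FlatDbMode.PARTIAL;
    }

    public static void main(String[] args) {
        FlatDbModeSketch provider = new FlatDbModeSketch();
        provider.loadFlatDbStrategy();
        System.out.println(provider.flatDbMode); // nothing stored yet

        provider.upgradeToFullFlatDbMode();
        provider.loadFlatDbStrategy();
        System.out.println(provider.flatDbMode); // marker now says FULL
    }
}
```

Persisting the mode rather than recomputing it lets a node that was upgraded to a full flat database keep that strategy across restarts.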
@@ -38,7 +38,7 @@ import org.hyperledger.besu.ethereum.eth.sync.fastsync.worldstate.NodeDataReques
import org.hyperledger.besu.ethereum.storage.StorageProvider;
import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueSegmentIdentifier;
import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueStorageProviderBuilder;
import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.ethereum.worldstate.WorldStateArchive;
import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
import org.hyperledger.besu.metrics.ObservableMetricsSystem;
@@ -105,7 +105,8 @@ public class WorldStateDownloaderBenchmark {

final StorageProvider storageProvider =
createKeyValueStorageProvider(tempDir, tempDir.resolve("database"));
worldStateStorage = storageProvider.createWorldStateStorage(DataStorageFormat.FOREST);
worldStateStorage =
storageProvider.createWorldStateStorage(DataStorageConfiguration.DEFAULT_CONFIG);

pendingRequests = new InMemoryTasksPriorityQueues<>();
worldStateDownloader =

@@ -139,6 +139,7 @@ public class EthPeers {
"peer_limit",
"The maximum number of peers this node allows to connect",
() -> peerUpperBound);

connectedPeersCounter =
metricsSystem.createCounter(
BesuMetricCategory.PEERS, "connected_total", "Total number of peers connected");
@@ -110,7 +110,7 @@ public class EthProtocolManager implements ProtocolManager, MinedBlockObserver {

this.blockBroadcaster = new BlockBroadcaster(ethContext);

supportedCapabilities =
this.supportedCapabilities =
calculateCapabilities(synchronizerConfiguration, ethereumWireProtocolConfiguration);

// Run validators
@@ -252,11 +252,14 @@ public class EthProtocolManager implements ProtocolManager, MinedBlockObserver {
@Override
public void stop() {
if (stopped.compareAndSet(false, true)) {
LOG.info("Stopping {} Subprotocol.", getSupportedProtocol());
LOG.atInfo().setMessage("Stopping {} Subprotocol.").addArgument(getSupportedProtocol()).log();
scheduler.stop();
shutdown.countDown();
} else {
LOG.error("Attempted to stop already stopped {} Subprotocol.", getSupportedProtocol());
LOG.atInfo()
.setMessage("Attempted to stop already stopped {} Subprotocol.")
.addArgument(this::getSupportedProtocol)
.log();
}
}

@@ -264,7 +267,10 @@ public class EthProtocolManager implements ProtocolManager, MinedBlockObserver {
public void awaitStop() throws InterruptedException {
shutdown.await();
scheduler.awaitStop();
LOG.info("{} Subprotocol stopped.", getSupportedProtocol());
LOG.atInfo()
.setMessage("{} Subprotocol stopped.")
.addArgument(this::getSupportedProtocol)
.log();
}

@Override
@@ -277,8 +283,10 @@ public class EthProtocolManager implements ProtocolManager, MinedBlockObserver {
EthProtocolLogger.logProcessMessage(cap, code);
final EthPeer ethPeer = ethPeers.peer(message.getConnection());
if (ethPeer == null) {
LOG.debug(
"Ignoring message received from unknown peer connection: {}", message.getConnection());
LOG.atDebug()
.setMessage("Ignoring message received from unknown peer connection: {}")
.addArgument(message::getConnection)
.log();
return;
}

@@ -288,19 +296,24 @@ public class EthProtocolManager implements ProtocolManager, MinedBlockObserver {
return;
} else if (!ethPeer.statusHasBeenReceived()) {
// Peers are required to send status messages before any other message type
LOG.debug(
"{} requires a Status ({}) message to be sent first. Instead, received message {} (BREACH_OF_PROTOCOL). Disconnecting from {}.",
this.getClass().getSimpleName(),
EthPV62.STATUS,
code,
ethPeer);
LOG.atDebug()
.setMessage(
"{} requires a Status ({}) message to be sent first. Instead, received message {} (BREACH_OF_PROTOCOL). Disconnecting from {}.")
.addArgument(() -> this.getClass().getSimpleName())
.addArgument(EthPV62.STATUS)
.addArgument(code)
.addArgument(ethPeer::toString)
.log();
ethPeer.disconnect(DisconnectReason.BREACH_OF_PROTOCOL);
return;
}

if (this.mergePeerFilter.isPresent()) {
if (this.mergePeerFilter.get().disconnectIfGossipingBlocks(message, ethPeer)) {
LOG.debug("Post-merge disconnect: peer still gossiping blocks {}", ethPeer);
LOG.atDebug()
.setMessage("Post-merge disconnect: peer still gossiping blocks {}")
.addArgument(ethPeer::toString)
.log();
handleDisconnect(ethPeer.getConnection(), DisconnectReason.SUBPROTOCOL_TRIGGERED, false);
return;
}
@@ -333,11 +346,12 @@ public class EthProtocolManager implements ProtocolManager, MinedBlockObserver {
maybeResponseData = ethMessages.dispatch(ethMessage);
}
} catch (final RLPException e) {
LOG.debug(
"Received malformed message {} (BREACH_OF_PROTOCOL), disconnecting: {}",
messageData.getData(),
ethPeer,
e);
LOG.atDebug()
.setMessage("Received malformed message {} (BREACH_OF_PROTOCOL), disconnecting: {}, {}")
.addArgument(messageData::getData)
.addArgument(ethPeer::toString)
.addArgument(e::toString)
.log();

ethPeer.disconnect(DisconnectMessage.DisconnectReason.BREACH_OF_PROTOCOL);
}
@@ -368,23 +382,31 @@ public class EthProtocolManager implements ProtocolManager, MinedBlockObserver {
genesisHash,
latestForkId);
try {
LOG.trace("Sending status message to {} for connection {}.", peer.getId(), connection);
LOG.atTrace()
.setMessage("Sending status message to {} for connection {}.")
.addArgument(peer::getId)
.addArgument(connection::toString)
.log();
peer.send(status, getSupportedProtocol(), connection);
peer.registerStatusSent(connection);
} catch (final PeerNotConnected peerNotConnected) {
// Nothing to do.
}
LOG.trace("{}", ethPeers);
LOG.atTrace().setMessage("{}").addArgument(ethPeers::toString).log();
}

@Override
public boolean shouldConnect(final Peer peer, final boolean incoming) {
if (peer.getForkId().map(forkId -> forkIdManager.peerCheck(forkId)).orElse(true)) {
LOG.trace("ForkId OK or not available");
if (peer.getForkId().map(forkIdManager::peerCheck).orElse(true)) {
LOG.atDebug()
.setMessage("ForkId OK or not available for peer {}")
.addArgument(peer::getId)
.log();
if (ethPeers.shouldConnect(peer, incoming)) {
return true;
}
}
LOG.atDebug().setMessage("ForkId check failed for peer {}").addArgument(peer::getId).log();
return false;
}

@@ -397,11 +419,11 @@ public class EthProtocolManager implements ProtocolManager, MinedBlockObserver {
LOG.atDebug()
.setMessage("Disconnect - {} - {} - {}... - {} peers left")
.addArgument(initiatedByPeer ? "Inbound" : "Outbound")
.addArgument(reason)
.addArgument(connection.getPeer().getId().slice(0, 8))
.addArgument(ethPeers.peerCount())
.addArgument(reason::toString)
.addArgument(() -> connection.getPeer().getId().slice(0, 8))
.addArgument(ethPeers::peerCount)
.log();
LOG.trace("{}", ethPeers);
LOG.atTrace().setMessage("{}").addArgument(ethPeers::toString).log();
}
}

@@ -412,43 +434,41 @@ public class EthProtocolManager implements ProtocolManager, MinedBlockObserver {
try {
if (!status.networkId().equals(networkId)) {
LOG.atDebug()
.setMessage("Mismatched network id: {}, EthPeer {}...")
.addArgument(status.networkId())
.addArgument(peer.getShortNodeId())
.log();
LOG.atTrace()
.setMessage("Mismatched network id: {}, EthPeer {}")
.addArgument(status.networkId())
.addArgument(peer)
.setMessage("Mismatched network id: {}, peer {}")
.addArgument(status::networkId)
.addArgument(() -> getPeerOrPeerId(peer))
.log();
peer.disconnect(DisconnectReason.SUBPROTOCOL_TRIGGERED);
} else if (!forkIdManager.peerCheck(forkId) && status.protocolVersion() > 63) {
LOG.debug(
"{} has matching network id ({}), but non-matching fork id: {}",
peer,
networkId,
forkId);
LOG.atDebug()
.setMessage("{} has matching network id ({}), but non-matching fork id: {}")
.addArgument(() -> getPeerOrPeerId(peer))
.addArgument(networkId::toString)
.addArgument(forkId)
.log();
peer.disconnect(DisconnectReason.SUBPROTOCOL_TRIGGERED);
} else if (forkIdManager.peerCheck(status.genesisHash())) {
LOG.debug(
"{} has matching network id ({}), but non-matching genesis hash: {}",
peer,
networkId,
status.genesisHash());
LOG.atDebug()
.setMessage("{} has matching network id ({}), but non-matching genesis hash: {}")
.addArgument(() -> getPeerOrPeerId(peer))
.addArgument(networkId::toString)
.addArgument(status::genesisHash)
.log();
peer.disconnect(DisconnectReason.SUBPROTOCOL_TRIGGERED);
} else if (mergePeerFilter.isPresent()
&& mergePeerFilter.get().disconnectIfPoW(status, peer)) {
LOG.atDebug()
.setMessage("Post-merge disconnect: peer still PoW {}")
.addArgument(peer.getShortNodeId())
.addArgument(() -> getPeerOrPeerId(peer))
.log();
handleDisconnect(peer.getConnection(), DisconnectReason.SUBPROTOCOL_TRIGGERED, false);
} else {
LOG.debug(
"Received status message from {}: {} with connection {}",
peer,
status,
message.getConnection());
LOG.atDebug()
.setMessage("Received status message from {}: {} with connection {}")
.addArgument(peer::toString)
.addArgument(status::toString)
.addArgument(message::getConnection)
.log();
peer.registerStatusReceived(
status.bestHash(),
status.totalDifficulty(),
@@ -467,6 +487,10 @@ public class EthProtocolManager implements ProtocolManager, MinedBlockObserver {
}
}

private Object getPeerOrPeerId(final EthPeer peer) {
return LOG.isTraceEnabled() ? peer : peer.getShortNodeId();
}

@Override
public void blockMined(final Block block) {
// This assumes the block has already been included in the chain
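The EthProtocolManager hunks above migrate eager calls such as `LOG.debug("...", expensive())` to the SLF4J 2.x fluent API (`LOG.atDebug().setMessage(...).addArgument(supplier).log()`), where arguments passed as method references or lambdas are only evaluated if the level is enabled. This JDK-only sketch models that behaviour with a `Supplier`; the tiny logger here is hypothetical, not the SLF4J implementation:

```java
import java.util.function.Supplier;

// Sketch of lazy logging arguments: the supplier is only invoked when the
// level is enabled, so disabled levels cost nothing beyond the guard check.
public class LazyLoggingSketch {
    static boolean debugEnabled = false;
    static int evaluations = 0;

    static void logAtDebug(final String message, final Supplier<Object> argument) {
        if (debugEnabled) {
            // Only now is the (possibly expensive) argument computed.
            System.out.println(message.replace("{}", String.valueOf(argument.get())));
        }
    }

    static String expensiveDescription() {
        evaluations++; // stands in for an expensive toString()/getId() call
        return "peer-0xabc";
    }

    public static void main(String[] args) {
        logAtDebug("Disconnecting {}", LazyLoggingSketch::expensiveDescription);
        System.out.println(evaluations); // debug disabled: supplier never ran

        debugEnabled = true;
        logAtDebug("Disconnecting {}", LazyLoggingSketch::expensiveDescription);
        System.out.println(evaluations); // evaluated once the level is on
    }
}
```

This is why the diff rewrites `.addArgument(ethPeer)` into `.addArgument(ethPeer::toString)`: the eager form stringifies the peer on every call, the method-reference form only when the message is actually emitted.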
@@ -165,7 +165,7 @@ public class BackwardSyncAlgSpec {

ttdCaptor.getValue().onTTDReached(true);

voidCompletableFuture.get(100, TimeUnit.MILLISECONDS);
voidCompletableFuture.get(200, TimeUnit.MILLISECONDS);
assertThat(voidCompletableFuture).isCompleted();

verify(context.getSyncState()).unsubscribeTTDReached(88L);
@@ -192,7 +192,7 @@ public class BackwardSyncAlgSpec {

completionCaptor.getValue().onInitialSyncCompleted();

voidCompletableFuture.get(100, TimeUnit.MILLISECONDS);
voidCompletableFuture.get(200, TimeUnit.MILLISECONDS);
assertThat(voidCompletableFuture).isCompleted();

verify(context.getSyncState()).unsubscribeTTDReached(88L);
@@ -28,6 +28,7 @@ import org.hyperledger.besu.ethereum.eth.sync.worldstate.StalledDownloadExceptio
import org.hyperledger.besu.ethereum.eth.sync.worldstate.WorldStateDownloadProcess;
import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
import org.hyperledger.besu.ethereum.trie.forest.storage.ForestWorldStateKeyValueStorage;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
@@ -80,7 +81,9 @@ public class FastWorldDownloadStateTest {
if (storageFormat == DataStorageFormat.BONSAI) {
worldStateStorage =
new BonsaiWorldStateKeyValueStorage(
new InMemoryKeyValueStorageProvider(), new NoOpMetricsSystem());
new InMemoryKeyValueStorageProvider(),
new NoOpMetricsSystem(),
DataStorageConfiguration.DEFAULT_CONFIG);
} else {
worldStateStorage = new ForestWorldStateKeyValueStorage(new InMemoryKeyValueStorage());
}

@@ -26,7 +26,7 @@ import org.hyperledger.besu.ethereum.core.BlockHeaderTestFixture;
import org.hyperledger.besu.ethereum.core.InMemoryKeyValueStorageProvider;
import org.hyperledger.besu.ethereum.trie.MerkleTrie;
import org.hyperledger.besu.ethereum.trie.patricia.SimpleMerklePatriciaTrie;
import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
import org.hyperledger.besu.services.tasks.Task;

@@ -40,7 +40,8 @@ import org.junit.jupiter.api.Test;
public class PersistDataStepTest {

private final WorldStateStorage worldStateStorage =
new InMemoryKeyValueStorageProvider().createWorldStateStorage(DataStorageFormat.FOREST);
new InMemoryKeyValueStorageProvider()
.createWorldStateStorage(DataStorageConfiguration.DEFAULT_CONFIG);
private final FastWorldDownloadState downloadState = mock(FastWorldDownloadState.class);

private final Bytes rootNodeData = Bytes.of(1, 1, 1, 1);

@@ -34,6 +34,7 @@ import org.hyperledger.besu.ethereum.trie.TrieIterator;
import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
import org.hyperledger.besu.ethereum.trie.patricia.StoredMerklePatriciaTrie;
import org.hyperledger.besu.ethereum.trie.patricia.StoredNodeFactory;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.ethereum.worldstate.StateTrieAccountValue;
import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
@@ -58,7 +59,9 @@ public class AccountHealingTrackingTest {
private final List<Address> accounts = List.of(Address.fromHexString("0xdeadbeef"));
private final WorldStateStorage worldStateStorage =
new BonsaiWorldStateKeyValueStorage(
new InMemoryKeyValueStorageProvider(), new NoOpMetricsSystem());
new InMemoryKeyValueStorageProvider(),
new NoOpMetricsSystem(),
DataStorageConfiguration.DEFAULT_CONFIG);

private WorldStateProofProvider worldStateProofProvider;
@@ -26,7 +26,7 @@ import org.hyperledger.besu.ethereum.eth.sync.snapsync.request.BytecodeRequest;
import org.hyperledger.besu.ethereum.eth.sync.snapsync.request.SnapDataRequest;
import org.hyperledger.besu.ethereum.eth.sync.snapsync.request.StorageRangeDataRequest;
import org.hyperledger.besu.ethereum.trie.patricia.StoredMerklePatriciaTrie;
import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
import org.hyperledger.besu.services.tasks.Task;

@@ -39,7 +39,8 @@ import org.junit.jupiter.api.Test;
public class PersistDataStepTest {

private final WorldStateStorage worldStateStorage =
new InMemoryKeyValueStorageProvider().createWorldStateStorage(DataStorageFormat.FOREST);
new InMemoryKeyValueStorageProvider()
.createWorldStateStorage(DataStorageConfiguration.DEFAULT_CONFIG);
private final SnapSyncProcessState snapSyncState = mock(SnapSyncProcessState.class);
private final SnapWorldDownloadState downloadState = mock(SnapWorldDownloadState.class);

@@ -40,6 +40,7 @@ import org.hyperledger.besu.ethereum.eth.sync.snapsync.request.SnapDataRequest;
import org.hyperledger.besu.ethereum.eth.sync.worldstate.WorldStateDownloadProcess;
import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
import org.hyperledger.besu.ethereum.trie.forest.storage.ForestWorldStateKeyValueStorage;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
@@ -108,7 +109,9 @@ public class SnapWorldDownloadStateTest {
if (storageFormat == DataStorageFormat.BONSAI) {
worldStateStorage =
new BonsaiWorldStateKeyValueStorage(
new InMemoryKeyValueStorageProvider(), new NoOpMetricsSystem());
new InMemoryKeyValueStorageProvider(),
new NoOpMetricsSystem(),
DataStorageConfiguration.DEFAULT_CONFIG);
} else {
worldStateStorage = new ForestWorldStateKeyValueStorage(new InMemoryKeyValueStorage());
}

@@ -27,7 +27,7 @@ import org.hyperledger.besu.ethereum.trie.MerkleTrie;
import org.hyperledger.besu.ethereum.trie.RangeStorageEntriesCollector;
import org.hyperledger.besu.ethereum.trie.TrieIterator;
import org.hyperledger.besu.ethereum.trie.patricia.StoredMerklePatriciaTrie;
import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.ethereum.worldstate.StateTrieAccountValue;
import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
import org.hyperledger.besu.services.tasks.Task;
@@ -44,7 +44,8 @@ public class TaskGenerator {
public static List<Task<SnapDataRequest>> createAccountRequest(final boolean withData) {

final WorldStateStorage worldStateStorage =
new InMemoryKeyValueStorageProvider().createWorldStateStorage(DataStorageFormat.FOREST);
new InMemoryKeyValueStorageProvider()
.createWorldStateStorage(DataStorageConfiguration.DEFAULT_CONFIG);

final WorldStateProofProvider worldStateProofProvider =
new WorldStateProofProvider(worldStateStorage);
@@ -31,6 +31,7 @@ import org.hyperledger.besu.ethereum.trie.RangeStorageEntriesCollector;
import org.hyperledger.besu.ethereum.trie.TrieIterator;
import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
import org.hyperledger.besu.ethereum.trie.forest.storage.ForestWorldStateKeyValueStorage;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
import org.hyperledger.besu.services.kvstore.InMemoryKeyValueStorage;
@@ -179,7 +180,8 @@ public class AccountFlatDatabaseHealingRangeRequestTest {
final StorageProvider storageProvider = new InMemoryKeyValueStorageProvider();

final WorldStateStorage worldStateStorage =
new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem());
new BonsaiWorldStateKeyValueStorage(
storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
final WorldStateProofProvider proofProvider = new WorldStateProofProvider(worldStateStorage);
final MerkleTrie<Bytes, Bytes> accountStateTrie =
TrieGenerator.generateTrie(worldStateStorage, 15);
@@ -233,7 +235,8 @@ public class AccountFlatDatabaseHealingRangeRequestTest {
final StorageProvider storageProvider = new InMemoryKeyValueStorageProvider();

final WorldStateStorage worldStateStorage =
new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem());
new BonsaiWorldStateKeyValueStorage(
storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
final WorldStateProofProvider proofProvider = new WorldStateProofProvider(worldStateStorage);
final MerkleTrie<Bytes, Bytes> accountStateTrie =
TrieGenerator.generateTrie(worldStateStorage, 15);

@@ -33,6 +33,7 @@ import org.hyperledger.besu.ethereum.trie.RangeStorageEntriesCollector;
import org.hyperledger.besu.ethereum.trie.TrieIterator;
import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
import org.hyperledger.besu.ethereum.trie.patricia.StoredMerklePatriciaTrie;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.ethereum.worldstate.StateTrieAccountValue;
import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
@@ -78,7 +79,8 @@ class StorageFlatDatabaseHealingRangeRequestTest {
public void setup() {
final StorageProvider storageProvider = new InMemoryKeyValueStorageProvider();
worldStateStorage =
new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem());
new BonsaiWorldStateKeyValueStorage(
storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
proofProvider = new WorldStateProofProvider(worldStateStorage);
trie =
TrieGenerator.generateTrie(

@@ -24,6 +24,7 @@ import org.hyperledger.besu.ethereum.storage.StorageProvider;
import org.hyperledger.besu.ethereum.trie.MerkleTrie;
import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
import org.hyperledger.besu.ethereum.trie.forest.storage.ForestWorldStateKeyValueStorage;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
import org.hyperledger.besu.ethereum.worldstate.StateTrieAccountValue;
import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
@@ -74,7 +75,8 @@ class StorageTrieNodeHealingRequestTest {
} else {
final StorageProvider storageProvider = new InMemoryKeyValueStorageProvider();
worldStateStorage =
new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem());
new BonsaiWorldStateKeyValueStorage(
storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
}
final MerkleTrie<Bytes, Bytes> trie =
TrieGenerator.generateTrie(
@@ -58,7 +58,7 @@ public class PendingTransactionEstimatedMemorySizeTest extends BaseTransactionPo
private static final Set<Class<?>> SHARED_CLASSES =
Set.of(SignatureAlgorithm.class, TransactionType.class);
private static final Set<String> COMMON_CONSTANT_FIELD_PATHS =
Set.of(".value.ctor", ".hashNoSignature");
Set.of(".value.ctor", ".hashNoSignature", ".signature.encoded.delegate");
private static final Set<String> EIP1559_EIP4844_CONSTANT_FIELD_PATHS =
Sets.union(COMMON_CONSTANT_FIELD_PATHS, Set.of(".gasPrice"));
private static final Set<String> FRONTIER_ACCESS_LIST_CONSTANT_FIELD_PATHS =

@@ -371,6 +371,9 @@ public class EvmToolCommand implements Runnable {
long txGas = gas - intrinsicGasCost - accessListCost;

final EVM evm = protocolSpec.getEvm();
if (codeBytes.isEmpty()) {
codeBytes = component.getWorldState().get(receiver).getCode();
}
Code code = evm.getCode(Hash.hash(codeBytes), codeBytes);
if (!code.isValid()) {
out.println(((CodeInvalid) code).getInvalidReason());
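The EvmToolCommand hunk above adds a fallback: when no bytecode is supplied, the code to execute is fetched from the receiver's account in the world state. A JDK-only sketch of that resolution order, with a map standing in for the world state (names illustrative):

```java
import java.util.Map;

// Sketch: explicitly supplied code wins; an empty input falls back to the
// code deployed at the receiver's address.
public class CodeResolutionSketch {
    static byte[] resolveCode(
        final byte[] suppliedCode,
        final Map<String, byte[]> worldState,
        final String receiver) {
        return suppliedCode.length > 0 ? suppliedCode : worldState.get(receiver);
    }

    public static void main(String[] args) {
        final Map<String, byte[]> worldState =
            Map.of("0xreceiver", new byte[] {0x60, 0x01}); // 2 bytes of "deployed" code

        // Empty input: the deployed code (length 2) is used.
        System.out.println(resolveCode(new byte[0], worldState, "0xreceiver").length);
        // Non-empty input (length 1) takes precedence.
        System.out.println(resolveCode(new byte[] {0x00}, worldState, "0xreceiver").length);
    }
}
```

This lets the tool run a call against an already-deployed contract without the user re-pasting its bytecode on the command line.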
@@ -32,7 +32,7 @@ public class DiscoveryConfiguration {
|
||||
private List<EnodeURL> bootnodes = new ArrayList<>();
|
||||
private String dnsDiscoveryURL;
|
||||
private boolean discoveryV5Enabled = false;
|
||||
private boolean filterOnEnrForkId = false;
|
||||
private boolean filterOnEnrForkId = NetworkingConfiguration.DEFAULT_FILTER_ON_ENR_FORK_ID;
|
||||
|
||||
public static DiscoveryConfiguration create() {
|
||||
return new DiscoveryConfiguration();
|
||||
|
||||
@@ -23,6 +23,7 @@ public class NetworkingConfiguration {
|
||||
public static final int DEFAULT_INITIATE_CONNECTIONS_FREQUENCY_SEC = 30;
|
||||
public static final int DEFAULT_CHECK_MAINTAINED_CONNECTIONS_FREQUENCY_SEC = 60;
|
||||
public static final int DEFAULT_PEER_LOWER_BOUND = 25;
|
||||
public static final boolean DEFAULT_FILTER_ON_ENR_FORK_ID = true;
|
||||
|
||||
private DiscoveryConfiguration discovery = new DiscoveryConfiguration();
|
||||
private RlpxConfiguration rlpx = new RlpxConfiguration();
|
||||
|
||||
@@ -26,6 +26,7 @@ import org.hyperledger.besu.ethereum.p2p.config.DiscoveryConfiguration;
import org.hyperledger.besu.ethereum.p2p.discovery.internal.Packet;
import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerDiscoveryController;
import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerRequirement;
import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerTable;
import org.hyperledger.besu.ethereum.p2p.discovery.internal.PingPacketData;
import org.hyperledger.besu.ethereum.p2p.discovery.internal.TimerUtil;
import org.hyperledger.besu.ethereum.p2p.peers.EnodeURLImpl;
@@ -81,6 +82,7 @@ public abstract class PeerDiscoveryAgent {
private final MetricsSystem metricsSystem;
private final RlpxAgent rlpxAgent;
private final ForkIdManager forkIdManager;
private final PeerTable peerTable;

/* The peer controller, which takes care of the state machine of peers. */
protected Optional<PeerDiscoveryController> controller = Optional.empty();
@@ -109,7 +111,8 @@ public abstract class PeerDiscoveryAgent {
final MetricsSystem metricsSystem,
final StorageProvider storageProvider,
final ForkIdManager forkIdManager,
final RlpxAgent rlpxAgent) {
final RlpxAgent rlpxAgent,
final PeerTable peerTable) {
this.metricsSystem = metricsSystem;
checkArgument(nodeKey != null, "nodeKey cannot be null");
checkArgument(config != null, "provided configuration cannot be null");
@@ -130,6 +133,7 @@ public abstract class PeerDiscoveryAgent {
this.forkIdManager = forkIdManager;
this.forkIdSupplier = () -> forkIdManager.getForkIdForChainHead().getForkIdAsBytesList();
this.rlpxAgent = rlpxAgent;
this.peerTable = peerTable;
}

protected abstract TimerUtil createTimer();
@@ -263,9 +267,9 @@ public abstract class PeerDiscoveryAgent {
.peerRequirement(PeerRequirement.combine(peerRequirements))
.peerPermissions(peerPermissions)
.metricsSystem(metricsSystem)
.forkIdManager(forkIdManager)
.filterOnEnrForkId((config.isFilterOnEnrForkIdEnabled()))
.rlpxAgent(rlpxAgent)
.peerTable(peerTable)
.build();
}

@@ -282,8 +286,31 @@ public abstract class PeerDiscoveryAgent {
.flatMap(Endpoint::getTcpPort)
.orElse(udpPort);

// If the host is present in the P2P PING packet itself, use that as the endpoint. If the P2P
// PING packet specifies 127.0.0.1 (the default if a custom value is not specified with
// --p2p-host or via a suitable --nat-method) we ignore it in favour of the UDP source address.
// The likelihood is that the UDP source will be 127.0.0.1 anyway, but this reduces the chance
// of an unexpected change in behaviour as a result of
// https://github.com/hyperledger/besu/issues/6224 being fixed.
final String host =
packet
.getPacketData(PingPacketData.class)
.flatMap(PingPacketData::getFrom)
.map(Endpoint::getHost)
.filter(
fromAddr ->
(!fromAddr.equals("127.0.0.1") && InetAddresses.isInetAddress(fromAddr)))
.stream()
.peek(
h ->
LOG.trace(
"Using \"From\" endpoint {} specified in ping packet. Ignoring UDP source host {}",
h,
sourceEndpoint.getHost()))
.findFirst()
.orElseGet(sourceEndpoint::getHost);

// Notify the peer controller.
final String host = sourceEndpoint.getHost();
final DiscoveryPeer peer =
DiscoveryPeer.fromEnode(
EnodeURLImpl.builder()

@@ -23,6 +23,7 @@ import org.hyperledger.besu.ethereum.p2p.config.DiscoveryConfiguration;
import org.hyperledger.besu.ethereum.p2p.discovery.internal.Packet;
import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerDiscoveryController;
import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerDiscoveryController.AsyncExecutor;
import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerTable;
import org.hyperledger.besu.ethereum.p2p.discovery.internal.TimerUtil;
import org.hyperledger.besu.ethereum.p2p.discovery.internal.VertxTimerUtil;
import org.hyperledger.besu.ethereum.p2p.permissions.PeerPermissions;
@@ -73,7 +74,8 @@ public class VertxPeerDiscoveryAgent extends PeerDiscoveryAgent {
final MetricsSystem metricsSystem,
final StorageProvider storageProvider,
final ForkIdManager forkIdManager,
final RlpxAgent rlpxAgent) {
final RlpxAgent rlpxAgent,
final PeerTable peerTable) {
super(
nodeKey,
config,
@@ -82,7 +84,8 @@ public class VertxPeerDiscoveryAgent extends PeerDiscoveryAgent {
metricsSystem,
storageProvider,
forkIdManager,
rlpxAgent);
rlpxAgent,
peerTable);
checkArgument(vertx != null, "vertx instance cannot be null");
this.vertx = vertx;

@@ -21,8 +21,6 @@ import static java.util.concurrent.TimeUnit.MILLISECONDS;
import static java.util.concurrent.TimeUnit.SECONDS;

import org.hyperledger.besu.cryptoservices.NodeKey;
import org.hyperledger.besu.ethereum.forkid.ForkId;
import org.hyperledger.besu.ethereum.forkid.ForkIdManager;
import org.hyperledger.besu.ethereum.p2p.discovery.DiscoveryPeer;
import org.hyperledger.besu.ethereum.p2p.discovery.PeerDiscoveryStatus;
import org.hyperledger.besu.ethereum.p2p.peers.Peer;
@@ -129,7 +127,6 @@ public class PeerDiscoveryController {
private final DiscoveryProtocolLogger discoveryProtocolLogger;
private final LabelledMetric<Counter> interactionCounter;
private final LabelledMetric<Counter> interactionRetryCounter;
private final ForkIdManager forkIdManager;
private final boolean filterOnEnrForkId;
private final RlpxAgent rlpxAgent;

@@ -161,7 +158,6 @@ public class PeerDiscoveryController {
final PeerPermissions peerPermissions,
final MetricsSystem metricsSystem,
final Optional<Cache<Bytes, Packet>> maybeCacheForEnrRequests,
final ForkIdManager forkIdManager,
final boolean filterOnEnrForkId,
final RlpxAgent rlpxAgent) {
this.timerUtil = timerUtil;
@@ -197,11 +193,11 @@ public class PeerDiscoveryController {
"discovery_interaction_retry_count",
"Total number of interaction retries performed",
"type");

this.cachedEnrRequests =
maybeCacheForEnrRequests.orElse(
CacheBuilder.newBuilder().maximumSize(50).expireAfterWrite(10, SECONDS).build());

this.forkIdManager = forkIdManager;
this.filterOnEnrForkId = filterOnEnrForkId;
}

@@ -314,6 +310,7 @@ public class PeerDiscoveryController {
}

final DiscoveryPeer peer = resolvePeer(sender);
final Bytes peerId = peer.getId();
switch (packet.getType()) {
case PING:
if (peerPermissions.allowInboundBonding(peer)) {
@@ -333,10 +330,10 @@ public class PeerDiscoveryController {
if (filterOnEnrForkId) {
requestENR(peer);
}
bondingPeers.invalidate(peer.getId());
bondingPeers.invalidate(peerId);
addToPeerTable(peer);
recursivePeerRefreshState.onBondingComplete(peer);
Optional.ofNullable(cachedEnrRequests.getIfPresent(peer.getId()))
Optional.ofNullable(cachedEnrRequests.getIfPresent(peerId))
.ifPresent(cachedEnrRequest -> processEnrRequest(peer, cachedEnrRequest));
});
break;
@@ -360,12 +357,12 @@ public class PeerDiscoveryController {
if (PeerDiscoveryStatus.BONDED.equals(peer.getStatus())) {
processEnrRequest(peer, packet);
} else if (PeerDiscoveryStatus.BONDING.equals(peer.getStatus())) {
LOG.trace("ENR_REQUEST cached for bonding peer Id: {}", peer.getId());
LOG.trace("ENR_REQUEST cached for bonding peer Id: {}", peerId);
// Due to UDP, it may happen that we receive the ENR_REQUEST just before the PONG.
// Because peers want to send the ENR_REQUEST directly after the pong.
// If this happens we don't want to ignore the request but process when bonded.
// this cache allows to keep the request and to respond after having processed the PONG
cachedEnrRequests.put(peer.getId(), packet);
cachedEnrRequests.put(peerId, packet);
}
break;
case ENR_RESPONSE:
@@ -376,26 +373,6 @@ public class PeerDiscoveryController {
packet.getPacketData(ENRResponsePacketData.class);
final NodeRecord enr = packetData.get().getEnr();
peer.setNodeRecord(enr);

final Optional<ForkId> maybeForkId = peer.getForkId();
if (maybeForkId.isPresent()) {
if (forkIdManager.peerCheck(maybeForkId.get())) {
connectOnRlpxLayer(peer);
LOG.debug(
"Peer {} PASSED fork id check. ForkId received: {}",
sender.getId(),
maybeForkId.get());
} else {
LOG.debug(
"Peer {} FAILED fork id check. ForkId received: {}",
sender.getId(),
maybeForkId.get());
}
} else {
// if the peer hasn't sent the ForkId try to connect to it anyways
connectOnRlpxLayer(peer);
LOG.debug("No fork id sent by peer: {}", peer.getId());
}
});
break;
}
@@ -431,9 +408,7 @@ public class PeerDiscoveryController {

if (peer.getStatus() != PeerDiscoveryStatus.BONDED) {
peer.setStatus(PeerDiscoveryStatus.BONDED);
if (!filterOnEnrForkId) {
connectOnRlpxLayer(peer);
}
connectOnRlpxLayer(peer);
}

final PeerTable.AddResult result = peerTable.tryAdd(peer);
@@ -560,8 +535,6 @@ public class PeerDiscoveryController {
*/
@VisibleForTesting
void requestENR(final DiscoveryPeer peer) {
peer.setStatus(PeerDiscoveryStatus.ENR_REQUESTED);

final Consumer<PeerInteractionState> action =
interaction -> {
final ENRRequestPacketData data = ENRRequestPacketData.create();
@@ -838,7 +811,6 @@ public class PeerDiscoveryController {

private Cache<Bytes, Packet> cachedEnrRequests =
CacheBuilder.newBuilder().maximumSize(50).expireAfterWrite(10, SECONDS).build();
private ForkIdManager forkIdManager;
private RlpxAgent rlpxAgent;

private Builder() {}
@@ -846,10 +818,6 @@ public class PeerDiscoveryController {
public PeerDiscoveryController build() {
validate();

if (peerTable == null) {
peerTable = new PeerTable(this.nodeKey.getPublicKey().getEncodedBytes(), 16);
}

return new PeerDiscoveryController(
nodeKey,
localPeer,
@@ -864,7 +832,6 @@ public class PeerDiscoveryController {
peerPermissions,
metricsSystem,
Optional.of(cachedEnrRequests),
forkIdManager,
filterOnEnrForkId,
rlpxAgent);
}
@@ -875,8 +842,8 @@ public class PeerDiscoveryController {
validateRequiredDependency(timerUtil, "TimerUtil");
validateRequiredDependency(workerExecutor, "AsyncExecutor");
validateRequiredDependency(metricsSystem, "MetricsSystem");
validateRequiredDependency(forkIdManager, "ForkIdManager");
validateRequiredDependency(rlpxAgent, "RlpxAgent");
validateRequiredDependency(peerTable, "PeerTable");
}

private void validateRequiredDependency(final Object object, final String name) {
@@ -970,11 +937,5 @@ public class PeerDiscoveryController {
this.rlpxAgent = rlpxAgent;
return this;
}

public Builder forkIdManager(final ForkIdManager forkIdManager) {
checkNotNull(forkIdManager);
this.forkIdManager = forkIdManager;
return this;
}
}
}

@@ -56,26 +56,21 @@ public class PeerTable {
* Builds a new peer table, where distance is calculated using the provided nodeId as a baseline.
*
* @param nodeId The ID of the node where this peer table is stored.
* @param bucketSize The maximum length of each k-bucket.
*/
public PeerTable(final Bytes nodeId, final int bucketSize) {
public PeerTable(final Bytes nodeId) {
this.keccak256 = Hash.keccak256(nodeId);
this.table =
Stream.generate(() -> new Bucket(DEFAULT_BUCKET_SIZE))
.limit(N_BUCKETS + 1)
.toArray(Bucket[]::new);
this.distanceCache = new ConcurrentHashMap<>();
this.maxEntriesCnt = N_BUCKETS * bucketSize;
this.maxEntriesCnt = N_BUCKETS * DEFAULT_BUCKET_SIZE;

// A bloom filter with 4096 expected insertions of 64-byte keys with a 0.1% false positive
// probability yields a memory footprint of ~7.5kb.
buildBloomFilter();
}

public PeerTable(final Bytes nodeId) {
this(nodeId, DEFAULT_BUCKET_SIZE);
}

/**
* Returns the table's representation of a peer, if it exists.
*
@@ -83,11 +78,12 @@ public class PeerTable {
* @return The stored representation.
*/
public Optional<DiscoveryPeer> get(final PeerId peer) {
if (!idBloom.mightContain(peer.getId())) {
final Bytes peerId = peer.getId();
if (!idBloom.mightContain(peerId)) {
return Optional.empty();
}
final int distance = distanceFrom(peer);
return table[distance].getAndTouch(peer.getId());
return table[distance].getAndTouch(peerId);
}

/**

@@ -27,6 +27,7 @@ import org.hyperledger.besu.ethereum.p2p.discovery.DiscoveryPeer;
import org.hyperledger.besu.ethereum.p2p.discovery.PeerDiscoveryAgent;
import org.hyperledger.besu.ethereum.p2p.discovery.PeerDiscoveryStatus;
import org.hyperledger.besu.ethereum.p2p.discovery.VertxPeerDiscoveryAgent;
import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerTable;
import org.hyperledger.besu.ethereum.p2p.peers.DefaultPeerPrivileges;
import org.hyperledger.besu.ethereum.p2p.peers.EnodeURLImpl;
import org.hyperledger.besu.ethereum.p2p.peers.LocalNode;
@@ -383,11 +384,12 @@ public class DefaultP2PNetwork implements P2PNetwork {
@VisibleForTesting
void attemptPeerConnections() {
LOG.trace("Initiating connections to discovered peers.");
rlpxAgent.connect(
final Stream<DiscoveryPeer> toTry =
streamDiscoveredPeers()
.filter(peer -> peer.getStatus() == PeerDiscoveryStatus.BONDED)
.filter(peerDiscoveryAgent::checkForkId)
.sorted(Comparator.comparing(DiscoveryPeer::getLastAttemptedConnection)));
.sorted(Comparator.comparing(DiscoveryPeer::getLastAttemptedConnection));
toTry.forEach(rlpxAgent::connect);
}

@Override
@@ -511,6 +513,7 @@ public class DefaultP2PNetwork implements P2PNetwork {
private Supplier<Stream<PeerConnection>> allConnectionsSupplier;
private Supplier<Stream<PeerConnection>> allActiveConnectionsSupplier;
private int peersLowerBound;
private PeerTable peerTable;

public P2PNetwork build() {
validate();
@@ -528,6 +531,7 @@ public class DefaultP2PNetwork implements P2PNetwork {
final MutableLocalNode localNode =
MutableLocalNode.create(config.getRlpx().getClientId(), 5, supportedCapabilities);
final PeerPrivileges peerPrivileges = new DefaultPeerPrivileges(maintainedPeers);
peerTable = new PeerTable(nodeKey.getPublicKey().getEncodedBytes());
rlpxAgent = rlpxAgent == null ? createRlpxAgent(localNode, peerPrivileges) : rlpxAgent;
peerDiscoveryAgent = peerDiscoveryAgent == null ? createDiscoveryAgent() : peerDiscoveryAgent;

@@ -572,7 +576,8 @@ public class DefaultP2PNetwork implements P2PNetwork {
metricsSystem,
storageProvider,
forkIdManager,
rlpxAgent);
rlpxAgent,
peerTable);
}

private RlpxAgent createRlpxAgent(
@@ -589,6 +594,7 @@ public class DefaultP2PNetwork implements P2PNetwork {
.allConnectionsSupplier(allConnectionsSupplier)
.allActiveConnectionsSupplier(allActiveConnectionsSupplier)
.peersLowerBound(peersLowerBound)
.peerTable(peerTable)
.build();
}

@@ -20,6 +20,7 @@ import static com.google.common.base.Preconditions.checkState;
import org.hyperledger.besu.cryptoservices.NodeKey;
import org.hyperledger.besu.ethereum.p2p.config.RlpxConfiguration;
import org.hyperledger.besu.ethereum.p2p.discovery.DiscoveryPeer;
import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerTable;
import org.hyperledger.besu.ethereum.p2p.peers.LocalNode;
import org.hyperledger.besu.ethereum.p2p.peers.Peer;
import org.hyperledger.besu.ethereum.p2p.peers.PeerPrivileges;
@@ -162,13 +163,6 @@ public class RlpxAgent {
}
}

public void connect(final Stream<? extends Peer> peerStream) {
if (!localNode.isReady()) {
return;
}
peerStream.forEach(this::connect);
}

public void disconnect(final Bytes peerId, final DisconnectReason reason) {
try {
allActiveConnectionsSupplier
@@ -206,6 +200,7 @@ public class RlpxAgent {
+ this.getClass().getSimpleName()
+ " has finished starting"));
}

// Check peer is valid
final EnodeURL enode = peer.getEnodeURL();
if (!enode.isListening()) {
@@ -380,6 +375,7 @@ public class RlpxAgent {
private Supplier<Stream<PeerConnection>> allConnectionsSupplier;
private Supplier<Stream<PeerConnection>> allActiveConnectionsSupplier;
private int peersLowerBound;
private PeerTable peerTable;

private Builder() {}

@@ -399,12 +395,13 @@ public class RlpxAgent {
localNode,
connectionEvents,
metricsSystem,
p2pTLSConfiguration.get());
p2pTLSConfiguration.get(),
peerTable);
} else {
LOG.debug("Using default NettyConnectionInitializer");
connectionInitializer =
new NettyConnectionInitializer(
nodeKey, config, localNode, connectionEvents, metricsSystem);
nodeKey, config, localNode, connectionEvents, metricsSystem, peerTable);
}
}

@@ -499,5 +496,10 @@ public class RlpxAgent {
this.peersLowerBound = peersLowerBound;
return this;
}

public Builder peerTable(final PeerTable peerTable) {
this.peerTable = peerTable;
return this;
}
}
}

@@ -14,6 +14,7 @@
*/
package org.hyperledger.besu.ethereum.p2p.rlpx.connections.netty;

import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerTable;
import org.hyperledger.besu.ethereum.p2p.peers.LocalNode;
import org.hyperledger.besu.ethereum.p2p.peers.Peer;
import org.hyperledger.besu.ethereum.p2p.rlpx.connections.PeerConnection;
@@ -60,6 +61,7 @@ abstract class AbstractHandshakeHandler extends SimpleChannelInboundHandler<Byte

private final FramerProvider framerProvider;
private final boolean inboundInitiated;
private final PeerTable peerTable;

AbstractHandshakeHandler(
final List<SubProtocol> subProtocols,
@@ -70,7 +72,8 @@ abstract class AbstractHandshakeHandler extends SimpleChannelInboundHandler<Byte
final MetricsSystem metricsSystem,
final HandshakerProvider handshakerProvider,
final FramerProvider framerProvider,
final boolean inboundInitiated) {
final boolean inboundInitiated,
final PeerTable peerTable) {
this.subProtocols = subProtocols;
this.localNode = localNode;
this.expectedPeer = expectedPeer;
@@ -80,6 +83,7 @@ abstract class AbstractHandshakeHandler extends SimpleChannelInboundHandler<Byte
this.handshaker = handshakerProvider.buildInstance();
this.framerProvider = framerProvider;
this.inboundInitiated = inboundInitiated;
this.peerTable = peerTable;
}

/**
@@ -97,47 +101,48 @@ abstract class AbstractHandshakeHandler extends SimpleChannelInboundHandler<Byte
ctx.writeAndFlush(nextMsg.get());
} else if (handshaker.getStatus() != Handshaker.HandshakeStatus.SUCCESS) {
LOG.debug("waiting for more bytes");
return;
} else {

final Bytes nodeId = handshaker.partyPubKey().getEncodedBytes();
if (!localNode.isReady()) {
// If we're handling a connection before the node is fully up, just disconnect
LOG.debug("Rejecting connection because local node is not ready {}", nodeId);
disconnect(ctx, DisconnectMessage.DisconnectReason.UNKNOWN);
return;
}

LOG.trace("Sending framed hello");

// Exchange keys done
final Framer framer = this.framerProvider.buildFramer(handshaker.secrets());

final ByteToMessageDecoder deFramer =
new DeFramer(
framer,
subProtocols,
localNode,
expectedPeer,
connectionEventDispatcher,
connectionFuture,
metricsSystem,
inboundInitiated,
peerTable);

ctx.channel()
.pipeline()
.replace(this, "DeFramer", deFramer)
.addBefore("DeFramer", "validate", new ValidateFirstOutboundMessage(framer));

ctx.writeAndFlush(new OutboundMessage(null, HelloMessage.create(localNode.getPeerInfo())))
.addListener(
ff -> {
if (ff.isSuccess()) {
LOG.trace("Successfully wrote hello message");
}
});
msg.retain();
ctx.fireChannelRead(msg);
}

final Bytes nodeId = handshaker.partyPubKey().getEncodedBytes();
if (!localNode.isReady()) {
// If we're handling a connection before the node is fully up, just disconnect
LOG.debug("Rejecting connection because local node is not ready {}", nodeId);
disconnect(ctx, DisconnectMessage.DisconnectReason.UNKNOWN);
return;
}

LOG.trace("Sending framed hello");

// Exchange keys done
final Framer framer = this.framerProvider.buildFramer(handshaker.secrets());

final ByteToMessageDecoder deFramer =
new DeFramer(
framer,
subProtocols,
localNode,
expectedPeer,
connectionEventDispatcher,
connectionFuture,
metricsSystem,
inboundInitiated);

ctx.channel()
.pipeline()
.replace(this, "DeFramer", deFramer)
.addBefore("DeFramer", "validate", new ValidateFirstOutboundMessage(framer));

ctx.writeAndFlush(new OutboundMessage(null, HelloMessage.create(localNode.getPeerInfo())))
.addListener(
ff -> {
if (ff.isSuccess()) {
LOG.trace("Successfully wrote hello message");
}
});
msg.retain();
ctx.fireChannelRead(msg);
}

private void disconnect(

@@ -14,6 +14,8 @@
*/
package org.hyperledger.besu.ethereum.p2p.rlpx.connections.netty;

import org.hyperledger.besu.ethereum.p2p.discovery.DiscoveryPeer;
import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerTable;
import org.hyperledger.besu.ethereum.p2p.network.exceptions.BreachOfProtocolException;
import org.hyperledger.besu.ethereum.p2p.network.exceptions.IncompatiblePeerException;
import org.hyperledger.besu.ethereum.p2p.network.exceptions.PeerChannelClosedException;
@@ -70,6 +72,7 @@ final class DeFramer extends ByteToMessageDecoder {
private final Optional<Peer> expectedPeer;
private final List<SubProtocol> subProtocols;
private final boolean inboundInitiated;
private final PeerTable peerTable;
private boolean hellosExchanged;
private final LabelledMetric<Counter> outboundMessagesCounter;

@@ -81,7 +84,8 @@ final class DeFramer extends ByteToMessageDecoder {
final PeerConnectionEventDispatcher connectionEventDispatcher,
final CompletableFuture<PeerConnection> connectFuture,
final MetricsSystem metricsSystem,
final boolean inboundInitiated) {
final boolean inboundInitiated,
final PeerTable peerTable) {
this.framer = framer;
this.subProtocols = subProtocols;
this.localNode = localNode;
@@ -89,6 +93,7 @@ final class DeFramer extends ByteToMessageDecoder {
this.connectFuture = connectFuture;
this.connectionEventDispatcher = connectionEventDispatcher;
this.inboundInitiated = inboundInitiated;
this.peerTable = peerTable;
this.outboundMessagesCounter =
metricsSystem.createLabelledCounter(
BesuMetricCategory.NETWORK,
@@ -105,8 +110,11 @@ final class DeFramer extends ByteToMessageDecoder {
while ((message = framer.deframe(in)) != null) {

if (hellosExchanged) {

out.add(message);

} else if (message.getCode() == WireMessageCodes.HELLO) {

hellosExchanged = true;
// Decode first hello and use the payload to modify pipeline
final PeerInfo peerInfo;
@@ -129,13 +137,27 @@ final class DeFramer extends ByteToMessageDecoder {
subProtocols,
localNode.getPeerInfo().getCapabilities(),
peerInfo.getCapabilities());
final Optional<Peer> peer = expectedPeer.or(() -> createPeer(peerInfo, ctx));
if (peer.isEmpty()) {
LOG.debug("Failed to create connection for peer {}", peerInfo);
connectFuture.completeExceptionally(new PeerChannelClosedException(peerInfo));
ctx.close();
return;

Optional<Peer> peer;
if (expectedPeer.isPresent()) {
peer = expectedPeer;
} else {
// This is an inbound "Hello" message. Create peer from information from the Hello message
peer = createPeer(peerInfo, ctx);
if (peer.isEmpty()) {
LOG.debug("Failed to create connection for peer {}", peerInfo);
connectFuture.completeExceptionally(new PeerChannelClosedException(peerInfo));
ctx.close();
return;
}
// If we can find the DiscoveryPeer for the peer in the PeerTable we use it, because
// it could contains additional information, like the fork id.
final Optional<DiscoveryPeer> discoveryPeer = peerTable.get(peer.get());
if (discoveryPeer.isPresent()) {
peer = Optional.of(discoveryPeer.get());
}
}

final PeerConnection connection =
new NettyPeerConnection(
ctx,
@@ -176,7 +198,9 @@ final class DeFramer extends ByteToMessageDecoder {
capabilityMultiplexer, connection, connectionEventDispatcher, waitingForPong),
new MessageFramer(capabilityMultiplexer, framer));
connectFuture.complete(connection);

} else if (message.getCode() == WireMessageCodes.DISCONNECT) {

final DisconnectMessage disconnectMessage = DisconnectMessage.readFrom(message);
LOG.debug(
"Peer {} disconnected before sending HELLO. Reason: {}",
@@ -185,8 +209,10 @@ final class DeFramer extends ByteToMessageDecoder {
ctx.close();
connectFuture.completeExceptionally(
new PeerDisconnectedException(disconnectMessage.getReason()));

} else {
// Unexpected message - disconnect

LOG.debug(
"Message received before HELLO's exchanged (BREACH_OF_PROTOCOL), disconnecting. Peer: {}, Code: {}, Data: {}",
expectedPeer.map(Peer::getEnodeURLString).orElse("unknown"),

@@ -15,6 +15,7 @@
package org.hyperledger.besu.ethereum.p2p.rlpx.connections.netty;

import org.hyperledger.besu.cryptoservices.NodeKey;
import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerTable;
import org.hyperledger.besu.ethereum.p2p.peers.LocalNode;
import org.hyperledger.besu.ethereum.p2p.rlpx.connections.PeerConnection;
import org.hyperledger.besu.ethereum.p2p.rlpx.connections.PeerConnectionEventDispatcher;
@@ -40,7 +41,8 @@ final class HandshakeHandlerInbound extends AbstractHandshakeHandler {
final PeerConnectionEventDispatcher connectionEventDispatcher,
final MetricsSystem metricsSystem,
final HandshakerProvider handshakerProvider,
final FramerProvider framerProvider) {
final FramerProvider framerProvider,
final PeerTable peerTable) {
super(
subProtocols,
localNode,
@@ -50,7 +52,8 @@ final class HandshakeHandlerInbound extends AbstractHandshakeHandler {
metricsSystem,
handshakerProvider,
framerProvider,
true);
true,
peerTable);
handshaker.prepareResponder(nodeKey);
}

@@ -16,6 +16,7 @@ package org.hyperledger.besu.ethereum.p2p.rlpx.connections.netty;

import org.hyperledger.besu.crypto.SignatureAlgorithmFactory;
import org.hyperledger.besu.cryptoservices.NodeKey;
import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerTable;
import org.hyperledger.besu.ethereum.p2p.peers.LocalNode;
import org.hyperledger.besu.ethereum.p2p.peers.Peer;
import org.hyperledger.besu.ethereum.p2p.rlpx.connections.PeerConnection;
@@ -50,7 +51,8 @@ final class HandshakeHandlerOutbound extends AbstractHandshakeHandler {
final PeerConnectionEventDispatcher connectionEventDispatcher,
final MetricsSystem metricsSystem,
final HandshakerProvider handshakerProvider,
final FramerProvider framerProvider) {
final FramerProvider framerProvider,
final PeerTable peerTable) {
super(
subProtocols,
localNode,
@@ -60,7 +62,8 @@ final class HandshakeHandlerOutbound extends AbstractHandshakeHandler {
metricsSystem,
handshakerProvider,
framerProvider,
false);
false,
peerTable);
handshaker.prepareInitiator(
nodeKey, SignatureAlgorithmFactory.getInstance().createPublicKey(peer.getId()));
this.first = handshaker.firstMessage();

@@ -17,6 +17,7 @@ package org.hyperledger.besu.ethereum.p2p.rlpx.connections.netty;
import org.hyperledger.besu.cryptoservices.NodeKey;
import org.hyperledger.besu.ethereum.p2p.config.RlpxConfiguration;
import org.hyperledger.besu.ethereum.p2p.discovery.DiscoveryPeer;
import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerTable;
import org.hyperledger.besu.ethereum.p2p.peers.LocalNode;
import org.hyperledger.besu.ethereum.p2p.peers.Peer;
import org.hyperledger.besu.ethereum.p2p.rlpx.ConnectCallback;
@@ -68,6 +69,7 @@ public class NettyConnectionInitializer
private final PeerConnectionEventDispatcher eventDispatcher;
private final MetricsSystem metricsSystem;
private final Subscribers<ConnectCallback> connectSubscribers = Subscribers.create();
private final PeerTable peerTable;

private ChannelFuture server;
private final EventLoopGroup boss = new NioEventLoopGroup(1);
@@ -80,12 +82,14 @@ public class NettyConnectionInitializer
final RlpxConfiguration config,
final LocalNode localNode,
final PeerConnectionEventDispatcher eventDispatcher,
final MetricsSystem metricsSystem) {
final MetricsSystem metricsSystem,
final PeerTable peerTable) {
this.nodeKey = nodeKey;
this.config = config;
this.localNode = localNode;
this.eventDispatcher = eventDispatcher;
this.metricsSystem = metricsSystem;
this.peerTable = peerTable;

metricsSystem.createIntegerGauge(
BesuMetricCategory.NETWORK,
@@ -244,7 +248,8 @@ public class NettyConnectionInitializer
eventDispatcher,
metricsSystem,
this,
this);
this,
peerTable);
}

@Nonnull
@@ -259,7 +264,8 @@ public class NettyConnectionInitializer
eventDispatcher,
metricsSystem,
this,
this);
this,
peerTable);
}

@Nonnull

@@ -19,6 +19,7 @@ import static org.hyperledger.besu.ethereum.p2p.rlpx.RlpxFrameConstants.LENGTH_M

import org.hyperledger.besu.cryptoservices.NodeKey;
import org.hyperledger.besu.ethereum.p2p.config.RlpxConfiguration;
import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerTable;
import org.hyperledger.besu.ethereum.p2p.peers.LocalNode;
import org.hyperledger.besu.ethereum.p2p.peers.Peer;
import org.hyperledger.besu.ethereum.p2p.plain.PlainFramer;
@@ -55,7 +56,8 @@ public class NettyTLSConnectionInitializer extends NettyConnectionInitializer {
final LocalNode localNode,
final PeerConnectionEventDispatcher eventDispatcher,
final MetricsSystem metricsSystem,
final TLSConfiguration p2pTLSConfiguration) {
final TLSConfiguration p2pTLSConfiguration,
final PeerTable peerTable) {
this(
nodeKey,
config,
@@ -63,7 +65,8 @@ public class NettyTLSConnectionInitializer extends NettyConnectionInitializer {
eventDispatcher,
metricsSystem,
defaultTlsContextFactorySupplier(p2pTLSConfiguration),
p2pTLSConfiguration.getClientHelloSniHeaderEnabled());
p2pTLSConfiguration.getClientHelloSniHeaderEnabled(),
peerTable);
}

@VisibleForTesting
@@ -74,8 +77,9 @@ public class NettyTLSConnectionInitializer extends NettyConnectionInitializer {
final PeerConnectionEventDispatcher eventDispatcher,
final MetricsSystem metricsSystem,
final Supplier<TLSContextFactory> tlsContextFactorySupplier,
final Boolean clientHelloSniHeaderEnabled) {
super(nodeKey, config, localNode, eventDispatcher, metricsSystem);
final Boolean clientHelloSniHeaderEnabled,
final PeerTable peerTable) {
super(nodeKey, config, localNode, eventDispatcher, metricsSystem, peerTable);
if (tlsContextFactorySupplier != null) {
this.tlsContextFactorySupplier =
Optional.of(Suppliers.memoize(tlsContextFactorySupplier::get));

@@ -244,6 +244,30 @@ public class PeerDiscoveryAgentTest {
}
}

@Test
public void endpointHonoursCustomAdvertisedAddressInPingPacket() {

// Start a peer with the default advertised host
final MockPeerDiscoveryAgent agent1 = helper.startDiscoveryAgent();

// Start another peer with its advertised host set to a custom value
final MockPeerDiscoveryAgent agent2 = helper.startDiscoveryAgent("192.168.0.1");

// Send a PING so we can exchange messages
Packet packet = helper.createPingPacket(agent2, agent1);
helper.sendMessageBetweenAgents(agent2, agent1, packet);

// Agent 1's peers should have endpoints that match the custom advertised value...
agent1
.streamDiscoveredPeers()
.forEach(peer -> assertThat(peer.getEndpoint().getHost()).isEqualTo("192.168.0.1"));

// ...but agent 2's peers should have endpoints that match the default
agent2
.streamDiscoveredPeers()
.forEach(peer -> assertThat(peer.getEndpoint().getHost()).isEqualTo("127.0.0.1"));
}

@Test
public void shouldEvictPeerWhenPermissionsRevoked() {
final PeerPermissionsDenylist denylist = PeerPermissionsDenylist.create();

@@ -165,6 +165,14 @@ public class PeerDiscoveryTestHelper {
return startDiscoveryAgent(agentBuilder);
}

public MockPeerDiscoveryAgent startDiscoveryAgent(
final String advertisedHost, final DiscoveryPeer... bootstrapPeers) {
final AgentBuilder agentBuilder =
agentBuilder().bootstrapPeers(bootstrapPeers).advertisedHost(advertisedHost);

return startDiscoveryAgent(agentBuilder);
}

/**
* Start a single discovery agent with the provided bootstrap peers.
*
@@ -287,6 +295,7 @@ public class PeerDiscoveryTestHelper {
config.setAdvertisedHost(advertisedHost);
config.setBindPort(port);
config.setActive(active);
config.setFilterOnEnrForkId(false);

final ForkIdManager mockForkIdManager = mock(ForkIdManager.class);
final ForkId forkId = new ForkId(Bytes.EMPTY, Bytes.EMPTY);

@@ -20,6 +20,8 @@ import org.hyperledger.besu.ethereum.rlp.BytesValueRLPOutput;
import org.hyperledger.besu.ethereum.rlp.RLP;

import org.apache.tuweni.bytes.Bytes;
import org.apache.tuweni.bytes.Bytes32;
import org.apache.tuweni.crypto.SECP256K1;
import org.apache.tuweni.units.bigints.UInt64;
import org.ethereum.beacon.discovery.schema.EnrField;
import org.ethereum.beacon.discovery.schema.IdentitySchema;
@@ -34,8 +36,10 @@ public class ENRResponsePacketDataTest {
final Bytes requestHash = Bytes.fromHexStringLenient("0x1234");
final Bytes nodeId =
Bytes.fromHexString("a448f24c6d18e575453db13171562b71999873db5b286df957af199ec94617f7");
final Bytes privateKey =
Bytes.fromHexString("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291");
final SECP256K1.SecretKey privateKey =
SECP256K1.SecretKey.fromBytes(
Bytes32.fromHexString(
"b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291"));

NodeRecord nodeRecord =
NodeRecordFactory.DEFAULT.createFromValues(
@@ -48,7 +52,8 @@ public class ENRResponsePacketDataTest {
new EnrField(EnrField.TCP, 8080),
new EnrField(EnrField.TCP_V6, 8080),
new EnrField(
EnrField.PKEY_SECP256K1, Functions.derivePublicKeyFromPrivate(privateKey)));
EnrField.PKEY_SECP256K1,
Functions.deriveCompressedPublicKeyFromPrivate(privateKey)));
nodeRecord.sign(privateKey);

assertThat(nodeRecord.getNodeId()).isEqualTo(nodeId);
@@ -72,8 +77,10 @@ public class ENRResponsePacketDataTest {
final Bytes requestHash = Bytes.fromHexStringLenient("0x1234");
final Bytes nodeId =
Bytes.fromHexString("a448f24c6d18e575453db13171562b71999873db5b286df957af199ec94617f7");
final Bytes privateKey =
Bytes.fromHexString("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291");
final SECP256K1.SecretKey privateKey =
SECP256K1.SecretKey.fromBytes(
Bytes32.fromHexString(
"b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291"));

NodeRecord nodeRecord =
NodeRecordFactory.DEFAULT.createFromValues(
@@ -82,7 +89,8 @@ public class ENRResponsePacketDataTest {
new EnrField(EnrField.IP_V4, Bytes.fromHexString("0x7F000001")),
new EnrField(EnrField.UDP, 30303),
new EnrField(
EnrField.PKEY_SECP256K1, Functions.derivePublicKeyFromPrivate(privateKey)));
EnrField.PKEY_SECP256K1,
Functions.deriveCompressedPublicKeyFromPrivate(privateKey)));
nodeRecord.sign(privateKey);

assertThat(nodeRecord.getNodeId()).isEqualTo(nodeId);
@@ -109,8 +117,10 @@ public class ENRResponsePacketDataTest {
final Bytes requestHash = Bytes.fromHexStringLenient("0x1234");
final Bytes nodeId =
Bytes.fromHexString("a448f24c6d18e575453db13171562b71999873db5b286df957af199ec94617f7");
final Bytes privateKey =
Bytes.fromHexString("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291");
final SECP256K1.SecretKey privateKey =
SECP256K1.SecretKey.fromBytes(
Bytes32.fromHexString(
"b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291"));

NodeRecord nodeRecord =
NodeRecordFactory.DEFAULT.createFromValues(
@@ -119,7 +129,8 @@ public class ENRResponsePacketDataTest {
new EnrField(EnrField.IP_V4, Bytes.fromHexString("0x7F000001")),
new EnrField(EnrField.UDP, 30303),
new EnrField(
EnrField.PKEY_SECP256K1, Functions.derivePublicKeyFromPrivate(privateKey)));
EnrField.PKEY_SECP256K1,
Functions.deriveCompressedPublicKeyFromPrivate(privateKey)));
nodeRecord.sign(privateKey);

assertThat(nodeRecord.getNodeId()).isEqualTo(nodeId);
@@ -144,8 +155,10 @@ public class ENRResponsePacketDataTest {
final Bytes requestHash = Bytes.fromHexStringLenient("0x1234");
final Bytes nodeId =
Bytes.fromHexString("a448f24c6d18e575453db13171562b71999873db5b286df957af199ec94617f7");
final Bytes privateKey =
Bytes.fromHexString("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291");
final SECP256K1.SecretKey privateKey =
SECP256K1.SecretKey.fromBytes(
Bytes32.fromHexString(
"b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291"));

NodeRecord nodeRecord =
NodeRecordFactory.DEFAULT.createFromValues(
@@ -153,7 +166,9 @@ public class ENRResponsePacketDataTest {
new EnrField(EnrField.ID, IdentitySchema.V4),
new EnrField(EnrField.IP_V4, Bytes.fromHexString("0x7F000001")),
new EnrField(EnrField.UDP, 30303),
new EnrField(EnrField.PKEY_SECP256K1, Functions.derivePublicKeyFromPrivate(privateKey)),
new EnrField(
EnrField.PKEY_SECP256K1,
Functions.deriveCompressedPublicKeyFromPrivate(privateKey)),
new EnrField("foo", Bytes.fromHexString("0x1234")));
nodeRecord.sign(privateKey);

@@ -181,8 +196,10 @@ public class ENRResponsePacketDataTest {
@Test
public void readFrom_invalidSignature() {
final Bytes requestHash = Bytes.fromHexStringLenient("0x1234");
final Bytes privateKey =
Bytes.fromHexString("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f292");
final SECP256K1.SecretKey privateKey =
SECP256K1.SecretKey.fromBytes(
Bytes32.fromHexString(
"b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f292"));

NodeRecord nodeRecord =
NodeRecordFactory.DEFAULT.createFromValues(
@@ -191,7 +208,8 @@ public class ENRResponsePacketDataTest {
new EnrField(EnrField.IP_V4, Bytes.fromHexString("0x7F000001")),
new EnrField(EnrField.UDP, 30303),
new EnrField(
EnrField.PKEY_SECP256K1, Functions.derivePublicKeyFromPrivate(privateKey)));
EnrField.PKEY_SECP256K1,
Functions.deriveCompressedPublicKeyFromPrivate(privateKey)));
nodeRecord.sign(privateKey);
nodeRecord.set(EnrField.UDP, 1234);

@@ -63,7 +63,8 @@ public class MockPeerDiscoveryAgent extends PeerDiscoveryAgent {
new NoOpMetricsSystem(),
new InMemoryKeyValueStorageProvider(),
forkIdManager,
rlpxAgent);
rlpxAgent,
new PeerTable(nodeKey.getPublicKey().getEncodedBytes()));
this.agentNetwork = agentNetwork;
}

@@ -35,8 +35,6 @@ import org.hyperledger.besu.crypto.Hash;
import org.hyperledger.besu.crypto.SignatureAlgorithm;
import org.hyperledger.besu.crypto.SignatureAlgorithmFactory;
import org.hyperledger.besu.cryptoservices.NodeKey;
import org.hyperledger.besu.ethereum.forkid.ForkId;
import org.hyperledger.besu.ethereum.forkid.ForkIdManager;
import org.hyperledger.besu.ethereum.p2p.discovery.DiscoveryPeer;
import org.hyperledger.besu.ethereum.p2p.discovery.Endpoint;
import org.hyperledger.besu.ethereum.p2p.discovery.PeerDiscoveryStatus;
@@ -1480,14 +1478,12 @@ public class PeerDiscoveryControllerTest {
}

@Test
public void shouldFiltersOnForkIdSuccess() {
public void forkIdShouldBeAvailableIfEnrPacketContainsForkId() {
final List<NodeKey> nodeKeys = PeerDiscoveryTestHelper.generateNodeKeys(1);
final List<DiscoveryPeer> peers = helper.createDiscoveryPeers(nodeKeys);
final ForkIdManager forkIdManager = mock(ForkIdManager.class);
final DiscoveryPeer sender = peers.get(0);
final Packet enrPacket = prepareForForkIdCheck(forkIdManager, nodeKeys, sender, true);
final Packet enrPacket = prepareForForkIdCheck(nodeKeys, sender, true);

when(forkIdManager.peerCheck(any(ForkId.class))).thenReturn(true);
controller.onMessage(enrPacket, sender);

final Optional<DiscoveryPeer> maybePeer =
@@ -1501,35 +1497,12 @@ public class PeerDiscoveryControllerTest {
verify(controller, times(1)).connectOnRlpxLayer(eq(maybePeer.get()));
}

@Test
public void shouldFiltersOnForkIdFailure() {
final List<NodeKey> nodeKeys = PeerDiscoveryTestHelper.generateNodeKeys(1);
final List<DiscoveryPeer> peers = helper.createDiscoveryPeers(nodeKeys);
final ForkIdManager forkIdManager = mock(ForkIdManager.class);
final DiscoveryPeer sender = peers.get(0);
final Packet enrPacket = prepareForForkIdCheck(forkIdManager, nodeKeys, sender, true);

when(forkIdManager.peerCheck(any(ForkId.class))).thenReturn(false);
controller.onMessage(enrPacket, sender);

final Optional<DiscoveryPeer> maybePeer =
controller
.streamDiscoveredPeers()
.filter(p -> p.getId().equals(sender.getId()))
.findFirst();

assertThat(maybePeer.isPresent()).isTrue();
assertThat(maybePeer.get().getForkId().isPresent()).isTrue();
verify(controller, never()).connectOnRlpxLayer(eq(maybePeer.get()));
}

@Test
public void shouldStillCallConnectIfNoForkIdSent() {
final List<NodeKey> nodeKeys = PeerDiscoveryTestHelper.generateNodeKeys(1);
final List<DiscoveryPeer> peers = helper.createDiscoveryPeers(nodeKeys);
final DiscoveryPeer sender = peers.get(0);
final Packet enrPacket =
prepareForForkIdCheck(mock(ForkIdManager.class), nodeKeys, sender, false);
final Packet enrPacket = prepareForForkIdCheck(nodeKeys, sender, false);

controller.onMessage(enrPacket, sender);

@@ -1546,10 +1519,7 @@ public class PeerDiscoveryControllerTest {

@NotNull
private Packet prepareForForkIdCheck(
final ForkIdManager forkIdManager,
final List<NodeKey> nodeKeys,
final DiscoveryPeer sender,
final boolean sendForkId) {
final List<NodeKey> nodeKeys, final DiscoveryPeer sender, final boolean sendForkId) {
final HashMap<PacketType, Bytes> packetTypeBytesHashMap = new HashMap<>();
final OutboundMessageHandler outboundMessageHandler =
(dp, pa) -> packetTypeBytesHashMap.put(pa.getType(), pa.getHash());
@@ -1573,7 +1543,6 @@ public class PeerDiscoveryControllerTest {
.outboundMessageHandler(outboundMessageHandler)
.enrCache(enrs)
.filterOnForkId(true)
.forkIdManager(forkIdManager)
.build();

// Mock the creation of the PING packet, so that we can control the hash, which gets validated
@@ -1720,7 +1689,6 @@ public class PeerDiscoveryControllerTest {
private Cache<Bytes, Packet> enrs =
CacheBuilder.newBuilder().maximumSize(50).expireAfterWrite(10, TimeUnit.SECONDS).build();
private boolean filterOnForkId = false;
private ForkIdManager forkIdManager;

public static ControllerBuilder create() {
return new ControllerBuilder();
@@ -1776,11 +1744,6 @@ public class PeerDiscoveryControllerTest {
return this;
}

public ControllerBuilder forkIdManager(final ForkIdManager forkIdManager) {
this.forkIdManager = forkIdManager;
return this;
}

PeerDiscoveryController build() {
checkNotNull(nodeKey);
if (localPeer == null) {
@@ -1803,7 +1766,6 @@ public class PeerDiscoveryControllerTest {
.peerPermissions(peerPermissions)
.metricsSystem(new NoOpMetricsSystem())
.cacheForEnrRequests(enrs)
.forkIdManager(forkIdManager == null ? mock(ForkIdManager.class) : forkIdManager)
.filterOnEnrForkId(filterOnForkId)
.rlpxAgent(mock(RlpxAgent.class))
.build());

@@ -24,7 +24,6 @@ import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.verify;

import org.hyperledger.besu.cryptoservices.NodeKey;
import org.hyperledger.besu.ethereum.forkid.ForkIdManager;
import org.hyperledger.besu.ethereum.p2p.discovery.DiscoveryPeer;
import org.hyperledger.besu.ethereum.p2p.discovery.PeerDiscoveryStatus;
import org.hyperledger.besu.ethereum.p2p.discovery.PeerDiscoveryTestHelper;
@@ -72,7 +71,6 @@ public class PeerDiscoveryTableRefreshTest {
.tableRefreshIntervalMs(0)
.metricsSystem(new NoOpMetricsSystem())
.rlpxAgent(mock(RlpxAgent.class))
.forkIdManager(mock(ForkIdManager.class))
.build());
controller.start();

Some files were not shown because too many files have changed in this diff.