mirror of https://github.com/vacp2p/linea-besu.git
synced 2026-01-08 20:47:59 -05:00

Merge branch 'main' into zkbesu

Changed files: CHANGELOG.md (31 lines changed), plus the source files shown in the diff below.
CHANGELOG.md

@@ -1,35 +1,56 @@
 # Changelog

-## 24.1.1-SNAPSHOT
+## 24.1.2-SNAPSHOT
+
+### Breaking Changes
+
+### Deprecations
+
+### Additions and Improvements
+
+### Bug fixes
+- Fix the way an advertised host configured with `--p2p-host` is treated when communicating with the originator of a PING packet [#6225](https://github.com/hyperledger/besu/pull/6225)
+
+### Download Links
+
+## 24.1.1

 ### Breaking Changes
 - New `EXECUTION_HALTED` error returned if there is an error executing or simulating a transaction, with the reason for execution being halted. Replaces the generic `INTERNAL_ERROR` return code in certain cases which some applications may be checking for [#6343](https://github.com/hyperledger/besu/pull/6343)
 - The Besu Docker images with `openjdk-latest` tags since 23.10.3 were incorrectly using UID 1001 instead of 1000 for the container's `besu` user. The user now uses 1000 again. Containers created from or migrated to images using UID 1001 will need to chown their persistent database files to UID 1000 [#6360](https://github.com/hyperledger/besu/pull/6360)
+- The deprecated `--privacy-onchain-groups-enabled` option has now been removed. Use the `--privacy-flexible-groups-enabled` option instead. [#6411](https://github.com/hyperledger/besu/pull/6411)
+- Requesting the Ethereum Node Record (ENR) to acquire the fork id from bonded peers is now enabled by default, so the following change has been made [#5628](https://github.com/hyperledger/besu/pull/5628):
+  - `--Xfilter-on-enr-fork-id` has been removed. To disable the feature use `--filter-on-enr-fork-id=false`.
+- The time that can be spent selecting transactions during block creation is now capped at 5 seconds for PoS and PoW networks, and at 75% of the block period specified in the genesis for PoA networks. This prevents a possible DoS when a single transaction takes too long to execute and keeps the block production rate stable. It could be a breaking change if an existing network has transactions that take longer to execute than the newly introduced limit; if such a network must keep processing those long-running transactions, the default value of `block-txs-selection-max-time` or `poa-block-txs-selection-max-time` needs to be tuned accordingly.

 ### Deprecations

 ### Additions and Improvements
 - Optimize RocksDB WAL files, allows for faster restart and a more linear disk space utilization [#6328](https://github.com/hyperledger/besu/pull/6328)
 - Disable transaction handling when the node is not in sync, to avoid unnecessary transaction validation work [#6302](https://github.com/hyperledger/besu/pull/6302)
 - Introduce TransactionEvaluationContext to pass data between transaction selectors and plugin, during block creation [#6381](https://github.com/hyperledger/besu/pull/6381)
 - Upgrade dependencies [#6377](https://github.com/hyperledger/besu/pull/6377)
 - Upgrade `com.fasterxml.jackson` dependencies [#6378](https://github.com/hyperledger/besu/pull/6378)
+- Upgrade Guava dependency [#6396](https://github.com/hyperledger/besu/pull/6396)
+- Upgrade Mockito [#6397](https://github.com/hyperledger/besu/pull/6397)
+- Upgrade `tech.pegasys.discovery:discovery` [#6414](https://github.com/hyperledger/besu/pull/6414)
+- Options to tune the max allowed time that can be spent selecting transactions during block creation are now stable [#6423](https://github.com/hyperledger/besu/pull/6423)

 ### Bug fixes
 - INTERNAL_ERROR from `eth_estimateGas` JSON/RPC calls [#6344](https://github.com/hyperledger/besu/issues/6344)
 - Fix Besu Docker images with `openjdk-latest` tags since 23.10.3 using UID 1001 instead of 1000 for the `besu` user [#6360](https://github.com/hyperledger/besu/pull/6360)
 - Fluent EVM API definition for Tangerine Whistle had incorrect code size validation configured [#6382](https://github.com/hyperledger/besu/pull/6382)
 - Correct mining beneficiary for Clique networks in TraceServiceImpl [#6390](https://github.com/hyperledger/besu/pull/6390)
+- Fix to gas limit delta calculations used in block production. Besu should now increment or decrement the block gas limit towards its target correctly (thanks @arbora) #6425

 ### Download Links


 ## 24.1.0

 ### Breaking Changes

 ### Deprecations
-- Forest pruning (`pruning-enabled` options) is deprecated and will be removed soon. To save disk space consider switching to Bonsai data storage format [#6230](https://github.com/hyperledger/besu/pull/6230)
+- Forest pruning (`pruning-enabled` option) is deprecated and will be removed soon. To save disk space consider switching to Bonsai data storage format [#6230](https://github.com/hyperledger/besu/pull/6230)

 ### Additions and Improvements
 - Add error messages on authentication failures with username and password [#6212](https://github.com/hyperledger/besu/pull/6212)
@@ -7,7 +7,7 @@ Welcome to the Besu repository! The following links are a set of guidelines for
 Having Github, Discord, and Linux Foundation accounts is necessary for obtaining support for Besu through the community channels, wiki and issue management.
 * If you want to raise an issue, you can do so [on the github issue tab](https://github.com/hyperledger/besu/issues).
 * Hyperledger Discord requires a [Discord account].
-* The Hyperlegder wiki also requires a [Linux Foundation (LF) account] in order to edit pages.
+* The Hyperledger wiki also requires a [Linux Foundation (LF) account] in order to edit pages.

 ### Useful support links
@@ -10,7 +10,7 @@
 "stateRoot" : "0x8d9115d9211932d4a3a1f068fb8fe262b0b2ab0bfd74eaece1a572efe6336677",
 "logsBloom" : "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
 "prevRandao" : "0xc13da06dc53836ca0766057413b9683eb9a8773bbb8fcc5691e41c25b56dda1d",
-"gasLimit" : "0x2ff3d8",
+"gasLimit" : "0x2ffbd2",
 "gasUsed" : "0xf618",
 "timestamp" : "0x1236",
 "extraData" : "0x",

@@ -70,7 +70,7 @@
 "amount" : "0x64"
 } ],
 "blockNumber" : "0x1",
-"blockHash" : "0xf1e35607932349e87f29e1053a4fb2666782e09fde21ded74c1f7e4a57d3fa2b",
+"blockHash" : "0x736bdddc2eca36fe8ed4ed515e5d295a08d7eaddc0d0fda2a35408127eb890d0",
 "receiptsRoot" : "0x9af165447e5b3193e9ac8389418648ee6d6cb1d37459fe65cfc245fc358721bd",
 "blobGasUsed" : "0x60000"
 },
BesuCommand.java

@@ -22,7 +22,6 @@ import static java.util.Collections.singletonList;
 import static org.hyperledger.besu.cli.DefaultCommandValues.getDefaultBesuDataPath;
 import static org.hyperledger.besu.cli.config.NetworkName.MAINNET;
 import static org.hyperledger.besu.cli.util.CommandLineUtils.DEPENDENCY_WARNING_MSG;
-import static org.hyperledger.besu.cli.util.CommandLineUtils.DEPRECATION_WARNING_MSG;
 import static org.hyperledger.besu.cli.util.CommandLineUtils.isOptionSet;
 import static org.hyperledger.besu.controller.BesuController.DATABASE_PATH;
 import static org.hyperledger.besu.ethereum.api.graphql.GraphQLConfiguration.DEFAULT_GRAPHQL_HTTP_PORT;

@@ -148,6 +147,7 @@ import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueSegmentIdentifier;
 import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueStorageProvider;
 import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueStorageProviderBuilder;
 import org.hyperledger.besu.ethereum.trie.forest.pruner.PrunerConfiguration;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
 import org.hyperledger.besu.evm.precompile.AbstractAltBnPrecompiledContract;
 import org.hyperledger.besu.evm.precompile.BigIntegerModularExponentiationPrecompiledContract;
 import org.hyperledger.besu.evm.precompile.KZGPointEvalPrecompiledContract;

@@ -953,13 +953,6 @@ public class BesuCommand implements DefaultCommandValues, Runnable {
     names = {"--privacy-flexible-groups-enabled"},
     description = "Enable flexible privacy groups (default: ${DEFAULT-VALUE})")
 private final Boolean isFlexiblePrivacyGroupsEnabled = false;

-@Option(
-    hidden = true,
-    names = {"--privacy-onchain-groups-enabled"},
-    description =
-        "!!DEPRECATED!! Use `--privacy-flexible-groups-enabled` instead. Enable flexible (onchain) privacy groups (default: ${DEFAULT-VALUE})")
-private final Boolean isOnchainPrivacyGroupsEnabled = false;
 }

 // Metrics Option Group

@@ -1716,8 +1709,7 @@ public class BesuCommand implements DefaultCommandValues, Runnable {
 }

 if (unstablePrivacyPluginOptions.isPrivacyPluginEnabled()
-    && (privacyOptionGroup.isFlexiblePrivacyGroupsEnabled
-        || privacyOptionGroup.isOnchainPrivacyGroupsEnabled)) {
+    && privacyOptionGroup.isFlexiblePrivacyGroupsEnabled) {
   throw new ParameterException(
       commandLine, "Privacy Plugin can not be used with flexible privacy groups");
 }

@@ -2056,16 +2048,16 @@ public class BesuCommand implements DefaultCommandValues, Runnable {
     "--security-module=" + DEFAULT_SECURITY_MODULE);
 }

-if (Boolean.TRUE.equals(privacyOptionGroup.isOnchainPrivacyGroupsEnabled)) {
-  logger.warn(
-      DEPRECATION_WARNING_MSG,
-      "--privacy-onchain-groups-enabled",
-      "--privacy-flexible-groups-enabled");
-}

 if (isPruningEnabled()) {
-  logger.warn(
-      "Forest pruning is deprecated and will be removed soon. To save disk space consider switching to Bonsai data storage format.");
+  if (dataStorageOptions
+      .toDomainObject()
+      .getDataStorageFormat()
+      .equals(DataStorageFormat.BONSAI)) {
+    logger.warn("Forest pruning is ignored with Bonsai data storage format.");
+  } else {
+    logger.warn(
+        "Forest pruning is deprecated and will be removed soon. To save disk space consider switching to Bonsai data storage format.");
+  }
 }
 }

@@ -2743,8 +2735,7 @@ public class BesuCommand implements DefaultCommandValues, Runnable {
 privacyParametersBuilder.setMultiTenancyEnabled(
     privacyOptionGroup.isPrivacyMultiTenancyEnabled);
 privacyParametersBuilder.setFlexiblePrivacyGroupsEnabled(
-    privacyOptionGroup.isFlexiblePrivacyGroupsEnabled
-        || privacyOptionGroup.isOnchainPrivacyGroupsEnabled);
+    privacyOptionGroup.isFlexiblePrivacyGroupsEnabled);
 privacyParametersBuilder.setPrivacyPluginEnabled(
     unstablePrivacyPluginOptions.isPrivacyPluginEnabled());

@@ -2917,17 +2908,15 @@ public class BesuCommand implements DefaultCommandValues, Runnable {
     ImmutableMiningParameters.builder().from(miningOptions.toDomainObject());
 final var actualGenesisOptions = getActualGenesisConfigOptions();
 if (actualGenesisOptions.isPoa()) {
-  miningParametersBuilder.unstable(
-      ImmutableMiningParameters.Unstable.builder()
-          .minBlockTime(getMinBlockTime(actualGenesisOptions))
-          .build());
+  miningParametersBuilder.genesisBlockPeriodSeconds(
+      getGenesisBlockPeriodSeconds(actualGenesisOptions));
 }
 miningParameters = miningParametersBuilder.build();
 }
 return miningParameters;
 }

-private int getMinBlockTime(final GenesisConfigOptions genesisConfigOptions) {
+private int getGenesisBlockPeriodSeconds(final GenesisConfigOptions genesisConfigOptions) {
 if (genesisConfigOptions.isClique()) {
 return genesisConfigOptions.getCliqueConfigOptions().getBlockPeriodSeconds();
 }
PositiveNumberConverter.java (new file)

@@ -0,0 +1,33 @@
+/*
+ * Copyright Hyperledger Besu Contributors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+ * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations under the License.
+ *
+ * SPDX-License-Identifier: Apache-2.0
+ */
+package org.hyperledger.besu.cli.converter;
+
+import org.hyperledger.besu.cli.converter.exception.PercentageConversionException;
+import org.hyperledger.besu.util.number.PositiveNumber;
+
+import picocli.CommandLine;
+
+/** The PositiveNumber Cli type converter. */
+public class PositiveNumberConverter implements CommandLine.ITypeConverter<PositiveNumber> {
+
+  @Override
+  public PositiveNumber convert(final String value) throws PercentageConversionException {
+    try {
+      return PositiveNumber.fromString(value);
+    } catch (NullPointerException | IllegalArgumentException e) {
+      throw new PercentageConversionException(value);
+    }
+  }
+}
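A minimal, hypothetical sketch of how a picocli command could use the converter introduced above. The `Demo` class and its option wiring are invented for illustration (the option name simply mirrors the one added to `MiningOptions` later in this diff); it is not part of the commit.

```java
import org.hyperledger.besu.cli.converter.PositiveNumberConverter;
import org.hyperledger.besu.util.number.PositiveNumber;

import picocli.CommandLine;

// Illustrative only: a stand-alone command that parses a PositiveNumber option.
class Demo implements Runnable {

  @CommandLine.Option(
      names = "--block-txs-selection-max-time",
      converter = PositiveNumberConverter.class,
      description = "Max transaction selection time in milliseconds")
  private PositiveNumber maxTime;

  @Override
  public void run() {
    // If we get here, PositiveNumberConverter has already accepted the value.
    System.out.println("Parsed max selection time option: " + maxTime);
  }

  public static void main(final String[] args) {
    // "1500" parses successfully; "0", a negative number, or "abc" would instead
    // be rejected by PositiveNumber.fromString and surface as a parameter error.
    new CommandLine(new Demo()).execute("--block-txs-selection-max-time", "1500");
  }
}
```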
PositiveNumberConversionException.java (new file)

@@ -0,0 +1,30 @@
+/*
+ * Copyright Hyperledger Besu Contributors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+ * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations under the License.
+ *
+ * SPDX-License-Identifier: Apache-2.0
+ */
+package org.hyperledger.besu.cli.converter.exception;
+
+import static java.lang.String.format;
+
+/** The custom PositiveNumber conversion exception. */
+public final class PositiveNumberConversionException extends Exception {
+
+  /**
+   * Instantiates a new PositiveNumber conversion exception.
+   *
+   * @param value the invalid value to add in exception message
+   */
+  public PositiveNumberConversionException(final String value) {
+    super(format("Invalid value: %s, should be a positive number >0.", value));
+  }
+}
MiningOptions.java

@@ -16,20 +16,20 @@ package org.hyperledger.besu.cli.options;

 import static java.util.Arrays.asList;
 import static java.util.Collections.singletonList;
+import static org.hyperledger.besu.ethereum.core.MiningParameters.DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;
+import static org.hyperledger.besu.ethereum.core.MiningParameters.DEFAULT_POA_BLOCK_TXS_SELECTION_MAX_TIME;
 import static org.hyperledger.besu.ethereum.core.MiningParameters.MutableInitValues.DEFAULT_EXTRA_DATA;
 import static org.hyperledger.besu.ethereum.core.MiningParameters.MutableInitValues.DEFAULT_MIN_BLOCK_OCCUPANCY_RATIO;
 import static org.hyperledger.besu.ethereum.core.MiningParameters.MutableInitValues.DEFAULT_MIN_PRIORITY_FEE_PER_GAS;
 import static org.hyperledger.besu.ethereum.core.MiningParameters.MutableInitValues.DEFAULT_MIN_TRANSACTION_GAS_PRICE;
 import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_MAX_OMMERS_DEPTH;
-import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;
-import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_POA_BLOCK_TXS_SELECTION_MAX_TIME;
 import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_POS_BLOCK_CREATION_MAX_TIME;
 import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_POS_BLOCK_CREATION_REPETITION_MIN_DURATION;
 import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_POW_JOB_TTL;
 import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_REMOTE_SEALERS_LIMIT;
 import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_REMOTE_SEALERS_TTL;

-import org.hyperledger.besu.cli.converter.PercentageConverter;
+import org.hyperledger.besu.cli.converter.PositiveNumberConverter;
 import org.hyperledger.besu.cli.util.CommandLineUtils;
 import org.hyperledger.besu.config.GenesisConfigOptions;
 import org.hyperledger.besu.datatypes.Address;

@@ -37,7 +37,7 @@ import org.hyperledger.besu.datatypes.Wei;
 import org.hyperledger.besu.ethereum.core.ImmutableMiningParameters;
 import org.hyperledger.besu.ethereum.core.ImmutableMiningParameters.MutableInitValues;
 import org.hyperledger.besu.ethereum.core.MiningParameters;
-import org.hyperledger.besu.util.number.Percentage;
+import org.hyperledger.besu.util.number.PositiveNumber;

 import java.util.List;

@@ -115,6 +115,24 @@ public class MiningOptions implements CLIOptions<MiningParameters> {
         + " If set, each block's gas limit will approach this setting over time.")
 private Long targetGasLimit = null;

+@Option(
+    names = {"--block-txs-selection-max-time"},
+    converter = PositiveNumberConverter.class,
+    description =
+        "Specifies the maximum time, in milliseconds, that could be spent selecting transactions to be included in the block."
+            + " Not compatible with PoA networks, see poa-block-txs-selection-max-time. (default: ${DEFAULT-VALUE})")
+private PositiveNumber nonPoaBlockTxsSelectionMaxTime =
+    DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;
+
+@Option(
+    names = {"--poa-block-txs-selection-max-time"},
+    converter = PositiveNumberConverter.class,
+    description =
+        "Specifies the maximum time that could be spent selecting transactions to be included in the block, as a percentage of the fixed block time of the PoA network."
+            + " To be only used on PoA networks, for other networks see block-txs-selection-max-time."
+            + " (default: ${DEFAULT-VALUE})")
+private PositiveNumber poaBlockTxsSelectionMaxTime = DEFAULT_POA_BLOCK_TXS_SELECTION_MAX_TIME;
+
 @CommandLine.ArgGroup(validate = false)
 private final Unstable unstableOptions = new Unstable();

@@ -168,25 +186,6 @@ public class MiningOptions implements CLIOptions<MiningParameters> {
         + " then it waits before next repetition. Must be positive and ≤ 2000 (default: ${DEFAULT-VALUE} milliseconds)")
 private Long posBlockCreationRepetitionMinDuration =
     DEFAULT_POS_BLOCK_CREATION_REPETITION_MIN_DURATION;

-@CommandLine.Option(
-    hidden = true,
-    names = {"--Xblock-txs-selection-max-time"},
-    description =
-        "Specifies the maximum time, in milliseconds, that could be spent selecting transactions to be included in the block."
-            + " Not compatible with PoA networks, see Xpoa-block-txs-selection-max-time."
-            + " Must be positive and ≤ (default: ${DEFAULT-VALUE})")
-private Long nonPoaBlockTxsSelectionMaxTime = DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;
-
-@CommandLine.Option(
-    hidden = true,
-    names = {"--Xpoa-block-txs-selection-max-time"},
-    converter = PercentageConverter.class,
-    description =
-        "Specifies the maximum time that could be spent selecting transactions to be included in the block, as a percentage of the fixed block time of the PoA network."
-            + " To be only used on PoA networks, for other networks see Xblock-txs-selection-max-time."
-            + " (default: ${DEFAULT-VALUE})")
-private Percentage poaBlockTxsSelectionMaxTime = DEFAULT_POA_BLOCK_TXS_SELECTION_MAX_TIME;
 }

 private MiningOptions() {}

@@ -270,26 +269,17 @@ public class MiningOptions implements CLIOptions<MiningParameters> {
 if (genesisConfigOptions.isPoa()) {
   CommandLineUtils.failIfOptionDoesntMeetRequirement(
       commandLine,
-      "--Xblock-txs-selection-max-time can't be used with PoA networks,"
-          + " see Xpoa-block-txs-selection-max-time instead",
+      "--block-txs-selection-max-time can't be used with PoA networks,"
+          + " see poa-block-txs-selection-max-time instead",
       false,
-      singletonList("--Xblock-txs-selection-max-time"));
+      singletonList("--block-txs-selection-max-time"));
 } else {
   CommandLineUtils.failIfOptionDoesntMeetRequirement(
       commandLine,
-      "--Xpoa-block-txs-selection-max-time can be only used with PoA networks,"
-          + " see --Xblock-txs-selection-max-time instead",
+      "--poa-block-txs-selection-max-time can be only used with PoA networks,"
+          + " see --block-txs-selection-max-time instead",
       false,
-      singletonList("--Xpoa-block-txs-selection-max-time"));
-
-  if (unstableOptions.nonPoaBlockTxsSelectionMaxTime <= 0
-      || unstableOptions.nonPoaBlockTxsSelectionMaxTime
-          > DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME) {
-    throw new ParameterException(
-        commandLine,
-        "--Xblock-txs-selection-max-time must be positive and ≤ "
-            + DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME);
-  }
+      singletonList("--poa-block-txs-selection-max-time"));
 }
 }

@@ -303,6 +293,10 @@ public class MiningOptions implements CLIOptions<MiningParameters> {
 miningOptions.minTransactionGasPrice = miningParameters.getMinTransactionGasPrice();
 miningOptions.minPriorityFeePerGas = miningParameters.getMinPriorityFeePerGas();
 miningOptions.minBlockOccupancyRatio = miningParameters.getMinBlockOccupancyRatio();
+miningOptions.nonPoaBlockTxsSelectionMaxTime =
+    miningParameters.getNonPoaBlockTxsSelectionMaxTime();
+miningOptions.poaBlockTxsSelectionMaxTime = miningParameters.getPoaBlockTxsSelectionMaxTime();
+
 miningOptions.unstableOptions.remoteSealersLimit =
     miningParameters.getUnstable().getRemoteSealersLimit();
 miningOptions.unstableOptions.remoteSealersTimeToLive =

@@ -317,10 +311,6 @@ public class MiningOptions implements CLIOptions<MiningParameters> {
     miningParameters.getUnstable().getPosBlockCreationMaxTime();
 miningOptions.unstableOptions.posBlockCreationRepetitionMinDuration =
     miningParameters.getUnstable().getPosBlockCreationRepetitionMinDuration();
-miningOptions.unstableOptions.nonPoaBlockTxsSelectionMaxTime =
-    miningParameters.getUnstable().getBlockTxsSelectionMaxTime();
-miningOptions.unstableOptions.poaBlockTxsSelectionMaxTime =
-    miningParameters.getUnstable().getPoaBlockTxsSelectionMaxTime();

 miningParameters.getCoinbase().ifPresent(coinbase -> miningOptions.coinbase = coinbase);
 miningParameters.getTargetGasLimit().ifPresent(tgl -> miningOptions.targetGasLimit = tgl);

@@ -350,6 +340,8 @@ public class MiningOptions implements CLIOptions<MiningParameters> {
     .isStratumMiningEnabled(iStratumMiningEnabled)
     .stratumNetworkInterface(stratumNetworkInterface)
     .stratumPort(stratumPort)
+    .nonPoaBlockTxsSelectionMaxTime(nonPoaBlockTxsSelectionMaxTime)
+    .poaBlockTxsSelectionMaxTime(poaBlockTxsSelectionMaxTime)
     .unstable(
         ImmutableMiningParameters.Unstable.builder()
             .remoteSealersLimit(unstableOptions.remoteSealersLimit)

@@ -360,8 +352,6 @@ public class MiningOptions implements CLIOptions<MiningParameters> {
             .posBlockCreationMaxTime(unstableOptions.posBlockCreationMaxTime)
             .posBlockCreationRepetitionMinDuration(
                 unstableOptions.posBlockCreationRepetitionMinDuration)
-            .nonPoaBlockTxsSelectionMaxTime(unstableOptions.nonPoaBlockTxsSelectionMaxTime)
-            .poaBlockTxsSelectionMaxTime(unstableOptions.poaBlockTxsSelectionMaxTime)
             .build());

 return miningParametersBuilder.build();
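The changelog describes the defaults behind these two promoted options as a 5-second transaction-selection cap for PoS and PoW networks and 75% of the genesis block period for PoA networks. The sketch below only illustrates that arithmetic; the class, method, and parameter names are invented for this example and are not Besu API.

```java
// Illustrative only: how an effective transaction-selection deadline could be derived
// from the two options added above. Not Besu code.
final class SelectionTimeSketch {

  /**
   * @param isPoa whether the network runs a PoA consensus
   * @param blockPeriodSeconds the PoA block period from the genesis file
   * @param nonPoaMaxTimeMillis value of --block-txs-selection-max-time (milliseconds)
   * @param poaMaxTimePercent value of --poa-block-txs-selection-max-time (percentage)
   * @return the maximum time, in milliseconds, allowed for transaction selection
   */
  static long selectionDeadlineMillis(
      final boolean isPoa,
      final int blockPeriodSeconds,
      final long nonPoaMaxTimeMillis,
      final int poaMaxTimePercent) {
    if (isPoa) {
      // e.g. a 2 s block period at the default 75% -> 1500 ms
      return blockPeriodSeconds * 1000L * poaMaxTimePercent / 100;
    }
    return nonPoaMaxTimeMillis; // default 5000 ms for PoS/PoW
  }

  public static void main(final String[] args) {
    System.out.println(selectionDeadlineMillis(true, 2, 5000, 75));  // 1500
    System.out.println(selectionDeadlineMillis(false, 0, 5000, 75)); // 5000
  }
}
```

For instance, a Clique network with a 2-second block period and the default 75% yields a 1500 ms selection window.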
DataStorageOptions.java

@@ -62,23 +62,28 @@ public class DataStorageOptions implements CLIOptions<DataStorageConfiguration>
 private final DataStorageOptions.Unstable unstableOptions = new Unstable();

 static class Unstable {
+  private static final String BONSAI_LIMIT_TRIE_LOGS_ENABLED =
+      "--Xbonsai-limit-trie-logs-enabled";
+  private static final String BONSAI_TRIE_LOGS_RETENTION_THRESHOLD =
+      "--Xbonsai-trie-logs-retention-threshold";
+  private static final String BONSAI_TRIE_LOG_PRUNING_LIMIT = "--Xbonsai-trie-logs-pruning-limit";

 @CommandLine.Option(
     hidden = true,
-    names = {"--Xbonsai-trie-log-pruning-enabled"},
+    names = {BONSAI_LIMIT_TRIE_LOGS_ENABLED},
     description = "Enable trie log pruning. (default: ${DEFAULT-VALUE})")
 private boolean bonsaiTrieLogPruningEnabled = DEFAULT_BONSAI_TRIE_LOG_PRUNING_ENABLED;

 @CommandLine.Option(
     hidden = true,
-    names = {"--Xbonsai-trie-log-retention-threshold"},
+    names = {BONSAI_TRIE_LOGS_RETENTION_THRESHOLD},
     description =
         "The number of blocks for which to retain trie logs. (default: ${DEFAULT-VALUE})")
 private long bonsaiTrieLogRetentionThreshold = DEFAULT_BONSAI_TRIE_LOG_RETENTION_THRESHOLD;

 @CommandLine.Option(
     hidden = true,
-    names = {"--Xbonsai-trie-log-pruning-limit"},
+    names = {BONSAI_TRIE_LOG_PRUNING_LIMIT},
     description =
         "The max number of blocks to load and prune trie logs for at startup. (default: ${DEFAULT-VALUE})")
 private int bonsaiTrieLogPruningLimit = DEFAULT_BONSAI_TRIE_LOG_PRUNING_LIMIT;
NetworkingOptions.java

@@ -37,7 +37,7 @@ public class NetworkingOptions implements CLIOptions<NetworkingConfiguration> {
 private final String DNS_DISCOVERY_SERVER_OVERRIDE_FLAG = "--Xp2p-dns-discovery-server";
 private final String DISCOVERY_PROTOCOL_V5_ENABLED = "--Xv5-discovery-enabled";
 /** The constant FILTER_ON_ENR_FORK_ID. */
-public static final String FILTER_ON_ENR_FORK_ID = "--Xfilter-on-enr-fork-id";
+public static final String FILTER_ON_ENR_FORK_ID = "--filter-on-enr-fork-id";

 @CommandLine.Option(
     names = INITIATE_CONNECTIONS_FREQUENCY_FLAG,

@@ -76,9 +76,9 @@ public class NetworkingOptions implements CLIOptions<NetworkingConfiguration> {
 @CommandLine.Option(
     names = FILTER_ON_ENR_FORK_ID,
     hidden = true,
-    defaultValue = "false",
+    defaultValue = "true",
     description = "Whether to enable filtering of peers based on the ENR field ForkId)")
-private final Boolean filterOnEnrForkId = false;
+private final Boolean filterOnEnrForkId = NetworkingConfiguration.DEFAULT_FILTER_ON_ENR_FORK_ID;

 @CommandLine.Option(
     hidden = true,
TrieLogHelper.java

@@ -22,7 +22,11 @@ import org.hyperledger.besu.datatypes.Hash;
 import org.hyperledger.besu.ethereum.chain.Blockchain;
 import org.hyperledger.besu.ethereum.chain.MutableBlockchain;
 import org.hyperledger.besu.ethereum.core.BlockHeader;
+import org.hyperledger.besu.ethereum.rlp.BytesValueRLPInput;
+import org.hyperledger.besu.ethereum.rlp.RLP;
 import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
+import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogFactoryImpl;
+import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogLayer;
 import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;

 import java.io.File;

@@ -32,6 +36,7 @@ import java.io.IOException;
 import java.io.ObjectInputStream;
 import java.io.ObjectOutputStream;
 import java.io.PrintWriter;
+import java.nio.file.Files;
 import java.nio.file.Path;
 import java.util.ArrayList;
 import java.util.IdentityHashMap;

@@ -39,6 +44,7 @@ import java.util.List;
 import java.util.Optional;
 import java.util.concurrent.atomic.AtomicInteger;

+import org.apache.tuweni.bytes.Bytes;
 import org.apache.tuweni.bytes.Bytes32;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;

@@ -97,16 +103,15 @@ public class TrieLogHelper {
     final String batchFileNameBase) {

 for (long batchNumber = 1; batchNumber <= numberOfBatches; batchNumber++) {
+  final String batchFileName = batchFileNameBase + "-" + batchNumber;
   final long firstBlockOfBatch = chainHeight - ((batchNumber - 1) * BATCH_SIZE);

   final long lastBlockOfBatch =
       Math.max(chainHeight - (batchNumber * BATCH_SIZE), lastBlockNumberToRetainTrieLogsFor);

   final List<Hash> trieLogKeys =
       getTrieLogKeysForBlocks(blockchain, firstBlockOfBatch, lastBlockOfBatch);

-  saveTrieLogBatches(batchFileNameBase, rootWorldStateStorage, batchNumber, trieLogKeys);
+  LOG.info("Saving trie logs to retain in file (batch {})...", batchNumber);
+  saveTrieLogBatches(batchFileName, rootWorldStateStorage, trieLogKeys);
 }

 LOG.info("Clear trie logs...");

@@ -118,15 +123,12 @@ public class TrieLogHelper {
 }

 private static void saveTrieLogBatches(
-    final String batchFileNameBase,
+    final String batchFileName,
     final BonsaiWorldStateKeyValueStorage rootWorldStateStorage,
-    final long batchNumber,
     final List<Hash> trieLogKeys) {

-  LOG.info("Saving trie logs to retain in file (batch {})...", batchNumber);
-
   try {
-    saveTrieLogsInFile(trieLogKeys, rootWorldStateStorage, batchNumber, batchFileNameBase);
+    saveTrieLogsInFile(trieLogKeys, rootWorldStateStorage, batchFileName);
   } catch (IOException e) {
     LOG.error("Error saving trie logs to file: {}", e.getMessage());
     throw new RuntimeException(e);

@@ -210,9 +212,8 @@ public class TrieLogHelper {
     final String batchFileNameBase)
     throws IOException {
 // process in chunk to avoid OOM
+final String batchFileName = batchFileNameBase + "-" + batchNumber;
-IdentityHashMap<byte[], byte[]> trieLogsToRetain =
-    readTrieLogsFromFile(batchFileNameBase, batchNumber);
+IdentityHashMap<byte[], byte[]> trieLogsToRetain = readTrieLogsFromFile(batchFileName);
 final int chunkSize = ROCKSDB_MAX_INSERTS_PER_TRANSACTION;
 List<byte[]> keys = new ArrayList<>(trieLogsToRetain.keySet());

@@ -265,11 +266,10 @@ public class TrieLogHelper {
 private static void saveTrieLogsInFile(
     final List<Hash> trieLogsKeys,
     final BonsaiWorldStateKeyValueStorage rootWorldStateStorage,
-    final long batchNumber,
-    final String batchFileNameBase)
+    final String batchFileName)
     throws IOException {

-File file = new File(batchFileNameBase + "-" + batchNumber);
+File file = new File(batchFileName);
 if (file.exists()) {
   LOG.error("File already exists, skipping file creation");
   return;

@@ -285,17 +285,14 @@ public class TrieLogHelper {
 }

 @SuppressWarnings("unchecked")
-private static IdentityHashMap<byte[], byte[]> readTrieLogsFromFile(
-    final String batchFileNameBase, final long batchNumber) {
+static IdentityHashMap<byte[], byte[]> readTrieLogsFromFile(final String batchFileName) {

 IdentityHashMap<byte[], byte[]> trieLogs;
-try (FileInputStream fis = new FileInputStream(batchFileNameBase + "-" + batchNumber);
+try (FileInputStream fis = new FileInputStream(batchFileName);
     ObjectInputStream ois = new ObjectInputStream(fis)) {

   trieLogs = (IdentityHashMap<byte[], byte[]>) ois.readObject();

 } catch (IOException | ClassNotFoundException e) {

   LOG.error(e.getMessage());
   throw new RuntimeException(e);
 }

@@ -303,6 +300,52 @@ public class TrieLogHelper {
 return trieLogs;
 }

+private static void saveTrieLogsAsRlpInFile(
+    final List<Hash> trieLogsKeys,
+    final BonsaiWorldStateKeyValueStorage rootWorldStateStorage,
+    final String batchFileName) {
+  File file = new File(batchFileName);
+  if (file.exists()) {
+    LOG.error("File already exists, skipping file creation");
+    return;
+  }
+
+  final IdentityHashMap<byte[], byte[]> trieLogs =
+      getTrieLogs(trieLogsKeys, rootWorldStateStorage);
+  final Bytes rlp =
+      RLP.encode(
+          o ->
+              o.writeList(
+                  trieLogs.entrySet(), (val, out) -> out.writeRaw(Bytes.wrap(val.getValue()))));
+  try {
+    Files.write(file.toPath(), rlp.toArrayUnsafe());
+  } catch (IOException e) {
+    LOG.error(e.getMessage());
+    throw new RuntimeException(e);
+  }
+}
+
+static IdentityHashMap<byte[], byte[]> readTrieLogsAsRlpFromFile(final String batchFileName) {
+  try {
+    final Bytes file = Bytes.wrap(Files.readAllBytes(Path.of(batchFileName)));
+    final BytesValueRLPInput input = new BytesValueRLPInput(file, false);
+
+    input.enterList();
+    final IdentityHashMap<byte[], byte[]> trieLogs = new IdentityHashMap<>();
+    while (!input.isEndOfCurrentList()) {
+      final Bytes trieLogBytes = input.currentListAsBytes();
+      TrieLogLayer trieLogLayer =
+          TrieLogFactoryImpl.readFrom(new BytesValueRLPInput(Bytes.wrap(trieLogBytes), false));
+      trieLogs.put(trieLogLayer.getBlockHash().toArrayUnsafe(), trieLogBytes.toArrayUnsafe());
+    }
+    input.leaveList();
+
+    return trieLogs;
+  } catch (IOException e) {
+    throw new RuntimeException(e);
+  }
+}
+
 private static IdentityHashMap<byte[], byte[]> getTrieLogs(
     final List<Hash> trieLogKeys, final BonsaiWorldStateKeyValueStorage rootWorldStateStorage) {
 IdentityHashMap<byte[], byte[]> trieLogsToRetain = new IdentityHashMap<>();

@@ -357,5 +400,25 @@ public class TrieLogHelper {
     count.total, count.canonicalCount, count.forkCount, count.orphanCount);
 }

+static void importTrieLog(
+    final BonsaiWorldStateKeyValueStorage rootWorldStateStorage, final Path trieLogFilePath) {
+
+  var trieLog = readTrieLogsAsRlpFromFile(trieLogFilePath.toString());
+
+  var updater = rootWorldStateStorage.updater();
+  trieLog.forEach((key, value) -> updater.getTrieLogStorageTransaction().put(key, value));
+  updater.getTrieLogStorageTransaction().commit();
+}
+
+static void exportTrieLog(
+    final BonsaiWorldStateKeyValueStorage rootWorldStateStorage,
+    final List<Hash> trieLogHash,
+    final Path directoryPath)
+    throws IOException {
+  final String trieLogFile = directoryPath.toString();
+
+  saveTrieLogsAsRlpInFile(trieLogHash, rootWorldStateStorage, trieLogFile);
+}
+
 record TrieLogCount(int total, int canonicalCount, int forkCount, int orphanCount) {}
 }
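For orientation, the export/import helpers above serialize trie logs as a single RLP list whose items are raw, pre-serialized payloads. The sketch below reuses only the calls visible in the diff (`RLP.encode`, `writeList`/`writeRaw`, `BytesValueRLPInput`, `enterList`/`currentListAsBytes`/`leaveList`); the fake payloads and the use of `RLPOutput.writeBytes` to build them are assumptions made for this illustration, not part of the commit.

```java
import org.hyperledger.besu.ethereum.rlp.BytesValueRLPInput;
import org.hyperledger.besu.ethereum.rlp.RLP;

import java.util.ArrayList;
import java.util.List;

import org.apache.tuweni.bytes.Bytes;

// Illustrative only: the RLP round-trip pattern behind saveTrieLogsAsRlpInFile and
// readTrieLogsAsRlpFromFile, using small fake payloads instead of real serialized trie logs.
final class TrieLogRlpRoundTrip {

  public static void main(final String[] args) {
    // Each real trie log is itself an RLP list when serialized; fake that shape here.
    final List<Bytes> serializedLogs =
        List.of(
            RLP.encode(o -> o.writeList(List.of(Bytes.of(0x01)), (b, w) -> w.writeBytes(b))),
            RLP.encode(o -> o.writeList(List.of(Bytes.of(0x02)), (b, w) -> w.writeBytes(b))));

    // Write side: wrap the pre-serialized blobs in one outer RLP list, verbatim (writeRaw).
    final Bytes fileBlob =
        RLP.encode(out -> out.writeList(serializedLogs, (log, w) -> w.writeRaw(log)));

    // Read side: walk the outer list and pull each inner list back out as raw bytes.
    final BytesValueRLPInput input = new BytesValueRLPInput(fileBlob, false);
    input.enterList();
    final List<Bytes> decoded = new ArrayList<>();
    while (!input.isEndOfCurrentList()) {
      decoded.add(input.currentListAsBytes());
    }
    input.leaveList();

    System.out.println("decoded " + decoded.size() + " trie-log payloads");
  }
}
```

In the real helpers each decoded item is re-parsed by `TrieLogFactoryImpl.readFrom`, which is why the items must themselves be valid RLP; the dummies here only mimic that outer-list shape.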
|||||||
@@ -19,6 +19,7 @@ import static com.google.common.base.Preconditions.checkNotNull;
|
|||||||
|
|
||||||
import org.hyperledger.besu.cli.util.VersionProvider;
|
import org.hyperledger.besu.cli.util.VersionProvider;
|
||||||
import org.hyperledger.besu.controller.BesuController;
|
import org.hyperledger.besu.controller.BesuController;
|
||||||
|
import org.hyperledger.besu.datatypes.Hash;
|
||||||
import org.hyperledger.besu.ethereum.chain.MutableBlockchain;
|
import org.hyperledger.besu.ethereum.chain.MutableBlockchain;
|
||||||
import org.hyperledger.besu.ethereum.storage.StorageProvider;
|
import org.hyperledger.besu.ethereum.storage.StorageProvider;
|
||||||
import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
|
import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
|
||||||
@@ -26,9 +27,11 @@ import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogPruner;
|
|||||||
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
|
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
|
||||||
import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
|
import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
|
||||||
|
|
||||||
|
import java.io.IOException;
|
||||||
import java.io.PrintWriter;
|
import java.io.PrintWriter;
|
||||||
import java.nio.file.Path;
|
import java.nio.file.Path;
|
||||||
import java.nio.file.Paths;
|
import java.nio.file.Paths;
|
||||||
|
import java.util.List;
|
||||||
|
|
||||||
import org.apache.logging.log4j.Level;
|
import org.apache.logging.log4j.Level;
|
||||||
import org.apache.logging.log4j.core.config.Configurator;
|
import org.apache.logging.log4j.core.config.Configurator;
|
||||||
@@ -43,7 +46,12 @@ import picocli.CommandLine.ParentCommand;
|
|||||||
description = "Manipulate trie logs",
|
description = "Manipulate trie logs",
|
||||||
mixinStandardHelpOptions = true,
|
mixinStandardHelpOptions = true,
|
||||||
     versionProvider = VersionProvider.class,
-    subcommands = {TrieLogSubCommand.CountTrieLog.class, TrieLogSubCommand.PruneTrieLog.class})
+    subcommands = {
+      TrieLogSubCommand.CountTrieLog.class,
+      TrieLogSubCommand.PruneTrieLog.class,
+      TrieLogSubCommand.ExportTrieLog.class,
+      TrieLogSubCommand.ImportTrieLog.class
+    })
 public class TrieLogSubCommand implements Runnable {
 
   @SuppressWarnings("UnusedVariable")
@@ -123,6 +131,102 @@ public class TrieLogSubCommand implements Runnable {
     }
   }
 
+  @Command(
+      name = "export",
+      description = "This command exports the trie log of a determined block to a binary file",
+      mixinStandardHelpOptions = true,
+      versionProvider = VersionProvider.class)
+  static class ExportTrieLog implements Runnable {
+
+    @SuppressWarnings("unused")
+    @ParentCommand
+    private TrieLogSubCommand parentCommand;
+
+    @SuppressWarnings("unused")
+    @CommandLine.Spec
+    private CommandLine.Model.CommandSpec spec; // Picocli injects reference to command spec
+
+    @CommandLine.Option(
+        names = "--trie-log-block-hash",
+        description =
+            "Comma separated list of hashes from the blocks you want to export the trie logs of",
+        split = " {0,1}, {0,1}",
+        arity = "1..*")
+    private List<String> trieLogBlockHashList;
+
+    @CommandLine.Option(
+        names = "--trie-log-file-path",
+        description = "The file you want to export the trie logs to",
+        arity = "1..1")
+    private Path trieLogFilePath = null;
+
+    @Override
+    public void run() {
+      if (trieLogFilePath == null) {
+        trieLogFilePath =
+            Paths.get(
+                TrieLogSubCommand.parentCommand
+                    .parentCommand
+                    .dataDir()
+                    .resolve("trie-logs.bin")
+                    .toAbsolutePath()
+                    .toString());
+      }
+
+      TrieLogContext context = getTrieLogContext();
+
+      final List<Hash> listOfBlockHashes =
+          trieLogBlockHashList.stream().map(Hash::fromHexString).toList();
+
+      try {
+        TrieLogHelper.exportTrieLog(
+            context.rootWorldStateStorage(), listOfBlockHashes, trieLogFilePath);
+      } catch (IOException e) {
+        throw new RuntimeException(e);
+      }
+    }
+  }
+
+  @Command(
+      name = "import",
+      description = "This command imports a trie log exported by another besu node",
+      mixinStandardHelpOptions = true,
+      versionProvider = VersionProvider.class)
+  static class ImportTrieLog implements Runnable {
+
+    @SuppressWarnings("unused")
+    @ParentCommand
+    private TrieLogSubCommand parentCommand;
+
+    @SuppressWarnings("unused")
+    @CommandLine.Spec
+    private CommandLine.Model.CommandSpec spec; // Picocli injects reference to command spec
+
+    @CommandLine.Option(
+        names = "--trie-log-file-path",
+        description = "The file you want to import the trie logs from",
+        arity = "1..1")
+    private Path trieLogFilePath = null;
+
+    @Override
+    public void run() {
+      if (trieLogFilePath == null) {
+        trieLogFilePath =
+            Paths.get(
+                TrieLogSubCommand.parentCommand
+                    .parentCommand
+                    .dataDir()
+                    .resolve("trie-logs.bin")
+                    .toAbsolutePath()
+                    .toString());
+      }
+
+      TrieLogContext context = getTrieLogContext();
+
+      TrieLogHelper.importTrieLog(context.rootWorldStateStorage(), trieLogFilePath);
+    }
+  }
+
   record TrieLogContext(
       DataStorageConfiguration config,
       BonsaiWorldStateKeyValueStorage rootWorldStateStorage,
@@ -139,8 +243,7 @@ public class TrieLogSubCommand implements Runnable {
 
     final StorageProvider storageProvider = besuController.getStorageProvider();
     final BonsaiWorldStateKeyValueStorage rootWorldStateStorage =
-        (BonsaiWorldStateKeyValueStorage)
-            storageProvider.createWorldStateStorage(DataStorageFormat.BONSAI);
+        (BonsaiWorldStateKeyValueStorage) storageProvider.createWorldStateStorage(config);
     final MutableBlockchain blockchain = besuController.getProtocolContext().getBlockchain();
     return new TrieLogContext(config, rootWorldStateStorage, blockchain);
   }
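The new `--trie-log-block-hash` option above relies on picocli's `split` regex together with `arity = "1..*"` to accept a comma-separated list of hashes. A minimal sketch of how that parsing behaves, using picocli directly and outside Besu (the `Opts`/`SplitDemo` classes and the sample values are illustrative only, not part of the commit):

    import java.util.List;
    import picocli.CommandLine;
    import picocli.CommandLine.Option;

    class Opts {
      // Same split regex as the export subcommand: a comma with optional single spaces around it.
      @Option(names = "--trie-log-block-hash", split = " {0,1}, {0,1}", arity = "1..*")
      List<String> hashes;
    }

    public class SplitDemo {
      public static void main(String[] args) {
        Opts opts =
            CommandLine.populateCommand(new Opts(), "--trie-log-block-hash", "0x01, 0x02,0x03");
        // Prints [0x01, 0x02, 0x03]; in the subcommand each element is then
        // converted with Hash::fromHexString before the export runs.
        System.out.println(opts.hashes);
      }
    }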
@@ -55,7 +55,6 @@ public class ConfigOptionSearchAndRunHandler extends CommandLine.RunLast {
   public List<Object> handle(final ParseResult parseResult) throws ParameterException {
     final CommandLine commandLine = parseResult.commandSpec().commandLine();
     final Optional<File> configFile = findConfigFile(parseResult, commandLine);
-    validatePrivacyOptions(parseResult, commandLine);
     commandLine.setDefaultValueProvider(createDefaultValueProvider(commandLine, configFile));
     commandLine.setExecutionStrategy(resultHandler);
     commandLine.setParameterExceptionHandler(parameterExceptionHandler);
@@ -64,16 +63,6 @@ public class ConfigOptionSearchAndRunHandler extends CommandLine.RunLast {
     return new ArrayList<>();
   }
 
-  private void validatePrivacyOptions(
-      final ParseResult parseResult, final CommandLine commandLine) {
-    if (parseResult.hasMatchedOption("--privacy-onchain-groups-enabled")
-        && parseResult.hasMatchedOption("--privacy-flexible-groups-enabled")) {
-      throw new ParameterException(
-          commandLine,
-          "The `--privacy-onchain-groups-enabled` option is deprecated and you should only use `--privacy-flexible-groups-enabled`");
-    }
-  }
-
   private Optional<File> findConfigFile(
       final ParseResult parseResult, final CommandLine commandLine) {
     if (parseResult.hasMatchedOption("--config-file")
@@ -591,12 +591,14 @@ public abstract class BesuControllerBuilder implements MiningParameterOverrides
     prepForBuild();
 
     final ProtocolSchedule protocolSchedule = createProtocolSchedule();
-    final GenesisState genesisState = GenesisState.fromConfig(genesisConfig, protocolSchedule);
+    final GenesisState genesisState =
+        GenesisState.fromConfig(
+            dataStorageConfiguration.getDataStorageFormat(), genesisConfig, protocolSchedule);
 
     final VariablesStorage variablesStorage = storageProvider.createVariablesStorage();
 
     final WorldStateStorage worldStateStorage =
-        storageProvider.createWorldStateStorage(dataStorageConfiguration.getDataStorageFormat());
+        storageProvider.createWorldStateStorage(dataStorageConfiguration);
 
     final BlockchainStorage blockchainStorage =
         storageProvider.createBlockchainStorage(protocolSchedule, variablesStorage);
@@ -1086,7 +1088,6 @@ public abstract class BesuControllerBuilder implements MiningParameterOverrides
             blockchain,
             Optional.of(dataStorageConfiguration.getBonsaiMaxLayersToLoad()),
             cachedMerkleTrieLoader,
-            metricsSystem,
             besuComponent.map(BesuComponent::getBesuPluginContext).orElse(null),
             evmConfiguration,
             trieLogPruner);
@@ -29,7 +29,6 @@ import static org.hyperledger.besu.cli.config.NetworkName.MAINNET;
 import static org.hyperledger.besu.cli.config.NetworkName.MORDOR;
 import static org.hyperledger.besu.cli.config.NetworkName.SEPOLIA;
 import static org.hyperledger.besu.cli.util.CommandLineUtils.DEPENDENCY_WARNING_MSG;
-import static org.hyperledger.besu.cli.util.CommandLineUtils.DEPRECATION_WARNING_MSG;
 import static org.hyperledger.besu.ethereum.api.jsonrpc.RpcApis.ENGINE;
 import static org.hyperledger.besu.ethereum.api.jsonrpc.RpcApis.ETH;
 import static org.hyperledger.besu.ethereum.api.jsonrpc.RpcApis.NET;
@@ -96,6 +95,7 @@ import org.hyperledger.besu.plugin.services.privacy.PrivateMarkerTransactionFact
 import org.hyperledger.besu.plugin.services.rpc.PluginRpcRequest;
 import org.hyperledger.besu.util.number.Fraction;
 import org.hyperledger.besu.util.number.Percentage;
+import org.hyperledger.besu.util.number.PositiveNumber;
 import org.hyperledger.besu.util.platform.PlatformDetector;
 
 import java.io.File;
@@ -847,6 +847,8 @@ public class BesuCommandTest extends CommandTestAbstract {
         tomlResult.getDouble(tomlKey);
       } else if (Percentage.class.isAssignableFrom(optionSpec.type())) {
         tomlResult.getLong(tomlKey);
+      } else if (PositiveNumber.class.isAssignableFrom(optionSpec.type())) {
+        tomlResult.getLong(tomlKey);
       } else {
         tomlResult.getString(tomlKey);
       }
@@ -1977,16 +1979,6 @@ public class BesuCommandTest extends CommandTestAbstract {
             "The `--ethstats-contact` requires ethstats server URL to be provided. Either remove --ethstats-contact or provide a URL (via --ethstats=nodename:secret@host:port)");
   }
 
-  @Test
-  public void privacyOnchainGroupsEnabledCannotBeUsedWithPrivacyFlexibleGroupsEnabled() {
-    parseCommand("--privacy-onchain-groups-enabled", "--privacy-flexible-groups-enabled");
-    Mockito.verifyNoInteractions(mockRunnerBuilder);
-    assertThat(commandOutput.toString(UTF_8)).isEmpty();
-    assertThat(commandErrorOutput.toString(UTF_8))
-        .contains(
-            "The `--privacy-onchain-groups-enabled` option is deprecated and you should only use `--privacy-flexible-groups-enabled`");
-  }
-
   @Test
   public void parsesValidBonsaiTrieLimitBackLayersOption() {
     parseCommand("--data-storage-format", "BONSAI", "--bonsai-historical-block-limit", "11");
@@ -3840,8 +3832,8 @@ public class BesuCommandTest extends CommandTestAbstract {
   }
 
   @Test
-  public void pruningLogsDeprecationWarning() {
-    parseCommand("--pruning-enabled");
+  public void pruningLogsDeprecationWarningWithForest() {
+    parseCommand("--pruning-enabled", "--data-storage-format=FOREST");
 
     verify(mockControllerBuilder).isPruningEnabled(true);
 
@@ -3854,6 +3846,17 @@ public class BesuCommandTest extends CommandTestAbstract {
                 + " To save disk space consider switching to Bonsai data storage format."));
   }
 
+  @Test
+  public void pruningLogsIgnoredWarningWithBonsai() {
+    parseCommand("--pruning-enabled", "--data-storage-format=BONSAI");
+
+    verify(mockControllerBuilder).isPruningEnabled(true);
+
+    assertThat(commandOutput.toString(UTF_8)).isEmpty();
+    assertThat(commandErrorOutput.toString(UTF_8)).isEmpty();
+    verify(mockLogger).warn(contains("Forest pruning is ignored with Bonsai data storage format."));
+  }
+
   @Test
   public void devModeOptionMustBeUsed() throws Exception {
     parseCommand("--network", "dev");
@@ -4192,46 +4195,6 @@ public class BesuCommandTest extends CommandTestAbstract {
     assertThat(privacyParameters.isFlexiblePrivacyGroupsEnabled()).isEqualTo(false);
   }
 
-  @Test
-  public void onchainPrivacyGroupEnabledFlagValueIsSet() {
-    parseCommand(
-        "--privacy-enabled",
-        "--privacy-public-key-file",
-        ENCLAVE_PUBLIC_KEY_PATH,
-        "--privacy-onchain-groups-enabled",
-        "--min-gas-price",
-        "0");
-
-    final ArgumentCaptor<PrivacyParameters> privacyParametersArgumentCaptor =
-        ArgumentCaptor.forClass(PrivacyParameters.class);
-
-    verify(mockControllerBuilder).privacyParameters(privacyParametersArgumentCaptor.capture());
-    verify(mockControllerBuilder).build();
-
-    assertThat(commandOutput.toString(UTF_8)).isEmpty();
-    assertThat(commandErrorOutput.toString(UTF_8)).isEmpty();
-
-    final PrivacyParameters privacyParameters = privacyParametersArgumentCaptor.getValue();
-    assertThat(privacyParameters.isFlexiblePrivacyGroupsEnabled()).isEqualTo(true);
-  }
-
-  @Test
-  public void onchainPrivacyGroupEnabledOptionIsDeprecated() {
-    parseCommand(
-        "--privacy-enabled",
-        "--privacy-public-key-file",
-        ENCLAVE_PUBLIC_KEY_PATH,
-        "--privacy-onchain-groups-enabled",
-        "--min-gas-price",
-        "0");
-
-    verify(mockLogger)
-        .warn(
-            DEPRECATION_WARNING_MSG,
-            "--privacy-onchain-groups-enabled",
-            "--privacy-flexible-groups-enabled");
-  }
-
   @Test
   public void flexiblePrivacyGroupEnabledFlagValueIsSet() {
     parseCommand(
@@ -15,8 +15,8 @@
 package org.hyperledger.besu.cli.options;
 
 import static org.assertj.core.api.Assertions.assertThat;
-import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;
-import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_POA_BLOCK_TXS_SELECTION_MAX_TIME;
+import static org.hyperledger.besu.ethereum.core.MiningParameters.DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;
+import static org.hyperledger.besu.ethereum.core.MiningParameters.DEFAULT_POA_BLOCK_TXS_SELECTION_MAX_TIME;
 import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_POS_BLOCK_CREATION_MAX_TIME;
 import static org.mockito.Mockito.atMost;
 import static org.mockito.Mockito.verify;
@@ -28,7 +28,7 @@ import org.hyperledger.besu.ethereum.core.ImmutableMiningParameters;
 import org.hyperledger.besu.ethereum.core.ImmutableMiningParameters.MutableInitValues;
 import org.hyperledger.besu.ethereum.core.ImmutableMiningParameters.Unstable;
 import org.hyperledger.besu.ethereum.core.MiningParameters;
-import org.hyperledger.besu.util.number.Percentage;
+import org.hyperledger.besu.util.number.PositiveNumber;
 
 import java.io.IOException;
 import java.nio.file.Path;
@@ -315,35 +315,26 @@ public class MiningOptionsTest extends AbstractCLIOptionsTest<MiningParameters,
   public void blockTxsSelectionMaxTimeDefaultValue() {
     internalTestSuccess(
         miningParams ->
-            assertThat(miningParams.getUnstable().getBlockTxsSelectionMaxTime())
+            assertThat(miningParams.getNonPoaBlockTxsSelectionMaxTime())
                 .isEqualTo(DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME));
   }
 
   @Test
   public void blockTxsSelectionMaxTimeOption() {
     internalTestSuccess(
-        miningParams ->
-            assertThat(miningParams.getUnstable().getBlockTxsSelectionMaxTime()).isEqualTo(1700L),
-        "--Xblock-txs-selection-max-time",
+        miningParams -> assertThat(miningParams.getBlockTxsSelectionMaxTime()).isEqualTo(1700L),
+        "--block-txs-selection-max-time",
         "1700");
   }
 
-  @Test
-  public void blockTxsSelectionMaxTimeOutOfAllowedRange() {
-    internalTestFailure(
-        "--Xblock-txs-selection-max-time must be positive and ≤ 5000",
-        "--Xblock-txs-selection-max-time",
-        "6000");
-  }
-
   @Test
   public void blockTxsSelectionMaxTimeIncompatibleWithPoaNetworks() throws IOException {
     final Path genesisFileIBFT2 = createFakeGenesisFile(VALID_GENESIS_IBFT2_POST_LONDON);
     internalTestFailure(
-        "--Xblock-txs-selection-max-time can't be used with PoA networks, see Xpoa-block-txs-selection-max-time instead",
+        "--block-txs-selection-max-time can't be used with PoA networks, see poa-block-txs-selection-max-time instead",
         "--genesis-file",
         genesisFileIBFT2.toString(),
-        "--Xblock-txs-selection-max-time",
+        "--block-txs-selection-max-time",
         "2");
   }
 
@@ -351,7 +342,7 @@ public class MiningOptionsTest extends AbstractCLIOptionsTest<MiningParameters,
   public void poaBlockTxsSelectionMaxTimeDefaultValue() {
     internalTestSuccess(
         miningParams ->
-            assertThat(miningParams.getUnstable().getPoaBlockTxsSelectionMaxTime())
+            assertThat(miningParams.getPoaBlockTxsSelectionMaxTime())
                 .isEqualTo(DEFAULT_POA_BLOCK_TXS_SELECTION_MAX_TIME));
   }
 
@@ -360,27 +351,32 @@ public class MiningOptionsTest extends AbstractCLIOptionsTest<MiningParameters,
     final Path genesisFileIBFT2 = createFakeGenesisFile(VALID_GENESIS_IBFT2_POST_LONDON);
     internalTestSuccess(
         miningParams ->
-            assertThat(miningParams.getUnstable().getPoaBlockTxsSelectionMaxTime())
-                .isEqualTo(Percentage.fromInt(80)),
+            assertThat(miningParams.getPoaBlockTxsSelectionMaxTime())
+                .isEqualTo(PositiveNumber.fromInt(80)),
         "--genesis-file",
         genesisFileIBFT2.toString(),
-        "--Xpoa-block-txs-selection-max-time",
+        "--poa-block-txs-selection-max-time",
         "80");
   }
 
   @Test
-  public void poaBlockTxsSelectionMaxTimeOutOfAllowedRange() {
-    internalTestFailure(
-        "Invalid value for option '--Xpoa-block-txs-selection-max-time': cannot convert '110' to Percentage",
-        "--Xpoa-block-txs-selection-max-time",
-        "110");
+  public void poaBlockTxsSelectionMaxTimeOptionOver100Percent() throws IOException {
+    final Path genesisFileIBFT2 = createFakeGenesisFile(VALID_GENESIS_IBFT2_POST_LONDON);
+    internalTestSuccess(
+        miningParams ->
+            assertThat(miningParams.getPoaBlockTxsSelectionMaxTime())
+                .isEqualTo(PositiveNumber.fromInt(200)),
+        "--genesis-file",
+        genesisFileIBFT2.toString(),
+        "--poa-block-txs-selection-max-time",
+        "200");
   }
 
   @Test
   public void poaBlockTxsSelectionMaxTimeOnlyCompatibleWithPoaNetworks() {
     internalTestFailure(
-        "--Xpoa-block-txs-selection-max-time can be only used with PoA networks, see --Xblock-txs-selection-max-time instead",
-        "--Xpoa-block-txs-selection-max-time",
+        "--poa-block-txs-selection-max-time can be only used with PoA networks, see --block-txs-selection-max-time instead",
+        "--poa-block-txs-selection-max-time",
         "90");
   }
 
@@ -134,7 +134,7 @@ public class NetworkingOptionsTest
 
     final NetworkingOptions options = cmd.getNetworkingOptions();
     final NetworkingConfiguration networkingConfig = options.toDomainObject();
-    assertThat(networkingConfig.getDiscovery().isFilterOnEnrForkIdEnabled()).isEqualTo(false);
+    assertThat(networkingConfig.getDiscovery().isFilterOnEnrForkIdEnabled()).isEqualTo(true);
 
     assertThat(commandErrorOutput.toString(UTF_8)).isEmpty();
     assertThat(commandOutput.toString(UTF_8)).isEmpty();
@@ -34,8 +34,8 @@ public class DataStorageOptionsTest
         dataStorageConfiguration ->
             assertThat(dataStorageConfiguration.getUnstable().getBonsaiTrieLogPruningLimit())
                 .isEqualTo(1),
-        "--Xbonsai-trie-log-pruning-enabled",
-        "--Xbonsai-trie-log-pruning-limit",
+        "--Xbonsai-limit-trie-logs-enabled",
+        "--Xbonsai-trie-logs-pruning-limit",
         "1");
   }
 
@@ -43,8 +43,8 @@ public class DataStorageOptionsTest
   public void bonsaiTrieLogPruningLimitShouldBePositive() {
     internalTestFailure(
         "--Xbonsai-trie-log-pruning-limit=0 must be greater than 0",
-        "--Xbonsai-trie-log-pruning-enabled",
-        "--Xbonsai-trie-log-pruning-limit",
+        "--Xbonsai-limit-trie-logs-enabled",
+        "--Xbonsai-trie-logs-pruning-limit",
         "0");
   }
 
@@ -54,8 +54,8 @@ public class DataStorageOptionsTest
         dataStorageConfiguration ->
             assertThat(dataStorageConfiguration.getUnstable().getBonsaiTrieLogRetentionThreshold())
                 .isEqualTo(MINIMUM_BONSAI_TRIE_LOG_RETENTION_THRESHOLD + 1),
-        "--Xbonsai-trie-log-pruning-enabled",
-        "--Xbonsai-trie-log-retention-threshold",
+        "--Xbonsai-limit-trie-logs-enabled",
+        "--Xbonsai-trie-logs-retention-threshold",
         "513");
   }
 
@@ -65,8 +65,8 @@ public class DataStorageOptionsTest
         dataStorageConfiguration ->
             assertThat(dataStorageConfiguration.getUnstable().getBonsaiTrieLogRetentionThreshold())
                 .isEqualTo(MINIMUM_BONSAI_TRIE_LOG_RETENTION_THRESHOLD),
-        "--Xbonsai-trie-log-pruning-enabled",
-        "--Xbonsai-trie-log-retention-threshold",
+        "--Xbonsai-limit-trie-logs-enabled",
+        "--Xbonsai-trie-logs-retention-threshold",
         "512");
   }
 
@@ -74,8 +74,8 @@ public class DataStorageOptionsTest
   public void bonsaiTrieLogRetentionThresholdShouldBeAboveMinimum() {
     internalTestFailure(
         "--Xbonsai-trie-log-retention-threshold minimum value is 512",
-        "--Xbonsai-trie-log-pruning-enabled",
-        "--Xbonsai-trie-log-retention-threshold",
+        "--Xbonsai-limit-trie-logs-enabled",
+        "--Xbonsai-trie-logs-retention-threshold",
         "511");
   }
 
@@ -15,6 +15,7 @@
 
 package org.hyperledger.besu.cli.subcommands.storage;
 
+import static java.util.Collections.singletonList;
 import static org.hyperledger.besu.ethereum.worldstate.DataStorageFormat.BONSAI;
 import static org.junit.jupiter.api.Assertions.assertArrayEquals;
 import static org.junit.jupiter.api.Assertions.assertEquals;
@@ -27,8 +28,11 @@ import org.hyperledger.besu.ethereum.chain.MutableBlockchain;
 import org.hyperledger.besu.ethereum.core.BlockHeader;
 import org.hyperledger.besu.ethereum.core.BlockHeaderTestFixture;
 import org.hyperledger.besu.ethereum.core.InMemoryKeyValueStorageProvider;
+import org.hyperledger.besu.ethereum.rlp.BytesValueRLPOutput;
 import org.hyperledger.besu.ethereum.storage.StorageProvider;
 import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
+import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogFactoryImpl;
+import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogLayer;
 import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.ImmutableDataStorageConfiguration;
 import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
@@ -36,11 +40,12 @@ import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
 import java.io.IOException;
 import java.nio.file.Files;
 import java.nio.file.Path;
+import java.util.List;
+import java.util.Map;
 import java.util.Optional;
+import java.util.stream.Collectors;
 
 import org.apache.tuweni.bytes.Bytes;
-import org.junit.jupiter.api.AfterEach;
-import org.junit.jupiter.api.BeforeAll;
 import org.junit.jupiter.api.BeforeEach;
 import org.junit.jupiter.api.Test;
 import org.junit.jupiter.api.extension.ExtendWith;
@@ -56,17 +61,14 @@ class TrieLogHelperTest {
 
   @Mock private MutableBlockchain blockchain;
 
-  @TempDir static Path dataDir;
-
-  Path test;
   static BlockHeader blockHeader1;
   static BlockHeader blockHeader2;
   static BlockHeader blockHeader3;
   static BlockHeader blockHeader4;
   static BlockHeader blockHeader5;
 
-  @BeforeAll
-  public static void setup() throws IOException {
+  @BeforeEach
+  public void setup() throws IOException {
 
     blockHeader1 = new BlockHeaderTestFixture().number(1).buildHeader();
    blockHeader2 = new BlockHeaderTestFixture().number(2).buildHeader();
@@ -75,35 +77,36 @@
     blockHeader5 = new BlockHeaderTestFixture().number(5).buildHeader();
 
     inMemoryWorldState =
-        new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem());
+        new BonsaiWorldStateKeyValueStorage(
+            storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
 
+    createTrieLog(blockHeader1);
 
     var updater = inMemoryWorldState.updater();
     updater
         .getTrieLogStorageTransaction()
-        .put(blockHeader1.getHash().toArrayUnsafe(), Bytes.fromHexString("0x01").toArrayUnsafe());
+        .put(blockHeader1.getHash().toArrayUnsafe(), createTrieLog(blockHeader1));
     updater
         .getTrieLogStorageTransaction()
-        .put(blockHeader2.getHash().toArrayUnsafe(), Bytes.fromHexString("0x02").toArrayUnsafe());
+        .put(blockHeader2.getHash().toArrayUnsafe(), createTrieLog(blockHeader2));
     updater
         .getTrieLogStorageTransaction()
-        .put(blockHeader3.getHash().toArrayUnsafe(), Bytes.fromHexString("0x03").toArrayUnsafe());
+        .put(blockHeader3.getHash().toArrayUnsafe(), createTrieLog(blockHeader3));
     updater
         .getTrieLogStorageTransaction()
-        .put(blockHeader4.getHash().toArrayUnsafe(), Bytes.fromHexString("0x04").toArrayUnsafe());
+        .put(blockHeader4.getHash().toArrayUnsafe(), createTrieLog(blockHeader4));
     updater
         .getTrieLogStorageTransaction()
-        .put(blockHeader5.getHash().toArrayUnsafe(), Bytes.fromHexString("0x05").toArrayUnsafe());
+        .put(blockHeader5.getHash().toArrayUnsafe(), createTrieLog(blockHeader5));
     updater.getTrieLogStorageTransaction().commit();
   }
 
-  @BeforeEach
-  void createDirectory() throws IOException {
-    Files.createDirectories(dataDir.resolve("database"));
-  }
-
-  @AfterEach
-  void deleteDirectory() throws IOException {
-    Files.deleteIfExists(dataDir.resolve("database"));
-  }
+  private static byte[] createTrieLog(final BlockHeader blockHeader) {
+    TrieLogLayer trieLogLayer = new TrieLogLayer();
+    trieLogLayer.setBlockHash(blockHeader.getBlockHash());
+    final BytesValueRLPOutput rlpLog = new BytesValueRLPOutput();
+    TrieLogFactoryImpl.writeTo(trieLogLayer, rlpLog);
+    return rlpLog.encoded().toArrayUnsafe();
+  }
 
   void mockBlockchainBase() {
@@ -113,7 +116,8 @@ class TrieLogHelperTest {
   }
 
   @Test
-  public void prune() {
+  public void prune(final @TempDir Path dataDir) throws IOException {
+    Files.createDirectories(dataDir.resolve("database"));
 
     DataStorageConfiguration dataStorageConfiguration =
         ImmutableDataStorageConfiguration.builder()
@@ -133,14 +137,11 @@ class TrieLogHelperTest {
 
     // assert trie logs that will be pruned exist before prune call
     assertArrayEquals(
-        inMemoryWorldState.getTrieLog(blockHeader1.getHash()).get(),
-        Bytes.fromHexString("0x01").toArrayUnsafe());
+        inMemoryWorldState.getTrieLog(blockHeader1.getHash()).get(), createTrieLog(blockHeader1));
     assertArrayEquals(
-        inMemoryWorldState.getTrieLog(blockHeader2.getHash()).get(),
-        Bytes.fromHexString("0x02").toArrayUnsafe());
+        inMemoryWorldState.getTrieLog(blockHeader2.getHash()).get(), createTrieLog(blockHeader2));
     assertArrayEquals(
-        inMemoryWorldState.getTrieLog(blockHeader3.getHash()).get(),
-        Bytes.fromHexString("0x03").toArrayUnsafe());
+        inMemoryWorldState.getTrieLog(blockHeader3.getHash()).get(), createTrieLog(blockHeader3));
 
     TrieLogHelper.prune(dataStorageConfiguration, inMemoryWorldState, blockchain, dataDir);
 
@@ -150,18 +151,15 @@ class TrieLogHelperTest {
 
     // assert retained trie logs are in the DB
     assertArrayEquals(
-        inMemoryWorldState.getTrieLog(blockHeader3.getHash()).get(),
-        Bytes.fromHexString("0x03").toArrayUnsafe());
+        inMemoryWorldState.getTrieLog(blockHeader3.getHash()).get(), createTrieLog(blockHeader3));
     assertArrayEquals(
-        inMemoryWorldState.getTrieLog(blockHeader4.getHash()).get(),
-        Bytes.fromHexString("0x04").toArrayUnsafe());
+        inMemoryWorldState.getTrieLog(blockHeader4.getHash()).get(), createTrieLog(blockHeader4));
     assertArrayEquals(
-        inMemoryWorldState.getTrieLog(blockHeader5.getHash()).get(),
-        Bytes.fromHexString("0x05").toArrayUnsafe());
+        inMemoryWorldState.getTrieLog(blockHeader5.getHash()).get(), createTrieLog(blockHeader5));
   }
 
   @Test
-  public void cantPruneIfNoFinalizedIsFound() {
+  public void cantPruneIfNoFinalizedIsFound(final @TempDir Path dataDir) {
     DataStorageConfiguration dataStorageConfiguration =
         ImmutableDataStorageConfiguration.builder()
             .dataStorageFormat(BONSAI)
@@ -183,7 +181,7 @@ class TrieLogHelperTest {
   }
 
   @Test
-  public void cantPruneIfUserRetainsMoreLayerThanExistingChainLength() {
+  public void cantPruneIfUserRetainsMoreLayerThanExistingChainLength(final @TempDir Path dataDir) {
     DataStorageConfiguration dataStorageConfiguration =
         ImmutableDataStorageConfiguration.builder()
             .dataStorageFormat(BONSAI)
@@ -204,7 +202,7 @@ class TrieLogHelperTest {
   }
 
   @Test
-  public void cantPruneIfUserRequiredFurtherThanFinalized() {
+  public void cantPruneIfUserRequiredFurtherThanFinalized(final @TempDir Path dataDir) {
 
     DataStorageConfiguration dataStorageConfiguration =
         ImmutableDataStorageConfiguration.builder()
@@ -226,8 +224,7 @@ class TrieLogHelperTest {
   }
 
   @Test
-  public void exceptionWhileSavingFileStopsPruneProcess() throws IOException {
-    Files.delete(dataDir.resolve("database"));
+  public void exceptionWhileSavingFileStopsPruneProcess(final @TempDir Path dataDir) {
 
     DataStorageConfiguration dataStorageConfiguration =
         ImmutableDataStorageConfiguration.builder()
@@ -243,23 +240,121 @@ class TrieLogHelperTest {
     assertThrows(
         RuntimeException.class,
         () ->
-            TrieLogHelper.prune(dataStorageConfiguration, inMemoryWorldState, blockchain, dataDir));
+            TrieLogHelper.prune(
+                dataStorageConfiguration,
+                inMemoryWorldState,
+                blockchain,
+                dataDir.resolve("unknownPath")));
 
     // assert all trie logs are still in the DB
     assertArrayEquals(
-        inMemoryWorldState.getTrieLog(blockHeader1.getHash()).get(),
-        Bytes.fromHexString("0x01").toArrayUnsafe());
+        inMemoryWorldState.getTrieLog(blockHeader1.getHash()).get(), createTrieLog(blockHeader1));
     assertArrayEquals(
-        inMemoryWorldState.getTrieLog(blockHeader2.getHash()).get(),
-        Bytes.fromHexString("0x02").toArrayUnsafe());
+        inMemoryWorldState.getTrieLog(blockHeader2.getHash()).get(), createTrieLog(blockHeader2));
     assertArrayEquals(
-        inMemoryWorldState.getTrieLog(blockHeader3.getHash()).get(),
-        Bytes.fromHexString("0x03").toArrayUnsafe());
+        inMemoryWorldState.getTrieLog(blockHeader3.getHash()).get(), createTrieLog(blockHeader3));
     assertArrayEquals(
-        inMemoryWorldState.getTrieLog(blockHeader4.getHash()).get(),
-        Bytes.fromHexString("0x04").toArrayUnsafe());
+        inMemoryWorldState.getTrieLog(blockHeader4.getHash()).get(), createTrieLog(blockHeader4));
     assertArrayEquals(
-        inMemoryWorldState.getTrieLog(blockHeader5.getHash()).get(),
-        Bytes.fromHexString("0x05").toArrayUnsafe());
+        inMemoryWorldState.getTrieLog(blockHeader5.getHash()).get(), createTrieLog(blockHeader5));
   }
 
+  @Test
+  public void exportedTrieMatchesDbTrieLog(final @TempDir Path dataDir) throws IOException {
+    TrieLogHelper.exportTrieLog(
+        inMemoryWorldState,
+        singletonList(blockHeader1.getHash()),
+        dataDir.resolve("trie-log-dump"));
+
+    var trieLog =
+        TrieLogHelper.readTrieLogsAsRlpFromFile(dataDir.resolve("trie-log-dump").toString())
+            .entrySet()
+            .stream()
+            .findFirst()
+            .get();
+
+    assertArrayEquals(trieLog.getKey(), blockHeader1.getHash().toArrayUnsafe());
+    assertArrayEquals(
+        trieLog.getValue(), inMemoryWorldState.getTrieLog(blockHeader1.getHash()).get());
+  }
+
+  @Test
+  public void exportedMultipleTriesMatchDbTrieLogs(final @TempDir Path dataDir) throws IOException {
+    TrieLogHelper.exportTrieLog(
+        inMemoryWorldState,
+        List.of(blockHeader1.getHash(), blockHeader2.getHash(), blockHeader3.getHash()),
+        dataDir.resolve("trie-log-dump"));
+
+    var trieLogs =
+        TrieLogHelper.readTrieLogsAsRlpFromFile(dataDir.resolve("trie-log-dump").toString())
+            .entrySet()
+            .stream()
+            .collect(Collectors.toMap(e -> Bytes.wrap(e.getKey()), Map.Entry::getValue));
+
+    assertArrayEquals(
+        trieLogs.get(blockHeader1.getHash()),
+        inMemoryWorldState.getTrieLog(blockHeader1.getHash()).get());
+    assertArrayEquals(
+        trieLogs.get(blockHeader2.getHash()),
+        inMemoryWorldState.getTrieLog(blockHeader2.getHash()).get());
+    assertArrayEquals(
+        trieLogs.get(blockHeader3.getHash()),
+        inMemoryWorldState.getTrieLog(blockHeader3.getHash()).get());
+  }
+
+  @Test
+  public void importedTrieLogMatchesDbTrieLog(final @TempDir Path dataDir) throws IOException {
+    StorageProvider tempStorageProvider = new InMemoryKeyValueStorageProvider();
+    BonsaiWorldStateKeyValueStorage inMemoryWorldState2 =
+        new BonsaiWorldStateKeyValueStorage(
+            tempStorageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
+
+    TrieLogHelper.exportTrieLog(
+        inMemoryWorldState,
+        singletonList(blockHeader1.getHash()),
+        dataDir.resolve("trie-log-dump"));
+
+    var trieLog =
+        TrieLogHelper.readTrieLogsAsRlpFromFile(dataDir.resolve("trie-log-dump").toString());
+    var updater = inMemoryWorldState2.updater();
+
+    trieLog.forEach((k, v) -> updater.getTrieLogStorageTransaction().put(k, v));
+
+    updater.getTrieLogStorageTransaction().commit();
+
+    assertArrayEquals(
+        inMemoryWorldState2.getTrieLog(blockHeader1.getHash()).get(),
+        inMemoryWorldState.getTrieLog(blockHeader1.getHash()).get());
+  }
+
+  @Test
+  public void importedMultipleTriesMatchDbTrieLogs(final @TempDir Path dataDir) throws IOException {
+    StorageProvider tempStorageProvider = new InMemoryKeyValueStorageProvider();
+    BonsaiWorldStateKeyValueStorage inMemoryWorldState2 =
+        new BonsaiWorldStateKeyValueStorage(
+            tempStorageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
+
+    TrieLogHelper.exportTrieLog(
+        inMemoryWorldState,
+        List.of(blockHeader1.getHash(), blockHeader2.getHash(), blockHeader3.getHash()),
+        dataDir.resolve("trie-log-dump"));
+
+    var trieLog =
+        TrieLogHelper.readTrieLogsAsRlpFromFile(dataDir.resolve("trie-log-dump").toString());
+    var updater = inMemoryWorldState2.updater();
+
+    trieLog.forEach((k, v) -> updater.getTrieLogStorageTransaction().put(k, v));
+
+    updater.getTrieLogStorageTransaction().commit();
+
+    assertArrayEquals(
+        inMemoryWorldState2.getTrieLog(blockHeader1.getHash()).get(),
+        inMemoryWorldState.getTrieLog(blockHeader1.getHash()).get());
+    assertArrayEquals(
+        inMemoryWorldState2.getTrieLog(blockHeader2.getHash()).get(),
+        inMemoryWorldState.getTrieLog(blockHeader2.getHash()).get());
+    assertArrayEquals(
+        inMemoryWorldState2.getTrieLog(blockHeader3.getHash()).get(),
+        inMemoryWorldState.getTrieLog(blockHeader3.getHash()).get());
+  }
 }
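The test refactor above drops the shared `@TempDir static Path dataDir` field and the `@BeforeEach`/`@AfterEach` directory bookkeeping in favour of a `@TempDir Path` parameter injected into each test. A minimal JUnit 5 sketch of that pattern, independent of Besu (the class and directory names are illustrative):

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import java.nio.file.Files;
    import java.nio.file.Path;
    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.io.TempDir;

    class TempDirParameterTest {

      @Test
      void eachTestGetsAFreshDirectory(@TempDir final Path dataDir) throws Exception {
        // JUnit creates the directory before the test and deletes it afterwards,
        // so no explicit setup or cleanup methods are needed.
        Files.createDirectories(dataDir.resolve("database"));
        assertTrue(Files.exists(dataDir.resolve("database")));
      }
    }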
@@ -131,7 +131,7 @@ public class BesuControllerBuilderTest {
     when(synchronizerConfiguration.getBlockPropagationRange()).thenReturn(Range.closed(1L, 2L));
 
     lenient()
-        .when(storageProvider.createWorldStateStorage(DataStorageFormat.FOREST))
+        .when(storageProvider.createWorldStateStorage(DataStorageConfiguration.DEFAULT_CONFIG))
         .thenReturn(worldStateStorage);
     lenient()
         .when(storageProvider.createWorldStatePreimageStorage())
@@ -166,6 +166,11 @@ public class BesuControllerBuilderTest {
 
   @Test
   public void shouldDisablePruningIfBonsaiIsEnabled() {
+    DataStorageConfiguration dataStorageConfiguration =
+        ImmutableDataStorageConfiguration.builder()
+            .dataStorageFormat(DataStorageFormat.BONSAI)
+            .bonsaiMaxLayersToLoad(DataStorageConfiguration.DEFAULT_BONSAI_MAX_LAYERS_TO_LOAD)
+            .build();
     BonsaiWorldState mockWorldState = mock(BonsaiWorldState.class, Answers.RETURNS_DEEP_STUBS);
     doReturn(worldStateArchive)
         .when(besuControllerBuilder)
@@ -173,15 +178,9 @@ public class BesuControllerBuilderTest {
             any(WorldStateStorage.class), any(Blockchain.class), any(CachedMerkleTrieLoader.class));
     doReturn(mockWorldState).when(worldStateArchive).getMutable();
 
-    when(storageProvider.createWorldStateStorage(DataStorageFormat.BONSAI))
+    when(storageProvider.createWorldStateStorage(dataStorageConfiguration))
        .thenReturn(bonsaiWorldStateStorage);
-    besuControllerBuilder
-        .isPruningEnabled(true)
-        .dataStorageConfiguration(
-            ImmutableDataStorageConfiguration.builder()
-                .dataStorageFormat(DataStorageFormat.BONSAI)
-                .bonsaiMaxLayersToLoad(DataStorageConfiguration.DEFAULT_BONSAI_MAX_LAYERS_TO_LOAD)
-                .build());
+    besuControllerBuilder.isPruningEnabled(true).dataStorageConfiguration(dataStorageConfiguration);
     besuControllerBuilder.build();
 
     verify(storageProvider, never())
@@ -52,7 +52,7 @@ import org.hyperledger.besu.ethereum.p2p.config.NetworkingConfiguration;
 import org.hyperledger.besu.ethereum.storage.StorageProvider;
 import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueStoragePrefixedKeyBlockchainStorage;
 import org.hyperledger.besu.ethereum.storage.keyvalue.VariablesKeyValueStorage;
-import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateArchive;
 import org.hyperledger.besu.ethereum.worldstate.WorldStatePreimageStorage;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
@@ -145,7 +145,7 @@ public class MergeBesuControllerBuilderTest {
         .thenReturn(Range.closed(1L, 2L));
 
     lenient()
-        .when(storageProvider.createWorldStateStorage(DataStorageFormat.FOREST))
+        .when(storageProvider.createWorldStateStorage(DataStorageConfiguration.DEFAULT_CONFIG))
         .thenReturn(worldStateStorage);
     lenient()
         .when(storageProvider.createWorldStatePreimageStorage())
@@ -48,7 +48,7 @@ import org.hyperledger.besu.ethereum.p2p.config.NetworkingConfiguration;
 import org.hyperledger.besu.ethereum.storage.StorageProvider;
 import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueStoragePrefixedKeyBlockchainStorage;
 import org.hyperledger.besu.ethereum.storage.keyvalue.VariablesKeyValueStorage;
-import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.WorldStatePreimageStorage;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
 import org.hyperledger.besu.evm.internal.EvmConfiguration;
@@ -114,7 +114,7 @@ public class QbftBesuControllerBuilderTest {
             new VariablesKeyValueStorage(new InMemoryKeyValueStorage()),
             new MainnetBlockHeaderFunctions()));
     lenient()
-        .when(storageProvider.createWorldStateStorage(DataStorageFormat.FOREST))
+        .when(storageProvider.createWorldStateStorage(DataStorageConfiguration.DEFAULT_CONFIG))
         .thenReturn(worldStateStorage);
     lenient().when(worldStateStorage.isWorldStateAvailable(any(), any())).thenReturn(true);
     lenient().when(worldStateStorage.updater()).thenReturn(mock(WorldStateStorage.Updater.class));
@@ -142,6 +142,8 @@ min-priority-fee=0
 min-block-occupancy-ratio=0.7
 miner-stratum-host="0.0.0.0"
 miner-stratum-port=8008
+block-txs-selection-max-time=5000
+poa-block-txs-selection-max-time=75
 Xminer-remote-sealers-limit=1000
 Xminer-remote-sealers-hashrate-ttl=10
 Xpos-block-creation-max-time=5
@@ -169,7 +171,6 @@ privacy-enabled=false
 privacy-multi-tenancy-enabled=true
 privacy-marker-transaction-signing-key-file="./signerKey"
 privacy-enable-database-migration=false
-privacy-onchain-groups-enabled=false
 privacy-flexible-groups-enabled=false
 
 # Transaction Pool
@@ -22,6 +22,7 @@ import org.hyperledger.besu.ethereum.api.jsonrpc.internal.methods.JsonRpcMethod;
 import java.util.Collection;
 import java.util.Map;
 import java.util.Optional;
+import java.util.function.Function;
 import java.util.stream.Collectors;
 
 import io.opentelemetry.api.trace.Tracer;
@@ -35,7 +36,8 @@ public class HandlerFactory {
     assert methods != null && globalOptions != null;
     return TimeoutHandler.handler(
         Optional.of(globalOptions),
-        methods.keySet().stream().collect(Collectors.toMap(String::new, ignored -> globalOptions)));
+        methods.keySet().stream()
+            .collect(Collectors.toMap(Function.identity(), ignored -> globalOptions)));
   }
 
   public static Handler<RoutingContext> authentication(
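For context on the `HandlerFactory` change above: `String::new` compiles as a key mapper only because `String` has a copy constructor, so every key was being copied, while `Function.identity()` reuses the existing method names. A small stand-alone sketch, assuming nothing beyond the standard library (class and values are illustrative):

    import java.util.Map;
    import java.util.Set;
    import java.util.function.Function;
    import java.util.stream.Collectors;

    public class KeyMapperDemo {
      public static void main(String[] args) {
        Set<String> methods = Set.of("eth_blockNumber", "eth_call");

        Map<String, Integer> copied =
            methods.stream().collect(Collectors.toMap(String::new, ignored -> 1));
        Map<String, Integer> reused =
            methods.stream().collect(Collectors.toMap(Function.identity(), ignored -> 1));

        // Both maps are equal, but the first one allocated a new String per key.
        System.out.println(copied.equals(reused)); // true
      }
    }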
@@ -46,15 +46,15 @@ public class DebugTraceBlock implements JsonRpcMethod {
   private static final Logger LOG = LoggerFactory.getLogger(DebugTraceBlock.class);
   private final Supplier<BlockTracer> blockTracerSupplier;
   private final BlockHeaderFunctions blockHeaderFunctions;
-  private final BlockchainQueries blockchain;
+  private final BlockchainQueries blockchainQueries;
 
   public DebugTraceBlock(
       final Supplier<BlockTracer> blockTracerSupplier,
       final BlockHeaderFunctions blockHeaderFunctions,
-      final BlockchainQueries blockchain) {
+      final BlockchainQueries blockchainQueries) {
     this.blockTracerSupplier = blockTracerSupplier;
     this.blockHeaderFunctions = blockHeaderFunctions;
-    this.blockchain = blockchain;
+    this.blockchainQueries = blockchainQueries;
   }
 
   @Override
@@ -79,18 +79,17 @@ public class DebugTraceBlock implements JsonRpcMethod {
             .map(TransactionTraceParams::traceOptions)
             .orElse(TraceOptions.DEFAULT);
 
-    if (this.blockchain.blockByHash(block.getHeader().getParentHash()).isPresent()) {
+    if (this.blockchainQueries.blockByHash(block.getHeader().getParentHash()).isPresent()) {
       final Collection<DebugTraceTransactionResult> results =
           Tracer.processTracing(
-                  blockchain,
+                  blockchainQueries,
                   Optional.of(block.getHeader()),
-                  mutableWorldState -> {
-                    return blockTracerSupplier
+                  mutableWorldState ->
+                      blockTracerSupplier
                           .get()
                           .trace(mutableWorldState, block, new DebugOperationTracer(traceOptions))
                           .map(BlockTrace::getTransactionTraces)
-                          .map(DebugTraceTransactionResult::of);
-                  })
+                          .map(DebugTraceTransactionResult::of))
               .orElse(null);
       return new JsonRpcSuccessResponse(requestContext.getRequest().getId(), results);
     } else {
@@ -17,7 +17,6 @@ package org.hyperledger.besu.ethereum.api.jsonrpc;
 import static org.assertj.core.api.Assertions.assertThat;
 import static org.hyperledger.besu.ethereum.api.jsonrpc.RpcApis.DEFAULT_RPC_APIS;
 import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.spy;
 
 import org.hyperledger.besu.config.StubGenesisConfigOptions;
 import org.hyperledger.besu.ethereum.ProtocolContext;
@@ -98,38 +97,37 @@ public class JsonRpcHttpServiceHostAllowlistTest {
     supportedCapabilities.add(EthProtocol.ETH63);
 
     rpcMethods =
-        spy(
-            new JsonRpcMethodsFactory()
-                .methods(
-                    CLIENT_VERSION,
-                    CHAIN_ID,
-                    new StubGenesisConfigOptions(),
-                    peerDiscoveryMock,
-                    blockchainQueries,
-                    synchronizer,
-                    MainnetProtocolSchedule.fromConfig(
-                        new StubGenesisConfigOptions().constantinopleBlock(0).chainId(CHAIN_ID)),
-                    mock(ProtocolContext.class),
-                    mock(FilterManager.class),
-                    mock(TransactionPool.class),
-                    mock(MiningParameters.class),
-                    mock(PoWMiningCoordinator.class),
-                    new NoOpMetricsSystem(),
-                    supportedCapabilities,
-                    Optional.of(mock(AccountLocalConfigPermissioningController.class)),
-                    Optional.of(mock(NodeLocalConfigPermissioningController.class)),
-                    DEFAULT_RPC_APIS,
-                    mock(PrivacyParameters.class),
-                    mock(JsonRpcConfiguration.class),
-                    mock(WebSocketConfiguration.class),
-                    mock(MetricsConfiguration.class),
-                    natService,
-                    new HashMap<>(),
-                    folder,
-                    mock(EthPeers.class),
-                    vertx,
-                    mock(ApiConfiguration.class),
-                    Optional.empty()));
+        new JsonRpcMethodsFactory()
+            .methods(
+                CLIENT_VERSION,
+                CHAIN_ID,
+                new StubGenesisConfigOptions(),
+                peerDiscoveryMock,
+                blockchainQueries,
+                synchronizer,
+                MainnetProtocolSchedule.fromConfig(
+                    new StubGenesisConfigOptions().constantinopleBlock(0).chainId(CHAIN_ID)),
+                mock(ProtocolContext.class),
+                mock(FilterManager.class),
+                mock(TransactionPool.class),
+                mock(MiningParameters.class),
+                mock(PoWMiningCoordinator.class),
+                new NoOpMetricsSystem(),
+                supportedCapabilities,
+                Optional.of(mock(AccountLocalConfigPermissioningController.class)),
+                Optional.of(mock(NodeLocalConfigPermissioningController.class)),
+                DEFAULT_RPC_APIS,
+                mock(PrivacyParameters.class),
+                mock(JsonRpcConfiguration.class),
+                mock(WebSocketConfiguration.class),
+                mock(MetricsConfiguration.class),
+                natService,
+                new HashMap<>(),
+                folder,
+                mock(EthPeers.class),
+                vertx,
+                mock(ApiConfiguration.class),
+                Optional.empty());
 
     service = createJsonRpcHttpService();
     service.start().join();
 
@@ -19,7 +19,6 @@ import static java.util.concurrent.TimeUnit.MINUTES;
 import static org.assertj.core.api.Assertions.assertThat;
 import static org.assertj.core.util.Lists.list;
 import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.spy;

 import org.hyperledger.besu.config.StubGenesisConfigOptions;
 import org.hyperledger.besu.ethereum.ProtocolContext;
@@ -129,37 +128,36 @@ public class JsonRpcHttpServiceLoginTest {
 new StubGenesisConfigOptions().constantinopleBlock(0).chainId(CHAIN_ID);

 rpcMethods =
-spy(
 new JsonRpcMethodsFactory()
 .methods(
 CLIENT_VERSION,
 CHAIN_ID,
 genesisConfigOptions,
 peerDiscoveryMock,
 blockchainQueries,
 synchronizer,
 MainnetProtocolSchedule.fromConfig(genesisConfigOptions),
 mock(ProtocolContext.class),
 mock(FilterManager.class),
 mock(TransactionPool.class),
 mock(MiningParameters.class),
 mock(PoWMiningCoordinator.class),
 new NoOpMetricsSystem(),
 supportedCapabilities,
 Optional.empty(),
 Optional.empty(),
 JSON_RPC_APIS,
 mock(PrivacyParameters.class),
 mock(JsonRpcConfiguration.class),
 mock(WebSocketConfiguration.class),
 mock(MetricsConfiguration.class),
 natService,
 new HashMap<>(),
 folder,
 mock(EthPeers.class),
 vertx,
 mock(ApiConfiguration.class),
-Optional.empty()));
+Optional.empty());
 service = createJsonRpcHttpService();
 jwtAuth = service.authenticationService.get().getJwtAuthProvider();
 service.start().join();
@@ -17,7 +17,6 @@ package org.hyperledger.besu.ethereum.api.jsonrpc;
 import static java.util.Collections.singletonList;
 import static org.assertj.core.api.Assertions.assertThat;
 import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.spy;
 import static org.mockito.Mockito.when;

 import org.hyperledger.besu.config.StubGenesisConfigOptions;
@@ -201,37 +200,36 @@ public class JsonRpcHttpServiceRpcApisTest {
 supportedCapabilities.add(EthProtocol.ETH63);

 final Map<String, JsonRpcMethod> rpcMethods =
-spy(
 new JsonRpcMethodsFactory()
 .methods(
 CLIENT_VERSION,
 NETWORK_ID,
 new StubGenesisConfigOptions(),
 mock(P2PNetwork.class),
 blockchainQueries,
 mock(Synchronizer.class),
 ProtocolScheduleFixture.MAINNET,
 mock(ProtocolContext.class),
 mock(FilterManager.class),
 mock(TransactionPool.class),
 mock(MiningParameters.class),
 mock(PoWMiningCoordinator.class),
 new NoOpMetricsSystem(),
 supportedCapabilities,
 Optional.of(mock(AccountLocalConfigPermissioningController.class)),
 Optional.of(mock(NodeLocalConfigPermissioningController.class)),
 config.getRpcApis(),
 mock(PrivacyParameters.class),
 mock(JsonRpcConfiguration.class),
 mock(WebSocketConfiguration.class),
 mock(MetricsConfiguration.class),
 natService,
 new HashMap<>(),
 folder,
 mock(EthPeers.class),
 vertx,
 mock(ApiConfiguration.class),
-Optional.empty()));
+Optional.empty());
 final JsonRpcHttpService jsonRpcHttpService =
 new JsonRpcHttpService(
 vertx,
@@ -302,8 +300,7 @@ public class JsonRpcHttpServiceRpcApisTest {
 final WebSocketConfiguration webSocketConfiguration,
 final P2PNetwork p2pNetwork,
 final MetricsConfiguration metricsConfiguration,
-final NatService natService)
-throws Exception {
+final NatService natService) {
 final Set<Capability> supportedCapabilities = new HashSet<>();
 supportedCapabilities.add(EthProtocol.ETH62);
 supportedCapabilities.add(EthProtocol.ETH63);
@@ -311,37 +308,36 @@ public class JsonRpcHttpServiceRpcApisTest {
 webSocketConfiguration.setPort(0);

 final Map<String, JsonRpcMethod> rpcMethods =
-spy(
 new JsonRpcMethodsFactory()
 .methods(
 CLIENT_VERSION,
 NETWORK_ID,
 new StubGenesisConfigOptions(),
 p2pNetwork,
 blockchainQueries,
 mock(Synchronizer.class),
 ProtocolScheduleFixture.MAINNET,
 mock(ProtocolContext.class),
 mock(FilterManager.class),
 mock(TransactionPool.class),
 mock(MiningParameters.class),
 mock(PoWMiningCoordinator.class),
 new NoOpMetricsSystem(),
 supportedCapabilities,
 Optional.of(mock(AccountLocalConfigPermissioningController.class)),
 Optional.of(mock(NodeLocalConfigPermissioningController.class)),
 jsonRpcConfiguration.getRpcApis(),
 mock(PrivacyParameters.class),
 jsonRpcConfiguration,
 webSocketConfiguration,
 metricsConfiguration,
 natService,
 new HashMap<>(),
 folder,
 mock(EthPeers.class),
 vertx,
 mock(ApiConfiguration.class),
-Optional.empty()));
+Optional.empty());
 final JsonRpcHttpService jsonRpcHttpService =
 new JsonRpcHttpService(
 vertx,
@@ -425,8 +421,7 @@ public class JsonRpcHttpServiceRpcApisTest {
 "{\"jsonrpc\":\"2.0\",\"id\":" + Json.encode(id) + ",\"method\":\"net_services\"}", JSON);
 }

-public JsonRpcHttpService getJsonRpcHttpService(final boolean[] enabledNetServices)
-throws Exception {
+public JsonRpcHttpService getJsonRpcHttpService(final boolean[] enabledNetServices) {

 JsonRpcConfiguration jsonRpcConfiguration = JsonRpcConfiguration.createDefault();
 WebSocketConfiguration webSocketConfiguration = WebSocketConfiguration.createDefault();
@@ -17,10 +17,7 @@ package org.hyperledger.besu.ethereum.api.jsonrpc;
 import static org.assertj.core.api.Assertions.assertThat;
 import static org.mockito.ArgumentMatchers.any;
 import static org.mockito.ArgumentMatchers.eq;
-import static org.mockito.Mockito.doReturn;
 import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.reset;
-import static org.mockito.Mockito.verify;
 import static org.mockito.Mockito.when;

 import org.hyperledger.besu.datatypes.Address;
@@ -1389,65 +1386,68 @@ public class JsonRpcHttpServiceTest extends JsonRpcHttpServiceTestBase {
 + "\"}",
 JSON);

-when(rpcMethods.get(any(String.class))).thenReturn(null);
-when(rpcMethods.containsKey(any(String.class))).thenReturn(false);
+try (var unused = disableRpcMethod(methodName)) {

 try (final Response resp = client.newCall(buildPostRequest(body)).execute()) {
 assertThat(resp.code()).isEqualTo(200);
 final JsonObject json = new JsonObject(resp.body().string());
 final RpcErrorType expectedError = RpcErrorType.METHOD_NOT_ENABLED;
 testHelper.assertValidJsonRpcError(
 json, id, expectedError.getCode(), expectedError.getMessage());
+}
 }

-verify(rpcMethods).containsKey(methodName);
-verify(rpcMethods).get(methodName);

-reset(rpcMethods);
 }

 @Test
 public void exceptionallyHandleJsonSingleRequest() throws Exception {
+final String methodName = "foo";
 final JsonRpcMethod jsonRpcMethod = mock(JsonRpcMethod.class);
-when(jsonRpcMethod.getName()).thenReturn("foo");
+when(jsonRpcMethod.getName()).thenReturn(methodName);
 when(jsonRpcMethod.response(any())).thenThrow(new RuntimeException("test exception"));

-doReturn(jsonRpcMethod).when(rpcMethods).get("foo");
+try (var unused = addRpcMethod(methodName, jsonRpcMethod)) {

 final RequestBody body =
-RequestBody.create("{\"jsonrpc\":\"2.0\",\"id\":\"666\",\"method\":\"foo\"}", JSON);
+RequestBody.create(
+"{\"jsonrpc\":\"2.0\",\"id\":\"666\",\"method\":\"" + methodName + "\"}", JSON);

 try (final Response resp = client.newCall(buildPostRequest(body)).execute()) {
 assertThat(resp.code()).isEqualTo(200);
 final JsonObject json = new JsonObject(resp.body().string());
 final RpcErrorType expectedError = RpcErrorType.INTERNAL_ERROR;
 testHelper.assertValidJsonRpcError(
 json, "666", expectedError.getCode(), expectedError.getMessage());
+}
 }
 }

 @Test
 public void exceptionallyHandleJsonBatchRequest() throws Exception {
+final String methodName = "foo";
 final JsonRpcMethod jsonRpcMethod = mock(JsonRpcMethod.class);
-when(jsonRpcMethod.getName()).thenReturn("foo");
+when(jsonRpcMethod.getName()).thenReturn(methodName);
 when(jsonRpcMethod.response(any())).thenThrow(new RuntimeException("test exception"));
-doReturn(jsonRpcMethod).when(rpcMethods).get("foo");

-final RequestBody body =
-RequestBody.create(
-"[{\"jsonrpc\":\"2.0\",\"id\":\"000\",\"method\":\"web3_clientVersion\"},"
-+ "{\"jsonrpc\":\"2.0\",\"id\":\"111\",\"method\":\"foo\"},"
-+ "{\"jsonrpc\":\"2.0\",\"id\":\"222\",\"method\":\"net_version\"}]",
-JSON);
+try (var unused = addRpcMethod(methodName, jsonRpcMethod)) {

-try (final Response resp = client.newCall(buildPostRequest(body)).execute()) {
-assertThat(resp.code()).isEqualTo(200);
-final JsonArray array = new JsonArray(resp.body().string());
-testHelper.assertValidJsonRpcResult(array.getJsonObject(0), "000");
-final RpcErrorType expectedError = RpcErrorType.INTERNAL_ERROR;
-testHelper.assertValidJsonRpcError(
-array.getJsonObject(1), "111", expectedError.getCode(), expectedError.getMessage());
-testHelper.assertValidJsonRpcResult(array.getJsonObject(2), "222");
+final RequestBody body =
+RequestBody.create(
+"[{\"jsonrpc\":\"2.0\",\"id\":\"000\",\"method\":\"web3_clientVersion\"},"
++ "{\"jsonrpc\":\"2.0\",\"id\":\"111\",\"method\":\""
++ methodName
++ "\"},"
++ "{\"jsonrpc\":\"2.0\",\"id\":\"222\",\"method\":\"net_version\"}]",
+JSON);

+try (final Response resp = client.newCall(buildPostRequest(body)).execute()) {
+assertThat(resp.code()).isEqualTo(200);
+final JsonArray array = new JsonArray(resp.body().string());
+testHelper.assertValidJsonRpcResult(array.getJsonObject(0), "000");
+final RpcErrorType expectedError = RpcErrorType.INTERNAL_ERROR;
+testHelper.assertValidJsonRpcError(
+array.getJsonObject(1), "111", expectedError.getCode(), expectedError.getMessage());
+testHelper.assertValidJsonRpcResult(array.getJsonObject(2), "222");
+}
 }
 }

@@ -16,7 +16,6 @@
 package org.hyperledger.besu.ethereum.api.jsonrpc;

 import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.spy;

 import org.hyperledger.besu.config.StubGenesisConfigOptions;
 import org.hyperledger.besu.ethereum.ProtocolContext;
@@ -72,8 +71,9 @@ public class JsonRpcHttpServiceTestBase {
 protected final JsonRpcTestHelper testHelper = new JsonRpcTestHelper();

 private static final Vertx vertx = Vertx.vertx();

 protected static Map<String, JsonRpcMethod> rpcMethods;
+private static Map<String, JsonRpcMethod> disabledRpcMethods;
+private static Set<String> addedRpcMethods;
 protected static JsonRpcHttpService service;
 protected static OkHttpClient client;
 protected static String baseUrl;
@@ -106,39 +106,41 @@ public class JsonRpcHttpServiceTestBase {
 supportedCapabilities.add(EthProtocol.ETH63);

 rpcMethods =
-spy(
 new JsonRpcMethodsFactory()
 .methods(
 CLIENT_VERSION,
 CHAIN_ID,
 new StubGenesisConfigOptions(),
 peerDiscoveryMock,
 blockchainQueries,
 synchronizer,
 MainnetProtocolSchedule.fromConfig(
 new StubGenesisConfigOptions().constantinopleBlock(0).chainId(CHAIN_ID),
 EvmConfiguration.DEFAULT),
 mock(ProtocolContext.class),
 mock(FilterManager.class),
 mock(TransactionPool.class),
 mock(MiningParameters.class),
 mock(PoWMiningCoordinator.class),
 new NoOpMetricsSystem(),
 supportedCapabilities,
 Optional.of(mock(AccountLocalConfigPermissioningController.class)),
 Optional.of(mock(NodeLocalConfigPermissioningController.class)),
 JSON_RPC_APIS,
 mock(PrivacyParameters.class),
 mock(JsonRpcConfiguration.class),
 mock(WebSocketConfiguration.class),
 mock(MetricsConfiguration.class),
 natService,
 new HashMap<>(),
 folder,
 ethPeersMock,
 vertx,
 mock(ApiConfiguration.class),
-Optional.empty()));
+Optional.empty());
+disabledRpcMethods = new HashMap<>();
+addedRpcMethods = new HashSet<>();

 service = createJsonRpcHttpService(createLimitedJsonRpcConfig());
 service.start().join();

@@ -189,6 +191,22 @@ public class JsonRpcHttpServiceTestBase {
 return new Request.Builder().get().url(baseUrl + path).build();
 }

+protected AutoCloseable disableRpcMethod(final String methodName) {
+disabledRpcMethods.put(methodName, rpcMethods.remove(methodName));
+return () -> resetRpcMethods();
+}

+protected AutoCloseable addRpcMethod(final String methodName, final JsonRpcMethod method) {
+rpcMethods.put(methodName, method);
+addedRpcMethods.add(methodName);
+return () -> resetRpcMethods();
+}

+protected void resetRpcMethods() {
+disabledRpcMethods.forEach(rpcMethods::put);
+addedRpcMethods.forEach(rpcMethods::remove);
+}

 /** Tears down the HTTP server. */
 @AfterAll
 public static void shutdownServer() {
@@ -21,7 +21,6 @@ import static org.hyperledger.besu.ethereum.api.tls.KnownClientFileUtil.writeToK
 import static org.hyperledger.besu.ethereum.api.tls.TlsClientAuthConfiguration.Builder.aTlsClientAuthConfiguration;
 import static org.hyperledger.besu.ethereum.api.tls.TlsConfiguration.Builder.aTlsConfiguration;
 import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.spy;

 import org.hyperledger.besu.config.StubGenesisConfigOptions;
 import org.hyperledger.besu.ethereum.ProtocolContext;
@@ -112,38 +111,37 @@ public class JsonRpcHttpServiceTlsClientAuthTest {
 supportedCapabilities.add(EthProtocol.ETH63);

 rpcMethods =
-spy(
 new JsonRpcMethodsFactory()
 .methods(
 CLIENT_VERSION,
 CHAIN_ID,
 new StubGenesisConfigOptions(),
 peerDiscoveryMock,
 blockchainQueries,
 synchronizer,
 MainnetProtocolSchedule.fromConfig(
 new StubGenesisConfigOptions().constantinopleBlock(0).chainId(CHAIN_ID)),
 mock(ProtocolContext.class),
 mock(FilterManager.class),
 mock(TransactionPool.class),
 mock(MiningParameters.class),
 mock(PoWMiningCoordinator.class),
 new NoOpMetricsSystem(),
 supportedCapabilities,
 Optional.of(mock(AccountLocalConfigPermissioningController.class)),
 Optional.of(mock(NodeLocalConfigPermissioningController.class)),
 DEFAULT_RPC_APIS,
 mock(PrivacyParameters.class),
 mock(JsonRpcConfiguration.class),
 mock(WebSocketConfiguration.class),
 mock(MetricsConfiguration.class),
 natService,
 Collections.emptyMap(),
 folder,
 mock(EthPeers.class),
 vertx,
 mock(ApiConfiguration.class),
-Optional.empty()));
+Optional.empty());

 System.setProperty("javax.net.ssl.trustStore", CLIENT_AS_CA_CERT.getKeyStoreFile().toString());
 System.setProperty(
@@ -20,7 +20,6 @@ import static org.hyperledger.besu.ethereum.api.tls.KnownClientFileUtil.writeToK
 import static org.hyperledger.besu.ethereum.api.tls.TlsClientAuthConfiguration.Builder.aTlsClientAuthConfiguration;
 import static org.hyperledger.besu.ethereum.api.tls.TlsConfiguration.Builder.aTlsConfiguration;
 import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.spy;

 import org.hyperledger.besu.config.StubGenesisConfigOptions;
 import org.hyperledger.besu.ethereum.ProtocolContext;
@@ -100,38 +99,37 @@ class JsonRpcHttpServiceTlsMisconfigurationTest {
 supportedCapabilities.add(EthProtocol.ETH63);

 rpcMethods =
-spy(
 new JsonRpcMethodsFactory()
 .methods(
 CLIENT_VERSION,
 CHAIN_ID,
 new StubGenesisConfigOptions(),
 peerDiscoveryMock,
 blockchainQueries,
 synchronizer,
 MainnetProtocolSchedule.fromConfig(
 new StubGenesisConfigOptions().constantinopleBlock(0).chainId(CHAIN_ID)),
 mock(ProtocolContext.class),
 mock(FilterManager.class),
 mock(TransactionPool.class),
 mock(MiningParameters.class),
 mock(PoWMiningCoordinator.class),
 new NoOpMetricsSystem(),
 supportedCapabilities,
 Optional.of(mock(AccountLocalConfigPermissioningController.class)),
 Optional.of(mock(NodeLocalConfigPermissioningController.class)),
 DEFAULT_RPC_APIS,
 mock(PrivacyParameters.class),
 mock(JsonRpcConfiguration.class),
 mock(WebSocketConfiguration.class),
 mock(MetricsConfiguration.class),
 natService,
 Collections.emptyMap(),
 tempDir.getRoot(),
 mock(EthPeers.class),
 vertx,
 mock(ApiConfiguration.class),
-Optional.empty()));
+Optional.empty());
 }

 @AfterEach
@@ -20,7 +20,6 @@ import static org.assertj.core.api.Assertions.assertThat;
 import static org.hyperledger.besu.ethereum.api.jsonrpc.RpcApis.DEFAULT_RPC_APIS;
 import static org.hyperledger.besu.ethereum.api.tls.TlsConfiguration.Builder.aTlsConfiguration;
 import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.spy;

 import org.hyperledger.besu.config.StubGenesisConfigOptions;
 import org.hyperledger.besu.ethereum.ProtocolContext;
@@ -101,38 +100,37 @@ public class JsonRpcHttpServiceTlsTest {
 supportedCapabilities.add(EthProtocol.ETH63);

 rpcMethods =
-spy(
 new JsonRpcMethodsFactory()
 .methods(
 CLIENT_VERSION,
 CHAIN_ID,
 new StubGenesisConfigOptions(),
 peerDiscoveryMock,
 blockchainQueries,
 synchronizer,
 MainnetProtocolSchedule.fromConfig(
 new StubGenesisConfigOptions().constantinopleBlock(0).chainId(CHAIN_ID)),
 mock(ProtocolContext.class),
 mock(FilterManager.class),
 mock(TransactionPool.class),
 mock(MiningParameters.class),
 mock(PoWMiningCoordinator.class),
 new NoOpMetricsSystem(),
 supportedCapabilities,
 Optional.of(mock(AccountLocalConfigPermissioningController.class)),
 Optional.of(mock(NodeLocalConfigPermissioningController.class)),
 DEFAULT_RPC_APIS,
 mock(PrivacyParameters.class),
 mock(JsonRpcConfiguration.class),
 mock(WebSocketConfiguration.class),
 mock(MetricsConfiguration.class),
 natService,
 Collections.emptyMap(),
 folder,
 mock(EthPeers.class),
 vertx,
 mock(ApiConfiguration.class),
-Optional.empty()));
+Optional.empty());
 service = createJsonRpcHttpService(createJsonRpcConfig());
 service.start().join();
 baseUrl = service.url();
@@ -18,11 +18,12 @@ import static org.assertj.core.api.Assertions.assertThat;
 import static org.mockito.ArgumentMatchers.any;
 import static org.mockito.ArgumentMatchers.eq;
 import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.spy;
 import static org.mockito.Mockito.when;

 import org.hyperledger.besu.ethereum.api.jsonrpc.internal.JsonRpcRequest;
 import org.hyperledger.besu.ethereum.api.jsonrpc.internal.JsonRpcRequestContext;
+import org.hyperledger.besu.ethereum.api.jsonrpc.internal.processor.BlockTracer;
+import org.hyperledger.besu.ethereum.api.jsonrpc.internal.processor.Tracer;
 import org.hyperledger.besu.ethereum.api.jsonrpc.internal.processor.TransactionTracer;
 import org.hyperledger.besu.ethereum.api.jsonrpc.internal.response.JsonRpcSuccessResponse;
 import org.hyperledger.besu.ethereum.api.query.BlockchainQueries;
@@ -30,28 +31,23 @@ import org.hyperledger.besu.ethereum.chain.Blockchain;
 import org.hyperledger.besu.ethereum.core.Block;
 import org.hyperledger.besu.ethereum.core.BlockDataGenerator;
 import org.hyperledger.besu.ethereum.core.MutableWorldState;
-import org.hyperledger.besu.ethereum.worldstate.WorldStateArchive;

 import java.nio.file.Path;
-import java.util.ArrayList;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Optional;
+import java.util.function.Function;

 import org.junit.jupiter.api.Test;
 import org.junit.jupiter.api.io.TempDir;
-import org.mockito.Answers;

 public class DebugStandardTraceBlockToFileTest {

 // this tempDir is deliberately static
 @TempDir private static Path folder;

-private final WorldStateArchive archive =
-mock(WorldStateArchive.class, Answers.RETURNS_DEEP_STUBS);
 private final Blockchain blockchain = mock(Blockchain.class);
-private final BlockchainQueries blockchainQueries =
-spy(new BlockchainQueries(blockchain, archive));
+private final BlockchainQueries blockchainQueries = mock(BlockchainQueries.class);
 private final TransactionTracer transactionTracer = mock(TransactionTracer.class);
 private final DebugStandardTraceBlockToFile debugStandardTraceBlockToFile =
 new DebugStandardTraceBlockToFile(() -> transactionTracer, blockchainQueries, folder);
@@ -76,20 +72,26 @@ public class DebugStandardTraceBlockToFileTest {
 new JsonRpcRequestContext(
 new JsonRpcRequest("2.0", "debug_standardTraceBlockToFile", params));

-final List<String> paths = new ArrayList<>();
-paths.add("path-1");
+final List<String> paths = List.of("path-1");

-when(blockchainQueries.getBlockchain()).thenReturn(blockchain);

 when(blockchain.getBlockByHash(block.getHash())).thenReturn(Optional.of(block));
 when(blockchain.getBlockHeader(genesis.getHash())).thenReturn(Optional.of(genesis.getHeader()));
+when(blockchainQueries.getBlockchain()).thenReturn(blockchain);

+when(blockchainQueries.getAndMapWorldState(any(), any()))
+.thenAnswer(
+invocationOnMock -> {
+Function<Tracer.TraceableState, ? extends Optional<BlockTracer>> mapper =
+invocationOnMock.getArgument(1);
+return mapper.apply(mock(Tracer.TraceableState.class));
+});

 when(transactionTracer.traceTransactionToFile(
 any(MutableWorldState.class), eq(block.getHash()), any(), any()))
 .thenReturn(paths);
 final JsonRpcSuccessResponse response =
 (JsonRpcSuccessResponse) debugStandardTraceBlockToFile.response(request);
-final List result = (ArrayList) response.getResult();
+final List result = (List) response.getResult();

 assertThat(result.size()).isEqualTo(1);
 }
@@ -18,9 +18,8 @@ import static java.util.Arrays.asList;
 import static java.util.Collections.singletonList;
 import static org.assertj.core.api.Assertions.assertThat;
 import static org.mockito.ArgumentMatchers.any;
-import static org.mockito.Mockito.doAnswer;
+import static org.mockito.ArgumentMatchers.eq;
 import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.spy;
 import static org.mockito.Mockito.when;

 import org.hyperledger.besu.datatypes.Wei;
@@ -35,32 +34,25 @@ import org.hyperledger.besu.ethereum.api.jsonrpc.internal.response.JsonRpcSucces
 import org.hyperledger.besu.ethereum.api.jsonrpc.internal.response.RpcErrorType;
 import org.hyperledger.besu.ethereum.api.query.BlockWithMetadata;
 import org.hyperledger.besu.ethereum.api.query.BlockchainQueries;
-import org.hyperledger.besu.ethereum.chain.Blockchain;
 import org.hyperledger.besu.ethereum.core.Block;
 import org.hyperledger.besu.ethereum.core.BlockDataGenerator;
 import org.hyperledger.besu.ethereum.debug.TraceFrame;
 import org.hyperledger.besu.ethereum.mainnet.MainnetBlockHeaderFunctions;
 import org.hyperledger.besu.ethereum.processing.TransactionProcessingResult;
-import org.hyperledger.besu.ethereum.worldstate.WorldStateArchive;

 import java.util.Collection;
 import java.util.Collections;
 import java.util.Optional;
 import java.util.OptionalLong;
+import java.util.function.Function;

 import org.apache.tuweni.bytes.Bytes;
 import org.junit.jupiter.api.Test;
-import org.mockito.Answers;
-import org.mockito.Mockito;

 public class DebugTraceBlockTest {

 private final BlockTracer blockTracer = mock(BlockTracer.class);
-private final WorldStateArchive archive =
-mock(WorldStateArchive.class, Answers.RETURNS_DEEP_STUBS);
-private final Blockchain blockchain = mock(Blockchain.class);
-private final BlockchainQueries blockchainQueries =
-spy(new BlockchainQueries(blockchain, archive));
+private final BlockchainQueries blockchainQueries = mock(BlockchainQueries.class);
 private final DebugTraceBlock debugTraceBlock =
 new DebugTraceBlock(() -> blockTracer, new MainnetBlockHeaderFunctions(), blockchainQueries);

@@ -127,22 +119,25 @@ public class DebugTraceBlockTest {
 when(transaction2Trace.getResult()).thenReturn(transaction2Result);
 when(transaction1Result.getOutput()).thenReturn(Bytes.fromHexString("1234"));
 when(transaction2Result.getOutput()).thenReturn(Bytes.fromHexString("1234"));
-when(blockTracer.trace(any(Tracer.TraceableState.class), Mockito.eq(block), any()))
+when(blockTracer.trace(any(Tracer.TraceableState.class), eq(block), any()))
 .thenReturn(Optional.of(blockTrace));

-when(blockchain.getBlockHeader(parentBlock.getHash()))
-.thenReturn(Optional.of(parentBlock.getHeader()));
-doAnswer(
-invocation ->
-Optional.of(
-new BlockWithMetadata<>(
-parentBlock.getHeader(),
-Collections.emptyList(),
-Collections.emptyList(),
-parentBlock.getHeader().getDifficulty(),
-parentBlock.calculateSize())))
-.when(blockchainQueries)
-.blockByHash(parentBlock.getHash());
+when(blockchainQueries.blockByHash(parentBlock.getHash()))
+.thenReturn(
+Optional.of(
+new BlockWithMetadata<>(
+parentBlock.getHeader(),
+Collections.emptyList(),
+Collections.emptyList(),
+parentBlock.getHeader().getDifficulty(),
+parentBlock.calculateSize())));
+when(blockchainQueries.getAndMapWorldState(eq(parentBlock.getHash()), any()))
+.thenAnswer(
+invocationOnMock -> {
+Function<Tracer.TraceableState, ? extends Optional<BlockTracer>> mapper =
+invocationOnMock.getArgument(1);
+return mapper.apply(mock(Tracer.TraceableState.class));
+});

 final JsonRpcSuccessResponse response =
 (JsonRpcSuccessResponse) debugTraceBlock.response(request);
@@ -136,7 +136,7 @@ public class BlockTransactionSelector {
 this.pluginTransactionSelector = pluginTransactionSelector;
 this.pluginOperationTracer = pluginTransactionSelector.getOperationTracer();
 blockWorldStateUpdater = worldState.updater();
-blockTxsSelectionMaxTime = miningParameters.getUnstable().getBlockTxsSelectionMaxTime();
+blockTxsSelectionMaxTime = miningParameters.getBlockTxsSelectionMaxTime();
 }

 private List<AbstractTransactionSelector> createTransactionSelectors(
@@ -17,7 +17,7 @@ package org.hyperledger.besu.ethereum.blockcreation;
 import static org.assertj.core.api.Assertions.assertThat;
 import static org.assertj.core.api.Assertions.entry;
 import static org.awaitility.Awaitility.await;
-import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;
+import static org.hyperledger.besu.ethereum.core.MiningParameters.DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;
 import static org.hyperledger.besu.plugin.data.TransactionSelectionResult.BLOCK_SELECTION_TIMEOUT;
 import static org.hyperledger.besu.plugin.data.TransactionSelectionResult.PRIORITY_FEE_PER_GAS_BELOW_CURRENT_MIN;
 import static org.hyperledger.besu.plugin.data.TransactionSelectionResult.SELECTED;
@@ -54,7 +54,6 @@ import org.hyperledger.besu.ethereum.core.BlockHeaderTestFixture;
 import org.hyperledger.besu.ethereum.core.Difficulty;
 import org.hyperledger.besu.ethereum.core.ImmutableMiningParameters;
 import org.hyperledger.besu.ethereum.core.ImmutableMiningParameters.MutableInitValues;
-import org.hyperledger.besu.ethereum.core.ImmutableMiningParameters.Unstable;
 import org.hyperledger.besu.ethereum.core.InMemoryKeyValueStorageProvider;
 import org.hyperledger.besu.ethereum.core.MiningParameters;
 import org.hyperledger.besu.ethereum.core.MutableWorldState;
@@ -85,7 +84,7 @@ import org.hyperledger.besu.plugin.services.txselection.PluginTransactionSelecto
 import org.hyperledger.besu.plugin.services.txselection.PluginTransactionSelectorFactory;
 import org.hyperledger.besu.plugin.services.txselection.TransactionEvaluationContext;
 import org.hyperledger.besu.services.kvstore.InMemoryKeyValueStorage;
-import org.hyperledger.besu.util.number.Percentage;
+import org.hyperledger.besu.util.number.PositiveNumber;

 import java.math.BigInteger;
 import java.time.Instant;
@@ -960,8 +959,8 @@ public abstract class AbstractBlockTransactionSelectorTest {

 final ProcessableBlockHeader blockHeader = createBlock(301_000);
 final Address miningBeneficiary = AddressHelpers.ofValue(1);
-final int poaMinBlockTime = 1;
-final long blockTxsSelectionMaxTime = 750;
+final int poaGenesisBlockPeriod = 1;
+final int blockTxsSelectionMaxTime = 750;

 final List<Transaction> transactionsToInject = new ArrayList<>(3);
 for (int i = 0; i < 2; i++) {
@@ -991,9 +990,14 @@ public abstract class AbstractBlockTransactionSelectorTest {
 createBlockSelectorAndSetupTxPool(
 isPoa
 ? createMiningParameters(
-Wei.ZERO, MIN_OCCUPANCY_100_PERCENT, poaMinBlockTime, Percentage.fromInt(75))
+Wei.ZERO,
+MIN_OCCUPANCY_100_PERCENT,
+poaGenesisBlockPeriod,
+PositiveNumber.fromInt(75))
 : createMiningParameters(
-Wei.ZERO, MIN_OCCUPANCY_100_PERCENT, blockTxsSelectionMaxTime),
+Wei.ZERO,
+MIN_OCCUPANCY_100_PERCENT,
+PositiveNumber.fromInt(blockTxsSelectionMaxTime)),
 transactionProcessor,
 blockHeader,
 miningBeneficiary,
@@ -1180,33 +1184,32 @@ public abstract class AbstractBlockTransactionSelectorTest {
 }

 protected MiningParameters createMiningParameters(
-final Wei minGasPrice, final double minBlockOccupancyRatio, final long txsSelectionMaxTime) {
+final Wei minGasPrice,
+final double minBlockOccupancyRatio,
+final PositiveNumber txsSelectionMaxTime) {
 return ImmutableMiningParameters.builder()
 .mutableInitValues(
 MutableInitValues.builder()
 .minTransactionGasPrice(minGasPrice)
 .minBlockOccupancyRatio(minBlockOccupancyRatio)
 .build())
-.unstable(Unstable.builder().nonPoaBlockTxsSelectionMaxTime(txsSelectionMaxTime).build())
+.nonPoaBlockTxsSelectionMaxTime(txsSelectionMaxTime)
 .build();
 }

 protected MiningParameters createMiningParameters(
 final Wei minGasPrice,
 final double minBlockOccupancyRatio,
-final int minBlockTime,
-final Percentage minBlockTimePercentage) {
+final int genesisBlockPeriodSeconds,
+final PositiveNumber minBlockTimePercentage) {
 return ImmutableMiningParameters.builder()
 .mutableInitValues(
 MutableInitValues.builder()
 .minTransactionGasPrice(minGasPrice)
 .minBlockOccupancyRatio(minBlockOccupancyRatio)
 .build())
-.unstable(
-Unstable.builder()
-.minBlockTime(minBlockTime)
-.poaBlockTxsSelectionMaxTime(minBlockTimePercentage)
-.build())
+.genesisBlockPeriodSeconds(genesisBlockPeriodSeconds)
+.poaBlockTxsSelectionMaxTime(minBlockTimePercentage)
 .build();
 }

@@ -16,7 +16,7 @@ package org.hyperledger.besu.ethereum.blockcreation;

 import static org.assertj.core.api.Assertions.assertThat;
 import static org.assertj.core.api.Assertions.entry;
-import static org.hyperledger.besu.ethereum.core.MiningParameters.Unstable.DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;
+import static org.hyperledger.besu.ethereum.core.MiningParameters.DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;
 import static org.mockito.Mockito.mock;

 import org.hyperledger.besu.config.GenesisConfigFile;
@@ -15,6 +15,7 @@
 package org.hyperledger.besu.ethereum.chain;

 import static java.util.Collections.emptyList;
+import static org.hyperledger.besu.ethereum.trie.common.GenesisWorldStateProvider.createGenesisWorldState;

 import org.hyperledger.besu.config.GenesisAllocation;
 import org.hyperledger.besu.config.GenesisConfigFile;
@@ -32,14 +33,10 @@ import org.hyperledger.besu.ethereum.core.MutableWorldState;
 import org.hyperledger.besu.ethereum.core.Withdrawal;
 import org.hyperledger.besu.ethereum.mainnet.ProtocolSchedule;
 import org.hyperledger.besu.ethereum.mainnet.ScheduleBasedBlockHeaderFunctions;
-import org.hyperledger.besu.ethereum.storage.keyvalue.WorldStatePreimageKeyValueStorage;
-import org.hyperledger.besu.ethereum.trie.forest.storage.ForestWorldStateKeyValueStorage;
-import org.hyperledger.besu.ethereum.trie.forest.worldview.ForestMutableWorldState;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
 import org.hyperledger.besu.evm.account.MutableAccount;
-import org.hyperledger.besu.evm.internal.EvmConfiguration;
 import org.hyperledger.besu.evm.log.LogsBloomFilter;
 import org.hyperledger.besu.evm.worldstate.WorldUpdater;
-import org.hyperledger.besu.services.kvstore.InMemoryKeyValueStorage;

 import java.math.BigInteger;
 import java.util.HashMap;
@@ -77,6 +74,21 @@ public final class GenesisState {
|
|||||||
return fromConfig(GenesisConfigFile.fromConfig(json), protocolSchedule);
|
return fromConfig(GenesisConfigFile.fromConfig(json), protocolSchedule);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Construct a {@link GenesisState} from a JSON string.
|
||||||
|
*
|
||||||
|
* @param dataStorageFormat A {@link DataStorageFormat} describing the storage format to use
|
||||||
|
* @param json A JSON string describing the genesis block
|
||||||
|
* @param protocolSchedule A protocol Schedule associated with
|
||||||
|
* @return A new {@link GenesisState}.
|
||||||
|
*/
|
||||||
|
public static GenesisState fromJson(
|
||||||
|
final DataStorageFormat dataStorageFormat,
|
||||||
|
final String json,
|
||||||
|
final ProtocolSchedule protocolSchedule) {
|
||||||
|
return fromConfig(dataStorageFormat, GenesisConfigFile.fromConfig(json), protocolSchedule);
|
||||||
|
}
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Construct a {@link GenesisState} from a JSON object.
|
* Construct a {@link GenesisState} from a JSON object.
|
||||||
*
|
*
|
||||||
@@ -86,10 +98,28 @@ public final class GenesisState {
|
|||||||
*/
|
*/
|
||||||
public static GenesisState fromConfig(
|
public static GenesisState fromConfig(
|
||||||
final GenesisConfigFile config, final ProtocolSchedule protocolSchedule) {
|
final GenesisConfigFile config, final ProtocolSchedule protocolSchedule) {
|
||||||
|
return fromConfig(DataStorageFormat.FOREST, config, protocolSchedule);
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Construct a {@link GenesisState} from a JSON object.
|
||||||
|
*
|
||||||
|
* @param dataStorageFormat A {@link DataStorageFormat} describing the storage format to use
|
||||||
|
* @param config A {@link GenesisConfigFile} describing the genesis block.
|
||||||
|
* @param protocolSchedule A protocol Schedule associated with
|
||||||
|
* @return A new {@link GenesisState}.
|
||||||
|
*/
|
||||||
|
public static GenesisState fromConfig(
|
||||||
|
final DataStorageFormat dataStorageFormat,
|
||||||
|
final GenesisConfigFile config,
|
||||||
|
final ProtocolSchedule protocolSchedule) {
|
||||||
final List<GenesisAccount> genesisAccounts = parseAllocations(config).toList();
|
final List<GenesisAccount> genesisAccounts = parseAllocations(config).toList();
|
||||||
final Block block =
|
final Block block =
|
||||||
new Block(
|
new Block(
|
||||||
buildHeader(config, calculateGenesisStateHash(genesisAccounts), protocolSchedule),
|
buildHeader(
|
||||||
|
config,
|
||||||
|
calculateGenesisStateHash(dataStorageFormat, genesisAccounts),
|
||||||
|
protocolSchedule),
|
||||||
buildBody(config));
|
buildBody(config));
|
||||||
return new GenesisState(block, genesisAccounts);
|
return new GenesisState(block, genesisAccounts);
|
||||||
}
|
}
|
||||||
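The new overloads above only thread the chosen world-state format through to the genesis state-hash calculation; the existing two-argument entry points keep their behaviour by delegating to `FOREST`. A minimal caller-side sketch (illustrative only; `genesisJson` and `protocolSchedule` are assumed to be supplied by the embedding node setup and are not defined in this commit):

```java
import org.hyperledger.besu.config.GenesisConfigFile;
import org.hyperledger.besu.ethereum.chain.GenesisState;
import org.hyperledger.besu.ethereum.mainnet.ProtocolSchedule;
import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;

// Sketch of how the new and old GenesisState factory methods relate.
class GenesisStateUsageSketch {
  static GenesisState bonsaiGenesis(final String genesisJson, final ProtocolSchedule schedule) {
    final GenesisConfigFile config = GenesisConfigFile.fromConfig(genesisJson);
    // New overload: the genesis state root is computed with the requested storage format.
    return GenesisState.fromConfig(DataStorageFormat.BONSAI, config, schedule);
  }

  static GenesisState defaultGenesis(final String genesisJson, final ProtocolSchedule schedule) {
    // Existing two-argument overload is unchanged for callers and delegates to FOREST.
    return GenesisState.fromConfig(GenesisConfigFile.fromConfig(genesisJson), schedule);
  }
}
```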
@@ -133,15 +163,14 @@ public final class GenesisState {
     target.persist(rootHeader);
   }
 
-  private static Hash calculateGenesisStateHash(final List<GenesisAccount> genesisAccounts) {
-    final ForestWorldStateKeyValueStorage stateStorage =
-        new ForestWorldStateKeyValueStorage(new InMemoryKeyValueStorage());
-    final WorldStatePreimageKeyValueStorage preimageStorage =
-        new WorldStatePreimageKeyValueStorage(new InMemoryKeyValueStorage());
-    final MutableWorldState worldState =
-        new ForestMutableWorldState(stateStorage, preimageStorage, EvmConfiguration.DEFAULT);
-    writeAccountsTo(worldState, genesisAccounts, null);
-    return worldState.rootHash();
+  private static Hash calculateGenesisStateHash(
+      final DataStorageFormat dataStorageFormat, final List<GenesisAccount> genesisAccounts) {
+    try (var worldState = createGenesisWorldState(dataStorageFormat)) {
+      writeAccountsTo(worldState, genesisAccounts, null);
+      return worldState.rootHash();
+    } catch (Exception e) {
+      throw new RuntimeException(e);
+    }
   }
 
   private static BlockHeader buildHeader(
@@ -16,7 +16,7 @@ package org.hyperledger.besu.ethereum.core;
 
 import org.hyperledger.besu.datatypes.Address;
 import org.hyperledger.besu.datatypes.Wei;
-import org.hyperledger.besu.util.number.Percentage;
+import org.hyperledger.besu.util.number.PositiveNumber;
 
 import java.time.Duration;
 import java.util.Objects;
@@ -32,6 +32,10 @@ import org.immutables.value.Value;
 @Value.Immutable
 @Value.Enclosing
 public abstract class MiningParameters {
+  public static final PositiveNumber DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME =
+      PositiveNumber.fromInt((int) Duration.ofSeconds(5).toMillis());
+  public static final PositiveNumber DEFAULT_POA_BLOCK_TXS_SELECTION_MAX_TIME =
+      PositiveNumber.fromInt(75);
   public static final MiningParameters MINING_DISABLED =
       ImmutableMiningParameters.builder()
           .mutableInitValues(
@@ -130,6 +134,28 @@ public abstract class MiningParameters {
     return 8008;
   }
 
+  @Value.Default
+  public PositiveNumber getNonPoaBlockTxsSelectionMaxTime() {
+    return DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;
+  }
+
+  @Value.Default
+  public PositiveNumber getPoaBlockTxsSelectionMaxTime() {
+    return DEFAULT_POA_BLOCK_TXS_SELECTION_MAX_TIME;
+  }
+
+  public abstract OptionalInt getGenesisBlockPeriodSeconds();
+
+  @Value.Derived
+  public long getBlockTxsSelectionMaxTime() {
+    if (getGenesisBlockPeriodSeconds().isPresent()) {
+      return (TimeUnit.SECONDS.toMillis(getGenesisBlockPeriodSeconds().getAsInt())
+              * getPoaBlockTxsSelectionMaxTime().getValue())
+          / 100;
+    }
+    return getNonPoaBlockTxsSelectionMaxTime().getValue();
+  }
+
   @Value.Default
   protected MutableRuntimeValues getMutableRuntimeValues() {
     return new MutableRuntimeValues(getMutableInitValues());
|
|||||||
int DEFAULT_MAX_OMMERS_DEPTH = 8;
|
int DEFAULT_MAX_OMMERS_DEPTH = 8;
|
||||||
long DEFAULT_POS_BLOCK_CREATION_MAX_TIME = Duration.ofSeconds(12).toMillis();
|
long DEFAULT_POS_BLOCK_CREATION_MAX_TIME = Duration.ofSeconds(12).toMillis();
|
||||||
long DEFAULT_POS_BLOCK_CREATION_REPETITION_MIN_DURATION = Duration.ofMillis(500).toMillis();
|
long DEFAULT_POS_BLOCK_CREATION_REPETITION_MIN_DURATION = Duration.ofMillis(500).toMillis();
|
||||||
long DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME = Duration.ofSeconds(5).toMillis();
|
|
||||||
Percentage DEFAULT_POA_BLOCK_TXS_SELECTION_MAX_TIME = Percentage.fromInt(75);
|
|
||||||
|
|
||||||
MiningParameters.Unstable DEFAULT = ImmutableMiningParameters.Unstable.builder().build();
|
MiningParameters.Unstable DEFAULT = ImmutableMiningParameters.Unstable.builder().build();
|
||||||
|
|
||||||
@@ -305,27 +329,5 @@ public abstract class MiningParameters {
|
|||||||
default String getStratumExtranonce() {
|
default String getStratumExtranonce() {
|
||||||
return "080c";
|
return "080c";
|
||||||
}
|
}
|
||||||
|
|
||||||
@Value.Default
|
|
||||||
default long getNonPoaBlockTxsSelectionMaxTime() {
|
|
||||||
return DEFAULT_NON_POA_BLOCK_TXS_SELECTION_MAX_TIME;
|
|
||||||
}
|
|
||||||
|
|
||||||
@Value.Default
|
|
||||||
default Percentage getPoaBlockTxsSelectionMaxTime() {
|
|
||||||
return DEFAULT_POA_BLOCK_TXS_SELECTION_MAX_TIME;
|
|
||||||
}
|
|
||||||
|
|
||||||
OptionalInt getMinBlockTime();
|
|
||||||
|
|
||||||
@Value.Derived
|
|
||||||
default long getBlockTxsSelectionMaxTime() {
|
|
||||||
if (getMinBlockTime().isPresent()) {
|
|
||||||
return (TimeUnit.SECONDS.toMillis(getMinBlockTime().getAsInt())
|
|
||||||
* getPoaBlockTxsSelectionMaxTime().getValue())
|
|
||||||
/ 100;
|
|
||||||
}
|
|
||||||
return getNonPoaBlockTxsSelectionMaxTime();
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -19,7 +19,6 @@ import static com.google.common.base.Preconditions.checkArgument;
|
|||||||
/** Specification for the block gasLimit. */
|
/** Specification for the block gasLimit. */
|
||||||
public abstract class AbstractGasLimitSpecification {
|
public abstract class AbstractGasLimitSpecification {
|
||||||
|
|
||||||
public static final long DEFAULT_MAX_CONSTANT_ADMUSTMENT_INCREMENT = 1024L;
|
|
||||||
public static final long DEFAULT_MIN_GAS_LIMIT = 5000L;
|
public static final long DEFAULT_MIN_GAS_LIMIT = 5000L;
|
||||||
public static final long DEFAULT_MAX_GAS_LIMIT = Long.MAX_VALUE;
|
public static final long DEFAULT_MAX_GAS_LIMIT = Long.MAX_VALUE;
|
||||||
|
|
||||||
|
|||||||
@@ -23,16 +23,13 @@ public class FrontierTargetingGasLimitCalculator extends AbstractGasLimitSpecifi
     implements GasLimitCalculator {
   private static final Logger LOG =
       LoggerFactory.getLogger(FrontierTargetingGasLimitCalculator.class);
-  private final long maxConstantAdjustmentIncrement;
 
   public FrontierTargetingGasLimitCalculator() {
-    this(DEFAULT_MAX_CONSTANT_ADMUSTMENT_INCREMENT, DEFAULT_MIN_GAS_LIMIT, DEFAULT_MAX_GAS_LIMIT);
+    this(DEFAULT_MIN_GAS_LIMIT, DEFAULT_MAX_GAS_LIMIT);
   }
 
-  public FrontierTargetingGasLimitCalculator(
-      final long maxConstantAdjustmentIncrement, final long minGasLimit, final long maxGasLimit) {
+  public FrontierTargetingGasLimitCalculator(final long minGasLimit, final long maxGasLimit) {
     super(minGasLimit, maxGasLimit);
-    this.maxConstantAdjustmentIncrement = maxConstantAdjustmentIncrement;
   }
 
   @Override
@@ -55,8 +52,7 @@ public class FrontierTargetingGasLimitCalculator extends AbstractGasLimitSpecifi
   }
 
   private long adjustAmount(final long currentGasLimit) {
-    final long maxProportionalAdjustmentLimit = Math.max(deltaBound(currentGasLimit) - 1, 0);
-    return Math.min(maxConstantAdjustmentIncrement, maxProportionalAdjustmentLimit);
+    return Math.max(deltaBound(currentGasLimit) - 1, 0);
  }
 
   protected long safeAddAtMost(final long gasLimit) {
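Numerically, dropping the constant 1024-gas clamp lets the per-block gas limit step scale with the current limit. A sketch under the assumption that `deltaBound(gasLimit)` is the usual gasLimit / 1024 targeting bound; that helper is not shown in this diff, so the divisor here is an assumption:

```java
// Worked example of the new adjustAmount behaviour (hypothetical deltaBound implementation).
class GasLimitAdjustmentExample {
  static long deltaBound(long gasLimit) {
    return gasLimit / 1024; // assumed bound divisor, not part of this diff
  }

  static long adjustAmount(long currentGasLimit) {
    return Math.max(deltaBound(currentGasLimit) - 1, 0); // new behaviour
  }

  public static void main(String[] args) {
    // With a 30M gas limit the step can now be up to 29,295 gas per block,
    // where the removed constant previously clamped it to 1,024 gas.
    System.out.println(adjustAmount(30_000_000L)); // 29295
  }
}
```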
@@ -27,21 +27,15 @@ public class LondonTargetingGasLimitCalculator extends FrontierTargetingGasLimit
 
   public LondonTargetingGasLimitCalculator(
       final long londonForkBlock, final BaseFeeMarket feeMarket) {
-    this(
-        DEFAULT_MAX_CONSTANT_ADMUSTMENT_INCREMENT,
-        DEFAULT_MIN_GAS_LIMIT,
-        DEFAULT_MAX_GAS_LIMIT,
-        londonForkBlock,
-        feeMarket);
+    this(DEFAULT_MIN_GAS_LIMIT, DEFAULT_MAX_GAS_LIMIT, londonForkBlock, feeMarket);
   }
 
   public LondonTargetingGasLimitCalculator(
-      final long maxConstantAdjustmentIncrement,
       final long minGasLimit,
       final long maxGasLimit,
       final long londonForkBlock,
       final BaseFeeMarket feeMarket) {
-    super(maxConstantAdjustmentIncrement, minGasLimit, maxGasLimit);
+    super(minGasLimit, maxGasLimit);
     this.londonForkBlock = londonForkBlock;
     this.feeMarket = feeMarket;
   }
@@ -17,7 +17,7 @@ package org.hyperledger.besu.ethereum.storage;
 import org.hyperledger.besu.ethereum.chain.BlockchainStorage;
 import org.hyperledger.besu.ethereum.chain.VariablesStorage;
 import org.hyperledger.besu.ethereum.mainnet.ProtocolSchedule;
-import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.WorldStatePreimageStorage;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
 import org.hyperledger.besu.plugin.services.storage.KeyValueStorage;
@@ -34,7 +34,7 @@ public interface StorageProvider extends Closeable {
   BlockchainStorage createBlockchainStorage(
       ProtocolSchedule protocolSchedule, VariablesStorage variablesStorage);
 
-  WorldStateStorage createWorldStateStorage(DataStorageFormat dataStorageFormat);
+  WorldStateStorage createWorldStateStorage(DataStorageConfiguration dataStorageFormat);
 
   WorldStatePreimageStorage createWorldStatePreimageStorage();
 
@@ -21,6 +21,7 @@ import org.hyperledger.besu.ethereum.mainnet.ScheduleBasedBlockHeaderFunctions;
 import org.hyperledger.besu.ethereum.storage.StorageProvider;
 import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
 import org.hyperledger.besu.ethereum.trie.forest.storage.ForestWorldStateKeyValueStorage;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
 import org.hyperledger.besu.ethereum.worldstate.WorldStatePreimageStorage;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
@@ -75,9 +76,10 @@ public class KeyValueStorageProvider implements StorageProvider {
   }
 
   @Override
-  public WorldStateStorage createWorldStateStorage(final DataStorageFormat dataStorageFormat) {
-    if (dataStorageFormat.equals(DataStorageFormat.BONSAI)) {
-      return new BonsaiWorldStateKeyValueStorage(this, metricsSystem);
+  public WorldStateStorage createWorldStateStorage(
+      final DataStorageConfiguration dataStorageConfiguration) {
+    if (dataStorageConfiguration.getDataStorageFormat().equals(DataStorageFormat.BONSAI)) {
+      return new BonsaiWorldStateKeyValueStorage(this, metricsSystem, dataStorageConfiguration);
     } else {
       return new ForestWorldStateKeyValueStorage(
           getStorageBySegmentIdentifier(KeyValueSegmentIdentifier.WORLD_STATE));
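For callers of the storage provider, the practical change is that the whole data-storage configuration is now passed instead of a bare format. A caller-side sketch, not part of the diff; `storageProvider` is assumed to be an already-constructed `KeyValueStorageProvider`:

```java
import org.hyperledger.besu.ethereum.storage.StorageProvider;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;

// Sketch of adapting to the new createWorldStateStorage signature.
class WorldStateStorageWiringSketch {
  static WorldStateStorage openWorldState(final StorageProvider storageProvider) {
    // With a Bonsai-format configuration this returns a BonsaiWorldStateKeyValueStorage,
    // otherwise a ForestWorldStateKeyValueStorage (see the if/else above).
    return storageProvider.createWorldStateStorage(DataStorageConfiguration.DEFAULT_CONFIG);
  }
}
```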
@@ -39,7 +39,6 @@ import org.hyperledger.besu.ethereum.worldstate.StateTrieAccountValue;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateArchive;
 import org.hyperledger.besu.evm.internal.EvmConfiguration;
 import org.hyperledger.besu.evm.worldstate.WorldState;
-import org.hyperledger.besu.metrics.ObservableMetricsSystem;
 import org.hyperledger.besu.plugin.BesuContext;
 import org.hyperledger.besu.plugin.services.trielogs.TrieLog;
 
@@ -73,13 +72,11 @@ public class BonsaiWorldStateProvider implements WorldStateArchive {
       final Blockchain blockchain,
       final Optional<Long> maxLayersToLoad,
       final CachedMerkleTrieLoader cachedMerkleTrieLoader,
-      final ObservableMetricsSystem metricsSystem,
       final BesuContext pluginContext,
       final EvmConfiguration evmConfiguration,
       final TrieLogPruner trieLogPruner) {
 
-    this.cachedWorldStorageManager =
-        new CachedWorldStorageManager(this, worldStateStorage, metricsSystem);
+    this.cachedWorldStorageManager = new CachedWorldStorageManager(this, worldStateStorage);
     // TODO: de-dup constructors
     this.trieLogManager =
         new TrieLogManager(
@@ -22,7 +22,6 @@ import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValu
 import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateLayerStorage;
 import org.hyperledger.besu.ethereum.trie.bonsai.worldview.BonsaiWorldState;
 import org.hyperledger.besu.evm.internal.EvmConfiguration;
-import org.hyperledger.besu.metrics.ObservableMetricsSystem;
 
 import java.util.ArrayList;
 import java.util.Comparator;
@@ -41,7 +40,6 @@ public class CachedWorldStorageManager
   public static final long RETAINED_LAYERS = 512; // at least 256 + typical rollbacks
   private static final Logger LOG = LoggerFactory.getLogger(CachedWorldStorageManager.class);
   private final BonsaiWorldStateProvider archive;
-  private final ObservableMetricsSystem metricsSystem;
   private final EvmConfiguration evmConfiguration;
 
   private final BonsaiWorldStateKeyValueStorage rootWorldStateStorage;
@@ -51,26 +49,18 @@ public class CachedWorldStorageManager
       final BonsaiWorldStateProvider archive,
       final BonsaiWorldStateKeyValueStorage worldStateStorage,
       final Map<Bytes32, CachedBonsaiWorldView> cachedWorldStatesByHash,
-      final ObservableMetricsSystem metricsSystem,
       final EvmConfiguration evmConfiguration) {
     worldStateStorage.subscribe(this);
     this.rootWorldStateStorage = worldStateStorage;
     this.cachedWorldStatesByHash = cachedWorldStatesByHash;
     this.archive = archive;
-    this.metricsSystem = metricsSystem;
     this.evmConfiguration = evmConfiguration;
   }
 
   public CachedWorldStorageManager(
       final BonsaiWorldStateProvider archive,
-      final BonsaiWorldStateKeyValueStorage worldStateStorage,
-      final ObservableMetricsSystem metricsSystem) {
-    this(
-        archive,
-        worldStateStorage,
-        new ConcurrentHashMap<>(),
-        metricsSystem,
-        EvmConfiguration.DEFAULT);
+      final BonsaiWorldStateKeyValueStorage worldStateStorage) {
+    this(archive, worldStateStorage, new ConcurrentHashMap<>(), EvmConfiguration.DEFAULT);
   }
 
   public synchronized void addCachedLayer(
@@ -92,8 +82,7 @@ public class CachedWorldStorageManager
           cachedBonsaiWorldView
               .get()
               .updateWorldStateStorage(
-                  new BonsaiSnapshotWorldStateKeyValueStorage(
-                      forWorldState.getWorldStateStorage(), metricsSystem));
+                  new BonsaiSnapshotWorldStateKeyValueStorage(forWorldState.getWorldStateStorage()));
         }
       } else {
         LOG.atDebug()
@@ -106,8 +95,7 @@ public class CachedWorldStorageManager
             blockHeader.getHash(),
             new CachedBonsaiWorldView(
                 blockHeader,
-                new BonsaiSnapshotWorldStateKeyValueStorage(
-                    forWorldState.getWorldStateStorage(), metricsSystem)));
+                new BonsaiSnapshotWorldStateKeyValueStorage(forWorldState.getWorldStateStorage())));
       } else {
         // otherwise, add the layer to the cache
         cachedWorldStatesByHash.put(
@@ -0,0 +1,65 @@
+/*
+ * Copyright Hyperledger Besu Contributors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+ * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations under the License.
+ *
+ * SPDX-License-Identifier: Apache-2.0
+ */
+package org.hyperledger.besu.ethereum.trie.bonsai.cache;
+
+import org.hyperledger.besu.datatypes.Hash;
+import org.hyperledger.besu.ethereum.core.BlockHeader;
+import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
+import org.hyperledger.besu.ethereum.trie.bonsai.worldview.BonsaiWorldState;
+
+import java.util.Optional;
+import java.util.function.Function;
+
+public class NoOpCachedWorldStorageManager extends CachedWorldStorageManager {
+
+  public NoOpCachedWorldStorageManager(
+      final BonsaiWorldStateKeyValueStorage bonsaiWorldStateKeyValueStorage) {
+    super(null, bonsaiWorldStateKeyValueStorage);
+  }
+
+  @Override
+  public synchronized void addCachedLayer(
+      final BlockHeader blockHeader,
+      final Hash worldStateRootHash,
+      final BonsaiWorldState forWorldState) {
+    // no cache
+  }
+
+  @Override
+  public boolean containWorldStateStorage(final Hash blockHash) {
+    return false;
+  }
+
+  @Override
+  public Optional<BonsaiWorldState> getWorldState(final Hash blockHash) {
+    return Optional.empty();
+  }
+
+  @Override
+  public Optional<BonsaiWorldState> getNearestWorldState(final BlockHeader blockHeader) {
+    return Optional.empty();
+  }
+
+  @Override
+  public Optional<BonsaiWorldState> getHeadWorldState(
+      final Function<Hash, Optional<BlockHeader>> hashBlockHeaderFunction) {
+    return Optional.empty();
+  }
+
+  @Override
+  public void reset() {
+    // world states are not re-used
+  }
+}
@@ -18,7 +18,6 @@ package org.hyperledger.besu.ethereum.trie.bonsai.storage;
 import org.hyperledger.besu.datatypes.Hash;
 import org.hyperledger.besu.datatypes.StorageSlotKey;
 import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage.BonsaiStorageSubscriber;
-import org.hyperledger.besu.metrics.ObservableMetricsSystem;
 import org.hyperledger.besu.plugin.services.exception.StorageException;
 import org.hyperledger.besu.plugin.services.storage.KeyValueStorage;
 import org.hyperledger.besu.plugin.services.storage.SnappableKeyValueStorage;
@@ -43,26 +42,19 @@ public class BonsaiSnapshotWorldStateKeyValueStorage extends BonsaiWorldStateKey
   public BonsaiSnapshotWorldStateKeyValueStorage(
       final BonsaiWorldStateKeyValueStorage parentWorldStateStorage,
       final SnappedKeyValueStorage segmentedWorldStateStorage,
-      final KeyValueStorage trieLogStorage,
-      final ObservableMetricsSystem metricsSystem) {
+      final KeyValueStorage trieLogStorage) {
     super(
-        parentWorldStateStorage.flatDbMode,
-        parentWorldStateStorage.flatDbStrategy,
-        segmentedWorldStateStorage,
-        trieLogStorage,
-        metricsSystem);
+        parentWorldStateStorage.flatDbStrategyProvider, segmentedWorldStateStorage, trieLogStorage);
     this.parentWorldStateStorage = parentWorldStateStorage;
     this.subscribeParentId = parentWorldStateStorage.subscribe(this);
   }
 
   public BonsaiSnapshotWorldStateKeyValueStorage(
-      final BonsaiWorldStateKeyValueStorage worldStateStorage,
-      final ObservableMetricsSystem metricsSystem) {
+      final BonsaiWorldStateKeyValueStorage worldStateStorage) {
     this(
         worldStateStorage,
         ((SnappableKeyValueStorage) worldStateStorage.composedWorldStateStorage).takeSnapshot(),
-        worldStateStorage.trieLogStorage,
-        metricsSystem);
+        worldStateStorage.trieLogStorage);
   }
 
   private boolean isClosedGet() {
@@ -78,7 +70,7 @@ public class BonsaiSnapshotWorldStateKeyValueStorage extends BonsaiWorldStateKey
     return new Updater(
         ((SnappedKeyValueStorage) composedWorldStateStorage).getSnapshotTransaction(),
         trieLogStorage.startTransaction(),
-        flatDbStrategy);
+        getFlatDbStrategy());
   }
 
   @Override
@@ -25,14 +25,14 @@ import org.hyperledger.besu.ethereum.storage.StorageProvider;
 import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueSegmentIdentifier;
 import org.hyperledger.besu.ethereum.trie.MerkleTrie;
 import org.hyperledger.besu.ethereum.trie.bonsai.storage.flat.FlatDbStrategy;
-import org.hyperledger.besu.ethereum.trie.bonsai.storage.flat.FullFlatDbStrategy;
-import org.hyperledger.besu.ethereum.trie.bonsai.storage.flat.PartialFlatDbStrategy;
+import org.hyperledger.besu.ethereum.trie.bonsai.storage.flat.FlatDbStrategyProvider;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
 import org.hyperledger.besu.ethereum.worldstate.FlatDbMode;
 import org.hyperledger.besu.ethereum.worldstate.StateTrieAccountValue;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
 import org.hyperledger.besu.evm.account.AccountStorageEntry;
-import org.hyperledger.besu.metrics.ObservableMetricsSystem;
+import org.hyperledger.besu.plugin.services.MetricsSystem;
 import org.hyperledger.besu.plugin.services.storage.KeyValueStorage;
 import org.hyperledger.besu.plugin.services.storage.KeyValueStorageTransaction;
 import org.hyperledger.besu.plugin.services.storage.SegmentedKeyValueStorage;
@@ -64,17 +64,11 @@ public class BonsaiWorldStateKeyValueStorage implements WorldStateStorage, AutoC
   public static final byte[] WORLD_BLOCK_HASH_KEY =
       "worldBlockHash".getBytes(StandardCharsets.UTF_8);
 
-  // 0x666C61744462537461747573
-  public static final byte[] FLAT_DB_MODE = "flatDbStatus".getBytes(StandardCharsets.UTF_8);
-
-  protected FlatDbMode flatDbMode;
-  protected FlatDbStrategy flatDbStrategy;
+  protected final FlatDbStrategyProvider flatDbStrategyProvider;
 
   protected final SegmentedKeyValueStorage composedWorldStateStorage;
   protected final KeyValueStorage trieLogStorage;
 
-  protected final ObservableMetricsSystem metricsSystem;
-
   private final AtomicBoolean shouldClose = new AtomicBoolean(false);
 
   protected final AtomicBoolean isClosed = new AtomicBoolean(false);
@@ -82,62 +76,27 @@ public class BonsaiWorldStateKeyValueStorage implements WorldStateStorage, AutoC
   protected final Subscribers<BonsaiStorageSubscriber> subscribers = Subscribers.create();
 
   public BonsaiWorldStateKeyValueStorage(
-      final StorageProvider provider, final ObservableMetricsSystem metricsSystem) {
+      final StorageProvider provider,
+      final MetricsSystem metricsSystem,
+      final DataStorageConfiguration dataStorageConfiguration) {
     this.composedWorldStateStorage =
         provider.getStorageBySegmentIdentifiers(
             List.of(
                 ACCOUNT_INFO_STATE, CODE_STORAGE, ACCOUNT_STORAGE_STORAGE, TRIE_BRANCH_STORAGE));
     this.trieLogStorage =
         provider.getStorageBySegmentIdentifier(KeyValueSegmentIdentifier.TRIE_LOG_STORAGE);
-    this.metricsSystem = metricsSystem;
-    loadFlatDbStrategy();
+    this.flatDbStrategyProvider =
+        new FlatDbStrategyProvider(metricsSystem, dataStorageConfiguration);
+    flatDbStrategyProvider.loadFlatDbStrategy(composedWorldStateStorage);
   }
 
   public BonsaiWorldStateKeyValueStorage(
-      final FlatDbMode flatDbMode,
-      final FlatDbStrategy flatDbStrategy,
+      final FlatDbStrategyProvider flatDbStrategyProvider,
       final SegmentedKeyValueStorage composedWorldStateStorage,
-      final KeyValueStorage trieLogStorage,
-      final ObservableMetricsSystem metricsSystem) {
-    this.flatDbMode = flatDbMode;
-    this.flatDbStrategy = flatDbStrategy;
+      final KeyValueStorage trieLogStorage) {
+    this.flatDbStrategyProvider = flatDbStrategyProvider;
     this.composedWorldStateStorage = composedWorldStateStorage;
     this.trieLogStorage = trieLogStorage;
-    this.metricsSystem = metricsSystem;
-  }
-
-  private void loadFlatDbStrategy() {
-    // derive our flatdb strategy from db or default:
-    var newFlatDbMode = deriveFlatDbStrategy();
-
-    // if flatDbMode is not loaded or has changed, reload flatDbStrategy
-    if (this.flatDbMode == null || !this.flatDbMode.equals(newFlatDbMode)) {
-      this.flatDbMode = newFlatDbMode;
-      if (flatDbMode == FlatDbMode.FULL) {
-        this.flatDbStrategy = new FullFlatDbStrategy(metricsSystem);
-      } else {
-        this.flatDbStrategy = new PartialFlatDbStrategy(metricsSystem);
-      }
-    }
-  }
-
-  public FlatDbMode deriveFlatDbStrategy() {
-    var flatDbMode =
-        FlatDbMode.fromVersion(
-            composedWorldStateStorage
-                .get(TRIE_BRANCH_STORAGE, FLAT_DB_MODE)
-                .map(Bytes::wrap)
-                .orElse(FlatDbMode.PARTIAL.getVersion()));
-    LOG.info("Bonsai flat db mode found {}", flatDbMode);
-
-    return flatDbMode;
-  }
-
-  public FlatDbStrategy getFlatDbStrategy() {
-    if (flatDbStrategy == null) {
-      loadFlatDbStrategy();
-    }
-    return flatDbStrategy;
   }
 
   @Override
@@ -147,7 +106,7 @@ public class BonsaiWorldStateKeyValueStorage implements WorldStateStorage, AutoC
 
   @Override
   public FlatDbMode getFlatDbMode() {
-    return flatDbMode;
+    return flatDbStrategyProvider.getFlatDbMode();
   }
 
   @Override
@@ -155,12 +114,15 @@ public class BonsaiWorldStateKeyValueStorage implements WorldStateStorage, AutoC
     if (codeHash.equals(Hash.EMPTY)) {
       return Optional.of(Bytes.EMPTY);
     } else {
-      return getFlatDbStrategy().getFlatCode(codeHash, accountHash, composedWorldStateStorage);
+      return flatDbStrategyProvider
+          .getFlatDbStrategy(composedWorldStateStorage)
+          .getFlatCode(codeHash, accountHash, composedWorldStateStorage);
     }
   }
 
   public Optional<Bytes> getAccount(final Hash accountHash) {
-    return getFlatDbStrategy()
+    return flatDbStrategyProvider
+        .getFlatDbStrategy(composedWorldStateStorage)
         .getFlatAccount(
             this::getWorldStateRootHash,
             this::getAccountStateTrieNode,
@@ -243,7 +205,8 @@ public class BonsaiWorldStateKeyValueStorage implements WorldStateStorage, AutoC
       final Supplier<Optional<Hash>> storageRootSupplier,
       final Hash accountHash,
       final StorageSlotKey storageSlotKey) {
-    return getFlatDbStrategy()
+    return flatDbStrategyProvider
+        .getFlatDbStrategy(composedWorldStateStorage)
         .getFlatStorageValueByStorageSlotKey(
            this::getWorldStateRootHash,
            storageRootSupplier,
@@ -256,14 +219,16 @@ public class BonsaiWorldStateKeyValueStorage implements WorldStateStorage, AutoC
   @Override
   public Map<Bytes32, Bytes> streamFlatAccounts(
       final Bytes startKeyHash, final Bytes32 endKeyHash, final long max) {
-    return getFlatDbStrategy()
+    return flatDbStrategyProvider
+        .getFlatDbStrategy(composedWorldStateStorage)
        .streamAccountFlatDatabase(composedWorldStateStorage, startKeyHash, endKeyHash, max);
   }
 
   @Override
   public Map<Bytes32, Bytes> streamFlatStorages(
       final Hash accountHash, final Bytes startKeyHash, final Bytes32 endKeyHash, final long max) {
-    return getFlatDbStrategy()
+    return flatDbStrategyProvider
+        .getFlatDbStrategy(composedWorldStateStorage)
        .streamStorageFlatDatabase(
            composedWorldStateStorage, accountHash, startKeyHash, endKeyHash, max);
   }
@@ -288,31 +253,23 @@ public class BonsaiWorldStateKeyValueStorage implements WorldStateStorage, AutoC
   }
 
   public void upgradeToFullFlatDbMode() {
-    final SegmentedKeyValueStorageTransaction transaction =
-        composedWorldStateStorage.startTransaction();
-    // TODO: consider ARCHIVE mode
-    transaction.put(
-        TRIE_BRANCH_STORAGE, FLAT_DB_MODE, FlatDbMode.FULL.getVersion().toArrayUnsafe());
-    transaction.commit();
-    loadFlatDbStrategy(); // force reload of flat db reader strategy
+    flatDbStrategyProvider.upgradeToFullFlatDbMode(composedWorldStateStorage);
   }
 
   public void downgradeToPartialFlatDbMode() {
-    final SegmentedKeyValueStorageTransaction transaction =
-        composedWorldStateStorage.startTransaction();
-    transaction.put(
-        TRIE_BRANCH_STORAGE, FLAT_DB_MODE, FlatDbMode.PARTIAL.getVersion().toArrayUnsafe());
-    transaction.commit();
-    loadFlatDbStrategy(); // force reload of flat db reader strategy
+    flatDbStrategyProvider.downgradeToPartialFlatDbMode(composedWorldStateStorage);
   }
 
   @Override
   public void clear() {
     subscribers.forEach(BonsaiStorageSubscriber::onClearStorage);
-    getFlatDbStrategy().clearAll(composedWorldStateStorage);
+    flatDbStrategyProvider
+        .getFlatDbStrategy(composedWorldStateStorage)
+        .clearAll(composedWorldStateStorage);
     composedWorldStateStorage.clear(TRIE_BRANCH_STORAGE);
     trieLogStorage.clear();
-    loadFlatDbStrategy(); // force reload of flat db reader strategy
+    flatDbStrategyProvider.loadFlatDbStrategy(
+        composedWorldStateStorage); // force reload of flat db reader strategy
   }
 
   @Override
@@ -324,7 +281,9 @@ public class BonsaiWorldStateKeyValueStorage implements WorldStateStorage, AutoC
   @Override
   public void clearFlatDatabase() {
     subscribers.forEach(BonsaiStorageSubscriber::onClearFlatDatabaseStorage);
-    getFlatDbStrategy().resetOnResync(composedWorldStateStorage);
+    flatDbStrategyProvider
+        .getFlatDbStrategy(composedWorldStateStorage)
+        .resetOnResync(composedWorldStateStorage);
   }
 
   @Override
@@ -332,7 +291,7 @@ public class BonsaiWorldStateKeyValueStorage implements WorldStateStorage, AutoC
     return new Updater(
         composedWorldStateStorage.startTransaction(),
         trieLogStorage.startTransaction(),
-        flatDbStrategy);
+        flatDbStrategyProvider.getFlatDbStrategy(composedWorldStateStorage));
   }
 
   @Override
@@ -359,6 +318,10 @@ public class BonsaiWorldStateKeyValueStorage implements WorldStateStorage, AutoC
     throw new RuntimeException("removeNodeAddedListener not available");
   }
 
+  public FlatDbStrategy getFlatDbStrategy() {
+    return flatDbStrategyProvider.getFlatDbStrategy(composedWorldStateStorage);
+  }
+
   public interface BonsaiUpdater extends WorldStateStorage.Updater {
     BonsaiUpdater removeCode(final Hash accountHash);
 
@@ -17,7 +17,6 @@ package org.hyperledger.besu.ethereum.trie.bonsai.storage;
 
 import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage.BonsaiStorageSubscriber;
 import org.hyperledger.besu.ethereum.worldstate.FlatDbMode;
-import org.hyperledger.besu.metrics.ObservableMetricsSystem;
 import org.hyperledger.besu.plugin.services.storage.KeyValueStorage;
 import org.hyperledger.besu.plugin.services.storage.SnappedKeyValueStorage;
 import org.hyperledger.besu.services.kvstore.LayeredKeyValueStorage;
@@ -29,16 +28,14 @@ public class BonsaiWorldStateLayerStorage extends BonsaiSnapshotWorldStateKeyVal
     this(
         new LayeredKeyValueStorage(parent.composedWorldStateStorage),
         parent.trieLogStorage,
-        parent,
-        parent.metricsSystem);
+        parent);
   }
 
   public BonsaiWorldStateLayerStorage(
       final SnappedKeyValueStorage composedWorldStateStorage,
       final KeyValueStorage trieLogStorage,
-      final BonsaiWorldStateKeyValueStorage parent,
-      final ObservableMetricsSystem metricsSystem) {
-    super(parent, composedWorldStateStorage, trieLogStorage, metricsSystem);
+      final BonsaiWorldStateKeyValueStorage parent) {
+    super(parent, composedWorldStateStorage, trieLogStorage);
   }
 
   @Override
@@ -51,7 +48,6 @@ public class BonsaiWorldStateLayerStorage extends BonsaiSnapshotWorldStateKeyVal
     return new BonsaiWorldStateLayerStorage(
         ((LayeredKeyValueStorage) composedWorldStateStorage).clone(),
         trieLogStorage,
-        parentWorldStateStorage,
-        metricsSystem);
+        parentWorldStateStorage);
   }
 }
@@ -0,0 +1,105 @@
+/*
+ * Copyright Hyperledger Besu Contributors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+ * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations under the License.
+ *
+ * SPDX-License-Identifier: Apache-2.0
+ */
+
+package org.hyperledger.besu.ethereum.trie.bonsai.storage.flat;
+
+import static org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueSegmentIdentifier.TRIE_BRANCH_STORAGE;
+
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
+import org.hyperledger.besu.ethereum.worldstate.FlatDbMode;
+import org.hyperledger.besu.plugin.services.MetricsSystem;
+import org.hyperledger.besu.plugin.services.storage.SegmentedKeyValueStorage;
+import org.hyperledger.besu.plugin.services.storage.SegmentedKeyValueStorageTransaction;
+
+import java.nio.charset.StandardCharsets;
+
+import org.apache.tuweni.bytes.Bytes;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class FlatDbStrategyProvider {
+  private static final Logger LOG = LoggerFactory.getLogger(FlatDbStrategyProvider.class);
+
+  // 0x666C61744462537461747573
+  public static final byte[] FLAT_DB_MODE = "flatDbStatus".getBytes(StandardCharsets.UTF_8);
+  private final MetricsSystem metricsSystem;
+  protected FlatDbMode flatDbMode;
+  protected FlatDbStrategy flatDbStrategy;
+
+  public FlatDbStrategyProvider(
+      final MetricsSystem metricsSystem, final DataStorageConfiguration dataStorageConfiguration) {
+    this.metricsSystem = metricsSystem;
+  }
+
+  public void loadFlatDbStrategy(final SegmentedKeyValueStorage composedWorldStateStorage) {
+    // derive our flatdb strategy from db or default:
+    var newFlatDbMode = deriveFlatDbStrategy(composedWorldStateStorage);
+
+    // if flatDbMode is not loaded or has changed, reload flatDbStrategy
+    if (this.flatDbMode == null || !this.flatDbMode.equals(newFlatDbMode)) {
+      this.flatDbMode = newFlatDbMode;
+      if (flatDbMode == FlatDbMode.FULL) {
+        this.flatDbStrategy = new FullFlatDbStrategy(metricsSystem);
+      } else {
+        this.flatDbStrategy = new PartialFlatDbStrategy(metricsSystem);
+      }
+    }
+  }
+
+  private FlatDbMode deriveFlatDbStrategy(
+      final SegmentedKeyValueStorage composedWorldStateStorage) {
+    var flatDbMode =
+        FlatDbMode.fromVersion(
+            composedWorldStateStorage
+                .get(TRIE_BRANCH_STORAGE, FLAT_DB_MODE)
+                .map(Bytes::wrap)
+                .orElse(FlatDbMode.PARTIAL.getVersion()));
+    LOG.info("Bonsai flat db mode found {}", flatDbMode);
+
+    return flatDbMode;
+  }
+
+  public FlatDbStrategy getFlatDbStrategy(
+      final SegmentedKeyValueStorage composedWorldStateStorage) {
+    if (flatDbStrategy == null) {
+      loadFlatDbStrategy(composedWorldStateStorage);
+    }
+    return flatDbStrategy;
+  }
+
+  public void upgradeToFullFlatDbMode(final SegmentedKeyValueStorage composedWorldStateStorage) {
+    final SegmentedKeyValueStorageTransaction transaction =
+        composedWorldStateStorage.startTransaction();
+    // TODO: consider ARCHIVE mode
+    transaction.put(
+        TRIE_BRANCH_STORAGE, FLAT_DB_MODE, FlatDbMode.FULL.getVersion().toArrayUnsafe());
+    transaction.commit();
+    loadFlatDbStrategy(composedWorldStateStorage); // force reload of flat db reader strategy
+  }
+
+  public void downgradeToPartialFlatDbMode(
+      final SegmentedKeyValueStorage composedWorldStateStorage) {
+    final SegmentedKeyValueStorageTransaction transaction =
+        composedWorldStateStorage.startTransaction();
+    transaction.put(
+        TRIE_BRANCH_STORAGE, FLAT_DB_MODE, FlatDbMode.PARTIAL.getVersion().toArrayUnsafe());
+    transaction.commit();
+    loadFlatDbStrategy(composedWorldStateStorage); // force reload of flat db reader strategy
+  }
+
+  public FlatDbMode getFlatDbMode() {
+    return flatDbMode;
+  }
+}
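A rough sketch of how the new provider is wired from the storage side, built only from constructors and methods that appear in this commit; the in-memory key-value storage and `NoOpMetricsSystem` are stand-ins for whatever the embedding code supplies:

```java
import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueStorageProvider;
import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
import org.hyperledger.besu.ethereum.trie.bonsai.storage.flat.FlatDbStrategy;
import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
import org.hyperledger.besu.services.kvstore.InMemoryKeyValueStorage;
import org.hyperledger.besu.services.kvstore.SegmentedInMemoryKeyValueStorage;

// Sketch only: wiring mirrors the GenesisWorldStateProvider added later in this commit.
class FlatDbStrategyProviderWiringSketch {
  static FlatDbStrategy flatDbStrategyFor(final DataStorageConfiguration dataStorageConfiguration) {
    final BonsaiWorldStateKeyValueStorage worldStateStorage =
        new BonsaiWorldStateKeyValueStorage(
            new KeyValueStorageProvider(
                segmentIdentifiers -> new SegmentedInMemoryKeyValueStorage(),
                new InMemoryKeyValueStorage(),
                new NoOpMetricsSystem()),
            new NoOpMetricsSystem(),
            dataStorageConfiguration);
    // FlatDbMode/FlatDbStrategy are no longer fields of the storage itself; both are
    // resolved on demand through the embedded FlatDbStrategyProvider.
    return worldStateStorage.getFlatDbStrategy();
  }
}
```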
@@ -0,0 +1,51 @@
+/*
+ * Copyright Hyperledger Besu Contributors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+ * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations under the License.
+ *
+ * SPDX-License-Identifier: Apache-2.0
+ */
+package org.hyperledger.besu.ethereum.trie.bonsai.trielog;
+
+import org.hyperledger.besu.datatypes.Hash;
+import org.hyperledger.besu.ethereum.core.BlockHeader;
+import org.hyperledger.besu.ethereum.trie.bonsai.worldview.BonsaiWorldState;
+import org.hyperledger.besu.ethereum.trie.bonsai.worldview.BonsaiWorldStateUpdateAccumulator;
+import org.hyperledger.besu.plugin.services.trielogs.TrieLog;
+
+import java.util.Optional;
+
+public class NoOpTrieLogManager extends TrieLogManager {
+
+  public NoOpTrieLogManager() {
+    super(null, null, 0, null, TrieLogPruner.noOpTrieLogPruner());
+  }
+
+  @Override
+  public synchronized void saveTrieLog(
+      final BonsaiWorldStateUpdateAccumulator localUpdater,
+      final Hash forWorldStateRootHash,
+      final BlockHeader forBlockHeader,
+      final BonsaiWorldState forWorldState) {
+    // notify trie log added observers, synchronously
+    TrieLog trieLog = trieLogFactory.create(localUpdater, forBlockHeader);
+    trieLogObservers.forEach(o -> o.onTrieLogAdded(new TrieLogAddedEvent(trieLog)));
+  }
+
+  @Override
+  public long getMaxLayersToLoad() {
+    return 0;
+  }
+
+  @Override
+  public Optional<TrieLog> getTrieLogLayer(final Hash blockHash) {
+    return Optional.empty();
+  }
+}
@@ -92,7 +92,7 @@ public class BonsaiWorldState
         evmConfiguration);
   }
 
-  protected BonsaiWorldState(
+  public BonsaiWorldState(
       final BonsaiWorldStateKeyValueStorage worldStateStorage,
       final CachedMerkleTrieLoader cachedMerkleTrieLoader,
       final CachedWorldStorageManager cachedWorldStorageManager,
|||||||
@@ -0,0 +1,91 @@
+/*
+ * Copyright ConsenSys AG.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+ * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations under the License.
+ *
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ */
+
+package org.hyperledger.besu.ethereum.trie.common;
+
+import org.hyperledger.besu.ethereum.core.MutableWorldState;
+import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueStorageProvider;
+import org.hyperledger.besu.ethereum.storage.keyvalue.WorldStatePreimageKeyValueStorage;
+import org.hyperledger.besu.ethereum.trie.bonsai.cache.CachedMerkleTrieLoader;
+import org.hyperledger.besu.ethereum.trie.bonsai.cache.NoOpCachedWorldStorageManager;
+import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
+import org.hyperledger.besu.ethereum.trie.bonsai.trielog.NoOpTrieLogManager;
+import org.hyperledger.besu.ethereum.trie.bonsai.worldview.BonsaiWorldState;
+import org.hyperledger.besu.ethereum.trie.forest.storage.ForestWorldStateKeyValueStorage;
+import org.hyperledger.besu.ethereum.trie.forest.worldview.ForestMutableWorldState;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
+import org.hyperledger.besu.evm.internal.EvmConfiguration;
+import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
+import org.hyperledger.besu.services.kvstore.InMemoryKeyValueStorage;
+import org.hyperledger.besu.services.kvstore.SegmentedInMemoryKeyValueStorage;
+
+import java.util.Objects;
+
+public class GenesisWorldStateProvider {
+
+  /**
+   * Creates a Genesis world state based on the provided data storage format.
+   *
+   * @param dataStorageFormat the data storage format to use
+   * @return a mutable world state for the Genesis block
+   */
+  public static MutableWorldState createGenesisWorldState(
+      final DataStorageFormat dataStorageFormat) {
+    if (Objects.requireNonNull(dataStorageFormat) == DataStorageFormat.BONSAI) {
+      return createGenesisBonsaiWorldState();
+    } else {
+      return createGenesisForestWorldState();
+    }
+  }
+
+  /**
+   * Creates a Genesis world state using the Bonsai data storage format.
+   *
+   * @return a mutable world state for the Genesis block
+   */
+  private static MutableWorldState createGenesisBonsaiWorldState() {
+    final CachedMerkleTrieLoader cachedMerkleTrieLoader =
+        new CachedMerkleTrieLoader(new NoOpMetricsSystem());
+    final BonsaiWorldStateKeyValueStorage bonsaiWorldStateKeyValueStorage =
+        new BonsaiWorldStateKeyValueStorage(
+            new KeyValueStorageProvider(
+                segmentIdentifiers -> new SegmentedInMemoryKeyValueStorage(),
+                new InMemoryKeyValueStorage(),
+                new NoOpMetricsSystem()),
+            new NoOpMetricsSystem(),
+            DataStorageConfiguration.DEFAULT_CONFIG);
+    return new BonsaiWorldState(
+        bonsaiWorldStateKeyValueStorage,
+        cachedMerkleTrieLoader,
+        new NoOpCachedWorldStorageManager(bonsaiWorldStateKeyValueStorage),
+        new NoOpTrieLogManager(),
+        EvmConfiguration.DEFAULT);
+  }
+
+  /**
+   * Creates a Genesis world state using the Forest data storage format.
+   *
+   * @return a mutable world state for the Genesis block
+   */
+  private static MutableWorldState createGenesisForestWorldState() {
+    final ForestWorldStateKeyValueStorage stateStorage =
+        new ForestWorldStateKeyValueStorage(new InMemoryKeyValueStorage());
+    final WorldStatePreimageKeyValueStorage preimageStorage =
+        new WorldStatePreimageKeyValueStorage(new InMemoryKeyValueStorage());
+    return new ForestMutableWorldState(stateStorage, preimageStorage, EvmConfiguration.DEFAULT);
+  }
+}
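Note: the only entry point the new class exposes is the static factory above. A minimal usage sketch, assuming only the public method added in this file (the wrapper class name is illustrative, not part of the change):

    import org.hyperledger.besu.ethereum.core.MutableWorldState;
    import org.hyperledger.besu.ethereum.trie.common.GenesisWorldStateProvider;
    import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;

    public class GenesisWorldStateExample {
      public static void main(final String[] args) {
        // Bonsai-backed genesis state: wired internally to in-memory storage,
        // a NoOpCachedWorldStorageManager and the new NoOpTrieLogManager.
        final MutableWorldState bonsaiGenesis =
            GenesisWorldStateProvider.createGenesisWorldState(DataStorageFormat.BONSAI);
        // Any other format falls back to a Forest world state.
        final MutableWorldState forestGenesis =
            GenesisWorldStateProvider.createGenesisWorldState(DataStorageFormat.FOREST);
        System.out.println(bonsaiGenesis + " / " + forestGenesis);
      }
    }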
@@ -14,6 +14,8 @@
  */
 package org.hyperledger.besu.ethereum.core;
 
+import static org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration.DEFAULT_BONSAI_MAX_LAYERS_TO_LOAD;
+
 import org.hyperledger.besu.ethereum.chain.Blockchain;
 import org.hyperledger.besu.ethereum.chain.DefaultBlockchain;
 import org.hyperledger.besu.ethereum.chain.MutableBlockchain;
@@ -32,7 +34,9 @@ import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogPruner;
 import org.hyperledger.besu.ethereum.trie.forest.ForestWorldStateArchive;
 import org.hyperledger.besu.ethereum.trie.forest.storage.ForestWorldStateKeyValueStorage;
 import org.hyperledger.besu.ethereum.trie.forest.worldview.ForestMutableWorldState;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
+import org.hyperledger.besu.ethereum.worldstate.ImmutableDataStorageConfiguration;
 import org.hyperledger.besu.evm.internal.EvmConfiguration;
 import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
 import org.hyperledger.besu.services.kvstore.InMemoryKeyValueStorage;
@@ -96,13 +100,18 @@ public class InMemoryKeyValueStorageProvider extends KeyValueStorageProvider {
         new InMemoryKeyValueStorageProvider();
     final CachedMerkleTrieLoader cachedMerkleTrieLoader =
         new CachedMerkleTrieLoader(new NoOpMetricsSystem());
+    final DataStorageConfiguration bonsaiDataStorageConfig =
+        ImmutableDataStorageConfiguration.builder()
+            .dataStorageFormat(DataStorageFormat.BONSAI)
+            .bonsaiMaxLayersToLoad(DEFAULT_BONSAI_MAX_LAYERS_TO_LOAD)
+            .unstable(DataStorageConfiguration.Unstable.DEFAULT)
+            .build();
     return new BonsaiWorldStateProvider(
         (BonsaiWorldStateKeyValueStorage)
-            inMemoryKeyValueStorageProvider.createWorldStateStorage(DataStorageFormat.BONSAI),
+            inMemoryKeyValueStorageProvider.createWorldStateStorage(bonsaiDataStorageConfig),
         blockchain,
         Optional.empty(),
         cachedMerkleTrieLoader,
-        new NoOpMetricsSystem(),
         null,
         evmConfiguration,
         TrieLogPruner.noOpTrieLogPruner());
@@ -111,7 +120,7 @@ public class InMemoryKeyValueStorageProvider extends KeyValueStorageProvider {
   public static MutableWorldState createInMemoryWorldState() {
     final InMemoryKeyValueStorageProvider provider = new InMemoryKeyValueStorageProvider();
     return new ForestMutableWorldState(
-        provider.createWorldStateStorage(DataStorageFormat.FOREST),
+        provider.createWorldStateStorage(DataStorageConfiguration.DEFAULT_CONFIG),
         provider.createWorldStatePreimageStorage(),
         EvmConfiguration.DEFAULT);
   }
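Note: the pattern above replaces passing a bare DataStorageFormat with a full DataStorageConfiguration. A sketch of the same builder chain in isolation, using only names that appear in this hunk (the wrapper class is illustrative):

    import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
    import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
    import org.hyperledger.besu.ethereum.worldstate.ImmutableDataStorageConfiguration;

    public class BonsaiDataStorageConfigExample {
      // Builds the Bonsai configuration the in-memory provider now hands to
      // createWorldStateStorage instead of DataStorageFormat.BONSAI.
      public static DataStorageConfiguration bonsaiDefaults() {
        return ImmutableDataStorageConfiguration.builder()
            .dataStorageFormat(DataStorageFormat.BONSAI)
            .bonsaiMaxLayersToLoad(DataStorageConfiguration.DEFAULT_BONSAI_MAX_LAYERS_TO_LOAD)
            .unstable(DataStorageConfiguration.Unstable.DEFAULT)
            .build();
      }
    }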
@@ -44,6 +44,7 @@ import org.hyperledger.besu.ethereum.storage.StorageProvider;
 import org.hyperledger.besu.ethereum.trie.bonsai.BonsaiWorldStateProvider;
 import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
 import org.hyperledger.besu.ethereum.trie.bonsai.worldview.BonsaiWorldState;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateArchive;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
 import org.hyperledger.besu.evm.internal.EvmConfiguration;
@@ -80,7 +81,8 @@ class BlockImportExceptionHandlingTest {
   private final StorageProvider storageProvider = new InMemoryKeyValueStorageProvider();
 
   private final WorldStateStorage worldStateStorage =
-      new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem());
+      new BonsaiWorldStateKeyValueStorage(
+          storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
 
   private final WorldStateArchive worldStateArchive =
       // contains a BonsaiWorldState which we need to spy on.
File diff suppressed because one or more lines are too long
@@ -27,22 +27,6 @@ import org.junit.jupiter.api.Test;
 public class TargetingGasLimitCalculatorTest {
   private static final long ADJUSTMENT_FACTOR = 1024L;
 
-  @Test
-  public void verifyGasLimitIsIncreasedWithinLimits() {
-    FrontierTargetingGasLimitCalculator targetingGasLimitCalculator =
-        new FrontierTargetingGasLimitCalculator();
-    assertThat(targetingGasLimitCalculator.nextGasLimit(8_000_000L, 10_000_000L, 1L))
-        .isEqualTo(8_000_000L + ADJUSTMENT_FACTOR);
-  }
-
-  @Test
-  public void verifyGasLimitIsDecreasedWithinLimits() {
-    FrontierTargetingGasLimitCalculator targetingGasLimitCalculator =
-        new FrontierTargetingGasLimitCalculator();
-    assertThat(targetingGasLimitCalculator.nextGasLimit(12_000_000L, 10_000_000L, 1L))
-        .isEqualTo(12_000_000L - ADJUSTMENT_FACTOR);
-  }
-
   @Test
   public void verifyGasLimitReachesTarget() {
     final long target = 10_000_000L;
@@ -55,6 +39,33 @@ public class TargetingGasLimitCalculatorTest {
         .isEqualTo(target);
   }
 
+  @Test
+  public void verifyAdjustmentDeltas() {
+    assertDeltas(20000000L, 20019530L, 19980470L);
+    assertDeltas(40000000L, 40039061L, 39960939L);
+  }
+
+  private void assertDeltas(
+      final long gasLimit, final long expectedIncrease, final long expectedDecrease) {
+    FrontierTargetingGasLimitCalculator targetingGasLimitCalculator =
+        new FrontierTargetingGasLimitCalculator();
+    // increase
+    assertThat(targetingGasLimitCalculator.nextGasLimit(gasLimit, gasLimit * 2, 1L))
+        .isEqualTo(expectedIncrease);
+    // decrease
+    assertThat(targetingGasLimitCalculator.nextGasLimit(gasLimit, 0, 1L))
+        .isEqualTo(expectedDecrease);
+    // small decrease
+    assertThat(targetingGasLimitCalculator.nextGasLimit(gasLimit, gasLimit - 1, 1L))
+        .isEqualTo(gasLimit - 1);
+    // small increase
+    assertThat(targetingGasLimitCalculator.nextGasLimit(gasLimit, gasLimit + 1, 1L))
+        .isEqualTo(gasLimit + 1);
+    // no change
+    assertThat(targetingGasLimitCalculator.nextGasLimit(gasLimit, gasLimit, 1L))
+        .isEqualTo(gasLimit);
+  }
+
   @Test
   public void verifyMinGasLimit() {
     assertThat(AbstractGasLimitSpecification.isValidTargetGasLimit(DEFAULT_MIN_GAS_LIMIT - 1))
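Note: the deltas asserted in verifyAdjustmentDeltas are consistent with a per-block step of floor(gasLimit / 1024) - 1; that formula is an inference from the asserted numbers, not taken from the calculator's source. A quick self-contained check of the arithmetic:

    public class GasLimitDeltaCheck {
      public static void main(final String[] args) {
        for (final long gasLimit : new long[] {20_000_000L, 40_000_000L}) {
          // Assumed step size: floor(gasLimit / 1024) - 1.
          final long delta = gasLimit / 1024 - 1;
          // Prints 20019530 / 19980470 and 40039061 / 39960939, matching the test.
          System.out.println((gasLimit + delta) + " / " + (gasLimit - delta));
        }
      }
    }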
@@ -68,7 +68,9 @@ import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueStorageProviderBui
 import org.hyperledger.besu.ethereum.trie.bonsai.cache.CachedMerkleTrieLoader;
 import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
 import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogPruner;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
+import org.hyperledger.besu.ethereum.worldstate.ImmutableDataStorageConfiguration;
 import org.hyperledger.besu.evm.internal.EvmConfiguration;
 import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
 import org.hyperledger.besu.plugin.services.BesuConfiguration;
@@ -147,14 +149,19 @@ public abstract class AbstractIsolationTests {
   public void createStorage() {
     bonsaiWorldStateStorage =
         (BonsaiWorldStateKeyValueStorage)
-            createKeyValueStorageProvider().createWorldStateStorage(DataStorageFormat.BONSAI);
+            createKeyValueStorageProvider()
+                .createWorldStateStorage(
+                    ImmutableDataStorageConfiguration.builder()
+                        .dataStorageFormat(DataStorageFormat.BONSAI)
+                        .bonsaiMaxLayersToLoad(
+                            DataStorageConfiguration.DEFAULT_BONSAI_MAX_LAYERS_TO_LOAD)
+                        .build());
     archive =
         new BonsaiWorldStateProvider(
             bonsaiWorldStateStorage,
             blockchain,
             Optional.of(16L),
             new CachedMerkleTrieLoader(new NoOpMetricsSystem()),
-            new NoOpMetricsSystem(),
             null,
             EvmConfiguration.DEFAULT,
             TrieLogPruner.noOpTrieLogPruner());
@@ -44,6 +44,7 @@ import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogLayer;
 import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogManager;
 import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogPruner;
 import org.hyperledger.besu.ethereum.trie.bonsai.worldview.BonsaiWorldState;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.evm.internal.EvmConfiguration;
 import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
 import org.hyperledger.besu.plugin.services.storage.KeyValueStorage;
@@ -106,7 +107,8 @@ class BonsaiWorldStateArchiveTest {
         new BonsaiWorldStateProvider(
             cachedWorldStorageManager,
             trieLogManager,
-            new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem()),
+            new BonsaiWorldStateKeyValueStorage(
+                storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG),
             blockchain,
             new CachedMerkleTrieLoader(new NoOpMetricsSystem()),
             EvmConfiguration.DEFAULT);
@@ -119,11 +121,11 @@ class BonsaiWorldStateArchiveTest {
   void testGetMutableReturnEmptyWhenLoadMoreThanLimitLayersBack() {
     bonsaiWorldStateArchive =
         new BonsaiWorldStateProvider(
-            new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem()),
+            new BonsaiWorldStateKeyValueStorage(
+                storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG),
             blockchain,
             Optional.of(512L),
             new CachedMerkleTrieLoader(new NoOpMetricsSystem()),
-            new NoOpMetricsSystem(),
             null,
             EvmConfiguration.DEFAULT,
             TrieLogPruner.noOpTrieLogPruner());
@@ -141,7 +143,8 @@ class BonsaiWorldStateArchiveTest {
         new BonsaiWorldStateProvider(
             cachedWorldStorageManager,
             trieLogManager,
-            new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem()),
+            new BonsaiWorldStateKeyValueStorage(
+                storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG),
             blockchain,
             new CachedMerkleTrieLoader(new NoOpMetricsSystem()),
             EvmConfiguration.DEFAULT);
@@ -167,7 +170,8 @@ class BonsaiWorldStateArchiveTest {
         .getTrieLogLayer(any(Hash.class));
 
     var worldStateStorage =
-        new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem());
+        new BonsaiWorldStateKeyValueStorage(
+            storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
     bonsaiWorldStateArchive =
         spy(
             new BonsaiWorldStateProvider(
@@ -193,7 +197,8 @@ class BonsaiWorldStateArchiveTest {
   void testGetMutableWithStorageConsistencyNotRollbackTheState() {
 
     var worldStateStorage =
-        new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem());
+        new BonsaiWorldStateKeyValueStorage(
+            storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
     bonsaiWorldStateArchive =
         spy(
             new BonsaiWorldStateProvider(
@@ -229,7 +234,8 @@ class BonsaiWorldStateArchiveTest {
         .getTrieLogLayer(any(Hash.class));
 
     var worldStateStorage =
-        new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem());
+        new BonsaiWorldStateKeyValueStorage(
+            storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
 
     bonsaiWorldStateArchive =
         spy(
@@ -276,7 +282,10 @@ class BonsaiWorldStateArchiveTest {
             new BonsaiWorldStateProvider(
                 cachedWorldStorageManager,
                 trieLogManager,
-                new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem()),
+                new BonsaiWorldStateKeyValueStorage(
+                    storageProvider,
+                    new NoOpMetricsSystem(),
+                    DataStorageConfiguration.DEFAULT_CONFIG),
                 blockchain,
                 new CachedMerkleTrieLoader(new NoOpMetricsSystem()),
                 EvmConfiguration.DEFAULT));
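Note: every storage construction in this test now passes a third argument. The shape of the call, extracted from the hunks above into a standalone sketch (the helper class name is illustrative):

    import org.hyperledger.besu.ethereum.core.InMemoryKeyValueStorageProvider;
    import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
    import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
    import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;

    public class BonsaiStorageConstructionExample {
      public static BonsaiWorldStateKeyValueStorage inMemoryBonsaiStorage() {
        return new BonsaiWorldStateKeyValueStorage(
            new InMemoryKeyValueStorageProvider(), // key-value storage provider
            new NoOpMetricsSystem(), // metrics system
            DataStorageConfiguration.DEFAULT_CONFIG); // configuration added by this change
      }
    }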
@@ -29,6 +29,7 @@ import org.hyperledger.besu.ethereum.trie.TrieIterator;
 import org.hyperledger.besu.ethereum.trie.bonsai.cache.CachedMerkleTrieLoader;
 import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
 import org.hyperledger.besu.ethereum.trie.patricia.StoredMerklePatriciaTrie;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.StateTrieAccountValue;
 import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
 
@@ -48,7 +49,9 @@ class CachedMerkleTrieLoaderTest {
   private CachedMerkleTrieLoader merkleTrieLoader;
   private final StorageProvider storageProvider = new InMemoryKeyValueStorageProvider();
   private final BonsaiWorldStateKeyValueStorage inMemoryWorldState =
-      Mockito.spy(new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem()));
+      Mockito.spy(
+          new BonsaiWorldStateKeyValueStorage(
+              storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG));
 
   final List<Address> accounts =
       List.of(Address.fromHexString("0xdeadbeef"), Address.fromHexString("0xdeadbeee"));
@@ -71,7 +74,9 @@ class CachedMerkleTrieLoaderTest {
 
     final BonsaiWorldStateKeyValueStorage emptyStorage =
         new BonsaiWorldStateKeyValueStorage(
-            new InMemoryKeyValueStorageProvider(), new NoOpMetricsSystem());
+            new InMemoryKeyValueStorageProvider(),
+            new NoOpMetricsSystem(),
+            DataStorageConfiguration.DEFAULT_CONFIG);
     StoredMerklePatriciaTrie<Bytes, Bytes> cachedTrie =
         new StoredMerklePatriciaTrie<>(
             (location, hash) ->
@@ -110,7 +115,9 @@ class CachedMerkleTrieLoaderTest {
     final List<Bytes> cachedSlots = new ArrayList<>();
     final BonsaiWorldStateKeyValueStorage emptyStorage =
         new BonsaiWorldStateKeyValueStorage(
-            new InMemoryKeyValueStorageProvider(), new NoOpMetricsSystem());
+            new InMemoryKeyValueStorageProvider(),
+            new NoOpMetricsSystem(),
+            DataStorageConfiguration.DEFAULT_CONFIG);
     final StoredMerklePatriciaTrie<Bytes, Bytes> cachedTrie =
         new StoredMerklePatriciaTrie<>(
             (location, hash) ->
@@ -34,6 +34,7 @@ import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogFactoryImpl;
 import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogLayer;
 import org.hyperledger.besu.ethereum.trie.bonsai.worldview.BonsaiWorldState;
 import org.hyperledger.besu.ethereum.trie.bonsai.worldview.BonsaiWorldStateUpdateAccumulator;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.evm.account.MutableAccount;
 import org.hyperledger.besu.evm.internal.EvmConfiguration;
 import org.hyperledger.besu.evm.log.LogsBloomFilter;
@@ -161,7 +162,8 @@ class LogRollingTests {
     final BonsaiWorldState worldState =
         new BonsaiWorldState(
             archive,
-            new BonsaiWorldStateKeyValueStorage(provider, new NoOpMetricsSystem()),
+            new BonsaiWorldStateKeyValueStorage(
+                provider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG),
             EvmConfiguration.DEFAULT);
     final WorldUpdater updater = worldState.updater();
 
@@ -174,7 +176,8 @@ class LogRollingTests {
     final BonsaiWorldState secondWorldState =
         new BonsaiWorldState(
             secondArchive,
-            new BonsaiWorldStateKeyValueStorage(secondProvider, new NoOpMetricsSystem()),
+            new BonsaiWorldStateKeyValueStorage(
+                secondProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG),
             EvmConfiguration.DEFAULT);
     final BonsaiWorldStateUpdateAccumulator secondUpdater =
         (BonsaiWorldStateUpdateAccumulator) secondWorldState.updater();
@@ -205,7 +208,8 @@ class LogRollingTests {
     final BonsaiWorldState worldState =
         new BonsaiWorldState(
             archive,
-            new BonsaiWorldStateKeyValueStorage(provider, new NoOpMetricsSystem()),
+            new BonsaiWorldStateKeyValueStorage(
+                provider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG),
             EvmConfiguration.DEFAULT);
 
     final WorldUpdater updater = worldState.updater();
@@ -226,7 +230,8 @@ class LogRollingTests {
     final BonsaiWorldState secondWorldState =
         new BonsaiWorldState(
             secondArchive,
-            new BonsaiWorldStateKeyValueStorage(secondProvider, new NoOpMetricsSystem()),
+            new BonsaiWorldStateKeyValueStorage(
+                secondProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG),
             EvmConfiguration.DEFAULT);
     final BonsaiWorldStateUpdateAccumulator secondUpdater =
         (BonsaiWorldStateUpdateAccumulator) secondWorldState.updater();
@@ -258,7 +263,8 @@ class LogRollingTests {
     final BonsaiWorldState worldState =
         new BonsaiWorldState(
             archive,
-            new BonsaiWorldStateKeyValueStorage(provider, new NoOpMetricsSystem()),
+            new BonsaiWorldStateKeyValueStorage(
+                provider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG),
             EvmConfiguration.DEFAULT);
 
     final WorldUpdater updater = worldState.updater();
@@ -286,7 +292,8 @@ class LogRollingTests {
     final BonsaiWorldState secondWorldState =
         new BonsaiWorldState(
             secondArchive,
-            new BonsaiWorldStateKeyValueStorage(secondProvider, new NoOpMetricsSystem()),
+            new BonsaiWorldStateKeyValueStorage(
+                secondProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG),
             EvmConfiguration.DEFAULT);
 
     final WorldUpdater secondUpdater = secondWorldState.updater();
@@ -30,6 +30,7 @@ import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogFactoryImpl;
 import org.hyperledger.besu.ethereum.trie.bonsai.trielog.TrieLogLayer;
 import org.hyperledger.besu.ethereum.trie.bonsai.worldview.BonsaiWorldState;
 import org.hyperledger.besu.ethereum.trie.bonsai.worldview.BonsaiWorldStateUpdateAccumulator;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.evm.internal.EvmConfiguration;
 import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
 import org.hyperledger.besu.services.kvstore.InMemoryKeyValueStorage;
@@ -56,7 +57,8 @@ public class RollingImport {
     final BonsaiWorldState bonsaiState =
         new BonsaiWorldState(
             archive,
-            new BonsaiWorldStateKeyValueStorage(provider, new NoOpMetricsSystem()),
+            new BonsaiWorldStateKeyValueStorage(
+                provider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG),
             EvmConfiguration.DEFAULT);
     final SegmentedInMemoryKeyValueStorage worldStateStorage =
         (SegmentedInMemoryKeyValueStorage)
@@ -36,6 +36,7 @@ import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueSegmentIdentifier;
 import org.hyperledger.besu.ethereum.trie.MerkleTrie;
 import org.hyperledger.besu.ethereum.trie.StorageEntriesCollector;
 import org.hyperledger.besu.ethereum.trie.patricia.StoredMerklePatriciaTrie;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.FlatDbMode;
 import org.hyperledger.besu.ethereum.worldstate.StateTrieAccountValue;
 import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
@@ -452,7 +453,9 @@ public class BonsaiWorldStateKeyValueStorageTest {
 
   private BonsaiWorldStateKeyValueStorage emptyStorage() {
     return new BonsaiWorldStateKeyValueStorage(
-        new InMemoryKeyValueStorageProvider(), new NoOpMetricsSystem());
+        new InMemoryKeyValueStorageProvider(),
+        new NoOpMetricsSystem(),
+        DataStorageConfiguration.DEFAULT_CONFIG);
   }
 
   @Test
@@ -487,6 +490,7 @@ public class BonsaiWorldStateKeyValueStorageTest {
         .thenReturn(mockTrieLogStorage);
     when(mockStorageProvider.getStorageBySegmentIdentifiers(any()))
         .thenReturn(mock(SegmentedKeyValueStorage.class));
-    return new BonsaiWorldStateKeyValueStorage(mockStorageProvider, new NoOpMetricsSystem());
+    return new BonsaiWorldStateKeyValueStorage(
+        mockStorageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
   }
 }
@@ -0,0 +1,89 @@
+/*
+ * Copyright Hyperledger Besu Contributors.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+ * an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+ * specific language governing permissions and limitations under the License.
+ *
+ * SPDX-License-Identifier: Apache-2.0
+ */
+
+package org.hyperledger.besu.ethereum.trie.bonsai.storage.flat;
+
+import static org.assertj.core.api.Assertions.assertThat;
+
+import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueSegmentIdentifier;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
+import org.hyperledger.besu.ethereum.worldstate.FlatDbMode;
+import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
+import org.hyperledger.besu.plugin.services.storage.SegmentedKeyValueStorage;
+import org.hyperledger.besu.plugin.services.storage.SegmentedKeyValueStorageTransaction;
+import org.hyperledger.besu.services.kvstore.SegmentedInMemoryKeyValueStorage;
+
+import java.util.List;
+
+import org.junit.jupiter.api.Test;
+import org.junit.jupiter.api.extension.ExtendWith;
+import org.junit.jupiter.params.ParameterizedTest;
+import org.junit.jupiter.params.provider.EnumSource;
+import org.mockito.junit.jupiter.MockitoExtension;
+
+@ExtendWith(MockitoExtension.class)
+class FlatDbStrategyProviderTest {
+  private final FlatDbStrategyProvider flatDbStrategyProvider =
+      new FlatDbStrategyProvider(new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
+  private final SegmentedKeyValueStorage composedWorldStateStorage =
+      new SegmentedInMemoryKeyValueStorage(List.of(KeyValueSegmentIdentifier.TRIE_BRANCH_STORAGE));
+
+  @ParameterizedTest
+  @EnumSource(FlatDbMode.class)
+  void loadsFlatDbStrategyForStoredFlatDbMode(final FlatDbMode flatDbMode) {
+    updateFlatDbMode(flatDbMode);
+
+    flatDbStrategyProvider.loadFlatDbStrategy(composedWorldStateStorage);
+    assertThat(flatDbStrategyProvider.getFlatDbMode()).isEqualTo(flatDbMode);
+  }
+
+  @Test
+  void loadsPartialFlatDbStrategyWhenNoFlatDbModeStored() {
+    flatDbStrategyProvider.loadFlatDbStrategy(composedWorldStateStorage);
+    assertThat(flatDbStrategyProvider.getFlatDbMode()).isEqualTo(FlatDbMode.PARTIAL);
+  }
+
+  @Test
+  void upgradesFlatDbStrategyToFullFlatDbMode() {
+    updateFlatDbMode(FlatDbMode.PARTIAL);
+
+    flatDbStrategyProvider.upgradeToFullFlatDbMode(composedWorldStateStorage);
+    assertThat(flatDbStrategyProvider.flatDbMode).isEqualTo(FlatDbMode.FULL);
+    assertThat(flatDbStrategyProvider.flatDbStrategy).isNotNull();
+    assertThat(flatDbStrategyProvider.getFlatDbStrategy(composedWorldStateStorage))
+        .isInstanceOf(FullFlatDbStrategy.class);
+  }
+
+  @Test
+  void downgradesFlatDbStrategyToPartiallyFlatDbMode() {
+    updateFlatDbMode(FlatDbMode.FULL);
+
+    flatDbStrategyProvider.downgradeToPartialFlatDbMode(composedWorldStateStorage);
+    assertThat(flatDbStrategyProvider.flatDbMode).isEqualTo(FlatDbMode.PARTIAL);
+    assertThat(flatDbStrategyProvider.flatDbStrategy).isNotNull();
+    assertThat(flatDbStrategyProvider.getFlatDbStrategy(composedWorldStateStorage))
+        .isInstanceOf(PartialFlatDbStrategy.class);
+  }
+
+  private void updateFlatDbMode(final FlatDbMode flatDbMode) {
+    final SegmentedKeyValueStorageTransaction transaction =
+        composedWorldStateStorage.startTransaction();
+    transaction.put(
+        KeyValueSegmentIdentifier.TRIE_BRANCH_STORAGE,
+        FlatDbStrategyProvider.FLAT_DB_MODE,
+        flatDbMode.getVersion().toArrayUnsafe());
+    transaction.commit();
+  }
+}
@@ -38,7 +38,7 @@ import org.hyperledger.besu.ethereum.eth.sync.fastsync.worldstate.NodeDataReques
 import org.hyperledger.besu.ethereum.storage.StorageProvider;
 import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueSegmentIdentifier;
 import org.hyperledger.besu.ethereum.storage.keyvalue.KeyValueStorageProviderBuilder;
-import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateArchive;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
 import org.hyperledger.besu.metrics.ObservableMetricsSystem;
@@ -105,7 +105,8 @@ public class WorldStateDownloaderBenchmark {
 
     final StorageProvider storageProvider =
        createKeyValueStorageProvider(tempDir, tempDir.resolve("database"));
-    worldStateStorage = storageProvider.createWorldStateStorage(DataStorageFormat.FOREST);
+    worldStateStorage =
+        storageProvider.createWorldStateStorage(DataStorageConfiguration.DEFAULT_CONFIG);
 
     pendingRequests = new InMemoryTasksPriorityQueues<>();
     worldStateDownloader =
@@ -139,6 +139,7 @@ public class EthPeers {
         "peer_limit",
         "The maximum number of peers this node allows to connect",
         () -> peerUpperBound);
+
     connectedPeersCounter =
         metricsSystem.createCounter(
             BesuMetricCategory.PEERS, "connected_total", "Total number of peers connected");
@@ -110,7 +110,7 @@ public class EthProtocolManager implements ProtocolManager, MinedBlockObserver {
 
     this.blockBroadcaster = new BlockBroadcaster(ethContext);
 
-    supportedCapabilities =
+    this.supportedCapabilities =
         calculateCapabilities(synchronizerConfiguration, ethereumWireProtocolConfiguration);
 
     // Run validators
@@ -252,11 +252,14 @@ public class EthProtocolManager implements ProtocolManager, MinedBlockObserver {
   @Override
   public void stop() {
     if (stopped.compareAndSet(false, true)) {
-      LOG.info("Stopping {} Subprotocol.", getSupportedProtocol());
+      LOG.atInfo().setMessage("Stopping {} Subprotocol.").addArgument(getSupportedProtocol()).log();
       scheduler.stop();
       shutdown.countDown();
     } else {
-      LOG.error("Attempted to stop already stopped {} Subprotocol.", getSupportedProtocol());
+      LOG.atInfo()
+          .setMessage("Attempted to stop already stopped {} Subprotocol.")
+          .addArgument(this::getSupportedProtocol)
+          .log();
     }
   }
 
@@ -264,7 +267,10 @@ public class EthProtocolManager implements ProtocolManager, MinedBlockObserver {
   public void awaitStop() throws InterruptedException {
     shutdown.await();
     scheduler.awaitStop();
-    LOG.info("{} Subprotocol stopped.", getSupportedProtocol());
+    LOG.atInfo()
+        .setMessage("{} Subprotocol stopped.")
+        .addArgument(this::getSupportedProtocol)
+        .log();
   }
 
   @Override
@@ -277,8 +283,10 @@ public class EthProtocolManager implements ProtocolManager, MinedBlockObserver {
     EthProtocolLogger.logProcessMessage(cap, code);
     final EthPeer ethPeer = ethPeers.peer(message.getConnection());
     if (ethPeer == null) {
-      LOG.debug(
-          "Ignoring message received from unknown peer connection: {}", message.getConnection());
+      LOG.atDebug()
+          .setMessage("Ignoring message received from unknown peer connection: {}")
+          .addArgument(message::getConnection)
+          .log();
       return;
     }
 
@@ -288,19 +296,24 @@ public class EthProtocolManager implements ProtocolManager, MinedBlockObserver {
       return;
     } else if (!ethPeer.statusHasBeenReceived()) {
       // Peers are required to send status messages before any other message type
-      LOG.debug(
-          "{} requires a Status ({}) message to be sent first. Instead, received message {} (BREACH_OF_PROTOCOL). Disconnecting from {}.",
-          this.getClass().getSimpleName(),
-          EthPV62.STATUS,
-          code,
-          ethPeer);
+      LOG.atDebug()
+          .setMessage(
+              "{} requires a Status ({}) message to be sent first. Instead, received message {} (BREACH_OF_PROTOCOL). Disconnecting from {}.")
+          .addArgument(() -> this.getClass().getSimpleName())
+          .addArgument(EthPV62.STATUS)
+          .addArgument(code)
+          .addArgument(ethPeer::toString)
+          .log();
       ethPeer.disconnect(DisconnectReason.BREACH_OF_PROTOCOL);
       return;
     }
 
     if (this.mergePeerFilter.isPresent()) {
       if (this.mergePeerFilter.get().disconnectIfGossipingBlocks(message, ethPeer)) {
-        LOG.debug("Post-merge disconnect: peer still gossiping blocks {}", ethPeer);
+        LOG.atDebug()
+            .setMessage("Post-merge disconnect: peer still gossiping blocks {}")
+            .addArgument(ethPeer::toString)
+            .log();
         handleDisconnect(ethPeer.getConnection(), DisconnectReason.SUBPROTOCOL_TRIGGERED, false);
         return;
       }
@@ -333,11 +346,12 @@ public class EthProtocolManager implements ProtocolManager, MinedBlockObserver {
         maybeResponseData = ethMessages.dispatch(ethMessage);
       }
     } catch (final RLPException e) {
-      LOG.debug(
-          "Received malformed message {} (BREACH_OF_PROTOCOL), disconnecting: {}",
-          messageData.getData(),
-          ethPeer,
-          e);
+      LOG.atDebug()
+          .setMessage("Received malformed message {} (BREACH_OF_PROTOCOL), disconnecting: {}, {}")
+          .addArgument(messageData::getData)
+          .addArgument(ethPeer::toString)
+          .addArgument(e::toString)
+          .log();
 
       ethPeer.disconnect(DisconnectMessage.DisconnectReason.BREACH_OF_PROTOCOL);
     }
@@ -368,23 +382,31 @@ public class EthProtocolManager implements ProtocolManager, MinedBlockObserver {
         genesisHash,
         latestForkId);
     try {
-      LOG.trace("Sending status message to {} for connection {}.", peer.getId(), connection);
+      LOG.atTrace()
+          .setMessage("Sending status message to {} for connection {}.")
+          .addArgument(peer::getId)
+          .addArgument(connection::toString)
+          .log();
       peer.send(status, getSupportedProtocol(), connection);
      peer.registerStatusSent(connection);
     } catch (final PeerNotConnected peerNotConnected) {
       // Nothing to do.
     }
-    LOG.trace("{}", ethPeers);
+    LOG.atTrace().setMessage("{}").addArgument(ethPeers::toString).log();
   }
 
   @Override
   public boolean shouldConnect(final Peer peer, final boolean incoming) {
-    if (peer.getForkId().map(forkId -> forkIdManager.peerCheck(forkId)).orElse(true)) {
-      LOG.trace("ForkId OK or not available");
+    if (peer.getForkId().map(forkIdManager::peerCheck).orElse(true)) {
+      LOG.atDebug()
+          .setMessage("ForkId OK or not available for peer {}")
+          .addArgument(peer::getId)
+          .log();
       if (ethPeers.shouldConnect(peer, incoming)) {
         return true;
       }
     }
+    LOG.atDebug().setMessage("ForkId check failed for peer {}").addArgument(peer::getId).log();
     return false;
   }
 
@@ -397,11 +419,11 @@ public class EthProtocolManager implements ProtocolManager, MinedBlockObserver {
       LOG.atDebug()
          .setMessage("Disconnect - {} - {} - {}... - {} peers left")
           .addArgument(initiatedByPeer ? "Inbound" : "Outbound")
-          .addArgument(reason)
-          .addArgument(connection.getPeer().getId().slice(0, 8))
-          .addArgument(ethPeers.peerCount())
+          .addArgument(reason::toString)
+          .addArgument(() -> connection.getPeer().getId().slice(0, 8))
+          .addArgument(ethPeers::peerCount)
           .log();
-      LOG.trace("{}", ethPeers);
+      LOG.atTrace().setMessage("{}").addArgument(ethPeers::toString).log();
     }
   }
 
@@ -412,43 +434,41 @@ public class EthProtocolManager implements ProtocolManager, MinedBlockObserver {
     try {
       if (!status.networkId().equals(networkId)) {
         LOG.atDebug()
-            .setMessage("Mismatched network id: {}, EthPeer {}...")
-            .addArgument(status.networkId())
-            .addArgument(peer.getShortNodeId())
-            .log();
-        LOG.atTrace()
-            .setMessage("Mismatched network id: {}, EthPeer {}")
-            .addArgument(status.networkId())
-            .addArgument(peer)
+            .setMessage("Mismatched network id: {}, peer {}")
+            .addArgument(status::networkId)
+            .addArgument(() -> getPeerOrPeerId(peer))
             .log();
         peer.disconnect(DisconnectReason.SUBPROTOCOL_TRIGGERED);
       } else if (!forkIdManager.peerCheck(forkId) && status.protocolVersion() > 63) {
-        LOG.debug(
-            "{} has matching network id ({}), but non-matching fork id: {}",
-            peer,
-            networkId,
-            forkId);
+        LOG.atDebug()
+            .setMessage("{} has matching network id ({}), but non-matching fork id: {}")
+            .addArgument(() -> getPeerOrPeerId(peer))
+            .addArgument(networkId::toString)
+            .addArgument(forkId)
+            .log();
         peer.disconnect(DisconnectReason.SUBPROTOCOL_TRIGGERED);
       } else if (forkIdManager.peerCheck(status.genesisHash())) {
-        LOG.debug(
-            "{} has matching network id ({}), but non-matching genesis hash: {}",
-            peer,
-            networkId,
-            status.genesisHash());
+        LOG.atDebug()
+            .setMessage("{} has matching network id ({}), but non-matching genesis hash: {}")
+            .addArgument(() -> getPeerOrPeerId(peer))
+            .addArgument(networkId::toString)
+            .addArgument(status::genesisHash)
+            .log();
         peer.disconnect(DisconnectReason.SUBPROTOCOL_TRIGGERED);
       } else if (mergePeerFilter.isPresent()
           && mergePeerFilter.get().disconnectIfPoW(status, peer)) {
         LOG.atDebug()
             .setMessage("Post-merge disconnect: peer still PoW {}")
-            .addArgument(peer.getShortNodeId())
+            .addArgument(() -> getPeerOrPeerId(peer))
            .log();
         handleDisconnect(peer.getConnection(), DisconnectReason.SUBPROTOCOL_TRIGGERED, false);
       } else {
-        LOG.debug(
-            "Received status message from {}: {} with connection {}",
-            peer,
-            status,
-            message.getConnection());
+        LOG.atDebug()
+            .setMessage("Received status message from {}: {} with connection {}")
+            .addArgument(peer::toString)
+            .addArgument(status::toString)
+            .addArgument(message::getConnection)
+            .log();
         peer.registerStatusReceived(
             status.bestHash(),
             status.totalDifficulty(),
@@ -467,6 +487,10 @@ public class EthProtocolManager implements ProtocolManager, MinedBlockObserver {
     }
   }
 
+  private Object getPeerOrPeerId(final EthPeer peer) {
+    return LOG.isTraceEnabled() ? peer : peer.getShortNodeId();
+  }
+
   @Override
   public void blockMined(final Block block) {
     // This assumes the block has already been included in the chain
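Note: the hunks above migrate plain LOG.debug/trace calls to SLF4J's fluent API so that arguments passed as method references or lambdas are only evaluated when the level is enabled. A self-contained sketch of the pattern (logger name and message are illustrative, not taken from the change):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class FluentLoggingExample {
      private static final Logger LOG = LoggerFactory.getLogger(FluentLoggingExample.class);

      void report(final Object peer) {
        // Same intent as LOG.debug("Disconnecting {}", peer), but peer::toString
        // is passed as a Supplier and only runs if DEBUG is enabled for this logger.
        LOG.atDebug()
            .setMessage("Disconnecting {}")
            .addArgument(peer::toString)
            .log();
      }
    }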
@@ -165,7 +165,7 @@ public class BackwardSyncAlgSpec {
 
     ttdCaptor.getValue().onTTDReached(true);
 
-    voidCompletableFuture.get(100, TimeUnit.MILLISECONDS);
+    voidCompletableFuture.get(200, TimeUnit.MILLISECONDS);
     assertThat(voidCompletableFuture).isCompleted();
 
     verify(context.getSyncState()).unsubscribeTTDReached(88L);
@@ -192,7 +192,7 @@ public class BackwardSyncAlgSpec {
 
     completionCaptor.getValue().onInitialSyncCompleted();
 
-    voidCompletableFuture.get(100, TimeUnit.MILLISECONDS);
+    voidCompletableFuture.get(200, TimeUnit.MILLISECONDS);
     assertThat(voidCompletableFuture).isCompleted();
 
     verify(context.getSyncState()).unsubscribeTTDReached(88L);
@@ -28,6 +28,7 @@ import org.hyperledger.besu.ethereum.eth.sync.worldstate.StalledDownloadExceptio
 import org.hyperledger.besu.ethereum.eth.sync.worldstate.WorldStateDownloadProcess;
 import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
 import org.hyperledger.besu.ethereum.trie.forest.storage.ForestWorldStateKeyValueStorage;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
 import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
@@ -80,7 +81,9 @@ public class FastWorldDownloadStateTest {
     if (storageFormat == DataStorageFormat.BONSAI) {
       worldStateStorage =
           new BonsaiWorldStateKeyValueStorage(
-              new InMemoryKeyValueStorageProvider(), new NoOpMetricsSystem());
+              new InMemoryKeyValueStorageProvider(),
+              new NoOpMetricsSystem(),
+              DataStorageConfiguration.DEFAULT_CONFIG);
     } else {
       worldStateStorage = new ForestWorldStateKeyValueStorage(new InMemoryKeyValueStorage());
     }
@@ -26,7 +26,7 @@ import org.hyperledger.besu.ethereum.core.BlockHeaderTestFixture;
 import org.hyperledger.besu.ethereum.core.InMemoryKeyValueStorageProvider;
 import org.hyperledger.besu.ethereum.trie.MerkleTrie;
 import org.hyperledger.besu.ethereum.trie.patricia.SimpleMerklePatriciaTrie;
-import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
 import org.hyperledger.besu.services.tasks.Task;
 
@@ -40,7 +40,8 @@ import org.junit.jupiter.api.Test;
 public class PersistDataStepTest {
 
   private final WorldStateStorage worldStateStorage =
-      new InMemoryKeyValueStorageProvider().createWorldStateStorage(DataStorageFormat.FOREST);
+      new InMemoryKeyValueStorageProvider()
+          .createWorldStateStorage(DataStorageConfiguration.DEFAULT_CONFIG);
   private final FastWorldDownloadState downloadState = mock(FastWorldDownloadState.class);
 
   private final Bytes rootNodeData = Bytes.of(1, 1, 1, 1);
@@ -34,6 +34,7 @@ import org.hyperledger.besu.ethereum.trie.TrieIterator;
 import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
 import org.hyperledger.besu.ethereum.trie.patricia.StoredMerklePatriciaTrie;
 import org.hyperledger.besu.ethereum.trie.patricia.StoredNodeFactory;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.StateTrieAccountValue;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
 import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
@@ -58,7 +59,9 @@ public class AccountHealingTrackingTest {
   private final List<Address> accounts = List.of(Address.fromHexString("0xdeadbeef"));
   private final WorldStateStorage worldStateStorage =
       new BonsaiWorldStateKeyValueStorage(
-          new InMemoryKeyValueStorageProvider(), new NoOpMetricsSystem());
+          new InMemoryKeyValueStorageProvider(),
+          new NoOpMetricsSystem(),
+          DataStorageConfiguration.DEFAULT_CONFIG);

   private WorldStateProofProvider worldStateProofProvider;

@@ -26,7 +26,7 @@ import org.hyperledger.besu.ethereum.eth.sync.snapsync.request.BytecodeRequest;
 import org.hyperledger.besu.ethereum.eth.sync.snapsync.request.SnapDataRequest;
 import org.hyperledger.besu.ethereum.eth.sync.snapsync.request.StorageRangeDataRequest;
 import org.hyperledger.besu.ethereum.trie.patricia.StoredMerklePatriciaTrie;
-import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
 import org.hyperledger.besu.services.tasks.Task;

@@ -39,7 +39,8 @@ import org.junit.jupiter.api.Test;
 public class PersistDataStepTest {

   private final WorldStateStorage worldStateStorage =
-      new InMemoryKeyValueStorageProvider().createWorldStateStorage(DataStorageFormat.FOREST);
+      new InMemoryKeyValueStorageProvider()
+          .createWorldStateStorage(DataStorageConfiguration.DEFAULT_CONFIG);
   private final SnapSyncProcessState snapSyncState = mock(SnapSyncProcessState.class);
   private final SnapWorldDownloadState downloadState = mock(SnapWorldDownloadState.class);

@@ -40,6 +40,7 @@ import org.hyperledger.besu.ethereum.eth.sync.snapsync.request.SnapDataRequest;
 import org.hyperledger.besu.ethereum.eth.sync.worldstate.WorldStateDownloadProcess;
 import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
 import org.hyperledger.besu.ethereum.trie.forest.storage.ForestWorldStateKeyValueStorage;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
 import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
@@ -108,7 +109,9 @@ public class SnapWorldDownloadStateTest {
     if (storageFormat == DataStorageFormat.BONSAI) {
       worldStateStorage =
           new BonsaiWorldStateKeyValueStorage(
-              new InMemoryKeyValueStorageProvider(), new NoOpMetricsSystem());
+              new InMemoryKeyValueStorageProvider(),
+              new NoOpMetricsSystem(),
+              DataStorageConfiguration.DEFAULT_CONFIG);
     } else {
       worldStateStorage = new ForestWorldStateKeyValueStorage(new InMemoryKeyValueStorage());
     }

@@ -27,7 +27,7 @@ import org.hyperledger.besu.ethereum.trie.MerkleTrie;
 import org.hyperledger.besu.ethereum.trie.RangeStorageEntriesCollector;
 import org.hyperledger.besu.ethereum.trie.TrieIterator;
 import org.hyperledger.besu.ethereum.trie.patricia.StoredMerklePatriciaTrie;
-import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.StateTrieAccountValue;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
 import org.hyperledger.besu.services.tasks.Task;
@@ -44,7 +44,8 @@ public class TaskGenerator {
   public static List<Task<SnapDataRequest>> createAccountRequest(final boolean withData) {

     final WorldStateStorage worldStateStorage =
-        new InMemoryKeyValueStorageProvider().createWorldStateStorage(DataStorageFormat.FOREST);
+        new InMemoryKeyValueStorageProvider()
+            .createWorldStateStorage(DataStorageConfiguration.DEFAULT_CONFIG);

     final WorldStateProofProvider worldStateProofProvider =
         new WorldStateProofProvider(worldStateStorage);

@@ -31,6 +31,7 @@ import org.hyperledger.besu.ethereum.trie.RangeStorageEntriesCollector;
 import org.hyperledger.besu.ethereum.trie.TrieIterator;
 import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
 import org.hyperledger.besu.ethereum.trie.forest.storage.ForestWorldStateKeyValueStorage;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
 import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
 import org.hyperledger.besu.services.kvstore.InMemoryKeyValueStorage;
@@ -179,7 +180,8 @@ public class AccountFlatDatabaseHealingRangeRequestTest {
     final StorageProvider storageProvider = new InMemoryKeyValueStorageProvider();

     final WorldStateStorage worldStateStorage =
-        new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem());
+        new BonsaiWorldStateKeyValueStorage(
+            storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
     final WorldStateProofProvider proofProvider = new WorldStateProofProvider(worldStateStorage);
     final MerkleTrie<Bytes, Bytes> accountStateTrie =
         TrieGenerator.generateTrie(worldStateStorage, 15);
@@ -233,7 +235,8 @@ public class AccountFlatDatabaseHealingRangeRequestTest {
     final StorageProvider storageProvider = new InMemoryKeyValueStorageProvider();

     final WorldStateStorage worldStateStorage =
-        new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem());
+        new BonsaiWorldStateKeyValueStorage(
+            storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
     final WorldStateProofProvider proofProvider = new WorldStateProofProvider(worldStateStorage);
     final MerkleTrie<Bytes, Bytes> accountStateTrie =
         TrieGenerator.generateTrie(worldStateStorage, 15);

@@ -33,6 +33,7 @@ import org.hyperledger.besu.ethereum.trie.RangeStorageEntriesCollector;
 import org.hyperledger.besu.ethereum.trie.TrieIterator;
 import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
 import org.hyperledger.besu.ethereum.trie.patricia.StoredMerklePatriciaTrie;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.StateTrieAccountValue;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
 import org.hyperledger.besu.metrics.noop.NoOpMetricsSystem;
@@ -78,7 +79,8 @@ class StorageFlatDatabaseHealingRangeRequestTest {
   public void setup() {
     final StorageProvider storageProvider = new InMemoryKeyValueStorageProvider();
     worldStateStorage =
-        new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem());
+        new BonsaiWorldStateKeyValueStorage(
+            storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
     proofProvider = new WorldStateProofProvider(worldStateStorage);
     trie =
         TrieGenerator.generateTrie(

@@ -24,6 +24,7 @@ import org.hyperledger.besu.ethereum.storage.StorageProvider;
 import org.hyperledger.besu.ethereum.trie.MerkleTrie;
 import org.hyperledger.besu.ethereum.trie.bonsai.storage.BonsaiWorldStateKeyValueStorage;
 import org.hyperledger.besu.ethereum.trie.forest.storage.ForestWorldStateKeyValueStorage;
+import org.hyperledger.besu.ethereum.worldstate.DataStorageConfiguration;
 import org.hyperledger.besu.ethereum.worldstate.DataStorageFormat;
 import org.hyperledger.besu.ethereum.worldstate.StateTrieAccountValue;
 import org.hyperledger.besu.ethereum.worldstate.WorldStateStorage;
@@ -74,7 +75,8 @@ class StorageTrieNodeHealingRequestTest {
     } else {
       final StorageProvider storageProvider = new InMemoryKeyValueStorageProvider();
       worldStateStorage =
-          new BonsaiWorldStateKeyValueStorage(storageProvider, new NoOpMetricsSystem());
+          new BonsaiWorldStateKeyValueStorage(
+              storageProvider, new NoOpMetricsSystem(), DataStorageConfiguration.DEFAULT_CONFIG);
     }
     final MerkleTrie<Bytes, Bytes> trie =
         TrieGenerator.generateTrie(

@@ -58,7 +58,7 @@ public class PendingTransactionEstimatedMemorySizeTest extends BaseTransactionPo
   private static final Set<Class<?>> SHARED_CLASSES =
       Set.of(SignatureAlgorithm.class, TransactionType.class);
   private static final Set<String> COMMON_CONSTANT_FIELD_PATHS =
-      Set.of(".value.ctor", ".hashNoSignature");
+      Set.of(".value.ctor", ".hashNoSignature", ".signature.encoded.delegate");
   private static final Set<String> EIP1559_EIP4844_CONSTANT_FIELD_PATHS =
       Sets.union(COMMON_CONSTANT_FIELD_PATHS, Set.of(".gasPrice"));
   private static final Set<String> FRONTIER_ACCESS_LIST_CONSTANT_FIELD_PATHS =

@@ -371,6 +371,9 @@ public class EvmToolCommand implements Runnable {
     long txGas = gas - intrinsicGasCost - accessListCost;

     final EVM evm = protocolSpec.getEvm();
+    if (codeBytes.isEmpty()) {
+      codeBytes = component.getWorldState().get(receiver).getCode();
+    }
     Code code = evm.getCode(Hash.hash(codeBytes), codeBytes);
     if (!code.isValid()) {
       out.println(((CodeInvalid) code).getInvalidReason());

@@ -32,7 +32,7 @@ public class DiscoveryConfiguration {
   private List<EnodeURL> bootnodes = new ArrayList<>();
   private String dnsDiscoveryURL;
   private boolean discoveryV5Enabled = false;
-  private boolean filterOnEnrForkId = false;
+  private boolean filterOnEnrForkId = NetworkingConfiguration.DEFAULT_FILTER_ON_ENR_FORK_ID;

   public static DiscoveryConfiguration create() {
     return new DiscoveryConfiguration();

@@ -23,6 +23,7 @@ public class NetworkingConfiguration {
   public static final int DEFAULT_INITIATE_CONNECTIONS_FREQUENCY_SEC = 30;
   public static final int DEFAULT_CHECK_MAINTAINED_CONNECTIONS_FREQUENCY_SEC = 60;
   public static final int DEFAULT_PEER_LOWER_BOUND = 25;
+  public static final boolean DEFAULT_FILTER_ON_ENR_FORK_ID = true;

   private DiscoveryConfiguration discovery = new DiscoveryConfiguration();
   private RlpxConfiguration rlpx = new RlpxConfiguration();

@@ -26,6 +26,7 @@ import org.hyperledger.besu.ethereum.p2p.config.DiscoveryConfiguration;
 import org.hyperledger.besu.ethereum.p2p.discovery.internal.Packet;
 import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerDiscoveryController;
 import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerRequirement;
+import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerTable;
 import org.hyperledger.besu.ethereum.p2p.discovery.internal.PingPacketData;
 import org.hyperledger.besu.ethereum.p2p.discovery.internal.TimerUtil;
 import org.hyperledger.besu.ethereum.p2p.peers.EnodeURLImpl;
@@ -81,6 +82,7 @@ public abstract class PeerDiscoveryAgent {
   private final MetricsSystem metricsSystem;
   private final RlpxAgent rlpxAgent;
   private final ForkIdManager forkIdManager;
+  private final PeerTable peerTable;

   /* The peer controller, which takes care of the state machine of peers. */
   protected Optional<PeerDiscoveryController> controller = Optional.empty();
@@ -109,7 +111,8 @@ public abstract class PeerDiscoveryAgent {
       final MetricsSystem metricsSystem,
       final StorageProvider storageProvider,
       final ForkIdManager forkIdManager,
-      final RlpxAgent rlpxAgent) {
+      final RlpxAgent rlpxAgent,
+      final PeerTable peerTable) {
     this.metricsSystem = metricsSystem;
     checkArgument(nodeKey != null, "nodeKey cannot be null");
     checkArgument(config != null, "provided configuration cannot be null");
@@ -130,6 +133,7 @@ public abstract class PeerDiscoveryAgent {
     this.forkIdManager = forkIdManager;
     this.forkIdSupplier = () -> forkIdManager.getForkIdForChainHead().getForkIdAsBytesList();
     this.rlpxAgent = rlpxAgent;
+    this.peerTable = peerTable;
   }

   protected abstract TimerUtil createTimer();
@@ -263,9 +267,9 @@ public abstract class PeerDiscoveryAgent {
             .peerRequirement(PeerRequirement.combine(peerRequirements))
             .peerPermissions(peerPermissions)
             .metricsSystem(metricsSystem)
-            .forkIdManager(forkIdManager)
             .filterOnEnrForkId((config.isFilterOnEnrForkIdEnabled()))
             .rlpxAgent(rlpxAgent)
+            .peerTable(peerTable)
             .build();
   }

@@ -282,8 +286,31 @@ public abstract class PeerDiscoveryAgent {
             .flatMap(Endpoint::getTcpPort)
             .orElse(udpPort);

+    // If the host is present in the P2P PING packet itself, use that as the endpoint. If the P2P
+    // PING packet specifies 127.0.0.1 (the default if a custom value is not specified with
+    // --p2p-host or via a suitable --nat-method) we ignore it in favour of the UDP source address.
+    // The likelihood is that the UDP source will be 127.0.0.1 anyway, but this reduces the chance
+    // of an unexpected change in behaviour as a result of
+    // https://github.com/hyperledger/besu/issues/6224 being fixed.
+    final String host =
+        packet
+            .getPacketData(PingPacketData.class)
+            .flatMap(PingPacketData::getFrom)
+            .map(Endpoint::getHost)
+            .filter(
+                fromAddr ->
+                    (!fromAddr.equals("127.0.0.1") && InetAddresses.isInetAddress(fromAddr)))
+            .stream()
+            .peek(
+                h ->
+                    LOG.trace(
+                        "Using \"From\" endpoint {} specified in ping packet. Ignoring UDP source host {}",
+                        h,
+                        sourceEndpoint.getHost()))
+            .findFirst()
+            .orElseGet(sourceEndpoint::getHost);
+
     // Notify the peer controller.
-    final String host = sourceEndpoint.getHost();
     final DiscoveryPeer peer =
         DiscoveryPeer.fromEnode(
             EnodeURLImpl.builder()

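The comments in the hunk above describe the new advertised-host rule for an incoming PING: prefer the `From` host the peer put in the packet, unless it is `127.0.0.1` or not a literal IP address, in which case fall back to the UDP source host. As a quick illustration of that rule only, here is a minimal, self-contained sketch; the class and method names are placeholders and are not part of Besu (it relies on Guava's `InetAddresses`, which the hunk above also uses).

```java
import com.google.common.net.InetAddresses;

import java.util.Optional;

// Illustrative sketch of the PING host-selection rule described above (not Besu code).
final class AdvertisedHostRule {

  // Prefer the "From" host advertised in the PING packet; ignore it when it is the
  // loopback default or not a valid literal IP, and fall back to the UDP source host.
  static String selectHost(final Optional<String> pingFromHost, final String udpSourceHost) {
    return pingFromHost
        .filter(host -> !host.equals("127.0.0.1") && InetAddresses.isInetAddress(host))
        .orElse(udpSourceHost);
  }

  public static void main(final String[] args) {
    // A peer that advertises a public address in its PING "From" field keeps that address.
    System.out.println(selectHost(Optional.of("203.0.113.5"), "198.51.100.9")); // 203.0.113.5
    // The loopback default is ignored in favour of the UDP source address.
    System.out.println(selectHost(Optional.of("127.0.0.1"), "198.51.100.9")); // 198.51.100.9
  }
}
```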
@@ -23,6 +23,7 @@ import org.hyperledger.besu.ethereum.p2p.config.DiscoveryConfiguration;
 import org.hyperledger.besu.ethereum.p2p.discovery.internal.Packet;
 import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerDiscoveryController;
 import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerDiscoveryController.AsyncExecutor;
+import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerTable;
 import org.hyperledger.besu.ethereum.p2p.discovery.internal.TimerUtil;
 import org.hyperledger.besu.ethereum.p2p.discovery.internal.VertxTimerUtil;
 import org.hyperledger.besu.ethereum.p2p.permissions.PeerPermissions;
@@ -73,7 +74,8 @@ public class VertxPeerDiscoveryAgent extends PeerDiscoveryAgent {
       final MetricsSystem metricsSystem,
       final StorageProvider storageProvider,
       final ForkIdManager forkIdManager,
-      final RlpxAgent rlpxAgent) {
+      final RlpxAgent rlpxAgent,
+      final PeerTable peerTable) {
     super(
         nodeKey,
         config,
@@ -82,7 +84,8 @@ public class VertxPeerDiscoveryAgent extends PeerDiscoveryAgent {
         metricsSystem,
         storageProvider,
         forkIdManager,
-        rlpxAgent);
+        rlpxAgent,
+        peerTable);
     checkArgument(vertx != null, "vertx instance cannot be null");
     this.vertx = vertx;

@@ -21,8 +21,6 @@ import static java.util.concurrent.TimeUnit.MILLISECONDS;
 import static java.util.concurrent.TimeUnit.SECONDS;

 import org.hyperledger.besu.cryptoservices.NodeKey;
-import org.hyperledger.besu.ethereum.forkid.ForkId;
-import org.hyperledger.besu.ethereum.forkid.ForkIdManager;
 import org.hyperledger.besu.ethereum.p2p.discovery.DiscoveryPeer;
 import org.hyperledger.besu.ethereum.p2p.discovery.PeerDiscoveryStatus;
 import org.hyperledger.besu.ethereum.p2p.peers.Peer;
@@ -129,7 +127,6 @@ public class PeerDiscoveryController {
   private final DiscoveryProtocolLogger discoveryProtocolLogger;
   private final LabelledMetric<Counter> interactionCounter;
   private final LabelledMetric<Counter> interactionRetryCounter;
-  private final ForkIdManager forkIdManager;
   private final boolean filterOnEnrForkId;
   private final RlpxAgent rlpxAgent;

@@ -161,7 +158,6 @@ public class PeerDiscoveryController {
       final PeerPermissions peerPermissions,
       final MetricsSystem metricsSystem,
       final Optional<Cache<Bytes, Packet>> maybeCacheForEnrRequests,
-      final ForkIdManager forkIdManager,
       final boolean filterOnEnrForkId,
       final RlpxAgent rlpxAgent) {
     this.timerUtil = timerUtil;
@@ -197,11 +193,11 @@ public class PeerDiscoveryController {
             "discovery_interaction_retry_count",
             "Total number of interaction retries performed",
             "type");

     this.cachedEnrRequests =
         maybeCacheForEnrRequests.orElse(
             CacheBuilder.newBuilder().maximumSize(50).expireAfterWrite(10, SECONDS).build());

-    this.forkIdManager = forkIdManager;
     this.filterOnEnrForkId = filterOnEnrForkId;
   }

@@ -314,6 +310,7 @@ public class PeerDiscoveryController {
     }

     final DiscoveryPeer peer = resolvePeer(sender);
+    final Bytes peerId = peer.getId();
     switch (packet.getType()) {
       case PING:
         if (peerPermissions.allowInboundBonding(peer)) {
@@ -333,10 +330,10 @@ public class PeerDiscoveryController {
                 if (filterOnEnrForkId) {
                   requestENR(peer);
                 }
-                bondingPeers.invalidate(peer.getId());
+                bondingPeers.invalidate(peerId);
                 addToPeerTable(peer);
                 recursivePeerRefreshState.onBondingComplete(peer);
-                Optional.ofNullable(cachedEnrRequests.getIfPresent(peer.getId()))
+                Optional.ofNullable(cachedEnrRequests.getIfPresent(peerId))
                     .ifPresent(cachedEnrRequest -> processEnrRequest(peer, cachedEnrRequest));
               });
         break;
@@ -360,12 +357,12 @@ public class PeerDiscoveryController {
         if (PeerDiscoveryStatus.BONDED.equals(peer.getStatus())) {
           processEnrRequest(peer, packet);
         } else if (PeerDiscoveryStatus.BONDING.equals(peer.getStatus())) {
-          LOG.trace("ENR_REQUEST cached for bonding peer Id: {}", peer.getId());
+          LOG.trace("ENR_REQUEST cached for bonding peer Id: {}", peerId);
           // Due to UDP, it may happen that we receive the ENR_REQUEST just before the PONG.
           // Because peers want to send the ENR_REQUEST directly after the pong.
           // If this happens we don't want to ignore the request but process when bonded.
           // this cache allows to keep the request and to respond after having processed the PONG
-          cachedEnrRequests.put(peer.getId(), packet);
+          cachedEnrRequests.put(peerId, packet);
         }
         break;
       case ENR_RESPONSE:
@@ -376,26 +373,6 @@ public class PeerDiscoveryController {
                   packet.getPacketData(ENRResponsePacketData.class);
               final NodeRecord enr = packetData.get().getEnr();
               peer.setNodeRecord(enr);
-
-              final Optional<ForkId> maybeForkId = peer.getForkId();
-              if (maybeForkId.isPresent()) {
-                if (forkIdManager.peerCheck(maybeForkId.get())) {
-                  connectOnRlpxLayer(peer);
-                  LOG.debug(
-                      "Peer {} PASSED fork id check. ForkId received: {}",
-                      sender.getId(),
-                      maybeForkId.get());
-                } else {
-                  LOG.debug(
-                      "Peer {} FAILED fork id check. ForkId received: {}",
-                      sender.getId(),
-                      maybeForkId.get());
-                }
-              } else {
-                // if the peer hasn't sent the ForkId try to connect to it anyways
-                connectOnRlpxLayer(peer);
-                LOG.debug("No fork id sent by peer: {}", peer.getId());
-              }
             });
         break;
     }
@@ -431,9 +408,7 @@ public class PeerDiscoveryController {

     if (peer.getStatus() != PeerDiscoveryStatus.BONDED) {
       peer.setStatus(PeerDiscoveryStatus.BONDED);
-      if (!filterOnEnrForkId) {
-        connectOnRlpxLayer(peer);
-      }
+      connectOnRlpxLayer(peer);
     }

     final PeerTable.AddResult result = peerTable.tryAdd(peer);
@@ -560,8 +535,6 @@ public class PeerDiscoveryController {
    */
   @VisibleForTesting
   void requestENR(final DiscoveryPeer peer) {
-    peer.setStatus(PeerDiscoveryStatus.ENR_REQUESTED);
-
     final Consumer<PeerInteractionState> action =
         interaction -> {
           final ENRRequestPacketData data = ENRRequestPacketData.create();
@@ -838,7 +811,6 @@ public class PeerDiscoveryController {

     private Cache<Bytes, Packet> cachedEnrRequests =
         CacheBuilder.newBuilder().maximumSize(50).expireAfterWrite(10, SECONDS).build();
-    private ForkIdManager forkIdManager;
     private RlpxAgent rlpxAgent;

     private Builder() {}
@@ -846,10 +818,6 @@ public class PeerDiscoveryController {
     public PeerDiscoveryController build() {
       validate();

-      if (peerTable == null) {
-        peerTable = new PeerTable(this.nodeKey.getPublicKey().getEncodedBytes(), 16);
-      }
-
       return new PeerDiscoveryController(
           nodeKey,
           localPeer,
@@ -864,7 +832,6 @@ public class PeerDiscoveryController {
           peerPermissions,
           metricsSystem,
           Optional.of(cachedEnrRequests),
-          forkIdManager,
           filterOnEnrForkId,
           rlpxAgent);
     }
@@ -875,8 +842,8 @@ public class PeerDiscoveryController {
       validateRequiredDependency(timerUtil, "TimerUtil");
       validateRequiredDependency(workerExecutor, "AsyncExecutor");
       validateRequiredDependency(metricsSystem, "MetricsSystem");
-      validateRequiredDependency(forkIdManager, "ForkIdManager");
       validateRequiredDependency(rlpxAgent, "RlpxAgent");
+      validateRequiredDependency(peerTable, "PeerTable");
     }

     private void validateRequiredDependency(final Object object, final String name) {
@@ -970,11 +937,5 @@ public class PeerDiscoveryController {
       this.rlpxAgent = rlpxAgent;
       return this;
     }
-
-    public Builder forkIdManager(final ForkIdManager forkIdManager) {
-      checkNotNull(forkIdManager);
-      this.forkIdManager = forkIdManager;
-      return this;
-    }
   }
 }

@@ -56,26 +56,21 @@ public class PeerTable {
    * Builds a new peer table, where distance is calculated using the provided nodeId as a baseline.
    *
    * @param nodeId The ID of the node where this peer table is stored.
-   * @param bucketSize The maximum length of each k-bucket.
    */
-  public PeerTable(final Bytes nodeId, final int bucketSize) {
+  public PeerTable(final Bytes nodeId) {
     this.keccak256 = Hash.keccak256(nodeId);
     this.table =
         Stream.generate(() -> new Bucket(DEFAULT_BUCKET_SIZE))
            .limit(N_BUCKETS + 1)
            .toArray(Bucket[]::new);
     this.distanceCache = new ConcurrentHashMap<>();
-    this.maxEntriesCnt = N_BUCKETS * bucketSize;
+    this.maxEntriesCnt = N_BUCKETS * DEFAULT_BUCKET_SIZE;

     // A bloom filter with 4096 expected insertions of 64-byte keys with a 0.1% false positive
     // probability yields a memory footprint of ~7.5kb.
     buildBloomFilter();
   }

-  public PeerTable(final Bytes nodeId) {
-    this(nodeId, DEFAULT_BUCKET_SIZE);
-  }
-
   /**
    * Returns the table's representation of a peer, if it exists.
    *
@@ -83,11 +78,12 @@ public class PeerTable {
    * @return The stored representation.
    */
   public Optional<DiscoveryPeer> get(final PeerId peer) {
-    if (!idBloom.mightContain(peer.getId())) {
+    final Bytes peerId = peer.getId();
+    if (!idBloom.mightContain(peerId)) {
       return Optional.empty();
     }
     final int distance = distanceFrom(peer);
-    return table[distance].getAndTouch(peer.getId());
+    return table[distance].getAndTouch(peerId);
   }

   /**

@@ -27,6 +27,7 @@ import org.hyperledger.besu.ethereum.p2p.discovery.DiscoveryPeer;
 import org.hyperledger.besu.ethereum.p2p.discovery.PeerDiscoveryAgent;
 import org.hyperledger.besu.ethereum.p2p.discovery.PeerDiscoveryStatus;
 import org.hyperledger.besu.ethereum.p2p.discovery.VertxPeerDiscoveryAgent;
+import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerTable;
 import org.hyperledger.besu.ethereum.p2p.peers.DefaultPeerPrivileges;
 import org.hyperledger.besu.ethereum.p2p.peers.EnodeURLImpl;
 import org.hyperledger.besu.ethereum.p2p.peers.LocalNode;
@@ -383,11 +384,12 @@ public class DefaultP2PNetwork implements P2PNetwork {
   @VisibleForTesting
   void attemptPeerConnections() {
     LOG.trace("Initiating connections to discovered peers.");
-    rlpxAgent.connect(
+    final Stream<DiscoveryPeer> toTry =
         streamDiscoveredPeers()
             .filter(peer -> peer.getStatus() == PeerDiscoveryStatus.BONDED)
             .filter(peerDiscoveryAgent::checkForkId)
-            .sorted(Comparator.comparing(DiscoveryPeer::getLastAttemptedConnection)));
+            .sorted(Comparator.comparing(DiscoveryPeer::getLastAttemptedConnection));
+    toTry.forEach(rlpxAgent::connect);
   }

   @Override
@@ -511,6 +513,7 @@ public class DefaultP2PNetwork implements P2PNetwork {
     private Supplier<Stream<PeerConnection>> allConnectionsSupplier;
     private Supplier<Stream<PeerConnection>> allActiveConnectionsSupplier;
     private int peersLowerBound;
+    private PeerTable peerTable;

     public P2PNetwork build() {
       validate();
@@ -528,6 +531,7 @@ public class DefaultP2PNetwork implements P2PNetwork {
       final MutableLocalNode localNode =
           MutableLocalNode.create(config.getRlpx().getClientId(), 5, supportedCapabilities);
       final PeerPrivileges peerPrivileges = new DefaultPeerPrivileges(maintainedPeers);
+      peerTable = new PeerTable(nodeKey.getPublicKey().getEncodedBytes());
       rlpxAgent = rlpxAgent == null ? createRlpxAgent(localNode, peerPrivileges) : rlpxAgent;
       peerDiscoveryAgent = peerDiscoveryAgent == null ? createDiscoveryAgent() : peerDiscoveryAgent;

@@ -572,7 +576,8 @@ public class DefaultP2PNetwork implements P2PNetwork {
           metricsSystem,
           storageProvider,
           forkIdManager,
-          rlpxAgent);
+          rlpxAgent,
+          peerTable);
     }

     private RlpxAgent createRlpxAgent(
@@ -589,6 +594,7 @@ public class DefaultP2PNetwork implements P2PNetwork {
           .allConnectionsSupplier(allConnectionsSupplier)
           .allActiveConnectionsSupplier(allActiveConnectionsSupplier)
           .peersLowerBound(peersLowerBound)
+          .peerTable(peerTable)
           .build();
     }

@@ -20,6 +20,7 @@ import static com.google.common.base.Preconditions.checkState;
 import org.hyperledger.besu.cryptoservices.NodeKey;
 import org.hyperledger.besu.ethereum.p2p.config.RlpxConfiguration;
 import org.hyperledger.besu.ethereum.p2p.discovery.DiscoveryPeer;
+import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerTable;
 import org.hyperledger.besu.ethereum.p2p.peers.LocalNode;
 import org.hyperledger.besu.ethereum.p2p.peers.Peer;
 import org.hyperledger.besu.ethereum.p2p.peers.PeerPrivileges;
@@ -162,13 +163,6 @@ public class RlpxAgent {
     }
   }

-  public void connect(final Stream<? extends Peer> peerStream) {
-    if (!localNode.isReady()) {
-      return;
-    }
-    peerStream.forEach(this::connect);
-  }
-
   public void disconnect(final Bytes peerId, final DisconnectReason reason) {
     try {
       allActiveConnectionsSupplier
@@ -206,6 +200,7 @@ public class RlpxAgent {
                     + this.getClass().getSimpleName()
                     + " has finished starting"));
     }
+
     // Check peer is valid
     final EnodeURL enode = peer.getEnodeURL();
     if (!enode.isListening()) {
@@ -380,6 +375,7 @@ public class RlpxAgent {
     private Supplier<Stream<PeerConnection>> allConnectionsSupplier;
     private Supplier<Stream<PeerConnection>> allActiveConnectionsSupplier;
     private int peersLowerBound;
+    private PeerTable peerTable;

     private Builder() {}

@@ -399,12 +395,13 @@ public class RlpxAgent {
                 localNode,
                 connectionEvents,
                 metricsSystem,
-                p2pTLSConfiguration.get());
+                p2pTLSConfiguration.get(),
+                peerTable);
       } else {
         LOG.debug("Using default NettyConnectionInitializer");
         connectionInitializer =
             new NettyConnectionInitializer(
-                nodeKey, config, localNode, connectionEvents, metricsSystem);
+                nodeKey, config, localNode, connectionEvents, metricsSystem, peerTable);
       }
     }

@@ -499,5 +496,10 @@ public class RlpxAgent {
       this.peersLowerBound = peersLowerBound;
       return this;
     }
+
+    public Builder peerTable(final PeerTable peerTable) {
+      this.peerTable = peerTable;
+      return this;
+    }
   }
 }

@@ -14,6 +14,7 @@
  */
 package org.hyperledger.besu.ethereum.p2p.rlpx.connections.netty;

+import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerTable;
 import org.hyperledger.besu.ethereum.p2p.peers.LocalNode;
 import org.hyperledger.besu.ethereum.p2p.peers.Peer;
 import org.hyperledger.besu.ethereum.p2p.rlpx.connections.PeerConnection;
@@ -60,6 +61,7 @@ abstract class AbstractHandshakeHandler extends SimpleChannelInboundHandler<Byte

   private final FramerProvider framerProvider;
   private final boolean inboundInitiated;
+  private final PeerTable peerTable;

   AbstractHandshakeHandler(
       final List<SubProtocol> subProtocols,
@@ -70,7 +72,8 @@ abstract class AbstractHandshakeHandler extends SimpleChannelInboundHandler<Byte
       final MetricsSystem metricsSystem,
       final HandshakerProvider handshakerProvider,
       final FramerProvider framerProvider,
-      final boolean inboundInitiated) {
+      final boolean inboundInitiated,
+      final PeerTable peerTable) {
     this.subProtocols = subProtocols;
     this.localNode = localNode;
     this.expectedPeer = expectedPeer;
@@ -80,6 +83,7 @@ abstract class AbstractHandshakeHandler extends SimpleChannelInboundHandler<Byte
     this.handshaker = handshakerProvider.buildInstance();
     this.framerProvider = framerProvider;
    this.inboundInitiated = inboundInitiated;
+    this.peerTable = peerTable;
   }

   /**
@@ -97,47 +101,48 @@ abstract class AbstractHandshakeHandler extends SimpleChannelInboundHandler<Byte
       ctx.writeAndFlush(nextMsg.get());
     } else if (handshaker.getStatus() != Handshaker.HandshakeStatus.SUCCESS) {
       LOG.debug("waiting for more bytes");
-      return;
+    } else {
+
+      final Bytes nodeId = handshaker.partyPubKey().getEncodedBytes();
+      if (!localNode.isReady()) {
+        // If we're handling a connection before the node is fully up, just disconnect
+        LOG.debug("Rejecting connection because local node is not ready {}", nodeId);
+        disconnect(ctx, DisconnectMessage.DisconnectReason.UNKNOWN);
+        return;
+      }
+
+      LOG.trace("Sending framed hello");
+
+      // Exchange keys done
+      final Framer framer = this.framerProvider.buildFramer(handshaker.secrets());
+
+      final ByteToMessageDecoder deFramer =
+          new DeFramer(
+              framer,
+              subProtocols,
+              localNode,
+              expectedPeer,
+              connectionEventDispatcher,
+              connectionFuture,
+              metricsSystem,
+              inboundInitiated,
+              peerTable);
+
+      ctx.channel()
+          .pipeline()
+          .replace(this, "DeFramer", deFramer)
+          .addBefore("DeFramer", "validate", new ValidateFirstOutboundMessage(framer));
+
+      ctx.writeAndFlush(new OutboundMessage(null, HelloMessage.create(localNode.getPeerInfo())))
+          .addListener(
+              ff -> {
+                if (ff.isSuccess()) {
+                  LOG.trace("Successfully wrote hello message");
+                }
+              });
+      msg.retain();
+      ctx.fireChannelRead(msg);
     }
-
-    final Bytes nodeId = handshaker.partyPubKey().getEncodedBytes();
-    if (!localNode.isReady()) {
-      // If we're handling a connection before the node is fully up, just disconnect
-      LOG.debug("Rejecting connection because local node is not ready {}", nodeId);
-      disconnect(ctx, DisconnectMessage.DisconnectReason.UNKNOWN);
-      return;
-    }
-
-    LOG.trace("Sending framed hello");
-
-    // Exchange keys done
-    final Framer framer = this.framerProvider.buildFramer(handshaker.secrets());
-
-    final ByteToMessageDecoder deFramer =
-        new DeFramer(
-            framer,
-            subProtocols,
-            localNode,
-            expectedPeer,
-            connectionEventDispatcher,
-            connectionFuture,
-            metricsSystem,
-            inboundInitiated);
-
-    ctx.channel()
-        .pipeline()
-        .replace(this, "DeFramer", deFramer)
-        .addBefore("DeFramer", "validate", new ValidateFirstOutboundMessage(framer));
-
-    ctx.writeAndFlush(new OutboundMessage(null, HelloMessage.create(localNode.getPeerInfo())))
-        .addListener(
-            ff -> {
-              if (ff.isSuccess()) {
-                LOG.trace("Successfully wrote hello message");
-              }
-            });
-    msg.retain();
-    ctx.fireChannelRead(msg);
   }

   private void disconnect(

@@ -14,6 +14,8 @@
  */
 package org.hyperledger.besu.ethereum.p2p.rlpx.connections.netty;
 
+import org.hyperledger.besu.ethereum.p2p.discovery.DiscoveryPeer;
+import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerTable;
 import org.hyperledger.besu.ethereum.p2p.network.exceptions.BreachOfProtocolException;
 import org.hyperledger.besu.ethereum.p2p.network.exceptions.IncompatiblePeerException;
 import org.hyperledger.besu.ethereum.p2p.network.exceptions.PeerChannelClosedException;
@@ -70,6 +72,7 @@ final class DeFramer extends ByteToMessageDecoder {
   private final Optional<Peer> expectedPeer;
   private final List<SubProtocol> subProtocols;
   private final boolean inboundInitiated;
+  private final PeerTable peerTable;
   private boolean hellosExchanged;
   private final LabelledMetric<Counter> outboundMessagesCounter;
 
@@ -81,7 +84,8 @@ final class DeFramer extends ByteToMessageDecoder {
       final PeerConnectionEventDispatcher connectionEventDispatcher,
       final CompletableFuture<PeerConnection> connectFuture,
       final MetricsSystem metricsSystem,
-      final boolean inboundInitiated) {
+      final boolean inboundInitiated,
+      final PeerTable peerTable) {
     this.framer = framer;
     this.subProtocols = subProtocols;
     this.localNode = localNode;
@@ -89,6 +93,7 @@ final class DeFramer extends ByteToMessageDecoder {
     this.connectFuture = connectFuture;
     this.connectionEventDispatcher = connectionEventDispatcher;
     this.inboundInitiated = inboundInitiated;
+    this.peerTable = peerTable;
     this.outboundMessagesCounter =
         metricsSystem.createLabelledCounter(
             BesuMetricCategory.NETWORK,
@@ -105,8 +110,11 @@ final class DeFramer extends ByteToMessageDecoder {
     while ((message = framer.deframe(in)) != null) {
 
       if (hellosExchanged) {
+
         out.add(message);
+
       } else if (message.getCode() == WireMessageCodes.HELLO) {
+
         hellosExchanged = true;
         // Decode first hello and use the payload to modify pipeline
         final PeerInfo peerInfo;
@@ -129,13 +137,27 @@
                     subProtocols,
                     localNode.getPeerInfo().getCapabilities(),
                     peerInfo.getCapabilities());
-            final Optional<Peer> peer = expectedPeer.or(() -> createPeer(peerInfo, ctx));
-            if (peer.isEmpty()) {
-              LOG.debug("Failed to create connection for peer {}", peerInfo);
-              connectFuture.completeExceptionally(new PeerChannelClosedException(peerInfo));
-              ctx.close();
-              return;
+            Optional<Peer> peer;
+            if (expectedPeer.isPresent()) {
+              peer = expectedPeer;
+            } else {
+              // This is an inbound "Hello" message. Create peer from information from the Hello message
+              peer = createPeer(peerInfo, ctx);
+              if (peer.isEmpty()) {
+                LOG.debug("Failed to create connection for peer {}", peerInfo);
+                connectFuture.completeExceptionally(new PeerChannelClosedException(peerInfo));
+                ctx.close();
+                return;
+              }
+              // If we can find the DiscoveryPeer for the peer in the PeerTable we use it, because
+              // it could contains additional information, like the fork id.
+              final Optional<DiscoveryPeer> discoveryPeer = peerTable.get(peer.get());
+              if (discoveryPeer.isPresent()) {
+                peer = Optional.of(discoveryPeer.get());
+              }
             }
+
             final PeerConnection connection =
                 new NettyPeerConnection(
                     ctx,
@@ -176,7 +198,9 @@
                     capabilityMultiplexer, connection, connectionEventDispatcher, waitingForPong),
                 new MessageFramer(capabilityMultiplexer, framer));
             connectFuture.complete(connection);
+
           } else if (message.getCode() == WireMessageCodes.DISCONNECT) {
+
             final DisconnectMessage disconnectMessage = DisconnectMessage.readFrom(message);
             LOG.debug(
                 "Peer {} disconnected before sending HELLO. Reason: {}",
@@ -185,8 +209,10 @@
             ctx.close();
             connectFuture.completeExceptionally(
                 new PeerDisconnectedException(disconnectMessage.getReason()));
+
           } else {
             // Unexpected message - disconnect
+
             LOG.debug(
                 "Message received before HELLO's exchanged (BREACH_OF_PROTOCOL), disconnecting. Peer: {}, Code: {}, Data: {}",
                 expectedPeer.map(Peer::getEnodeURLString).orElse("unknown"),
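
The DeFramer change above alters how the remote peer is resolved once the HELLO message arrives: an expected peer (set for outbound connections) is used directly, while for inbound connections the peer built from the HELLO payload is upgraded to the DiscoveryPeer held in the PeerTable when one exists, since that entry may carry extra information such as the fork id. Below is a minimal, self-contained sketch of that selection order; Peer, DiscoveryPeer and PeerTable here are simplified stand-ins, not Besu's real classes.

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Sketch only: simplified stand-ins for Besu's Peer, DiscoveryPeer and PeerTable types.
public class PeerResolutionSketch {

  static class Peer {
    final String id;

    Peer(final String id) {
      this.id = id;
    }

    @Override
    public String toString() {
      return getClass().getSimpleName() + "(" + id + ")";
    }
  }

  // A discovery peer can carry extra metadata learned via the ENR, e.g. a fork id.
  static class DiscoveryPeer extends Peer {
    final Optional<String> forkId;

    DiscoveryPeer(final String id, final Optional<String> forkId) {
      super(id);
      this.forkId = forkId;
    }
  }

  static class PeerTable {
    private final Map<String, DiscoveryPeer> byId = new HashMap<>();

    void add(final DiscoveryPeer peer) {
      byId.put(peer.id, peer);
    }

    Optional<DiscoveryPeer> get(final Peer peer) {
      return Optional.ofNullable(byId.get(peer.id));
    }
  }

  // Prefer the expected peer (outbound connect); otherwise use the peer built from the
  // HELLO message, upgraded to the PeerTable entry when one exists.
  static Peer resolvePeer(
      final Optional<Peer> expectedPeer, final Peer peerFromHello, final PeerTable peerTable) {
    if (expectedPeer.isPresent()) {
      return expectedPeer.get();
    }
    return peerTable.get(peerFromHello).map(p -> (Peer) p).orElse(peerFromHello);
  }

  public static void main(final String[] args) {
    final PeerTable table = new PeerTable();
    table.add(new DiscoveryPeer("0xabc", Optional.of("fork-id-from-enr")));

    // Inbound HELLO from a peer known to discovery: the richer DiscoveryPeer is used.
    System.out.println(resolvePeer(Optional.empty(), new Peer("0xabc"), table));

    // Inbound HELLO from an unknown peer: fall back to the HELLO-derived peer.
    System.out.println(resolvePeer(Optional.empty(), new Peer("0xdef"), table));
  }
}

Making this lookup possible at HELLO time is why the hunks that follow thread a PeerTable down from the connection initializers into the handshake handlers.
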
@@ -15,6 +15,7 @@
 package org.hyperledger.besu.ethereum.p2p.rlpx.connections.netty;
 
 import org.hyperledger.besu.cryptoservices.NodeKey;
+import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerTable;
 import org.hyperledger.besu.ethereum.p2p.peers.LocalNode;
 import org.hyperledger.besu.ethereum.p2p.rlpx.connections.PeerConnection;
 import org.hyperledger.besu.ethereum.p2p.rlpx.connections.PeerConnectionEventDispatcher;
@@ -40,7 +41,8 @@ final class HandshakeHandlerInbound extends AbstractHandshakeHandler {
       final PeerConnectionEventDispatcher connectionEventDispatcher,
       final MetricsSystem metricsSystem,
       final HandshakerProvider handshakerProvider,
-      final FramerProvider framerProvider) {
+      final FramerProvider framerProvider,
+      final PeerTable peerTable) {
     super(
         subProtocols,
         localNode,
@@ -50,7 +52,8 @@ final class HandshakeHandlerInbound extends AbstractHandshakeHandler {
         metricsSystem,
         handshakerProvider,
         framerProvider,
-        true);
+        true,
+        peerTable);
     handshaker.prepareResponder(nodeKey);
   }
 
@@ -16,6 +16,7 @@ package org.hyperledger.besu.ethereum.p2p.rlpx.connections.netty;
 
 import org.hyperledger.besu.crypto.SignatureAlgorithmFactory;
 import org.hyperledger.besu.cryptoservices.NodeKey;
+import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerTable;
 import org.hyperledger.besu.ethereum.p2p.peers.LocalNode;
 import org.hyperledger.besu.ethereum.p2p.peers.Peer;
 import org.hyperledger.besu.ethereum.p2p.rlpx.connections.PeerConnection;
@@ -50,7 +51,8 @@ final class HandshakeHandlerOutbound extends AbstractHandshakeHandler {
       final PeerConnectionEventDispatcher connectionEventDispatcher,
       final MetricsSystem metricsSystem,
       final HandshakerProvider handshakerProvider,
-      final FramerProvider framerProvider) {
+      final FramerProvider framerProvider,
+      final PeerTable peerTable) {
     super(
         subProtocols,
         localNode,
@@ -60,7 +62,8 @@ final class HandshakeHandlerOutbound extends AbstractHandshakeHandler {
         metricsSystem,
         handshakerProvider,
         framerProvider,
-        false);
+        false,
+        peerTable);
     handshaker.prepareInitiator(
         nodeKey, SignatureAlgorithmFactory.getInstance().createPublicKey(peer.getId()));
     this.first = handshaker.firstMessage();
@@ -17,6 +17,7 @@ package org.hyperledger.besu.ethereum.p2p.rlpx.connections.netty;
 import org.hyperledger.besu.cryptoservices.NodeKey;
 import org.hyperledger.besu.ethereum.p2p.config.RlpxConfiguration;
 import org.hyperledger.besu.ethereum.p2p.discovery.DiscoveryPeer;
+import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerTable;
 import org.hyperledger.besu.ethereum.p2p.peers.LocalNode;
 import org.hyperledger.besu.ethereum.p2p.peers.Peer;
 import org.hyperledger.besu.ethereum.p2p.rlpx.ConnectCallback;
@@ -68,6 +69,7 @@ public class NettyConnectionInitializer
   private final PeerConnectionEventDispatcher eventDispatcher;
   private final MetricsSystem metricsSystem;
   private final Subscribers<ConnectCallback> connectSubscribers = Subscribers.create();
+  private final PeerTable peerTable;
 
   private ChannelFuture server;
   private final EventLoopGroup boss = new NioEventLoopGroup(1);
@@ -80,12 +82,14 @@ public class NettyConnectionInitializer
       final RlpxConfiguration config,
       final LocalNode localNode,
       final PeerConnectionEventDispatcher eventDispatcher,
-      final MetricsSystem metricsSystem) {
+      final MetricsSystem metricsSystem,
+      final PeerTable peerTable) {
     this.nodeKey = nodeKey;
     this.config = config;
     this.localNode = localNode;
     this.eventDispatcher = eventDispatcher;
     this.metricsSystem = metricsSystem;
+    this.peerTable = peerTable;
 
     metricsSystem.createIntegerGauge(
         BesuMetricCategory.NETWORK,
@@ -244,7 +248,8 @@ public class NettyConnectionInitializer
         eventDispatcher,
         metricsSystem,
         this,
-        this);
+        this,
+        peerTable);
   }
 
   @Nonnull
@@ -259,7 +264,8 @@ public class NettyConnectionInitializer
         eventDispatcher,
         metricsSystem,
         this,
-        this);
+        this,
+        peerTable);
   }
 
   @Nonnull
@@ -19,6 +19,7 @@ import static org.hyperledger.besu.ethereum.p2p.rlpx.RlpxFrameConstants.LENGTH_M
 
 import org.hyperledger.besu.cryptoservices.NodeKey;
 import org.hyperledger.besu.ethereum.p2p.config.RlpxConfiguration;
+import org.hyperledger.besu.ethereum.p2p.discovery.internal.PeerTable;
 import org.hyperledger.besu.ethereum.p2p.peers.LocalNode;
 import org.hyperledger.besu.ethereum.p2p.peers.Peer;
 import org.hyperledger.besu.ethereum.p2p.plain.PlainFramer;
@@ -55,7 +56,8 @@ public class NettyTLSConnectionInitializer extends NettyConnectionInitializer {
       final LocalNode localNode,
       final PeerConnectionEventDispatcher eventDispatcher,
       final MetricsSystem metricsSystem,
-      final TLSConfiguration p2pTLSConfiguration) {
+      final TLSConfiguration p2pTLSConfiguration,
+      final PeerTable peerTable) {
     this(
         nodeKey,
         config,
@@ -63,7 +65,8 @@ public class NettyTLSConnectionInitializer extends NettyConnectionInitializer {
         eventDispatcher,
         metricsSystem,
         defaultTlsContextFactorySupplier(p2pTLSConfiguration),
-        p2pTLSConfiguration.getClientHelloSniHeaderEnabled());
+        p2pTLSConfiguration.getClientHelloSniHeaderEnabled(),
+        peerTable);
   }
 
   @VisibleForTesting
@@ -74,8 +77,9 @@ public class NettyTLSConnectionInitializer extends NettyConnectionInitializer {
       final PeerConnectionEventDispatcher eventDispatcher,
       final MetricsSystem metricsSystem,
       final Supplier<TLSContextFactory> tlsContextFactorySupplier,
-      final Boolean clientHelloSniHeaderEnabled) {
-    super(nodeKey, config, localNode, eventDispatcher, metricsSystem);
+      final Boolean clientHelloSniHeaderEnabled,
+      final PeerTable peerTable) {
+    super(nodeKey, config, localNode, eventDispatcher, metricsSystem, peerTable);
     if (tlsContextFactorySupplier != null) {
       this.tlsContextFactorySupplier =
           Optional.of(Suppliers.memoize(tlsContextFactorySupplier::get));
@@ -244,6 +244,30 @@ public class PeerDiscoveryAgentTest {
     }
   }
 
+  @Test
+  public void endpointHonoursCustomAdvertisedAddressInPingPacket() {
+
+    // Start a peer with the default advertised host
+    final MockPeerDiscoveryAgent agent1 = helper.startDiscoveryAgent();
+
+    // Start another peer with its advertised host set to a custom value
+    final MockPeerDiscoveryAgent agent2 = helper.startDiscoveryAgent("192.168.0.1");
+
+    // Send a PING so we can exchange messages
+    Packet packet = helper.createPingPacket(agent2, agent1);
+    helper.sendMessageBetweenAgents(agent2, agent1, packet);
+
+    // Agent 1's peers should have endpoints that match the custom advertised value...
+    agent1
+        .streamDiscoveredPeers()
+        .forEach(peer -> assertThat(peer.getEndpoint().getHost()).isEqualTo("192.168.0.1"));
+
+    // ...but agent 2's peers should have endpoints that match the default
+    agent2
+        .streamDiscoveredPeers()
+        .forEach(peer -> assertThat(peer.getEndpoint().getHost()).isEqualTo("127.0.0.1"));
+  }
+
   @Test
   public void shouldEvictPeerWhenPermissionsRevoked() {
     final PeerPermissionsDenylist denylist = PeerPermissionsDenylist.create();
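
The new endpointHonoursCustomAdvertisedAddressInPingPacket test pins down the behaviour behind the `--p2p-host` fix: when a PING arrives, the sender's recorded endpoint should use the host the sender advertises in the packet rather than the address the datagram happened to come from. A rough, self-contained sketch of that rule follows; Endpoint and PingPacket are hypothetical simplified types, not Besu's discovery classes.

import java.util.Optional;

// Sketch only: hypothetical, simplified types; Besu's real discovery code differs in detail.
public class AdvertisedHostSketch {

  record Endpoint(String host, int udpPort) {}

  // A PING carries the endpoint its sender advertises (the "from" field).
  record PingPacket(Optional<Endpoint> advertisedFrom) {}

  // Prefer the advertised endpoint from the packet; fall back to the datagram's source address.
  static Endpoint endpointForSender(final PingPacket ping, final Endpoint datagramSource) {
    return ping.advertisedFrom().orElse(datagramSource);
  }

  public static void main(final String[] args) {
    final Endpoint source = new Endpoint("127.0.0.1", 30303);

    // Sender configured with a custom advertised host (e.g. --p2p-host=192.168.0.1).
    final PingPacket withAdvertised =
        new PingPacket(Optional.of(new Endpoint("192.168.0.1", 30303)));
    System.out.println(endpointForSender(withAdvertised, source)); // 192.168.0.1, as agent1 sees

    // Sender advertising the default: keep the observed loopback address, as agent2 sees.
    final PingPacket withDefault =
        new PingPacket(Optional.of(new Endpoint("127.0.0.1", 30303)));
    System.out.println(endpointForSender(withDefault, source)); // 127.0.0.1
  }
}
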
@@ -165,6 +165,14 @@ public class PeerDiscoveryTestHelper {
     return startDiscoveryAgent(agentBuilder);
   }
 
+  public MockPeerDiscoveryAgent startDiscoveryAgent(
+      final String advertisedHost, final DiscoveryPeer... bootstrapPeers) {
+    final AgentBuilder agentBuilder =
+        agentBuilder().bootstrapPeers(bootstrapPeers).advertisedHost(advertisedHost);
+
+    return startDiscoveryAgent(agentBuilder);
+  }
+
   /**
    * Start a single discovery agent with the provided bootstrap peers.
    *
@@ -287,6 +295,7 @@ public class PeerDiscoveryTestHelper {
     config.setAdvertisedHost(advertisedHost);
     config.setBindPort(port);
     config.setActive(active);
+    config.setFilterOnEnrForkId(false);
 
     final ForkIdManager mockForkIdManager = mock(ForkIdManager.class);
     final ForkId forkId = new ForkId(Bytes.EMPTY, Bytes.EMPTY);
@@ -20,6 +20,8 @@ import org.hyperledger.besu.ethereum.rlp.BytesValueRLPOutput;
 import org.hyperledger.besu.ethereum.rlp.RLP;
 
 import org.apache.tuweni.bytes.Bytes;
+import org.apache.tuweni.bytes.Bytes32;
+import org.apache.tuweni.crypto.SECP256K1;
 import org.apache.tuweni.units.bigints.UInt64;
 import org.ethereum.beacon.discovery.schema.EnrField;
 import org.ethereum.beacon.discovery.schema.IdentitySchema;
@@ -34,8 +36,10 @@ public class ENRResponsePacketDataTest {
     final Bytes requestHash = Bytes.fromHexStringLenient("0x1234");
     final Bytes nodeId =
         Bytes.fromHexString("a448f24c6d18e575453db13171562b71999873db5b286df957af199ec94617f7");
-    final Bytes privateKey =
-        Bytes.fromHexString("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291");
+    final SECP256K1.SecretKey privateKey =
+        SECP256K1.SecretKey.fromBytes(
+            Bytes32.fromHexString(
+                "b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291"));
 
     NodeRecord nodeRecord =
         NodeRecordFactory.DEFAULT.createFromValues(
@@ -48,7 +52,8 @@ public class ENRResponsePacketDataTest {
             new EnrField(EnrField.TCP, 8080),
             new EnrField(EnrField.TCP_V6, 8080),
             new EnrField(
-                EnrField.PKEY_SECP256K1, Functions.derivePublicKeyFromPrivate(privateKey)));
+                EnrField.PKEY_SECP256K1,
+                Functions.deriveCompressedPublicKeyFromPrivate(privateKey)));
     nodeRecord.sign(privateKey);
 
     assertThat(nodeRecord.getNodeId()).isEqualTo(nodeId);
@@ -72,8 +77,10 @@ public class ENRResponsePacketDataTest {
     final Bytes requestHash = Bytes.fromHexStringLenient("0x1234");
     final Bytes nodeId =
         Bytes.fromHexString("a448f24c6d18e575453db13171562b71999873db5b286df957af199ec94617f7");
-    final Bytes privateKey =
-        Bytes.fromHexString("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291");
+    final SECP256K1.SecretKey privateKey =
+        SECP256K1.SecretKey.fromBytes(
+            Bytes32.fromHexString(
+                "b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291"));
 
     NodeRecord nodeRecord =
         NodeRecordFactory.DEFAULT.createFromValues(
@@ -82,7 +89,8 @@ public class ENRResponsePacketDataTest {
             new EnrField(EnrField.IP_V4, Bytes.fromHexString("0x7F000001")),
             new EnrField(EnrField.UDP, 30303),
             new EnrField(
-                EnrField.PKEY_SECP256K1, Functions.derivePublicKeyFromPrivate(privateKey)));
+                EnrField.PKEY_SECP256K1,
+                Functions.deriveCompressedPublicKeyFromPrivate(privateKey)));
     nodeRecord.sign(privateKey);
 
     assertThat(nodeRecord.getNodeId()).isEqualTo(nodeId);
@@ -109,8 +117,10 @@ public class ENRResponsePacketDataTest {
     final Bytes requestHash = Bytes.fromHexStringLenient("0x1234");
     final Bytes nodeId =
         Bytes.fromHexString("a448f24c6d18e575453db13171562b71999873db5b286df957af199ec94617f7");
-    final Bytes privateKey =
-        Bytes.fromHexString("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291");
+    final SECP256K1.SecretKey privateKey =
+        SECP256K1.SecretKey.fromBytes(
+            Bytes32.fromHexString(
+                "b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291"));
 
     NodeRecord nodeRecord =
         NodeRecordFactory.DEFAULT.createFromValues(
@@ -119,7 +129,8 @@ public class ENRResponsePacketDataTest {
             new EnrField(EnrField.IP_V4, Bytes.fromHexString("0x7F000001")),
             new EnrField(EnrField.UDP, 30303),
             new EnrField(
-                EnrField.PKEY_SECP256K1, Functions.derivePublicKeyFromPrivate(privateKey)));
+                EnrField.PKEY_SECP256K1,
+                Functions.deriveCompressedPublicKeyFromPrivate(privateKey)));
     nodeRecord.sign(privateKey);
 
     assertThat(nodeRecord.getNodeId()).isEqualTo(nodeId);
@@ -144,8 +155,10 @@ public class ENRResponsePacketDataTest {
     final Bytes requestHash = Bytes.fromHexStringLenient("0x1234");
     final Bytes nodeId =
         Bytes.fromHexString("a448f24c6d18e575453db13171562b71999873db5b286df957af199ec94617f7");
-    final Bytes privateKey =
-        Bytes.fromHexString("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291");
+    final SECP256K1.SecretKey privateKey =
+        SECP256K1.SecretKey.fromBytes(
+            Bytes32.fromHexString(
+                "b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291"));
 
     NodeRecord nodeRecord =
         NodeRecordFactory.DEFAULT.createFromValues(
@@ -153,7 +166,9 @@ public class ENRResponsePacketDataTest {
             new EnrField(EnrField.ID, IdentitySchema.V4),
             new EnrField(EnrField.IP_V4, Bytes.fromHexString("0x7F000001")),
             new EnrField(EnrField.UDP, 30303),
-            new EnrField(EnrField.PKEY_SECP256K1, Functions.derivePublicKeyFromPrivate(privateKey)),
+            new EnrField(
+                EnrField.PKEY_SECP256K1,
+                Functions.deriveCompressedPublicKeyFromPrivate(privateKey)),
             new EnrField("foo", Bytes.fromHexString("0x1234")));
     nodeRecord.sign(privateKey);
 
@@ -181,8 +196,10 @@ public class ENRResponsePacketDataTest {
   @Test
   public void readFrom_invalidSignature() {
     final Bytes requestHash = Bytes.fromHexStringLenient("0x1234");
-    final Bytes privateKey =
-        Bytes.fromHexString("b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f292");
+    final SECP256K1.SecretKey privateKey =
+        SECP256K1.SecretKey.fromBytes(
+            Bytes32.fromHexString(
+                "b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f292"));
 
     NodeRecord nodeRecord =
         NodeRecordFactory.DEFAULT.createFromValues(
@@ -191,7 +208,8 @@ public class ENRResponsePacketDataTest {
             new EnrField(EnrField.IP_V4, Bytes.fromHexString("0x7F000001")),
             new EnrField(EnrField.UDP, 30303),
             new EnrField(
-                EnrField.PKEY_SECP256K1, Functions.derivePublicKeyFromPrivate(privateKey)));
+                EnrField.PKEY_SECP256K1,
+                Functions.deriveCompressedPublicKeyFromPrivate(privateKey)));
     nodeRecord.sign(privateKey);
     nodeRecord.set(EnrField.UDP, 1234);
 
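
The ENR test updates above replace raw Bytes private keys with typed Tuweni SECP256K1.SecretKey values, which is what Functions.deriveCompressedPublicKeyFromPrivate takes. A small sketch of that conversion, assuming Apache Tuweni's bytes and crypto modules are on the classpath; deriving the compressed public key itself is left to the discovery library's Functions helper, as in the tests.

import org.apache.tuweni.bytes.Bytes32;
import org.apache.tuweni.crypto.SECP256K1;

public class SecretKeySketch {

  public static void main(final String[] args) {
    // Build a typed secret key from a raw 32-byte hex value (same key as in the tests).
    final SECP256K1.SecretKey privateKey =
        SECP256K1.SecretKey.fromBytes(
            Bytes32.fromHexString(
                "b71c71a67e1177ad4e901695e1b4b9ee17ae16c6668d313eac2f96dbcda3f291"));

    // The corresponding key pair; this prints the uncompressed public key bytes. The tests
    // instead pass the secret key to Functions.deriveCompressedPublicKeyFromPrivate to obtain
    // the compressed form stored in the ENR's PKEY_SECP256K1 field.
    final SECP256K1.KeyPair keyPair = SECP256K1.KeyPair.fromSecretKey(privateKey);
    System.out.println(keyPair.publicKey().bytes());
  }
}
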
@@ -63,7 +63,8 @@ public class MockPeerDiscoveryAgent extends PeerDiscoveryAgent {
         new NoOpMetricsSystem(),
         new InMemoryKeyValueStorageProvider(),
         forkIdManager,
-        rlpxAgent);
+        rlpxAgent,
+        new PeerTable(nodeKey.getPublicKey().getEncodedBytes()));
     this.agentNetwork = agentNetwork;
   }
 
@@ -35,8 +35,6 @@ import org.hyperledger.besu.crypto.Hash;
 import org.hyperledger.besu.crypto.SignatureAlgorithm;
 import org.hyperledger.besu.crypto.SignatureAlgorithmFactory;
 import org.hyperledger.besu.cryptoservices.NodeKey;
-import org.hyperledger.besu.ethereum.forkid.ForkId;
-import org.hyperledger.besu.ethereum.forkid.ForkIdManager;
 import org.hyperledger.besu.ethereum.p2p.discovery.DiscoveryPeer;
 import org.hyperledger.besu.ethereum.p2p.discovery.Endpoint;
 import org.hyperledger.besu.ethereum.p2p.discovery.PeerDiscoveryStatus;
@@ -1480,14 +1478,12 @@ public class PeerDiscoveryControllerTest {
   }
 
   @Test
-  public void shouldFiltersOnForkIdSuccess() {
+  public void forkIdShouldBeAvailableIfEnrPacketContainsForkId() {
     final List<NodeKey> nodeKeys = PeerDiscoveryTestHelper.generateNodeKeys(1);
     final List<DiscoveryPeer> peers = helper.createDiscoveryPeers(nodeKeys);
-    final ForkIdManager forkIdManager = mock(ForkIdManager.class);
     final DiscoveryPeer sender = peers.get(0);
-    final Packet enrPacket = prepareForForkIdCheck(forkIdManager, nodeKeys, sender, true);
+    final Packet enrPacket = prepareForForkIdCheck(nodeKeys, sender, true);
 
-    when(forkIdManager.peerCheck(any(ForkId.class))).thenReturn(true);
     controller.onMessage(enrPacket, sender);
 
     final Optional<DiscoveryPeer> maybePeer =
@@ -1501,35 +1497,12 @@ public class PeerDiscoveryControllerTest {
     verify(controller, times(1)).connectOnRlpxLayer(eq(maybePeer.get()));
   }
 
-  @Test
-  public void shouldFiltersOnForkIdFailure() {
-    final List<NodeKey> nodeKeys = PeerDiscoveryTestHelper.generateNodeKeys(1);
-    final List<DiscoveryPeer> peers = helper.createDiscoveryPeers(nodeKeys);
-    final ForkIdManager forkIdManager = mock(ForkIdManager.class);
-    final DiscoveryPeer sender = peers.get(0);
-    final Packet enrPacket = prepareForForkIdCheck(forkIdManager, nodeKeys, sender, true);
-
-    when(forkIdManager.peerCheck(any(ForkId.class))).thenReturn(false);
-    controller.onMessage(enrPacket, sender);
-
-    final Optional<DiscoveryPeer> maybePeer =
-        controller
-            .streamDiscoveredPeers()
-            .filter(p -> p.getId().equals(sender.getId()))
-            .findFirst();
-
-    assertThat(maybePeer.isPresent()).isTrue();
-    assertThat(maybePeer.get().getForkId().isPresent()).isTrue();
-    verify(controller, never()).connectOnRlpxLayer(eq(maybePeer.get()));
-  }
-
   @Test
   public void shouldStillCallConnectIfNoForkIdSent() {
     final List<NodeKey> nodeKeys = PeerDiscoveryTestHelper.generateNodeKeys(1);
     final List<DiscoveryPeer> peers = helper.createDiscoveryPeers(nodeKeys);
     final DiscoveryPeer sender = peers.get(0);
-    final Packet enrPacket =
-        prepareForForkIdCheck(mock(ForkIdManager.class), nodeKeys, sender, false);
+    final Packet enrPacket = prepareForForkIdCheck(nodeKeys, sender, false);
 
     controller.onMessage(enrPacket, sender);
 
@@ -1546,10 +1519,7 @@ public class PeerDiscoveryControllerTest {
 
   @NotNull
   private Packet prepareForForkIdCheck(
-      final ForkIdManager forkIdManager,
-      final List<NodeKey> nodeKeys,
-      final DiscoveryPeer sender,
-      final boolean sendForkId) {
+      final List<NodeKey> nodeKeys, final DiscoveryPeer sender, final boolean sendForkId) {
     final HashMap<PacketType, Bytes> packetTypeBytesHashMap = new HashMap<>();
     final OutboundMessageHandler outboundMessageHandler =
         (dp, pa) -> packetTypeBytesHashMap.put(pa.getType(), pa.getHash());
@@ -1573,7 +1543,6 @@ public class PeerDiscoveryControllerTest {
             .outboundMessageHandler(outboundMessageHandler)
             .enrCache(enrs)
             .filterOnForkId(true)
-            .forkIdManager(forkIdManager)
             .build();
 
     // Mock the creation of the PING packet, so that we can control the hash, which gets validated
@@ -1720,7 +1689,6 @@ public class PeerDiscoveryControllerTest {
     private Cache<Bytes, Packet> enrs =
         CacheBuilder.newBuilder().maximumSize(50).expireAfterWrite(10, TimeUnit.SECONDS).build();
     private boolean filterOnForkId = false;
-    private ForkIdManager forkIdManager;
 
     public static ControllerBuilder create() {
       return new ControllerBuilder();
@@ -1776,11 +1744,6 @@ public class PeerDiscoveryControllerTest {
       return this;
     }
 
-    public ControllerBuilder forkIdManager(final ForkIdManager forkIdManager) {
-      this.forkIdManager = forkIdManager;
-      return this;
-    }
-
     PeerDiscoveryController build() {
       checkNotNull(nodeKey);
       if (localPeer == null) {
@@ -1803,7 +1766,6 @@ public class PeerDiscoveryControllerTest {
               .peerPermissions(peerPermissions)
               .metricsSystem(new NoOpMetricsSystem())
              .cacheForEnrRequests(enrs)
-              .forkIdManager(forkIdManager == null ? mock(ForkIdManager.class) : forkIdManager)
              .filterOnEnrForkId(filterOnForkId)
              .rlpxAgent(mock(RlpxAgent.class))
              .build());
@@ -24,7 +24,6 @@ import static org.mockito.Mockito.spy;
 import static org.mockito.Mockito.verify;
 
 import org.hyperledger.besu.cryptoservices.NodeKey;
-import org.hyperledger.besu.ethereum.forkid.ForkIdManager;
 import org.hyperledger.besu.ethereum.p2p.discovery.DiscoveryPeer;
 import org.hyperledger.besu.ethereum.p2p.discovery.PeerDiscoveryStatus;
 import org.hyperledger.besu.ethereum.p2p.discovery.PeerDiscoveryTestHelper;
@@ -72,7 +71,6 @@ public class PeerDiscoveryTableRefreshTest {
             .tableRefreshIntervalMs(0)
             .metricsSystem(new NoOpMetricsSystem())
             .rlpxAgent(mock(RlpxAgent.class))
-            .forkIdManager(mock(ForkIdManager.class))
             .build());
     controller.start();
 
Some files were not shown because too many files have changed in this diff.