docs: apply spelling and grammar fixes (#18836)

Co-authored-by: Jennifer Paffrath <jenpaff0@gmail.com>
Co-authored-by: Max <max@digi.net>
Matthias Seitz
2025-10-02 13:22:43 +02:00
committed by GitHub
parent 467420ec25
commit 1d1fea72b6
33 changed files with 67 additions and 73 deletions

View File

@@ -36,7 +36,7 @@ engine-api: []
# no fix due to https://github.com/paradigmxyz/reth/issues/8732
engine-cancun:
- Invalid PayloadAttributes, Missing BeaconRoot, Syncing=True (Cancun) (reth)
# the test fails with older verions of the code for which it passed before, probably related to changes
# the test fails with older versions of the code for which it passed before, probably related to changes
# in hive or its dependencies
- Blob Transaction Ordering, Multiple Clients (Cancun) (reth)

View File

@@ -20,7 +20,7 @@
## What is Reth?
Reth (short for Rust Ethereum, [pronunciation](https://twitter.com/kelvinfichter/status/1597653609411268608)) is a new Ethereum full node implementation that is focused on being user-friendly, highly modular, as well as being fast and efficient. Reth is an Execution Layer (EL) and is compatible with all Ethereum Consensus Layer (CL) implementations that support the [Engine API](https://github.com/ethereum/execution-apis/tree/a0d03086564ab1838b462befbc083f873dcf0c0f/src/engine). It is originally built and driven forward by [Paradigm](https://paradigm.xyz/), and is licensed under the Apache and MIT licenses.
Reth (short for Rust Ethereum, [pronunciation](https://x.com/kelvinfichter/status/1597653609411268608)) is a new Ethereum full node implementation that is focused on being user-friendly, highly modular, as well as being fast and efficient. Reth is an Execution Layer (EL) and is compatible with all Ethereum Consensus Layer (CL) implementations that support the [Engine API](https://github.com/ethereum/execution-apis/tree/a0d03086564ab1838b462befbc083f873dcf0c0f/src/engine). It is originally built and driven forward by [Paradigm](https://paradigm.xyz/), and is licensed under the Apache and MIT licenses.
## Goals
@@ -43,7 +43,7 @@ More historical context below:
- We released 1.0 "production-ready" stable Reth in June 2024.
- Reth completed an audit with [Sigma Prime](https://sigmaprime.io/), the developers of [Lighthouse](https://github.com/sigp/lighthouse), the Rust Consensus Layer implementation. Find it [here](./audit/sigma_prime_audit_v2.pdf).
- Revm (the EVM used in Reth) underwent an audit with [Guido Vranken](https://twitter.com/guidovranken) (#1 [Ethereum Bug Bounty](https://ethereum.org/en/bug-bounty)). We will publish the results soon.
- Revm (the EVM used in Reth) underwent an audit with [Guido Vranken](https://x.com/guidovranken) (#1 [Ethereum Bug Bounty](https://ethereum.org/en/bug-bounty)). We will publish the results soon.
- We released multiple iterative beta versions, up to [beta.9](https://github.com/paradigmxyz/reth/releases/tag/v0.2.0-beta.9) on Monday June 3, 2024, the last beta release.
- We released [beta](https://github.com/paradigmxyz/reth/releases/tag/v0.2.0-beta.1) on Monday March 4, 2024, our first breaking change to the database model, providing faster query speed, smaller database footprint, and allowing "history" to be mounted on separate drives.
- We shipped iterative improvements until the last alpha release on February 28, 2024, [0.1.0-alpha.21](https://github.com/paradigmxyz/reth/releases/tag/v0.1.0-alpha.21).
@@ -61,7 +61,7 @@ If you had a database produced by alpha versions of Reth, you need to drop it wi
## For Users
See the [Reth documentation](https://paradigmxyz.github.io/reth) for instructions on how to install and run Reth.
See the [Reth documentation](https://reth.rs/) for instructions on how to install and run Reth.
## For Developers
@@ -69,7 +69,7 @@ See the [Reth documentation](https://paradigmxyz.github.io/reth) for instruction
You can use individual crates of reth in your project.
The crate docs can be found [here](https://paradigmxyz.github.io/reth/docs).
The crate docs can be found [here](https://reth.rs/docs/).
For a general overview of the crates, see [Project Layout](./docs/repo/layout.md).
@@ -90,7 +90,7 @@ When updating this, also update:
The Minimum Supported Rust Version (MSRV) of this project is [1.88.0](https://blog.rust-lang.org/2025/06/26/Rust-1.88.0/).
See the docs for detailed instructions on how to [build from source](https://paradigmxyz.github.io/reth/installation/source).
See the docs for detailed instructions on how to [build from source](https://reth.rs/installation/source/).
To fully test Reth, you will need to have [Geth installed](https://geth.ethereum.org/docs/getting-started/installing-geth), but it is possible to run a subset of tests without Geth.
@@ -145,5 +145,5 @@ None of this would have been possible without them, so big shoutout to the teams
The `NippyJar` and `Compact` encoding formats and their implementations are designed for storing and retrieving data internally. They are not hardened to safely read potentially malicious data.
[book]: https://paradigmxyz.github.io/reth/
[book]: https://reth.rs/
[tg-url]: https://t.me/paradigm_reth

View File

@@ -1244,7 +1244,7 @@ Post-merge hard forks (timestamp based):
Head { number: 101, timestamp: 11313123, ..Default::default() };
assert_eq!(
fork_cond_ttd_blocknum_head, fork_cond_ttd_blocknum_expected,
"expected satisfy() to return {fork_cond_ttd_blocknum_expected:#?}, but got {fork_cond_ttd_blocknum_expected:#?} ",
"expected satisfy() to return {fork_cond_ttd_blocknum_expected:#?}, but got {fork_cond_ttd_blocknum_head:#?} ",
);
// spec w/ only ForkCondition::Block - test the match arm for ForkCondition::Block to ensure
@@ -1273,7 +1273,7 @@ Post-merge hard forks (timestamp based):
Head { total_difficulty: U256::from(10_790_000), ..Default::default() };
assert_eq!(
fork_cond_ttd_no_new_spec, fork_cond_ttd_no_new_spec_expected,
"expected satisfy() to return {fork_cond_ttd_blocknum_expected:#?}, but got {fork_cond_ttd_blocknum_expected:#?} ",
"expected satisfy() to return {fork_cond_ttd_no_new_spec_expected:#?}, but got {fork_cond_ttd_no_new_spec:#?} ",
);
}
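Both assertion fixes above correct the same copy-paste bug: the format string captured `fork_cond_ttd_blocknum_expected` (or `fork_cond_ttd_no_new_spec_expected`) in both slots, so the failure message printed the expected value twice. For readers unfamiliar with Rust's inline format arguments, here is a standalone illustration of the pattern (hypothetical values, not reth code):

```rust
fn main() {
    let expected = 1;
    let got = 2;
    // Bug pattern fixed above: the same identifier captured twice, so the
    // message claims "expected 1, but got 1" even though `got` is 2.
    let wrong = format!("expected {expected}, but got {expected}");
    let right = format!("expected {expected}, but got {got}");
    assert_eq!(wrong, "expected 1, but got 1");
    assert_eq!(right, "expected 1, but got 2");
}
```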

View File

@@ -57,7 +57,7 @@ where
Self { provider, incoming, pruner, metrics: PersistenceMetrics::default(), sync_metrics_tx }
}
/// Prunes block data before the given block hash according to the configured prune
/// Prunes block data before the given block number according to the configured prune
/// configuration.
fn prune_before(&mut self, block_num: u64) -> Result<PrunerOutput, PrunerError> {
debug!(target: "engine::persistence", ?block_num, "Running pruner");
@@ -271,7 +271,7 @@ impl<T: NodePrimitives> PersistenceHandle<T> {
self.send_action(PersistenceAction::SaveFinalizedBlock(finalized_block))
}
/// Persists the finalized block number on disk.
/// Persists the safe block number on disk.
pub fn save_safe_block_number(
&self,
safe_block: u64,

View File

@@ -20,7 +20,7 @@ use tracing::debug;
const DEFAULT_PERSISTED_TRIE_UPDATES_RETENTION: u64 = EPOCH_SLOTS * 2;
/// Number of blocks to retain persisted trie updates for OP Stack chains
/// OP Stack chains only need `EPOCH_BLOCKS` as reorgs are relevant only when
/// OP Stack chains only need `EPOCH_SLOTS` as reorgs are relevant only when
/// op-node reorgs to the same chain twice
const OPSTACK_PERSISTED_TRIE_UPDATES_RETENTION: u64 = EPOCH_SLOTS;
@@ -348,11 +348,11 @@ impl<N: NodePrimitives> TreeState<N> {
}
}
/// Determines if the second block is a direct descendant of the first block.
/// Determines if the second block is a descendant of the first block.
///
/// If the two blocks are the same, this returns `false`.
pub(crate) fn is_descendant(&self, first: BlockNumHash, second: BlockWithParent) -> bool {
// If the second block's parent is the first block's hash, then it is a direct descendant
// If the second block's parent is the first block's hash, then it is a direct child
// and we can return early.
if second.parent == first.hash {
return true
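The hunk above shows only the fast path. A minimal sketch of the general shape of such a check, using a plain `HashMap` of child-to-parent links and `u64` stand-ins for block hashes (hypothetical, not the reth implementation):

```rust
use std::collections::HashMap;

/// Returns `true` if `second` descends from `first` by walking parent links.
/// For two equal blocks this returns `false`, matching the doc comment above.
fn is_descendant(parent_of: &HashMap<u64, u64>, first: u64, second: u64) -> bool {
    // The first iteration covers the fast path shown above: `second`'s
    // parent being `first` means it is a direct child.
    let mut current = second;
    while let Some(&parent) = parent_of.get(&current) {
        if parent == first {
            return true;
        }
        current = parent;
    }
    false
}

fn main() {
    // Chain: 1 -> 2 -> 3 (hash stand-ins).
    let parent_of = HashMap::from([(2u64, 1u64), (3, 2)]);
    assert!(is_descendant(&parent_of, 1, 3));
    assert!(!is_descendant(&parent_of, 3, 1));
    assert!(!is_descendant(&parent_of, 2, 2)); // same block
}
```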

View File

@@ -9,7 +9,7 @@ use crate::{
};
use alloy_primitives::BlockNumber;
/// `BlockIndex` record: ['i', '2']
/// `BlockIndex` record: ['f', '2']
pub const BLOCK_INDEX: [u8; 2] = [0x66, 0x32];
/// File content in an Era1 file
@@ -26,7 +26,7 @@ pub struct Era1Group {
/// Accumulator is hash tree root of block headers and difficulties
pub accumulator: Accumulator,
/// Block index, optional, omitted for genesis era
/// Block index, required
pub block_index: BlockIndex,
}
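The corrected comment reflects that the two-byte record type is the ASCII pair `f2`, not `i2`. A standalone check of the constant shown above:

```rust
/// `BlockIndex` record type, as defined in the hunk above.
const BLOCK_INDEX: [u8; 2] = [0x66, 0x32];

fn main() {
    // 0x66 is ASCII 'f' and 0x32 is ASCII '2', so the record tag reads "f2".
    assert_eq!(&BLOCK_INDEX, b"f2");
}
```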

View File

@@ -171,7 +171,7 @@ pub enum SparseTrieErrorKind {
#[error(transparent)]
Rlp(#[from] alloy_rlp::Error),
/// Node not found in provider during revealing.
#[error("node {path:?} not found in provider during removal")]
#[error("node {path:?} not found in provider during revealing")]
NodeNotFoundInProvider {
/// Path to the missing node.
path: Nibbles,

View File

@@ -8,13 +8,13 @@ use std::{
use tokio::time::{Instant, Interval, Sleep};
use tokio_stream::Stream;
/// The pinger is a state machine that is created with a maximum number of pongs that can be
/// missed.
/// The pinger is a simple state machine that sends a ping, waits for a pong,
/// and transitions to timeout if the pong is not received within the timeout.
#[derive(Debug)]
pub(crate) struct Pinger {
/// The timer used for the next ping.
ping_interval: Interval,
/// The timer used for the next ping.
/// The timer used to detect a ping timeout.
timeout_timer: Pin<Box<Sleep>>,
/// The timeout duration for each ping.
timeout: Duration,
@@ -39,7 +39,7 @@ impl Pinger {
}
/// Mark a pong as received, and transition the pinger to the `Ready` state if it was in the
/// `WaitingForPong` state. Unsets the sleep timer.
/// `WaitingForPong` state. Resets readiness by resetting the ping interval.
pub(crate) fn on_pong(&mut self) -> Result<(), PingerError> {
match self.state {
PingState::Ready => Err(PingerError::UnexpectedPong),
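The rewritten doc comment describes the state machine more precisely: send a ping, wait for a pong, time out if none arrives. A minimal sketch of that cycle with the timers elided (hypothetical simplification, not the reth `Pinger`):

```rust
#[derive(Debug, PartialEq)]
enum PingState {
    /// Ready to send the next ping when the interval fires.
    Ready,
    /// A ping is in flight; waiting for the matching pong.
    WaitingForPong,
    /// No pong arrived within the timeout.
    TimedOut,
}

#[derive(Debug)]
enum PingerError {
    UnexpectedPong,
}

struct Pinger {
    state: PingState,
}

impl Pinger {
    /// The ping interval fired: send a ping and start waiting for the pong.
    fn on_ping_interval(&mut self) {
        if self.state == PingState::Ready {
            self.state = PingState::WaitingForPong;
        }
    }

    /// A pong arrived: only valid while waiting, otherwise it is unexpected.
    fn on_pong(&mut self) -> Result<(), PingerError> {
        match self.state {
            PingState::WaitingForPong => {
                self.state = PingState::Ready;
                Ok(())
            }
            _ => Err(PingerError::UnexpectedPong),
        }
    }

    /// The timeout fired before a pong was received.
    fn on_timeout(&mut self) {
        if self.state == PingState::WaitingForPong {
            self.state = PingState::TimedOut;
        }
    }
}

fn main() {
    let mut pinger = Pinger { state: PingState::Ready };
    pinger.on_ping_interval();
    assert!(pinger.on_pong().is_ok());
    assert_eq!(pinger.state, PingState::Ready);

    pinger.on_ping_interval();
    pinger.on_timeout();
    assert!(pinger.on_pong().is_err()); // pong after timeout is unexpected
}
```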

View File

@@ -62,7 +62,7 @@ pub async fn connect_passthrough(
p2p_stream
}
/// An Rplx subprotocol for testing
/// An Rlpx subprotocol for testing
pub mod proto {
use super::*;
use crate::{protocol::Protocol, Capability};

View File

@@ -19,8 +19,8 @@ where
}
/// This method delegates to `roundtrip_encoding`, but is used to enforce that each type input to
/// the macro has a proper Default, Clone, and Serialize impl. These trait implementations are
/// necessary for test-fuzz to autogenerate a corpus.
/// the macro has proper `Clone` and `Serialize` impls. These trait implementations are necessary
/// for test-fuzz to autogenerate a corpus.
///
/// If it makes sense to remove a Default impl from a type that we fuzz, this should prevent the
/// fuzz test from compiling, rather than failing at runtime.

View File

@@ -878,7 +878,7 @@ pub enum RpcPoolError {
/// The tx fee exceeds the configured cap
#[error("tx fee ({max_tx_fee_wei} wei) exceeds the configured cap ({tx_fee_cap_wei} wei)")]
ExceedsFeeCap {
/// max fee in wei of new tx submitted to the pull (e.g. 0.11534 ETH)
/// max fee in wei of new tx submitted to the pool (e.g. 0.11534 ETH)
max_tx_fee_wei: u128,
/// configured tx fee cap in wei (e.g. 1.0 ETH)
tx_fee_cap_wei: u128,

View File

@@ -30,7 +30,7 @@ pub type StaticFileProducerResult = ProviderResult<StaticFileTargets>;
pub type StaticFileProducerWithResult<Provider> =
(StaticFileProducer<Provider>, StaticFileProducerResult);
/// Static File producer. It's a wrapper around [`StaticFileProducer`] that allows to share it
/// Static File producer. It's a wrapper around [`StaticFileProducerInner`] that allows to share it
/// between threads.
#[derive(Debug)]
pub struct StaticFileProducer<Provider>(Arc<Mutex<StaticFileProducerInner<Provider>>>);
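A minimal sketch of the wrapper pattern the corrected comment describes: a cheaply clonable handle around an `Arc<Mutex<_>>`, so every clone shares the single inner producer across threads (names simplified, not the reth types):

```rust
use std::sync::{Arc, Mutex};

#[derive(Debug, Default)]
struct ProducerInner {
    runs: u64,
}

/// Clonable handle; every clone locks the same inner producer.
#[derive(Debug, Clone, Default)]
struct Producer(Arc<Mutex<ProducerInner>>);

impl Producer {
    fn run(&self) {
        let mut inner = self.0.lock().unwrap();
        inner.runs += 1;
    }
}

fn main() {
    let producer = Producer::default();
    let handle = producer.clone();
    std::thread::spawn(move || handle.run()).join().unwrap();
    producer.run();
    // Both calls went through the one shared inner producer.
    assert_eq!(producer.0.lock().unwrap().runs, 2);
}
```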

View File

@@ -256,7 +256,7 @@ macro_rules! impl_compression_fixed_compact {
}
fn compress_to_buf<B: bytes::BufMut + AsMut<[u8]>>(&self, buf: &mut B) {
let _ = Compact::to_compact(self, buf);
let _ = Compact::to_compact(self, buf);
}
}

View File

@@ -26,7 +26,7 @@ pub fn assert_genesis_block<DB: Database, N: NodeTypes>(
let h = B256::ZERO;
let tx = provider;
// check if all tables are empty
// check if tables contain only the genesis block data
assert_eq!(tx.table::<tables::Headers>().unwrap(), vec![(g.number, g.header().clone())]);
assert_eq!(tx.table::<tables::HeaderNumbers>().unwrap(), vec![(h, n)]);

View File

@@ -477,7 +477,7 @@ impl DiskFileBlobStoreInner {
/// Retrieves the raw blob data for the given transaction hashes.
///
/// Only returns the blobs that were found on file.
/// Only returns the blobs that were found in file.
#[inline]
fn read_many_raw(&self, txs: Vec<TxHash>) -> Vec<(TxHash, Vec<u8>)> {
let mut res = Vec::with_capacity(txs.len());

View File

@@ -225,7 +225,7 @@ pub enum InvalidPoolTransactionError {
/// The tx fee exceeds the configured cap
#[error("tx fee ({max_tx_fee_wei} wei) exceeds the configured cap ({tx_fee_cap_wei} wei)")]
ExceedsFeeCap {
/// max fee in wei of new tx submitted to the pull (e.g. 0.11534 ETH)
/// max fee in wei of new tx submitted to the pool (e.g. 0.11534 ETH)
max_tx_fee_wei: u128,
/// configured tx fee cap in wei (e.g. 1.0 ETH)
tx_fee_cap_wei: u128,

View File

@@ -21,7 +21,8 @@ use tracing::debug;
///
/// This is a wrapper around [`BestTransactions`] that also enforces a specific basefee.
///
/// This iterator guarantees that all transaction it returns satisfy both the base fee and blob fee!
/// This iterator guarantees that all transactions it returns satisfy both the base fee and blob
/// fee!
pub(crate) struct BestTransactionsWithFees<T: TransactionOrdering> {
pub(crate) best: BestTransactions<T>,
pub(crate) base_fee: u64,
@@ -98,14 +99,14 @@ pub struct BestTransactions<T: TransactionOrdering> {
pub(crate) new_transaction_receiver: Option<Receiver<PendingTransaction<T>>>,
/// The priority value of most recently yielded transaction.
///
/// This is required if we new pending transactions are fed in while it yields new values.
/// This is required if new pending transactions are fed in while it yields new values.
pub(crate) last_priority: Option<Priority<T::PriorityValue>>,
/// Flag to control whether to skip blob transactions (EIP4844).
pub(crate) skip_blobs: bool,
}
impl<T: TransactionOrdering> BestTransactions<T> {
/// Mark the transaction and it's descendants as invalid.
/// Mark the transaction and its descendants as invalid.
pub(crate) fn mark_invalid(
&mut self,
tx: &Arc<ValidPoolTransaction<T::Transaction>>,
@@ -117,7 +118,7 @@ impl<T: TransactionOrdering> BestTransactions<T> {
/// Returns the ancestor of the given transaction, the transaction with `nonce - 1`.
///
/// Note: for a transaction with nonce higher than the current on chain nonce this will always
/// return an ancestor since all transaction in this pool are gapless.
/// return an ancestor since all transactions in this pool are gapless.
pub(crate) fn ancestor(&self, id: &TransactionId) -> Option<&PendingTransaction<T>> {
self.all.get(&id.unchecked_ancestor()?)
}
@@ -818,7 +819,7 @@ mod tests {
assert_eq!(iter.next().unwrap().max_fee_per_gas(), (gas_price + 1) * 10);
}
// Due to the gas limit, the transaction from second prioritized sender was not
// Due to the gas limit, the transaction from second-prioritized sender was not
// prioritized.
let top_of_block_tx2 = iter.next().unwrap();
assert_eq!(top_of_block_tx2.max_fee_per_gas(), 3);
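The `ancestor` doc fix above hinges on the gapless invariant: because every pending transaction's predecessor (`nonce - 1`) is present, finding an ancestor is a single map lookup. A hedged sketch with simplified types (a `(sender, nonce)` pair standing in for reth's `TransactionId`):

```rust
use std::collections::BTreeMap;

/// Simplified transaction id: (sender, nonce).
type TxId = (u64, u64);

/// Returns the direct ancestor of `id`, i.e. the same sender's `nonce - 1`
/// transaction, if any. Nonce 0 has no ancestor.
fn ancestor<'a>(pool: &'a BTreeMap<TxId, String>, id: &TxId) -> Option<&'a String> {
    let prev = (id.0, id.1.checked_sub(1)?);
    pool.get(&prev)
}

fn main() {
    let mut pool = BTreeMap::new();
    pool.insert((1, 0), "tx0".to_string());
    pool.insert((1, 1), "tx1".to_string());
    assert_eq!(ancestor(&pool, &(1, 1)), Some(&"tx0".to_string()));
    assert_eq!(ancestor(&pool, &(1, 0)), None); // nonce 0 has no ancestor
}
```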

View File

@@ -15,7 +15,7 @@ use std::{
/// worst blob transactions once the sub-pool is full.
///
/// This expects that certain constraints are met:
/// - blob transactions are always gap less
/// - blob transactions are always gapless
#[derive(Debug, Clone)]
pub struct BlobTransactions<T: PoolTransaction> {
/// Keeps track of transactions inserted in the pool.
@@ -83,7 +83,7 @@ impl<T: PoolTransaction> BlobTransactions<T> {
/// Returns all transactions that satisfy the given basefee and blobfee.
///
/// Note: This does not remove any the transactions from the pool.
/// Note: This does not remove any of the transactions from the pool.
pub(crate) fn satisfy_attributes(
&self,
best_transactions_attributes: BestTransactionsAttributes,
@@ -584,7 +584,7 @@ mod tests {
],
network_fees: PendingFees { base_fee: 0, blob_fee: 1999 },
},
// If both basefee and blobfee is specified, sort by the larger distance
// If both basefee and blobfee are specified, sort by the larger distance
// of the two from the current network conditions, splitting same (loglog)
// ones via the tip.
//

View File

@@ -11,7 +11,7 @@ pub struct ForwardInMemoryCursor<'a, K, V> {
impl<'a, K, V> ForwardInMemoryCursor<'a, K, V> {
/// Create new forward cursor positioned at the beginning of the collection.
///
/// The cursor expects all of the entries have been sorted in advance.
/// The cursor expects all of the entries to have been sorted in advance.
#[inline]
pub fn new(entries: &'a [(K, V)]) -> Self {
Self { entries: entries.iter(), is_empty: entries.is_empty() }
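A minimal sketch of the forward-only cursor the corrected doc comment describes: entries must be pre-sorted, and because seeks only ever move forward, a linear scan from the current position suffices (simplified, hypothetical field names):

```rust
struct ForwardCursor<'a, K, V> {
    entries: &'a [(K, V)],
    pos: usize,
}

impl<'a, K: Ord, V> ForwardCursor<'a, K, V> {
    fn new(entries: &'a [(K, V)]) -> Self {
        // The cursor expects all of the entries to have been sorted in advance.
        debug_assert!(entries.windows(2).all(|w| w[0].0 <= w[1].0));
        Self { entries, pos: 0 }
    }

    /// Advance to the first entry with a key >= `key`; never moves backwards.
    fn seek(&mut self, key: &K) -> Option<&'a (K, V)> {
        while let Some(entry) = self.entries.get(self.pos) {
            if entry.0 >= *key {
                return Some(entry);
            }
            self.pos += 1;
        }
        None
    }
}

fn main() {
    let entries = [(1u64, "a"), (3, "b"), (7, "c")];
    let mut cursor = ForwardCursor::new(&entries);
    assert_eq!(cursor.seek(&2), Some(&(3, "b")));
    assert_eq!(cursor.seek(&7), Some(&(7, "c"))); // resumes from position 1
}
```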

View File

@@ -35,8 +35,8 @@ pub trait HashedCursor {
/// Value returned by the cursor.
type Value: std::fmt::Debug;
/// Seek an entry greater or equal to the given key and position the cursor there.
/// Returns the first entry with the key greater or equal to the sought key.
/// Seek an entry greater than or equal to the given key and position the cursor there.
/// Returns the first entry with the key greater than or equal to the sought key.
fn seek(&mut self, key: B256) -> Result<Option<(B256, Self::Value)>, DatabaseError>;
/// Move the cursor to the next entry and return it.
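The clarified wording above pins down seek-greater-than-or-equal semantics. The same contract can be illustrated with the standard library's `BTreeMap::range`, which likewise yields the first entry whose key is greater than or equal to the bound:

```rust
use std::collections::BTreeMap;

fn main() {
    let mut map = BTreeMap::new();
    map.insert(2u64, "b");
    map.insert(5u64, "e");
    // Seeking key 3 positions on the first entry with key >= 3, i.e. (5, "e").
    assert_eq!(map.range(3u64..).next(), Some((&5u64, &"e")));
    // Seeking an exact key returns that entry itself.
    assert_eq!(map.range(2u64..).next(), Some((&2u64, &"b")));
}
```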

View File

@@ -120,7 +120,7 @@ where
///
/// If the key is the same as the last sought key, the result of the last seek is returned.
///
/// If `metrics` feature is enabled, also updates the metrics.
/// If `metrics` feature is enabled, it also updates the metrics.
fn seek_hashed_entry(&mut self, key: B256) -> Result<Option<(B256, H::Value)>, DatabaseError> {
if let Some((last_key, last_value)) = self.last_next_result &&
last_key == key
@@ -158,7 +158,7 @@ where
/// Advances the hashed cursor to the next entry.
///
/// If `metrics` feature is enabled, also updates the metrics.
/// If `metrics` feature is enabled, it also updates the metrics.
fn next_hashed_entry(&mut self) -> Result<Option<(B256, H::Value)>, DatabaseError> {
let result = self.hashed_cursor.next();
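A hedged sketch of the memoization described above, with `u64` keys standing in for `B256` and a closure standing in for the underlying cursor: repeating a seek for the same key returns the cached result without touching the cursor.

```rust
use std::{cell::Cell, rc::Rc};

struct CachingCursor<F: FnMut(u64) -> Option<(u64, u64)>> {
    seek_inner: F,
    last_seek: Option<(u64, Option<(u64, u64)>)>,
}

impl<F: FnMut(u64) -> Option<(u64, u64)>> CachingCursor<F> {
    fn seek(&mut self, key: u64) -> Option<(u64, u64)> {
        // Cache hit: the same key was sought last time.
        if let Some((last_key, result)) = self.last_seek {
            if last_key == key {
                return result;
            }
        }
        let result = (self.seek_inner)(key);
        self.last_seek = Some((key, result));
        result
    }
}

fn main() {
    let calls = Rc::new(Cell::new(0u32));
    let calls_in = Rc::clone(&calls);
    let mut cursor = CachingCursor {
        seek_inner: move |key| {
            calls_in.set(calls_in.get() + 1);
            Some((key, key * 2))
        },
        last_seek: None,
    };
    assert_eq!(cursor.seek(7), Some((7, 14)));
    assert_eq!(cursor.seek(7), Some((7, 14))); // cache hit, no second call
    assert_eq!(calls.get(), 1);
}
```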

View File

@@ -121,7 +121,7 @@ where
.with_updates(self.collect_branch_node_masks);
// Initialize all storage multiproofs as empty.
// Storage multiproofs for non empty tries will be overwritten if necessary.
// Storage multiproofs for non-empty tries will be overwritten if necessary.
let mut storages: B256Map<_> =
targets.keys().map(|key| (*key, StorageMultiProof::empty())).collect();
let mut account_rlp = Vec::with_capacity(TRIE_ACCOUNT_RLP_MAX_SIZE);

View File

@@ -20,7 +20,7 @@ use tracing::trace;
/// Result: 0x11, 0x12, 0x1, 0x21
/// ```
pub fn cmp(a: &Nibbles, b: &Nibbles) -> Ordering {
// If the two are equal length then compare them lexicographically
// If the two are of equal length, then compare them lexicographically
if a.len() == b.len() {
return a.cmp(b)
}
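The hunk shows only the equal-length branch of `cmp`. A hedged sketch of the full ordering (the remainder is an assumption consistent with the documented result `0x11, 0x12, 0x1, 0x21`): compare the shared prefix lexicographically and, when one path is a prefix of the other, order the deeper path first so descendants precede their ancestors.

```rust
use std::cmp::Ordering;

fn cmp_paths(a: &[u8], b: &[u8]) -> Ordering {
    // Equal length: plain lexicographic comparison, as in the snippet above.
    if a.len() == b.len() {
        return a.cmp(b);
    }
    let common = a.len().min(b.len());
    match a[..common].cmp(&b[..common]) {
        // One path is a prefix of the other: the deeper one sorts first.
        Ordering::Equal => b.len().cmp(&a.len()),
        other => other,
    }
}

fn main() {
    let mut paths: Vec<&[u8]> = vec![&[0x2, 0x1], &[0x1], &[0x1, 0x2], &[0x1, 0x1]];
    paths.sort_by(|a, b| cmp_paths(a, b));
    // Matches the documented result: 0x11, 0x12, 0x1, 0x21.
    let expected: Vec<&[u8]> = vec![&[0x1, 0x1], &[0x1, 0x2], &[0x1], &[0x2, 0x1]];
    assert_eq!(paths, expected);
}
```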
@@ -261,7 +261,7 @@ mod tests {
// Expected depth-first order:
// All descendants come before ancestors
// Within same level, lexicographical order
// Within the same level, lexicographical order
assert_eq!(paths[0], Nibbles::from_nibbles([0x1, 0x1, 0x1])); // 0x111 (deepest in 0x1 branch)
assert_eq!(paths[1], Nibbles::from_nibbles([0x1, 0x1, 0x2])); // 0x112 (sibling of 0x111)
assert_eq!(paths[2], Nibbles::from_nibbles([0x1, 0x1])); // 0x11 (parent of 0x111, 0x112)

View File

@@ -80,7 +80,7 @@ impl<H: HashedCursorFactory + Clone> Iterator for StateRootBranchNodesIter<H> {
return Some(Ok(node))
}
// If there's not a storage trie already being iterated over than check if there's a
// If there's not a storage trie already being iterated over then check if there's a
// storage trie we could start iterating over.
if let Some((account, storage_updates)) = self.storage_tries.pop() {
debug_assert!(!storage_updates.is_empty());
@@ -135,7 +135,7 @@ impl<H: HashedCursorFactory + Clone> Iterator for StateRootBranchNodesIter<H> {
.collect::<Vec<_>>();
// `root_with_progress` will output storage updates ordered by their account hash. If
// `root_with_progress` only returns a partial result then it will pick up with where
// `root_with_progress` only returns a partial result then it will pick up where
// it left off in the storage trie on the next run.
//
// By sorting by the account we ensure that we continue with the partially processed
@@ -155,7 +155,7 @@ impl<H: HashedCursorFactory + Clone> Iterator for StateRootBranchNodesIter<H> {
pub enum Output {
/// An extra account node was found.
AccountExtra(Nibbles, BranchNodeCompact),
/// A extra storage node was found.
/// An extra storage node was found.
StorageExtra(B256, Nibbles, BranchNodeCompact),
/// An account node had the wrong value.
AccountWrong {
@@ -261,7 +261,7 @@ impl<C: TrieCursor> SingleVerifier<DepthFirstTrieIterator<C>> {
return Ok(())
}
Ordering::Equal => {
// If the the current path matches the given one (happy path) but the nodes
// If the current path matches the given one (happy path) but the nodes
// aren't equal then we produce a wrong node. Either way we want to move the
// iterator forward.
if *curr_node != node {
@@ -298,7 +298,7 @@ impl<C: TrieCursor> SingleVerifier<DepthFirstTrieIterator<C>> {
}
/// Checks that data stored in the trie database is consistent, using hashed accounts/storages
/// database tables as the source of truth. This will iteratively re-compute the entire trie based
/// database tables as the source of truth. This will iteratively recompute the entire trie based
/// on the hashed state, and produce any discovered [`Output`]s via the `next` method.
#[derive(Debug)]
pub struct Verifier<T: TrieCursorFactory, H> {

View File

@@ -28,7 +28,7 @@ pub struct TrieWalker<C, K = AddedRemovedKeys> {
pub changes: PrefixSet,
/// The retained trie node keys that need to be removed.
removed_keys: Option<HashSet<Nibbles>>,
/// Provided when it's necessary to not skip certain nodes during proof generation.
/// Provided when it's necessary not to skip certain nodes during proof generation.
/// Specifically we don't skip certain branch nodes even when they are not in the `PrefixSet`,
/// when they might be required to support leaf removal.
added_removed_keys: Option<K>,
@@ -185,7 +185,7 @@ impl<C: TrieCursor, K: AsRef<AddedRemovedKeys>> TrieWalker<C, K> {
target: "trie::walker",
?key_is_only_nonremoved_child,
full_key=?node.full_key(),
"Checked for only nonremoved child",
"Checked for only non-removed child",
);
!self.changes.contains(node.full_key()) &&

View File

@@ -84,7 +84,7 @@ impl<T, H> TrieWitness<T, H> {
self
}
/// Set `always_include_root_node` to true. Root node will be included even on empty state.
/// Set `always_include_root_node` to true. Root node will be included even in empty state.
/// This setting is useful if the caller wants to verify the witness against the
/// parent state root.
pub const fn always_include_root_node(mut self) -> Self {

View File

@@ -256,7 +256,7 @@ self.tx.put::<tables::HeaderNumbers>(block.hash(), block_number)?;
Let's take a look at the `DatabaseProviderRW<DB: Database>` struct, which is used to create a mutable transaction to interact with the database.
The `DatabaseProviderRW<DB: Database>` struct implements the `Deref` and `DerefMut` traits, which return a reference to its first field, a `TxMut`. Recall that `TxMut` is a generic type on the `Database` trait, defined as `type TXMut: DbTxMut + DbTx + Send + Sync;`, giving it access to all of the functions available to `DbTx`, including the `DbTx::get()` function.
This next example uses the `DbTx::cursor()` method to get a `Cursor`. The `Cursor` type provides a way to traverse through rows in a database table, one row at a time. A cursor enables the program to perform an operation (updating, deleting, etc) on each row in the table individually. The following code snippet gets a cursor for a few different tables in the database.
This next example uses the `DbTx::cursor_read()` method to get a `Cursor`. The `Cursor` type provides a way to traverse through rows in a database table, one row at a time. A cursor enables the program to perform an operation (updating, deleting, etc) on each row in the table individually. The following code snippet gets a cursor for a few different tables in the database.
[File: crates/static-file/static-file/src/segments/headers.rs](https://github.com/paradigmxyz/reth/blob/bf9cac7571f018fec581fe3647862dab527aeafb/crates/static-file/static-file/src/segments/headers.rs#L22-L58)
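A hedged sketch of the row-by-row traversal the paragraph describes (imports elided; the function and its body follow the API shape used in this chapter, not verbatim reth code):

```rust
/// Walk every row of the Headers table with a read cursor.
fn walk_headers<Tx: DbTx>(tx: &Tx) -> Result<(), DatabaseError> {
    let mut cursor = tx.cursor_read::<tables::Headers>()?;
    // `next()` advances one row at a time and yields `Ok(None)` after the last row.
    while let Some((block_number, header)) = cursor.next()? {
        // Operate on each row individually, e.g. inspect the header.
        println!("block {block_number}: {header:?}");
    }
    Ok(())
}
```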
@@ -301,11 +301,6 @@ fn unwind(&mut self, provider: &DatabaseProviderRW<DB>, input: UnwindInput) {
withdrawals_cursor.delete_current()?;
}
// Delete the requests entry if any
if requests_cursor.seek_exact(number)?.is_some() {
requests_cursor.delete_current()?;
}
// Delete all transactions to block values.
if !block_meta.is_empty() &&
tx_block_cursor.seek_exact(block_meta.last_tx_num())?.is_some()

View File

@@ -2,6 +2,7 @@
Docs under this page contain some context on how we've iterated on the Reth design (still WIP, please contribute!):
- [Reth Goals](./goals.md)
- [Database](./database.md)
- Networking
- [P2P](./p2p.md)

View File

@@ -17,7 +17,7 @@ Reth secures real value on Ethereum mainnet today, trusted by institutions like
Reth pushes the performance frontier across every dimension, from L2 sequencers to MEV block building.
- **L2 Sequencer Performance**: Used by [Base](https://www.base.org/), other production L2s and also rollup-as-a-service providers such as [Conduit](https://conduit.xyz) which require high throughput and fast block times.
- **MEV & Block Building**: [rbuilder](https://github.com/flashbots/rbuilder) is an open-source implementation of a block builder built on Reth due to developer friendless and blazing fast performance.
- **MEV & Block Building**: [rbuilder](https://github.com/flashbots/rbuilder) is an open-source implementation of a block builder built on Reth due to developer friendliness and blazing fast performance.
## Infinitely Customizable

View File

@@ -6,7 +6,7 @@ description: Trace API for inspecting Ethereum state and transactions.
The `trace` API provides several methods to inspect the Ethereum state, including Parity-style traces.
A similar module exists (with other debug functions) with Geth-style traces ([`debug`](/jsonrpc/debug)).
A similar module exists (with other debug functions) with Geth-style traces ([`debug`](https://github.com/paradigmxyz/reth/blob/main/docs/vocs/docs/pages/jsonrpc/debug.mdx)).
The `trace` API gives deeper insight into transaction processing.
@@ -176,9 +176,9 @@ The second parameter is an array of one or more trace types (`vmTrace`, `trace`,
The third and optional parameter is a block number, block hash, or a block tag (`latest`, `finalized`, `safe`, `earliest`, `pending`).
| Client | Method invocation |
| ------ | --------------------------------------------------------- |
| RPC | `{"method": "trace_call", "params": [tx, type[], block]}` |
| Client | Method invocation |
| ------ | -------------------------------------------------------------- |
| RPC | `{"method": "trace_callMany", "params": [trace[], block]}` |
### Example
@@ -284,8 +284,8 @@ The second and optional parameter is a block number, block hash, or a block tag
Traces a call to `eth_sendRawTransaction` without making the call, returning the traces.
| Client | Method invocation |
| ------ | ---------------------------------------------------------------- |
| Client | Method invocation |
| ------ | --------------------------------------------------------------------- |
| RPC | `{"method": "trace_rawTransaction", "params": [raw_tx, type[]]}` |
### Example

View File

@@ -77,7 +77,7 @@ Once you're synced to the tip you will need a reliable connection, especially if
### Build your own
- Storage: Consult the [Great and less great SSDs for Ethereum nodes](https://gist.github.com/yorickdowne/f3a3e79a573bf35767cd002cc977b038) gist. The Seagate Firecuda 530 and WD Black SN850(X) are popular TLC NVMEe options. Ensure proper cooling via heatsinks or active fans.
- Storage: Consult the [Great and less great SSDs for Ethereum nodes](https://gist.github.com/yorickdowne/f3a3e79a573bf35767cd002cc977b038) gist. The Seagate Firecuda 530 and WD Black SN850(X) are popular TLC NVMe options. Ensure proper cooling via heatsinks or active fans.
- CPU: AMD Ryzen 5000/7000/9000 series, AMD EPYC 4004/4005 or Intel Core i5/i7 (11th gen or newer) with at least 6 cores. The AMD Ryzen 9000 series and the AMD EPYC 4005 series offer good value.
- Memory: 32GB DDR4 or DDR5 (ECC if your motherboard & CPU support it).

View File

@@ -9,7 +9,7 @@ The network stack implements the Ethereum Wire Protocol (ETH) and provides:
- Connection management with configurable peer limits
- Transaction propagation
- State synchronization
- Request/response protocols (e.g. GetBHeaders, GetBodies)
- Request/response protocols (e.g. GetBlockHeaders, GetBodies)
## Architecture

View File

@@ -85,10 +85,7 @@ pub struct BeaconSidecarConfig {
impl Default for BeaconSidecarConfig {
/// Default setup for lighthouse client
fn default() -> Self {
Self {
cl_addr: IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1)), // Equivalent to Ipv4Addr::LOCALHOST
cl_port: 5052,
}
Self { cl_addr: IpAddr::V4(Ipv4Addr::LOCALHOST), cl_port: 5052 }
}
}
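The simplified `Default` above relies on the standard-library constant; a standalone check that the two forms are identical:

```rust
use std::net::Ipv4Addr;

fn main() {
    // `Ipv4Addr::LOCALHOST` is exactly 127.0.0.1, so the one-line Default
    // above behaves identically to the replaced block.
    assert_eq!(Ipv4Addr::LOCALHOST, Ipv4Addr::new(127, 0, 0, 1));
}
```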