Compare commits


14 Commits

Author SHA1 Message Date
Jonas Theis
de7f6e56a9 refactor(rollup relayer): remove max_block_num_per_chunk configuration parameter (#1729)
Co-authored-by: jonastheis <jonastheis@users.noreply.github.com>
2025-09-10 14:16:13 +08:00
Ho
3b323198dc [feat] Integration e2e test tools (#1694)
Co-authored-by: georgehao <haohongfan@gmail.com>
2025-09-09 15:48:20 +09:00
Ho
c11e0283e8 [FIX] script for detecting plonky3gpu version (#1730) 2025-09-09 10:45:37 +08:00
Ho
a5a7844646 [FIX] Compatible with current prover (before 0.5.6) (#1732) 2025-09-02 19:20:02 +09:00
georgehao
7ff5b190ec bump version to v4.5.44 (#1731) 2025-09-02 09:56:59 +08:00
Ho
b297edd28d [Feat] Prover loading assets (circuits) dynamically (#1717) 2025-08-29 19:32:44 +09:00
Péter Garamvölgyi
47c85d4983 Fix unique chunk hash (#1727) 2025-08-26 14:27:24 +02:00
Morty
1552e98b79 fix(bridge-history+rollup-relayer): update da-codec to prevent zstd deadlock (#1724)
Co-authored-by: yiweichi <yiweichi@users.noreply.github.com>
Co-authored-by: Péter Garamvölgyi <peter@scroll.io>
2025-08-25 16:23:01 +08:00
Péter Garamvölgyi
a65b3066a3 fix: remove unnecessary logs (#1725) 2025-08-22 15:02:20 +02:00
Morty
1f2b397bbd feat(bridge-history): add aws s3 blob client (#1716)
Co-authored-by: yiweichi <yiweichi@users.noreply.github.com>
Co-authored-by: colin <102356659+colinlyguo@users.noreply.github.com>
Co-authored-by: colinlyguo <colinlyguo@users.noreply.github.com>
2025-08-12 14:59:44 +08:00
colin
ae791a0714 fix(rollup-relayer): sanity checks (#1720) 2025-08-12 14:57:02 +08:00
colin
c012f7132d feat(rollup-relayer): add sanity checks before committing and finalizing (#1714)
Co-authored-by: colinlyguo <colinlyguo@users.noreply.github.com>
2025-08-11 17:49:29 +08:00
Jonas Theis
6897cc54bd feat(permissionless batches): batch production toolkit and operator recovery (#1555)
Signed-off-by: noelwei <fan@scroll.io>
Co-authored-by: Ömer Faruk Irmak <omerfirmak@gmail.com>
Co-authored-by: noelwei <fan@scroll.io>
Co-authored-by: colin <102356659+colinlyguo@users.noreply.github.com>
Co-authored-by: Rohit Narurkar <rohit.narurkar@proton.me>
Co-authored-by: colinlyguo <colinlyguo@scroll.io>
Co-authored-by: Péter Garamvölgyi <peter@scroll.io>
Co-authored-by: Morty <70688412+yiweichi@users.noreply.github.com>
Co-authored-by: omerfirmak <omerfirmak@users.noreply.github.com>
Co-authored-by: jonastheis <jonastheis@users.noreply.github.com>
Co-authored-by: georgehao <georgehao@users.noreply.github.com>
Co-authored-by: kunxian xia <xiakunxian130@gmail.com>
Co-authored-by: Velaciela <git.rover@outlook.com>
Co-authored-by: colinlyguo <colinlyguo@users.noreply.github.com>
Co-authored-by: Morty <yiweichi1@gmail.com>
2025-08-04 12:37:31 +08:00
georgehao
d21fa36803 change l2watcher from w.GetBlockByNumberOrHash to BlockByNumber (#1715) 2025-07-31 18:29:50 +08:00
105 changed files with 4461 additions and 1131 deletions

Cargo.lock (generated)

File diff suppressed because it is too large.

@@ -17,9 +17,9 @@ repository = "https://github.com/scroll-tech/scroll"
version = "4.5.8"
[workspace.dependencies]
scroll-zkvm-prover-euclid = { git = "https://github.com/scroll-tech/zkvm-prover", branch = "feat/0.5.1", package = "scroll-zkvm-prover" }
scroll-zkvm-verifier-euclid = { git = "https://github.com/scroll-tech/zkvm-prover", branch = "feat/0.5.1", package = "scroll-zkvm-verifier" }
scroll-zkvm-types = { git = "https://github.com/scroll-tech/zkvm-prover", branch = "feat/0.5.1" }
scroll-zkvm-prover = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "ad0efe7" }
scroll-zkvm-verifier = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "ad0efe7" }
scroll-zkvm-types = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "ad0efe7" }
sbv-primitives = { git = "https://github.com/scroll-tech/stateless-block-verifier", branch = "chore/openvm-1.3", features = ["scroll"] }
sbv-utils = { git = "https://github.com/scroll-tech/stateless-block-verifier", branch = "chore/openvm-1.3" }


@@ -10,7 +10,7 @@ require (
github.com/go-redis/redis/v8 v8.11.5
github.com/pressly/goose/v3 v3.16.0
github.com/prometheus/client_golang v1.19.0
github.com/scroll-tech/da-codec v0.1.3-0.20250626091118-58b899494da6
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178
github.com/scroll-tech/go-ethereum v1.10.14-0.20250729113104-bd8f141bb3e9
github.com/stretchr/testify v1.9.0
github.com/urfave/cli/v2 v2.25.7


@@ -309,8 +309,8 @@ github.com/rs/cors v1.7.0 h1:+88SsELBHx5r+hZ8TCkggzSstaWNbDvThkVK8H6f9ik=
github.com/rs/cors v1.7.0/go.mod h1:gFx+x8UowdsKA9AchylcLynDq+nNFfI8FkUZdN/jGCU=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/scroll-tech/da-codec v0.1.3-0.20250626091118-58b899494da6 h1:vb2XLvQwCf+F/ifP6P/lfeiQrHY6+Yb/E3R4KHXLqSE=
github.com/scroll-tech/da-codec v0.1.3-0.20250626091118-58b899494da6/go.mod h1:Z6kN5u2khPhiqHyk172kGB7o38bH/nj7Ilrb/46wZGg=
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178 h1:4utngmJHXSOS5FoSdZhEV1xMRirpArbXvyoCZY9nYj0=
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178/go.mod h1:Z6kN5u2khPhiqHyk172kGB7o38bH/nj7Ilrb/46wZGg=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250729113104-bd8f141bb3e9 h1:u371VK8eOU2Z/0SVf5KDI3eJc8msHSpJbav4do/8n38=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250729113104-bd8f141bb3e9/go.mod h1:pDCZ4iGvEGmdIe4aSAGBrb7XSrKEML6/L/wEMmNxOdk=
github.com/scroll-tech/zktrie v0.8.4 h1:UagmnZ4Z3ITCk+aUq9NQZJNAwnWl4gSxsLb2Nl7IgRE=


@@ -38,6 +38,7 @@ type FetcherConfig struct {
BeaconNodeAPIEndpoint string `json:"BeaconNodeAPIEndpoint"`
BlobScanAPIEndpoint string `json:"BlobScanAPIEndpoint"`
BlockNativeAPIEndpoint string `json:"BlockNativeAPIEndpoint"`
AwsS3Endpoint string `json:"AwsS3Endpoint"`
}
// RedisConfig redis config
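The new `AwsS3Endpoint` field slots in alongside the existing fetcher endpoints. A hypothetical config fragment (field names are from the diff; the surrounding structure and bucket URL are illustrative only):

```json
{
  "BeaconNodeAPIEndpoint": "https://beacon.example.com",
  "BlobScanAPIEndpoint": "https://api.blobscan.example/blobs/",
  "BlockNativeAPIEndpoint": "https://blob-archive.example/v1/",
  "AwsS3Endpoint": "https://my-blob-bucket.s3.us-west-2.amazonaws.com/"
}
```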


@@ -39,6 +39,9 @@ type L1MessageFetcher struct {
// NewL1MessageFetcher creates a new L1MessageFetcher instance.
func NewL1MessageFetcher(ctx context.Context, cfg *config.FetcherConfig, db *gorm.DB, client *ethclient.Client) (*L1MessageFetcher, error) {
blobClient := blob_client.NewBlobClients()
if cfg.AwsS3Endpoint != "" {
blobClient.AddBlobClient(blob_client.NewAwsS3Client(cfg.AwsS3Endpoint))
}
if cfg.BeaconNodeAPIEndpoint != "" {
beaconNodeClient, err := blob_client.NewBeaconNodeClient(cfg.BeaconNodeAPIEndpoint)
if err != nil {


@@ -4,3 +4,5 @@ docs/
l2geth/
rpc-gateway/
*target/*
permissionless-batches/conf/


@@ -4,3 +4,5 @@ docs/
l2geth/
rpc-gateway/
*target/*
permissionless-batches/conf/


@@ -4,3 +4,5 @@ docs/
l2geth/
rpc-gateway/
*target/*
permissionless-batches/conf/


@@ -1,5 +1,8 @@
assets/
contracts/
docs/
l2geth/
rpc-gateway/
*target/*
*target/*
permissionless-batches/conf/


@@ -0,0 +1,30 @@
# Download Go dependencies
FROM scrolltech/go-rust-builder:go-1.21-rust-nightly-2023-12-03 as base
WORKDIR /src
COPY go.work* ./
COPY ./rollup/go.* ./rollup/
COPY ./common/go.* ./common/
COPY ./coordinator/go.* ./coordinator/
COPY ./database/go.* ./database/
COPY ./tests/integration-test/go.* ./tests/integration-test/
COPY ./bridge-history-api/go.* ./bridge-history-api/
RUN go mod download -x
# Build rollup_relayer
FROM base as builder
RUN --mount=target=. \
--mount=type=cache,target=/root/.cache/go-build \
cd /src/rollup/cmd/permissionless_batches/ && CGO_LDFLAGS="-ldl" go build -v -p 4 -o /bin/rollup_relayer
# Pull rollup_relayer into a second stage deploy ubuntu container
FROM ubuntu:20.04
RUN apt update && apt install vim netcat-openbsd net-tools curl ca-certificates -y
ENV CGO_LDFLAGS="-ldl"
COPY --from=builder /bin/rollup_relayer /bin/
WORKDIR /app
ENTRYPOINT ["rollup_relayer"]


@@ -0,0 +1,8 @@
assets/
contracts/
docs/
l2geth/
rpc-gateway/
*target/*
permissionless-batches/conf/


@@ -1,5 +1,8 @@
assets/
contracts/
docs/
l2geth/
rpc-gateway/
*target/*
*target/*
permissionless-batches/conf/


@@ -15,7 +15,7 @@ require (
github.com/modern-go/reflect2 v1.0.2
github.com/orcaman/concurrent-map v1.0.0
github.com/prometheus/client_golang v1.19.0
github.com/scroll-tech/go-ethereum v1.10.14-0.20250305151038-478940e79601
github.com/scroll-tech/go-ethereum v1.10.14-0.20250625112225-a67863c65587
github.com/stretchr/testify v1.10.0
github.com/testcontainers/testcontainers-go v0.30.0
github.com/testcontainers/testcontainers-go/modules/compose v0.30.0
@@ -184,7 +184,7 @@ require (
github.com/rjeczalik/notify v0.9.1 // indirect
github.com/rs/cors v1.7.0 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/scroll-tech/da-codec v0.1.3-0.20250310095435-012aaee6b435 // indirect
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178 // indirect
github.com/scroll-tech/zktrie v0.8.4 // indirect
github.com/secure-systems-lab/go-securesystemslib v0.4.0 // indirect
github.com/serialx/hashring v0.0.0-20190422032157-8b2912629002 // indirect


@@ -636,10 +636,10 @@ github.com/rs/cors v1.7.0 h1:+88SsELBHx5r+hZ8TCkggzSstaWNbDvThkVK8H6f9ik=
github.com/rs/cors v1.7.0/go.mod h1:gFx+x8UowdsKA9AchylcLynDq+nNFfI8FkUZdN/jGCU=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/scroll-tech/da-codec v0.1.3-0.20250310095435-012aaee6b435 h1:X9fkvjrYBY79lGgKEPpUhuiJ4vWpWwzOVw4H8CU8L54=
github.com/scroll-tech/da-codec v0.1.3-0.20250310095435-012aaee6b435/go.mod h1:yhTS9OVC0xQGhg7DN5iV5KZJvnSIlFWAxDdp+6jxQtY=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250305151038-478940e79601 h1:NEsjCG6uSvLRBlsP3+x6PL1kM+Ojs3g8UGotIPgJSz8=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250305151038-478940e79601/go.mod h1:OblWe1+QrZwdpwO0j/LY3BSGuKT3YPUFBDQQgvvfStQ=
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178 h1:4utngmJHXSOS5FoSdZhEV1xMRirpArbXvyoCZY9nYj0=
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178/go.mod h1:Z6kN5u2khPhiqHyk172kGB7o38bH/nj7Ilrb/46wZGg=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250625112225-a67863c65587 h1:wG1+gb+K4iLtxAHhiAreMdIjP5x9hB64duraN2+u1QU=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250625112225-a67863c65587/go.mod h1:YyfB2AyAtphlbIuDQgaxc2b9mo0zE4EBA1+qtXvzlmg=
github.com/scroll-tech/zktrie v0.8.4 h1:UagmnZ4Z3ITCk+aUq9NQZJNAwnWl4gSxsLb2Nl7IgRE=
github.com/scroll-tech/zktrie v0.8.4/go.mod h1:XvNo7vAk8yxNyTjBDj5WIiFzYW4bx/gJ78+NK6Zn6Uk=
github.com/secure-systems-lab/go-securesystemslib v0.4.0 h1:b23VGrQhTA8cN2CbBw7/FulN9fTtqYUdS5+Oxzt+DUE=


@@ -135,10 +135,18 @@ type BlockContextV2 struct {
NumL1Msgs uint16 `json:"num_l1_msgs"`
}
// Metric data carried with OpenVMProof
type OpenVMProofStat struct {
TotalCycle uint64 `json:"total_cycles"`
ExecutionTimeMills uint64 `json:"execution_time_mills"`
ProvingTimeMills uint64 `json:"proving_time_mills"`
}
// Proof for flatten VM proof
type OpenVMProof struct {
Proof []byte `json:"proofs"`
PublicValues []byte `json:"public_values"`
Proof []byte `json:"proofs"`
PublicValues []byte `json:"public_values"`
Stat *OpenVMProofStat `json:"stat,omitempty"`
}
// Proof for flatten EVM proof
@@ -150,7 +158,8 @@ type OpenVMEvmProof struct {
// OpenVMChunkProof includes the proof info that are required for chunk verification and rollup.
type OpenVMChunkProof struct {
MetaData struct {
ChunkInfo *ChunkInfo `json:"chunk_info"`
ChunkInfo *ChunkInfo `json:"chunk_info"`
TotalGasUsed uint64 `json:"chunk_total_gas"`
} `json:"metadata"`
VmProof *OpenVMProof `json:"proof"`


@@ -5,7 +5,7 @@ import (
"runtime/debug"
)
var tag = "v4.5.36"
var tag = "v4.5.46"
var commit = func() string {
if info, ok := debug.ReadBuildInfo(); ok {


@@ -34,6 +34,13 @@ coordinator_cron:
coordinator_tool:
go build -ldflags "-X scroll-tech/common/version.ZkVersion=${ZK_VERSION}" -o $(PWD)/build/bin/coordinator_tool ./cmd/tool
localsetup: coordinator_api ## Local setup: build coordinator_api, copy config, and setup releases
@echo "Copying configuration files..."
cp -r $(PWD)/conf $(PWD)/build/bin/
@echo "Setting up releases..."
cd $(PWD)/build && bash setup_releases.sh
#coordinator_api_skip_libzkp:
# go build -ldflags "-X scroll-tech/common/version.ZkVersion=${ZK_VERSION}" -o $(PWD)/build/bin/coordinator_api ./cmd/api


@@ -0,0 +1,62 @@
#!/bin/bash
# release version
if [ -z "${SCROLL_ZKVM_VERSION}" ]; then
echo "SCROLL_ZKVM_VERSION not set"
exit 1
fi
# set ASSET_DIR by reading from config.json
CONFIG_FILE="bin/conf/config.json"
if [ ! -f "$CONFIG_FILE" ]; then
echo "Config file $CONFIG_FILE not found"
exit 1
fi
# get the number of verifiers in the array
VERIFIER_COUNT=$(jq -r '.prover_manager.verifier.verifiers | length' "$CONFIG_FILE")
if [ "$VERIFIER_COUNT" = "null" ] || [ "$VERIFIER_COUNT" -eq 0 ]; then
echo "No verifiers found in config file"
exit 1
fi
echo "Found $VERIFIER_COUNT verifier(s) in config"
# iterate through each verifier entry
for ((i=0; i<$VERIFIER_COUNT; i++)); do
# extract assets_path for current verifier
ASSETS_PATH=$(jq -r ".prover_manager.verifier.verifiers[$i].assets_path" "$CONFIG_FILE")
FORK_NAME=$(jq -r ".prover_manager.verifier.verifiers[$i].fork_name" "$CONFIG_FILE")
if [ "$ASSETS_PATH" = "null" ]; then
echo "Warning: Could not find assets_path for verifier $i, skipping..."
continue
fi
echo "Processing verifier $i ($FORK_NAME): assets_path=$ASSETS_PATH"
# check if it's an absolute path (starts with /)
if [[ "$ASSETS_PATH" = /* ]]; then
# absolute path, use as is
ASSET_DIR="$ASSETS_PATH"
else
# relative path, prefix with "bin/"
ASSET_DIR="bin/$ASSETS_PATH"
fi
echo "Using ASSET_DIR: $ASSET_DIR"
# create directory if it doesn't exist
mkdir -p "$ASSET_DIR"
# assets for verifier-only mode
echo "Downloading assets for $FORK_NAME to $ASSET_DIR..."
wget https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/verifier/verifier.bin -O ${ASSET_DIR}/verifier.bin
wget https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/verifier/openVmVk.json -O ${ASSET_DIR}/openVmVk.json
echo "Completed downloading assets for $FORK_NAME"
echo "---"
done
echo "All verifier assets downloaded successfully"
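The script's path handling can be isolated for illustration: absolute `assets_path` values are used verbatim, while relative ones are prefixed with `bin/`. A minimal sketch of that branch (the function name is ours, not the script's):

```shell
#!/bin/bash
# Mirrors the ASSET_DIR resolution in setup_releases.sh:
# absolute paths pass through, relative paths land under bin/.
resolve_asset_dir() {
  local assets_path="$1"
  if [[ "$assets_path" = /* ]]; then
    echo "$assets_path"
  else
    echo "bin/$assets_path"
  fi
}

resolve_asset_dir "/opt/verifier/assets"  # prints "/opt/verifier/assets"
resolve_asset_dir "assets"                # prints "bin/assets"
```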


@@ -12,7 +12,7 @@
{
"assets_path": "assets",
"fork_name": "euclidV2"
},
},
{
"assets_path": "assets",
"fork_name": "feynman"


@@ -9,7 +9,7 @@ require (
github.com/google/uuid v1.6.0
github.com/mitchellh/mapstructure v1.5.0
github.com/prometheus/client_golang v1.19.0
github.com/scroll-tech/da-codec v0.1.3-0.20250626091118-58b899494da6
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178
github.com/scroll-tech/go-ethereum v1.10.14-0.20250626110859-cc9a1dd82de7
github.com/shopspring/decimal v1.3.1
github.com/stretchr/testify v1.10.0


@@ -253,8 +253,8 @@ github.com/rs/cors v1.7.0 h1:+88SsELBHx5r+hZ8TCkggzSstaWNbDvThkVK8H6f9ik=
github.com/rs/cors v1.7.0/go.mod h1:gFx+x8UowdsKA9AchylcLynDq+nNFfI8FkUZdN/jGCU=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/scroll-tech/da-codec v0.1.3-0.20250626091118-58b899494da6 h1:vb2XLvQwCf+F/ifP6P/lfeiQrHY6+Yb/E3R4KHXLqSE=
github.com/scroll-tech/da-codec v0.1.3-0.20250626091118-58b899494da6/go.mod h1:Z6kN5u2khPhiqHyk172kGB7o38bH/nj7Ilrb/46wZGg=
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178 h1:4utngmJHXSOS5FoSdZhEV1xMRirpArbXvyoCZY9nYj0=
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178/go.mod h1:Z6kN5u2khPhiqHyk172kGB7o38bH/nj7Ilrb/46wZGg=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250626110859-cc9a1dd82de7 h1:1rN1qocsQlOyk1VCpIEF1J5pfQbLAi1pnMZSLQS37jQ=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250626110859-cc9a1dd82de7/go.mod h1:pDCZ4iGvEGmdIe4aSAGBrb7XSrKEML6/L/wEMmNxOdk=
github.com/scroll-tech/zktrie v0.8.4 h1:UagmnZ4Z3ITCk+aUq9NQZJNAwnWl4gSxsLb2Nl7IgRE=


@@ -57,9 +57,10 @@ type Config struct {
// AssetConfig contains assets configured for each fork; the default vk file name is "OpenVmVk.json".
type AssetConfig struct {
AssetsPath string `json:"assets_path"`
ForkName string `json:"fork_name"`
Vkfile string `json:"vk_file,omitempty"`
AssetsPath string `json:"assets_path"`
ForkName string `json:"fork_name"`
Vkfile string `json:"vk_file,omitempty"`
MinProverVersion string `json:"min_prover_version,omitempty"`
}
// VerifierConfig load zk verifier config.


@@ -24,18 +24,16 @@ type LoginLogic struct {
openVmVks map[string]struct{}
proverVersionHardForkMap map[string][]string
proverVersionHardForkMap map[string]string
}
// NewLoginLogic new a LoginLogic
func NewLoginLogic(db *gorm.DB, cfg *config.Config, vf *verifier.Verifier) *LoginLogic {
proverVersionHardForkMap := make(map[string][]string)
proverVersionHardForkMap := make(map[string]string)
var hardForks []string
for _, cfg := range cfg.ProverManager.Verifier.Verifiers {
hardForks = append(hardForks, cfg.ForkName)
proverVersionHardForkMap[cfg.ForkName] = cfg.MinProverVersion
}
proverVersionHardForkMap[cfg.ProverManager.Verifier.MinProverVersion] = hardForks
return &LoginLogic{
cfg: cfg,
@@ -101,9 +99,15 @@ func (l *LoginLogic) ProverHardForkName(login *types.LoginParameter) (string, er
}
proverVersion := proverVersionSplits[0]
if hardForkNames, ok := l.proverVersionHardForkMap[proverVersion]; ok {
return strings.Join(hardForkNames, ","), nil
var hardForkNames []string
for n, minVersion := range l.proverVersionHardForkMap {
if minVersion == "" || version.CheckScrollRepoVersion(proverVersion, minVersion) {
hardForkNames = append(hardForkNames, n)
}
}
if len(hardForkNames) == 0 {
return "", fmt.Errorf("invalid prover prover_version:%s", login.Message.ProverVersion)
}
return "", fmt.Errorf("invalid prover prover_version:%s", login.Message.ProverVersion)
return strings.Join(hardForkNames, ","), nil
}
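The rewritten login logic replaces the version-keyed `map[string][]string` with a per-fork minimum prover version, then collects every fork whose minimum the connecting prover satisfies. A simplified sketch of that selection (the real code calls `version.CheckScrollRepoVersion`; `atLeast` below is an illustrative dotted-numeric comparison, and the fork names and versions are made up):

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
	"strings"
)

// atLeast is a simplified stand-in for version.CheckScrollRepoVersion:
// it compares dotted numeric versions like "v4.5.45". Real version
// parsing is more involved; this is illustration only.
func atLeast(got, min string) bool {
	parse := func(v string) []int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		out := make([]int, len(parts))
		for i, p := range parts {
			out[i], _ = strconv.Atoi(p)
		}
		return out
	}
	g, m := parse(got), parse(min)
	for i := 0; i < len(g) && i < len(m); i++ {
		if g[i] != m[i] {
			return g[i] > m[i]
		}
	}
	return len(g) >= len(m)
}

func main() {
	// Per-fork minimums, as in the new map[string]string; an empty
	// minimum means the fork is open to any prover version.
	minByFork := map[string]string{"euclidV2": "", "feynman": "v4.5.45"}
	proverVersion := "v4.5.44"

	var forks []string
	for fork, min := range minByFork {
		if min == "" || atLeast(proverVersion, min) {
			forks = append(forks, fork)
		}
	}
	sort.Strings(forks)
	fmt.Println(strings.Join(forks, ",")) // prints "euclidV2": feynman's minimum is not met
}
```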


@@ -71,6 +71,9 @@ type ProofReceiverLogic struct {
validateFailureProverTaskStatusNotOk prometheus.Counter
validateFailureProverTaskTimeout prometheus.Counter
validateFailureProverTaskHaveVerifier prometheus.Counter
proverSpeed *prometheus.GaugeVec
provingTime prometheus.Gauge
evmCyclePerGas prometheus.Gauge
ChunkTask provertask.ProverTask
BundleTask provertask.ProverTask
@@ -79,6 +82,7 @@ type ProofReceiverLogic struct {
// NewSubmitProofReceiverLogic create a proof receiver logic
func NewSubmitProofReceiverLogic(cfg *config.ProverManager, chainCfg *params.ChainConfig, db *gorm.DB, vf *verifier.Verifier, reg prometheus.Registerer) *ProofReceiverLogic {
return &ProofReceiverLogic{
chunkOrm: orm.NewChunk(db),
batchOrm: orm.NewBatch(db),
@@ -133,6 +137,18 @@ func NewSubmitProofReceiverLogic(cfg *config.ProverManager, chainCfg *params.Cha
Name: "coordinator_validate_failure_submit_have_been_verifier",
Help: "Total number of submit proof validate failure proof have been verifier.",
}),
evmCyclePerGas: promauto.With(reg).NewGauge(prometheus.GaugeOpts{
Name: "evm_circuit_cycle_per_gas",
Help: "VM cycles cost for a gas unit cost in evm execution",
}),
provingTime: promauto.With(reg).NewGauge(prometheus.GaugeOpts{
Name: "chunk_proving_time",
Help: "Wall clock time for chunk proving in second",
}),
proverSpeed: promauto.With(reg).NewGaugeVec(prometheus.GaugeOpts{
Name: "prover_speed",
Help: "Cycle against running time of prover (in mhz)",
}, []string{"type", "phase"}),
}
}
@@ -204,12 +220,34 @@ func (m *ProofReceiverLogic) HandleZkProof(ctx *gin.Context, proofParameter coor
return unmarshalErr
}
success, verifyErr = m.verifier.VerifyChunkProof(chunkProof, hardForkName)
if stat := chunkProof.VmProof.Stat; stat != nil {
if g, _ := m.proverSpeed.GetMetricWithLabelValues("chunk", "exec"); g != nil && stat.ExecutionTimeMills > 0 {
g.Set(float64(stat.TotalCycle) / float64(stat.ExecutionTimeMills*1000))
}
if g, _ := m.proverSpeed.GetMetricWithLabelValues("chunk", "proving"); g != nil && stat.ProvingTimeMills > 0 {
g.Set(float64(stat.TotalCycle) / float64(stat.ProvingTimeMills*1000))
}
if chunkProof.MetaData.TotalGasUsed > 0 {
cycle_per_gas := float64(stat.TotalCycle) / float64(chunkProof.MetaData.TotalGasUsed)
m.evmCyclePerGas.Set(cycle_per_gas)
}
m.provingTime.Set(float64(stat.ProvingTimeMills) / 1000)
}
case message.ProofTypeBatch:
batchProof := &message.OpenVMBatchProof{}
if unmarshalErr := json.Unmarshal([]byte(proofParameter.Proof), &batchProof); unmarshalErr != nil {
return unmarshalErr
}
success, verifyErr = m.verifier.VerifyBatchProof(batchProof, hardForkName)
if stat := batchProof.VmProof.Stat; stat != nil {
if g, _ := m.proverSpeed.GetMetricWithLabelValues("batch", "exec"); g != nil && stat.ExecutionTimeMills > 0 {
g.Set(float64(stat.TotalCycle) / float64(stat.ExecutionTimeMills*1000))
}
if g, _ := m.proverSpeed.GetMetricWithLabelValues("batch", "proving"); g != nil && stat.ProvingTimeMills > 0 {
g.Set(float64(stat.TotalCycle) / float64(stat.ProvingTimeMills*1000))
}
}
case message.ProofTypeBundle:
bundleProof := &message.OpenVMBundleProof{}
if unmarshalErr := json.Unmarshal([]byte(proofParameter.Proof), &bundleProof); unmarshalErr != nil {
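The metric math in the hunk above divides total cycles by milliseconds times 1000, i.e. cycles per microsecond, which matches the gauge help text's "mhz". A small sketch with made-up numbers:

```go
package main

import "fmt"

// proverSpeedMHz mirrors the metric computation from the diff:
// cycles / (milliseconds * 1000) = cycles per microsecond = MHz.
func proverSpeedMHz(totalCycle, timeMills uint64) float64 {
	return float64(totalCycle) / float64(timeMills*1000)
}

func main() {
	// Made-up numbers: 3e9 VM cycles proved in 2 s of wall-clock time.
	fmt.Printf("%.0f MHz\n", proverSpeedMHz(3_000_000_000, 2_000)) // prints "1500 MHz"
	// The proving-time gauge stores seconds: millis / 1000.
	fmt.Printf("%.1f s\n", float64(2_000)/1000) // prints "2.0 s"
}
```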


@@ -4,6 +4,7 @@ package verifier
import (
"encoding/base64"
"encoding/hex"
"encoding/json"
"fmt"
"io"
@@ -129,6 +130,23 @@ const blocked_vks = `
D6YFHwTLZF/U2zpYJPQ3LwJZRm85yA5Vq2iFBqd3Mk4iwOUpS8sbOp3vg2+NDxhhKphgYpuUlykpdsoRhEt+cw==,
`
// tries to decode s as hex, and if that fails, as base64.
func decodeVkString(s string) ([]byte, error) {
// Try hex decoding first
if b, err := hex.DecodeString(s); err == nil {
return b, nil
}
// Fallback to base64 decoding
b, err := base64.StdEncoding.DecodeString(s)
if err != nil {
return nil, err
}
if len(b) == 0 {
return nil, fmt.Errorf("decode vk string %s fail (empty bytes)", s)
}
return b, nil
}
func (v *Verifier) loadOpenVMVks(cfg config.AssetConfig) error {
vkFileName := cfg.Vkfile
@@ -165,17 +183,17 @@ func (v *Verifier) loadOpenVMVks(cfg config.AssetConfig) error {
v.OpenVMVkMap[dump.Bundle] = struct{}{}
log.Info("Load vks", "from", cfg.AssetsPath, "chunk", dump.Chunk, "batch", dump.Batch, "bundle", dump.Bundle)
decodedBytes, err := base64.StdEncoding.DecodeString(dump.Chunk)
decodedBytes, err := decodeVkString(dump.Chunk)
if err != nil {
return err
}
v.ChunkVk[cfg.ForkName] = decodedBytes
decodedBytes, err = base64.StdEncoding.DecodeString(dump.Batch)
decodedBytes, err = decodeVkString(dump.Batch)
if err != nil {
return err
}
v.BatchVk[cfg.ForkName] = decodedBytes
decodedBytes, err = base64.StdEncoding.DecodeString(dump.Bundle)
decodedBytes, err = decodeVkString(dump.Bundle)
if err != nil {
return err
}


@@ -584,7 +584,8 @@ func testTimeoutProof(t *testing.T) {
err = chunkOrm.UpdateBatchHashInRange(context.Background(), 0, 100, batch.Hash)
assert.NoError(t, err)
encodeData, err := json.Marshal(message.OpenVMChunkProof{VmProof: &message.OpenVMProof{}, MetaData: struct {
ChunkInfo *message.ChunkInfo `json:"chunk_info"`
ChunkInfo *message.ChunkInfo `json:"chunk_info"`
TotalGasUsed uint64 `json:"chunk_total_gas"`
}{ChunkInfo: &message.ChunkInfo{}}})
assert.NoError(t, err)
assert.NotEmpty(t, encodeData)


@@ -1,16 +1,16 @@
[patch."https://github.com/openvm-org/openvm.git"]
openvm-build = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.2.1-rc.1-pipe", default-features = false }
openvm-circuit = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.2.1-rc.1-pipe", default-features = false }
openvm-continuations = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.2.1-rc.1-pipe", default-features = false }
openvm-instructions ={ git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.2.1-rc.1-pipe", default-features = false }
openvm-native-circuit = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.2.1-rc.1-pipe", default-features = false }
openvm-native-compiler = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.2.1-rc.1-pipe", default-features = false }
openvm-native-recursion = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.2.1-rc.1-pipe", default-features = false }
openvm-native-transpiler = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.2.1-rc.1-pipe", default-features = false }
openvm-rv32im-transpiler = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.2.1-rc.1-pipe", default-features = false }
openvm-sdk = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.2.1-rc.1-pipe", default-features = false, features = ["parallel", "bench-metrics", "evm-prove"] }
openvm-transpiler = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.2.1-rc.1-pipe", default-features = false }
openvm-build = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
openvm-circuit = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
openvm-continuations = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
openvm-instructions ={ git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
openvm-native-circuit = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
openvm-native-compiler = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
openvm-native-recursion = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
openvm-native-transpiler = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
openvm-rv32im-transpiler = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
openvm-sdk = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false, features = ["parallel", "bench-metrics", "evm-prove"] }
openvm-transpiler = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
[patch."https://github.com/openvm-org/stark-backend.git"]
openvm-stark-backend = { git = "ssh://git@github.com/scroll-tech/openvm-stark-gpu.git", branch = "main", features = ["gpu"] }
@@ -42,4 +42,4 @@ p3-poseidon2-air = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", t
p3-symmetric = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-uni-stark = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-maybe-rayon = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" } # the "parallel" feature is NOT on by default to allow single-threaded benchmarking
p3-bn254-fr = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-bn254-fr = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }


@@ -4573,36 +4573,36 @@ dependencies = [
[[package]]
name = "openvm"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"bytemuck",
"num-bigint 0.4.6",
"openvm-custom-insn 0.1.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe)",
"openvm-platform 1.2.1-rc.0",
"openvm-rv32im-guest 1.2.1-rc.0",
"openvm-custom-insn 0.1.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-platform 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-rv32im-guest 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"serde",
]
[[package]]
name = "openvm"
version = "1.3.0"
source = "git+https://github.com/openvm-org/openvm.git?rev=4973d38cb3f2e14ebdd59e03802e65bb657ee422#4973d38cb3f2e14ebdd59e03802e65bb657ee422"
source = "git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0#5368d4756993fc1e51092499a816867cf4808de0"
dependencies = [
"bytemuck",
"getrandom 0.2.16",
"getrandom 0.3.3",
"num-bigint 0.4.6",
"openvm-custom-insn 0.1.0 (git+https://github.com/openvm-org/openvm.git?rev=4973d38cb3f2e14ebdd59e03802e65bb657ee422)",
"openvm-platform 1.3.0",
"openvm-rv32im-guest 1.3.0",
"openvm-custom-insn 0.1.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-platform 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-rv32im-guest 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"serde",
]
[[package]]
name = "openvm-algebra-circuit"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"derive-new 0.6.0",
"derive_more 1.0.0",
@@ -4630,10 +4630,10 @@ dependencies = [
[[package]]
name = "openvm-algebra-complex-macros"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"openvm-macros-common 1.2.1-rc.0",
"openvm-macros-common 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"quote",
"syn 2.0.101",
]
@@ -4641,25 +4641,25 @@ dependencies = [
[[package]]
name = "openvm-algebra-complex-macros"
version = "1.3.0"
source = "git+https://github.com/openvm-org/openvm.git?rev=4973d38cb3f2e14ebdd59e03802e65bb657ee422#4973d38cb3f2e14ebdd59e03802e65bb657ee422"
source = "git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0#5368d4756993fc1e51092499a816867cf4808de0"
dependencies = [
"openvm-macros-common 1.3.0",
"openvm-macros-common 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"quote",
"syn 2.0.101",
]
[[package]]
name = "openvm-algebra-guest"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"halo2curves-axiom",
"num-bigint 0.4.6",
"once_cell",
"openvm-algebra-complex-macros 1.2.1-rc.0",
"openvm-algebra-moduli-macros 1.2.1-rc.0",
"openvm-custom-insn 0.1.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe)",
"openvm-rv32im-guest 1.2.1-rc.0",
"openvm-algebra-complex-macros 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-algebra-moduli-macros 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-custom-insn 0.1.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-rv32im-guest 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"serde-big-array",
"strum_macros 0.26.4",
]
@@ -4667,27 +4667,27 @@ dependencies = [
[[package]]
name = "openvm-algebra-guest"
version = "1.3.0"
source = "git+https://github.com/openvm-org/openvm.git?rev=4973d38cb3f2e14ebdd59e03802e65bb657ee422#4973d38cb3f2e14ebdd59e03802e65bb657ee422"
source = "git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0#5368d4756993fc1e51092499a816867cf4808de0"
dependencies = [
"halo2curves-axiom",
"num-bigint 0.4.6",
"once_cell",
"openvm-algebra-complex-macros 1.3.0",
"openvm-algebra-moduli-macros 1.3.0",
"openvm-custom-insn 0.1.0 (git+https://github.com/openvm-org/openvm.git?rev=4973d38cb3f2e14ebdd59e03802e65bb657ee422)",
"openvm-rv32im-guest 1.3.0",
"openvm-algebra-complex-macros 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-algebra-moduli-macros 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-custom-insn 0.1.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-rv32im-guest 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"serde-big-array",
"strum_macros 0.26.4",
]
[[package]]
name = "openvm-algebra-moduli-macros"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"num-bigint 0.4.6",
"num-prime",
"openvm-macros-common 1.2.1-rc.0",
"openvm-macros-common 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"quote",
"syn 2.0.101",
]
@@ -4695,21 +4695,21 @@ dependencies = [
[[package]]
name = "openvm-algebra-moduli-macros"
version = "1.3.0"
source = "git+https://github.com/openvm-org/openvm.git?rev=4973d38cb3f2e14ebdd59e03802e65bb657ee422#4973d38cb3f2e14ebdd59e03802e65bb657ee422"
source = "git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0#5368d4756993fc1e51092499a816867cf4808de0"
dependencies = [
"num-bigint 0.4.6",
"num-prime",
"openvm-macros-common 1.3.0",
"openvm-macros-common 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"quote",
"syn 2.0.101",
]
[[package]]
name = "openvm-algebra-transpiler"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"openvm-algebra-guest 1.2.1-rc.0",
"openvm-algebra-guest 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-instructions",
"openvm-instructions-derive",
"openvm-stark-backend",
@@ -4720,8 +4720,8 @@ dependencies = [
[[package]]
name = "openvm-bigint-circuit"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"derive-new 0.6.0",
"derive_more 1.0.0",
@@ -4742,17 +4742,17 @@ dependencies = [
[[package]]
name = "openvm-bigint-guest"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"openvm-platform 1.2.1-rc.0",
"openvm-platform 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"strum_macros 0.26.4",
]
[[package]]
name = "openvm-bigint-transpiler"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"openvm-bigint-guest",
"openvm-instructions",
@@ -4766,20 +4766,20 @@ dependencies = [
[[package]]
name = "openvm-build"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"cargo_metadata",
"eyre",
"openvm-platform 1.2.1-rc.0",
"openvm-platform 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"serde",
"serde_json",
]
[[package]]
name = "openvm-circuit"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"backtrace",
"cfg-if",
@@ -4809,8 +4809,8 @@ dependencies = [
[[package]]
name = "openvm-circuit-derive"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"itertools 0.14.0",
"quote",
@@ -4819,8 +4819,8 @@ dependencies = [
[[package]]
name = "openvm-circuit-primitives"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"derive-new 0.6.0",
"itertools 0.14.0",
@@ -4834,8 +4834,8 @@ dependencies = [
[[package]]
name = "openvm-circuit-primitives-derive"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"itertools 0.14.0",
"quote",
@@ -4844,8 +4844,8 @@ dependencies = [
[[package]]
name = "openvm-continuations"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"derivative",
"openvm-circuit",
@@ -4860,7 +4860,7 @@ dependencies = [
[[package]]
name = "openvm-custom-insn"
version = "0.1.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"proc-macro2",
"quote",
@@ -4870,7 +4870,7 @@ dependencies = [
[[package]]
name = "openvm-custom-insn"
version = "0.1.0"
source = "git+https://github.com/openvm-org/openvm.git?rev=4973d38cb3f2e14ebdd59e03802e65bb657ee422#4973d38cb3f2e14ebdd59e03802e65bb657ee422"
source = "git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0#5368d4756993fc1e51092499a816867cf4808de0"
dependencies = [
"proc-macro2",
"quote",
@@ -4879,16 +4879,14 @@ dependencies = [
[[package]]
name = "openvm-ecc-circuit"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"derive-new 0.6.0",
"derive_more 1.0.0",
"eyre",
"hex-literal",
"lazy_static",
"num-bigint 0.4.6",
"num-integer",
"num-traits",
"once_cell",
"openvm-algebra-circuit",
@@ -4909,19 +4907,19 @@ dependencies = [
[[package]]
name = "openvm-ecc-guest"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"ecdsa",
"elliptic-curve",
"group 0.13.0",
"halo2curves-axiom",
"once_cell",
"openvm 1.2.1-rc.0",
"openvm-algebra-guest 1.2.1-rc.0",
"openvm-custom-insn 0.1.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe)",
"openvm-ecc-sw-macros 1.2.1-rc.0",
"openvm-rv32im-guest 1.2.1-rc.0",
"openvm 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-algebra-guest 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-custom-insn 0.1.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-ecc-sw-macros 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-rv32im-guest 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"serde",
"strum_macros 0.26.4",
]
@@ -4929,28 +4927,28 @@ dependencies = [
[[package]]
name = "openvm-ecc-guest"
version = "1.3.0"
source = "git+https://github.com/openvm-org/openvm.git?rev=4973d38cb3f2e14ebdd59e03802e65bb657ee422#4973d38cb3f2e14ebdd59e03802e65bb657ee422"
source = "git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0#5368d4756993fc1e51092499a816867cf4808de0"
dependencies = [
"ecdsa",
"elliptic-curve",
"group 0.13.0",
"halo2curves-axiom",
"once_cell",
"openvm 1.3.0",
"openvm-algebra-guest 1.3.0",
"openvm-custom-insn 0.1.0 (git+https://github.com/openvm-org/openvm.git?rev=4973d38cb3f2e14ebdd59e03802e65bb657ee422)",
"openvm-ecc-sw-macros 1.3.0",
"openvm-rv32im-guest 1.3.0",
"openvm 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-algebra-guest 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-custom-insn 0.1.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-ecc-sw-macros 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-rv32im-guest 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"serde",
"strum_macros 0.26.4",
]
[[package]]
name = "openvm-ecc-sw-macros"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"openvm-macros-common 1.2.1-rc.0",
"openvm-macros-common 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"quote",
"syn 2.0.101",
]
@@ -4958,19 +4956,19 @@ dependencies = [
[[package]]
name = "openvm-ecc-sw-macros"
version = "1.3.0"
source = "git+https://github.com/openvm-org/openvm.git?rev=4973d38cb3f2e14ebdd59e03802e65bb657ee422#4973d38cb3f2e14ebdd59e03802e65bb657ee422"
source = "git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0#5368d4756993fc1e51092499a816867cf4808de0"
dependencies = [
"openvm-macros-common 1.3.0",
"openvm-macros-common 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"quote",
"syn 2.0.101",
]
[[package]]
name = "openvm-ecc-transpiler"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"openvm-ecc-guest 1.2.1-rc.0",
"openvm-ecc-guest 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-instructions",
"openvm-instructions-derive",
"openvm-stark-backend",
@@ -4981,8 +4979,8 @@ dependencies = [
[[package]]
name = "openvm-instructions"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"backtrace",
"derive-new 0.6.0",
@@ -4998,8 +4996,8 @@ dependencies = [
[[package]]
name = "openvm-instructions-derive"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"quote",
"syn 2.0.101",
@@ -5007,8 +5005,8 @@ dependencies = [
[[package]]
name = "openvm-keccak256-circuit"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"derive-new 0.6.0",
"derive_more 1.0.0",
@@ -5033,16 +5031,16 @@ dependencies = [
[[package]]
name = "openvm-keccak256-guest"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"openvm-platform 1.2.1-rc.0",
"openvm-platform 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
]
[[package]]
name = "openvm-keccak256-transpiler"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"openvm-instructions",
"openvm-instructions-derive",
@@ -5055,8 +5053,8 @@ dependencies = [
[[package]]
name = "openvm-macros-common"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"syn 2.0.101",
]
@@ -5064,15 +5062,15 @@ dependencies = [
[[package]]
name = "openvm-macros-common"
version = "1.3.0"
source = "git+https://github.com/openvm-org/openvm.git?rev=4973d38cb3f2e14ebdd59e03802e65bb657ee422#4973d38cb3f2e14ebdd59e03802e65bb657ee422"
source = "git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0#5368d4756993fc1e51092499a816867cf4808de0"
dependencies = [
"syn 2.0.101",
]
[[package]]
name = "openvm-mod-circuit-builder"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"itertools 0.14.0",
"num-bigint 0.4.6",
@@ -5090,8 +5088,8 @@ dependencies = [
[[package]]
name = "openvm-native-circuit"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"derive-new 0.6.0",
"derive_more 1.0.0",
@@ -5117,8 +5115,8 @@ dependencies = [
[[package]]
name = "openvm-native-compiler"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"backtrace",
"itertools 0.14.0",
@@ -5141,8 +5139,8 @@ dependencies = [
[[package]]
name = "openvm-native-compiler-derive"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"quote",
"syn 2.0.101",
@@ -5150,8 +5148,8 @@ dependencies = [
[[package]]
name = "openvm-native-recursion"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"cfg-if",
"itertools 0.14.0",
@@ -5178,8 +5176,8 @@ dependencies = [
[[package]]
name = "openvm-native-transpiler"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"openvm-instructions",
"openvm-transpiler",
@@ -5189,7 +5187,7 @@ dependencies = [
[[package]]
name = "openvm-pairing"
version = "1.3.0"
source = "git+https://github.com/openvm-org/openvm.git?rev=4973d38cb3f2e14ebdd59e03802e65bb657ee422#4973d38cb3f2e14ebdd59e03802e65bb657ee422"
source = "git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0#5368d4756993fc1e51092499a816867cf4808de0"
dependencies = [
"group 0.13.0",
"halo2curves-axiom",
@@ -5197,24 +5195,24 @@ dependencies = [
"itertools 0.14.0",
"num-bigint 0.4.6",
"num-traits",
"openvm 1.3.0",
"openvm-algebra-complex-macros 1.3.0",
"openvm-algebra-guest 1.3.0",
"openvm-algebra-moduli-macros 1.3.0",
"openvm-custom-insn 0.1.0 (git+https://github.com/openvm-org/openvm.git?rev=4973d38cb3f2e14ebdd59e03802e65bb657ee422)",
"openvm-ecc-guest 1.3.0",
"openvm-ecc-sw-macros 1.3.0",
"openvm-pairing-guest 1.3.0",
"openvm-platform 1.3.0",
"openvm-rv32im-guest 1.3.0",
"openvm 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-algebra-complex-macros 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-algebra-guest 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-algebra-moduli-macros 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-custom-insn 0.1.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-ecc-guest 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-ecc-sw-macros 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-pairing-guest 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-platform 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-rv32im-guest 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"rand 0.8.5",
"serde",
]
[[package]]
name = "openvm-pairing-circuit"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"derive-new 0.6.0",
"derive_more 1.0.0",
@@ -5229,10 +5227,10 @@ dependencies = [
"openvm-circuit-primitives",
"openvm-circuit-primitives-derive",
"openvm-ecc-circuit",
"openvm-ecc-guest 1.2.1-rc.0",
"openvm-ecc-guest 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-instructions",
"openvm-mod-circuit-builder",
"openvm-pairing-guest 1.2.1-rc.0",
"openvm-pairing-guest 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-pairing-transpiler",
"openvm-rv32-adapters",
"openvm-rv32im-circuit",
@@ -5244,8 +5242,8 @@ dependencies = [
[[package]]
name = "openvm-pairing-guest"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"halo2curves-axiom",
"hex-literal",
@@ -5253,11 +5251,11 @@ dependencies = [
"lazy_static",
"num-bigint 0.4.6",
"num-traits",
"openvm 1.2.1-rc.0",
"openvm-algebra-guest 1.2.1-rc.0",
"openvm-algebra-moduli-macros 1.2.1-rc.0",
"openvm-custom-insn 0.1.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe)",
"openvm-ecc-guest 1.2.1-rc.0",
"openvm 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-algebra-guest 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-algebra-moduli-macros 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-custom-insn 0.1.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-ecc-guest 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"rand 0.8.5",
"serde",
"strum_macros 0.26.4",
@@ -5266,7 +5264,7 @@ dependencies = [
[[package]]
name = "openvm-pairing-guest"
version = "1.3.0"
source = "git+https://github.com/openvm-org/openvm.git?rev=4973d38cb3f2e14ebdd59e03802e65bb657ee422#4973d38cb3f2e14ebdd59e03802e65bb657ee422"
source = "git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0#5368d4756993fc1e51092499a816867cf4808de0"
dependencies = [
"halo2curves-axiom",
"hex-literal",
@@ -5274,11 +5272,11 @@ dependencies = [
"lazy_static",
"num-bigint 0.4.6",
"num-traits",
"openvm 1.3.0",
"openvm-algebra-guest 1.3.0",
"openvm-algebra-moduli-macros 1.3.0",
"openvm-custom-insn 0.1.0 (git+https://github.com/openvm-org/openvm.git?rev=4973d38cb3f2e14ebdd59e03802e65bb657ee422)",
"openvm-ecc-guest 1.3.0",
"openvm 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-algebra-guest 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-algebra-moduli-macros 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-custom-insn 0.1.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-ecc-guest 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"rand 0.8.5",
"serde",
"strum_macros 0.26.4",
@@ -5286,12 +5284,12 @@ dependencies = [
[[package]]
name = "openvm-pairing-transpiler"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"openvm-instructions",
"openvm-instructions-derive",
"openvm-pairing-guest 1.2.1-rc.0",
"openvm-pairing-guest 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-stark-backend",
"openvm-transpiler",
"rrs-lib",
@@ -5300,28 +5298,28 @@ dependencies = [
[[package]]
name = "openvm-platform"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"libm",
"openvm-custom-insn 0.1.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe)",
"openvm-rv32im-guest 1.2.1-rc.0",
"openvm-custom-insn 0.1.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-rv32im-guest 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
]
[[package]]
name = "openvm-platform"
version = "1.3.0"
source = "git+https://github.com/openvm-org/openvm.git?rev=4973d38cb3f2e14ebdd59e03802e65bb657ee422#4973d38cb3f2e14ebdd59e03802e65bb657ee422"
source = "git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0#5368d4756993fc1e51092499a816867cf4808de0"
dependencies = [
"libm",
"openvm-custom-insn 0.1.0 (git+https://github.com/openvm-org/openvm.git?rev=4973d38cb3f2e14ebdd59e03802e65bb657ee422)",
"openvm-rv32im-guest 1.3.0",
"openvm-custom-insn 0.1.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-rv32im-guest 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
]
[[package]]
name = "openvm-poseidon2-air"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"derivative",
"lazy_static",
@@ -5337,8 +5335,8 @@ dependencies = [
[[package]]
name = "openvm-rv32-adapters"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"derive-new 0.6.0",
"itertools 0.14.0",
@@ -5357,8 +5355,8 @@ dependencies = [
[[package]]
name = "openvm-rv32im-circuit"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"derive-new 0.6.0",
"derive_more 1.0.0",
@@ -5380,10 +5378,10 @@ dependencies = [
[[package]]
name = "openvm-rv32im-guest"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"openvm-custom-insn 0.1.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe)",
"openvm-custom-insn 0.1.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"p3-field 0.1.0",
"strum_macros 0.26.4",
]
@@ -5391,21 +5389,21 @@ dependencies = [
[[package]]
name = "openvm-rv32im-guest"
version = "1.3.0"
source = "git+https://github.com/openvm-org/openvm.git?rev=4973d38cb3f2e14ebdd59e03802e65bb657ee422#4973d38cb3f2e14ebdd59e03802e65bb657ee422"
source = "git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0#5368d4756993fc1e51092499a816867cf4808de0"
dependencies = [
"openvm-custom-insn 0.1.0 (git+https://github.com/openvm-org/openvm.git?rev=4973d38cb3f2e14ebdd59e03802e65bb657ee422)",
"openvm-custom-insn 0.1.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"p3-field 0.1.0",
"strum_macros 0.26.4",
]
[[package]]
name = "openvm-rv32im-transpiler"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"openvm-instructions",
"openvm-instructions-derive",
"openvm-rv32im-guest 1.2.1-rc.0",
"openvm-rv32im-guest 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-stark-backend",
"openvm-transpiler",
"rrs-lib",
@@ -5416,8 +5414,8 @@ dependencies = [
[[package]]
name = "openvm-sdk"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"async-trait",
"bitcode",
@@ -5431,7 +5429,7 @@ dependencies = [
"itertools 0.14.0",
"metrics",
"num-bigint 0.4.6",
"openvm 1.2.1-rc.0",
"openvm 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-algebra-circuit",
"openvm-algebra-transpiler",
"openvm-bigint-circuit",
@@ -5471,16 +5469,16 @@ dependencies = [
[[package]]
name = "openvm-sha2"
version = "1.3.0"
source = "git+https://github.com/openvm-org/openvm.git?rev=4973d38cb3f2e14ebdd59e03802e65bb657ee422#4973d38cb3f2e14ebdd59e03802e65bb657ee422"
source = "git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0#5368d4756993fc1e51092499a816867cf4808de0"
dependencies = [
"openvm-sha256-guest 1.3.0",
"openvm-sha256-guest 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"sha2 0.10.9",
]
[[package]]
name = "openvm-sha256-air"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"openvm-circuit-primitives",
"openvm-stark-backend",
@@ -5490,8 +5488,8 @@ dependencies = [
[[package]]
name = "openvm-sha256-circuit"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"derive-new 0.6.0",
"derive_more 1.0.0",
@@ -5513,28 +5511,28 @@ dependencies = [
[[package]]
name = "openvm-sha256-guest"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"openvm-platform 1.2.1-rc.0",
"openvm-platform 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
]
[[package]]
name = "openvm-sha256-guest"
version = "1.3.0"
source = "git+https://github.com/openvm-org/openvm.git?rev=4973d38cb3f2e14ebdd59e03802e65bb657ee422#4973d38cb3f2e14ebdd59e03802e65bb657ee422"
source = "git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0#5368d4756993fc1e51092499a816867cf4808de0"
dependencies = [
"openvm-platform 1.3.0",
"openvm-platform 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
]
[[package]]
name = "openvm-sha256-transpiler"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"openvm-instructions",
"openvm-instructions-derive",
"openvm-sha256-guest 1.2.1-rc.0",
"openvm-sha256-guest 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-stark-backend",
"openvm-transpiler",
"rrs-lib",
@@ -5609,13 +5607,13 @@ dependencies = [
[[package]]
name = "openvm-transpiler"
version = "1.2.1-rc.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.2.1-rc.1-pipe#7dd6d1620d07c2c3faa5b91105cbb3d19ff0c9b0"
version = "1.3.0"
source = "git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe#07e2731a2afd8bcb05b76566331e68e1e4ef00d0"
dependencies = [
"elf",
"eyre",
"openvm-instructions",
"openvm-platform 1.2.1-rc.0",
"openvm-platform 1.3.0 (git+ssh://git@github.com/scroll-tech/openvm-gpu.git?branch=patch-v1.3.0-pipe)",
"openvm-stark-backend",
"rrs-lib",
"thiserror 1.0.69",
@@ -6609,6 +6607,7 @@ dependencies = [
"ctor",
"eyre",
"futures",
"futures-util",
"hex",
"http 1.3.1",
"once_cell",
@@ -7047,6 +7046,7 @@ dependencies = [
"url",
"wasm-bindgen",
"wasm-bindgen-futures",
"wasm-streams",
"web-sys",
"webpki-roots 1.0.0",
]
@@ -8662,7 +8662,7 @@ dependencies = [
[[package]]
name = "scroll-proving-sdk"
version = "0.1.0"
source = "git+https://github.com/scroll-tech/scroll-proving-sdk.git?branch=refactor%2Fscroll#c144015870771db14b1b5d6071e4d3c4e9b48b9c"
source = "git+https://github.com/scroll-tech/scroll-proving-sdk.git?rev=4c36ab2#4c36ab29255481c34beb08ee7c3d8d4f5d7390c2"
dependencies = [
"anyhow",
"async-trait",
@@ -8692,7 +8692,7 @@ dependencies = [
[[package]]
name = "scroll-zkvm-prover"
version = "0.5.0"
source = "git+https://github.com/scroll-tech/zkvm-prover?tag=v0.5.0rc1#0eb4c11df4909dc6096dfc98875038385578264a"
source = "git+https://github.com/scroll-tech/zkvm-prover?rev=ad0efe7#ad0efe750d5f03781f28037c312e50400ac22a8d"
dependencies = [
"alloy-primitives",
"base64 0.22.1",
@@ -8714,8 +8714,8 @@ dependencies = [
"revm 24.0.0",
"rkyv",
"sbv-primitives",
"scroll-alloy-evm",
"scroll-zkvm-types",
"scroll-zkvm-types-batch",
"scroll-zkvm-types-chunk",
"scroll-zkvm-verifier",
"serde",
@@ -8730,15 +8730,17 @@ dependencies = [
[[package]]
name = "scroll-zkvm-types"
version = "0.5.0"
source = "git+https://github.com/scroll-tech/zkvm-prover?tag=v0.5.0rc1#0eb4c11df4909dc6096dfc98875038385578264a"
source = "git+https://github.com/scroll-tech/zkvm-prover?rev=ad0efe7#ad0efe750d5f03781f28037c312e50400ac22a8d"
dependencies = [
"base64 0.22.1",
"bincode",
"c-kzg",
"openvm-continuations",
"openvm-native-recursion",
"openvm-sdk",
"openvm-stark-sdk",
"rkyv",
"sbv-primitives",
"scroll-zkvm-types-base",
"scroll-zkvm-types-batch",
"scroll-zkvm-types-bundle",
@@ -8750,7 +8752,7 @@ dependencies = [
[[package]]
name = "scroll-zkvm-types-base"
version = "0.5.0"
source = "git+https://github.com/scroll-tech/zkvm-prover?tag=v0.5.0rc1#0eb4c11df4909dc6096dfc98875038385578264a"
source = "git+https://github.com/scroll-tech/zkvm-prover?rev=ad0efe7#ad0efe750d5f03781f28037c312e50400ac22a8d"
dependencies = [
"alloy-primitives",
"alloy-serde 1.0.16",
@@ -8765,17 +8767,17 @@ dependencies = [
[[package]]
name = "scroll-zkvm-types-batch"
version = "0.5.0"
source = "git+https://github.com/scroll-tech/zkvm-prover?tag=v0.5.0rc1#0eb4c11df4909dc6096dfc98875038385578264a"
source = "git+https://github.com/scroll-tech/zkvm-prover?rev=ad0efe7#ad0efe750d5f03781f28037c312e50400ac22a8d"
dependencies = [
"alloy-primitives",
"halo2curves-axiom",
"itertools 0.14.0",
"openvm 1.3.0",
"openvm-ecc-guest 1.3.0",
"openvm 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-ecc-guest 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-pairing",
"openvm-pairing-guest 1.3.0",
"openvm-pairing-guest 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-sha2",
"openvm-sha256-guest 1.3.0",
"openvm-sha256-guest 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"rkyv",
"scroll-zkvm-types-base",
"serde",
@@ -8785,7 +8787,7 @@ dependencies = [
[[package]]
name = "scroll-zkvm-types-bundle"
version = "0.5.0"
source = "git+https://github.com/scroll-tech/zkvm-prover?tag=v0.5.0rc1#0eb4c11df4909dc6096dfc98875038385578264a"
source = "git+https://github.com/scroll-tech/zkvm-prover?rev=ad0efe7#ad0efe750d5f03781f28037c312e50400ac22a8d"
dependencies = [
"alloy-primitives",
"itertools 0.14.0",
@@ -8798,13 +8800,13 @@ dependencies = [
[[package]]
name = "scroll-zkvm-types-chunk"
version = "0.5.0"
source = "git+https://github.com/scroll-tech/zkvm-prover?tag=v0.5.0rc1#0eb4c11df4909dc6096dfc98875038385578264a"
source = "git+https://github.com/scroll-tech/zkvm-prover?rev=ad0efe7#ad0efe750d5f03781f28037c312e50400ac22a8d"
dependencies = [
"alloy-primitives",
"itertools 0.14.0",
"openvm 1.3.0",
"openvm-custom-insn 0.1.0 (git+https://github.com/openvm-org/openvm.git?rev=4973d38cb3f2e14ebdd59e03802e65bb657ee422)",
"openvm-rv32im-guest 1.3.0",
"openvm 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-custom-insn 0.1.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"openvm-rv32im-guest 1.3.0 (git+https://github.com/openvm-org/openvm.git?rev=5368d4756993fc1e51092499a816867cf4808de0)",
"revm-precompile 21.0.0 (git+https://github.com/scroll-tech/revm?branch=feat%2Freth-v74)",
"rkyv",
"sbv-core",
@@ -8818,11 +8820,12 @@ dependencies = [
[[package]]
name = "scroll-zkvm-verifier"
version = "0.5.0"
source = "git+https://github.com/scroll-tech/zkvm-prover?tag=v0.5.0rc1#0eb4c11df4909dc6096dfc98875038385578264a"
source = "git+https://github.com/scroll-tech/zkvm-prover?rev=ad0efe7#ad0efe750d5f03781f28037c312e50400ac22a8d"
dependencies = [
"bincode",
"eyre",
"itertools 0.14.0",
"once_cell",
"openvm-circuit",
"openvm-continuations",
"openvm-native-circuit",
@@ -8832,7 +8835,9 @@ dependencies = [
"revm 24.0.0",
"scroll-zkvm-types",
"serde",
"serde_json",
"snark-verifier-sdk",
"tracing",
]
[[package]]
@@ -10237,6 +10242,7 @@ dependencies = [
"form_urlencoded",
"idna",
"percent-encoding",
"serde",
]
[[package]]
@@ -10414,6 +10420,19 @@ dependencies = [
"unicode-ident",
]
[[package]]
name = "wasm-streams"
version = "0.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "15053d8d85c7eccdbefef60f06769760a563c7f0a9d6902a13d35c7800b0ad65"
dependencies = [
"futures-util",
"js-sys",
"wasm-bindgen",
"wasm-bindgen-futures",
"web-sys",
]
[[package]]
name = "wasm-timer"
version = "0.2.5"

View File

@@ -16,6 +16,8 @@ clean:
build:
GO_TAG=${GO_TAG} GIT_REV=${GIT_REV} ZK_VERSION=${ZK_VERSION} cargo build -Z unstable-options --release -p prover --lockfile-path ./Cargo.lock
version:
echo ${GO_TAG}-${GIT_REV}-${ZK_VERSION}
# update Cargo.lock when the override config has been updated
#update:
# GO_TAG=${GO_TAG} GIT_REV=${GIT_REV} ZK_VERSION=${ZK_VERSION} cargo build -Z unstable-options --release -p prover --lockfile-path ./Cargo.lock

View File

@@ -1,15 +1,10 @@
#!/bin/bash
config_file=.cargo/config.toml
plonky3_gpu_path=$(grep 'path.*plonky3-gpu' "$config_file" | cut -d'"' -f2 | head -n 1)
plonky3_gpu_path=$(dirname "$plonky3_gpu_path")
higher_plonky3_item=`grep "plonky3-gpu" ./Cargo.lock | sort | uniq | awk -F "[#=]" '{print $3" "$4}' | sort -k 1 | tail -n 1`
if [ -z $plonky3_gpu_path ]; then
exit 0
else
pushd $plonky3_gpu_path
commit_hash=$(git log --pretty=format:%h -n 1)
echo "${commit_hash:0:7}"
higher_version=`echo $higher_plonky3_item | awk '{print $1}'`
popd
fi
higher_commit=`echo $higher_plonky3_item | cut -d ' ' -f2 | cut -c-7`
echo "$higher_version"
echo "$higher_commit"

View File

@@ -6,7 +6,7 @@ edition.workspace = true
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
scroll-zkvm-types.workspace = true
scroll-zkvm-verifier-euclid.workspace = true
scroll-zkvm-verifier.workspace = true
alloy-primitives.workspace = true #depress the effect of "native-keccak"
sbv-primitives = {workspace = true, features = ["scroll-compress-ratio", "scroll"]}

View File

@@ -5,7 +5,7 @@ pub use verifier::{TaskType, VerifierConfig};
mod utils;
use sbv_primitives::B256;
use scroll_zkvm_types::util::vec_as_base64;
use scroll_zkvm_types::utils::vec_as_base64;
use serde::{Deserialize, Serialize};
use serde_json::value::RawValue;
use std::path::Path;

View File

@@ -7,10 +7,10 @@ use scroll_zkvm_types::{
batch::BatchInfo,
bundle::BundleInfo,
chunk::ChunkInfo,
proof::{EvmProof, OpenVmEvmProof, ProofEnum, RootProof},
proof::{EvmProof, OpenVmEvmProof, ProofEnum, StarkProof},
public_inputs::{ForkName, MultiVersionPublicInputs},
types_agg::{AggregationInput, ProgramCommitment},
util::vec_as_base64,
utils::vec_as_base64,
};
use serde::{de::DeserializeOwned, Deserialize, Serialize};
@@ -40,7 +40,7 @@ pub struct WrappedProof<Metadata> {
}
pub trait AsRootProof {
fn as_root_proof(&self) -> &RootProof;
fn as_root_proof(&self) -> &StarkProof;
}
pub trait AsEvmProof {
@@ -61,17 +61,17 @@ pub type BatchProof = WrappedProof<BatchProofMetadata>;
pub type BundleProof = WrappedProof<BundleProofMetadata>;
impl AsRootProof for ChunkProof {
fn as_root_proof(&self) -> &RootProof {
fn as_root_proof(&self) -> &StarkProof {
self.proof
.as_root_proof()
.as_stark_proof()
.expect("batch proof use root proof")
}
}
impl AsRootProof for BatchProof {
fn as_root_proof(&self) -> &RootProof {
fn as_root_proof(&self) -> &StarkProof {
self.proof
.as_root_proof()
.as_stark_proof()
.expect("batch proof use root proof")
}
}
@@ -122,6 +122,8 @@ pub trait PersistableProof: Sized {
pub struct ChunkProofMetadata {
/// The chunk information describing the list of blocks contained within the chunk.
pub chunk_info: ChunkInfo,
/// Additional data for statistics
pub chunk_total_gas: u64,
}
impl ProofMetadata for ChunkProofMetadata {

View File

@@ -44,12 +44,16 @@ pub fn gen_universal_chunk_task(
if let Some(interpreter) = interpreter {
task.prepare_task_via_interpret(interpreter)?;
}
let chunk_total_gas = task.stats().total_gas_used;
let chunk_info = task.precheck_and_build_metadata()?;
let proving_task = task.try_into()?;
let expected_pi_hash = chunk_info.pi_hash_by_fork(fork_name);
Ok((
expected_pi_hash,
ChunkProofMetadata { chunk_info },
ChunkProofMetadata {
chunk_info,
chunk_total_gas,
},
proving_task,
))
}

View File

@@ -91,7 +91,7 @@ impl TryFrom<BatchProvingTask> for ProvingTask {
aggregated_proofs: value
.chunk_proofs
.into_iter()
.map(|w_proof| w_proof.proof.into_root_proof().expect("expect root proof"))
.map(|w_proof| w_proof.proof.into_stark_proof().expect("expect root proof"))
.collect(),
serialized_witness: vec![to_rkyv_bytes::<RancorError>(&witness)?.into_vec()],
vk: Vec::new(),

View File

@@ -18,7 +18,7 @@ pub mod base64 {
pub mod point_eval {
use c_kzg;
use sbv_primitives::{types::eips::eip4844::BLS_MODULUS, B256 as H256, U256};
use scroll_zkvm_types::util::sha256_rv32;
use scroll_zkvm_types::utils::sha256_rv32;
/// Given the blob-envelope, translate it to a fixed size EIP-4844 blob.
///

View File

@@ -81,7 +81,7 @@ impl TryFrom<BundleProvingTask> for ProvingTask {
aggregated_proofs: value
.batch_proofs
.into_iter()
.map(|w_proof| w_proof.proof.into_root_proof().expect("expect root proof"))
.map(|w_proof| w_proof.proof.into_stark_proof().expect("expect root proof"))
.collect(),
serialized_witness: vec![witness.rkyv_serialize(None)?.to_vec()],
vk: Vec::new(),

View File

@@ -1,7 +1,6 @@
#![allow(static_mut_refs)]
mod euclidv2;
use euclidv2::EuclidV2Verifier;
mod universal;
use eyre::Result;
use serde::{Deserialize, Serialize};
use std::{
@@ -9,6 +8,7 @@ use std::{
path::Path,
sync::{Arc, Mutex, OnceLock},
};
use universal::Verifier;
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum TaskType {
@@ -61,7 +61,7 @@ pub fn init(config: VerifierConfig) {
for cfg in &config.circuits {
let canonical_fork_name = cfg.fork_name.to_lowercase();
let verifier = EuclidV2Verifier::new(&cfg.assets_path, canonical_fork_name.as_str().into());
let verifier = Verifier::new(&cfg.assets_path, canonical_fork_name.as_str().into());
let ret = verifiers.insert(canonical_fork_name, Arc::new(Mutex::new(verifier)));
assert!(
ret.is_none(),

View File

@@ -7,59 +7,47 @@ use crate::{
utils::panic_catch,
};
use scroll_zkvm_types::public_inputs::ForkName;
use scroll_zkvm_verifier_euclid::verifier::UniversalVerifier;
use scroll_zkvm_verifier::verifier::UniversalVerifier;
use std::path::Path;
pub struct EuclidV2Verifier {
pub struct Verifier {
verifier: UniversalVerifier,
fork: ForkName,
}
impl EuclidV2Verifier {
impl Verifier {
pub fn new(assets_dir: &str, fork: ForkName) -> Self {
let verifier_bin = Path::new(assets_dir).join("verifier.bin");
let config = Path::new(assets_dir).join("root-verifier-vm-config");
let exe = Path::new(assets_dir).join("root-verifier-committed-exe");
Self {
verifier: UniversalVerifier::setup(&config, &exe, &verifier_bin)
.expect("Setting up chunk verifier"),
verifier: UniversalVerifier::setup(&verifier_bin).expect("Setting up chunk verifier"),
fork,
}
}
}
impl ProofVerifier for EuclidV2Verifier {
impl ProofVerifier for Verifier {
fn verify(&self, task_type: super::TaskType, proof: &[u8]) -> Result<bool> {
panic_catch(|| match task_type {
TaskType::Chunk => {
let proof = serde_json::from_slice::<ChunkProof>(proof).unwrap();
if !proof.pi_hash_check(self.fork) {
return false;
}
self.verifier
.verify_proof(proof.as_root_proof(), &proof.vk)
.unwrap()
assert!(proof.pi_hash_check(self.fork));
UniversalVerifier::verify_stark_proof(proof.as_root_proof(), &proof.vk).unwrap()
}
TaskType::Batch => {
let proof = serde_json::from_slice::<BatchProof>(proof).unwrap();
if !proof.pi_hash_check(self.fork) {
return false;
}
self.verifier
.verify_proof(proof.as_root_proof(), &proof.vk)
.unwrap()
assert!(proof.pi_hash_check(self.fork));
UniversalVerifier::verify_stark_proof(proof.as_root_proof(), &proof.vk).unwrap()
}
TaskType::Bundle => {
let proof = serde_json::from_slice::<BundleProof>(proof).unwrap();
if !proof.pi_hash_check(self.fork) {
return false;
}
assert!(proof.pi_hash_check(self.fork));
let vk = proof.vk.clone();
let evm_proof = proof.into_evm_proof();
self.verifier.verify_proof_evm(&evm_proof, &vk).unwrap()
self.verifier.verify_evm_proof(&evm_proof, &vk).unwrap()
}
})
.map(|_| true)
.map_err(|err_str: String| eyre::eyre!("{err_str}"))
}

View File

@@ -7,8 +7,8 @@ edition.workspace = true
[dependencies]
scroll-zkvm-types.workspace = true
scroll-zkvm-prover-euclid.workspace = true
scroll-proving-sdk = { git = "https://github.com/scroll-tech/scroll-proving-sdk.git", branch = "refactor/scroll" }
scroll-zkvm-prover.workspace = true
scroll-proving-sdk = { git = "https://github.com/scroll-tech/scroll-proving-sdk.git", rev = "4c36ab2" }
serde.workspace = true
serde_json.workspace = true
once_cell.workspace =true
@@ -17,8 +17,9 @@ tiny-keccak = { workspace = true, features = ["sha3", "keccak"] }
eyre.workspace = true
futures = "0.3.30"
futures-util = "0.3"
reqwest = { version = "0.12.4", features = ["gzip"] }
reqwest = { version = "0.12.4", features = ["gzip", "stream"] }
reqwest-middleware = "0.3"
reqwest-retry = "0.5"
hex = "0.4.3"
@@ -30,5 +31,5 @@ sled = "0.34.7"
http = "1.1.0"
clap = { version = "4.5", features = ["derive"] }
ctor = "0.2.8"
url = "2.5.4"
url = { version = "2.5.4", features = ["serde",] }
serde_bytes = "0.11.15"

View File

@@ -0,0 +1,7 @@
{
"feynman": {
"b68fdc3f28a5ce006280980df70cd3447e56913e5bca6054603ba85f0794c23a6618ea25a7991845bbc5fd571670ee47379ba31ace92d345bca59702a0d4112d": "https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/0.5.2/chunk/",
"9a3f66370f11e3303f1a1248921025104e83253efea43a70d221cf4e15fc145bf2be2f4468d1ac4a70e7682babb1c60417e21c7633d4b55b58f44703ec82b05a": "https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/0.5.2/batch/",
"1f8627277e1c1f6e1cc70c03e6fde06929e5ea27ca5b1d56e23b235dfeda282e22c0e5294bcb1b3a9def836f8d0f18612a9860629b9497292976ca11844b7e73": "https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/0.5.2/bundle/"
}
}

View File

@@ -34,11 +34,6 @@ struct Args {
#[derive(Subcommand, Debug)]
enum Commands {
/// Dump vk of this prover
Dump {
/// path to save the verifier's asset
asset_path: String,
},
Handle {
/// path to the task file to handle
task_path: String,
@@ -64,16 +59,10 @@ async fn main() -> eyre::Result<()> {
}
let cfg = LocalProverConfig::from_file(args.config_file)?;
let default_fork_name = cfg.circuits.keys().next().unwrap().clone();
let sdk_config = cfg.sdk_config.clone();
let local_prover = LocalProver::new(cfg.clone());
match args.command {
Some(Commands::Dump { asset_path }) => {
let fork_name = args.fork_name.unwrap_or(default_fork_name);
println!("dump assets for {fork_name} into {asset_path}");
local_prover.dump_verifier_assets(&fork_name, asset_path.as_ref())?;
}
Some(Commands::Handle { task_path }) => {
let file = File::open(Path::new(&task_path))?;
let reader = BufReader::new(file);

View File

@@ -1,4 +1,4 @@
use crate::zk_circuits_handler::{euclidV2::EuclidV2Handler, CircuitsHandler};
use crate::zk_circuits_handler::{universal::UniversalHandler, CircuitsHandler};
use async_trait::async_trait;
use eyre::Result;
use scroll_proving_sdk::{
@@ -16,12 +16,111 @@ use serde::{Deserialize, Serialize};
use std::{
collections::HashMap,
fs::File,
path::Path,
sync::{Arc, OnceLock},
path::{Path, PathBuf},
sync::{Arc, LazyLock},
time::{SystemTime, UNIX_EPOCH},
};
use tokio::{runtime::Handle, sync::Mutex, task::JoinHandle};
#[derive(Clone, Serialize, Deserialize)]
pub struct AssetsLocationData {
/// the base URL used to form the download URL for an asset; MUST HAVE A TRAILING SLASH
pub base_url: url::Url,
#[serde(default)]
/// an alternate URL for a specific vk
pub asset_detours: HashMap<String, url::Url>,
}
impl AssetsLocationData {
pub fn gen_asset_url(&self, vk_as_path: &str, proof_type: ProofType) -> Result<url::Url> {
Ok(self.base_url.join(
match proof_type {
ProofType::Chunk => format!("chunk/{vk_as_path}/"),
ProofType::Batch => format!("batch/{vk_as_path}/"),
ProofType::Bundle => format!("bundle/{vk_as_path}/"),
t => eyre::bail!("unrecognized proof type: {}", t as u8),
}
.as_str(),
)?)
}
pub fn validate(&self) -> Result<()> {
if !self.base_url.path().ends_with('/') {
eyre::bail!(
"base_url must have a trailing slash, got: {}",
self.base_url
);
}
Ok(())
}
pub async fn get_asset(
&self,
vk: &str,
url_base: &url::Url,
base_path: impl AsRef<Path>,
) -> Result<PathBuf> {
let download_files = ["app.vmexe", "openvm.toml"];
// Step 1: Create a local path for storage
let storage_path = base_path.as_ref().join(vk);
std::fs::create_dir_all(&storage_path)?;
// Step 2 & 3: Download each file if needed
let client = reqwest::Client::new();
for filename in download_files.iter() {
let local_file_path = storage_path.join(filename);
let download_url = url_base.join(filename)?;
// Check if file already exists
if local_file_path.exists() {
// Get file metadata to check size
if let Ok(metadata) = std::fs::metadata(&local_file_path) {
// Make a HEAD request to get remote file size
if let Ok(head_resp) = client.head(download_url.clone()).send().await {
if let Some(content_length) = head_resp.headers().get("content-length") {
if let Ok(remote_size) =
content_length.to_str().unwrap_or("0").parse::<u64>()
{
// If sizes match, skip download
if metadata.len() == remote_size {
println!("File {} already exists with matching size, skipping download", filename);
continue;
}
}
}
}
}
}
println!("Downloading {} from {}", filename, download_url);
let response = client.get(download_url).send().await?;
if !response.status().is_success() {
eyre::bail!(
"Failed to download {}: HTTP status {}",
filename,
response.status()
);
}
// Stream the content directly to file instead of loading into memory
let mut file = std::fs::File::create(&local_file_path)?;
let mut stream = response.bytes_stream();
use futures_util::StreamExt;
while let Some(chunk) = stream.next().await {
std::io::Write::write_all(&mut file, &chunk?)?;
}
}
// Step 4: Return the storage path
Ok(storage_path)
}
}
#[derive(Clone, Serialize, Deserialize)]
pub struct LocalProverConfig {
pub sdk_config: SdkConfig,
@@ -45,7 +144,11 @@ impl LocalProverConfig {
#[derive(Clone, Serialize, Deserialize)]
pub struct CircuitConfig {
pub hard_fork_name: String,
/// The path to save assets for a specified hard fork phase
pub workspace_path: String,
#[serde(flatten)]
/// The location data for dynamic loading
pub location_data: AssetsLocationData,
/// cached vk values to save some initial cost, for debugging only
#[serde(default)]
pub vks: HashMap<ProofType, String>,
@@ -56,7 +159,7 @@ pub struct LocalProver {
next_task_id: u64,
current_task: Option<JoinHandle<Result<String>>>,
handlers: HashMap<String, OnceLock<Arc<dyn CircuitsHandler>>>,
handlers: HashMap<String, Arc<dyn CircuitsHandler>>,
}
#[async_trait]
@@ -64,24 +167,15 @@ impl ProvingService for LocalProver {
fn is_local(&self) -> bool {
true
}
async fn get_vks(&self, req: GetVkRequest) -> GetVkResponse {
let mut vks = vec![];
for (hard_fork_name, cfg) in self.config.circuits.iter() {
for proof_type in &req.proof_types {
if let Some(vk) = cfg.vks.get(proof_type) {
vks.push(vk.clone())
} else {
let handler = self.get_or_init_handler(hard_fork_name);
vks.push(handler.get_vk(*proof_type).await);
}
}
async fn get_vks(&self, _: GetVkRequest) -> GetVkResponse {
// get_vk has been deprecated in the new prover with the dynamic asset loading scheme
GetVkResponse {
vks: vec![],
error: None,
}
GetVkResponse { vks, error: None }
}
async fn prove(&mut self, req: ProveRequest) -> ProveResponse {
let handler = self.get_or_init_handler(&req.hard_fork_name);
match self.do_prove(req, handler).await {
match self.do_prove(req).await {
Ok(resp) => resp,
Err(e) => ProveResponse {
status: TaskStatus::Failed,
@@ -132,34 +226,91 @@ impl ProvingService for LocalProver {
}
}
static GLOBAL_ASSET_URLS: LazyLock<HashMap<String, HashMap<String, url::Url>>> =
LazyLock::new(|| {
const ASSETS_JSON: &str = include_str!("../assets_url_preset.json");
serde_json::from_str(ASSETS_JSON).expect("Failed to parse assets_url_preset.json")
});
impl LocalProver {
pub fn new(config: LocalProverConfig) -> Self {
let handlers = config
.circuits
.keys()
.map(|k| (k.clone(), OnceLock::new()))
.collect();
pub fn new(mut config: LocalProverConfig) -> Self {
for (fork_name, circuit_config) in config.circuits.iter_mut() {
// validate each base url
circuit_config.location_data.validate().unwrap();
let mut template_url_mapping = GLOBAL_ASSET_URLS
.get(&fork_name.to_lowercase())
.cloned()
.unwrap_or_default();
// merge user-specified detours over the template defaults
for (key, url) in circuit_config.location_data.asset_detours.drain() {
template_url_mapping.insert(key, url);
}
circuit_config.location_data.asset_detours = template_url_mapping;
// validate each detours url
for url in circuit_config.location_data.asset_detours.values() {
assert!(
url.path().ends_with('/'),
"url {} must be end with /",
url.as_str()
);
}
}
Self {
config,
next_task_id: 0,
current_task: None,
handlers,
handlers: HashMap::new(),
}
}
async fn do_prove(
&mut self,
req: ProveRequest,
handler: Arc<dyn CircuitsHandler>,
) -> Result<ProveResponse> {
async fn do_prove(&mut self, req: ProveRequest) -> Result<ProveResponse> {
self.next_task_id += 1;
let duration = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();
let created_at = duration.as_secs() as f64 + duration.subsec_nanos() as f64 * 1e-9;
let req_clone = req.clone();
let prover_task = UniversalHandler::get_task_from_input(&req.input)?;
let vk = hex::encode(&prover_task.vk);
let handler = if let Some(handler) = self.handlers.get(&vk) {
handler.clone()
} else {
let base_config = self
.config
.circuits
.get(&req.hard_fork_name)
.ok_or_else(|| {
eyre::eyre!(
"coordinator sent unexpected forkname {}",
req.hard_fork_name
)
})?;
let url_base = if let Some(url) = base_config.location_data.asset_detours.get(&vk) {
url.clone()
} else {
base_config
.location_data
.gen_asset_url(&vk, req.proof_type)?
};
let asset_path = base_config
.location_data
.get_asset(&vk, &url_base, &base_config.workspace_path)
.await?;
let circuits_handler = Arc::new(Mutex::new(UniversalHandler::new(
&asset_path,
req.proof_type,
)?));
self.handlers.insert(vk, circuits_handler.clone());
circuits_handler
};
let handle = Handle::current();
let task_handle =
tokio::task::spawn_blocking(move || handle.block_on(handler.get_proof_data(req_clone)));
let is_evm = req.proof_type == ProofType::Bundle;
let task_handle = tokio::task::spawn_blocking(move || {
handle.block_on(handler.get_proof_data(&prover_task, is_evm))
});
self.current_task = Some(task_handle);
Ok(ProveResponse {
@@ -173,77 +324,4 @@ impl LocalProver {
..Default::default()
})
}
fn get_or_init_handler(&self, hard_fork_name: &str) -> Arc<dyn CircuitsHandler> {
let lk = self
.handlers
.get(hard_fork_name)
.expect("coordinator should never sent unexpected forkname");
lk.get_or_init(|| self.new_handler(hard_fork_name)).clone()
}
pub fn new_handler(&self, hard_fork_name: &str) -> Arc<dyn CircuitsHandler> {
// if we got assigned a task for an unknown hard fork, there is something wrong in the
// coordinator
let config = self.config.circuits.get(hard_fork_name).unwrap();
match hard_fork_name {
// The new EuclidV2Handler is a universal handler
// We can add other handler implements if needed
"some future forkname" => unreachable!(),
_ => Arc::new(Arc::new(Mutex::new(EuclidV2Handler::new(config))))
as Arc<dyn CircuitsHandler>,
}
}
pub fn dump_verifier_assets(&self, hard_fork_name: &str, out_path: &Path) -> Result<()> {
let config = self
.config
.circuits
.get(hard_fork_name)
.ok_or_else(|| eyre::eyre!("no corresponding config for fork {hard_fork_name}"))?;
if !config.vks.is_empty() {
eyre::bail!("clean vks cache first or we will have wrong dumped vk");
}
let workspace_path = &config.workspace_path;
let universal_prover = EuclidV2Handler::new(config);
let _ = universal_prover
.get_prover()
.dump_universal_verifier(Some(out_path))?;
#[derive(Debug, serde::Serialize)]
struct VKDump {
pub chunk_vk: String,
pub batch_vk: String,
pub bundle_vk: String,
}
let dump = VKDump {
chunk_vk: universal_prover.get_vk_and_cache(ProofType::Chunk),
batch_vk: universal_prover.get_vk_and_cache(ProofType::Batch),
bundle_vk: universal_prover.get_vk_and_cache(ProofType::Bundle),
};
let f = File::create(out_path.join("openVmVk.json"))?;
serde_json::to_writer(f, &dump)?;
// Copy verifier.bin from workspace bundle directory to output path
let bundle_verifier_path = Path::new(workspace_path)
.join("bundle")
.join("verifier.bin");
if bundle_verifier_path.exists() {
let dest_path = out_path.join("verifier.bin");
std::fs::copy(&bundle_verifier_path, &dest_path)
.map_err(|e| eyre::eyre!("Failed to copy verifier.bin: {}", e))?;
} else {
eprintln!(
"Warning: verifier.bin not found at {:?}",
bundle_verifier_path
);
}
Ok(())
}
}

View File

@@ -1,65 +1,13 @@
//pub mod euclid;
#[allow(non_snake_case)]
pub mod euclidV2;
pub mod universal;
use async_trait::async_trait;
use eyre::Result;
use scroll_proving_sdk::prover::{proving_service::ProveRequest, ProofType};
use scroll_zkvm_prover_euclid::ProverConfig;
use std::path::Path;
use scroll_zkvm_types::ProvingTask;
#[async_trait]
pub trait CircuitsHandler: Sync + Send {
async fn get_vk(&self, task_type: ProofType) -> String;
async fn get_proof_data(&self, prove_request: ProveRequest) -> Result<String>;
}
#[derive(Clone, Copy)]
pub(crate) enum Phase {
EuclidV2,
}
impl Phase {
pub fn phase_spec_chunk(&self, workspace_path: &Path) -> ProverConfig {
let dir_cache = Some(workspace_path.join("cache"));
let path_app_exe = workspace_path.join("chunk/app.vmexe");
let path_app_config = workspace_path.join("chunk/openvm.toml");
let segment_len = Some((1 << 22) - 100);
ProverConfig {
dir_cache,
path_app_config,
path_app_exe,
segment_len,
..Default::default()
}
}
pub fn phase_spec_batch(&self, workspace_path: &Path) -> ProverConfig {
let dir_cache = Some(workspace_path.join("cache"));
let path_app_exe = workspace_path.join("batch/app.vmexe");
let path_app_config = workspace_path.join("batch/openvm.toml");
let segment_len = Some((1 << 22) - 100);
ProverConfig {
dir_cache,
path_app_config,
path_app_exe,
segment_len,
..Default::default()
}
}
pub fn phase_spec_bundle(&self, workspace_path: &Path) -> ProverConfig {
let dir_cache = Some(workspace_path.join("cache"));
let path_app_config = workspace_path.join("bundle/openvm.toml");
let segment_len = Some((1 << 22) - 100);
ProverConfig {
dir_cache,
path_app_config,
segment_len,
path_app_exe: workspace_path.join("bundle/app.vmexe"),
..Default::default()
}
}
async fn get_proof_data(&self, u_task: &ProvingTask, need_snark: bool) -> Result<String>;
}

View File

@@ -1,144 +0,0 @@
use std::{path::Path, sync::Arc};
use super::CircuitsHandler;
use anyhow::{anyhow, Result};
use async_trait::async_trait;
use scroll_proving_sdk::prover::{proving_service::ProveRequest, ProofType};
use scroll_zkvm_prover_euclid::{
task::{batch::BatchProvingTask, bundle::BundleProvingTask, chunk::ChunkProvingTask},
BatchProver, BundleProverEuclidV1, ChunkProver, ProverConfig,
};
use tokio::sync::Mutex;
pub struct EuclidHandler {
chunk_prover: ChunkProver,
batch_prover: BatchProver,
bundle_prover: BundleProverEuclidV1,
}
#[derive(Clone, Copy)]
pub(crate) enum Phase {
EuclidV1,
EuclidV2,
}
impl Phase {
pub fn as_str(&self) -> &str {
match self {
Phase::EuclidV1 => "euclidv1",
Phase::EuclidV2 => "euclidv2",
}
}
pub fn phase_spec_chunk(&self, workspace_path: &Path) -> ProverConfig {
let dir_cache = Some(workspace_path.join("cache"));
let path_app_exe = workspace_path.join("chunk/app.vmexe");
let path_app_config = workspace_path.join("chunk/openvm.toml");
let segment_len = Some((1 << 22) - 100);
ProverConfig {
dir_cache,
path_app_config,
path_app_exe,
segment_len,
..Default::default()
}
}
pub fn phase_spec_batch(&self, workspace_path: &Path) -> ProverConfig {
let dir_cache = Some(workspace_path.join("cache"));
let path_app_exe = workspace_path.join("batch/app.vmexe");
let path_app_config = workspace_path.join("batch/openvm.toml");
let segment_len = Some((1 << 22) - 100);
ProverConfig {
dir_cache,
path_app_config,
path_app_exe,
segment_len,
..Default::default()
}
}
pub fn phase_spec_bundle(&self, workspace_path: &Path) -> ProverConfig {
let dir_cache = Some(workspace_path.join("cache"));
let path_app_config = workspace_path.join("bundle/openvm.toml");
let segment_len = Some((1 << 22) - 100);
match self {
Phase::EuclidV1 => ProverConfig {
dir_cache,
path_app_config,
segment_len,
path_app_exe: workspace_path.join("bundle/app_euclidv1.vmexe"),
..Default::default()
},
Phase::EuclidV2 => ProverConfig {
dir_cache,
path_app_config,
segment_len,
path_app_exe: workspace_path.join("bundle/app.vmexe"),
..Default::default()
},
}
}
}
unsafe impl Send for EuclidHandler {}
impl EuclidHandler {
pub fn new(workspace_path: &str) -> Self {
let p = Phase::EuclidV1;
let workspace_path = Path::new(workspace_path);
let chunk_prover = ChunkProver::setup(p.phase_spec_chunk(workspace_path))
.expect("Failed to setup chunk prover");
let batch_prover = BatchProver::setup(p.phase_spec_batch(workspace_path))
.expect("Failed to setup batch prover");
let bundle_prover = BundleProverEuclidV1::setup(p.phase_spec_bundle(workspace_path))
.expect("Failed to setup bundle prover");
Self {
chunk_prover,
batch_prover,
bundle_prover,
}
}
}
#[async_trait]
impl CircuitsHandler for Arc<Mutex<EuclidHandler>> {
async fn get_vk(&self, task_type: ProofType) -> Option<Vec<u8>> {
Some(match task_type {
ProofType::Chunk => self.try_lock().unwrap().chunk_prover.get_app_vk(),
ProofType::Batch => self.try_lock().unwrap().batch_prover.get_app_vk(),
ProofType::Bundle => self.try_lock().unwrap().bundle_prover.get_app_vk(),
_ => unreachable!("Unsupported proof type"),
})
}
async fn get_proof_data(&self, prove_request: ProveRequest) -> Result<String> {
match prove_request.proof_type {
ProofType::Chunk => {
let task: ChunkProvingTask = serde_json::from_str(&prove_request.input)?;
let proof = self.try_lock().unwrap().chunk_prover.gen_proof(&task)?;
Ok(serde_json::to_string(&proof)?)
}
ProofType::Batch => {
let task: BatchProvingTask = serde_json::from_str(&prove_request.input)?;
let proof = self.try_lock().unwrap().batch_prover.gen_proof(&task)?;
Ok(serde_json::to_string(&proof)?)
}
ProofType::Bundle => {
let batch_proofs: BundleProvingTask = serde_json::from_str(&prove_request.input)?;
let proof = self
.try_lock()
.unwrap()
.bundle_prover
.gen_proof_evm(&batch_proofs)?;
Ok(serde_json::to_string(&proof)?)
}
_ => Err(anyhow!("Unsupported proof type")),
}
}
}


@@ -1,119 +0,0 @@
use std::{
collections::HashMap,
path::Path,
sync::{Arc, OnceLock},
};
use super::{CircuitsHandler, Phase};
use crate::prover::CircuitConfig;
use async_trait::async_trait;
use base64::{prelude::BASE64_STANDARD, Engine};
use eyre::Result;
use scroll_proving_sdk::prover::{proving_service::ProveRequest, ProofType};
use scroll_zkvm_prover_euclid::{BatchProver, BundleProverEuclidV2, ChunkProver};
use scroll_zkvm_types::ProvingTask;
use tokio::sync::Mutex;
pub struct EuclidV2Handler {
chunk_prover: ChunkProver,
batch_prover: BatchProver,
bundle_prover: BundleProverEuclidV2,
cached_vks: HashMap<ProofType, OnceLock<String>>,
}
unsafe impl Send for EuclidV2Handler {}
impl EuclidV2Handler {
pub fn new(cfg: &CircuitConfig) -> Self {
let workspace_path = &cfg.workspace_path;
let p = Phase::EuclidV2;
let workspace_path = Path::new(workspace_path);
let chunk_prover = ChunkProver::setup(p.phase_spec_chunk(workspace_path))
.expect("Failed to setup chunk prover");
let batch_prover = BatchProver::setup(p.phase_spec_batch(workspace_path))
.expect("Failed to setup batch prover");
let bundle_prover = BundleProverEuclidV2::setup(p.phase_spec_bundle(workspace_path))
.expect("Failed to setup bundle prover");
let build_vk_cache = |proof_type: ProofType| {
let vk = if let Some(vk) = cfg.vks.get(&proof_type) {
OnceLock::from(vk.clone())
} else {
OnceLock::new()
};
(proof_type, vk)
};
Self {
chunk_prover,
batch_prover,
bundle_prover,
cached_vks: HashMap::from([
build_vk_cache(ProofType::Chunk),
build_vk_cache(ProofType::Batch),
build_vk_cache(ProofType::Bundle),
]),
}
}
/// get_prover get the inner prover, later we would replace chunk/batch/bundle_prover with
/// universal prover, before that, use bundle_prover as the represent one
pub fn get_prover(&self) -> &BundleProverEuclidV2 {
&self.bundle_prover
}
pub fn get_vk_and_cache(&self, task_type: ProofType) -> String {
match task_type {
ProofType::Chunk => self.cached_vks[&ProofType::Chunk]
.get_or_init(|| BASE64_STANDARD.encode(self.chunk_prover.get_app_vk())),
ProofType::Batch => self.cached_vks[&ProofType::Batch]
.get_or_init(|| BASE64_STANDARD.encode(self.batch_prover.get_app_vk())),
ProofType::Bundle => self.cached_vks[&ProofType::Bundle]
.get_or_init(|| BASE64_STANDARD.encode(self.bundle_prover.get_app_vk())),
_ => unreachable!("Unsupported proof type {:?}", task_type),
}
.clone()
}
}
#[async_trait]
impl CircuitsHandler for Arc<Mutex<EuclidV2Handler>> {
async fn get_vk(&self, task_type: ProofType) -> String {
self.lock().await.get_vk_and_cache(task_type)
}
async fn get_proof_data(&self, prove_request: ProveRequest) -> Result<String> {
let handler_self = self.lock().await;
let u_task: ProvingTask = serde_json::from_str(&prove_request.input)?;
let expected_vk = handler_self.get_vk_and_cache(prove_request.proof_type);
if BASE64_STANDARD.encode(&u_task.vk) != expected_vk {
eyre::bail!(
"vk is not match!, prove type {:?}, expected {}, get {}",
prove_request.proof_type,
expected_vk,
BASE64_STANDARD.encode(&u_task.vk),
);
}
let proof = match prove_request.proof_type {
ProofType::Chunk => handler_self
.chunk_prover
.gen_proof_universal(&u_task, false)?,
ProofType::Batch => handler_self
.batch_prover
.gen_proof_universal(&u_task, false)?,
ProofType::Bundle => handler_self
.bundle_prover
.gen_proof_universal(&u_task, true)?,
_ => {
return Err(eyre::eyre!(
"Unsupported proof type {:?}",
prove_request.proof_type
))
}
};
//TODO: check expected PI
Ok(serde_json::to_string(&proof)?)
}
}


@@ -0,0 +1,63 @@
use std::path::Path;
use super::CircuitsHandler;
use async_trait::async_trait;
use base64::{prelude::BASE64_STANDARD, Engine};
use eyre::Result;
use scroll_proving_sdk::prover::ProofType;
use scroll_zkvm_prover::{Prover, ProverConfig};
use scroll_zkvm_types::ProvingTask;
use tokio::sync::Mutex;
pub struct UniversalHandler {
prover: Prover,
}
unsafe impl Send for UniversalHandler {}
impl UniversalHandler {
pub fn new(workspace_path: impl AsRef<Path>, proof_type: ProofType) -> Result<Self> {
let path_app_exe = workspace_path.as_ref().join("app.vmexe");
let path_app_config = workspace_path.as_ref().join("openvm.toml");
let segment_len = Some((1 << 22) - 100);
let config = ProverConfig {
path_app_config,
path_app_exe,
segment_len,
};
let use_evm = proof_type == ProofType::Bundle;
let prover = Prover::setup(config, use_evm, None)?;
Ok(Self { prover })
}
/// get_prover returns the inner universal prover.
pub fn get_prover(&self) -> &Prover {
&self.prover
}
pub fn get_task_from_input(input: &str) -> Result<ProvingTask> {
Ok(serde_json::from_str(input)?)
}
}
#[async_trait]
impl CircuitsHandler for Mutex<UniversalHandler> {
async fn get_proof_data(&self, u_task: &ProvingTask, need_snark: bool) -> Result<String> {
let handler_self = self.lock().await;
if need_snark && handler_self.prover.evm_prover.is_none() {
eyre::bail!(
"prover was not initialized for EVM proving (vk: {})",
BASE64_STANDARD.encode(handler_self.get_prover().get_app_vk())
)
}
let proof = handler_self
.get_prover()
.gen_proof_universal(u_task, need_snark)?;
Ok(serde_json::to_string(&proof)?)
}
}


@@ -1408,13 +1408,12 @@ github.com/scroll-tech/da-codec v0.1.3-0.20250609113414-f33adf0904bd h1:NUol+dPt
github.com/scroll-tech/da-codec v0.1.3-0.20250609113414-f33adf0904bd/go.mod h1:gz5x3CsLy5htNTbv4PWRPBU9nSAujfx1U2XtFcXoFuk=
github.com/scroll-tech/da-codec v0.1.3-0.20250609154559-8935de62c148 h1:cyK1ifU2fRoMl8YWR9LOsZK4RvJnlG3RODgakj5I8VY=
github.com/scroll-tech/da-codec v0.1.3-0.20250609154559-8935de62c148/go.mod h1:gz5x3CsLy5htNTbv4PWRPBU9nSAujfx1U2XtFcXoFuk=
github.com/scroll-tech/da-codec v0.1.3-0.20250626091118-58b899494da6/go.mod h1:Z6kN5u2khPhiqHyk172kGB7o38bH/nj7Ilrb/46wZGg=
github.com/scroll-tech/go-ethereum v1.10.14-0.20240607130425-e2becce6a1a4/go.mod h1:byf/mZ8jLYUCnUePTicjJWn+RvKdxDn7buS6glTnMwQ=
github.com/scroll-tech/go-ethereum v1.10.14-0.20240821074444-b3fa00861e5e/go.mod h1:swB5NSp8pKNDuYsTxfR08bHS6L56i119PBx8fxvV8Cs=
github.com/scroll-tech/go-ethereum v1.10.14-0.20241010064814-3d88e870ae22/go.mod h1:r9FwtxCtybMkTbWYCyBuevT9TW3zHmOTHqD082Uh+Oo=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250206083728-ea43834c198f/go.mod h1:Ik3OBLl7cJxPC+CFyCBYNXBPek4wpdzkWehn/y5qLM8=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250225152658-bcfdb48dd939/go.mod h1:AgU8JJxC7+nfs7R7ma35AU7dMAGW7wCw3dRZRefIKyQ=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250729113104-bd8f141bb3e9 h1:u371VK8eOU2Z/0SVf5KDI3eJc8msHSpJbav4do/8n38=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250729113104-bd8f141bb3e9/go.mod h1:pDCZ4iGvEGmdIe4aSAGBrb7XSrKEML6/L/wEMmNxOdk=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529 h1:nn5Wsu0esKSJiIVhscUtVbo7ada43DJhG55ua/hjS5I=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/segmentio/kafka-go v0.1.0/go.mod h1:X6itGqS9L4jDletMsxZ7Dz+JFWxM6JHfPOCvTvk+EJo=


@@ -0,0 +1,22 @@
.PHONY: batch_production_submission launch_prover psql check_proving_status
export SCROLL_ZKVM_VERSION=0.4.2
PG_URL=postgres://postgres@localhost:5432/scroll
batch_production_submission:
docker compose --profile batch-production-submission up
launch_prover:
docker compose up -d
psql:
psql 'postgres://postgres@localhost:5432/scroll'
check_proving_status:
@echo "Checking proving status..."
@result=$$(psql "${PG_URL}" -t -c "SELECT proving_status = 4 AS is_status_success FROM batch ORDER BY index LIMIT 1;" | tr -d '[:space:]'); \
if [ "$$result" = "t" ]; then \
echo "✅ Prove succeeded! You're ready to submit permissionless batch and proof!"; \
else \
echo "Proof is not ready..."; \
fi


@@ -0,0 +1,172 @@
# Permissionless Batches
Permissionless batches, also known as enforced batches, are a feature that guarantees users can exit Scroll even if the operator is down or censoring.
It allows anyone to take over and submit a batch (permissionless batch submission) together with a proof after a certain time period has passed without a batch being finalized on L1.
Once permissionless batch mode is activated, the operator can no longer submit batches in a permissioned way. Only the security council can deactivate permissionless batch mode and reinstate the operator as the only batch submitter.
There are two types of situations to consider:
- `Permissionless batch mode is activated:` This means that finalization halted for some time. Now anyone can submit batches utilizing the [batch production toolkit](#batch-production-toolkit).
- `Permissionless batch mode is deactivated:` This means that the security council has decided to reinstate the operator as the only batch submitter. The operator needs to [recover](#operator-recovery) the sequencer and relayer to resume batch submission and restore the valid L2 chain.
## Batch production toolkit
The batch production toolkit is a set of tools that allow anyone to submit a batch in permissionless mode. It consists of three main components:
1. l2geth state recovery from L1
2. l2geth block production
3. production, proving and submission of batch with `docker-compose.yml`
### Prerequisites
- Unix-like OS, 32GB RAM
- Docker
- [l2geth](https://github.com/scroll-tech/go-ethereum/) or [Docker image](https://hub.docker.com/r/scrolltech/l2geth) of the corresponding [version](https://docs.scroll.io/en/technology/overview/scroll-upgrades/).
- access to an Ethereum L1 RPC node (beacon node and execution client)
- ability to run a prover
- L1 account with funds to pay for the batch submission
### 1. l2geth state recovery from L1
Once permissionless mode is activated, no blocks are being produced and propagated on L2. The first step is to recover the latest state of the L2 chain from L1. This is done by running l2geth in recovery mode.
Running l2geth in recovery mode requires the following configuration:
- `--scroll` or `--scroll-sepolia` - enables Scroll Mainnet or Sepolia mode
- `--da.blob.beaconnode` - L1 RPC beacon node
- `--l1.endpoint` - L1 RPC execution client
- `--da.sync=true` - enables syncing with L1
- `--da.recovery` - enables recovery mode
- `--da.recovery.initiall1block` - initial L1 block (commit tx of initial batch)
- `--da.recovery.initialbatch` - batch where to start recovery from. Can be found on [Scrollscan Explorer](https://scrollscan.com/batches).
- `--da.recovery.l2endblock` - until which L2 block recovery should run (optional)
```bash
./build/bin/geth --scroll<-sepolia> \
--datadir "tmp/datadir" \
--gcmode archive \
--http --http.addr "0.0.0.0" --http.port 8545 --http.api "eth,net,web3,debug,scroll" --http.vhosts "*" \
--da.blob.beaconnode "<L1 RPC beacon node>" \
--l1.endpoint "<L1 RPC execution client>" \
--da.sync=true --da.recovery --da.recovery.initiall1block "<initial L1 block (commit tx of initial batch)>" --da.recovery.initialbatch "<batch where to start recovery from>" --da.recovery.l2endblock "<until which L2 block recovery should run (optional)>" \
--verbosity 3
```
### 2. l2geth block production
After the state is recovered, the next step is to produce blocks on L2. This is done by running l2geth in block production mode.
As a prerequisite, the state recovery must be completed and the latest state of the L2 chain must be available.
You also need to generate a keystore e.g. with [Clef](https://geth.ethereum.org/docs/fundamentals/account-management) to be able to sign blocks.
This key is not used for any funds, but it is required for block production to work. Once you have generated the blocks, you can safely discard it.
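One option for creating the keystore (a sketch, not the only way; Clef works too, and `geth account new` is the equivalent command in the same geth binary per the upstream geth docs) is shown below. The command is echoed rather than executed, since it prompts interactively for a passphrase and writes the key under the datadir:

```bash
# Hypothetical sketch: create a throwaway block-signing key with the same geth binary.
# Run the echoed command interactively; it asks for a passphrase and writes the
# keystore file under tmp/datadir/keystore/.
cmd='./build/bin/geth account new --datadir "tmp/datadir"'
echo "$cmd"
```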
Running l2geth in block production mode requires the following configuration:
- `--scroll` or `--scroll-sepolia` - enables Scroll Mainnet or Sepolia mode
- `--da.blob.beaconnode` - L1 RPC beacon node
- `--l1.endpoint` - L1 RPC execution client
- `--da.sync=true` - enables syncing with L1
- `--da.recovery` - enables recovery mode
- `--da.recovery.produceblocks` - enables block production
- `--miner.etherbase '0xeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee' --mine` - enables mining. The address is not used, but required for mining to work
- `--miner.gaslimit 1 --miner.gasprice 1 --miner.maxaccountsnum 100 --rpc.gascap 0 --gpo.ignoreprice 1` - gas settings for block production
```bash
./build/bin/geth --scroll<-sepolia> \
--datadir "tmp/datadir" \
--gcmode archive \
--http --http.addr "0.0.0.0" --http.port 8545 --http.api "eth,net,web3,debug,scroll" --http.vhosts "*" \
--da.blob.beaconnode "<L1 RPC beacon node>" \
--l1.endpoint "<L1 RPC execution client>" \
--da.sync=true --da.recovery --da.recovery.produceblocks \
--miner.gaslimit 1 --miner.gasprice 1 --miner.maxaccountsnum 100 --rpc.gascap 0 --gpo.ignoreprice 1 \
--miner.etherbase '0xeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee' --mine \
--verbosity 3
```
### 3. production, proving and submission of batch with `docker-compose.yml`
After the blocks are produced, the next step is to produce a batch, prove it and submit it to L1. This is done by running the `docker-compose.yml` in the `permissionless-batches` folder.
#### Producing a batch
To produce a batch you need to run the `batch-production-submission` profile in `docker-compose.yml`.
1. Fill `conf/genesis.json` with the latest genesis state from the L2 chain. The genesis for the current fork can be found [here](https://docs.scroll.io/en/technology/overview/scroll-upgrades/).
2. Make sure that `l2geth` with your locally produced blocks is running and reachable from the Docker network (e.g. `http://host.docker.internal:8545`)
3. Fill in required fields in `conf/relayer/config.json`
Run with `make batch_production_submission`.
This will produce chunks, a batch, and a bundle, which will be proven in the next step.
`Success! You're ready to generate proofs!` indicates that everything is working correctly and the batch is ready to be proven.
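Before running `make batch_production_submission`, it is worth confirming that the l2geth from step 2 actually answers JSON-RPC at the URL you configured. A hedged sketch (the `host.docker.internal` URL is the example from point 2 above; the curl is left commented so the snippet has no side effects):

```bash
# Build the eth_blockNumber request used to probe l2geth.
payload='{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
echo "$payload"
# Probe for real (uncomment and adjust the URL to your setup):
# curl -s -H 'Content-Type: application/json' -d "$payload" http://host.docker.internal:8545
```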
#### Proving a batch
To prove the chunk, batch and bundle you just generated you need to run the `prover` profile in `docker-compose.yml`.
Local Proving:
1. Hardware requirements for the local prover: CPU: 36+ cores, 128 GB memory; GPU: 24 GB memory (e.g. RTX 3090/3090Ti/4090/A10/L4)
2. Make sure `verifier` and `high_version_circuit` in `conf/coordinator/config.json` are correct for the [latest fork](https://docs.scroll.io/en/technology/overview/scroll-upgrades/)
3. Set the `SCROLL_ZKVM_VERSION` environment variable in the `Makefile` to the correct [version](https://docs.scroll.io/en/technology/overview/scroll-upgrades/).
4. Fill in the required fields in `conf/proving-service/config.json`
Run with `make launch_prover`.
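Once the prover is running, the `check_proving_status` target in the accompanying Makefile polls the database for success. Its core is psql's tuples-only (`-t`) output, where the boolean query prints `t` once proving succeeded; a minimal sketch of that logic, with the psql result simulated:

```bash
# Sketch of the check_proving_status logic from the Makefile. "t" below simulates
# the trimmed output of:
#   psql "$PG_URL" -t -c "SELECT proving_status = 4 AS is_status_success FROM batch ORDER BY index LIMIT 1;"
result="t"
if [ "$result" = "t" ]; then
  echo "proof ready"
else
  echo "proof not ready"
fi
```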
#### Batch submission
To submit the batch you need to run the `batch-production-submission` profile in `docker-compose.yml`.
1. Fill in required fields in `conf/relayer/config.json` for the sender config.
Run with `make batch_production_submission`.
This will submit the batch to L1 and finalize it. The transaction will be retried in case of failure.
**Troubleshooting**
- In case the submission fails, the relayer prints the transaction calldata in an error message. You can replay it with `cast call --trace --rpc-url "$SCROLL_L1_DEPLOYMENT_RPC" "$L1_SCROLL_CHAIN_PROXY_ADDR" <calldata>` to see what went wrong.
- `0x4df567b9: ErrorNotInEnforcedBatchMode`: permissionless batch mode is not activated, so you cannot submit a batch.
- `0xa5d305cc: ErrorBatchIsEmpty`: no blob was provided. This is usually returned when you run the `cast call` while permissionless batch mode is activated but did not attach a blob to the transaction.
## Operator recovery
Operator recovery needs to be run by the rollup operator to resume normal rollup operation after permissionless batch mode is deactivated. It consists of two main components:
1. l2geth recovery
2. Relayer recovery
These steps are required to resume permissioned batch submission and the valid L2 chain. They will restore the entire history of the batches submitted during permissionless mode.
### Prerequisites
- l2geth with the latest state of the L2 chain (before permissionless mode was activated)
- signer key for the sequencer according to Clique consensus
- relayer and coordinator are set up, running and up-to-date with the latest state of the L2 chain (before permissionless mode was activated)
### l2geth recovery
Running l2geth in recovery mode requires the following configuration:
- `--scroll` or `--scroll-sepolia` - enables Scroll Mainnet or Sepolia mode
- `--da.blob.beaconnode` - L1 RPC beacon node
- `--l1.endpoint` - L1 RPC execution client
- `--da.sync=true` - enables syncing with L1
- `--da.recovery` - enables recovery mode
- `--da.recovery.signblocks` - enables signing blocks with the sequencer and configured key
- `--da.recovery.initiall1block` - initial L1 block (commit tx of initial batch)
- `--da.recovery.initialbatch` - batch where to start recovery from. Can be found on [Scrollscan Explorer](https://scrollscan.com/batches).
- `--da.recovery.l2endblock` - until which L2 block recovery should run (optional)
```bash
./build/bin/geth --scroll<-sepolia> \
--datadir "tmp/datadir" \
--gcmode archive \
--http --http.addr "0.0.0.0" --http.port 8545 --http.api "eth,net,web3,debug,scroll" --http.vhosts "*" \
--da.blob.beaconnode "<L1 RPC beacon node>" \
--l1.endpoint "<L1 RPC execution client>" \
--da.sync=true --da.recovery --da.recovery.signblocks --da.recovery.initiall1block "<initial L1 block (commit tx of initial batch)>" --da.recovery.initialbatch "<batch where to start recovery from>" --da.recovery.l2endblock "<until which L2 block recovery should run (optional)>" \
--verbosity 3
```
After the recovery is finished, start the sequencer in normal operation and continue issuing L2 blocks as normal. This will resume the L2 chain, allow the relayer (after running recovery) to create new batches and allow other L2 follower nodes to sync up the valid and signed L2 chain.
### Relayer recovery
Start the relayer with the following additional top-level configuration:
```
"recovery_config": {
"enable": true
}
```
This will make the relayer recover all the chunks, batches and bundles that were submitted during permissionless mode. These batches are marked automatically as proven and finalized.
Once this process is finished, start the relayer normally without the recovery config to resume normal operation.
```
"recovery_config": {
"enable": false
}
```
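Flipping the flag back can be scripted. A sketch using only python3's stdlib, demonstrated on a throwaway file (in practice, point the path at `conf/relayer/config.json`, the example path used throughout this guide):

```bash
# Throwaway config standing in for conf/relayer/config.json.
cat > /tmp/relayer_config.json <<'EOF'
{"recovery_config": {"enable": true}}
EOF
# Flip recovery_config.enable to false in place.
python3 -c 'import json; p="/tmp/relayer_config.json"; c=json.load(open(p)); c["recovery_config"]["enable"]=False; json.dump(c, open(p, "w"), indent=2)'
cat /tmp/relayer_config.json
```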


@@ -0,0 +1,30 @@
{
"prover_manager": {
"provers_per_session": 1,
"session_attempts": 100,
"chunk_collection_time_sec": 36000,
"batch_collection_time_sec": 2700,
"bundle_collection_time_sec": 2700,
"verifier": {
"high_version_circuit" : {
"fork_name": "euclid",
"assets_path": "/verifier/openvm/verifier",
"min_prover_version": "v4.5.7"
}
}
},
"db": {
"driver_name": "postgres",
"dsn": "postgres://db/scroll?sslmode=disable&user=postgres",
"maxOpenNum": 200,
"maxIdleNum": 20
},
"l2": {
"chain_id": 333333
},
"auth": {
"secret": "e788b62d39254928a821ac1c76b274a8c835aa1e20ecfb6f50eb10e87847de44",
"challenge_expire_duration_sec": 10,
"login_expire_duration_sec": 3600
}
}


@@ -0,0 +1,76 @@
#!/usr/bin/bash
apt update
apt install -y wget curl libdigest-sha-perl
# release version
if [ -z "${SCROLL_ZKVM_VERSION}" ]; then
echo "SCROLL_ZKVM_VERSION not set"
exit 1
fi
if [ -z "${HTTP_PORT}" ]; then
echo "HTTP_PORT not set"
exit 1
fi
if [ -z "${METRICS_PORT}" ]; then
echo "METRICS_PORT not set"
exit 1
fi
case $CHAIN_ID in
"5343532222") # staging network
echo "staging network not supported"
exit 1
;;
"534353") # alpha network
echo "alpha network not supported"
exit 1
;;
esac
BASE_DOWNLOAD_DIR="/verifier"
# Ensure the base directory exists
mkdir -p "$BASE_DOWNLOAD_DIR"
# Set subdirectories
OPENVM_DIR="$BASE_DOWNLOAD_DIR/openvm"
# Create necessary directories
mkdir -p "$OPENVM_DIR/verifier"
# Define URLs for OpenVM files (No checksum verification)
OPENVM_URLS=(
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/verifier/verifier.bin"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/verifier/root-verifier-vm-config"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/verifier/root-verifier-committed-exe"
)
# Download OpenVM files (no checksum verification)
for url in "${OPENVM_URLS[@]}"; do
dest_subdir="$OPENVM_DIR/$(basename $(dirname "$url"))"
mkdir -p "$dest_subdir"
filepath="$dest_subdir/$(basename "$url")"
echo "Downloading $filepath..."
curl -o "$filepath" -L "$url"
done
mkdir -p "$HOME/.openvm"
ln -s "$OPENVM_DIR/params" "$HOME/.openvm/params"
echo "All files downloaded successfully! 🎉"
mkdir -p /usr/local/bin
wget https://github.com/ethereum/solidity/releases/download/v0.8.19/solc-static-linux -O /usr/local/bin/solc
chmod +x /usr/local/bin/solc
# Start coordinator
echo "Starting coordinator api"
RUST_BACKTRACE=1 exec coordinator_api --config /coordinator/config.json \
--genesis /coordinator/genesis.json \
--http --http.addr "0.0.0.0" --http.port ${HTTP_PORT} \
--metrics --metrics.addr "0.0.0.0" --metrics.port ${METRICS_PORT} \
--log.debug


@@ -0,0 +1 @@
<fill with correct genesis.json>


@@ -0,0 +1,28 @@
{
"sdk_config": {
"prover_name_prefix": "local_prover",
"keys_dir": "/keys",
"db_path": "/db",
"coordinator": {
"base_url": "http://172.17.0.1:8556",
"retry_count": 10,
"retry_wait_time_sec": 10,
"connection_timeout_sec": 30
},
"l2geth": {
"endpoint": "<L2 RPC with generated blocks reachable from Docker network>"
},
"prover": {
"circuit_type": 2,
"supported_proof_types": [1,2,3],
"circuit_version": "v0.13.1"
},
"health_listener_addr": "0.0.0.0:89"
},
"circuits": {
"euclidV2": {
"hard_fork_name": "euclidV2",
"workspace_path": "/openvm"
}
}
}


@@ -0,0 +1,54 @@
#!/usr/bin/bash
apt update
apt install -y wget curl
# release version
if [ -z "${SCROLL_ZKVM_VERSION}" ]; then
echo "SCROLL_ZKVM_VERSION not set"
exit 1
fi
BASE_DOWNLOAD_DIR="/openvm"
# Ensure the base directory exists
mkdir -p "$BASE_DOWNLOAD_DIR"
# Define URLs for OpenVM files (No checksum verification)
OPENVM_URLS=(
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/chunk/app.vmexe"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/chunk/openvm.toml"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/batch/app.vmexe"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/batch/openvm.toml"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/bundle/app.vmexe"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/bundle/app_euclidv1.vmexe"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/bundle/openvm.toml"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/bundle/verifier.bin"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/bundle/verifier.sol"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/bundle/digest_1.hex"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/bundle/digest_2.hex"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/bundle/digest_1_euclidv1.hex"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/bundle/digest_2_euclidv1.hex"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/params/kzg_bn254_22.srs"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/params/kzg_bn254_24.srs"
)
# Download OpenVM files (no checksum verification)
for url in "${OPENVM_URLS[@]}"; do
dest_subdir="$BASE_DOWNLOAD_DIR/$(basename $(dirname "$url"))"
mkdir -p "$dest_subdir"
filepath="$dest_subdir/$(basename "$url")"
echo "Downloading $filepath..."
curl -o "$filepath" -L "$url"
done
mkdir -p "$HOME/.openvm"
ln -s "/openvm/params" "$HOME/.openvm/params"
mkdir -p /usr/local/bin
wget https://github.com/ethereum/solidity/releases/download/v0.8.19/solc-static-linux -O /usr/local/bin/solc
chmod +x /usr/local/bin/solc
mkdir -p /openvm/cache
RUST_MIN_STACK=16777216 RUST_BACKTRACE=1 exec /prover/prover --config /prover/conf/config.json


@@ -0,0 +1,48 @@
{
"l1_config": {
"endpoint": "<L1 RPC execution node>"
},
"l2_config": {
"confirmations": "0x0",
"endpoint": "<L2 RPC with generated blocks reachable from Docker network>",
"relayer_config": {
"commit_sender_signer_config": {
"signer_type": "PrivateKey",
"private_key_signer_config": {
"private_key": "<the private key of L1 address to submit permissionless batch, please fund it in advance>"
}
}
},
"chunk_proposer_config": {
"propose_interval_milliseconds": 100,
"max_l2_gas_per_chunk": 20000000,
"chunk_timeout_sec": 300,
"max_uncompressed_batch_bytes_size": 4194304
},
"batch_proposer_config": {
"propose_interval_milliseconds": 1000,
"batch_timeout_sec": 300,
"max_chunks_per_batch": 45,
"max_uncompressed_batch_bytes_size": 4194304
},
"bundle_proposer_config": {
"max_batch_num_per_bundle": 20,
"bundle_timeout_sec": 36000
}
},
"db_config": {
"driver_name": "postgres",
"dsn": "postgres://172.17.0.1:5432/scroll?sslmode=disable&user=postgres",
"maxOpenNum": 200,
"maxIdleNum": 20
},
"recovery_config": {
"enable": true,
"l1_block_height": "<commit tx of last finalized batch on L1>",
"latest_finalized_batch": "<last finalized batch on L1>",
"l2_block_height_limit": "<L2 block up to which to produce batch>",
"force_latest_finalized_batch": false,
"force_l1_message_count": 0,
"submit_without_proof": false
}
}


@@ -0,0 +1,98 @@
name: permissionless-batches
services:
relayer-batch-production:
build:
context: ../
dockerfile: build/dockerfiles/recovery_permissionless_batches.Dockerfile
network_mode: host
container_name: permissionless-batches-relayer
volumes:
- ./conf/relayer/config.json:/app/conf/config.json
- ./conf/genesis.json:/app/conf/genesis.json
command: "--config /app/conf/config.json --min-codec-version 0"
profiles:
- batch-production-submission
depends_on:
db:
condition: service_healthy
db:
image: postgres:17.0
environment:
POSTGRES_HOST_AUTH_METHOD: trust
POSTGRES_USER: postgres
POSTGRES_DB: scroll
healthcheck:
test: [ "CMD-SHELL", "pg_isready -U postgres" ]
interval: 1s
timeout: 1s
retries: 10
volumes:
- db_data:/var/lib/postgresql/data
ports:
- "5432:5432"
coordinator-api:
image: scrolltech/coordinator-api:v4.5.19
volumes:
- ./conf/coordinator/config.json:/coordinator/config.json:ro
- ./conf/genesis.json:/coordinator/genesis.json:ro
- ./conf/coordinator/coordinator_run.sh:/bin/coordinator_run.sh
entrypoint: /bin/coordinator_run.sh
profiles:
- local-prover
- cloud-prover
ports: ["8556:8555"]
environment:
- SCROLL_ZKVM_VERSION=${SCROLL_ZKVM_VERSION}
- SCROLL_PROVER_ASSETS_DIR=/verifier/assets/
- HTTP_PORT=8555
- METRICS_PORT=8390
depends_on:
db:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8555/coordinator/v1/challenge"]
interval: 1s
timeout: 1s
retries: 10
start_period: 5m
coordinator-cron:
build:
context: ../
dockerfile: build/dockerfiles/coordinator-cron.Dockerfile
volumes:
- ./conf/coordinator/config.json:/app/conf/config.json
command: "--config /app/conf/config.json --verbosity 3"
profiles:
- local-prover
- cloud-prover
depends_on:
db:
condition: service_healthy
local-prover:
image: scrolltech/cuda-prover:v4.5.12-97de9882-6ad5d8c-261b322
network_mode: host
platform: linux/amd64
runtime: nvidia
entrypoint: /bin/prover_run.sh
environment:
- SCROLL_ZKVM_VERSION=${SCROLL_ZKVM_VERSION}
- LD_LIBRARY_PATH=/prover:/usr/local/cuda/lib64
- RUST_MIN_STACK=16777216
- RUST_BACKTRACE=1
- RUST_LOG=info
volumes:
- ./conf/proving-service/config.json:/prover/conf/config.json:ro
- ./conf/proving-service/prover_run.sh:/bin/prover_run.sh
- ./conf/proving-service/db:/db
- ./conf/proving-service/keys:/keys
depends_on:
coordinator-api:
condition: service_healthy
volumes:
db_data:


@@ -1,13 +1,12 @@
package bridgeabi
import (
"fmt"
"math/big"
"testing"
"github.com/scroll-tech/go-ethereum/common"
"github.com/scroll-tech/go-ethereum/common/hexutil"
"github.com/stretchr/testify/assert"
"github.com/scroll-tech/go-ethereum/common"
)
func TestPackCommitBatches(t *testing.T) {
@@ -92,20 +91,3 @@ func TestPackSetL2BaseFee(t *testing.T) {
_, err = l2GasOracleABI.Pack("setL2BaseFee", baseFee)
assert.NoError(err)
}
func TestPrintABISignatures(t *testing.T) {
// print all error signatures of ABI
abi, err := ScrollChainMetaData.GetAbi()
if err != nil {
t.Fatal(err)
}
for _, method := range abi.Methods {
fmt.Println(hexutil.Encode(method.ID[:4]), method.Sig, method.Name)
}
fmt.Println("------------------------------")
for _, errors := range abi.Errors {
fmt.Println(hexutil.Encode(errors.ID[:4]), errors.Sig, errors.Name)
}
}


@@ -0,0 +1,144 @@
package app
import (
"context"
"fmt"
"os"
"os/signal"
"github.com/prometheus/client_golang/prometheus"
"github.com/scroll-tech/da-codec/encoding"
"github.com/urfave/cli/v2"
"github.com/scroll-tech/go-ethereum/ethclient"
"github.com/scroll-tech/go-ethereum/log"
"scroll-tech/common/database"
"scroll-tech/common/observability"
"scroll-tech/common/utils"
"scroll-tech/common/version"
"scroll-tech/rollup/internal/config"
"scroll-tech/rollup/internal/controller/permissionless_batches"
"scroll-tech/rollup/internal/controller/watcher"
)
var app *cli.App
func init() {
// Set up rollup-relayer app info.
app = cli.NewApp()
app.Action = action
app.Name = "permissionless-batches"
app.Usage = "The Scroll Rollup Relayer for permissionless batch production"
app.Version = version.Version
app.Flags = append(app.Flags, utils.CommonFlags...)
app.Flags = append(app.Flags, utils.RollupRelayerFlags...)
app.Commands = []*cli.Command{}
app.Before = func(ctx *cli.Context) error {
return utils.LogSetup(ctx)
}
}
func action(ctx *cli.Context) error {
// Load config file.
cfgFile := ctx.String(utils.ConfigFileFlag.Name)
cfg, err := config.NewConfig(cfgFile)
if err != nil {
log.Crit("failed to load config file", "config file", cfgFile, "error", err)
}
subCtx, cancel := context.WithCancel(ctx.Context)
defer cancel()
// Sanity check config. Make sure the required fields are set.
if cfg.RecoveryConfig == nil {
return fmt.Errorf("recovery config must be specified")
}
if cfg.RecoveryConfig.L1BeaconNodeEndpoint == "" {
return fmt.Errorf("L1 beacon node endpoint must be specified")
}
if cfg.RecoveryConfig.L1BlockHeight == 0 {
return fmt.Errorf("L1 block height must be specified")
}
if cfg.RecoveryConfig.LatestFinalizedBatch == 0 {
return fmt.Errorf("latest finalized batch must be specified")
}
// init db connection
db, err := database.InitDB(cfg.DBConfig)
if err != nil {
log.Crit("failed to init db connection", "err", err)
}
defer func() {
if err = database.CloseDB(db); err != nil {
log.Crit("failed to close db connection", "error", err)
}
}()
registry := prometheus.DefaultRegisterer
observability.Server(ctx, db)
genesisPath := ctx.String(utils.Genesis.Name)
genesis, err := utils.ReadGenesis(genesisPath)
if err != nil {
log.Crit("failed to read genesis", "genesis file", genesisPath, "error", err)
}
minCodecVersion := encoding.CodecVersion(ctx.Uint(utils.MinCodecVersionFlag.Name))
chunkProposer := watcher.NewChunkProposer(subCtx, cfg.L2Config.ChunkProposerConfig, minCodecVersion, genesis.Config, db, registry)
batchProposer := watcher.NewBatchProposer(subCtx, cfg.L2Config.BatchProposerConfig, minCodecVersion, genesis.Config, db, false, registry)
bundleProposer := watcher.NewBundleProposer(subCtx, cfg.L2Config.BundleProposerConfig, minCodecVersion, genesis.Config, db, registry)
// Init l2geth connection
l2client, err := ethclient.Dial(cfg.L2Config.Endpoint)
if err != nil {
return fmt.Errorf("failed to connect to L2geth at RPC=%s: %w", cfg.L2Config.Endpoint, err)
}
l2Watcher := watcher.NewL2WatcherClient(subCtx, l2client, cfg.L2Config.Confirmations, cfg.L2Config.L2MessageQueueAddress, cfg.L2Config.WithdrawTrieRootSlot, genesis.Config, db, registry)
recovery := permissionless_batches.NewRecovery(subCtx, cfg, genesis, db, chunkProposer, batchProposer, bundleProposer, l2Watcher)
if recovery.RecoveryNeeded() {
if err = recovery.Run(); err != nil {
return fmt.Errorf("failed to run recovery: %w", err)
}
log.Info("Success! You're ready to generate proofs!")
} else {
log.Info("No recovery needed, submitting batch and proof to L1...")
submitter, err := permissionless_batches.NewSubmitter(subCtx, db, cfg.L2Config.RelayerConfig, genesis.Config)
if err != nil {
return fmt.Errorf("failed to create submitter: %w", err)
}
if err = submitter.Submit(!cfg.RecoveryConfig.SubmitWithoutProof); err != nil {
return fmt.Errorf("failed to submit batch: %w", err)
}
log.Info("Transaction submitted to L1, waiting for confirmation...")
// Catch CTRL-C to ensure a graceful shutdown.
interrupt := make(chan os.Signal, 1)
signal.Notify(interrupt, os.Interrupt)
select {
case <-subCtx.Done():
case confirmation := <-submitter.Sender().ConfirmChan():
if confirmation.IsSuccessful {
log.Info("Transaction confirmed on L1, your permissionless batch is part of the ledger!", "tx hash", confirmation.TxHash)
}
case <-interrupt:
log.Info("CTRL-C received, shutting down...")
}
}
return nil
}
// Run the permissionless-batches cmd instance.
func Run() {
if err := app.Run(os.Args); err != nil {
_, _ = fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
}

View File

@@ -0,0 +1,7 @@
package main
import "scroll-tech/rollup/cmd/permissionless_batches/app"
func main() {
app.Run()
}

View File

@@ -11,6 +11,7 @@ import (
"github.com/scroll-tech/da-codec/encoding"
"github.com/scroll-tech/go-ethereum/ethclient"
"github.com/scroll-tech/go-ethereum/log"
"github.com/scroll-tech/go-ethereum/rollup/l1"
"github.com/urfave/cli/v2"
"scroll-tech/common/database"
@@ -112,6 +113,32 @@ func action(ctx *cli.Context) error {
l2watcher := watcher.NewL2WatcherClient(subCtx, l2client, cfg.L2Config.Confirmations, cfg.L2Config.L2MessageQueueAddress, cfg.L2Config.WithdrawTrieRootSlot, genesis.Config, db, registry)
if cfg.RecoveryConfig != nil && cfg.RecoveryConfig.Enable {
log.Info("Starting rollup-relayer in recovery mode", "version", version.Version)
l1Client, err := ethclient.Dial(cfg.L1Config.Endpoint)
if err != nil {
return fmt.Errorf("failed to connect to L1 client: %w", err)
}
reader, err := l1.NewReader(context.Background(), l1.Config{
ScrollChainAddress: genesis.Config.Scroll.L1Config.ScrollChainAddress,
L1MessageQueueAddress: genesis.Config.Scroll.L1Config.L1MessageQueueAddress,
}, l1Client)
if err != nil {
return fmt.Errorf("failed to create L1 reader: %w", err)
}
fullRecovery, err := relayer.NewFullRecovery(subCtx, cfg, genesis, db, chunkProposer, batchProposer, bundleProposer, l2watcher, l1Client, reader)
if err != nil {
return fmt.Errorf("failed to create full recovery: %w", err)
}
if err = fullRecovery.RestoreFullPreviousState(); err != nil {
log.Crit("failed to restore full previous state", "error", err)
}
return nil
}
// Watcher loop to fetch missing blocks
go utils.LoopWithContext(subCtx, 2*time.Second, func(ctx context.Context) {
number, loopErr := rutils.GetLatestConfirmedBlockNumber(ctx, l2client, cfg.L2Config.Confirmations)
@@ -119,7 +146,8 @@ func action(ctx *cli.Context) error {
log.Error("failed to get block number", "err", loopErr)
return
}
l2watcher.TryFetchRunningMissingBlocks(number)
// errors are logged in the try method as well
_ = l2watcher.TryFetchRunningMissingBlocks(number)
})
go utils.Loop(subCtx, time.Duration(cfg.L2Config.ChunkProposerConfig.ProposeIntervalMilliseconds)*time.Millisecond, chunkProposer.TryProposeChunk)

View File

@@ -92,7 +92,6 @@
},
"chunk_proposer_config": {
"propose_interval_milliseconds": 100,
"max_block_num_per_chunk": 100,
"max_l2_gas_per_chunk": 20000000,
"chunk_timeout_sec": 300,
"max_uncompressed_batch_bytes_size": 4194304

View File

@@ -15,7 +15,7 @@ require (
github.com/holiman/uint256 v1.3.2
github.com/mitchellh/mapstructure v1.5.0
github.com/prometheus/client_golang v1.16.0
github.com/scroll-tech/da-codec v0.1.3-0.20250626091118-58b899494da6
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178
github.com/scroll-tech/go-ethereum v1.10.14-0.20250626110859-cc9a1dd82de7
github.com/smartystreets/goconvey v1.8.0
github.com/spf13/viper v1.19.0

View File

@@ -285,8 +285,8 @@ github.com/sagikazarmark/locafero v0.4.0 h1:HApY1R9zGo4DBgr7dqsTH/JJxLTTsOt7u6ke
github.com/sagikazarmark/locafero v0.4.0/go.mod h1:Pe1W6UlPYUk/+wc/6KFhbORCfqzgYEpgQ3O5fPuL3H4=
github.com/sagikazarmark/slog-shim v0.1.0 h1:diDBnUNK9N/354PgrxMywXnAwEr1QZcOr6gto+ugjYE=
github.com/sagikazarmark/slog-shim v0.1.0/go.mod h1:SrcSrq8aKtyuqEI1uvTDTK1arOWRIczQRv+GVI1AkeQ=
github.com/scroll-tech/da-codec v0.1.3-0.20250626091118-58b899494da6 h1:vb2XLvQwCf+F/ifP6P/lfeiQrHY6+Yb/E3R4KHXLqSE=
github.com/scroll-tech/da-codec v0.1.3-0.20250626091118-58b899494da6/go.mod h1:Z6kN5u2khPhiqHyk172kGB7o38bH/nj7Ilrb/46wZGg=
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178 h1:4utngmJHXSOS5FoSdZhEV1xMRirpArbXvyoCZY9nYj0=
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178/go.mod h1:Z6kN5u2khPhiqHyk172kGB7o38bH/nj7Ilrb/46wZGg=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250626110859-cc9a1dd82de7 h1:1rN1qocsQlOyk1VCpIEF1J5pfQbLAi1pnMZSLQS37jQ=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250626110859-cc9a1dd82de7/go.mod h1:pDCZ4iGvEGmdIe4aSAGBrb7XSrKEML6/L/wEMmNxOdk=
github.com/scroll-tech/zktrie v0.8.4 h1:UagmnZ4Z3ITCk+aUq9NQZJNAwnWl4gSxsLb2Nl7IgRE=

View File

@@ -18,9 +18,10 @@ import (
// Config load configuration items.
type Config struct {
L1Config *L1Config `json:"l1_config"`
L2Config *L2Config `json:"l2_config"`
DBConfig *database.Config `json:"db_config"`
L1Config *L1Config `json:"l1_config"`
L2Config *L2Config `json:"l2_config"`
DBConfig *database.Config `json:"db_config"`
RecoveryConfig *RecoveryConfig `json:"recovery_config"`
}
type ConfigForReplay struct {

View File

@@ -8,4 +8,7 @@ type L1Config struct {
StartHeight uint64 `json:"start_height"`
// The relayer config
RelayerConfig *RelayerConfig `json:"relayer_config"`
// The L1 beacon node endpoint
BeaconNodeEndpoint string `json:"beacon_node_endpoint"`
}

View File

@@ -31,7 +31,6 @@ type L2Config struct {
// ChunkProposerConfig loads chunk_proposer configuration items.
type ChunkProposerConfig struct {
ProposeIntervalMilliseconds uint64 `json:"propose_interval_milliseconds"`
MaxBlockNumPerChunk uint64 `json:"max_block_num_per_chunk"`
MaxL2GasPerChunk uint64 `json:"max_l2_gas_per_chunk"`
ChunkTimeoutSec uint64 `json:"chunk_timeout_sec"`
MaxUncompressedBatchBytesSize uint64 `json:"max_uncompressed_batch_bytes_size"`

View File

@@ -0,0 +1,14 @@
package config
type RecoveryConfig struct {
Enable bool `json:"enable"`
L1BeaconNodeEndpoint string `json:"l1_beacon_node_endpoint"` // the L1 beacon node endpoint to connect to
L1BlockHeight uint64 `json:"l1_block_height"` // the L1 block height to start recovery from
LatestFinalizedBatch uint64 `json:"latest_finalized_batch"` // the latest finalized batch number
L2BlockHeightLimit uint64 `json:"l2_block_height_limit"` // L2 block up to which to produce batch
ForceLatestFinalizedBatch bool `json:"force_latest_finalized_batch"` // whether to force usage of the latest finalized batch - mainly used for testing
ForceL1MessageCount uint64 `json:"force_l1_message_count"` // force the number of L1 messages, useful for testing
SubmitWithoutProof bool `json:"submit_without_proof"` // whether to submit batches without proof, useful for testing
}
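
For orientation, a `recovery_config` block matching these JSON tags could look like the following sketch (all values are placeholders, not recommendations):

```json
{
  "recovery_config": {
    "enable": true,
    "l1_beacon_node_endpoint": "http://localhost:5052",
    "l1_block_height": 20000000,
    "latest_finalized_batch": 1234,
    "l2_block_height_limit": 0,
    "force_latest_finalized_batch": false,
    "force_l1_message_count": 0,
    "submit_without_proof": false
  }
}
```

With `l2_block_height_limit` set to 0, recovery runs up to the latest available L2 block.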

View File

@@ -0,0 +1,458 @@
package permissionless_batches
import (
"context"
"fmt"
"github.com/scroll-tech/da-codec/encoding"
"gorm.io/gorm"
"github.com/scroll-tech/go-ethereum/common"
"github.com/scroll-tech/go-ethereum/core"
"github.com/scroll-tech/go-ethereum/ethclient"
"github.com/scroll-tech/go-ethereum/log"
"github.com/scroll-tech/go-ethereum/rollup/da_syncer/blob_client"
"github.com/scroll-tech/go-ethereum/rollup/l1"
"scroll-tech/common/types"
"scroll-tech/database/migrate"
"scroll-tech/rollup/internal/config"
"scroll-tech/rollup/internal/controller/watcher"
"scroll-tech/rollup/internal/orm"
)
const (
// defaultFakeRestoredChunkIndex is the default index of the last restored fake chunk. It allows the proposer to generate new chunks as if earlier chunks had already been processed.
defaultFakeRestoredChunkIndex uint64 = 1337
// defaultFakeRestoredBundleIndex is the default index of the last restored fake bundle. It allows the proposer to generate new bundles as if earlier bundles had already been processed.
defaultFakeRestoredBundleIndex uint64 = 1
)
type MinimalRecovery struct {
ctx context.Context
cfg *config.Config
genesis *core.Genesis
db *gorm.DB
chunkORM *orm.Chunk
batchORM *orm.Batch
bundleORM *orm.Bundle
chunkProposer *watcher.ChunkProposer
batchProposer *watcher.BatchProposer
bundleProposer *watcher.BundleProposer
l2Watcher *watcher.L2WatcherClient
}
func NewRecovery(ctx context.Context, cfg *config.Config, genesis *core.Genesis, db *gorm.DB, chunkProposer *watcher.ChunkProposer, batchProposer *watcher.BatchProposer, bundleProposer *watcher.BundleProposer, l2Watcher *watcher.L2WatcherClient) *MinimalRecovery {
return &MinimalRecovery{
ctx: ctx,
cfg: cfg,
genesis: genesis,
db: db,
chunkORM: orm.NewChunk(db),
batchORM: orm.NewBatch(db),
bundleORM: orm.NewBundle(db),
chunkProposer: chunkProposer,
batchProposer: batchProposer,
bundleProposer: bundleProposer,
l2Watcher: l2Watcher,
}
}
func (r *MinimalRecovery) RecoveryNeeded() bool {
chunk, err := r.chunkORM.GetLatestChunk(r.ctx)
if err != nil || chunk == nil {
return true
}
if chunk.Index <= defaultFakeRestoredChunkIndex {
return true
}
batch, err := r.batchORM.GetLatestBatch(r.ctx)
if err != nil {
return true
}
if batch.Index <= r.cfg.RecoveryConfig.LatestFinalizedBatch {
return true
}
bundle, err := r.bundleORM.GetLatestBundle(r.ctx)
if err != nil {
return true
}
if bundle.Index <= defaultFakeRestoredBundleIndex {
return true
}
return false
}
func (r *MinimalRecovery) Run() error {
// Make sure we start from a clean state.
if err := r.resetDB(); err != nil {
return fmt.Errorf("failed to reset DB: %w", err)
}
// Restore minimal previous state required to be able to create new chunks, batches and bundles.
restoredFinalizedChunk, restoredFinalizedBatch, restoredFinalizedBundle, err := r.restoreMinimalPreviousState()
if err != nil {
return fmt.Errorf("failed to restore minimal previous state: %w", err)
}
// Fetch and insert the missing blocks from the last block in the latestFinalizedBatch to the latest L2 block.
fromBlock := restoredFinalizedChunk.EndBlockNumber
toBlock, err := r.fetchL2Blocks(fromBlock, r.cfg.RecoveryConfig.L2BlockHeightLimit)
if err != nil {
return fmt.Errorf("failed to fetch L2 blocks: %w", err)
}
// Create chunks for L2 blocks.
log.Info("Creating chunks for L2 blocks", "from", fromBlock, "to", toBlock)
var latestChunk *orm.Chunk
var count int
for {
if err = r.chunkProposer.ProposeChunk(); err != nil {
return fmt.Errorf("failed to propose chunk: %w", err)
}
count++
latestChunk, err = r.chunkORM.GetLatestChunk(r.ctx)
if err != nil {
return fmt.Errorf("failed to get latest chunk: %w", err)
}
log.Info("Chunk created", "index", latestChunk.Index, "hash", latestChunk.Hash, "StartBlockNumber", latestChunk.StartBlockNumber, "EndBlockNumber", latestChunk.EndBlockNumber, "TotalL1MessagesPoppedBefore", latestChunk.TotalL1MessagesPoppedBefore)
// We have created chunks for all available L2 blocks.
if latestChunk.EndBlockNumber >= toBlock {
break
}
}
log.Info("Chunks created", "count", count, "latest Chunk", latestChunk.Index, "hash", latestChunk.Hash, "StartBlockNumber", latestChunk.StartBlockNumber, "EndBlockNumber", latestChunk.EndBlockNumber, "TotalL1MessagesPoppedBefore", latestChunk.TotalL1MessagesPoppedBefore, "PrevL1MessageQueueHash", latestChunk.PrevL1MessageQueueHash, "PostL1MessageQueueHash", latestChunk.PostL1MessageQueueHash)
// Create a batch for the created chunks. We only allow 1 batch, since it needs to be submitted (and finalized) with a proof in a single step.
log.Info("Creating batch for chunks", "from", restoredFinalizedChunk.Index+1, "to", latestChunk.Index)
r.batchProposer.TryProposeBatch()
latestBatch, err := r.batchORM.GetLatestBatch(r.ctx)
if err != nil {
return fmt.Errorf("failed to get latest batch: %w", err)
}
// Sanity check that the batch was created correctly:
// 1. should be a new batch
// 2. should contain all chunks created
if restoredFinalizedBatch.Index+1 != latestBatch.Index {
return fmt.Errorf("batch was not created correctly, expected %d but got %d", restoredFinalizedBatch.Index+1, latestBatch.Index)
}
firstChunkInBatch, err := r.chunkORM.GetChunkByIndex(r.ctx, latestBatch.StartChunkIndex)
if err != nil {
return fmt.Errorf("failed to get first chunk in batch: %w", err)
}
lastChunkInBatch, err := r.chunkORM.GetChunkByIndex(r.ctx, latestBatch.EndChunkIndex)
if err != nil {
return fmt.Errorf("failed to get last chunk in batch: %w", err)
}
// Make sure that the batch contains all previously created chunks and thus all blocks. If not, the user will need to
// produce another batch (by running the application again) starting from the end block of the last chunk in the batch + 1.
if latestBatch.EndChunkIndex != latestChunk.Index {
log.Warn("Produced batch does not contain all chunks and blocks. You'll need to produce another batch starting from end block+1.", "starting block", firstChunkInBatch.StartBlockNumber, "end block", lastChunkInBatch.EndBlockNumber, "latest block", latestChunk.EndBlockNumber)
}
log.Info("Batch created", "index", latestBatch.Index, "hash", latestBatch.Hash, "StartChunkIndex", latestBatch.StartChunkIndex, "EndChunkIndex", latestBatch.EndChunkIndex, "starting block", firstChunkInBatch.StartBlockNumber, "ending block", lastChunkInBatch.EndBlockNumber, "PrevL1MessageQueueHash", latestBatch.PrevL1MessageQueueHash, "PostL1MessageQueueHash", latestBatch.PostL1MessageQueueHash)
if err = r.bundleProposer.UpdateDBBundleInfo([]*orm.Batch{latestBatch}, encoding.CodecVersion(latestBatch.CodecVersion)); err != nil {
return fmt.Errorf("failed to create bundle: %w", err)
}
latestBundle, err := r.bundleORM.GetLatestBundle(r.ctx)
if err != nil {
return fmt.Errorf("failed to get latest bundle: %w", err)
}
// Sanity check that the bundle was created correctly:
// 1. should be a new bundle
// 2. should only contain 1 batch, the one we created
if restoredFinalizedBundle.Index == latestBundle.Index {
return fmt.Errorf("bundle was not created correctly")
}
if latestBundle.StartBatchIndex != latestBatch.Index || latestBundle.EndBatchIndex != latestBatch.Index {
return fmt.Errorf("bundle does not contain the correct batch: %d != %d", latestBundle.StartBatchIndex, latestBatch.Index)
}
log.Info("Bundle created", "index", latestBundle.Index, "hash", latestBundle.Hash, "StartBatchIndex", latestBundle.StartBatchIndex, "EndBatchIndex", latestBundle.EndBatchIndex, "starting block", firstChunkInBatch.StartBlockNumber, "ending block", lastChunkInBatch.EndBlockNumber)
return nil
}
// restoreMinimalPreviousState restores the minimal previous state required to be able to create new chunks, batches and bundles.
func (r *MinimalRecovery) restoreMinimalPreviousState() (*orm.Chunk, *orm.Batch, *orm.Bundle, error) {
log.Info("Restoring previous state with", "L1 block height", r.cfg.RecoveryConfig.L1BlockHeight, "latest finalized batch", r.cfg.RecoveryConfig.LatestFinalizedBatch)
l1Client, err := ethclient.Dial(r.cfg.L1Config.Endpoint)
if err != nil {
return nil, nil, nil, fmt.Errorf("failed to connect to L1 client: %w", err)
}
reader, err := l1.NewReader(r.ctx, l1.Config{
ScrollChainAddress: r.genesis.Config.Scroll.L1Config.ScrollChainAddress,
L1MessageQueueAddress: r.genesis.Config.Scroll.L1Config.L1MessageQueueV2Address,
}, l1Client)
if err != nil {
return nil, nil, nil, fmt.Errorf("failed to create L1 reader: %w", err)
}
// 1. Sanity check user input: Make sure that the user's L1 block height is not higher than the latest finalized block number.
latestFinalizedL1Block, err := reader.GetLatestFinalizedBlockNumber()
if err != nil {
return nil, nil, nil, fmt.Errorf("failed to get latest finalized L1 block number: %w", err)
}
if r.cfg.RecoveryConfig.L1BlockHeight > latestFinalizedL1Block {
return nil, nil, nil, fmt.Errorf("specified L1 block height is higher than the latest finalized block number: %d > %d", r.cfg.RecoveryConfig.L1BlockHeight, latestFinalizedL1Block)
}
log.Info("Latest finalized L1 block number", "latest finalized L1 block", latestFinalizedL1Block)
// 2. Make sure that the specified batch is indeed finalized on the L1 rollup contract and is the latest finalized batch.
var latestFinalizedBatchIndex uint64
if r.cfg.RecoveryConfig.ForceLatestFinalizedBatch {
latestFinalizedBatchIndex = r.cfg.RecoveryConfig.LatestFinalizedBatch
} else {
latestFinalizedBatchIndex, err = reader.LatestFinalizedBatchIndex(latestFinalizedL1Block)
if err != nil {
return nil, nil, nil, fmt.Errorf("failed to get latest finalized batch: %w", err)
}
if r.cfg.RecoveryConfig.LatestFinalizedBatch != latestFinalizedBatchIndex {
return nil, nil, nil, fmt.Errorf("batch %d is not the latest finalized batch: %d", r.cfg.RecoveryConfig.LatestFinalizedBatch, latestFinalizedBatchIndex)
}
}
// Find the commit event for the latest finalized batch.
var batchCommitEvent *l1.CommitBatchEvent
err = reader.FetchRollupEventsInRangeWithCallback(r.cfg.RecoveryConfig.L1BlockHeight, latestFinalizedL1Block, func(event l1.RollupEvent) bool {
if event.Type() == l1.CommitEventType && event.BatchIndex().Uint64() == latestFinalizedBatchIndex {
batchCommitEvent = event.(*l1.CommitBatchEvent)
// We found the commit event for the batch, stop searching.
return false
}
// Continue until we find the commit event for the batch.
return true
})
if batchCommitEvent == nil {
return nil, nil, nil, fmt.Errorf("commit event not found for batch %d", latestFinalizedBatchIndex)
}
log.Info("Found commit event for batch", "batch", batchCommitEvent.BatchIndex(), "hash", batchCommitEvent.BatchHash(), "L1 block height", batchCommitEvent.BlockNumber(), "L1 tx hash", batchCommitEvent.TxHash())
// 3. Fetch commit tx data for latest finalized batch and decode it.
daBatch, daBlobPayload, err := r.decodeLatestFinalizedBatch(reader, batchCommitEvent)
if err != nil {
return nil, nil, nil, fmt.Errorf("failed to decode latest finalized batch: %w", err)
}
blocksInBatch := daBlobPayload.Blocks()
if len(blocksInBatch) == 0 {
return nil, nil, nil, fmt.Errorf("no blocks in batch %d", batchCommitEvent.BatchIndex())
}
lastBlockInBatch := blocksInBatch[len(blocksInBatch)-1]
log.Info("Last L2 block in batch", "batch", batchCommitEvent.BatchIndex(), "L2 block", lastBlockInBatch, "PostL1MessageQueueHash", daBlobPayload.PostL1MessageQueueHash())
// 4. Get the L1 messages count and state root after the latest finalized batch.
var l1MessagesCount uint64
if r.cfg.RecoveryConfig.ForceL1MessageCount == 0 {
l1MessagesCount, err = reader.NextUnfinalizedL1MessageQueueIndex(latestFinalizedL1Block)
if err != nil {
return nil, nil, nil, fmt.Errorf("failed to get L1 messages count: %w", err)
}
} else {
l1MessagesCount = r.cfg.RecoveryConfig.ForceL1MessageCount
}
log.Info("L1 messages count after latest finalized batch", "batch", batchCommitEvent.BatchIndex(), "count", l1MessagesCount)
stateRoot, err := reader.GetFinalizedStateRootByBatchIndex(latestFinalizedL1Block, latestFinalizedBatchIndex)
if err != nil {
return nil, nil, nil, fmt.Errorf("failed to get state root: %w", err)
}
log.Info("State root after latest finalized batch", "batch", batchCommitEvent.BatchIndex(), "stateRoot", stateRoot.Hex())
// 5. Insert minimal state to DB.
chunk, err := r.chunkORM.InsertPermissionlessChunk(r.ctx, defaultFakeRestoredChunkIndex, daBatch.Version(), daBlobPayload, l1MessagesCount, stateRoot)
if err != nil {
return nil, nil, nil, fmt.Errorf("failed to insert chunk raw: %w", err)
}
log.Info("Inserted last finalized chunk to DB", "chunk", chunk.Index, "hash", chunk.Hash, "StartBlockNumber", chunk.StartBlockNumber, "EndBlockNumber", chunk.EndBlockNumber, "TotalL1MessagesPoppedBefore", chunk.TotalL1MessagesPoppedBefore)
batch, err := r.batchORM.InsertPermissionlessBatch(r.ctx, batchCommitEvent.BatchIndex(), batchCommitEvent.BatchHash(), daBatch.Version(), chunk)
if err != nil {
return nil, nil, nil, fmt.Errorf("failed to insert batch raw: %w", err)
}
log.Info("Inserted last finalized batch to DB", "batch", batch.Index, "hash", batch.Hash)
var bundle *orm.Bundle
err = r.db.Transaction(func(dbTX *gorm.DB) error {
bundle, err = r.bundleORM.InsertBundle(r.ctx, []*orm.Batch{batch}, encoding.CodecVersion(batch.CodecVersion), dbTX)
if err != nil {
return fmt.Errorf("failed to insert bundle: %w", err)
}
if err = r.bundleORM.UpdateProvingStatus(r.ctx, bundle.Hash, types.ProvingTaskVerified, dbTX); err != nil {
return fmt.Errorf("failed to update proving status: %w", err)
}
if err = r.bundleORM.UpdateRollupStatus(r.ctx, bundle.Hash, types.RollupFinalized, dbTX); err != nil {
return fmt.Errorf("failed to update rollup status: %w", err)
}
log.Info("Inserted last finalized bundle to DB", "bundle", bundle.Index, "hash", bundle.Hash, "StartBatchIndex", bundle.StartBatchIndex, "EndBatchIndex", bundle.EndBatchIndex)
return nil
})
if err != nil {
return nil, nil, nil, fmt.Errorf("failed to insert bundle: %w", err)
}
return chunk, batch, bundle, nil
}
func (r *MinimalRecovery) decodeLatestFinalizedBatch(reader *l1.Reader, event *l1.CommitBatchEvent) (encoding.DABatch, encoding.DABlobPayload, error) {
blockHeader, err := reader.FetchBlockHeaderByNumber(event.BlockNumber())
if err != nil {
return nil, nil, fmt.Errorf("failed to get header by number, err: %w", err)
}
args, err := reader.FetchCommitTxData(event)
if err != nil {
return nil, nil, fmt.Errorf("failed to fetch commit tx data: %w", err)
}
codecVersion := encoding.CodecVersion(args.Version)
if codecVersion < encoding.CodecV7 {
return nil, nil, fmt.Errorf("codec version %d is not supported", codecVersion)
}
codec, err := encoding.CodecFromVersion(codecVersion)
if err != nil {
return nil, nil, fmt.Errorf("failed to get codec: %w", err)
}
// Since the contracts only store the last batch hash committed in a single tx, we can only ever finalize the
// last batch of a tx. This means we can assume that the batch referenced by the event is the last batch
// committed in that tx.
if event.BatchIndex().Uint64()+1 < uint64(len(args.BlobHashes)) {
return nil, nil, fmt.Errorf("batch index %d+1 is lower than the number of blobs %d", event.BatchIndex().Uint64(), len(args.BlobHashes))
}
firstBatchIndex := event.BatchIndex().Uint64() + 1 - uint64(len(args.BlobHashes))
var targetBatch encoding.DABatch
var targetBlobVersionedHash common.Hash
parentBatchHash := args.ParentBatchHash
for i, blobVersionedHash := range args.BlobHashes {
batchIndex := firstBatchIndex + uint64(i)
calculatedBatch, err := codec.NewDABatchFromParams(batchIndex, blobVersionedHash, parentBatchHash)
if err != nil {
return nil, nil, fmt.Errorf("failed to create new DA batch from params, batch index: %d, err: %w", event.BatchIndex().Uint64(), err)
}
parentBatchHash = calculatedBatch.Hash()
if batchIndex == event.BatchIndex().Uint64() {
if calculatedBatch.Hash() != event.BatchHash() {
return nil, nil, fmt.Errorf("batch hash mismatch for batch %d, expected: %s, got: %s", event.BatchIndex(), event.BatchHash().String(), calculatedBatch.Hash().String())
}
// We found the batch we are looking for, break out of the loop.
targetBatch = calculatedBatch
targetBlobVersionedHash = blobVersionedHash
break
}
}
if targetBatch == nil {
return nil, nil, fmt.Errorf("target batch with index %d could not be found and decoded", event.BatchIndex())
}
// sanity check that this is indeed the last batch in the tx
if targetBatch.Hash() != args.LastBatchHash {
return nil, nil, fmt.Errorf("last batch hash mismatch for batch %d, expected: %s, got: %s", event.BatchIndex(), args.LastBatchHash.String(), targetBatch.Hash().String())
}
// TODO: add support for multiple blob clients
blobClient := blob_client.NewBlobClients()
if r.cfg.RecoveryConfig.L1BeaconNodeEndpoint != "" {
client, err := blob_client.NewBeaconNodeClient(r.cfg.RecoveryConfig.L1BeaconNodeEndpoint)
if err != nil {
return nil, nil, fmt.Errorf("failed to create beacon node client: %w", err)
}
blobClient.AddBlobClient(client)
}
log.Info("Fetching blob by versioned hash and block time", "TargetBlobVersionedHash", targetBlobVersionedHash, "BlockTime", blockHeader.Time, "BlockNumber", blockHeader.Number)
blob, err := blobClient.GetBlobByVersionedHashAndBlockTime(r.ctx, targetBlobVersionedHash, blockHeader.Time)
if err != nil {
return nil, nil, fmt.Errorf("failed to get blob by versioned hash and block time for batch %d: %w", event.BatchIndex(), err)
}
daBlobPayload, err := codec.DecodeBlob(blob)
if err != nil {
return nil, nil, fmt.Errorf("failed to decode blob for batch %d: %w", event.BatchIndex(), err)
}
return targetBatch, daBlobPayload, nil
}
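
The index arithmetic and parent-hash chaining above can be illustrated in isolation. This is a simplified sketch with a stand-in hash function (sha256 instead of the da-codec batch hashing); `batchHash` and `findBatch` are hypothetical names, not part of the codebase:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// batchHash is a stand-in for codec.NewDABatchFromParams(...).Hash(): it mixes the
// batch index, the blob versioned hash, and the parent batch hash. The real hashing
// lives in scroll-tech/da-codec; this only illustrates the chaining.
func batchHash(index uint64, blobHash, parent [32]byte) [32]byte {
	var idx [8]byte
	binary.BigEndian.PutUint64(idx[:], index)
	buf := append(idx[:], blobHash[:]...)
	buf = append(buf, parent[:]...)
	return sha256.Sum256(buf)
}

// findBatch walks a commit tx's blobs in order, recomputing each batch hash from its
// parent, and returns the hash of the target batch index if it lies within the tx.
func findBatch(target, eventIndex uint64, blobs [][32]byte, parent [32]byte) ([32]byte, bool) {
	// The event references the LAST batch in the tx, so the first one is:
	first := eventIndex + 1 - uint64(len(blobs))
	for i, blob := range blobs {
		h := batchHash(first+uint64(i), blob, parent)
		if first+uint64(i) == target {
			return h, true
		}
		parent = h // each batch chains off the previous one
	}
	return [32]byte{}, false
}

func main() {
	blobs := [][32]byte{{1}, {2}, {3}}
	// A commit event for batch 9 with 3 blobs implies batches 7, 8, 9 were committed.
	h, ok := findBatch(9, 9, blobs, [32]byte{})
	fmt.Println(ok, h[0])
}
```

The same walk also yields each intermediate parent hash, which is why the loop in `decodeLatestFinalizedBatch` can verify the event's batch hash as a side effect.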
func (r *MinimalRecovery) fetchL2Blocks(fromBlock uint64, l2BlockHeightLimit uint64) (uint64, error) {
if l2BlockHeightLimit > 0 && fromBlock > l2BlockHeightLimit {
return 0, fmt.Errorf("fromBlock (latest finalized L2 block) is higher than specified L2BlockHeightLimit: %d > %d", fromBlock, l2BlockHeightLimit)
}
log.Info("Fetching L2 blocks with", "fromBlock", fromBlock, "l2BlockHeightLimit", l2BlockHeightLimit)
// Fetch and insert the missing blocks from the last block in the batch to the latest L2 block.
latestL2Block, err := r.l2Watcher.Client.BlockNumber(r.ctx)
if err != nil {
return 0, fmt.Errorf("failed to get latest L2 block number: %w", err)
}
log.Info("Latest L2 block number", "latest L2 block", latestL2Block)
if l2BlockHeightLimit > latestL2Block {
return 0, fmt.Errorf("l2BlockHeightLimit is higher than the latest L2 block number, not all blocks are available in L2geth: %d > %d", l2BlockHeightLimit, latestL2Block)
}
toBlock := latestL2Block
if l2BlockHeightLimit > 0 {
toBlock = l2BlockHeightLimit
}
err = r.l2Watcher.GetAndStoreBlocks(r.ctx, fromBlock, toBlock)
if err != nil {
return 0, fmt.Errorf("failed to get and store blocks: %w", err)
}
log.Info("Fetched L2 blocks from", "fromBlock", fromBlock, "toBlock", toBlock)
return toBlock, nil
}
func (r *MinimalRecovery) resetDB() error {
sqlDB, err := r.db.DB()
if err != nil {
return fmt.Errorf("failed to get db connection: %w", err)
}
if err = migrate.ResetDB(sqlDB); err != nil {
return fmt.Errorf("failed to reset db: %w", err)
}
return nil
}

View File

@@ -0,0 +1,265 @@
package permissionless_batches
import (
"context"
"errors"
"fmt"
"math/big"
"github.com/prometheus/client_golang/prometheus"
"github.com/scroll-tech/da-codec/encoding"
"gorm.io/gorm"
"github.com/scroll-tech/go-ethereum/accounts/abi"
"github.com/scroll-tech/go-ethereum/common"
"github.com/scroll-tech/go-ethereum/crypto/kzg4844"
"github.com/scroll-tech/go-ethereum/log"
"github.com/scroll-tech/go-ethereum/params"
"github.com/scroll-tech/go-ethereum/rpc"
"scroll-tech/common/types"
"scroll-tech/common/types/message"
bridgeAbi "scroll-tech/rollup/abi"
"scroll-tech/rollup/internal/config"
"scroll-tech/rollup/internal/controller/sender"
"scroll-tech/rollup/internal/orm"
)
type Submitter struct {
ctx context.Context
db *gorm.DB
l2BlockOrm *orm.L2Block
chunkOrm *orm.Chunk
batchOrm *orm.Batch
bundleOrm *orm.Bundle
cfg *config.RelayerConfig
finalizeSender *sender.Sender
l1RollupABI *abi.ABI
chainCfg *params.ChainConfig
}
func NewSubmitter(ctx context.Context, db *gorm.DB, cfg *config.RelayerConfig, chainCfg *params.ChainConfig) (*Submitter, error) {
registry := prometheus.DefaultRegisterer
finalizeSender, err := sender.NewSender(ctx, cfg.SenderConfig, cfg.FinalizeSenderSignerConfig, "permissionless_batches_submitter", "finalize_sender", types.SenderTypeFinalizeBatch, db, registry)
if err != nil {
return nil, fmt.Errorf("new finalize sender failed, err: %w", err)
}
return &Submitter{
ctx: ctx,
db: db,
l2BlockOrm: orm.NewL2Block(db),
chunkOrm: orm.NewChunk(db),
batchOrm: orm.NewBatch(db),
bundleOrm: orm.NewBundle(db),
cfg: cfg,
finalizeSender: finalizeSender,
l1RollupABI: bridgeAbi.ScrollChainABI,
chainCfg: chainCfg,
}, nil
}
func (s *Submitter) Sender() *sender.Sender {
return s.finalizeSender
}
func (s *Submitter) Submit(withProof bool) error {
// Check if the bundle is already finalized
bundle, err := s.bundleOrm.GetLatestBundle(s.ctx)
if err != nil {
return fmt.Errorf("error loading latest bundle: %w", err)
}
if bundle.Index != defaultFakeRestoredBundleIndex+1 {
return fmt.Errorf("unexpected bundle index %d with hash %s, expected %d", bundle.Index, bundle.Hash, defaultFakeRestoredBundleIndex+1)
}
if types.RollupStatus(bundle.RollupStatus) == types.RollupFinalized {
return fmt.Errorf("bundle %d %s is already finalized. nothing to do", bundle.Index, bundle.Hash)
}
if bundle.StartBatchIndex != bundle.EndBatchIndex {
return fmt.Errorf("bundle %d %s has unexpected batch indices (should only contain a single batch): start %d, end %d", bundle.Index, bundle.Hash, bundle.StartBatchIndex, bundle.EndBatchIndex)
}
if bundle.StartBatchHash != bundle.EndBatchHash {
return fmt.Errorf("bundle %d %s has unexpected batch hashes (should only contain a single batch): start %s, end %s", bundle.Index, bundle.Hash, bundle.StartBatchHash, bundle.EndBatchHash)
}
batch, err := s.batchOrm.GetBatchByIndex(s.ctx, bundle.StartBatchIndex)
if err != nil {
return fmt.Errorf("failed to load batch %d: %w", bundle.StartBatchIndex, err)
}
if batch == nil {
return fmt.Errorf("batch %d not found", bundle.StartBatchIndex)
}
if batch.Hash != bundle.StartBatchHash {
return fmt.Errorf("bundle %d %s has unexpected batch hash: %s", bundle.Index, bundle.Hash, batch.Hash)
}
log.Info("submitting batch", "index", batch.Index, "hash", batch.Hash)
endChunk, err := s.chunkOrm.GetChunkByIndex(s.ctx, batch.EndChunkIndex)
if err != nil || endChunk == nil {
return fmt.Errorf("failed to get end chunk with index %d of batch: %w", batch.EndChunkIndex, err)
}
var aggProof *message.OpenVMBundleProof
if withProof {
firstChunk, err := s.chunkOrm.GetChunkByIndex(s.ctx, batch.StartChunkIndex)
if err != nil || firstChunk == nil {
return fmt.Errorf("failed to get first chunk %d of batch: %w", batch.StartChunkIndex, err)
}
aggProof, err = s.bundleOrm.GetVerifiedProofByHash(s.ctx, bundle.Hash)
if err != nil {
return fmt.Errorf("failed to get verified proof of bundle %d, hash: %s, err: %w", bundle.Index, bundle.Hash, err)
}
if err = aggProof.SanityCheck(); err != nil {
return fmt.Errorf("failed to check agg_proof sanity, index: %d, err: %w", bundle.Index, err)
}
}
var calldata []byte
var blob *kzg4844.Blob
switch encoding.CodecVersion(bundle.CodecVersion) {
case encoding.CodecV7:
calldata, blob, err = s.constructCommitAndFinalizeCalldataAndBlob(batch, endChunk, aggProof)
if err != nil {
return fmt.Errorf("failed to construct CommitAndFinalize calldata and blob, bundle index: %v, batch index: %v, err: %w", bundle.Index, batch.Index, err)
}
default:
return fmt.Errorf("unsupported codec version in commitAndFinalize, bundle index: %v, version: %d", bundle.Index, bundle.CodecVersion)
}
txHash, _, err := s.finalizeSender.SendTransaction("commitAndFinalize-"+bundle.Hash, &s.cfg.RollupContractAddress, calldata, []*kzg4844.Blob{blob})
if err != nil {
log.Error("commitAndFinalize in layer1 failed", "with proof", withProof, "index", bundle.Index,
"batch index", bundle.StartBatchIndex,
"RollupContractAddress", s.cfg.RollupContractAddress, "err", err, "calldata", common.Bytes2Hex(calldata))
var rpcError rpc.DataError
if errors.As(err, &rpcError) {
log.Error("rpc.DataError ", "error", rpcError.Error(), "message", rpcError.ErrorData())
}
return fmt.Errorf("commitAndFinalize failed, bundle index: %d, err: %w", bundle.Index, err)
}
log.Info("commitAndFinalize in layer1", "with proof", withProof, "batch index", bundle.StartBatchIndex, "tx hash", txHash.String())
// Update the rollup status in the database.
err = s.db.Transaction(func(dbTX *gorm.DB) error {
if err = s.batchOrm.UpdateFinalizeTxHashAndRollupStatusByBundleHash(s.ctx, bundle.Hash, txHash.String(), types.RollupFinalizing, dbTX); err != nil {
log.Warn("UpdateFinalizeTxHashAndRollupStatusByBundleHash failed", "index", bundle.Index, "bundle hash", bundle.Hash, "tx hash", txHash.String(), "err", err)
return err
}
if err = s.bundleOrm.UpdateFinalizeTxHashAndRollupStatus(s.ctx, bundle.Hash, txHash.String(), types.RollupFinalizing, dbTX); err != nil {
log.Warn("UpdateFinalizeTxHashAndRollupStatus failed", "index", bundle.Index, "bundle hash", bundle.Hash, "tx hash", txHash.String(), "err", err)
return err
}
return nil
})
if err != nil {
log.Warn("failed to update rollup status of bundle and batches", "err", err)
return err
}
// Update the proving status when finalizing without proof, so that the coordinator can skip these tasks.
// This isn't a necessary step, so it is not put in the same transaction as UpdateFinalizeTxHashAndRollupStatus.
if !withProof {
txErr := s.db.Transaction(func(dbTX *gorm.DB) error {
if updateErr := s.bundleOrm.UpdateProvingStatus(s.ctx, bundle.Hash, types.ProvingTaskVerified, dbTX); updateErr != nil {
return updateErr
}
if updateErr := s.batchOrm.UpdateProvingStatusByBundleHash(s.ctx, bundle.Hash, types.ProvingTaskVerified, dbTX); updateErr != nil {
return updateErr
}
for batchIndex := bundle.StartBatchIndex; batchIndex <= bundle.EndBatchIndex; batchIndex++ {
tmpBatch, getErr := s.batchOrm.GetBatchByIndex(s.ctx, batchIndex)
if getErr != nil {
return getErr
}
if updateErr := s.chunkOrm.UpdateProvingStatusByBatchHash(s.ctx, tmpBatch.Hash, types.ProvingTaskVerified, dbTX); updateErr != nil {
return updateErr
}
}
return nil
})
if txErr != nil {
log.Error("Updating chunk and batch proving status when finalizing without proof failure", "bundleHash", bundle.Hash, "err", txErr)
}
}
return nil
}
func (s *Submitter) constructCommitAndFinalizeCalldataAndBlob(batch *orm.Batch, endChunk *orm.Chunk, aggProof *message.OpenVMBundleProof) ([]byte, *kzg4844.Blob, error) {
// Create the FinalizeStruct tuple as an abi-compatible struct
finalizeStruct := struct {
BatchHeader []byte
TotalL1MessagesPoppedOverall *big.Int
PostStateRoot common.Hash
WithdrawRoot common.Hash
ZkProof []byte
}{
BatchHeader: batch.BatchHeader,
TotalL1MessagesPoppedOverall: new(big.Int).SetUint64(endChunk.TotalL1MessagesPoppedBefore + endChunk.TotalL1MessagesPoppedInChunk),
PostStateRoot: common.HexToHash(batch.StateRoot),
WithdrawRoot: common.HexToHash(batch.WithdrawRoot),
}
if aggProof != nil {
finalizeStruct.ZkProof = aggProof.Proof()
}
calldata, err := s.l1RollupABI.Pack("commitAndFinalizeBatch", uint8(batch.CodecVersion), common.HexToHash(batch.ParentBatchHash), finalizeStruct)
if err != nil {
return nil, nil, fmt.Errorf("failed to pack commitAndFinalizeBatch: %w", err)
}
chunks, err := s.chunkOrm.GetChunksInRange(s.ctx, batch.StartChunkIndex, batch.EndChunkIndex)
if err != nil {
return nil, nil, fmt.Errorf("failed to get chunks in range for batch %d: %w", batch.Index, err)
}
if chunks[len(chunks)-1].Index != batch.EndChunkIndex {
return nil, nil, fmt.Errorf("unexpected last chunk index %d, expected %d", chunks[len(chunks)-1].Index, batch.EndChunkIndex)
}
var batchBlocks []*encoding.Block
for _, c := range chunks {
blocks, err := s.l2BlockOrm.GetL2BlocksInRange(s.ctx, c.StartBlockNumber, c.EndBlockNumber)
if err != nil {
return nil, nil, fmt.Errorf("failed to get blocks in range for batch %d: %w", batch.Index, err)
}
batchBlocks = append(batchBlocks, blocks...)
}
encodingBatch := &encoding.Batch{
Index: batch.Index,
ParentBatchHash: common.HexToHash(batch.ParentBatchHash),
PrevL1MessageQueueHash: common.HexToHash(batch.PrevL1MessageQueueHash),
PostL1MessageQueueHash: common.HexToHash(batch.PostL1MessageQueueHash),
Blocks: batchBlocks,
}
codec, err := encoding.CodecFromVersion(encoding.CodecVersion(batch.CodecVersion))
if err != nil {
return nil, nil, fmt.Errorf("failed to get codec from version %d, err: %w", batch.CodecVersion, err)
}
daBatch, err := codec.NewDABatch(encodingBatch)
if err != nil {
return nil, nil, fmt.Errorf("failed to create DA batch: %w", err)
}
return calldata, daBatch.Blob(), nil
}

@@ -0,0 +1,476 @@
package relayer
import (
"context"
"fmt"
"github.com/scroll-tech/da-codec/encoding"
"github.com/scroll-tech/go-ethereum/common"
"github.com/scroll-tech/go-ethereum/core"
"github.com/scroll-tech/go-ethereum/ethclient"
"github.com/scroll-tech/go-ethereum/log"
"github.com/scroll-tech/go-ethereum/rollup/da_syncer/blob_client"
"github.com/scroll-tech/go-ethereum/rollup/l1"
"gorm.io/gorm"
"scroll-tech/common/types"
"scroll-tech/rollup/internal/config"
"scroll-tech/rollup/internal/controller/watcher"
"scroll-tech/rollup/internal/orm"
butils "scroll-tech/rollup/internal/utils"
)
type FullRecovery struct {
ctx context.Context
cfg *config.Config
genesis *core.Genesis
db *gorm.DB
blockORM *orm.L2Block
chunkORM *orm.Chunk
batchORM *orm.Batch
bundleORM *orm.Bundle
chunkProposer *watcher.ChunkProposer
batchProposer *watcher.BatchProposer
bundleProposer *watcher.BundleProposer
l2Watcher *watcher.L2WatcherClient
l1Client *ethclient.Client
l1Reader *l1.Reader
beaconNodeClient *blob_client.BeaconNodeClient
}
func NewFullRecovery(ctx context.Context, cfg *config.Config, genesis *core.Genesis, db *gorm.DB, chunkProposer *watcher.ChunkProposer, batchProposer *watcher.BatchProposer, bundleProposer *watcher.BundleProposer, l2Watcher *watcher.L2WatcherClient, l1Client *ethclient.Client, l1Reader *l1.Reader) (*FullRecovery, error) {
beaconNodeClient, err := blob_client.NewBeaconNodeClient(cfg.L1Config.BeaconNodeEndpoint)
if err != nil {
return nil, fmt.Errorf("failed to create blob client: %w", err)
}
return &FullRecovery{
ctx: ctx,
cfg: cfg,
genesis: genesis,
db: db,
blockORM: orm.NewL2Block(db),
chunkORM: orm.NewChunk(db),
batchORM: orm.NewBatch(db),
bundleORM: orm.NewBundle(db),
chunkProposer: chunkProposer,
batchProposer: batchProposer,
bundleProposer: bundleProposer,
l2Watcher: l2Watcher,
l1Client: l1Client,
l1Reader: l1Reader,
beaconNodeClient: beaconNodeClient,
}, nil
}
// RestoreFullPreviousState restores the full state from L1.
// The DB state should be clean: the latest batch in the DB should be finalized on L1. This function will
// restore all batches between the latest finalized batch in the DB and the latest finalized batch on L1.
func (f *FullRecovery) RestoreFullPreviousState() error {
log.Info("Restoring full previous state")
// 1. Get latest finalized batch stored in DB
latestDBBatch, err := f.batchORM.GetLatestBatch(f.ctx)
if err != nil {
return fmt.Errorf("failed to get latest batch from DB: %w", err)
}
log.Info("Latest finalized batch in DB", "batch", latestDBBatch.Index, "hash", latestDBBatch.Hash)
// 2. Get latest finalized L1 block
latestFinalizedL1Block, err := f.l1Reader.GetLatestFinalizedBlockNumber()
if err != nil {
return fmt.Errorf("failed to get latest finalized L1 block number: %w", err)
}
log.Info("Latest finalized L1 block number", "latest finalized L1 block", latestFinalizedL1Block)
// 3. Get latest finalized batch from contract (at latest finalized L1 block)
latestFinalizedBatchContract, err := f.l1Reader.LatestFinalizedBatchIndex(latestFinalizedL1Block)
if err != nil {
return fmt.Errorf("failed to get latest finalized batch: %w", err)
}
log.Info("Latest finalized batch from L1 contract", "latest finalized batch", latestFinalizedBatchContract, "at latest finalized L1 block", latestFinalizedL1Block)
// 4. Fetch rollup events from the latest batch stored in the DB up to the latest finalized batch on L1.
var fromBlock uint64
if latestDBBatch.Index > 0 {
receipt, err := f.l1Client.TransactionReceipt(f.ctx, common.HexToHash(latestDBBatch.CommitTxHash))
if err != nil {
return fmt.Errorf("failed to get transaction receipt of latest DB batch commit transaction: %w", err)
}
fromBlock = receipt.BlockNumber.Uint64()
} else {
fromBlock = f.cfg.L1Config.StartHeight
}
log.Info("Fetching rollup events from L1", "from block", fromBlock, "to block", latestFinalizedL1Block, "from batch", latestDBBatch.Index, "to batch", latestFinalizedBatchContract)
commitsHeapMap := common.NewHeapMap[uint64, *l1.CommitBatchEvent](func(event *l1.CommitBatchEvent) uint64 {
return event.BatchIndex().Uint64()
})
batchEventsHeap := common.NewHeap[*batchEvents]()
var bundles [][]*batchEvents
err = f.l1Reader.FetchRollupEventsInRangeWithCallback(fromBlock, latestFinalizedL1Block, func(event l1.RollupEvent) bool {
// We're only interested in batches that are newer than the latest finalized batch in the DB.
if event.BatchIndex().Uint64() <= latestDBBatch.Index {
return true
}
switch event.Type() {
case l1.CommitEventType:
commitEvent := event.(*l1.CommitBatchEvent)
commitsHeapMap.Push(commitEvent)
case l1.FinalizeEventType:
finalizeEvent := event.(*l1.FinalizeBatchEvent)
var bundle []*batchEvents
// With bundles, all committed batches up to and including this finalized batch are finalized in the same bundle.
for commitsHeapMap.Len() > 0 {
commitEvent := commitsHeapMap.Peek()
if commitEvent.BatchIndex().Uint64() > finalizeEvent.BatchIndex().Uint64() {
break
}
bEvents := newBatchEvents(commitEvent, finalizeEvent)
commitsHeapMap.Pop()
batchEventsHeap.Push(bEvents)
bundle = append(bundle, bEvents)
}
bundles = append(bundles, bundle)
// Stop fetching rollup events if we reached the latest finalized batch.
if finalizeEvent.BatchIndex().Uint64() >= latestFinalizedBatchContract {
return false
}
case l1.RevertEventV0Type:
// We ignore reverted batches.
commitsHeapMap.RemoveByKey(event.BatchIndex().Uint64())
case l1.RevertEventV7Type:
// We ignore reverted batches.
revertBatch, ok := event.(*l1.RevertBatchEventV7)
if !ok {
log.Error(fmt.Sprintf("unexpected type of revert event: %T, expected RevertEventV7Type", event))
return false
}
// delete all batches from revertBatch.StartBatchIndex (inclusive) to revertBatch.FinishBatchIndex (inclusive)
for i := revertBatch.StartBatchIndex().Uint64(); i <= revertBatch.FinishBatchIndex().Uint64(); i++ {
commitsHeapMap.RemoveByKey(i)
}
}
return true
})
if err != nil {
return fmt.Errorf("failed to fetch rollup events: %w", err)
}
// 5. Process all finalized batches: fetch L2 blocks and reproduce chunks and batches.
var batches []*batchEvents
for batchEventsHeap.Len() > 0 {
nextBatch := batchEventsHeap.Pop().Value()
batches = append(batches, nextBatch)
}
if err = f.processFinalizedBatches(batches); err != nil {
return fmt.Errorf("failed to process finalized batches: %w", err)
}
// 6. Create bundles if needed.
for _, bundle := range bundles {
var dbBatches []*orm.Batch
var lastBatchInBundle *orm.Batch
for _, batch := range bundle {
dbBatch, err := f.batchORM.GetBatchByIndex(f.ctx, batch.commit.BatchIndex().Uint64())
if err != nil {
return fmt.Errorf("failed to get batch by index for bundle generation: %w", err)
}
// Bundles are only supported for codec version 3 and above.
if encoding.CodecVersion(dbBatch.CodecVersion) < encoding.CodecV3 {
break
}
dbBatches = append(dbBatches, dbBatch)
lastBatchInBundle = dbBatch
}
if len(dbBatches) == 0 {
continue
}
err = f.db.Transaction(func(dbTX *gorm.DB) error {
newBundle, err := f.bundleORM.InsertBundle(f.ctx, dbBatches, encoding.CodecVersion(lastBatchInBundle.CodecVersion), dbTX)
if err != nil {
return fmt.Errorf("failed to insert bundle to DB: %w", err)
}
if err = f.batchORM.UpdateBundleHashInRange(f.ctx, newBundle.StartBatchIndex, newBundle.EndBatchIndex, newBundle.Hash, dbTX); err != nil {
return fmt.Errorf("failed to update bundle_hash %s for batches (%d to %d): %w", newBundle.Hash, newBundle.StartBatchIndex, newBundle.EndBatchIndex, err)
}
if err = f.bundleORM.UpdateFinalizeTxHashAndRollupStatus(f.ctx, newBundle.Hash, lastBatchInBundle.FinalizeTxHash, types.RollupFinalized, dbTX); err != nil {
return fmt.Errorf("failed to update finalize tx hash and rollup status for bundle %s: %w", newBundle.Hash, err)
}
if err = f.bundleORM.UpdateProvingStatus(f.ctx, newBundle.Hash, types.ProvingTaskVerified, dbTX); err != nil {
return fmt.Errorf("failed to update proving status for bundle %s: %w", newBundle.Hash, err)
}
log.Info("Inserted bundle", "hash", newBundle.Hash, "start batch index", newBundle.StartBatchIndex, "end batch index", newBundle.EndBatchIndex)
return nil
})
if err != nil {
return fmt.Errorf("failed to insert bundle in DB transaction: %w", err)
}
}
return nil
}
func (f *FullRecovery) processFinalizedBatches(batches []*batchEvents) error {
if len(batches) == 0 {
return fmt.Errorf("no finalized batches to process")
}
firstBatch := batches[0]
lastBatch := batches[len(batches)-1]
log.Info("Processing finalized batches", "first batch", firstBatch.commit.BatchIndex(), "hash", firstBatch.commit.BatchHash(), "last batch", lastBatch.commit.BatchIndex(), "hash", lastBatch.commit.BatchHash())
// Since CodecV7, a single commit transaction can carry multiple blobs and therefore emit multiple
// CommitBatch events, where each CommitBatch event corresponds to a blob containing block range data.
// To correctly process these events, we need to:
// 1. Parse the associated blob data to extract the block range for each event.
// 2. Track the parent batch hash for each processed CommitBatch event, to:
//    - Validate the batch hash, since the parent batch hash is needed to calculate the batch hash.
//    - Derive the index of the current batch from the number of parent batch hashes tracked so far.
// In commitBatches and commitAndFinalizeBatch, the parent batch hash is passed in calldata,
// so we can use it to get the first batch's parent batch hash and derive the rest.
// The index map serves this purpose with:
// Key: commit transaction hash
// Value: batch hashes (in order) of each processed CommitBatch event in the transaction
txBlobIndexMap := make(map[common.Hash][]common.Hash)
for _, b := range batches {
args, err := f.l1Reader.FetchCommitTxData(b.commit)
if err != nil {
return fmt.Errorf("failed to fetch commit tx data of batch %d, tx hash: %v, err: %w", b.commit.BatchIndex().Uint64(), b.commit.TxHash().Hex(), err)
}
// all batches we process here will be >= CodecV7 since that is the minimum codec version for permissionless batches
if args.Version < 7 {
return fmt.Errorf("unsupported codec version: %v, batch index: %v, tx hash: %s", args.Version, b.commit.BatchIndex().Uint64(), b.commit.TxHash().Hex())
}
codec, err := encoding.CodecFromVersion(encoding.CodecVersion(args.Version))
if err != nil {
return fmt.Errorf("unsupported codec version: %v, err: %w", args.Version, err)
}
// We append the batch hash to the slice for the current commit transaction after processing the batch.
// That means the current index of the batch within the transaction is len(txBlobIndexMap[b.commit.TxHash()]).
currentIndex := len(txBlobIndexMap[b.commit.TxHash()])
if currentIndex >= len(args.BlobHashes) {
return fmt.Errorf("commit transaction %s has %d blobs, but trying to access index %d (batch index %d)",
b.commit.TxHash(), len(args.BlobHashes), currentIndex, b.commit.BatchIndex().Uint64())
}
blobVersionedHash := args.BlobHashes[currentIndex]
// validate the batch hash
var parentBatchHash common.Hash
if currentIndex == 0 {
parentBatchHash = args.ParentBatchHash
} else {
// here we need to subtract 1 from the current index to get the parent batch hash.
parentBatchHash = txBlobIndexMap[b.commit.TxHash()][currentIndex-1]
}
calculatedBatch, err := codec.NewDABatchFromParams(b.commit.BatchIndex().Uint64(), blobVersionedHash, parentBatchHash)
if err != nil {
return fmt.Errorf("failed to create new DA batch from params, batch index: %d, err: %w", b.commit.BatchIndex().Uint64(), err)
}
if calculatedBatch.Hash() != b.commit.BatchHash() {
return fmt.Errorf("batch hash mismatch for batch %d, expected: %s, got: %s", b.commit.BatchIndex(), b.commit.BatchHash().String(), calculatedBatch.Hash().String())
}
txBlobIndexMap[b.commit.TxHash()] = append(txBlobIndexMap[b.commit.TxHash()], b.commit.BatchHash())
if err = f.insertBatchIntoDB(b, codec, blobVersionedHash); err != nil {
return fmt.Errorf("failed to insert batch into DB, batch index: %d, err: %w", b.commit.BatchIndex().Uint64(), err)
}
log.Info("Processed batch", "index", b.commit.BatchIndex(), "hash", b.commit.BatchHash(), "commit tx hash", b.commit.TxHash().Hex(), "finalize tx hash", b.finalize.TxHash().Hex(), "blob versioned hash", blobVersionedHash.String(), "parent batch hash", parentBatchHash.String())
}
return nil
}
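The parent-hash bookkeeping above can be isolated: within one multi-blob commit transaction, the first batch's parent comes from calldata and each subsequent batch chains off the previously derived batch hash. A sketch of that chain, where `batchHash` is a made-up sha256 stand-in for the real `codec.NewDABatchFromParams(...).Hash()` defined by the da-codec batch header format:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// batchHash is a hypothetical stand-in for the codec's batch hash:
// here simply sha256 over (index, blob versioned hash, parent hash).
func batchHash(index uint64, blobHash, parentHash [32]byte) [32]byte {
	h := sha256.New()
	fmt.Fprintf(h, "%d", index)
	h.Write(blobHash[:])
	h.Write(parentHash[:])
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

// deriveBatchHashes mirrors the txBlobIndexMap bookkeeping for a single
// commit transaction: the first batch's parent comes from calldata, and
// each later batch's parent is the previously derived batch hash.
func deriveBatchHashes(firstIndex uint64, calldataParent [32]byte, blobHashes [][32]byte) [][32]byte {
	var derived [][32]byte
	parent := calldataParent
	for i, blob := range blobHashes {
		h := batchHash(firstIndex+uint64(i), blob, parent)
		derived = append(derived, h)
		parent = h // the next batch chains off this one
	}
	return derived
}

func main() {
	blobs := [][32]byte{{0x01}, {0x02}, {0x03}}
	hashes := deriveBatchHashes(10, [32]byte{0xaa}, blobs)
	fmt.Println(len(hashes)) // one derived hash per blob
}
```

In the real flow, each derived hash is additionally compared against the CommitBatch event's batch hash before being appended to the per-transaction slice.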
func (f *FullRecovery) insertBatchIntoDB(batch *batchEvents, codec encoding.Codec, blobVersionedHash common.Hash) error {
// 5.1 Fetch block time.
blockHeader, err := f.l1Reader.FetchBlockHeaderByNumber(batch.commit.BlockNumber())
if err != nil {
return fmt.Errorf("failed to fetch block header by number %d: %w", batch.commit.BlockNumber(), err)
}
// 5.2 Fetch blob data for batch.
daBlocks, err := f.getBatchBlockRangeFromBlob(codec, blobVersionedHash, blockHeader.Time)
if err != nil {
return fmt.Errorf("failed to get batch block range from blob %s: %w", blobVersionedHash.Hex(), err)
}
lastBlock := daBlocks[len(daBlocks)-1]
// 5.3. Fetch L2 blocks for the entire batch.
if err = f.l2Watcher.TryFetchRunningMissingBlocks(lastBlock.Number()); err != nil {
return fmt.Errorf("failed to fetch L2 blocks: %w", err)
}
// 5.4. Reproduce chunk. Since we don't know the internals of a batch we just create 1 chunk per batch.
start := daBlocks[0].Number()
end := lastBlock.Number()
// get last chunk from DB
lastChunk, err := f.chunkORM.GetLatestChunk(f.ctx)
if err != nil {
return fmt.Errorf("failed to get latest chunk from DB: %w", err)
}
blocks, err := f.blockORM.GetL2BlocksInRange(f.ctx, start, end)
if err != nil {
return fmt.Errorf("failed to get L2 blocks in range: %w", err)
}
log.Info("Reproducing chunk", "start block", start, "end block", end)
var chunk encoding.Chunk
chunk.Blocks = blocks
chunk.PrevL1MessageQueueHash = common.HexToHash(lastChunk.PostL1MessageQueueHash)
chunk.PostL1MessageQueueHash, err = encoding.MessageQueueV2ApplyL1MessagesFromBlocks(chunk.PrevL1MessageQueueHash, blocks)
if err != nil {
return fmt.Errorf("failed to apply L1 messages from blocks: %w", err)
}
metrics, err := butils.CalculateChunkMetrics(&chunk, codec.Version())
if err != nil {
return fmt.Errorf("failed to calculate chunk metrics: %w", err)
}
var dbChunk *orm.Chunk
err = f.db.Transaction(func(dbTX *gorm.DB) error {
dbChunk, err = f.chunkORM.InsertChunk(f.ctx, &chunk, codec.Version(), *metrics, dbTX)
if err != nil {
return fmt.Errorf("failed to insert chunk to DB: %w", err)
}
if err := f.blockORM.UpdateChunkHashInRange(f.ctx, dbChunk.StartBlockNumber, dbChunk.EndBlockNumber, dbChunk.Hash, dbTX); err != nil {
return fmt.Errorf("failed to update chunk_hash for l2_blocks (chunk hash: %s, start block: %d, end block: %d): %w", dbChunk.Hash, dbChunk.StartBlockNumber, dbChunk.EndBlockNumber, err)
}
if err = f.chunkORM.UpdateProvingStatus(f.ctx, dbChunk.Hash, types.ProvingTaskVerified, dbTX); err != nil {
return fmt.Errorf("failed to update proving status for chunk %s: %w", dbChunk.Hash, err)
}
log.Info("Inserted chunk", "index", dbChunk.Index, "hash", dbChunk.Hash, "start block", dbChunk.StartBlockNumber, "end block", dbChunk.EndBlockNumber)
return nil
})
if err != nil {
return fmt.Errorf("failed to insert chunk in DB transaction: %w", err)
}
// 5.5. Reproduce batch.
dbParentBatch, err := f.batchORM.GetLatestBatch(f.ctx)
if err != nil || dbParentBatch == nil {
return fmt.Errorf("failed to get latest batch from DB: %w", err)
}
var encBatch encoding.Batch
encBatch.Index = dbParentBatch.Index + 1
encBatch.ParentBatchHash = common.HexToHash(dbParentBatch.Hash)
encBatch.TotalL1MessagePoppedBefore = dbChunk.TotalL1MessagesPoppedBefore
encBatch.PrevL1MessageQueueHash = chunk.PrevL1MessageQueueHash
encBatch.PostL1MessageQueueHash = chunk.PostL1MessageQueueHash
encBatch.Chunks = []*encoding.Chunk{&chunk}
encBatch.Blocks = blocks
batchMetrics, err := butils.CalculateBatchMetrics(&encBatch, codec.Version(), false)
if err != nil {
return fmt.Errorf("failed to calculate batch metrics: %w", err)
}
err = f.db.Transaction(func(dbTX *gorm.DB) error {
dbBatch, err := f.batchORM.InsertBatch(f.ctx, &encBatch, codec.Version(), *batchMetrics, dbTX)
if err != nil {
return fmt.Errorf("failed to insert batch to DB: %w", err)
}
if err = f.chunkORM.UpdateBatchHashInRange(f.ctx, dbBatch.StartChunkIndex, dbBatch.EndChunkIndex, dbBatch.Hash, dbTX); err != nil {
return fmt.Errorf("failed to update batch_hash for chunks (batch hash: %s, start chunk: %d, end chunk: %d): %w", dbBatch.Hash, dbBatch.StartChunkIndex, dbBatch.EndChunkIndex, err)
}
if err = f.batchORM.UpdateProvingStatus(f.ctx, dbBatch.Hash, types.ProvingTaskVerified, dbTX); err != nil {
return fmt.Errorf("failed to update proving status for batch %s: %w", dbBatch.Hash, err)
}
if err = f.batchORM.UpdateRollupStatusCommitAndFinalizeTxHash(f.ctx, dbBatch.Hash, types.RollupFinalized, batch.commit.TxHash().Hex(), batch.finalize.TxHash().Hex(), dbTX); err != nil {
return fmt.Errorf("failed to update rollup status for batch %s: %w", dbBatch.Hash, err)
}
log.Info("Inserted batch", "index", dbBatch.Index, "hash", dbBatch.Hash, "start chunk", dbBatch.StartChunkIndex, "end chunk", dbBatch.EndChunkIndex)
return nil
})
if err != nil {
return fmt.Errorf("failed to insert batch in DB transaction: %w", err)
}
return nil
}
func (f *FullRecovery) getBatchBlockRangeFromBlob(codec encoding.Codec, blobVersionedHash common.Hash, l1BlockTime uint64) ([]encoding.DABlock, error) {
blob, err := f.beaconNodeClient.GetBlobByVersionedHashAndBlockTime(f.ctx, blobVersionedHash, l1BlockTime)
if err != nil {
return nil, fmt.Errorf("failed to get blob %s: %w", blobVersionedHash.Hex(), err)
}
if blob == nil {
return nil, fmt.Errorf("blob %s not found", blobVersionedHash.Hex())
}
blobPayload, err := codec.DecodeBlob(blob)
if err != nil {
return nil, fmt.Errorf("blob %s decode error: %w", blobVersionedHash.Hex(), err)
}
blocks := blobPayload.Blocks()
if len(blocks) == 0 {
return nil, fmt.Errorf("empty blocks in blob %s", blobVersionedHash.Hex())
}
return blocks, nil
}
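The blob lookup above keys on the EIP-4844 versioned hash, which beacon node clients derive from the blob's KZG commitment: sha256 of the commitment with the first byte replaced by the version prefix 0x01. A minimal sketch of that derivation:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// blobVersionedHash computes an EIP-4844 versioned hash from a KZG
// commitment: sha256(commitment) with the first byte overwritten by
// the KZG version prefix 0x01.
func blobVersionedHash(commitment []byte) [32]byte {
	h := sha256.Sum256(commitment)
	h[0] = 0x01
	return h
}

func main() {
	commitment := make([]byte, 48) // KZG commitments are 48 bytes
	vh := blobVersionedHash(commitment)
	fmt.Printf("0x%x...\n", vh[:4]) // prefix always starts with 0x01
}
```

This is why a versioned hash can be checked against a fetched blob without trusting the beacon node: recompute the commitment's hash and compare.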
type batchEvents struct {
commit *l1.CommitBatchEvent
finalize *l1.FinalizeBatchEvent
}
func newBatchEvents(commit *l1.CommitBatchEvent, finalize *l1.FinalizeBatchEvent) *batchEvents {
if commit.BatchIndex().Uint64() > finalize.BatchIndex().Uint64() {
panic(fmt.Sprintf("commit batch index %d is greater than finalize batch index %d", commit.BatchIndex().Uint64(), finalize.BatchIndex().Uint64()))
}
return &batchEvents{
commit: commit,
finalize: finalize,
}
}
func (e *batchEvents) CompareTo(other *batchEvents) int {
return e.commit.BatchIndex().Cmp(other.commit.BatchIndex())
}

@@ -13,6 +13,8 @@ import (
"github.com/go-resty/resty/v2"
"github.com/prometheus/client_golang/prometheus"
"github.com/scroll-tech/da-codec/encoding"
"gorm.io/gorm"
"github.com/scroll-tech/go-ethereum/accounts/abi"
"github.com/scroll-tech/go-ethereum/common"
"github.com/scroll-tech/go-ethereum/crypto"
@@ -20,7 +22,6 @@ import (
"github.com/scroll-tech/go-ethereum/ethclient"
"github.com/scroll-tech/go-ethereum/log"
"github.com/scroll-tech/go-ethereum/params"
"gorm.io/gorm"
"scroll-tech/common/types"
"scroll-tech/common/types/message"
@@ -289,6 +290,12 @@ func (r *Layer2Relayer) commitGenesisBatch(batchHash string, batchHeader []byte,
log.Info("Validium importGenesis", "calldata", common.Bytes2Hex(calldata))
} else {
// rollup mode: pass batchHeader and stateRoot
// Check state root is not zero
if stateRoot == (common.Hash{}) {
return fmt.Errorf("state root is zero")
}
calldata, packErr = r.l1RollupABI.Pack("importGenesisBatch", batchHeader, stateRoot)
if packErr != nil {
return fmt.Errorf("failed to pack rollup importGenesisBatch with batch header: %v and state root: %v. error: %v", common.Bytes2Hex(batchHeader), stateRoot, packErr)
@@ -387,8 +394,6 @@ func (r *Layer2Relayer) ProcessPendingBatches() {
// return if not hitting target price
if skip {
log.Debug("Skipping batch submission", "first batch index", dbBatches[0].Index, "backlog count", backlogCount, "reason", err)
return
}
if err != nil {
@@ -501,6 +506,11 @@ func (r *Layer2Relayer) ProcessPendingBatches() {
log.Error("failed to construct normal payload", "codecVersion", codecVersion, "start index", firstBatch.Index, "end index", lastBatch.Index, "err", err)
return
}
if err = r.sanityChecksCommitBatchCodecV7CalldataAndBlobs(calldata, blobs); err != nil {
log.Error("Sanity check failed for calldata and blobs", "codecVersion", codecVersion, "start index", firstBatch.Index, "end index", lastBatch.Index, "err", err)
return
}
}
default:
log.Error("unsupported codec version in ProcessPendingBatches", "codecVersion", codecVersion, "start index", firstBatch.Index, "end index", lastBatch.Index)
@@ -998,6 +1008,18 @@ func (r *Layer2Relayer) constructCommitBatchPayloadCodecV7(batchesToSubmit []*db
}
func (r *Layer2Relayer) constructCommitBatchPayloadValidium(batch *dbBatchWithChunks) ([]byte, uint64, uint64, error) {
// Check state root is not zero
stateRoot := common.HexToHash(batch.Batch.StateRoot)
if stateRoot == (common.Hash{}) {
return nil, 0, 0, fmt.Errorf("batch %d state root is zero", batch.Batch.Index)
}
// Check parent batch hash is not zero
parentBatchHash := common.HexToHash(batch.Batch.ParentBatchHash)
if parentBatchHash == (common.Hash{}) {
return nil, 0, 0, fmt.Errorf("batch %d parent batch hash is zero", batch.Batch.Index)
}
// Calculate metrics
var maxBlockHeight uint64
var totalGasUsed uint64
@@ -1017,6 +1039,7 @@ func (r *Layer2Relayer) constructCommitBatchPayloadValidium(batch *dbBatchWithCh
lastChunk := batch.Chunks[len(batch.Chunks)-1]
commitment := common.HexToHash(lastChunk.EndBlockHash)
version := encoding.CodecVersion(batch.Batch.CodecVersion)
calldata, err := r.validiumABI.Pack("commitBatch", version, common.HexToHash(batch.Batch.ParentBatchHash), common.HexToHash(batch.Batch.StateRoot), common.HexToHash(batch.Batch.WithdrawRoot), commitment[:])
if err != nil {
@@ -1027,6 +1050,12 @@ func (r *Layer2Relayer) constructCommitBatchPayloadValidium(batch *dbBatchWithCh
}
func (r *Layer2Relayer) constructFinalizeBundlePayloadCodecV7(dbBatch *orm.Batch, endChunk *orm.Chunk, aggProof *message.OpenVMBundleProof) ([]byte, error) {
// Check state root is not zero
stateRoot := common.HexToHash(dbBatch.StateRoot)
if stateRoot == (common.Hash{}) {
return nil, fmt.Errorf("batch %d state root is zero", dbBatch.Index)
}
if aggProof != nil { // finalizeBundle with proof.
calldata, packErr := r.l1RollupABI.Pack(
"finalizeBundlePostEuclidV2",

@@ -0,0 +1,487 @@
package relayer
import (
"fmt"
"math/big"
"github.com/scroll-tech/da-codec/encoding"
"github.com/scroll-tech/go-ethereum/accounts/abi"
"github.com/scroll-tech/go-ethereum/common"
"github.com/scroll-tech/go-ethereum/core/types"
"github.com/scroll-tech/go-ethereum/crypto/kzg4844"
"scroll-tech/rollup/internal/orm"
)
// sanityChecksCommitBatchCodecV7CalldataAndBlobs performs comprehensive validation of the constructed
// transaction data (calldata and blobs) by parsing them and comparing against database records.
// This ensures the constructed transaction data is correct and consistent with the database state.
func (r *Layer2Relayer) sanityChecksCommitBatchCodecV7CalldataAndBlobs(calldata []byte, blobs []*kzg4844.Blob) error {
if r.l1RollupABI == nil {
return fmt.Errorf("l1RollupABI is nil: cannot parse commitBatches calldata")
}
calldataInfo, err := parseCommitBatchesCalldata(r.l1RollupABI, calldata)
if err != nil {
return fmt.Errorf("failed to parse calldata: %w", err)
}
batchesToValidate, l1MessagesWithBlockNumbers, err := r.getBatchesFromCalldata(calldataInfo)
if err != nil {
return fmt.Errorf("failed to get batches from database: %w", err)
}
if err := r.validateCalldataAndBlobsAgainstDatabase(calldataInfo, blobs, batchesToValidate, l1MessagesWithBlockNumbers); err != nil {
return fmt.Errorf("calldata and blobs validation failed: %w", err)
}
if err := r.validateDatabaseConsistency(batchesToValidate); err != nil {
return fmt.Errorf("database consistency validation failed: %w", err)
}
return nil
}
// CalldataInfo holds parsed information from commitBatches calldata
type CalldataInfo struct {
Version uint8
ParentBatchHash common.Hash
LastBatchHash common.Hash
}
// parseCommitBatchesCalldata parses the commitBatches calldata and extracts key information
func parseCommitBatchesCalldata(abi *abi.ABI, calldata []byte) (*CalldataInfo, error) {
method, ok := abi.Methods["commitBatches"]
if !ok {
return nil, fmt.Errorf("commitBatches method not found in ABI")
}
if len(calldata) < 4 {
return nil, fmt.Errorf("calldata too short: %d bytes", len(calldata))
}
decoded, err := method.Inputs.Unpack(calldata[4:])
if err != nil {
return nil, fmt.Errorf("failed to unpack commitBatches calldata: %w", err)
}
if len(decoded) != 3 {
return nil, fmt.Errorf("unexpected number of decoded parameters: got %d, want 3", len(decoded))
}
version, ok := decoded[0].(uint8)
if !ok {
return nil, fmt.Errorf("failed to type assert version to uint8")
}
parentBatchHashB, ok := decoded[1].([32]uint8)
if !ok {
return nil, fmt.Errorf("failed to type assert parentBatchHash to [32]uint8")
}
parentBatchHash := common.BytesToHash(parentBatchHashB[:])
lastBatchHashB, ok := decoded[2].([32]uint8)
if !ok {
return nil, fmt.Errorf("failed to type assert lastBatchHash to [32]uint8")
}
lastBatchHash := common.BytesToHash(lastBatchHashB[:])
return &CalldataInfo{
Version: version,
ParentBatchHash: parentBatchHash,
LastBatchHash: lastBatchHash,
}, nil
}
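parseCommitBatchesCalldata leans on the go-ethereum ABI package, but the layout it decodes is simple because all three arguments are static: a 4-byte selector followed by three 32-byte words, with the uint8 version right-aligned in its word. A hand-rolled sketch of that layout (the selector bytes below are made up, not the real commitBatches selector, and real code should keep using the ABI package):

```go
package main

import (
	"bytes"
	"fmt"
)

// decodeCommitBatches decodes calldata laid out as:
//   4-byte selector | 32-byte word (uint8, right-aligned) | 32-byte hash | 32-byte hash
// mirroring ABI encoding of three static arguments.
func decodeCommitBatches(calldata []byte) (version uint8, parent, last [32]byte, err error) {
	if len(calldata) != 4+3*32 {
		return 0, parent, last, fmt.Errorf("unexpected calldata length %d", len(calldata))
	}
	word0 := calldata[4 : 4+32]
	// a uint8 occupies the last byte of its word; the rest must be zero padding
	if !bytes.Equal(word0[:31], make([]byte, 31)) {
		return 0, parent, last, fmt.Errorf("non-zero padding in version word")
	}
	version = word0[31]
	copy(parent[:], calldata[36:68])
	copy(last[:], calldata[68:100])
	return version, parent, last, nil
}

func main() {
	calldata := make([]byte, 100)
	copy(calldata, []byte{0xde, 0xad, 0xbe, 0xef}) // hypothetical selector
	calldata[35] = 7                               // version = 7, right-aligned
	calldata[36] = 0xaa                            // first byte of parent batch hash
	calldata[68] = 0xbb                            // first byte of last batch hash
	v, p, l, err := decodeCommitBatches(calldata)
	fmt.Println(v, p[0], l[0], err)
}
```

The fixed 100-byte length check is exactly why the sanity check can reject malformed calldata before any database lookups happen.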
// getBatchesFromCalldata retrieves the relevant batches from database based on calldata information
func (r *Layer2Relayer) getBatchesFromCalldata(info *CalldataInfo) ([]*dbBatchWithChunks, map[uint64][]*types.TransactionData, error) {
// Get the parent batch to determine the starting point
parentBatch, err := r.batchOrm.GetBatchByHash(r.ctx, info.ParentBatchHash.Hex())
if err != nil {
return nil, nil, fmt.Errorf("failed to get parent batch by hash %s: %w", info.ParentBatchHash.Hex(), err)
}
// Get the last batch to determine the ending point
lastBatch, err := r.batchOrm.GetBatchByHash(r.ctx, info.LastBatchHash.Hex())
if err != nil {
return nil, nil, fmt.Errorf("failed to get last batch by hash %s: %w", info.LastBatchHash.Hex(), err)
}
// Get all batches in the range (parent+1 to last)
firstBatchIndex := parentBatch.Index + 1
lastBatchIndex := lastBatch.Index
// Check if the range is valid
if firstBatchIndex > lastBatchIndex {
return nil, nil, fmt.Errorf("no batches found in range: first index %d, last index %d", firstBatchIndex, lastBatchIndex)
}
var batchesToValidate []*dbBatchWithChunks
l1MessagesWithBlockNumbers := make(map[uint64][]*types.TransactionData)
for batchIndex := firstBatchIndex; batchIndex <= lastBatchIndex; batchIndex++ {
dbBatch, err := r.batchOrm.GetBatchByIndex(r.ctx, batchIndex)
if err != nil {
return nil, nil, fmt.Errorf("failed to get batch by index %d: %w", batchIndex, err)
}
// Get chunks for this batch
dbChunks, err := r.chunkOrm.GetChunksInRange(r.ctx, dbBatch.StartChunkIndex, dbBatch.EndChunkIndex)
if err != nil {
return nil, nil, fmt.Errorf("failed to get chunks for batch %d: %w", batchIndex, err)
}
batchesToValidate = append(batchesToValidate, &dbBatchWithChunks{
Batch: dbBatch,
Chunks: dbChunks,
})
// If there are L1 messages in this batch, retrieve L1 messages with block numbers
for _, chunk := range dbChunks {
if chunk.TotalL1MessagesPoppedInChunk > 0 {
blockWithL1Messages, err := r.l2BlockOrm.GetL2BlocksInRange(r.ctx, chunk.StartBlockNumber, chunk.EndBlockNumber)
if err != nil {
return nil, nil, fmt.Errorf("failed to get L2 blocks for chunk %d: %w", chunk.Index, err)
}
var l1MessagesCount uint64
for _, block := range blockWithL1Messages {
bn := block.Header.Number.Uint64()
seenL2 := false
for _, tx := range block.Transactions {
if tx.Type == types.L1MessageTxType {
if seenL2 {
// Invariant violated: found an L1 message after an L2 transaction in the same block.
return nil, nil, fmt.Errorf("L1 message after L2 transaction in block %d", bn)
}
l1MessagesWithBlockNumbers[bn] = append(l1MessagesWithBlockNumbers[bn], tx)
l1MessagesCount++
} else {
seenL2 = true
}
}
}
if chunk.TotalL1MessagesPoppedInChunk != l1MessagesCount {
return nil, nil, fmt.Errorf("chunk %d has inconsistent L1 messages count: expected %d, got %d", chunk.Index, chunk.TotalL1MessagesPoppedInChunk, l1MessagesCount)
}
}
}
}
return batchesToValidate, l1MessagesWithBlockNumbers, nil
}
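The inner loop above enforces a per-block ordering invariant: every L1 message must precede the first L2 transaction in its block, and the per-chunk count must match `TotalL1MessagesPoppedInChunk`. A minimal sketch of that check, with a simplified stand-in `tx` type instead of the real `types.TransactionData`:

```go
package main

import "fmt"

// tx is a simplified stand-in for types.TransactionData.
type tx struct{ isL1Message bool }

// countL1Messages counts L1 messages in a block and rejects any block where
// an L1 message appears after an L2 transaction, mirroring the invariant
// enforced in getBatchesFromCalldata.
func countL1Messages(blockNumber uint64, txs []tx) (uint64, error) {
	var count uint64
	seenL2 := false
	for _, t := range txs {
		if t.isL1Message {
			if seenL2 {
				return 0, fmt.Errorf("L1 message after L2 transaction in block %d", blockNumber)
			}
			count++
		} else {
			seenL2 = true
		}
	}
	return count, nil
}

func main() {
	n, err := countL1Messages(11488527, []tx{{true}, {true}, {false}})
	fmt.Println(n, err) // → 2 <nil>
	_, err = countL1Messages(11488527, []tx{{false}, {true}})
	fmt.Println(err != nil) // → true
}
```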
// validateDatabaseConsistency performs comprehensive validation of database records
func (r *Layer2Relayer) validateDatabaseConsistency(batchesToValidate []*dbBatchWithChunks) error {
if len(batchesToValidate) == 0 {
return fmt.Errorf("no batches to validate")
}
// Get previous chunk for continuity check
if len(batchesToValidate[0].Chunks) == 0 {
return fmt.Errorf("batch %d has no chunks", batchesToValidate[0].Batch.Index)
}
firstChunk := batchesToValidate[0].Chunks[0]
if firstChunk.Index == 0 {
return fmt.Errorf("genesis chunk should not be in normal batch submission flow, chunk index: %d", firstChunk.Index)
}
prevChunk, err := r.chunkOrm.GetChunkByIndex(r.ctx, firstChunk.Index-1)
if err != nil {
return fmt.Errorf("failed to get previous chunk %d for continuity check: %w", firstChunk.Index-1, err)
}
firstBatchCodecVersion := batchesToValidate[0].Batch.CodecVersion
for i, batch := range batchesToValidate {
// Validate codec version consistency
if batch.Batch.CodecVersion != firstBatchCodecVersion {
return fmt.Errorf("batch %d has different codec version %d, expected %d", batch.Batch.Index, batch.Batch.CodecVersion, firstBatchCodecVersion)
}
// Validate individual batch
if err := r.validateSingleBatchConsistency(batch, i, batchesToValidate); err != nil {
return err
}
// Validate chunks in this batch
if err := r.validateBatchChunksConsistency(batch, prevChunk); err != nil {
return err
}
// Update prevChunk to the last chunk of this batch for next iteration
if len(batch.Chunks) == 0 {
return fmt.Errorf("batch %d has no chunks", batch.Batch.Index)
}
prevChunk = batch.Chunks[len(batch.Chunks)-1]
}
return nil
}
// validateSingleBatchConsistency validates a single batch's consistency
func (r *Layer2Relayer) validateSingleBatchConsistency(batch *dbBatchWithChunks, i int, allBatches []*dbBatchWithChunks) error {
if batch == nil || batch.Batch == nil {
return fmt.Errorf("batch %d is nil", i)
}
if len(batch.Chunks) == 0 {
return fmt.Errorf("batch %d has no chunks", batch.Batch.Index)
}
// Validate essential batch fields
batchHash := common.HexToHash(batch.Batch.Hash)
if batchHash == (common.Hash{}) {
return fmt.Errorf("batch %d hash is zero", batch.Batch.Index)
}
if batch.Batch.Index == 0 {
return fmt.Errorf("batch at position %d has zero index (only the genesis batch should have index 0)", i)
}
parentBatchHash := common.HexToHash(batch.Batch.ParentBatchHash)
if parentBatchHash == (common.Hash{}) {
return fmt.Errorf("batch %d parent batch hash is zero", batch.Batch.Index)
}
stateRoot := common.HexToHash(batch.Batch.StateRoot)
if stateRoot == (common.Hash{}) {
return fmt.Errorf("batch %d state root is zero", batch.Batch.Index)
}
// Check batch index continuity
if i > 0 {
prevBatch := allBatches[i-1]
if batch.Batch.Index != prevBatch.Batch.Index+1 {
return fmt.Errorf("batch index is not sequential: prev batch index %d, current batch index %d", prevBatch.Batch.Index, batch.Batch.Index)
}
if parentBatchHash != common.HexToHash(prevBatch.Batch.Hash) {
return fmt.Errorf("parent batch hash does not match previous batch hash: expected %s, got %s", prevBatch.Batch.Hash, batch.Batch.ParentBatchHash)
}
} else {
// For the first batch, verify continuity with parent batch from database
parentBatch, err := r.batchOrm.GetBatchByHash(r.ctx, batch.Batch.ParentBatchHash)
if err != nil {
return fmt.Errorf("failed to get parent batch %s for batch %d: %w", batch.Batch.ParentBatchHash, batch.Batch.Index, err)
}
if batch.Batch.Index != parentBatch.Index+1 {
return fmt.Errorf("first batch index is not sequential with parent: parent batch index %d, current batch index %d", parentBatch.Index, batch.Batch.Index)
}
}
// Validate L1 message queue consistency
if err := r.validateMessageQueueConsistency(batch.Batch.Index, batch.Chunks, common.HexToHash(batch.Batch.PrevL1MessageQueueHash), common.HexToHash(batch.Batch.PostL1MessageQueueHash)); err != nil {
return err
}
return nil
}
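The continuity checks above reduce to two rules: batch indices are strictly sequential, and each batch's parent hash equals the previous batch's hash. A simplified sketch with a hypothetical `batch` type (the real code compares `orm.Batch` fields):

```go
package main

import "fmt"

// batch is a hypothetical, stripped-down stand-in for orm.Batch.
type batch struct {
	index      uint64
	hash       string
	parentHash string
}

// checkChain verifies index continuity and parent-hash linkage,
// mirroring the loop body in validateSingleBatchConsistency.
func checkChain(batches []batch) error {
	for i := 1; i < len(batches); i++ {
		prev, cur := batches[i-1], batches[i]
		if cur.index != prev.index+1 {
			return fmt.Errorf("batch index is not sequential: prev %d, current %d", prev.index, cur.index)
		}
		if cur.parentHash != prev.hash {
			return fmt.Errorf("parent batch hash does not match previous batch hash: expected %s, got %s", prev.hash, cur.parentHash)
		}
	}
	return nil
}

func main() {
	ok := []batch{{1, "0xaa", "0x00"}, {2, "0xbb", "0xaa"}}
	fmt.Println(checkChain(ok)) // → <nil>
	bad := []batch{{1, "0xaa", "0x00"}, {3, "0xbb", "0xaa"}}
	fmt.Println(checkChain(bad) != nil) // → true
}
```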
// validateBatchChunksConsistency validates chunks within a batch
func (r *Layer2Relayer) validateBatchChunksConsistency(batch *dbBatchWithChunks, prevChunk *orm.Chunk) error {
// Check codec version consistency between chunks and batch
for _, chunk := range batch.Chunks {
if chunk.CodecVersion != batch.Batch.CodecVersion {
return fmt.Errorf("batch %d chunk %d has different codec version %d, expected %d", batch.Batch.Index, chunk.Index, chunk.CodecVersion, batch.Batch.CodecVersion)
}
}
// Validate each chunk individually
currentPrevChunk := prevChunk
for j, chunk := range batch.Chunks {
if err := r.validateSingleChunkConsistency(chunk, currentPrevChunk); err != nil {
return fmt.Errorf("batch %d chunk %d: %w", batch.Batch.Index, j, err)
}
currentPrevChunk = chunk
}
return nil
}
// validateSingleChunkConsistency validates a single chunk
func (r *Layer2Relayer) validateSingleChunkConsistency(chunk *orm.Chunk, prevChunk *orm.Chunk) error {
if chunk == nil {
return fmt.Errorf("chunk is nil")
}
chunkHash := common.HexToHash(chunk.Hash)
if chunkHash == (common.Hash{}) {
return fmt.Errorf("chunk %d hash is zero", chunk.Index)
}
// Check chunk index continuity
if chunk.Index != prevChunk.Index+1 {
return fmt.Errorf("chunk index is not sequential: prev chunk index %d, current chunk index %d", prevChunk.Index, chunk.Index)
}
// Validate block range
if chunk.StartBlockNumber == 0 && chunk.EndBlockNumber == 0 {
return fmt.Errorf("chunk %d has zero block range", chunk.Index)
}
if chunk.StartBlockNumber > chunk.EndBlockNumber {
return fmt.Errorf("chunk %d has invalid block range: start %d > end %d", chunk.Index, chunk.StartBlockNumber, chunk.EndBlockNumber)
}
// Check hash fields
startBlockHash := common.HexToHash(chunk.StartBlockHash)
if startBlockHash == (common.Hash{}) {
return fmt.Errorf("chunk %d start block hash is zero", chunk.Index)
}
endBlockHash := common.HexToHash(chunk.EndBlockHash)
if endBlockHash == (common.Hash{}) {
return fmt.Errorf("chunk %d end block hash is zero", chunk.Index)
}
// Check block continuity with previous chunk
if prevChunk.EndBlockNumber+1 != chunk.StartBlockNumber {
return fmt.Errorf("chunk is not continuous with previous chunk %d: prev end block %d, current start block %d", prevChunk.Index, prevChunk.EndBlockNumber, chunk.StartBlockNumber)
}
// Check L1 messages continuity
expectedPoppedBefore := prevChunk.TotalL1MessagesPoppedBefore + prevChunk.TotalL1MessagesPoppedInChunk
if chunk.TotalL1MessagesPoppedBefore != expectedPoppedBefore {
return fmt.Errorf("L1 messages popped before is incorrect: expected %d, got %d", expectedPoppedBefore, chunk.TotalL1MessagesPoppedBefore)
}
return nil
}
// validateCalldataAndBlobsAgainstDatabase validates calldata and blobs against database records
func (r *Layer2Relayer) validateCalldataAndBlobsAgainstDatabase(calldataInfo *CalldataInfo, blobs []*kzg4844.Blob, batchesToValidate []*dbBatchWithChunks, l1MessagesWithBlockNumbers map[uint64][]*types.TransactionData) error {
// Validate blobs
if len(blobs) == 0 {
return fmt.Errorf("no blobs provided")
}
// Validate blob count
if len(blobs) != len(batchesToValidate) {
return fmt.Errorf("blob count mismatch: got %d blobs, expected %d batches", len(blobs), len(batchesToValidate))
}
// Get first and last batches for validation; the length check was already done above
firstBatch := batchesToValidate[0].Batch
lastBatch := batchesToValidate[len(batchesToValidate)-1].Batch
// Validate codec version
if calldataInfo.Version != uint8(firstBatch.CodecVersion) {
return fmt.Errorf("version mismatch: calldata=%d, db=%d", calldataInfo.Version, firstBatch.CodecVersion)
}
// Validate parent batch hash
if calldataInfo.ParentBatchHash != common.HexToHash(firstBatch.ParentBatchHash) {
return fmt.Errorf("parentBatchHash mismatch: calldata=%s, db=%s", calldataInfo.ParentBatchHash.Hex(), firstBatch.ParentBatchHash)
}
// Validate last batch hash
if calldataInfo.LastBatchHash != common.HexToHash(lastBatch.Hash) {
return fmt.Errorf("lastBatchHash mismatch: calldata=%s, db=%s", calldataInfo.LastBatchHash.Hex(), lastBatch.Hash)
}
// Get codec for blob decoding
codec, err := encoding.CodecFromVersion(encoding.CodecVersion(firstBatch.CodecVersion))
if err != nil {
return fmt.Errorf("failed to get codec: %w", err)
}
// Validate each blob against its corresponding batch
for i, blob := range blobs {
dbBatch := batchesToValidate[i].Batch
if err := r.validateSingleBlobAgainstBatch(blob, dbBatch, codec, l1MessagesWithBlockNumbers); err != nil {
return fmt.Errorf("blob validation failed for batch %d: %w", dbBatch.Index, err)
}
}
return nil
}
// validateSingleBlobAgainstBatch validates a single blob against its batch data
func (r *Layer2Relayer) validateSingleBlobAgainstBatch(blob *kzg4844.Blob, dbBatch *orm.Batch, codec encoding.Codec, l1MessagesWithBlockNumbers map[uint64][]*types.TransactionData) error {
// Decode blob payload
payload, err := codec.DecodeBlob(blob)
if err != nil {
return fmt.Errorf("failed to decode blob: %w", err)
}
// Validate batch hash
daBatch, err := assembleDABatchFromPayload(payload, dbBatch, codec, l1MessagesWithBlockNumbers)
if err != nil {
return fmt.Errorf("failed to assemble batch from payload: %w", err)
}
if daBatch.Hash() != common.HexToHash(dbBatch.Hash) {
return fmt.Errorf("batch hash mismatch: decoded from blob=%s, db=%s", daBatch.Hash().Hex(), dbBatch.Hash)
}
return nil
}
// validateMessageQueueConsistency validates L1 message queue hash consistency
func (r *Layer2Relayer) validateMessageQueueConsistency(batchIndex uint64, chunks []*orm.Chunk, prevL1MsgQueueHash common.Hash, postL1MsgQueueHash common.Hash) error {
if len(chunks) == 0 {
return fmt.Errorf("batch %d has no chunks for message queue validation", batchIndex)
}
firstChunk := chunks[0]
lastChunk := chunks[len(chunks)-1]
// Calculate total L1 messages in this batch
var totalL1MessagesInBatch uint64
for _, chunk := range chunks {
totalL1MessagesInBatch += chunk.TotalL1MessagesPoppedInChunk
}
// If there were L1 messages processed before this batch, prev hash should not be zero
if firstChunk.TotalL1MessagesPoppedBefore > 0 && prevL1MsgQueueHash == (common.Hash{}) {
return fmt.Errorf("batch %d prev L1 message queue hash is zero but %d L1 messages were processed before", batchIndex, firstChunk.TotalL1MessagesPoppedBefore)
}
// If there are any L1 messages processed up to this batch, post hash should not be zero
totalL1MessagesProcessed := lastChunk.TotalL1MessagesPoppedBefore + lastChunk.TotalL1MessagesPoppedInChunk
if totalL1MessagesProcessed > 0 && postL1MsgQueueHash == (common.Hash{}) {
return fmt.Errorf("batch %d post L1 message queue hash is zero but %d L1 messages were processed in total", batchIndex, totalL1MessagesProcessed)
}
// Prev and post queue hashes should be different if L1 messages were processed in this batch
if totalL1MessagesInBatch > 0 && prevL1MsgQueueHash == postL1MsgQueueHash {
return fmt.Errorf("batch %d has same prev and post L1 message queue hashes but processed %d L1 messages in this batch", batchIndex, totalL1MessagesInBatch)
}
return nil
}
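The three queue-hash invariants above can be condensed into one helper. A hedged sketch with simplified types (field names mirror the `orm.Chunk` columns; the real code works on hex-encoded hashes via `common.Hash`):

```go
package main

import "fmt"

// chunkInfo is a simplified stand-in for the orm.Chunk L1-message counters.
type chunkInfo struct {
	poppedBefore  uint64
	poppedInChunk uint64
}

// checkQueueHashes applies the three invariants from validateMessageQueueConsistency:
// a non-zero prev hash if messages were popped before, a non-zero post hash if any
// were popped in total, and differing hashes if this batch popped messages.
func checkQueueHashes(chunks []chunkInfo, prevHash, postHash [32]byte) error {
	var zero [32]byte
	var inBatch uint64
	for _, c := range chunks {
		inBatch += c.poppedInChunk
	}
	first, last := chunks[0], chunks[len(chunks)-1]
	if first.poppedBefore > 0 && prevHash == zero {
		return fmt.Errorf("prev queue hash is zero but %d messages were popped before", first.poppedBefore)
	}
	if total := last.poppedBefore + last.poppedInChunk; total > 0 && postHash == zero {
		return fmt.Errorf("post queue hash is zero but %d messages were popped in total", total)
	}
	if inBatch > 0 && prevHash == postHash {
		return fmt.Errorf("prev and post queue hashes are equal but the batch popped %d messages", inBatch)
	}
	return nil
}

func main() {
	prev := [32]byte{1}
	post := [32]byte{2}
	chunks := []chunkInfo{{poppedBefore: 5, poppedInChunk: 3}}
	fmt.Println(checkQueueHashes(chunks, prev, post))        // → <nil>
	fmt.Println(checkQueueHashes(chunks, prev, prev) != nil) // → true
}
```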
func assembleDABatchFromPayload(payload encoding.DABlobPayload, dbBatch *orm.Batch, codec encoding.Codec, l1MessagesWithBlockNumbers map[uint64][]*types.TransactionData) (encoding.DABatch, error) {
blocks, err := assembleBlocksFromPayload(payload, l1MessagesWithBlockNumbers)
if err != nil {
return nil, fmt.Errorf("failed to assemble blocks from payload batch_index=%d codec_version=%d parent_batch_hash=%s: %w", dbBatch.Index, dbBatch.CodecVersion, dbBatch.ParentBatchHash, err)
}
batch := &encoding.Batch{
Index: dbBatch.Index, // The database provides only batch index, other fields are derived from blob payload
ParentBatchHash: common.HexToHash(dbBatch.ParentBatchHash), // The first batch's parent hash is verified with calldata, subsequent batches are linked via dbBatch.ParentBatchHash and verified in database consistency checks
PrevL1MessageQueueHash: payload.PrevL1MessageQueueHash(),
PostL1MessageQueueHash: payload.PostL1MessageQueueHash(),
Blocks: blocks,
Chunks: []*encoding.Chunk{ // One chunk for this batch to pass sanity checks when building DABatch
{
Blocks: blocks,
PrevL1MessageQueueHash: payload.PrevL1MessageQueueHash(),
PostL1MessageQueueHash: payload.PostL1MessageQueueHash(),
},
},
}
daBatch, err := codec.NewDABatch(batch)
if err != nil {
return nil, fmt.Errorf("failed to build DABatch batch_index=%d codec_version=%d parent_batch_hash=%s: %w", dbBatch.Index, dbBatch.CodecVersion, dbBatch.ParentBatchHash, err)
}
return daBatch, nil
}
func assembleBlocksFromPayload(payload encoding.DABlobPayload, l1MessagesWithBlockNumbers map[uint64][]*types.TransactionData) ([]*encoding.Block, error) {
daBlocks := payload.Blocks()
txns := payload.Transactions()
if len(daBlocks) != len(txns) {
return nil, fmt.Errorf("mismatched number of blocks and transactions: %d blocks, %d transactions", len(daBlocks), len(txns))
}
blocks := make([]*encoding.Block, len(daBlocks))
for i := range daBlocks {
blocks[i] = &encoding.Block{
Header: &types.Header{
Number: new(big.Int).SetUint64(daBlocks[i].Number()),
Time: daBlocks[i].Timestamp(),
BaseFee: daBlocks[i].BaseFee(),
GasLimit: daBlocks[i].GasLimit(),
},
}
// Ensure per-block ordering: [L1 messages][L2 transactions]. Prepend L1 messages (if any), then append L2 transactions.
if l1Messages, ok := l1MessagesWithBlockNumbers[daBlocks[i].Number()]; ok {
blocks[i].Transactions = l1Messages
}
blocks[i].Transactions = append(blocks[i].Transactions, encoding.TxsToTxsData(txns[i])...)
}
return blocks, nil
}

View File

@@ -0,0 +1,131 @@
package relayer
import (
"encoding/json"
"fmt"
"math/big"
"os"
"path/filepath"
"strings"
"testing"
"github.com/stretchr/testify/assert"
"github.com/scroll-tech/da-codec/encoding"
"github.com/scroll-tech/go-ethereum/common"
"github.com/scroll-tech/go-ethereum/common/hexutil"
"github.com/scroll-tech/go-ethereum/core/types"
"github.com/scroll-tech/go-ethereum/crypto/kzg4844"
bridgeabi "scroll-tech/rollup/abi"
"scroll-tech/rollup/internal/orm"
)
func TestAssembleDABatch(t *testing.T) {
calldataHex := "0x9bbaa2ba0000000000000000000000000000000000000000000000000000000000000008146793a7d71663cd87ec9713f72242a3798d5e801050130a3e16efaa09fb803e58af2593dadc8b9fff75a2d27199cb97ec115bade109b8d691a512608ef180eb"
blobsPath := filepath.Join("../../../testdata", "commit_batches_blobs.json")
calldata, err := hexutil.Decode(strings.TrimSpace(calldataHex))
assert.NoErrorf(t, err, "failed to decode calldata: %s", calldataHex)
blobs, err := loadBlobsFromJSON(blobsPath)
assert.NoErrorf(t, err, "failed to read blobs: %s", blobsPath)
assert.NotEmpty(t, blobs, "no blobs provided")
info, err := parseCommitBatchesCalldata(bridgeabi.ScrollChainABI, calldata)
assert.NoError(t, err)
codec, err := encoding.CodecFromVersion(encoding.CodecVersion(info.Version))
assert.NoErrorf(t, err, "failed to get codec from version %d", info.Version)
parentBatchHash := info.ParentBatchHash
index := uint64(113571)
t.Logf("calldata parsed: version=%d parentBatchHash=%s lastBatchHash=%s blobs=%d", info.Version, info.ParentBatchHash.Hex(), info.LastBatchHash.Hex(), len(blobs))
fromAddr := common.HexToAddress("0x61d8d3e7f7c656493d1d76aaa1a836cedfcbc27b")
toAddr := common.HexToAddress("0xba50f5340fb9f3bd074bd638c9be13ecb36e603d")
l1MessagesWithBlockNumbers := map[uint64][]*types.TransactionData{
11488527: {
&types.TransactionData{
Type: types.L1MessageTxType,
Nonce: 1072515,
Gas: 340000,
To: &toAddr,
Value: (*hexutil.Big)(big.NewInt(0)),
Data: "0x8ef1332e00000000000000000000000081f3843af1fbab046b771f0d440c04ebb2b7513f000000000000000000000000cec03800074d0ac0854bf1f34153cc4c8baeeb1e00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000105d8300000000000000000000000000000000000000000000000000000000000000a00000000000000000000000000000000000000000000000000000000000000084f03efa3700000000000000000000000000000000000000000000000000000000000024730000000000000000000000000000000000000000000000000000000000000040000000000000000000000000000000000000000000000000000000000000000171bdb6e3062daaee1845ba4cb1902169feb5a9b9555a882d45637d3bd29eb83500000000000000000000000000000000000000000000000000000000",
From: fromAddr,
},
},
11488622: {
&types.TransactionData{
Type: types.L1MessageTxType,
Nonce: 1072516,
Gas: 340000,
To: &toAddr,
Value: (*hexutil.Big)(big.NewInt(0)),
Data: "0x8ef1332e00000000000000000000000081f3843af1fbab046b771f0d440c04ebb2b7513f000000000000000000000000cec03800074d0ac0854bf1f34153cc4c8baeeb1e00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000105d8400000000000000000000000000000000000000000000000000000000000000a00000000000000000000000000000000000000000000000000000000000000084f03efa370000000000000000000000000000000000000000000000000000000000002474000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000012aeb01535c1845b689bfce22e53029ec59ec75ea20f660d7c5fcd99f55b75b6900000000000000000000000000000000000000000000000000000000",
From: fromAddr,
},
},
11489190: {
&types.TransactionData{
Type: types.L1MessageTxType,
Nonce: 1072517,
Gas: 168000,
To: &toAddr,
Value: (*hexutil.Big)(big.NewInt(0)),
Data: "0x8ef1332e0000000000000000000000003b1399523f819ea4c4d3e76dddefaf4226c6ba570000000000000000000000003b1399523f819ea4c4d3e76dddefaf4226c6ba5700000000000000000000000000000000000000000000000000000000000027100000000000000000000000000000000000000000000000000000000000105d8500000000000000000000000000000000000000000000000000000000000000a00000000000000000000000000000000000000000000000000000000000000000",
From: fromAddr,
},
},
}
for i, blob := range blobs {
payload, decErr := codec.DecodeBlob(blob)
assert.NoErrorf(t, decErr, "blob[%d] decode failed", i)
if decErr != nil {
continue
}
dbBatch := &orm.Batch{
Index: index,
ParentBatchHash: parentBatchHash.Hex(),
}
daBatch, asmErr := assembleDABatchFromPayload(payload, dbBatch, codec, l1MessagesWithBlockNumbers)
assert.NoErrorf(t, asmErr, "blob[%d] assemble failed", i)
if asmErr != nil {
continue // daBatch is nil on error; skip to avoid a nil pointer dereference below
}
t.Logf("blob[%d] DABatch hash=%s", i, daBatch.Hash().Hex())
index++
parentBatchHash = daBatch.Hash()
}
}
func loadBlobsFromJSON(path string) ([]*kzg4844.Blob, error) {
raw, err := os.ReadFile(path)
if err != nil {
return nil, err
}
var arr []hexutil.Bytes
if err := json.Unmarshal(raw, &arr); err != nil {
return nil, fmt.Errorf("invalid JSON; expect [\"0x...\"] array: %w", err)
}
out := make([]*kzg4844.Blob, 0, len(arr))
var empty kzg4844.Blob
want := len(empty)
for i, b := range arr {
if len(b) != want {
return nil, fmt.Errorf("blob[%d] length mismatch: got %d, want %d", i, len(b), want)
}
blob := new(kzg4844.Blob)
copy(blob[:], b)
out = append(out, blob)
}
return out, nil
}

View File

@@ -70,15 +70,18 @@ func testL2RelayerProcessPendingBatches(t *testing.T) {
_, err = chunkOrm.InsertChunk(context.Background(), chunk2, encoding.CodecV7, rutils.ChunkMetrics{})
assert.NoError(t, err)
batchOrm := orm.NewBatch(db)
genesisBatch, err := batchOrm.GetBatchByIndex(context.Background(), 0)
assert.NoError(t, err)
batch := &encoding.Batch{
Index: 1,
TotalL1MessagePoppedBefore: 0,
ParentBatchHash: common.Hash{},
ParentBatchHash: common.HexToHash(genesisBatch.Hash),
Chunks: []*encoding.Chunk{chunk1, chunk2},
Blocks: []*encoding.Block{block1, block2},
}
batchOrm := orm.NewBatch(db)
dbBatch, err := batchOrm.InsertBatch(context.Background(), batch, encoding.CodecV7, rutils.BatchMetrics{})
assert.NoError(t, err)

View File

@@ -81,6 +81,7 @@ func setupEnv(t *testing.T) {
block1 = &encoding.Block{}
err = json.Unmarshal(templateBlockTrace1, block1)
assert.NoError(t, err)
block1.Header.Number = big.NewInt(1)
chunk1 = &encoding.Chunk{Blocks: []*encoding.Block{block1}}
codec, err := encoding.CodecFromVersion(encoding.CodecV0)
assert.NoError(t, err)
@@ -94,6 +95,7 @@ func setupEnv(t *testing.T) {
block2 = &encoding.Block{}
err = json.Unmarshal(templateBlockTrace2, block2)
assert.NoError(t, err)
block2.Header.Number = big.NewInt(2)
chunk2 = &encoding.Chunk{Blocks: []*encoding.Block{block2}}
daChunk2, err := codec.NewDAChunk(chunk2, chunk1.NumL1Messages(0))
assert.NoError(t, err)

View File

@@ -36,7 +36,7 @@ func testBatchProposerLimitsCodecV7(t *testing.T) {
name: "Timeout",
batchTimeoutSec: 0,
expectedBatchesLen: 1,
expectedChunksInFirstBatch: 2,
expectedChunksInFirstBatch: 1,
},
}
@@ -72,8 +72,7 @@ func testBatchProposerLimitsCodecV7(t *testing.T) {
assert.NoError(t, err)
cp := NewChunkProposer(context.Background(), &config.ChunkProposerConfig{
MaxBlockNumPerChunk: 1,
MaxL2GasPerChunk: 20000000,
MaxL2GasPerChunk: math.MaxUint64,
ChunkTimeoutSec: 300,
MaxUncompressedBatchBytesSize: math.MaxUint64,
}, encoding.CodecV7, &params.ChainConfig{
@@ -154,7 +153,6 @@ func testBatchProposerBlobSizeLimitCodecV7(t *testing.T) {
chainConfig := &params.ChainConfig{LondonBlock: big.NewInt(0), BernoulliBlock: big.NewInt(0), CurieBlock: big.NewInt(0), DarwinTime: new(uint64), DarwinV2Time: new(uint64), EuclidTime: new(uint64), EuclidV2Time: new(uint64)}
cp := NewChunkProposer(context.Background(), &config.ChunkProposerConfig{
MaxBlockNumPerChunk: math.MaxUint64,
MaxL2GasPerChunk: math.MaxUint64,
ChunkTimeoutSec: 0,
MaxUncompressedBatchBytesSize: math.MaxUint64,
@@ -227,7 +225,6 @@ func testBatchProposerMaxChunkNumPerBatchLimitCodecV7(t *testing.T) {
chainConfig := &params.ChainConfig{LondonBlock: big.NewInt(0), BernoulliBlock: big.NewInt(0), CurieBlock: big.NewInt(0), DarwinTime: new(uint64), DarwinV2Time: new(uint64), EuclidTime: new(uint64), EuclidV2Time: new(uint64)}
cp := NewChunkProposer(context.Background(), &config.ChunkProposerConfig{
MaxBlockNumPerChunk: math.MaxUint64,
MaxL2GasPerChunk: math.MaxUint64,
ChunkTimeoutSec: 0,
MaxUncompressedBatchBytesSize: math.MaxUint64,
@@ -309,15 +306,14 @@ func testBatchProposerUncompressedBatchBytesLimitCodecV8(t *testing.T) {
// Create chunk proposer with no uncompressed batch bytes limit for chunks
cp := NewChunkProposer(context.Background(), &config.ChunkProposerConfig{
MaxBlockNumPerChunk: 1, // One block per chunk
MaxL2GasPerChunk: math.MaxUint64,
MaxL2GasPerChunk: 1200000, // One block per chunk via gas limit
ChunkTimeoutSec: math.MaxUint32,
MaxUncompressedBatchBytesSize: math.MaxUint64,
}, encoding.CodecV8, chainConfig, db, nil)
// Insert 3 blocks with large calldata; the gas limit splits them into 2 chunks
l2BlockOrm := orm.NewL2Block(db)
for i := uint64(1); i <= 2; i++ {
for i := uint64(1); i <= 3; i++ {
blockCopy := *block
blockCopy.Header = &gethTypes.Header{}
*blockCopy.Header = *block.Header
@@ -326,7 +322,9 @@ func testBatchProposerUncompressedBatchBytesLimitCodecV8(t *testing.T) {
err := l2BlockOrm.InsertL2Blocks(context.Background(), []*encoding.Block{&blockCopy})
assert.NoError(t, err)
cp.TryProposeChunk() // Each call creates one chunk with one block
cp.TryProposeChunk() // Each chunk will contain 1 block (~3KiB)
// We create 2 chunks here, as we have 3 blocks and reach the gas limit for the 1st chunk with the 2nd block
// and the 2nd chunk with the 3rd block.
}
// Create batch proposer with 4KiB uncompressed batch bytes limit

View File

@@ -9,9 +9,10 @@ import (
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/scroll-tech/da-codec/encoding"
"gorm.io/gorm"
"github.com/scroll-tech/go-ethereum/log"
"github.com/scroll-tech/go-ethereum/params"
"gorm.io/gorm"
"scroll-tech/rollup/internal/config"
"scroll-tech/rollup/internal/orm"
@@ -97,7 +98,7 @@ func (p *BundleProposer) TryProposeBundle() {
}
}
func (p *BundleProposer) updateDBBundleInfo(batches []*orm.Batch, codecVersion encoding.CodecVersion) error {
func (p *BundleProposer) UpdateDBBundleInfo(batches []*orm.Batch, codecVersion encoding.CodecVersion) error {
if len(batches) == 0 {
return nil
}
@@ -187,7 +188,7 @@ func (p *BundleProposer) proposeBundle() error {
p.bundleFirstBlockTimeoutReached.Inc()
p.bundleBatchesNum.Set(float64(len(batches)))
return p.updateDBBundleInfo(batches, codecVersion)
return p.UpdateDBBundleInfo(batches, codecVersion)
}
currentTimeSec := uint64(time.Now().Unix())
@@ -201,7 +202,7 @@ func (p *BundleProposer) proposeBundle() error {
p.bundleFirstBlockTimeoutReached.Inc()
p.bundleBatchesNum.Set(float64(len(batches)))
return p.updateDBBundleInfo(batches, codecVersion)
return p.UpdateDBBundleInfo(batches, codecVersion)
}
log.Debug("pending batches are not enough and do not contain a timeout batch")

View File

@@ -86,15 +86,19 @@ func testBundleProposerLimitsCodecV7(t *testing.T) {
_, err = batchOrm.InsertBatch(context.Background(), batch, encoding.CodecV0, utils.BatchMetrics{})
assert.NoError(t, err)
block3 := *block1
block3.Header = &gethTypes.Header{}
*block3.Header = *block1.Header
block3.Header.Number = new(big.Int).SetUint64(block2.Header.Number.Uint64() + 1)
l2BlockOrm := orm.NewL2Block(db)
err = l2BlockOrm.InsertL2Blocks(context.Background(), []*encoding.Block{block1, block2})
err = l2BlockOrm.InsertL2Blocks(context.Background(), []*encoding.Block{block1, block2, &block3})
assert.NoError(t, err)
chainConfig := &params.ChainConfig{LondonBlock: big.NewInt(0), BernoulliBlock: big.NewInt(0), CurieBlock: big.NewInt(0), DarwinTime: new(uint64), DarwinV2Time: new(uint64), EuclidTime: new(uint64), EuclidV2Time: new(uint64)}
cp := NewChunkProposer(context.Background(), &config.ChunkProposerConfig{
MaxBlockNumPerChunk: 1,
MaxL2GasPerChunk: math.MaxUint64,
MaxL2GasPerChunk: 1152994, // One block per chunk via gas limit
ChunkTimeoutSec: math.MaxUint32,
MaxUncompressedBatchBytesSize: math.MaxUint64,
}, encoding.CodecV7, chainConfig, db, nil)

View File

@@ -54,7 +54,6 @@ type ChunkProposer struct {
// NewChunkProposer creates a new ChunkProposer instance.
func NewChunkProposer(ctx context.Context, cfg *config.ChunkProposerConfig, minCodecVersion encoding.CodecVersion, chainCfg *params.ChainConfig, db *gorm.DB, reg prometheus.Registerer) *ChunkProposer {
log.Info("new chunk proposer",
"maxBlockNumPerChunk", cfg.MaxBlockNumPerChunk,
"maxL2GasPerChunk", cfg.MaxL2GasPerChunk,
"chunkTimeoutSec", cfg.ChunkTimeoutSec,
"maxBlobSize", maxBlobSize)
@@ -142,7 +141,7 @@ func (p *ChunkProposer) SetReplayDB(replayDB *gorm.DB) {
// TryProposeChunk tries to propose a new chunk.
func (p *ChunkProposer) TryProposeChunk() {
p.chunkProposerCircleTotal.Inc()
if err := p.proposeChunk(); err != nil {
if err := p.ProposeChunk(); err != nil {
p.proposeChunkFailureTotal.Inc()
log.Error("propose new chunk failed", "err", err)
return
@@ -225,17 +224,16 @@ func (p *ChunkProposer) updateDBChunkInfo(chunk *encoding.Chunk, codecVersion en
return nil
}
func (p *ChunkProposer) proposeChunk() error {
func (p *ChunkProposer) ProposeChunk() error {
// unchunkedBlockHeight >= 1, assuming genesis batch with chunk 0, block 0 is committed.
unchunkedBlockHeight, err := p.chunkOrm.GetUnchunkedBlockHeight(p.ctx)
if err != nil {
return err
}
maxBlocksThisChunk := p.cfg.MaxBlockNumPerChunk
// select at most maxBlocksThisChunk blocks
blocks, err := p.l2BlockOrm.GetL2BlocksGEHeight(p.ctx, unchunkedBlockHeight, int(maxBlocksThisChunk))
// Select up to 1000 blocks; there is no hard per-chunk block count limit anymore.
// The actual limits are enforced by the gas, timeout, and blob size constraints.
blocks, err := p.l2BlockOrm.GetL2BlocksGEHeight(p.ctx, unchunkedBlockHeight, 1000)
if err != nil {
return err
}
@@ -251,7 +249,7 @@ func (p *ChunkProposer) proposeChunk() error {
currentHardfork := encoding.GetHardforkName(p.chainCfg, blocks[i].Header.Number.Uint64(), blocks[i].Header.Time)
if currentHardfork != hardforkName {
blocks = blocks[:i]
maxBlocksThisChunk = uint64(i) // update maxBlocksThisChunk to trigger chunking, because these blocks are the last blocks before the hardfork
// Truncate blocks at hardfork boundary
break
}
}
@@ -324,8 +322,8 @@ func (p *ChunkProposer) proposeChunk() error {
}
currentTimeSec := uint64(time.Now().Unix())
if metrics.FirstBlockTimestamp+p.cfg.ChunkTimeoutSec < currentTimeSec || metrics.NumBlocks == maxBlocksThisChunk {
log.Info("reached maximum number of blocks in chunk or first block timeout",
if metrics.FirstBlockTimestamp+p.cfg.ChunkTimeoutSec < currentTimeSec {
log.Info("first block timeout reached",
"block count", len(chunk.Blocks),
"start block number", chunk.Blocks[0].Header.Number,
"start block timestamp", metrics.FirstBlockTimestamp,

View File

@@ -22,7 +22,6 @@ import (
func testChunkProposerLimitsCodecV7(t *testing.T) {
tests := []struct {
name string
maxBlockNum uint64
maxL2Gas uint64
chunkTimeoutSec uint64
expectedChunksLen int
@@ -30,14 +29,12 @@ func testChunkProposerLimitsCodecV7(t *testing.T) {
}{
{
name: "NoLimitReached",
maxBlockNum: 100,
maxL2Gas: 20_000_000,
chunkTimeoutSec: 1000000000000,
expectedChunksLen: 0,
},
{
name: "Timeout",
maxBlockNum: 100,
maxL2Gas: 20_000_000,
chunkTimeoutSec: 0,
expectedChunksLen: 1,
@@ -45,15 +42,13 @@ func testChunkProposerLimitsCodecV7(t *testing.T) {
},
{
name: "MaxL2GasPerChunkIs0",
maxBlockNum: 10,
maxL2Gas: 0,
chunkTimeoutSec: 1000000000000,
expectedChunksLen: 0,
},
{
name: "MaxBlockNumPerChunkIs1",
maxBlockNum: 1,
maxL2Gas: 20_000_000,
name: "SingleBlockByGasLimit",
maxL2Gas: 1_100_000,
chunkTimeoutSec: 1000000000000,
expectedChunksLen: 1,
expectedBlocksInFirstChunk: 1,
@@ -62,7 +57,6 @@ func testChunkProposerLimitsCodecV7(t *testing.T) {
// In this test the second block is not included in the chunk because together
// with the first block it exceeds the maxL2GasPerChunk limit.
name: "MaxL2GasPerChunkIsSecondBlock",
maxBlockNum: 10,
maxL2Gas: 1_153_000,
chunkTimeoutSec: 1000000000000,
expectedChunksLen: 1,
@@ -85,7 +79,6 @@ func testChunkProposerLimitsCodecV7(t *testing.T) {
assert.NoError(t, err)
cp := NewChunkProposer(context.Background(), &config.ChunkProposerConfig{
MaxBlockNumPerChunk: tt.maxBlockNum,
MaxL2GasPerChunk: tt.maxL2Gas,
ChunkTimeoutSec: tt.chunkTimeoutSec,
MaxUncompressedBatchBytesSize: math.MaxUint64,
@@ -110,53 +103,6 @@ func testChunkProposerLimitsCodecV7(t *testing.T) {
}
}
func testChunkProposerBlobSizeLimitCodecV7(t *testing.T) {
db := setupDB(t)
defer database.CloseDB(db)
block := readBlockFromJSON(t, "../../../testdata/blockTrace_03.json")
for i := uint64(0); i < 510; i++ {
l2BlockOrm := orm.NewL2Block(db)
block.Header.Number = new(big.Int).SetUint64(i + 1)
block.Header.Time = i + 1
err := l2BlockOrm.InsertL2Blocks(context.Background(), []*encoding.Block{block})
assert.NoError(t, err)
}
// Add genesis chunk.
chunkOrm := orm.NewChunk(db)
_, err := chunkOrm.InsertChunk(context.Background(), &encoding.Chunk{Blocks: []*encoding.Block{{Header: &gethTypes.Header{Number: big.NewInt(0)}}}}, encoding.CodecV0, utils.ChunkMetrics{})
assert.NoError(t, err)
chainConfig := &params.ChainConfig{LondonBlock: big.NewInt(0), BernoulliBlock: big.NewInt(0), CurieBlock: big.NewInt(0), DarwinTime: new(uint64), DarwinV2Time: new(uint64), EuclidTime: new(uint64), EuclidV2Time: new(uint64)}
cp := NewChunkProposer(context.Background(), &config.ChunkProposerConfig{
MaxBlockNumPerChunk: 255,
MaxL2GasPerChunk: math.MaxUint64,
ChunkTimeoutSec: math.MaxUint32,
MaxUncompressedBatchBytesSize: math.MaxUint64,
}, encoding.CodecV7, chainConfig, db, nil)
for i := 0; i < 2; i++ {
cp.TryProposeChunk()
}
chunkOrm = orm.NewChunk(db)
chunks, err := chunkOrm.GetChunksGEIndex(context.Background(), 1, 0)
assert.NoError(t, err)
var expectedNumChunks int = 2
var numBlocksMultiplier uint64 = 255
assert.Len(t, chunks, expectedNumChunks)
for i, chunk := range chunks {
expected := numBlocksMultiplier * (uint64(i) + 1)
if expected > 2000 {
expected = 2000
}
assert.Equal(t, expected, chunk.EndBlockNumber)
}
}
func testChunkProposerUncompressedBatchBytesLimitCodecV8(t *testing.T) {
db := setupDB(t)
defer database.CloseDB(db)
@@ -204,7 +150,6 @@ func testChunkProposerUncompressedBatchBytesLimitCodecV8(t *testing.T) {
// Set max_uncompressed_batch_bytes_size to 4KiB (4 * 1024)
// One block (~3KiB) should fit, but two blocks (~6KiB) should exceed the limit
cp := NewChunkProposer(context.Background(), &config.ChunkProposerConfig{
MaxBlockNumPerChunk: math.MaxUint64, // No block number limit
MaxL2GasPerChunk: math.MaxUint64, // No gas limit
ChunkTimeoutSec: math.MaxUint32, // No timeout limit
MaxUncompressedBatchBytesSize: 4 * 1024, // 4KiB limit

View File
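With the block-count cap removed, the proposer above seals a chunk on either the first-block timeout or the per-chunk L2 gas cap. A minimal sketch of that decision, with hypothetical names (`chunkMetrics`, `shouldSealChunk`) standing in for the real proposer types:

```go
package main

import "fmt"

// chunkMetrics mirrors the fields the proposer consults (hypothetical names).
type chunkMetrics struct {
	FirstBlockTimestamp uint64 // unix seconds of the chunk's first block
	L2Gas               uint64 // cumulative L2 gas of blocks in the chunk
}

// shouldSealChunk reports whether a pending chunk must be proposed now:
// either the first block has waited past timeoutSec, or the accumulated
// gas has reached maxL2Gas.
func shouldSealChunk(m chunkMetrics, nowSec, timeoutSec, maxL2Gas uint64) bool {
	if m.FirstBlockTimestamp+timeoutSec < nowSec {
		return true // first-block timeout reached
	}
	return m.L2Gas >= maxL2Gas // gas cap reached
}

func main() {
	m := chunkMetrics{FirstBlockTimestamp: 1_000, L2Gas: 19_000_000}
	fmt.Println(shouldSealChunk(m, 1_500, 300, 20_000_000)) // true: 1000+300 < 1500
	fmt.Println(shouldSealChunk(m, 1_100, 300, 20_000_000)) // false: no limit hit yet
}
```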

@@ -59,12 +59,12 @@ func NewL2WatcherClient(ctx context.Context, client *ethclient.Client, confirmat
const blocksFetchLimit = uint64(10)
// TryFetchRunningMissingBlocks attempts to fetch and store block traces for any missing blocks.
func (w *L2WatcherClient) TryFetchRunningMissingBlocks(blockHeight uint64) {
func (w *L2WatcherClient) TryFetchRunningMissingBlocks(blockHeight uint64) error {
w.metrics.fetchRunningMissingBlocksTotal.Inc()
heightInDB, err := w.l2BlockOrm.GetL2BlocksLatestHeight(w.ctx)
if err != nil {
log.Error("failed to GetL2BlocksLatestHeight", "err", err)
return
return fmt.Errorf("failed to GetL2BlocksLatestHeight: %w", err)
}
// Fetch and store block traces for missing blocks
@@ -75,22 +75,24 @@ func (w *L2WatcherClient) TryFetchRunningMissingBlocks(blockHeight uint64) {
to = blockHeight
}
if err = w.getAndStoreBlocks(w.ctx, from, to); err != nil {
if err = w.GetAndStoreBlocks(w.ctx, from, to); err != nil {
log.Error("fail to getAndStoreBlockTraces", "from", from, "to", to, "err", err)
return
return fmt.Errorf("fail to getAndStoreBlockTraces: %w", err)
}
w.metrics.fetchRunningMissingBlocksHeight.Set(float64(to))
w.metrics.rollupL2BlocksFetchedGap.Set(float64(blockHeight - to))
}
return nil
}
func (w *L2WatcherClient) getAndStoreBlocks(ctx context.Context, from, to uint64) error {
func (w *L2WatcherClient) GetAndStoreBlocks(ctx context.Context, from, to uint64) error {
var blocks []*encoding.Block
for number := from; number <= to; number++ {
log.Debug("retrieving block", "height", number)
block, err := w.GetBlockByNumberOrHash(ctx, rpc.BlockNumberOrHashWithNumber(rpc.BlockNumber(number)))
block, err := w.BlockByNumber(ctx, new(big.Int).SetUint64(number))
if err != nil {
return fmt.Errorf("failed to GetBlockByNumberOrHash: %v. number: %v", err, number)
return fmt.Errorf("failed to BlockByNumber: %v. number: %v", err, number)
}
var count int

View File
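TryFetchRunningMissingBlocks above walks from the latest height stored in the DB up to the chain head in windows of at most blocksFetchLimit blocks per call. A rough sketch of that windowing (`fetchWindows` is a hypothetical helper, not the watcher's API):

```go
package main

import "fmt"

const blocksFetchLimit = uint64(10)

// window is one [From, To] block range to fetch in a single call.
type window struct{ From, To uint64 }

// fetchWindows splits (heightInDB, head] into ranges of at most
// blocksFetchLimit blocks, mirroring the watcher's catch-up loop.
func fetchWindows(heightInDB, head uint64) []window {
	var out []window
	for from := heightInDB + 1; from <= head; from += blocksFetchLimit {
		to := from + blocksFetchLimit - 1
		if to > head {
			to = head
		}
		out = append(out, window{from, to})
	}
	return out
}

func main() {
	// DB is at 100, chain head at 125: expect [101-110] [111-120] [121-125].
	for _, w := range fetchWindows(100, 125) {
		fmt.Printf("[%d-%d] ", w.From, w.To)
	}
	fmt.Println()
}
```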

@@ -102,7 +102,6 @@ func TestFunction(t *testing.T) {
// Run chunk proposer test cases.
t.Run("TestChunkProposerLimitsCodecV7", testChunkProposerLimitsCodecV7)
t.Run("TestChunkProposerBlobSizeLimitCodecV7", testChunkProposerBlobSizeLimitCodecV7)
t.Run("TestChunkProposerUncompressedBatchBytesLimitCodecV8", testChunkProposerUncompressedBatchBytesLimitCodecV8)
// Run batch proposer test cases.

View File

@@ -5,12 +5,15 @@ import (
"encoding/json"
"errors"
"fmt"
"math/big"
"time"
"github.com/scroll-tech/da-codec/encoding"
"github.com/scroll-tech/go-ethereum/log"
"gorm.io/gorm"
"github.com/scroll-tech/go-ethereum/common"
"github.com/scroll-tech/go-ethereum/log"
"scroll-tech/common/types"
"scroll-tech/common/types/message"
"scroll-tech/common/utils"
@@ -263,6 +266,19 @@ func (o *Batch) GetBatchByIndex(ctx context.Context, index uint64) (*Batch, erro
return &batch, nil
}
// GetBatchByHash retrieves the batch by the given hash.
func (o *Batch) GetBatchByHash(ctx context.Context, hash string) (*Batch, error) {
db := o.db.WithContext(ctx)
db = db.Model(&Batch{})
db = db.Where("hash = ?", hash)
var batch Batch
if err := db.First(&batch).Error; err != nil {
return nil, fmt.Errorf("Batch.GetBatchByHash error: %w, batch hash: %v", err, hash)
}
return &batch, nil
}
// InsertBatch inserts a new batch into the database.
func (o *Batch) InsertBatch(ctx context.Context, batch *encoding.Batch, codecVersion encoding.CodecVersion, metrics rutils.BatchMetrics, dbTX ...*gorm.DB) (*Batch, error) {
if batch == nil {
@@ -338,6 +354,37 @@ func (o *Batch) InsertBatch(ctx context.Context, batch *encoding.Batch, codecVer
return &newBatch, nil
}
func (o *Batch) InsertPermissionlessBatch(ctx context.Context, batchIndex *big.Int, batchHash common.Hash, codecVersion encoding.CodecVersion, chunk *Chunk) (*Batch, error) {
now := time.Now()
newBatch := &Batch{
Index: batchIndex.Uint64(),
Hash: batchHash.Hex(),
StartChunkIndex: chunk.Index,
StartChunkHash: chunk.Hash,
EndChunkIndex: chunk.Index,
EndChunkHash: chunk.Hash,
StateRoot: chunk.StateRoot,
PrevL1MessageQueueHash: chunk.PrevL1MessageQueueHash,
PostL1MessageQueueHash: chunk.PostL1MessageQueueHash,
BatchHeader: []byte{1, 2, 3},
CodecVersion: int16(codecVersion),
EnableCompress: false,
ProvingStatus: int16(types.ProvingTaskVerified),
ProvedAt: &now,
RollupStatus: int16(types.RollupFinalized),
FinalizedAt: &now,
}
db := o.db.WithContext(ctx)
db = db.Model(&Batch{})
if err := db.Create(newBatch).Error; err != nil {
return nil, fmt.Errorf("Batch.InsertPermissionlessBatch error: %w", err)
}
return newBatch, nil
}
// UpdateProvingStatus updates the proving status of a batch.
func (o *Batch) UpdateProvingStatus(ctx context.Context, hash string, status types.ProvingStatus, dbTX ...*gorm.DB) error {
updateFields := make(map[string]interface{})
@@ -366,6 +413,29 @@ func (o *Batch) UpdateProvingStatus(ctx context.Context, hash string, status typ
return nil
}
func (o *Batch) UpdateRollupStatusCommitAndFinalizeTxHash(ctx context.Context, hash string, status types.RollupStatus, commitTxHash string, finalizeTxHash string, dbTX ...*gorm.DB) error {
updateFields := make(map[string]interface{})
updateFields["commit_tx_hash"] = commitTxHash
updateFields["committed_at"] = utils.NowUTC()
updateFields["finalize_tx_hash"] = finalizeTxHash
updateFields["finalized_at"] = utils.NowUTC()
updateFields["rollup_status"] = int(status)
db := o.db
if len(dbTX) > 0 && dbTX[0] != nil {
db = dbTX[0]
}
db = db.WithContext(ctx)
db = db.Model(&Batch{})
db = db.Where("hash", hash)
if err := db.Updates(updateFields).Error; err != nil {
return fmt.Errorf("Batch.UpdateRollupStatusCommitAndFinalizeTxHash error: %w, batch hash: %v, status: %v, commitTxHash: %v, finalizeTxHash: %v", err, hash, status.String(), commitTxHash, finalizeTxHash)
}
return nil
}
// UpdateRollupStatus updates the rollup status of a batch.
func (o *Batch) UpdateRollupStatus(ctx context.Context, hash string, status types.RollupStatus, dbTX ...*gorm.DB) error {
updateFields := make(map[string]interface{})

View File
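The updated ORM methods such as UpdateRollupStatusCommitAndFinalizeTxHash take an optional transaction via a variadic `...*gorm.DB` and fall back to the default handle when none is passed. The pattern in isolation, with a placeholder `DB` type standing in for `*gorm.DB`:

```go
package main

import "fmt"

// DB is a stand-in for *gorm.DB; only the selection logic matters here.
type DB struct{ name string }

type store struct{ db *DB }

// resolveDB returns the caller-supplied transaction when one is passed,
// otherwise the store's default handle -- the same variadic-optional
// pattern used by the ORM update methods.
func (s *store) resolveDB(dbTX ...*DB) *DB {
	if len(dbTX) > 0 && dbTX[0] != nil {
		return dbTX[0]
	}
	return s.db
}

func main() {
	s := &store{db: &DB{name: "default"}}
	tx := &DB{name: "tx"}
	fmt.Println(s.resolveDB().name)   // default
	fmt.Println(s.resolveDB(tx).name) // tx
}
```

Callers that do not care about transactions keep their old call sites unchanged, which is why the signature change is backward compatible.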

@@ -59,8 +59,8 @@ func (*Bundle) TableName() string {
return "bundle"
}
// getLatestBundle retrieves the latest bundle from the database.
func (o *Bundle) getLatestBundle(ctx context.Context) (*Bundle, error) {
// GetLatestBundle retrieves the latest bundle from the database.
func (o *Bundle) GetLatestBundle(ctx context.Context) (*Bundle, error) {
db := o.db.WithContext(ctx)
db = db.Model(&Bundle{})
db = db.Order("index desc")
@@ -70,7 +70,7 @@ func (o *Bundle) getLatestBundle(ctx context.Context) (*Bundle, error) {
if errors.Is(err, gorm.ErrRecordNotFound) {
return nil, nil
}
return nil, fmt.Errorf("getLatestBundle error: %w", err)
return nil, fmt.Errorf("GetLatestBundle error: %w", err)
}
return &latestBundle, nil
}
@@ -106,7 +106,7 @@ func (o *Bundle) GetBundles(ctx context.Context, fields map[string]interface{},
// GetFirstUnbundledBatchIndex retrieves the first unbundled batch index.
func (o *Bundle) GetFirstUnbundledBatchIndex(ctx context.Context) (uint64, error) {
// Get the latest bundle
latestBundle, err := o.getLatestBundle(ctx)
latestBundle, err := o.GetLatestBundle(ctx)
if err != nil {
return 0, fmt.Errorf("Bundle.GetFirstUnbundledBatchIndex error: %w", err)
}
@@ -237,14 +237,18 @@ func (o *Bundle) UpdateProvingStatus(ctx context.Context, hash string, status ty
// UpdateRollupStatus updates the rollup status for a bundle.
// only used in unit tests.
func (o *Bundle) UpdateRollupStatus(ctx context.Context, hash string, status types.RollupStatus) error {
func (o *Bundle) UpdateRollupStatus(ctx context.Context, hash string, status types.RollupStatus, dbTX ...*gorm.DB) error {
updateFields := make(map[string]interface{})
updateFields["rollup_status"] = int(status)
if status == types.RollupFinalized {
updateFields["finalized_at"] = utils.NowUTC()
}
db := o.db.WithContext(ctx)
db := o.db
if len(dbTX) > 0 && dbTX[0] != nil {
db = dbTX[0]
}
db = db.WithContext(ctx)
db = db.Model(&Bundle{})
db = db.Where("hash", hash)

View File

@@ -7,9 +7,12 @@ import (
"time"
"github.com/scroll-tech/da-codec/encoding"
"github.com/scroll-tech/go-ethereum/log"
"gorm.io/gorm"
"github.com/scroll-tech/go-ethereum/common"
"github.com/scroll-tech/go-ethereum/crypto"
"github.com/scroll-tech/go-ethereum/log"
"scroll-tech/common/types"
"scroll-tech/common/utils"
@@ -275,6 +278,48 @@ func (o *Chunk) InsertChunk(ctx context.Context, chunk *encoding.Chunk, codecVer
return &newChunk, nil
}
func (o *Chunk) InsertPermissionlessChunk(ctx context.Context, index uint64, codecVersion encoding.CodecVersion, daBlobPayload encoding.DABlobPayload, totalL1MessagePoppedBefore uint64, stateRoot common.Hash) (*Chunk, error) {
// Create a unique identifier; it is only used as a key in the DB.
var chunkBytes []byte
for _, block := range daBlobPayload.Blocks() {
blockBytes := block.Encode()
chunkBytes = append(chunkBytes, blockBytes...)
}
hash := crypto.Keccak256Hash(chunkBytes)
numBlocks := len(daBlobPayload.Blocks())
emptyHash := common.Hash{}.Hex()
newChunk := &Chunk{
Index: index,
Hash: hash.Hex(),
StartBlockNumber: daBlobPayload.Blocks()[0].Number(),
StartBlockHash: emptyHash,
EndBlockNumber: daBlobPayload.Blocks()[numBlocks-1].Number(),
EndBlockHash: emptyHash,
StartBlockTime: daBlobPayload.Blocks()[0].Timestamp(),
TotalL1MessagesPoppedInChunk: 0, // this needs to be 0 so that the calculation of the total L1 messages popped before for the next chunk is correct
TotalL1MessagesPoppedBefore: totalL1MessagePoppedBefore,
PrevL1MessageQueueHash: daBlobPayload.PrevL1MessageQueueHash().Hex(),
PostL1MessageQueueHash: daBlobPayload.PostL1MessageQueueHash().Hex(),
ParentChunkHash: emptyHash,
StateRoot: stateRoot.Hex(),
ParentChunkStateRoot: emptyHash,
WithdrawRoot: emptyHash,
CodecVersion: int16(codecVersion),
EnableCompress: false,
ProvingStatus: int16(types.ProvingTaskVerified),
}
db := o.db.WithContext(ctx)
db = db.Model(&Chunk{})
if err := db.Create(newChunk).Error; err != nil {
return nil, fmt.Errorf("Chunk.InsertPermissionlessChunk error: %w, chunk hash: %v", err, newChunk.Hash)
}
return newChunk, nil
}
// InsertTestChunkForProposerTool inserts a new chunk into the database only for analysis usage by proposer tool.
func (o *Chunk) InsertTestChunkForProposerTool(ctx context.Context, chunk *encoding.Chunk, codecVersion encoding.CodecVersion, totalL1MessagePoppedBefore uint64, dbTX ...*gorm.DB) (*Chunk, error) {
if chunk == nil || len(chunk.Blocks) == 0 {

View File

@@ -2,7 +2,6 @@
"l2_config": {
"endpoint": "https://rpc.scroll.io",
"chunk_proposer_config": {
"max_block_num_per_chunk": 100,
"max_l2_gas_per_chunk": 20000000,
"chunk_timeout_sec": 72000000000,
"max_uncompressed_batch_bytes_size": 4194304

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,215 @@
package main
import (
"context"
"errors"
"math/rand"
"sort"
"gorm.io/gorm"
"github.com/scroll-tech/da-codec/encoding"
"github.com/scroll-tech/go-ethereum/common"
"github.com/scroll-tech/go-ethereum/log"
"scroll-tech/common/database"
"scroll-tech/rollup/internal/orm"
"scroll-tech/rollup/internal/utils"
)
type importRecord struct {
Chunk []string `json:"chunks"`
Batch []string `json:"batches"`
Bundle []string `json:"bundles"`
}
func randomPickKfromN(n, k int, rng *rand.Rand) []int {
ret := make([]int, n-1)
for i := 1; i < n; i++ {
ret[i-1] = i
}
rng.Shuffle(len(ret), func(i, j int) {
ret[i], ret[j] = ret[j], ret[i]
})
ret = ret[:k-1]
sort.Ints(ret)
return ret
}
func importData(ctx context.Context, beginBlk, endBlk uint64, chkNum, batchNum, bundleNum int, seed int64) (*importRecord, error) {
db, err := database.InitDB(cfg.DBConfig)
if err != nil {
return nil, err
}
ret := &importRecord{}
// Create a new random source with the provided seed
source := rand.NewSource(seed)
//nolint:all
rng := rand.New(source)
chkSepIdx := randomPickKfromN(int(endBlk-beginBlk)+1, chkNum, rng)
chkSep := make([]uint64, len(chkSepIdx))
for i, ind := range chkSepIdx {
chkSep[i] = beginBlk + uint64(ind)
}
chkSep = append(chkSep, endBlk+1)
log.Info("separated chunk", "border", chkSep)
head := beginBlk
lastMsgHash := common.Hash{}
ormChks := make([]*orm.Chunk, 0, chkNum)
encChks := make([]*encoding.Chunk, 0, chkNum)
for _, edBlk := range chkSep {
ormChk, chk, err := importChunk(ctx, db, head, edBlk-1, lastMsgHash)
if err != nil {
return nil, err
}
lastMsgHash = chk.PostL1MessageQueueHash
ormChks = append(ormChks, ormChk)
encChks = append(encChks, chk)
head = edBlk
}
for _, chk := range ormChks {
ret.Chunk = append(ret.Chunk, chk.Hash)
}
batchSep := randomPickKfromN(chkNum, batchNum, rng)
batchSep = append(batchSep, chkNum)
log.Info("separated batch", "border", batchSep)
headChk := int(0)
batches := make([]*orm.Batch, 0, batchNum)
var lastBatch *orm.Batch
for _, endChk := range batchSep {
batch, err := importBatch(ctx, db, ormChks[headChk:endChk], encChks[headChk:endChk], lastBatch)
if err != nil {
return nil, err
}
lastBatch = batch
batches = append(batches, batch)
headChk = endChk
}
for _, batch := range batches {
ret.Batch = append(ret.Batch, batch.Hash)
}
bundleSep := randomPickKfromN(batchNum, bundleNum, rng)
bundleSep = append(bundleSep, batchNum)
log.Info("separated bundle", "border", bundleSep)
headBatch := int(0)
for _, endBatch := range bundleSep {
hash, err := importBundle(ctx, db, batches[headBatch:endBatch])
if err != nil {
return nil, err
}
ret.Bundle = append(ret.Bundle, hash)
headBatch = endBatch
}
return ret, nil
}
func importChunk(ctx context.Context, db *gorm.DB, beginBlk, endBlk uint64, prevMsgQueueHash common.Hash) (*orm.Chunk, *encoding.Chunk, error) {
nblk := int(endBlk-beginBlk) + 1
blockOrm := orm.NewL2Block(db)
blks, err := blockOrm.GetL2BlocksGEHeight(ctx, beginBlk, nblk)
if err != nil {
return nil, nil, err
}
postHash, err := encoding.MessageQueueV2ApplyL1MessagesFromBlocks(prevMsgQueueHash, blks)
if err != nil {
return nil, nil, err
}
theChunk := &encoding.Chunk{
Blocks: blks,
PrevL1MessageQueueHash: prevMsgQueueHash,
PostL1MessageQueueHash: postHash,
}
chunkOrm := orm.NewChunk(db)
dbChk, err := chunkOrm.InsertChunk(ctx, theChunk, codecCfg, utils.ChunkMetrics{})
if err != nil {
return nil, nil, err
}
err = blockOrm.UpdateChunkHashInRange(ctx, beginBlk, endBlk, dbChk.Hash)
if err != nil {
return nil, nil, err
}
log.Info("insert chunk", "From", beginBlk, "To", endBlk, "hash", dbChk.Hash)
return dbChk, theChunk, nil
}
func importBatch(ctx context.Context, db *gorm.DB, chks []*orm.Chunk, encChks []*encoding.Chunk, last *orm.Batch) (*orm.Batch, error) {
batchOrm := orm.NewBatch(db)
if last == nil {
var err error
last, err = batchOrm.GetLatestBatch(ctx)
if err != nil && !errors.Is(err, gorm.ErrRecordNotFound) {
return nil, err
} else if last != nil {
log.Info("start from last batch", "index", last.Index)
}
}
index := uint64(0)
var parentHash common.Hash
if last != nil {
index = last.Index + 1
parentHash = common.HexToHash(last.Hash)
}
var blks []*encoding.Block
for _, chk := range encChks {
blks = append(blks, chk.Blocks...)
}
batch := &encoding.Batch{
Index: index,
TotalL1MessagePoppedBefore: chks[0].TotalL1MessagesPoppedBefore,
ParentBatchHash: parentHash,
Chunks: encChks,
Blocks: blks,
}
dbBatch, err := batchOrm.InsertBatch(ctx, batch, codecCfg, utils.BatchMetrics{})
if err != nil {
return nil, err
}
err = orm.NewChunk(db).UpdateBatchHashInRange(ctx, chks[0].Index, chks[len(chks)-1].Index, dbBatch.Hash)
if err != nil {
return nil, err
}
log.Info("insert batch", "index", index)
return dbBatch, nil
}
func importBundle(ctx context.Context, db *gorm.DB, batches []*orm.Batch) (string, error) {
bundleOrm := orm.NewBundle(db)
bundle, err := bundleOrm.InsertBundle(ctx, batches, codecCfg)
if err != nil {
return "", err
}
err = orm.NewBatch(db).UpdateBundleHashInRange(ctx, batches[0].Index, batches[len(batches)-1].Index, bundle.Hash)
if err != nil {
return "", err
}
log.Info("insert bundle", "hash", bundle.Hash)
return bundle.Hash, nil
}

View File

@@ -0,0 +1,198 @@
package main
import (
"encoding/json"
"fmt"
"math/rand"
"os"
"path/filepath"
"strconv"
"strings"
"github.com/scroll-tech/da-codec/encoding"
"github.com/scroll-tech/go-ethereum/log"
"github.com/urfave/cli/v2"
"scroll-tech/common/database"
"scroll-tech/common/utils"
"scroll-tech/common/version"
)
var app *cli.App
var cfg *config
var codecCfg encoding.CodecVersion = encoding.CodecV8
var outputNumFlag = cli.StringFlag{
Name: "counts",
Usage: "Counts for output (chunks,batches,bundles)",
Value: "4,2,1",
}
var outputPathFlag = cli.StringFlag{
Name: "output",
Usage: "output file path",
Value: "testset.json",
}
var seedFlag = cli.Int64Flag{
Name: "seed",
Usage: "random seed; 0 means pick one at random",
Value: 0,
}
var codecFlag = cli.IntFlag{
Name: "codec",
Usage: "codec version (6, 7, or 8); 0 (default) selects automatically",
Value: 0,
}
func parseThreeIntegers(value string) (int, int, int, error) {
// Split the input string by comma
parts := strings.Split(value, ",")
// Check that we have exactly 3 parts
if len(parts) != 3 {
return 0, 0, 0, fmt.Errorf("input must contain exactly 3 comma-separated integers, got %s", value)
}
// Parse the three integers
values := make([]int, 3)
for i, part := range parts {
// Trim any whitespace
part = strings.TrimSpace(part)
// Parse the integer
val, err := strconv.Atoi(part)
if err != nil {
return 0, 0, 0, fmt.Errorf("failed to parse '%s' as integer: %w", part, err)
}
// Check that it's positive
if val <= 0 {
return 0, 0, 0, fmt.Errorf("all integers must be greater than 0, got %d", val)
}
values[i] = val
}
// Check that first >= second >= third
if values[0] < values[1] || values[1] < values[2] {
return 0, 0, 0, fmt.Errorf("integers must be in descending order: %d >= %d >= %d",
values[0], values[1], values[2])
}
return values[0], values[1], values[2], nil
}
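parseThreeIntegers enforces exactly three positive, comma-separated values in descending order (chunks >= batches >= bundles). A trimmed, self-contained version showing the accepted and rejected shapes:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCounts is a condensed version of parseThreeIntegers: three positive
// comma-separated integers, each no smaller than the next.
func parseCounts(s string) ([3]int, error) {
	parts := strings.Split(s, ",")
	if len(parts) != 3 {
		return [3]int{}, fmt.Errorf("want 3 comma-separated integers, got %q", s)
	}
	var v [3]int
	for i, p := range parts {
		n, err := strconv.Atoi(strings.TrimSpace(p))
		if err != nil || n <= 0 {
			return [3]int{}, fmt.Errorf("bad count %q", p)
		}
		v[i] = n
	}
	if v[0] < v[1] || v[1] < v[2] {
		return [3]int{}, fmt.Errorf("counts must be descending: %v", v)
	}
	return v, nil
}

func main() {
	fmt.Println(parseCounts("4,2,1")) // the flag's default value
	_, err := parseCounts("1,2,3")    // ascending order: rejected
	fmt.Println(err != nil)
}
```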
// config holds a compatible subset of the rollup config (DB settings only).
type config struct {
DBConfig *database.Config `json:"db_config"`
}
func init() {
// Set up integration-test-tool app info.
app = cli.NewApp()
app.Action = action
app.Name = "integration-test-tool"
app.Usage = "The Scroll L2 Integration Test Tool"
app.Version = version.Version
app.Flags = append(app.Flags, &codecFlag, &seedFlag, &outputNumFlag, &outputPathFlag)
app.Flags = append(app.Flags, utils.CommonFlags...)
app.Before = func(ctx *cli.Context) error {
if err := utils.LogSetup(ctx); err != nil {
return err
}
cfgFile := ctx.String(utils.ConfigFileFlag.Name)
var err error
cfg, err = newConfig(cfgFile)
if err != nil {
log.Crit("failed to load config file", "config file", cfgFile, "error", err)
}
return nil
}
}
func newConfig(file string) (*config, error) {
buf, err := os.ReadFile(filepath.Clean(file))
if err != nil {
return nil, err
}
cfg := &config{}
err = json.Unmarshal(buf, cfg)
if err != nil {
return nil, err
}
return cfg, nil
}
func action(ctx *cli.Context) error {
if ctx.Args().Len() < 2 {
return fmt.Errorf("specify begin and end block number")
}
codecFl := ctx.Int(codecFlag.Name)
if codecFl != 0 {
switch codecFl {
case 6:
codecCfg = encoding.CodecV6
case 7:
codecCfg = encoding.CodecV7
case 8:
codecCfg = encoding.CodecV8
default:
return fmt.Errorf("invalid codec version %d", codecFl)
}
log.Info("set codec", "version", codecCfg)
}
beginBlk, err := strconv.ParseUint(ctx.Args().First(), 10, 64)
if err != nil {
return fmt.Errorf("invalid begin block number: %w", err)
}
endBlk, err := strconv.ParseUint(ctx.Args().Get(1), 10, 64)
if err != nil {
return fmt.Errorf("invalid end block number: %w", err)
}
chkNum, batchNum, bundleNum, err := parseThreeIntegers(ctx.String(outputNumFlag.Name))
if err != nil {
return err
}
seed := ctx.Int64(seedFlag.Name)
//nolint:all
if seed == 0 {
seed = rand.Int63()
}
outputPath := ctx.String(outputPathFlag.Name)
log.Info("output", "Seed", seed, "file", outputPath)
ret, err := importData(ctx.Context, beginBlk, endBlk, chkNum, batchNum, bundleNum, seed)
if err != nil {
return err
}
// Marshal the ret variable to JSON with indentation for readability
jsonData, err := json.MarshalIndent(ret, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal result data to JSON: %w", err)
}
// Write the JSON data to the specified file
err = os.WriteFile(outputPath, jsonData, 0600)
if err != nil {
return fmt.Errorf("failed to write result to file %s: %w", outputPath, err)
}
return nil
}
func main() {
if err := app.Run(os.Args); err != nil {
_, _ = fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
}

View File

@@ -118,7 +118,6 @@ func testCommitBatchAndFinalizeBundleCodecV7(t *testing.T) {
}
cp := watcher.NewChunkProposer(context.Background(), &config.ChunkProposerConfig{
MaxBlockNumPerChunk: 100,
MaxL2GasPerChunk: math.MaxUint64,
ChunkTimeoutSec: 300,
MaxUncompressedBatchBytesSize: math.MaxUint64,

View File

@@ -5,8 +5,8 @@ go 1.22
toolchain go1.22.2
require (
github.com/scroll-tech/da-codec v0.1.3-0.20250310095435-012aaee6b435
github.com/scroll-tech/go-ethereum v1.10.14-0.20250305151038-478940e79601
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178
github.com/scroll-tech/go-ethereum v1.10.14-0.20250625112225-a67863c65587
github.com/stretchr/testify v1.10.0
gorm.io/gorm v1.25.7-0.20240204074919-46816ad31dde
)

View File

@@ -93,10 +93,10 @@ github.com/rjeczalik/notify v0.9.1 h1:CLCKso/QK1snAlnhNR/CNvNiFU2saUtjV0bx3EwNeC
github.com/rjeczalik/notify v0.9.1/go.mod h1:rKwnCoCGeuQnwBtTSPL9Dad03Vh2n40ePRrjvIXnJho=
github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8=
github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
github.com/scroll-tech/da-codec v0.1.3-0.20250310095435-012aaee6b435 h1:X9fkvjrYBY79lGgKEPpUhuiJ4vWpWwzOVw4H8CU8L54=
github.com/scroll-tech/da-codec v0.1.3-0.20250310095435-012aaee6b435/go.mod h1:yhTS9OVC0xQGhg7DN5iV5KZJvnSIlFWAxDdp+6jxQtY=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250305151038-478940e79601 h1:NEsjCG6uSvLRBlsP3+x6PL1kM+Ojs3g8UGotIPgJSz8=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250305151038-478940e79601/go.mod h1:OblWe1+QrZwdpwO0j/LY3BSGuKT3YPUFBDQQgvvfStQ=
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178 h1:4utngmJHXSOS5FoSdZhEV1xMRirpArbXvyoCZY9nYj0=
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178/go.mod h1:Z6kN5u2khPhiqHyk172kGB7o38bH/nj7Ilrb/46wZGg=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250625112225-a67863c65587 h1:wG1+gb+K4iLtxAHhiAreMdIjP5x9hB64duraN2+u1QU=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250625112225-a67863c65587/go.mod h1:YyfB2AyAtphlbIuDQgaxc2b9mo0zE4EBA1+qtXvzlmg=
github.com/scroll-tech/zktrie v0.8.4 h1:UagmnZ4Z3ITCk+aUq9NQZJNAwnWl4gSxsLb2Nl7IgRE=
github.com/scroll-tech/zktrie v0.8.4/go.mod h1:XvNo7vAk8yxNyTjBDj5WIiFzYW4bx/gJ78+NK6Zn6Uk=
github.com/shirou/gopsutil v3.21.11+incompatible h1:+1+c1VGhc88SSonWP6foOcLhvnKlUeu/erjjvaPEYiI=

tests/prover-e2e/.env Normal file
View File

@@ -0,0 +1,3 @@
GOOSE_DRIVER=postgres
GOOSE_DBSTRING=postgresql://dev:dev@localhost:5432/scroll?sslmode=disable
GOOSE_MIGRATION_DIR=../../database/migrate/migrations

tests/prover-e2e/.gitignore vendored Normal file
View File

@@ -0,0 +1,2 @@
build/*
testset.json

View File

@@ -0,0 +1,132 @@
-- +goose Up
-- +goose StatementBegin
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973700', '0x29c84f0df09fda2c6c63d314bb6714dbbdeca3b85c91743f9e25d3f81c28b986', '0x01aabb5d1d7edadd10011b4099de7ed703b9ce495717cd48a304ff4db3710d8a', '{"parentHash":"0x01aabb5d1d7edadd10011b4099de7ed703b9ce495717cd48a304ff4db3710d8a","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0x347733a157bc7045f6a1d5bfd37d51763f3503b63290576a65b3b83265add2cf","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa77204","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36e5","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4209","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0x29c84f0df09fda2c6c63d314bb6714dbbdeca3b85c91743f9e25d3f81c28b986"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0x347733a157bc7045f6a1d5bfd37d51763f3503b63290576a65b3b83265add2cf', '0', '0', '1753167589', '', '0x206c062cf0991353ba5ebc9888ca224f470ad3edf8e8e01125726a3858ebdd73', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973701', '0x21ad5215b5b51cb5eb5ea6a19444804ca1628cbf8ef8cf1977660d8c468c0151', '0x29c84f0df09fda2c6c63d314bb6714dbbdeca3b85c91743f9e25d3f81c28b986', '{"parentHash":"0x29c84f0df09fda2c6c63d314bb6714dbbdeca3b85c91743f9e25d3f81c28b986","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0x347733a157bc7045f6a1d5bfd37d51763f3503b63290576a65b3b83265add2cf","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa77205","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36e6","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4209","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0x21ad5215b5b51cb5eb5ea6a19444804ca1628cbf8ef8cf1977660d8c468c0151"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0x347733a157bc7045f6a1d5bfd37d51763f3503b63290576a65b3b83265add2cf', '0', '0', '1753167590', '', '0x206c062cf0991353ba5ebc9888ca224f470ad3edf8e8e01125726a3858ebdd73', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973702', '0x8c9ce33a9b62060b193c01f31518f3bfc5d8132c11569a5b4db03a5b0611f30e', '0x21ad5215b5b51cb5eb5ea6a19444804ca1628cbf8ef8cf1977660d8c468c0151', '{"parentHash":"0x21ad5215b5b51cb5eb5ea6a19444804ca1628cbf8ef8cf1977660d8c468c0151","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0x347733a157bc7045f6a1d5bfd37d51763f3503b63290576a65b3b83265add2cf","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa77206","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36e7","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4209","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0x8c9ce33a9b62060b193c01f31518f3bfc5d8132c11569a5b4db03a5b0611f30e"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0x347733a157bc7045f6a1d5bfd37d51763f3503b63290576a65b3b83265add2cf', '0', '0', '1753167591', '', '0x206c062cf0991353ba5ebc9888ca224f470ad3edf8e8e01125726a3858ebdd73', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973703', '0x73eed8060ca9a36fe8bf8c981f0e44425cd69ade00ff986452f1c02d462194fe', '0x8c9ce33a9b62060b193c01f31518f3bfc5d8132c11569a5b4db03a5b0611f30e', '{"parentHash":"0x8c9ce33a9b62060b193c01f31518f3bfc5d8132c11569a5b4db03a5b0611f30e","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0x347733a157bc7045f6a1d5bfd37d51763f3503b63290576a65b3b83265add2cf","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa77207","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36e8","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4209","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0x73eed8060ca9a36fe8bf8c981f0e44425cd69ade00ff986452f1c02d462194fe"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0x347733a157bc7045f6a1d5bfd37d51763f3503b63290576a65b3b83265add2cf', '0', '0', '1753167592', '', '0x206c062cf0991353ba5ebc9888ca224f470ad3edf8e8e01125726a3858ebdd73', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973704', '0x7a6a1bede8936cfbd677cf38c43e399c8a2d0b62700caf05513bad540541b1b5', '0x73eed8060ca9a36fe8bf8c981f0e44425cd69ade00ff986452f1c02d462194fe', '{"parentHash":"0x73eed8060ca9a36fe8bf8c981f0e44425cd69ade00ff986452f1c02d462194fe","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0x347733a157bc7045f6a1d5bfd37d51763f3503b63290576a65b3b83265add2cf","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa77208","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36e9","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4209","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0x7a6a1bede8936cfbd677cf38c43e399c8a2d0b62700caf05513bad540541b1b5"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0x347733a157bc7045f6a1d5bfd37d51763f3503b63290576a65b3b83265add2cf', '0', '0', '1753167593', '', '0x206c062cf0991353ba5ebc9888ca224f470ad3edf8e8e01125726a3858ebdd73', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973705', '0x1a5306293b7801a42c4402f9d32e7a45d640c49506ee61452da0120f3e242424', '0x7a6a1bede8936cfbd677cf38c43e399c8a2d0b62700caf05513bad540541b1b5', '{"parentHash":"0x7a6a1bede8936cfbd677cf38c43e399c8a2d0b62700caf05513bad540541b1b5","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0x347733a157bc7045f6a1d5bfd37d51763f3503b63290576a65b3b83265add2cf","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa77209","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36ea","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4209","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0x1a5306293b7801a42c4402f9d32e7a45d640c49506ee61452da0120f3e242424"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0x347733a157bc7045f6a1d5bfd37d51763f3503b63290576a65b3b83265add2cf', '0', '0', '1753167594', '', '0x206c062cf0991353ba5ebc9888ca224f470ad3edf8e8e01125726a3858ebdd73', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973706', '0xc89f391a03c7138f676bd8babed6589035b2b7d8c8b99071cb90661f7f996386', '0x1a5306293b7801a42c4402f9d32e7a45d640c49506ee61452da0120f3e242424', '{"parentHash":"0x1a5306293b7801a42c4402f9d32e7a45d640c49506ee61452da0120f3e242424","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0x347733a157bc7045f6a1d5bfd37d51763f3503b63290576a65b3b83265add2cf","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa7720a","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36eb","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4209","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0xc89f391a03c7138f676bd8babed6589035b2b7d8c8b99071cb90661f7f996386"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0x347733a157bc7045f6a1d5bfd37d51763f3503b63290576a65b3b83265add2cf', '0', '0', '1753167595', '', '0x206c062cf0991353ba5ebc9888ca224f470ad3edf8e8e01125726a3858ebdd73', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973707', '0x38d72b14ef43e548ab0ffd84892e7c3accdd11e1121c2c7a94b953b4e896eb41', '0xc89f391a03c7138f676bd8babed6589035b2b7d8c8b99071cb90661f7f996386', '{"parentHash":"0xc89f391a03c7138f676bd8babed6589035b2b7d8c8b99071cb90661f7f996386","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0xb920df561f210a617a0c1567cb2f65350818b96b159f5aa4b9ac7915b7af4946","transactionsRoot":"0xf14cf5134833ddb4f42a017e92af371f0a71eaf5d84cb6e681c81fa023662c5d","receiptsRoot":"0x4008fb883088f1ba377310e15221fffc8e5446faf420d6a28e061e9341beb056","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000020000000000900000000000000000000000000000000000000000000000000000000000000000000000001000000008000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000020000000000000000000","difficulty":"0x1","number":"0xa7720b","gasLimit":"0x1312d00","gasUsed":"0x9642","timestamp":"0x687f36ec","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4209","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0x38d72b14ef43e548ab0ffd84892e7c3accdd11e1121c2c7a94b953b4e896eb41"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0xb920df561f210a617a0c1567cb2f65350818b96b159f5aa4b9ac7915b7af4946', '1', '38466', '1753167596', '', '0x206c062cf0991353ba5ebc9888ca224f470ad3edf8e8e01125726a3858ebdd73', 
'[{"type":2,"nonce":3392500,"txHash":"0xc15b615906602154131a6c42d7603def4bd2a769881292d831140b0b9b8f8850","gas":45919,"gasPrice":"0x1de8476","gasTipCap":"0x64","gasFeeCap":"0x1de8476","from":"0x0000000000000000000000000000000000000000","to":"0x5300000000000000000000000000000000000002","chainId":"0x8274f","value":"0x0","data":"0x39455d3a0000000000000000000000000000000000000000000000000000000000045b840000000000000000000000000000000000000000000000000000000000000001","isCreate":false,"accessList":[{"address":"0x5300000000000000000000000000000000000003","storageKeys":["0x297c59f20c6b2556a4ed35dccabbdeb8b1cf950f62aefb86b98d19b5a4aff2a2"]}],"authorizationList":null,"v":"0x1","r":"0xa1b888cc9be7990c4f6bd8a9d0d5fa743ea8173196c7ca871464becd133ba0de","s":"0x6bacc3e1a244c62eff3008795e010598d07b95a8bad7a5592ec941e121294885"}]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973708', '0x230863f98595ba0c83785caf618072ce2a876307102adbeba11b9de9c4af8a08', '0x38d72b14ef43e548ab0ffd84892e7c3accdd11e1121c2c7a94b953b4e896eb41', '{"parentHash":"0x38d72b14ef43e548ab0ffd84892e7c3accdd11e1121c2c7a94b953b4e896eb41","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0xb920df561f210a617a0c1567cb2f65350818b96b159f5aa4b9ac7915b7af4946","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa7720c","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36ed","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4209","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0x230863f98595ba0c83785caf618072ce2a876307102adbeba11b9de9c4af8a08"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0xb920df561f210a617a0c1567cb2f65350818b96b159f5aa4b9ac7915b7af4946', '0', '0', '1753167597', '', '0x206c062cf0991353ba5ebc9888ca224f470ad3edf8e8e01125726a3858ebdd73', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973709', '0xae4af19a7697c2bb10a641f07269ed7df66775d56414567245adc98befdae557', '0x230863f98595ba0c83785caf618072ce2a876307102adbeba11b9de9c4af8a08', '{"parentHash":"0x230863f98595ba0c83785caf618072ce2a876307102adbeba11b9de9c4af8a08","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0xb920df561f210a617a0c1567cb2f65350818b96b159f5aa4b9ac7915b7af4946","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa7720d","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36ee","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4209","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0xae4af19a7697c2bb10a641f07269ed7df66775d56414567245adc98befdae557"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0xb920df561f210a617a0c1567cb2f65350818b96b159f5aa4b9ac7915b7af4946', '0', '0', '1753167598', '', '0x206c062cf0991353ba5ebc9888ca224f470ad3edf8e8e01125726a3858ebdd73', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973710', '0xd2bc7e24b66940a767abaee485870e9ebd24ff28e72a3096cdccc55b85f84182', '0xae4af19a7697c2bb10a641f07269ed7df66775d56414567245adc98befdae557', '{"parentHash":"0xae4af19a7697c2bb10a641f07269ed7df66775d56414567245adc98befdae557","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0xb920df561f210a617a0c1567cb2f65350818b96b159f5aa4b9ac7915b7af4946","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa7720e","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36ef","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4209","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0xd2bc7e24b66940a767abaee485870e9ebd24ff28e72a3096cdccc55b85f84182"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0xb920df561f210a617a0c1567cb2f65350818b96b159f5aa4b9ac7915b7af4946', '0', '0', '1753167599', '', '0x206c062cf0991353ba5ebc9888ca224f470ad3edf8e8e01125726a3858ebdd73', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973711', '0x1870b94320db154de4702fe6bfb69ebc98fb531b78bf81a69b8ab658ba9d9af5', '0xd2bc7e24b66940a767abaee485870e9ebd24ff28e72a3096cdccc55b85f84182', '{"parentHash":"0xd2bc7e24b66940a767abaee485870e9ebd24ff28e72a3096cdccc55b85f84182","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0x2bdf2906a7bbb398419246c3c77804a204641259b2aeb4f4a806eb772d31c480","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa7720f","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36f0","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4209","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0x1870b94320db154de4702fe6bfb69ebc98fb531b78bf81a69b8ab658ba9d9af5"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0x2bdf2906a7bbb398419246c3c77804a204641259b2aeb4f4a806eb772d31c480', '0', '0', '1753167600', '', '0x2f73e96335a43b678e107b2ef57c7ec0297d88d4a9986c1d6f4e31f1d11fb4f4', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973712', '0x271745e26e3222352fce7052edef9adecba921f4315e48a9d55e46640ac324ce', '0x1870b94320db154de4702fe6bfb69ebc98fb531b78bf81a69b8ab658ba9d9af5', '{"parentHash":"0x1870b94320db154de4702fe6bfb69ebc98fb531b78bf81a69b8ab658ba9d9af5","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0x1ec0026bd12fe29d710e5f04e605cdb715d68a2e5bac57416066a7bc6b298762","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa77210","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36f1","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4208","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0x271745e26e3222352fce7052edef9adecba921f4315e48a9d55e46640ac324ce"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0x1ec0026bd12fe29d710e5f04e605cdb715d68a2e5bac57416066a7bc6b298762', '0', '0', '1753167601', '', '0x2f73e96335a43b678e107b2ef57c7ec0297d88d4a9986c1d6f4e31f1d11fb4f4', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973713', '0xe193fc4298a6beebbc59b83cc7e2bbdace76c24fe9b7bd76aa415159ecb60914', '0x271745e26e3222352fce7052edef9adecba921f4315e48a9d55e46640ac324ce', '{"parentHash":"0x271745e26e3222352fce7052edef9adecba921f4315e48a9d55e46640ac324ce","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0x7e29363a63f54a0e03e08cb515a98f3c416a5ade3ec15d29eddd262baf67a2a1","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa77211","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36f2","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4207","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0xe193fc4298a6beebbc59b83cc7e2bbdace76c24fe9b7bd76aa415159ecb60914"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0x7e29363a63f54a0e03e08cb515a98f3c416a5ade3ec15d29eddd262baf67a2a1', '0', '0', '1753167602', '', '0x2f73e96335a43b678e107b2ef57c7ec0297d88d4a9986c1d6f4e31f1d11fb4f4', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973714', '0x8e99d9242315a107492520e9255dd77798adbff81393d3de83960dac361bd838', '0xe193fc4298a6beebbc59b83cc7e2bbdace76c24fe9b7bd76aa415159ecb60914', '{"parentHash":"0xe193fc4298a6beebbc59b83cc7e2bbdace76c24fe9b7bd76aa415159ecb60914","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0xd818ac5028fe5aa1abd2d3ffe4693b4e96eabad35e49011e2ce920bcd76d061a","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa77212","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36f3","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4207","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0x8e99d9242315a107492520e9255dd77798adbff81393d3de83960dac361bd838"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0xd818ac5028fe5aa1abd2d3ffe4693b4e96eabad35e49011e2ce920bcd76d061a', '0', '0', '1753167603', '', '0x2f73e96335a43b678e107b2ef57c7ec0297d88d4a9986c1d6f4e31f1d11fb4f4', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973715', '0x9ee9d75a25a912e7d3907679a1e588021f4d2040c71d2e1c958538466a4fbbd6', '0x8e99d9242315a107492520e9255dd77798adbff81393d3de83960dac361bd838', '{"parentHash":"0x8e99d9242315a107492520e9255dd77798adbff81393d3de83960dac361bd838","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0x233fd85bd753e126e4df23a05c56ccde3eb6ec06ce2565a990af3347dc95b0c5","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa77213","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36f4","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4207","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0x9ee9d75a25a912e7d3907679a1e588021f4d2040c71d2e1c958538466a4fbbd6"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0x233fd85bd753e126e4df23a05c56ccde3eb6ec06ce2565a990af3347dc95b0c5', '0', '0', '1753167604', '', '0x2f73e96335a43b678e107b2ef57c7ec0297d88d4a9986c1d6f4e31f1d11fb4f4', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973716', '0x9b25094db21166930008728c487ca9dbbc1e842701c8573eaa6bea4d41c10a7e', '0x9ee9d75a25a912e7d3907679a1e588021f4d2040c71d2e1c958538466a4fbbd6', '{"parentHash":"0x9ee9d75a25a912e7d3907679a1e588021f4d2040c71d2e1c958538466a4fbbd6","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0x12be357fcc1fc28e574a7f95a5f9b3aae7e18d8ab8829c676478b4e8953a8502","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa77214","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36f5","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4207","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0x9b25094db21166930008728c487ca9dbbc1e842701c8573eaa6bea4d41c10a7e"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0x12be357fcc1fc28e574a7f95a5f9b3aae7e18d8ab8829c676478b4e8953a8502', '0', '0', '1753167605', '', '0x2f73e96335a43b678e107b2ef57c7ec0297d88d4a9986c1d6f4e31f1d11fb4f4', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973717', '0xeaab0e07b40720f8e961f28ef665f08e67797428dc1eaccba88d5d4f60341284', '0x9b25094db21166930008728c487ca9dbbc1e842701c8573eaa6bea4d41c10a7e', '{"parentHash":"0x9b25094db21166930008728c487ca9dbbc1e842701c8573eaa6bea4d41c10a7e","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0x7e49dc33343a54e9afc285155b8a35575e6924d465fe2dc543b5ea8915eb828a","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa77215","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36f6","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4207","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0xeaab0e07b40720f8e961f28ef665f08e67797428dc1eaccba88d5d4f60341284"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0x7e49dc33343a54e9afc285155b8a35575e6924d465fe2dc543b5ea8915eb828a', '0', '0', '1753167606', '', '0x2f73e96335a43b678e107b2ef57c7ec0297d88d4a9986c1d6f4e31f1d11fb4f4', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973718', '0xd6af3c7bf29f3689b516ed5c8fcf885a4e7eb2df751c35d4ebbb19fcc13628d4', '0xeaab0e07b40720f8e961f28ef665f08e67797428dc1eaccba88d5d4f60341284', '{"parentHash":"0xeaab0e07b40720f8e961f28ef665f08e67797428dc1eaccba88d5d4f60341284","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0xf1a7db2e4f463fa87e3e65b73d2abc5374302855f6af9735d5a11c94c2d93975","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa77216","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36f7","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4207","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0xd6af3c7bf29f3689b516ed5c8fcf885a4e7eb2df751c35d4ebbb19fcc13628d4"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0xf1a7db2e4f463fa87e3e65b73d2abc5374302855f6af9735d5a11c94c2d93975', '0', '0', '1753167607', '', '0x2f73e96335a43b678e107b2ef57c7ec0297d88d4a9986c1d6f4e31f1d11fb4f4', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973719', '0x5815f5b91d53d0b5c7d423f06da7cad3d45edeab1c2b590af02ceebfd33b2ce1', '0xd6af3c7bf29f3689b516ed5c8fcf885a4e7eb2df751c35d4ebbb19fcc13628d4', '{"parentHash":"0xd6af3c7bf29f3689b516ed5c8fcf885a4e7eb2df751c35d4ebbb19fcc13628d4","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0xb5d1f420ddc1edb60c7fc3a06929a2014c548d1ddd52a78ab6984faed53a09d1","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa77217","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36f8","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4207","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0x5815f5b91d53d0b5c7d423f06da7cad3d45edeab1c2b590af02ceebfd33b2ce1"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0xb5d1f420ddc1edb60c7fc3a06929a2014c548d1ddd52a78ab6984faed53a09d1', '0', '0', '1753167608', '', '0x2f73e96335a43b678e107b2ef57c7ec0297d88d4a9986c1d6f4e31f1d11fb4f4', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973720', '0x4d8bbd6a15515cacf18cf9810ca3867442cefc733e9cdeaa1527a008fbca3bd1', '0x5815f5b91d53d0b5c7d423f06da7cad3d45edeab1c2b590af02ceebfd33b2ce1', '{"parentHash":"0x5815f5b91d53d0b5c7d423f06da7cad3d45edeab1c2b590af02ceebfd33b2ce1","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0xf4ca7c941e6ad6a780ad8422a817c6a7916f3f80b5f0d0f95cabcb17b0531299","transactionsRoot":"0x79c3ba4e0fe89ddea0ed8becdbfff86f18dab3ffd21eaf13744b86cb104d664e","receiptsRoot":"0xc8f88931c3c4ca18cb582e490d7acabfbe04fd6fa971549af6bf927aec7bfa1f","logsBloom":"0x00000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000040000000000000000000000000000000000000000000000000000000000000000000000002000000000000000000000000000000000000200000000000000000000000000000000000004000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa77218","gasLimit":"0x1312d00","gasUsed":"0x7623","timestamp":"0x687f36f9","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4207","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0x4d8bbd6a15515cacf18cf9810ca3867442cefc733e9cdeaa1527a008fbca3bd1"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0xf4ca7c941e6ad6a780ad8422a817c6a7916f3f80b5f0d0f95cabcb17b0531299', '1', '30243', '1753167609', '', '0x2f73e96335a43b678e107b2ef57c7ec0297d88d4a9986c1d6f4e31f1d11fb4f4', 
'[{"type":0,"nonce":13627,"txHash":"0x539962b9584723f919b9f3a0b454622f5f51c195300564116d0cedfec17a1381","gas":30243,"gasPrice":"0xef426b","gasTipCap":"0xef426b","gasFeeCap":"0xef426b","from":"0x0000000000000000000000000000000000000000","to":"0xf07cc6482a24843efe7b42259acbaf8d0a2a6952","chainId":"0x8274f","value":"0x0","data":"0x91b7f5ed0000000000000000000000000000000000000000000018f4c5be1c1407000000","isCreate":false,"accessList":null,"authorizationList":null,"v":"0x104ec2","r":"0xaa309d7e218825160be9a87c9e50d3cbfead9c87e90e984ad0ea2441633092a2","s":"0x438f39c0af058794f320e5578720557af07c5397e363f9628a6c4ffee5bd2487"}]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973721', '0x84c53d922cfe0558c4be02865af5ebd49efe44a458dabc16aea5584e5e06f346', '0x4d8bbd6a15515cacf18cf9810ca3867442cefc733e9cdeaa1527a008fbca3bd1', '{"parentHash":"0x4d8bbd6a15515cacf18cf9810ca3867442cefc733e9cdeaa1527a008fbca3bd1","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0xd4838ba86f5a8e865a41ef7547148b6074235a658dd57ff2296c0badda4760d1","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa77219","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36fa","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4207","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0x84c53d922cfe0558c4be02865af5ebd49efe44a458dabc16aea5584e5e06f346"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0xd4838ba86f5a8e865a41ef7547148b6074235a658dd57ff2296c0badda4760d1', '0', '0', '1753167610', '', '0x2f73e96335a43b678e107b2ef57c7ec0297d88d4a9986c1d6f4e31f1d11fb4f4', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973722', '0xe1d601522b08d98852b4c7dc3584f292ac246a3dac3c600ba58bd6c20c97be5b', '0x84c53d922cfe0558c4be02865af5ebd49efe44a458dabc16aea5584e5e06f346', '{"parentHash":"0x84c53d922cfe0558c4be02865af5ebd49efe44a458dabc16aea5584e5e06f346","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0x8deede75e20423d0495cbdb493d320dddde6df0459df998608a16f658eb7bec3","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa7721a","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36fb","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4207","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0xe1d601522b08d98852b4c7dc3584f292ac246a3dac3c600ba58bd6c20c97be5b"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0x8deede75e20423d0495cbdb493d320dddde6df0459df998608a16f658eb7bec3', '0', '0', '1753167611', '', '0x2f73e96335a43b678e107b2ef57c7ec0297d88d4a9986c1d6f4e31f1d11fb4f4', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973723', '0x8579712fc434b401f1ecfcf3ae22611be054480fa882e90f8eecb6c5e97534bd', '0xe1d601522b08d98852b4c7dc3584f292ac246a3dac3c600ba58bd6c20c97be5b', '{"parentHash":"0xe1d601522b08d98852b4c7dc3584f292ac246a3dac3c600ba58bd6c20c97be5b","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0xb4fe51cda0401bb19e8448a2697a49e1fbc25398c2b18a9955d0a8e6f4b153a7","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa7721b","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36fc","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4207","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0x8579712fc434b401f1ecfcf3ae22611be054480fa882e90f8eecb6c5e97534bd"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0xb4fe51cda0401bb19e8448a2697a49e1fbc25398c2b18a9955d0a8e6f4b153a7', '0', '0', '1753167612', '', '0x2f73e96335a43b678e107b2ef57c7ec0297d88d4a9986c1d6f4e31f1d11fb4f4', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973724', '0xe13a0b907e044a9df1952acc31dc08a578fb910a0cc224e11692cb84c9c9a9f7', '0x8579712fc434b401f1ecfcf3ae22611be054480fa882e90f8eecb6c5e97534bd', '{"parentHash":"0x8579712fc434b401f1ecfcf3ae22611be054480fa882e90f8eecb6c5e97534bd","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0xcd17c85290d8ec7473357ebe1605f766af6c1356732cc7ad11de0453baca05c6","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa7721c","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36fd","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4207","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0xe13a0b907e044a9df1952acc31dc08a578fb910a0cc224e11692cb84c9c9a9f7"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0xcd17c85290d8ec7473357ebe1605f766af6c1356732cc7ad11de0453baca05c6', '0', '0', '1753167613', '', '0x2f73e96335a43b678e107b2ef57c7ec0297d88d4a9986c1d6f4e31f1d11fb4f4', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973725', '0x2e26fb489f8644b3b5c44cd493ebc140ba3bc716588f37a71b8ba6dc504ccb5f', '0xe13a0b907e044a9df1952acc31dc08a578fb910a0cc224e11692cb84c9c9a9f7', '{"parentHash":"0xe13a0b907e044a9df1952acc31dc08a578fb910a0cc224e11692cb84c9c9a9f7","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0xfd321f4a3e2bc757df89162f730a2e37519dcb29cdb63019665c1fe4dbceeb00","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa7721d","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36fe","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4207","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0x2e26fb489f8644b3b5c44cd493ebc140ba3bc716588f37a71b8ba6dc504ccb5f"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0xfd321f4a3e2bc757df89162f730a2e37519dcb29cdb63019665c1fe4dbceeb00', '0', '0', '1753167614', '', '0x2f73e96335a43b678e107b2ef57c7ec0297d88d4a9986c1d6f4e31f1d11fb4f4', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973726', '0x313b0fbb7cbb8bc1ba4fbc50684b516d31f9f7ee6f66d919da01328537a4b0a1', '0x2e26fb489f8644b3b5c44cd493ebc140ba3bc716588f37a71b8ba6dc504ccb5f', '{"parentHash":"0x2e26fb489f8644b3b5c44cd493ebc140ba3bc716588f37a71b8ba6dc504ccb5f","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0x1a24ed5ee5e8ca354f583b28bd7f2c4c6fe4dca59fef476578eddab17b857471","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa7721e","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f36ff","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4207","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0x313b0fbb7cbb8bc1ba4fbc50684b516d31f9f7ee6f66d919da01328537a4b0a1"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0x1a24ed5ee5e8ca354f583b28bd7f2c4c6fe4dca59fef476578eddab17b857471', '0', '0', '1753167615', '', '0x2f73e96335a43b678e107b2ef57c7ec0297d88d4a9986c1d6f4e31f1d11fb4f4', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973727', '0xf9039c9c24ab919066f2eb6f97360cfb727ed032c9e6142ea45e784b19894560', '0x313b0fbb7cbb8bc1ba4fbc50684b516d31f9f7ee6f66d919da01328537a4b0a1', '{"parentHash":"0x313b0fbb7cbb8bc1ba4fbc50684b516d31f9f7ee6f66d919da01328537a4b0a1","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0x2927f53f1eaaeaa17a80f048f10474a7cc3b2c96547cc47caad33ff9e5b38da6","transactionsRoot":"0x80fd441b38b6ffb8f9369d8a5179356f9bf5ad332db0da99f7c6efdb90939cd2","receiptsRoot":"0xa262cee7ba62c004c6554e9cf378512a868346c24f8cafc1ac1954250339149e","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000080000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000020000000000900000000000000000000000000000000000000000000000000000000000000000000000001000000008000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000020000000000000000000","difficulty":"0x1","number":"0xa7721f","gasLimit":"0x1312d00","gasUsed":"0x9642","timestamp":"0x687f3700","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4207","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0xf9039c9c24ab919066f2eb6f97360cfb727ed032c9e6142ea45e784b19894560"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0x2927f53f1eaaeaa17a80f048f10474a7cc3b2c96547cc47caad33ff9e5b38da6', '1', '38466', '1753167616', '', '0x2f73e96335a43b678e107b2ef57c7ec0297d88d4a9986c1d6f4e31f1d11fb4f4', 
'[{"type":2,"nonce":3392501,"txHash":"0xa5231ea1b94eb516575807531763b312d250ee5ad4dfbeea66beab5f448c32b6","gas":45919,"gasPrice":"0x1de8472","gasTipCap":"0x64","gasFeeCap":"0x1de8472","from":"0x0000000000000000000000000000000000000000","to":"0x5300000000000000000000000000000000000002","chainId":"0x8274f","value":"0x0","data":"0x39455d3a000000000000000000000000000000000000000000000000000000000004580f0000000000000000000000000000000000000000000000000000000000000001","isCreate":false,"accessList":[{"address":"0x5300000000000000000000000000000000000003","storageKeys":["0x297c59f20c6b2556a4ed35dccabbdeb8b1cf950f62aefb86b98d19b5a4aff2a2"]}],"authorizationList":null,"v":"0x1","r":"0xa09a97c38c7a58f40ff39ca74f938c63f1ef822cf91926d4fff96b7dc818d3f3","s":"0x77ee7453096794d9cbb206f26077f23b4cc88fe51893cb5eab46714e379ac833"}]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973728', '0xf27ff9223f6bf9a964737d50cb7c005f049cf0f4edfd16d24178a798c21716d6', '0xf9039c9c24ab919066f2eb6f97360cfb727ed032c9e6142ea45e784b19894560', '{"parentHash":"0xf9039c9c24ab919066f2eb6f97360cfb727ed032c9e6142ea45e784b19894560","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0x0dbe54818526afaabbce83765eabcd4ec4d437a3497e5d046d599af862ea9850","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa77220","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f3701","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4207","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0xf27ff9223f6bf9a964737d50cb7c005f049cf0f4edfd16d24178a798c21716d6"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0x0dbe54818526afaabbce83765eabcd4ec4d437a3497e5d046d599af862ea9850', '0', '0', '1753167617', '', '0x2f73e96335a43b678e107b2ef57c7ec0297d88d4a9986c1d6f4e31f1d11fb4f4', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973729', '0x2b7777eb3ffe5939d6b70883cef69250ef5a2ed62a8b378973e0c3fe84707137', '0xf27ff9223f6bf9a964737d50cb7c005f049cf0f4edfd16d24178a798c21716d6', '{"parentHash":"0xf27ff9223f6bf9a964737d50cb7c005f049cf0f4edfd16d24178a798c21716d6","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0xb89ed319fb9dcaed2df7e72223683cf255f6c1e45742e6caa810938871ce53bf","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa77221","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f3702","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4207","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0x2b7777eb3ffe5939d6b70883cef69250ef5a2ed62a8b378973e0c3fe84707137"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0xb89ed319fb9dcaed2df7e72223683cf255f6c1e45742e6caa810938871ce53bf', '0', '0', '1753167618', '', '0x2f73e96335a43b678e107b2ef57c7ec0297d88d4a9986c1d6f4e31f1d11fb4f4', '[]');
INSERT INTO l2_block (number, hash, parent_hash, header, withdraw_root,
state_root, tx_num, gas_used, block_timestamp, row_consumption,
chunk_hash, transactions
) VALUES ('10973730', '0x56318f0a941611fc22640ea7f7d0308ab88a9e23059b5c6983bafc2402003d13', '0x2b7777eb3ffe5939d6b70883cef69250ef5a2ed62a8b378973e0c3fe84707137', '{"parentHash":"0x2b7777eb3ffe5939d6b70883cef69250ef5a2ed62a8b378973e0c3fe84707137","sha3Uncles":"0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347","miner":"0x0000000000000000000000000000000000000000","stateRoot":"0xe603d341e958521d3f5df8f37b5144b3c003214c481716cffa4e8d6303d9734f","transactionsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","receiptsRoot":"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421","logsBloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","difficulty":"0x1","number":"0xa77222","gasLimit":"0x1312d00","gasUsed":"0x0","timestamp":"0x687f3703","extraData":"0x","mixHash":"0x0000000000000000000000000000000000000000000000000000000000000000","nonce":"0x0000000000000000","baseFeePerGas":"0xef4207","withdrawalsRoot":null,"blobGasUsed":null,"excessBlobGas":null,"parentBeaconBlockRoot":null,"requestsHash":null,"hash":"0x56318f0a941611fc22640ea7f7d0308ab88a9e23059b5c6983bafc2402003d13"}', '0x5a9bd7f5f6723ce51c03beffa310a5bf79c2cf261ddb8622cf407b41d968ef91', '0xe603d341e958521d3f5df8f37b5144b3c003214c481716cffa4e8d6303d9734f', '0', '0', '1753167619', '', '0x2f73e96335a43b678e107b2ef57c7ec0297d88d4a9986c1d6f4e31f1d11fb4f4', '[]');
-- +goose StatementEnd
-- +goose Down
-- +goose StatementBegin
DELETE FROM l2_block;
-- +goose StatementEnd

tests/prover-e2e/Makefile
@@ -0,0 +1,46 @@
.PHONY: clean setup_db test_tool all check_vars

GOOSE_CMD?=goose

all: setup_db test_tool import_data

clean:
	docker compose down

check_vars:
	@if [ -z "$(BEGIN_BLOCK)" ] || [ -z "$(END_BLOCK)" ]; then \
		echo "Error: BEGIN_BLOCK and END_BLOCK must be defined"; \
		echo "Usage: make import_data BEGIN_BLOCK=<start_block> END_BLOCK=<end_block>"; \
		exit 1; \
	fi

setup_db: clean
	docker compose up --detach
	@echo "Waiting for PostgreSQL to be ready..."
	@for i in $$(seq 1 30); do \
		if nc -z localhost 5432 >/dev/null 2>&1; then \
			echo "PostgreSQL port is open!"; \
			sleep 2; \
			break; \
		fi; \
		echo "Waiting for PostgreSQL to start... ($$i/30)"; \
		sleep 2; \
		if [ $$i -eq 30 ]; then \
			echo "Timed out waiting for PostgreSQL to start"; \
			exit 1; \
		fi; \
	done
	${GOOSE_CMD} up
	GOOSE_MIGRATION_DIR=./ ${GOOSE_CMD} up-to 100

test_tool:
	go build -o $(PWD)/build/bin/e2e_tool ../../rollup/tests/integration_tool

build/bin/e2e_tool: test_tool

import_data_euclid: build/bin/e2e_tool check_vars
	build/bin/e2e_tool --config ./config.json --codec 7 ${BEGIN_BLOCK} ${END_BLOCK}

import_data: build/bin/e2e_tool check_vars
	build/bin/e2e_tool --config ./config.json --codec 8 ${BEGIN_BLOCK} ${END_BLOCK}


@@ -0,0 +1,12 @@
## A new e2e test tool to set up a local environment for testing the coordinator and prover
It contains data from some blocks on Scroll Sepolia and generates a series of chunks/batches/bundles from those blocks, filling the coordinator's DB so that an e2e test (from chunk to bundle) can run completely locally.
Steps:
1. Run `make all` under `tests/prover-e2e`. This launches a PostgreSQL DB in a local Docker container, ready for the coordinator to use (including some chunks/batches/bundles waiting to be proven).
2. Download circuit assets with the `download-release.sh` script in `zkvm-prover`.
3. Generate the verifier artifacts corresponding to the downloaded assets via `make gen_verifier_stuff` in `zkvm-prover`.
4. Set up `config.json` and `genesis.json` for the coordinator, and copy the verifier artifacts generated in step 3 into the directory the coordinator loads them from.
5. Build and launch the `coordinator_api` service locally.
6. Set up the `config.json` for the zkvm prover so it connects to the locally launched coordinator API.
7. In `zkvm-prover`, run `make test_e2e_run`, which runs the prover locally, connects to the local coordinator API service according to `config.json`, and proves all tasks injected into the DB in step 1.


@@ -0,0 +1,8 @@
{
"db_config": {
"driver_name": "postgres",
"dsn": "postgres://dev:dev@localhost:5432/scroll?sslmode=disable",
"maxOpenNum": 5,
"maxIdleNum": 1
}
}
