Compare commits

...

48 Commits

Author SHA1 Message Date
Ho
07a9f0e106 update dep, and fix for compilation error 2025-09-16 15:38:18 +09:00
Ho
ad13b56d7c clean up 2025-09-16 15:12:17 +09:00
Ho
722cc5ee76 fix gpu prover building and update build route 2025-09-16 11:55:25 +09:00
Ho
ab8df8e4b5 trival update considering AI reviews 2025-09-16 11:13:53 +09:00
Ho
755ed6074e update toolchain for CI 2025-09-15 20:35:12 +09:00
Ho
0d6eaf74fc deprecate patch for gpu building 2025-09-15 20:12:36 +09:00
Ho
953ba50c07 update gpu building 2025-09-15 19:57:04 +09:00
Ho
9998069515 upgrade zkvm dep 2025-09-15 19:46:35 +09:00
Ho
fcda68b5b3 fix, e2e for 0.5.6 passed 2025-09-12 17:18:28 +09:00
Ho
f3d1b151b2 fmt 2025-09-12 15:35:58 +09:00
Ho
e33d11ddc7 apply features in configuration 2025-09-12 15:35:48 +09:00
Ho
642ee2f975 export new func with ffi 2025-09-12 11:36:54 +09:00
Ho
8fcd27333f prune storage fetch in execution 2025-09-12 11:27:02 +09:00
Ho
f640ef9377 update dep to openvm 14 and fix issues after upgrade 2025-09-11 22:32:26 +09:00
Jonas Theis
de7f6e56a9 refactor(rollup relayer): remove max_block_num_per_chunk configuration parameter (#1729)
Co-authored-by: jonastheis <jonastheis@users.noreply.github.com>
2025-09-10 14:16:13 +08:00
Ho
3b323198dc [feat] Integration e2e test tools (#1694)
Co-authored-by: georgehao <haohongfan@gmail.com>
2025-09-09 15:48:20 +09:00
Ho
c11e0283e8 [FIX] script for detecting plonky3gpu version (#1730) 2025-09-09 10:45:37 +08:00
Ho
a5a7844646 [FIX] Compatible with current prover (before 0.5.6) (#1732) 2025-09-02 19:20:02 +09:00
georgehao
7ff5b190ec bump version to v4.5.44 (#1731) 2025-09-02 09:56:59 +08:00
Ho
b297edd28d [Feat] Prover loading assets (circuits) dynamically (#1717) 2025-08-29 19:32:44 +09:00
Péter Garamvölgyi
47c85d4983 Fix unique chunk hash (#1727) 2025-08-26 14:27:24 +02:00
Morty
1552e98b79 fix(bridge-history+rollup-relayer): update da-codec to prevent zstd deadlock (#1724)
Co-authored-by: yiweichi <yiweichi@users.noreply.github.com>
Co-authored-by: Péter Garamvölgyi <peter@scroll.io>
2025-08-25 16:23:01 +08:00
Péter Garamvölgyi
a65b3066a3 fix: remove unnecessary logs (#1725) 2025-08-22 15:02:20 +02:00
Morty
1f2b397bbd feat(bridge-history): add aws s3 blob client (#1716)
Co-authored-by: yiweichi <yiweichi@users.noreply.github.com>
Co-authored-by: colin <102356659+colinlyguo@users.noreply.github.com>
Co-authored-by: colinlyguo <colinlyguo@users.noreply.github.com>
2025-08-12 14:59:44 +08:00
colin
ae791a0714 fix(rollup-relayer): sanity checks (#1720) 2025-08-12 14:57:02 +08:00
colin
c012f7132d feat(rollup-relayer): add sanity checks before committing and finalizing (#1714)
Co-authored-by: colinlyguo <colinlyguo@users.noreply.github.com>
2025-08-11 17:49:29 +08:00
Jonas Theis
6897cc54bd feat(permissionless batches): batch production toolkit and operator recovery (#1555)
Signed-off-by: noelwei <fan@scroll.io>
Co-authored-by: Ömer Faruk Irmak <omerfirmak@gmail.com>
Co-authored-by: noelwei <fan@scroll.io>
Co-authored-by: colin <102356659+colinlyguo@users.noreply.github.com>
Co-authored-by: Rohit Narurkar <rohit.narurkar@proton.me>
Co-authored-by: colinlyguo <colinlyguo@scroll.io>
Co-authored-by: Péter Garamvölgyi <peter@scroll.io>
Co-authored-by: Morty <70688412+yiweichi@users.noreply.github.com>
Co-authored-by: omerfirmak <omerfirmak@users.noreply.github.com>
Co-authored-by: jonastheis <jonastheis@users.noreply.github.com>
Co-authored-by: georgehao <georgehao@users.noreply.github.com>
Co-authored-by: kunxian xia <xiakunxian130@gmail.com>
Co-authored-by: Velaciela <git.rover@outlook.com>
Co-authored-by: colinlyguo <colinlyguo@users.noreply.github.com>
Co-authored-by: Morty <yiweichi1@gmail.com>
2025-08-04 12:37:31 +08:00
georgehao
d21fa36803 change l2watcher from w.GetBlockByNumberOrHash to BlockByNumber (#1715) 2025-07-31 18:29:50 +08:00
colin
fc75299eb3 fix(gas-oracle): nonce too low when resubmission (#1712) 2025-07-30 14:50:41 +08:00
Morty
4bfcd35d0c fix(bridge-history): update dependency go-ethereum version (#1713) 2025-07-30 14:32:12 +08:00
colin
6d62f8e5fa fix(gas-oracle): typos in config file example (#1711) 2025-07-28 18:12:50 +08:00
Morty
392ae07736 feat(blob-uploader): support codec v8 (#1707) 2025-07-24 01:34:46 +08:00
colin
db80b47820 fix(rollup-relayer): upgrade boundary message queue hash initialization (#1706) 2025-07-23 18:51:56 +08:00
Zhang Zhuo
daa1387208 circuit-0.5.2 (#1705)
Co-authored-by: georgehao <georgehao@users.noreply.github.com>
2025-07-23 14:16:16 +08:00
Zhang Zhuo
67b05558e2 upgrade circuit to 0.5.2 (#1703)
Co-authored-by: Rohit Narurkar <rohit.narurkar@proton.me>
Co-authored-by: georgehao <georgehao@users.noreply.github.com>
2025-07-23 10:52:08 +08:00
Ho
1e447b0fef [Fix] building failure in gpu image (#1702) 2025-07-21 20:26:39 +08:00
georgehao
f7c6ecadf4 bump to v4.5.31 (#1700) 2025-07-18 16:41:59 +08:00
Ho
9d94f943e5 [Upgrade] feynman 0.5.0rc1 (#1699) 2025-07-18 15:57:31 +08:00
Morty
de17ad43ff fix(blob-uploader): orm function InsertOrUpdateBlobUpload and s3 bucket region configuration (#1679)
Co-authored-by: yiweichi <yiweichi@users.noreply.github.com>
2025-07-16 18:36:27 +08:00
colin
4233ad928c feat(rollup-relayer): support Validium (#1693)
Co-authored-by: Péter Garamvölgyi <peter@scroll.io>
2025-07-09 15:02:54 +08:00
georgehao
3050ccb40f update feynman prover makefile (#1691) 2025-07-05 10:33:08 +08:00
Ho
12e89201a1 feat: upgrading for feynman (#1690)
Co-authored-by: georgehao <georgehao@users.noreply.github.com>
Co-authored-by: georgehao <haohongfan@gmail.com>
2025-07-04 16:59:48 +08:00
colin
a0ee508bbd fix(bridge-history): commit batch txns and blobs fetching (#1689) 2025-07-04 01:50:52 +08:00
georgehao
b8909d3795 update intermediate docker runs on (#1688) 2025-07-03 21:52:24 +08:00
Ho
b7a172a519 feat: upgrade zkvm-prover to feynman fork (#1686)
Co-authored-by: colinlyguo <colinlyguo@scroll.io>
2025-07-03 18:48:33 +08:00
georgehao
80807dbb75 Feat/upgrade intermedidate (#1687) 2025-07-03 10:08:03 +08:00
georgehao
a776ca7c82 refactor: remove unused check (#1685) 2025-07-03 09:21:33 +08:00
Ho
ea38ae7e96 Refactor/zkvm 3 (#1684) 2025-07-01 06:39:27 +08:00
166 changed files with 18347 additions and 2672 deletions

View File

@@ -29,7 +29,7 @@ jobs:
steps:
- uses: actions-rs/toolchain@v1
with:
toolchain: nightly-2024-12-06
toolchain: nightly-2025-08-18
override: true
components: rustfmt, clippy
- name: Install Go

View File

@@ -33,7 +33,7 @@ jobs:
steps:
- uses: actions-rs/toolchain@v1
with:
toolchain: nightly-2023-12-03
toolchain: nightly-2025-08-18
override: true
components: rustfmt, clippy
- name: Install Go

View File

@@ -10,7 +10,8 @@ env:
jobs:
gas_oracle:
runs-on: ubuntu-latest
runs-on:
group: scroll-reth-runner-group
steps:
- name: Checkout code
uses: actions/checkout@v4
@@ -55,7 +56,8 @@ jobs:
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:latest
rollup_relayer:
runs-on: ubuntu-latest
runs-on:
group: scroll-reth-runner-group
steps:
- name: Checkout code
uses: actions/checkout@v4
@@ -100,7 +102,8 @@ jobs:
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:latest
blob_uploader:
runs-on: ubuntu-latest
runs-on:
group: scroll-reth-runner-group
steps:
- name: Checkout code
uses: actions/checkout@v4
@@ -145,7 +148,8 @@ jobs:
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:latest
rollup-db-cli:
runs-on: ubuntu-latest
runs-on:
group: scroll-reth-runner-group
steps:
- name: Checkout code
uses: actions/checkout@v4
@@ -190,7 +194,8 @@ jobs:
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:latest
bridgehistoryapi-fetcher:
runs-on: ubuntu-latest
runs-on:
group: scroll-reth-runner-group
steps:
- name: Checkout code
uses: actions/checkout@v4
@@ -235,7 +240,8 @@ jobs:
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:latest
bridgehistoryapi-api:
runs-on: ubuntu-latest
runs-on:
group: scroll-reth-runner-group
steps:
- name: Checkout code
uses: actions/checkout@v4
@@ -280,7 +286,8 @@ jobs:
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:latest
bridgehistoryapi-db-cli:
runs-on: ubuntu-latest
runs-on:
group: scroll-reth-runner-group
steps:
- name: Checkout code
uses: actions/checkout@v4
@@ -325,7 +332,8 @@ jobs:
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:latest
coordinator-api:
runs-on: ubuntu-latest
runs-on:
group: scroll-reth-runner-group
steps:
- name: Checkout code
uses: actions/checkout@v4
@@ -352,48 +360,6 @@ jobs:
REPOSITORY: coordinator-api
run: |
aws --region ${{ env.AWS_REGION }} ecr describe-repositories --repository-names ${{ env.REPOSITORY }} && : || aws --region ${{ env.AWS_REGION }} ecr create-repository --repository-name ${{ env.REPOSITORY }}
- name: Setup SSH for repositories and clone them
run: |
mkdir -p ~/.ssh
chmod 700 ~/.ssh
# Setup for plonky3-gpu
echo "${{ secrets.PLONKY3_GPU_SSH_PRIVATE_KEY }}" > ~/.ssh/plonky3_gpu_key
chmod 600 ~/.ssh/plonky3_gpu_key
eval "$(ssh-agent -s)" > /dev/null
ssh-add ~/.ssh/plonky3_gpu_key 2>/dev/null
ssh-keyscan -t rsa github.com >> ~/.ssh/known_hosts 2>/dev/null
echo "Loaded plonky3-gpu key"
# Clone plonky3-gpu repository
./build/dockerfiles/coordinator-api/clone_plonky3_gpu.sh
# Setup for openvm-stark-gpu
echo "${{ secrets.OPENVM_STARK_GPU_SSH_PRIVATE_KEY }}" > ~/.ssh/openvm_stark_gpu_key
chmod 600 ~/.ssh/openvm_stark_gpu_key
eval "$(ssh-agent -s)" > /dev/null
ssh-add ~/.ssh/openvm_stark_gpu_key 2>/dev/null
echo "Loaded openvm-stark-gpu key"
# Clone openvm-stark-gpu repository
./build/dockerfiles/coordinator-api/clone_openvm_stark_gpu.sh
# Setup for openvm-gpu
echo "${{ secrets.OPENVM_GPU_SSH_PRIVATE_KEY }}" > ~/.ssh/openvm_gpu_key
chmod 600 ~/.ssh/openvm_gpu_key
eval "$(ssh-agent -s)" > /dev/null
ssh-add ~/.ssh/openvm_gpu_key 2>/dev/null
echo "Loaded openvm-gpu key"
# Clone openvm-gpu repository
./build/dockerfiles/coordinator-api/clone_openvm_gpu.sh
# Show number of loaded keys
echo "Number of loaded keys: $(ssh-add -l | wc -l)"
- name: Checkout specific commits
run: |
./build/dockerfiles/coordinator-api/checkout_all.sh
- name: Build and push
uses: docker/build-push-action@v3
env:
@@ -411,7 +377,8 @@ jobs:
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:latest
coordinator-cron:
runs-on: ubuntu-latest
runs-on:
group: scroll-reth-runner-group
steps:
- name: Checkout code
uses: actions/checkout@v4

View File

@@ -22,10 +22,9 @@ on:
required: true
type: choice
options:
- nightly-2023-12-03
- nightly-2022-12-10
- 1.86.0
default: "nightly-2023-12-03"
- nightly-2025-08-18
default: "nightly-2025-08-18"
PYTHON_VERSION:
description: "Python version"
required: false
@@ -69,7 +68,8 @@ defaults:
jobs:
build-and-publish-intermediate:
runs-on: ubuntu-latest
runs-on:
group: scroll-reth-runner-group
steps:
- name: Checkout code
uses: actions/checkout@v4

.gitignore (vendored), 1 changed line
View File

@@ -24,3 +24,4 @@ sftp-config.json
*~
target
zkvm-prover/config.json

Cargo.lock (generated), 3023 changed lines

File diff suppressed because it is too large.

View File

@@ -14,27 +14,28 @@ edition = "2021"
homepage = "https://scroll.io"
readme = "README.md"
repository = "https://github.com/scroll-tech/scroll"
version = "4.5.8"
version = "4.5.47"
[workspace.dependencies]
scroll-zkvm-prover-euclid = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "29c99de", package = "scroll-zkvm-prover" }
scroll-zkvm-verifier-euclid = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "29c99de", package = "scroll-zkvm-verifier" }
scroll-zkvm-types = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "29c99de" }
scroll-zkvm-prover = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "5c361ad" }
scroll-zkvm-verifier = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "5c361ad" }
scroll-zkvm-types = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "5c361ad" }
sbv-primitives = { git = "https://github.com/scroll-tech/stateless-block-verifier", branch = "zkvm/euclid-upgrade", features = ["scroll"] }
sbv-utils = { git = "https://github.com/scroll-tech/stateless-block-verifier", branch = "zkvm/euclid-upgrade" }
sbv-primitives = { git = "https://github.com/scroll-tech/stateless-block-verifier", branch = "master", features = ["scroll", "rkyv"] }
sbv-utils = { git = "https://github.com/scroll-tech/stateless-block-verifier", branch = "master" }
sbv-core = { git = "https://github.com/scroll-tech/stateless-block-verifier", branch = "master", features = ["scroll"] }
metrics = "0.23.0"
metrics-util = "0.17"
metrics-tracing-context = "0.16.0"
anyhow = "1.0"
alloy = { version = "0.11", default-features = false }
alloy-primitives = { version = "0.8", default-features = false }
alloy = { version = "1", default-features = false }
alloy-primitives = { version = "1.3", default-features = false, features = ["tiny-keccak"] }
alloy-sol-types = { version = "1.3", default-features = false }
# also use this to trigger "serde" feature for primitives
alloy-serde = { version = "0.8", default-features = false }
alloy-serde = { version = "1", default-features = false }
rkyv = "0.8"
serde = { version = "1", default-features = false, features = ["derive"] }
serde_json = { version = "1.0" }
serde_derive = "1.0"
@@ -43,24 +44,26 @@ itertools = "0.14"
tiny-keccak = "2.0"
tracing = "0.1"
eyre = "0.6"
bincode_v1 = { version = "1.3", package = "bincode"}
snark-verifier-sdk = { version = "0.2.0", default-features = false, features = [
"loader_halo2",
"halo2-axiom",
"display",
] }
once_cell = "1.20"
base64 = "0.22"
#TODO: upgrade when Feynman
vm-zstd = { git = "https://github.com/scroll-tech/rust-zstd-decompressor.git", tag = "v0.1.1" }
[patch.crates-io]
alloy-primitives = { git = "https://github.com/scroll-tech/alloy-core", branch = "v0.8.18-euclid-upgrade" }
ruint = { git = "https://github.com/scroll-tech/uint.git", branch = "v1.12.3" }
tiny-keccak = { git = "https://github.com/scroll-tech/tiny-keccak", branch = "scroll-patch-v2.0.2-euclid-upgrade" }
revm = { git = "https://github.com/scroll-tech/revm" }
revm-bytecode = { git = "https://github.com/scroll-tech/revm" }
revm-context = { git = "https://github.com/scroll-tech/revm" }
revm-context-interface = { git = "https://github.com/scroll-tech/revm" }
revm-database = { git = "https://github.com/scroll-tech/revm" }
revm-database-interface = { git = "https://github.com/scroll-tech/revm" }
revm-handler = { git = "https://github.com/scroll-tech/revm" }
revm-inspector = { git = "https://github.com/scroll-tech/revm" }
revm-interpreter = { git = "https://github.com/scroll-tech/revm" }
revm-precompile = { git = "https://github.com/scroll-tech/revm" }
revm-primitives = { git = "https://github.com/scroll-tech/revm" }
revm-state = { git = "https://github.com/scroll-tech/revm" }
alloy-primitives = { git = "https://github.com/scroll-tech/alloy-core", branch = "feat/rkyv" }
[profile.maxperf]
inherits = "release"
lto = "fat"
codegen-units = 1
codegen-units = 1

View File

@@ -10,15 +10,15 @@ require (
github.com/go-redis/redis/v8 v8.11.5
github.com/pressly/goose/v3 v3.16.0
github.com/prometheus/client_golang v1.19.0
github.com/scroll-tech/da-codec v0.1.3-0.20250626091118-58b899494da6
github.com/scroll-tech/go-ethereum v1.10.14-0.20250626101020-47bc86cd961c
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178
github.com/scroll-tech/go-ethereum v1.10.14-0.20250729113104-bd8f141bb3e9
github.com/stretchr/testify v1.9.0
github.com/urfave/cli/v2 v2.25.7
golang.org/x/sync v0.11.0
gorm.io/gorm v1.25.7-0.20240204074919-46816ad31dde
)
replace github.com/scroll-tech/go-ethereum => github.com/scroll-tech/go-ethereum v1.10.14-0.20250626101020-47bc86cd961c // It's a hotfix for the header hash incompatibility issue, pls change this with caution
replace github.com/scroll-tech/go-ethereum => github.com/scroll-tech/go-ethereum v1.10.14-0.20250729113104-bd8f141bb3e9 // It's a hotfix for the header hash incompatibility issue, pls change this with caution
require (
dario.cat/mergo v1.0.0 // indirect

View File

@@ -309,10 +309,10 @@ github.com/rs/cors v1.7.0 h1:+88SsELBHx5r+hZ8TCkggzSstaWNbDvThkVK8H6f9ik=
github.com/rs/cors v1.7.0/go.mod h1:gFx+x8UowdsKA9AchylcLynDq+nNFfI8FkUZdN/jGCU=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/scroll-tech/da-codec v0.1.3-0.20250626091118-58b899494da6 h1:vb2XLvQwCf+F/ifP6P/lfeiQrHY6+Yb/E3R4KHXLqSE=
github.com/scroll-tech/da-codec v0.1.3-0.20250626091118-58b899494da6/go.mod h1:Z6kN5u2khPhiqHyk172kGB7o38bH/nj7Ilrb/46wZGg=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250626101020-47bc86cd961c h1:IpEBKM6O+xOK2qZVZztGxcobFXkKMb5hAkBEVzfXjVg=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250626101020-47bc86cd961c/go.mod h1:pDCZ4iGvEGmdIe4aSAGBrb7XSrKEML6/L/wEMmNxOdk=
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178 h1:4utngmJHXSOS5FoSdZhEV1xMRirpArbXvyoCZY9nYj0=
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178/go.mod h1:Z6kN5u2khPhiqHyk172kGB7o38bH/nj7Ilrb/46wZGg=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250729113104-bd8f141bb3e9 h1:u371VK8eOU2Z/0SVf5KDI3eJc8msHSpJbav4do/8n38=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250729113104-bd8f141bb3e9/go.mod h1:pDCZ4iGvEGmdIe4aSAGBrb7XSrKEML6/L/wEMmNxOdk=
github.com/scroll-tech/zktrie v0.8.4 h1:UagmnZ4Z3ITCk+aUq9NQZJNAwnWl4gSxsLb2Nl7IgRE=
github.com/scroll-tech/zktrie v0.8.4/go.mod h1:XvNo7vAk8yxNyTjBDj5WIiFzYW4bx/gJ78+NK6Zn6Uk=
github.com/segmentio/asm v1.2.0 h1:9BQrFxC+YOHJlTlHGkTrFWf59nbL3XnCoFLTwDCI7ys=

View File

@@ -38,6 +38,7 @@ type FetcherConfig struct {
BeaconNodeAPIEndpoint string `json:"BeaconNodeAPIEndpoint"`
BlobScanAPIEndpoint string `json:"BlobScanAPIEndpoint"`
BlockNativeAPIEndpoint string `json:"BlockNativeAPIEndpoint"`
AwsS3Endpoint string `json:"AwsS3Endpoint"`
}
// RedisConfig redis config

View File

@@ -39,6 +39,9 @@ type L1MessageFetcher struct {
// NewL1MessageFetcher creates a new L1MessageFetcher instance.
func NewL1MessageFetcher(ctx context.Context, cfg *config.FetcherConfig, db *gorm.DB, client *ethclient.Client) (*L1MessageFetcher, error) {
blobClient := blob_client.NewBlobClients()
if cfg.AwsS3Endpoint != "" {
blobClient.AddBlobClient(blob_client.NewAwsS3Client(cfg.AwsS3Endpoint))
}
if cfg.BeaconNodeAPIEndpoint != "" {
beaconNodeClient, err := blob_client.NewBeaconNodeClient(cfg.BeaconNodeAPIEndpoint)
if err != nil {
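
The two hunks above add an optional AwsS3Endpoint field to the fetcher configuration and, when it is set, register an S3-backed blob client before the beacon-node client. A minimal, self-contained Go sketch of how such a config decodes; the struct is a trimmed local copy of the FetcherConfig fields shown above, and the endpoint values are purely hypothetical:

package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed copy of the FetcherConfig fields shown in the diff above; the real
// struct lives in the fetcher's config package.
type fetcherConfig struct {
	BeaconNodeAPIEndpoint  string `json:"BeaconNodeAPIEndpoint"`
	BlobScanAPIEndpoint    string `json:"BlobScanAPIEndpoint"`
	BlockNativeAPIEndpoint string `json:"BlockNativeAPIEndpoint"`
	AwsS3Endpoint          string `json:"AwsS3Endpoint"`
}

func main() {
	// Hypothetical config snippet: both endpoint values are illustrative only.
	raw := `{
		"BeaconNodeAPIEndpoint": "http://beacon-node:5052",
		"AwsS3Endpoint": "https://my-blob-bucket.s3.us-west-2.amazonaws.com"
	}`

	var cfg fetcherConfig
	if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
		panic(err)
	}

	// Mirrors the checks in NewL1MessageFetcher: the S3 client is registered
	// first when AwsS3Endpoint is non-empty, then the beacon-node client.
	if cfg.AwsS3Endpoint != "" {
		fmt.Println("would register AWS S3 blob client:", cfg.AwsS3Endpoint)
	}
	if cfg.BeaconNodeAPIEndpoint != "" {
		fmt.Println("would register beacon node blob client:", cfg.BeaconNodeAPIEndpoint)
	}
}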

View File

@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"math/big"
"time"
"github.com/scroll-tech/da-codec/encoding"
"github.com/scroll-tech/go-ethereum/common"
@@ -252,6 +253,11 @@ func (e *L1EventParser) ParseL1BatchEventLogs(ctx context.Context, logs []types.
// Key: commit transaction hash
// Value: parent batch hashes (in order) for each processed CommitBatch event in the transaction
txBlobIndexMap := make(map[common.Hash][]common.Hash)
// Cache for the previous transaction to avoid duplicate fetches
var lastTxHash common.Hash
var lastTx *types.Transaction
var l1BatchEvents []*orm.BatchEvent
for _, vlog := range logs {
switch vlog.Topics[0] {
@@ -261,11 +267,28 @@ func (e *L1EventParser) ParseL1BatchEventLogs(ctx context.Context, logs []types.
log.Error("Failed to unpack CommitBatch event", "err", err)
return nil, err
}
commitTx, isPending, err := client.TransactionByHash(ctx, vlog.TxHash)
if err != nil || isPending {
log.Error("Failed to get commit batch tx or the tx is still pending", "err", err, "isPending", isPending)
return nil, err
// Get transaction, reuse if it's the same as previous
var commitTx *types.Transaction
if lastTxHash == vlog.TxHash && lastTx != nil {
commitTx = lastTx
} else {
log.Debug("Fetching commit batch transaction", "txHash", vlog.TxHash.String())
// Create 10-second timeout context for transaction fetch
txCtx, txCancel := context.WithTimeout(ctx, 10*time.Second)
fetchedTx, isPending, err := client.TransactionByHash(txCtx, vlog.TxHash)
txCancel()
if err != nil || isPending {
log.Error("Failed to get commit batch tx or the tx is still pending", "err", err, "isPending", isPending)
return nil, err
}
commitTx = fetchedTx
lastTxHash = vlog.TxHash
lastTx = commitTx
}
version, startBlock, endBlock, err := utils.GetBatchVersionAndBlockRangeFromCalldata(commitTx.Data())
if err != nil {
log.Error("Failed to get batch range from calldata", "hash", commitTx.Hash().String(), "height", vlog.BlockNumber)
@@ -305,7 +328,13 @@ func (e *L1EventParser) ParseL1BatchEventLogs(ctx context.Context, logs []types.
return nil, fmt.Errorf("batch hash mismatch for batch %d, expected: %s, got: %s", event.BatchIndex, event.BatchHash.String(), calculatedBatch.Hash().String())
}
blocks, err := e.getBatchBlockRangeFromBlob(ctx, codec, blobVersionedHash, blockTimestampsMap[vlog.BlockNumber])
log.Debug("Processing blob data", "blobVersionedHash", blobVersionedHash.String(), "batchIndex", event.BatchIndex.Uint64(), "currentIndex", currentIndex)
// Create 20-second timeout context for blob processing
blobCtx, blobCancel := context.WithTimeout(ctx, 20*time.Second)
blocks, err := e.getBatchBlockRangeFromBlob(blobCtx, codec, blobVersionedHash, blockTimestampsMap[vlog.BlockNumber])
blobCancel()
if err != nil {
return nil, fmt.Errorf("failed to process versioned blob, blobVersionedHash: %s, block number: %d, blob index: %d, err: %w",
blobVersionedHash.String(), vlog.BlockNumber, currentIndex, err)
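
The parser change above caches the last fetched commit transaction (consecutive CommitBatch events often share a transaction hash) and wraps each external call in a timeout context: 10 seconds for the transaction fetch and 20 seconds for blob processing. Below is a self-contained sketch of that reuse-plus-timeout pattern; fetchTx is a stand-in for client.TransactionByHash, and the hashes and durations are illustrative only.

package main

import (
	"context"
	"fmt"
	"time"
)

// fetchTx stands in for client.TransactionByHash; in the real parser this is
// an RPC call against L1.
func fetchTx(ctx context.Context, hash string) (string, error) {
	select {
	case <-time.After(50 * time.Millisecond): // simulated RPC latency
		return "tx-body-for-" + hash, nil
	case <-ctx.Done():
		return "", ctx.Err()
	}
}

func main() {
	// Consecutive events frequently come from the same commit transaction.
	logTxHashes := []string{"0xaaa", "0xaaa", "0xbbb"}

	var lastHash, lastTx string
	for _, h := range logTxHashes {
		var tx string
		if h == lastHash && lastTx != "" {
			tx = lastTx // reuse the previously fetched transaction
		} else {
			// Bound the fetch with a per-call timeout, as the parser does (10s there).
			ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
			fetched, err := fetchTx(ctx, h)
			cancel()
			if err != nil {
				fmt.Println("fetch failed:", err)
				return
			}
			tx = fetched
			lastHash, lastTx = h, tx
		}
		fmt.Println("processing event with", tx)
	}
}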

View File

@@ -1,9 +1,9 @@
# Build libzkp dependency
FROM scrolltech/cuda-go-rust-builder:cuda-11.7.1-go-1.21-rust-nightly-2023-12-03 as chef
FROM scrolltech/go-rust-builder:go-1.22.12-rust-nightly-2025-02-14 as chef
WORKDIR app
FROM chef as planner
COPY ./crates ./
COPY ./crates/ ./crates/
COPY ./Cargo.* ./
COPY ./rust-toolchain ./
RUN cargo chef prepare --recipe-path recipe.json
@@ -11,21 +11,15 @@ RUN cargo chef prepare --recipe-path recipe.json
FROM chef as zkp-builder
COPY ./rust-toolchain ./
COPY --from=planner /app/recipe.json recipe.json
# run scripts to get openvm-gpu
COPY ./build/dockerfiles/coordinator-api/plonky3-gpu /plonky3-gpu
COPY ./build/dockerfiles/coordinator-api/openvm-stark-gpu /openvm-stark-gpu
COPY ./build/dockerfiles/coordinator-api/openvm-gpu /openvm-gpu
COPY ./build/dockerfiles/coordinator-api/gitconfig /root/.gitconfig
COPY ./build/dockerfiles/coordinator-api/config.toml /root/.cargo/config.toml
RUN cargo chef cook --release --recipe-path recipe.json
COPY ./crates ./
COPY ./crates/ ./crates/
COPY ./Cargo.* ./
COPY .git .git
RUN cargo build --release -p libzkp-c
# Download Go dependencies
FROM scrolltech/cuda-go-rust-builder:cuda-11.7.1-go-1.21-rust-nightly-2023-12-03 as base
FROM scrolltech/go-rust-builder:go-1.22.12-rust-nightly-2025-02-14 as base
WORKDIR /src
COPY go.work* ./
COPY ./rollup/go.* ./rollup/
@@ -45,7 +39,7 @@ RUN cd ./coordinator && CGO_LDFLAGS="-Wl,--no-as-needed -ldl" make coordinator_a
RUN mv coordinator/internal/logic/libzkp/lib /bin/
# Pull coordinator into a second stage deploy ubuntu container
FROM nvidia/cuda:11.7.1-runtime-ubuntu22.04
FROM ubuntu:20.04
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/src/coordinator/internal/logic/verifier/lib
ENV CGO_LDFLAGS="-Wl,--no-as-needed -ldl"
# ENV CHAIN_ID=534353

View File

@@ -4,3 +4,5 @@ docs/
l2geth/
rpc-gateway/
*target/*
permissionless-batches/conf/

View File

@@ -1,17 +0,0 @@
#!/bin/bash
set -uex
PLONKY3_GPU_COMMIT=261b322 # v0.2.0
OPENVM_STARK_GPU_COMMIT=3082234 # PR#48
OPENVM_GPU_COMMIT=8094b4f # branch: patch-v1.2.0
DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" > /dev/null 2>&1 && pwd)
# checkout plonky3-gpu
cd $DIR/plonky3-gpu && git checkout ${PLONKY3_GPU_COMMIT}
# checkout openvm-stark-gpu
cd $DIR/openvm-stark-gpu && git checkout ${OPENVM_STARK_GPU_COMMIT}
# checkout openvm-gpu
cd $DIR/openvm-gpu && git checkout ${OPENVM_GPU_COMMIT}

View File

@@ -1,10 +0,0 @@
#!/bin/bash
set -uex
DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" > /dev/null 2>&1 && pwd)
# clone openvm-gpu if not exists
if [ ! -d $DIR/openvm-gpu ]; then
git clone git@github.com:scroll-tech/openvm-gpu.git $DIR/openvm-gpu
fi
cd $DIR/openvm-gpu && git fetch --all --force

View File

@@ -1,10 +0,0 @@
#!/bin/bash
set -uex
DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" > /dev/null 2>&1 && pwd)
# clone openvm-stark-gpu if not exists
if [ ! -d $DIR/openvm-stark-gpu ]; then
git clone git@github.com:scroll-tech/openvm-stark-gpu.git $DIR/openvm-stark-gpu
fi
cd $DIR/openvm-stark-gpu && git fetch --all --force

View File

@@ -1,10 +0,0 @@
#!/bin/bash
set -uex
DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" > /dev/null 2>&1 && pwd)
# clone plonky3-gpu if not exists
if [ ! -d $DIR/plonky3-gpu ]; then
git clone git@github.com:scroll-tech/plonky3-gpu.git $DIR/plonky3-gpu
fi
cd $DIR/plonky3-gpu && git fetch --all --force

View File

@@ -1,92 +0,0 @@
# openvm
# same order and features as zkvm-prover/Cargo.toml.gpu
[patch."ssh://git@github.com/scroll-tech/openvm-gpu.git"]
openvm = { path = "/openvm-gpu/crates/toolchain/openvm", default-features = false }
openvm-algebra-complex-macros = { path = "/openvm-gpu/extensions/algebra/complex-macros", default-features = false }
openvm-algebra-guest = { path = "/openvm-gpu/extensions/algebra/guest", default-features = false }
openvm-bigint-guest = { path = "/openvm-gpu/extensions/bigint/guest", default-features = false }
openvm-build = { path = "/openvm-gpu/crates/toolchain/build", default-features = false }
openvm-circuit = { path = "/openvm-gpu/crates/vm", default-features = false }
openvm-custom-insn = { path = "/openvm-gpu/crates/toolchain/custom_insn", default-features = false }
openvm-continuations = { path = "/openvm-gpu/crates/continuations", default-features = false }
openvm-ecc-guest = { path = "/openvm-gpu/extensions/ecc/guest", default-features = false }
openvm-instructions ={ path = "/openvm-gpu/crates/toolchain/instructions", default-features = false }
openvm-keccak256-guest = { path = "/openvm-gpu/extensions/keccak256/guest", default-features = false }
openvm-native-circuit = { path = "/openvm-gpu/extensions/native/circuit", default-features = false }
openvm-native-compiler = { path = "/openvm-gpu/extensions/native/compiler", default-features = false }
openvm-native-recursion = { path = "/openvm-gpu/extensions/native/recursion", default-features = false }
openvm-native-transpiler = { path = "/openvm-gpu/extensions/native/transpiler", default-features = false }
openvm-pairing-guest = { path = "/openvm-gpu/extensions/pairing/guest", default-features = false }
openvm-rv32im-guest = { path = "/openvm-gpu/extensions/rv32im/guest", default-features = false }
openvm-rv32im-transpiler = { path = "/openvm-gpu/extensions/rv32im/transpiler", default-features = false }
openvm-sdk = { path = "/openvm-gpu/crates/sdk", default-features = false, features = ["parallel", "bench-metrics", "evm-prove"] }
openvm-sha256-guest = { path = "/openvm-gpu/extensions/sha256/guest", default-features = false }
openvm-transpiler = { path = "/openvm-gpu/crates/toolchain/transpiler", default-features = false }
# stark-backend
[patch."https://github.com/openvm-org/stark-backend.git"]
openvm-stark-backend = { path = "/openvm-stark-gpu/crates/stark-backend", features = ["gpu"] }
openvm-stark-sdk = { path = "/openvm-stark-gpu/crates/stark-sdk", features = ["gpu"] }
[patch."ssh://git@github.com/scroll-tech/openvm-stark-gpu.git"]
openvm-stark-backend = { path = "/openvm-stark-gpu/crates/stark-backend", features = ["gpu"] }
openvm-stark-sdk = { path = "/openvm-stark-gpu/crates/stark-sdk", features = ["gpu"] }
# plonky3
[patch."https://github.com/Plonky3/Plonky3.git"]
p3-air = { path = "/plonky3-gpu/air" }
p3-field = { path = "/plonky3-gpu/field" }
p3-commit = { path = "/plonky3-gpu/commit" }
p3-matrix = { path = "/plonky3-gpu/matrix" }
p3-baby-bear = { path = "/plonky3-gpu/baby-bear" }
p3-koala-bear = { path = "/plonky3-gpu/koala-bear" }
p3-util = { path = "/plonky3-gpu/util" }
p3-challenger = { path = "/plonky3-gpu/challenger" }
p3-dft = { path = "/plonky3-gpu/dft" }
p3-fri = { path = "/plonky3-gpu/fri" }
p3-goldilocks = { path = "/plonky3-gpu/goldilocks" }
p3-keccak = { path = "/plonky3-gpu/keccak" }
p3-keccak-air = { path = "/plonky3-gpu/keccak-air" }
p3-blake3 = { path = "/plonky3-gpu/blake3" }
p3-mds = { path = "/plonky3-gpu/mds" }
p3-monty-31 = { path = "/plonky3-gpu/monty-31" }
p3-merkle-tree = { path = "/plonky3-gpu/merkle-tree" }
p3-poseidon = { path = "/plonky3-gpu/poseidon" }
p3-poseidon2 = { path = "/plonky3-gpu/poseidon2" }
p3-poseidon2-air = { path = "/plonky3-gpu/poseidon2-air" }
p3-symmetric = { path = "/plonky3-gpu/symmetric" }
p3-uni-stark = { path = "/plonky3-gpu/uni-stark" }
p3-maybe-rayon = { path = "/plonky3-gpu/maybe-rayon" }
p3-bn254-fr = { path = "/plonky3-gpu/bn254-fr" }
# gpu crates
[patch."ssh://git@github.com/scroll-tech/plonky3-gpu.git"]
p3-gpu-base = { path = "/plonky3-gpu/gpu-base" }
p3-gpu-build = { path = "/plonky3-gpu/gpu-build" }
p3-gpu-field = { path = "/plonky3-gpu/gpu-field" }
p3-gpu-backend = { path = "/plonky3-gpu/gpu-backend" }
p3-gpu-module = { path = "/plonky3-gpu/gpu-module" }
p3-air = { path = "/plonky3-gpu/air" }
p3-field = { path = "/plonky3-gpu/field" }
p3-commit = { path = "/plonky3-gpu/commit" }
p3-matrix = { path = "/plonky3-gpu/matrix" }
p3-baby-bear = { path = "/plonky3-gpu/baby-bear" }
p3-koala-bear = { path = "/plonky3-gpu/koala-bear" }
p3-util = { path = "/plonky3-gpu/util" }
p3-challenger = { path = "/plonky3-gpu/challenger" }
p3-dft = { path = "/plonky3-gpu/dft" }
p3-fri = { path = "/plonky3-gpu/fri" }
p3-goldilocks = { path = "/plonky3-gpu/goldilocks" }
p3-keccak = { path = "/plonky3-gpu/keccak" }
p3-keccak-air = { path = "/plonky3-gpu/keccak-air" }
p3-blake3 = { path = "/plonky3-gpu/blake3" }
p3-mds = { path = "/plonky3-gpu/mds" }
p3-monty-31 = { path = "/plonky3-gpu/monty-31" }
p3-merkle-tree = { path = "/plonky3-gpu/merkle-tree" }
p3-poseidon = { path = "/plonky3-gpu/poseidon" }
p3-poseidon2 = { path = "/plonky3-gpu/poseidon2" }
p3-poseidon2-air = { path = "/plonky3-gpu/poseidon2-air" }
p3-symmetric = { path = "/plonky3-gpu/symmetric" }
p3-uni-stark = { path = "/plonky3-gpu/uni-stark" }
p3-maybe-rayon = { path = "/plonky3-gpu/maybe-rayon" }
p3-bn254-fr = { path = "/plonky3-gpu/bn254-fr" }

View File

@@ -1,2 +0,0 @@
[url "https://github.com/"]
insteadOf = ssh://git@github.com/

View File

@@ -4,3 +4,5 @@ docs/
l2geth/
rpc-gateway/
*target/*
permissionless-batches/conf/

View File

@@ -4,3 +4,5 @@ docs/
l2geth/
rpc-gateway/
*target/*
permissionless-batches/conf/

View File

@@ -1,5 +1,8 @@
assets/
contracts/
docs/
l2geth/
rpc-gateway/
*target/*
*target/*
permissionless-batches/conf/

View File

@@ -0,0 +1,30 @@
# Download Go dependencies
FROM scrolltech/go-rust-builder:go-1.21-rust-nightly-2023-12-03 as base
WORKDIR /src
COPY go.work* ./
COPY ./rollup/go.* ./rollup/
COPY ./common/go.* ./common/
COPY ./coordinator/go.* ./coordinator/
COPY ./database/go.* ./database/
COPY ./tests/integration-test/go.* ./tests/integration-test/
COPY ./bridge-history-api/go.* ./bridge-history-api/
RUN go mod download -x
# Build rollup_relayer
FROM base as builder
RUN --mount=target=. \
--mount=type=cache,target=/root/.cache/go-build \
cd /src/rollup/cmd/permissionless_batches/ && CGO_LDFLAGS="-ldl" go build -v -p 4 -o /bin/rollup_relayer
# Pull rollup_relayer into a second stage deploy ubuntu container
FROM ubuntu:20.04
RUN apt update && apt install vim netcat-openbsd net-tools curl ca-certificates -y
ENV CGO_LDFLAGS="-ldl"
COPY --from=builder /bin/rollup_relayer /bin/
WORKDIR /app
ENTRYPOINT ["rollup_relayer"]

View File

@@ -0,0 +1,8 @@
assets/
contracts/
docs/
l2geth/
rpc-gateway/
*target/*
permissionless-batches/conf/

View File

@@ -1,5 +1,8 @@
assets/
contracts/
docs/
l2geth/
rpc-gateway/
*target/*
*target/*
permissionless-batches/conf/

common/.gitignore (vendored), 3 changed lines
View File

@@ -1,4 +1,3 @@
/build/bin
.idea
libzkp/impl/target
libzkp/interface/*.a
libzkp

View File

@@ -4,5 +4,4 @@ test:
go test -v -race -coverprofile=coverage.txt -covermode=atomic -p 1 $(PWD)/...
lint: ## Lint the files - used for CI
GOBIN=$(PWD)/build/bin go run ../build/lint.go
cd libzkp/impl && cargo fmt --all -- --check && cargo clippy --release -- -D warnings
GOBIN=$(PWD)/build/bin go run ../build/lint.go

View File

@@ -15,7 +15,7 @@ require (
github.com/modern-go/reflect2 v1.0.2
github.com/orcaman/concurrent-map v1.0.0
github.com/prometheus/client_golang v1.19.0
github.com/scroll-tech/go-ethereum v1.10.14-0.20250305151038-478940e79601
github.com/scroll-tech/go-ethereum v1.10.14-0.20250625112225-a67863c65587
github.com/stretchr/testify v1.10.0
github.com/testcontainers/testcontainers-go v0.30.0
github.com/testcontainers/testcontainers-go/modules/compose v0.30.0
@@ -184,7 +184,7 @@ require (
github.com/rjeczalik/notify v0.9.1 // indirect
github.com/rs/cors v1.7.0 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/scroll-tech/da-codec v0.1.3-0.20250310095435-012aaee6b435 // indirect
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178 // indirect
github.com/scroll-tech/zktrie v0.8.4 // indirect
github.com/secure-systems-lab/go-securesystemslib v0.4.0 // indirect
github.com/serialx/hashring v0.0.0-20190422032157-8b2912629002 // indirect

View File

@@ -636,10 +636,10 @@ github.com/rs/cors v1.7.0 h1:+88SsELBHx5r+hZ8TCkggzSstaWNbDvThkVK8H6f9ik=
github.com/rs/cors v1.7.0/go.mod h1:gFx+x8UowdsKA9AchylcLynDq+nNFfI8FkUZdN/jGCU=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/scroll-tech/da-codec v0.1.3-0.20250310095435-012aaee6b435 h1:X9fkvjrYBY79lGgKEPpUhuiJ4vWpWwzOVw4H8CU8L54=
github.com/scroll-tech/da-codec v0.1.3-0.20250310095435-012aaee6b435/go.mod h1:yhTS9OVC0xQGhg7DN5iV5KZJvnSIlFWAxDdp+6jxQtY=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250305151038-478940e79601 h1:NEsjCG6uSvLRBlsP3+x6PL1kM+Ojs3g8UGotIPgJSz8=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250305151038-478940e79601/go.mod h1:OblWe1+QrZwdpwO0j/LY3BSGuKT3YPUFBDQQgvvfStQ=
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178 h1:4utngmJHXSOS5FoSdZhEV1xMRirpArbXvyoCZY9nYj0=
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178/go.mod h1:Z6kN5u2khPhiqHyk172kGB7o38bH/nj7Ilrb/46wZGg=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250625112225-a67863c65587 h1:wG1+gb+K4iLtxAHhiAreMdIjP5x9hB64duraN2+u1QU=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250625112225-a67863c65587/go.mod h1:YyfB2AyAtphlbIuDQgaxc2b9mo0zE4EBA1+qtXvzlmg=
github.com/scroll-tech/zktrie v0.8.4 h1:UagmnZ4Z3ITCk+aUq9NQZJNAwnWl4gSxsLb2Nl7IgRE=
github.com/scroll-tech/zktrie v0.8.4/go.mod h1:XvNo7vAk8yxNyTjBDj5WIiFzYW4bx/gJ78+NK6Zn6Uk=
github.com/secure-systems-lab/go-securesystemslib v0.4.0 h1:b23VGrQhTA8cN2CbBw7/FulN9fTtqYUdS5+Oxzt+DUE=

View File

@@ -10,12 +10,6 @@ import (
"github.com/scroll-tech/go-ethereum/common/hexutil"
)
const (
EuclidV2Fork = "euclidV2"
EuclidV2ForkNameForProver = "euclidv2"
)
// ProofType represents the type of task.
type ProofType uint8
@@ -141,10 +135,18 @@ type BlockContextV2 struct {
NumL1Msgs uint16 `json:"num_l1_msgs"`
}
// Metric data carried with OpenVMProof
type OpenVMProofStat struct {
TotalCycle uint64 `json:"total_cycles"`
ExecutionTimeMills uint64 `json:"execution_time_mills"`
ProvingTimeMills uint64 `json:"proving_time_mills"`
}
// Proof for flatten VM proof
type OpenVMProof struct {
Proof []byte `json:"proofs"`
PublicValues []byte `json:"public_values"`
Proof []byte `json:"proofs"`
PublicValues []byte `json:"public_values"`
Stat *OpenVMProofStat `json:"stat,omitempty"`
}
// Proof for flatten EVM proof
@@ -156,7 +158,8 @@ type OpenVMEvmProof struct {
// OpenVMChunkProof includes the proof info that are required for chunk verification and rollup.
type OpenVMChunkProof struct {
MetaData struct {
ChunkInfo *ChunkInfo `json:"chunk_info"`
ChunkInfo *ChunkInfo `json:"chunk_info"`
TotalGasUsed uint64 `json:"chunk_total_gas"`
} `json:"metadata"`
VmProof *OpenVMProof `json:"proof"`
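
The message-type changes above attach proving statistics to the flattened VM proof (OpenVMProofStat, exposed via the optional stat field) and add a chunk_total_gas value to the chunk proof metadata. A small decoding sketch with trimmed mirror structs; the payload is made up, and the byte fields are just base64 placeholders ("abc" / "xyz"):

package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed mirrors of the message types changed above; only the fields relevant
// to the new proving statistics are included here.
type openVMProofStat struct {
	TotalCycle         uint64 `json:"total_cycles"`
	ExecutionTimeMills uint64 `json:"execution_time_mills"`
	ProvingTimeMills   uint64 `json:"proving_time_mills"`
}

type openVMProof struct {
	Proof        []byte           `json:"proofs"`
	PublicValues []byte           `json:"public_values"`
	Stat         *openVMProofStat `json:"stat,omitempty"`
}

type openVMChunkProof struct {
	MetaData struct {
		TotalGasUsed uint64 `json:"chunk_total_gas"`
	} `json:"metadata"`
	VmProof *openVMProof `json:"proof"`
}

func main() {
	// Hypothetical prover output: encoding/json decodes the base64 strings
	// into []byte, and all numbers are made up for illustration.
	raw := `{
		"metadata": {"chunk_total_gas": 12345678},
		"proof": {
			"proofs": "YWJj",
			"public_values": "eHl6",
			"stat": {"total_cycles": 900000000, "execution_time_mills": 4200, "proving_time_mills": 95000}
		}
	}`

	var p openVMChunkProof
	if err := json.Unmarshal([]byte(raw), &p); err != nil {
		panic(err)
	}
	fmt.Printf("chunk gas=%d, cycles=%d, proving=%dms\n",
		p.MetaData.TotalGasUsed, p.VmProof.Stat.TotalCycle, p.VmProof.Stat.ProvingTimeMills)
}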

View File

@@ -5,7 +5,7 @@ import (
"runtime/debug"
)
var tag = "v4.5.25"
var tag = "v4.5.46"
var commit = func() string {
if info, ok := debug.ReadBuildInfo(); ok {

View File

@@ -1,4 +1,4 @@
.PHONY: lint docker clean coordinator coordinator_skip_libzkp mock_coordinator
.PHONY: lint docker clean coordinator coordinator_skip_libzkp mock_coordinator libzkp
IMAGE_VERSION=latest
REPO_ROOT_DIR=./..
@@ -34,6 +34,13 @@ coordinator_cron:
coordinator_tool:
go build -ldflags "-X scroll-tech/common/version.ZkVersion=${ZK_VERSION}" -o $(PWD)/build/bin/coordinator_tool ./cmd/tool
localsetup: coordinator_api ## Local setup: build coordinator_api, copy config, and setup releases
@echo "Copying configuration files..."
cp -r $(PWD)/conf $(PWD)/build/bin/
@echo "Setting up releases..."
cd $(PWD)/build && bash setup_releases.sh
#coordinator_api_skip_libzkp:
# go build -ldflags "-X scroll-tech/common/version.ZkVersion=${ZK_VERSION}" -o $(PWD)/build/bin/coordinator_api ./cmd/api
@@ -51,6 +58,7 @@ test-gpu-verifier: $(LIBZKP_PATH)
lint: ## Lint the files - used for CI
GOBIN=$(PWD)/build/bin go run ../build/lint.go
cd ../ && cargo fmt --all -- --check && cargo clippy --release -- -D warnings
clean: ## Empty out the bin folder
@rm -rf build/bin

View File

@@ -0,0 +1,62 @@
#!/bin/bash
# release version
if [ -z "${SCROLL_ZKVM_VERSION}" ]; then
echo "SCROLL_ZKVM_VERSION not set"
exit 1
fi
# set ASSET_DIR by reading from config.json
CONFIG_FILE="bin/conf/config.json"
if [ ! -f "$CONFIG_FILE" ]; then
echo "Config file $CONFIG_FILE not found"
exit 1
fi
# get the number of verifiers in the array
VERIFIER_COUNT=$(jq -r '.prover_manager.verifier.verifiers | length' "$CONFIG_FILE")
if [ "$VERIFIER_COUNT" = "null" ] || [ "$VERIFIER_COUNT" -eq 0 ]; then
echo "No verifiers found in config file"
exit 1
fi
echo "Found $VERIFIER_COUNT verifier(s) in config"
# iterate through each verifier entry
for ((i=0; i<$VERIFIER_COUNT; i++)); do
# extract assets_path for current verifier
ASSETS_PATH=$(jq -r ".prover_manager.verifier.verifiers[$i].assets_path" "$CONFIG_FILE")
FORK_NAME=$(jq -r ".prover_manager.verifier.verifiers[$i].fork_name" "$CONFIG_FILE")
if [ "$ASSETS_PATH" = "null" ]; then
echo "Warning: Could not find assets_path for verifier $i, skipping..."
continue
fi
echo "Processing verifier $i ($FORK_NAME): assets_path=$ASSETS_PATH"
# check if it's an absolute path (starts with /)
if [[ "$ASSETS_PATH" = /* ]]; then
# absolute path, use as is
ASSET_DIR="$ASSETS_PATH"
else
# relative path, prefix with "bin/"
ASSET_DIR="bin/$ASSETS_PATH"
fi
echo "Using ASSET_DIR: $ASSET_DIR"
# create directory if it doesn't exist
mkdir -p "$ASSET_DIR"
# assets for verifier-only mode
echo "Downloading assets for $FORK_NAME to $ASSET_DIR..."
wget https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/verifier/verifier.bin -O ${ASSET_DIR}/verifier.bin
wget https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/verifier/openVmVk.json -O ${ASSET_DIR}/openVmVk.json
echo "Completed downloading assets for $FORK_NAME"
echo "---"
done
echo "All verifier assets downloaded successfully"

View File

@@ -90,12 +90,12 @@ func (c *CoordinatorApp) MockConfig(store bool) error {
cfg.ProverManager = &coordinatorConfig.ProverManager{
ProversPerSession: 1,
Verifier: &coordinatorConfig.VerifierConfig{
HighVersionCircuit: &coordinatorConfig.CircuitConfig{
AssetsPath: "",
ForkName: "euclidV2",
MinProverVersion: "v4.4.89",
MinProverVersion: "v4.4.89",
Verifiers: []coordinatorConfig.AssetConfig{{
AssetsPath: "",
ForkName: "feynman",
},
},
}},
BatchCollectionTimeSec: 60,
ChunkCollectionTimeSec: 60,
SessionAttempts: 10,

View File

@@ -19,6 +19,7 @@ import (
)
var app *cli.App
var cfg *config.Config
func init() {
// Set up coordinator app info.
@@ -29,16 +30,29 @@ func init() {
app.Version = version.Version
app.Flags = append(app.Flags, utils.CommonFlags...)
app.Before = func(ctx *cli.Context) error {
return utils.LogSetup(ctx)
if err := utils.LogSetup(ctx); err != nil {
return err
}
cfgFile := ctx.String(utils.ConfigFileFlag.Name)
var err error
cfg, err = config.NewConfig(cfgFile)
if err != nil {
log.Crit("failed to load config file", "config file", cfgFile, "error", err)
}
return nil
}
// sub commands
app.Commands = []*cli.Command{
{
Name: "verify",
Usage: "verify an proof, specified by [forkname] <type> <proof path>",
Action: verify,
},
}
}
func action(ctx *cli.Context) error {
cfgFile := ctx.String(utils.ConfigFileFlag.Name)
cfg, err := config.NewConfig(cfgFile)
if err != nil {
log.Crit("failed to load config file", "config file", cfgFile, "error", err)
}
db, err := database.InitDB(cfg.DB)
if err != nil {
log.Crit("failed to init db connection", "err", err)

View File

@@ -0,0 +1,109 @@
package main
import (
"bytes"
"encoding/base64"
"encoding/json"
"fmt"
"os"
"path/filepath"
"strings"
"scroll-tech/coordinator/internal/logic/verifier"
"scroll-tech/common/types/message"
"github.com/scroll-tech/go-ethereum/log"
"github.com/urfave/cli/v2"
)
func verify(cCtx *cli.Context) error {
var forkName, proofType, proofPath string
if cCtx.Args().Len() <= 2 {
forkName = cfg.ProverManager.Verifier.Verifiers[0].ForkName
proofType = cCtx.Args().First()
proofPath = cCtx.Args().Get(1)
} else {
forkName = cCtx.Args().First()
proofType = cCtx.Args().Get(1)
proofPath = cCtx.Args().Get(2)
}
log.Info("verify proof", "in", proofPath, "type", proofType, "forkName", forkName)
// Load the content of the proof file
data, err := os.ReadFile(filepath.Clean(proofPath))
if err != nil {
return fmt.Errorf("error reading file: %w", err)
}
vf, err := verifier.NewVerifier(cfg.ProverManager.Verifier)
if err != nil {
return err
}
var ret bool
switch strings.ToLower(proofType) {
case "chunk":
proof := &message.OpenVMChunkProof{}
if err := json.Unmarshal(data, proof); err != nil {
return err
}
vk, ok := vf.ChunkVk[forkName]
if !ok {
return fmt.Errorf("no vk loaded for fork %s", forkName)
}
if len(proof.Vk) != 0 {
if !bytes.Equal(proof.Vk, vk) {
return fmt.Errorf("unmatch vk with expected: expected %s, get %s",
base64.StdEncoding.EncodeToString(vk),
base64.StdEncoding.EncodeToString(proof.Vk),
)
}
} else {
proof.Vk = vk
}
ret, err = vf.VerifyChunkProof(proof, forkName)
case "batch":
proof := &message.OpenVMBatchProof{}
if err := json.Unmarshal(data, proof); err != nil {
return err
}
vk, ok := vf.BatchVk[forkName]
if !ok {
return fmt.Errorf("no vk loaded for fork %s", forkName)
}
if len(proof.Vk) != 0 {
if !bytes.Equal(proof.Vk, vk) {
return fmt.Errorf("unmatch vk with expected: expected %s, get %s",
base64.StdEncoding.EncodeToString(vk),
base64.StdEncoding.EncodeToString(proof.Vk),
)
}
} else {
proof.Vk = vk
}
ret, err = vf.VerifyBatchProof(proof, forkName)
case "bundle":
proof := &message.OpenVMBundleProof{}
if err := json.Unmarshal(data, proof); err != nil {
return err
}
vk, ok := vf.BundleVk[forkName]
if !ok {
return fmt.Errorf("no vk loaded for fork %s", forkName)
}
proof.Vk = vk
ret, err = vf.VerifyBundleProof(proof, forkName)
default:
return fmt.Errorf("unsupport proof type %s", proofType)
}
if err != nil {
return err
}
log.Info("verified:", "ret", ret)
return nil
}
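
Assuming the coordinator_tool build target and config layout from the coordinator Makefile above, the new subcommand would be invoked along the lines of coordinator_tool verify feynman chunk ./chunk_proof.json, together with the usual config-file flag from utils.CommonFlags. When the fork name argument is omitted, the code falls back to the first verifier entry in the config, and a proof's embedded vk (if present) is checked against the locally loaded one before verification. The flag name and binary path are not shown in this diff, so this command line is illustrative only.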

View File

@@ -7,11 +7,17 @@
"batch_collection_time_sec": 180,
"chunk_collection_time_sec": 180,
"verifier": {
"high_version_circuit": {
"assets_path": "assets",
"fork_name": "euclidV2",
"min_prover_version": "v4.4.45"
}
"min_prover_version": "v4.4.45",
"verifiers": [
{
"assets_path": "assets",
"fork_name": "euclidV2"
},
{
"assets_path": "assets",
"fork_name": "feynman"
}
]
}
},
"db": {
@@ -21,7 +27,10 @@
"maxIdleNum": 20
},
"l2": {
"chain_id": 111
"chain_id": 111,
"l2geth": {
"endpoint": "not need to specified for mocking"
}
},
"auth": {
"secret": "prover secret key",

View File

@@ -9,7 +9,7 @@ require (
github.com/google/uuid v1.6.0
github.com/mitchellh/mapstructure v1.5.0
github.com/prometheus/client_golang v1.19.0
github.com/scroll-tech/da-codec v0.1.3-0.20250626091118-58b899494da6
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178
github.com/scroll-tech/go-ethereum v1.10.14-0.20250626110859-cc9a1dd82de7
github.com/shopspring/decimal v1.3.1
github.com/stretchr/testify v1.10.0
@@ -46,6 +46,7 @@ require (
)
require (
github.com/VictoriaMetrics/fastcache v1.12.2 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/bits-and-blooms/bitset v1.20.0 // indirect
github.com/btcsuite/btcd v0.20.1-beta // indirect
@@ -55,28 +56,57 @@ require (
github.com/cpuguy83/go-md2man/v2 v2.0.3 // indirect
github.com/crate-crypto/go-kzg-4844 v1.1.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/deckarep/golang-set v0.0.0-20180603214616-504e848d77ea // indirect
github.com/edsrzf/mmap-go v1.0.0 // indirect
github.com/ethereum/c-kzg-4844 v1.0.3 // indirect
github.com/fjl/memsize v0.0.2 // indirect
github.com/gballet/go-libpcsclite v0.0.0-20190607065134-2772fd86a8ff // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
github.com/go-stack/stack v1.8.1 // indirect
github.com/golang/snappy v0.0.5-0.20220116011046-fa5810519dcb // indirect
github.com/gorilla/websocket v1.4.2 // indirect
github.com/hashicorp/go-bexpr v0.1.10 // indirect
github.com/hashicorp/golang-lru v0.5.5-0.20210104140557-80c98217689d // indirect
github.com/holiman/bloomfilter/v2 v2.0.3 // indirect
github.com/holiman/uint256 v1.3.2 // indirect
github.com/huin/goupnp v1.0.2 // indirect
github.com/iden3/go-iden3-crypto v0.0.17 // indirect
github.com/jackpal/go-nat-pmp v1.0.2-0.20160603034137-1fa385a6f458 // indirect
github.com/klauspost/compress v1.17.9 // indirect
github.com/mattn/go-colorable v0.1.8 // indirect
github.com/mattn/go-runewidth v0.0.15 // indirect
github.com/mitchellh/pointerstructure v1.2.0 // indirect
github.com/mmcloughlin/addchain v0.4.0 // indirect
github.com/olekukonko/tablewriter v0.0.5 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_model v0.5.0 // indirect
github.com/prometheus/common v0.48.0 // indirect
github.com/prometheus/procfs v0.12.0 // indirect
github.com/prometheus/tsdb v0.7.1 // indirect
github.com/rivo/uniseg v0.4.4 // indirect
github.com/rjeczalik/notify v0.9.1 // indirect
github.com/rs/cors v1.7.0 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/scroll-tech/zktrie v0.8.4 // indirect
github.com/shirou/gopsutil v3.21.11+incompatible // indirect
github.com/sourcegraph/conc v0.3.0 // indirect
github.com/status-im/keycard-go v0.0.0-20190316090335-8537d3370df4 // indirect
github.com/supranational/blst v0.3.13 // indirect
github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7 // indirect
github.com/tklauser/go-sysconf v0.3.14 // indirect
github.com/tklauser/numcpus v0.9.0 // indirect
github.com/tyler-smith/go-bip39 v1.0.1-0.20181017060643-dbb3b84ba2ef // indirect
github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
go.uber.org/atomic v1.7.0 // indirect
go.uber.org/multierr v1.9.0 // indirect
golang.org/x/crypto v0.32.0 // indirect
golang.org/x/sync v0.11.0 // indirect
golang.org/x/sys v0.30.0 // indirect
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba // indirect
gopkg.in/natefinch/npipe.v2 v2.0.0-20160621034901-c1b8fa8bdcce // indirect
gopkg.in/urfave/cli.v1 v1.20.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
rsc.io/tmplfunc v0.0.3 // indirect
)

View File

@@ -1,12 +1,18 @@
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/VictoriaMetrics/fastcache v1.12.2 h1:N0y9ASrJ0F6h0QaC3o6uJb3NIZ9VKLjCM7NQbSmF7WI=
github.com/VictoriaMetrics/fastcache v1.12.2/go.mod h1:AmC+Nzz1+3G2eCPapF6UcsnkThDcMsQicp4xDukwJYI=
github.com/aead/siphash v1.0.1/go.mod h1:Nywa3cDsYNNK3gaciGTWPwHt0wlpNV15vwmswBAUSII=
github.com/agiledragon/gomonkey/v2 v2.12.0 h1:ek0dYu9K1rSV+TgkW5LvNNPRWyDZVIxGMCFI6Pz9o38=
github.com/agiledragon/gomonkey/v2 v2.12.0/go.mod h1:ap1AmDzcVOAz1YpeJ3TCzIgstoaWLA6jbbgxfB4w2iY=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/allegro/bigcache v1.2.1-0.20190218064605-e24eb225f156 h1:eMwmnE/GDgah4HI848JfFxHt+iPb26b4zyfspmqY0/8=
github.com/allegro/bigcache v1.2.1-0.20190218064605-e24eb225f156/go.mod h1:Cb/ax3seSYIx7SuZdm2G2xzfwmv3TPSk2ucNfQESPXM=
github.com/appleboy/gin-jwt/v2 v2.9.1 h1:l29et8iLW6omcHltsOP6LLk4s3v4g2FbFs0koxGWVZs=
github.com/appleboy/gin-jwt/v2 v2.9.1/go.mod h1:jwcPZJ92uoC9nOUTOKWoN/f6JZOgMSKlFSHw5/FrRUk=
github.com/appleboy/gofight/v2 v2.1.2 h1:VOy3jow4vIK8BRQJoC/I9muxyYlJ2yb9ht2hZoS3rf4=
github.com/appleboy/gofight/v2 v2.1.2/go.mod h1:frW+U1QZEdDgixycTj4CygQ48yLTUhplt43+Wczp3rw=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bits-and-blooms/bitset v1.20.0 h1:2F+rfL86jE2d/bmw7OhqUg2Sj/1rURkBn3MdfoPyRVU=
@@ -24,6 +30,9 @@ github.com/bytedance/sonic v1.5.0/go.mod h1:ED5hyg4y6t3/9Ku1R6dU/4KyJ48DZ4jPhfY1
github.com/bytedance/sonic v1.10.0-rc/go.mod h1:ElCzW+ufi8qKqNW0FY314xriJhyJhuoJ3gFZdAHF7NM=
github.com/bytedance/sonic v1.10.1 h1:7a1wuFXL1cMy7a3f7/VFcEtriuXQnUBhtoVfOZiaysc=
github.com/bytedance/sonic v1.10.1/go.mod h1:iZcSUejdk5aukTND/Eu/ivjQuEL0Cu9/rf50Hi0u/g4=
github.com/cespare/cp v0.1.0 h1:SE+dxFebS7Iik5LK0tsi1k9ZCxEaFX4AjQmoyA+1dJk=
github.com/cespare/cp v0.1.0/go.mod h1:SOGHArjBr4JWaSDEVpWpo/hNg6RoKrls6Oh40hiwW+s=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chenzhuoyu/base64x v0.0.0-20211019084208-fb5309c8db06/go.mod h1:DH46F32mSOjUmXrMHnKwZdA8wcEefY7UVqBKYGjpdQY=
@@ -45,16 +54,32 @@ github.com/davecgh/go-spew v0.0.0-20171005155431-ecdeabc65495/go.mod h1:J7Y8YcW2
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/deckarep/golang-set v0.0.0-20180603214616-504e848d77ea h1:j4317fAZh7X6GqbFowYdYdI0L9bwxL07jyPZIdepyZ0=
github.com/deckarep/golang-set v0.0.0-20180603214616-504e848d77ea/go.mod h1:93vsz/8Wt4joVM7c2AVqh+YRMiUSc14yDtF28KmMOgQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/edsrzf/mmap-go v1.0.0 h1:CEBF7HpRnUCSJgGUb5h1Gm7e3VkmVDrR8lvWVLtrOFw=
github.com/edsrzf/mmap-go v1.0.0/go.mod h1:YO35OhQPt3KJa3ryjFM5Bs14WD66h8eGKpfaBNrHW5M=
github.com/ethereum/c-kzg-4844 v1.0.3 h1:IEnbOHwjixW2cTvKRUlAAUOeleV7nNM/umJR+qy4WDs=
github.com/ethereum/c-kzg-4844 v1.0.3/go.mod h1:VewdlzQmpT5QSrVhbBuGoCdFJkpaJlO1aQputP83wc0=
github.com/fjl/memsize v0.0.2 h1:27txuSD9or+NZlnOWdKUxeBzTAUkWCVh+4Gf2dWFOzA=
github.com/fjl/memsize v0.0.2/go.mod h1:VvhXpOYNQvB+uIk2RvXzuaQtkQJzzIx6lSBe1xv7hi0=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/gabriel-vasile/mimetype v1.4.2 h1:w5qFW6JKBz9Y393Y4q372O9A7cUSequkh1Q7OhCmWKU=
github.com/gabriel-vasile/mimetype v1.4.2/go.mod h1:zApsH/mKG4w07erKIaJPFiX0Tsq9BFQgN3qGY5GnNgA=
github.com/gballet/go-libpcsclite v0.0.0-20190607065134-2772fd86a8ff h1:tY80oXqGNY4FhTFhk+o9oFHGINQ/+vhlm8HFzi6znCI=
github.com/gballet/go-libpcsclite v0.0.0-20190607065134-2772fd86a8ff/go.mod h1:x7DCsMOv1taUwEWCzT4cmDeAkigA5/QCwUodaVOe8Ww=
github.com/gin-contrib/sse v0.1.0 h1:Y/yl/+YNO8GZSjAhjMsSuLt29uWRFHdHYUb5lYOV9qE=
github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm+fLHvGI=
github.com/gin-gonic/gin v1.8.1/go.mod h1:ji8BvRH1azfM+SYow9zQ6SZMvR8qOMZHmsCuWR9tTTk=
github.com/gin-gonic/gin v1.9.1 h1:4idEAncQnU5cB7BeOkPtxjfCSye0AAm1R0RVIqJ+Jmg=
github.com/gin-gonic/gin v1.9.1/go.mod h1:hPrL7YrpYKXt5YId3A/Tnip5kqbEAP+KLuI3SUcPTeU=
github.com/go-kit/kit v0.8.0 h1:Wz+5lgoB0kkuqLEc6NVmwRknTKP6dTGbSqvhZtBI/j0=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.5.1 h1:otpy5pqBCBZ1ng9RQ0dPu4PN7ba75Y/aA+UpowDyNVA=
github.com/go-logfmt/logfmt v0.5.1/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE=
github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78=
@@ -73,19 +98,31 @@ github.com/go-playground/validator/v10 v10.15.5 h1:LEBecTWb/1j5TNY1YYG2RcOUN3R7N
github.com/go-playground/validator/v10 v10.15.5/go.mod h1:9iXMNT7sEkjXb0I+enO7QXmzG6QCsPWY4zveKFVRSyU=
github.com/go-resty/resty/v2 v2.7.0 h1:me+K9p3uhSmXtrBZ4k9jcEAfJmuC8IivWHwaLZwPrFY=
github.com/go-resty/resty/v2 v2.7.0/go.mod h1:9PWDzw47qPphMRFfhsyk0NnSgvluHcljSMVIq3w7q0I=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-stack/stack v1.8.1 h1:ntEHSVwIt7PNXNpgPmVfMrNhLtgjlmnZha2kOpuRiDw=
github.com/go-stack/stack v1.8.1/go.mod h1:dcoOX6HbPZSZptuspn9bctJ+N/CnF5gGygcUP3XYfe4=
github.com/goccy/go-json v0.9.7/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
github.com/goccy/go-json v0.10.0/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
github.com/goccy/go-json v0.10.2 h1:CrxCmQqYDkv1z7lO7Wbh2HN93uovUHgrECaO5ZrCXAU=
github.com/goccy/go-json v0.10.2/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/golang-jwt/jwt/v4 v4.4.3/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang-jwt/jwt/v4 v4.5.0 h1:7cYmW1XlMY7h7ii7UhUyChSgS5wUJEnm9uZVTGqOWzg=
github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v0.0.5-0.20220116011046-fa5810519dcb h1:PBC98N2aIaM3XXiurYmW7fx4GZkL8feAMVq7nEjURHk=
github.com/golang/snappy v0.0.5-0.20220116011046-fa5810519dcb/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
@@ -93,13 +130,24 @@ github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/
github.com/google/subcommands v1.2.0/go.mod h1:ZjhPrFU+Olkh9WazFPsl27BQ4UPiG37m3yTrtFlrHVk=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/websocket v1.4.2 h1:+/TMaTYc4QFitKJxsQ7Yye35DkWvkdLcvGKqM+x0Ufc=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/hashicorp/go-bexpr v0.1.10 h1:9kuI5PFotCboP3dkDYFr/wi0gg0QVbSNz5oFRpxn4uE=
github.com/hashicorp/go-bexpr v0.1.10/go.mod h1:oxlubA2vC/gFVfX1A6JGp7ls7uCDlfJn732ehYYg+g0=
github.com/hashicorp/golang-lru v0.5.5-0.20210104140557-80c98217689d h1:dg1dEPuWpEqDnvIw251EVy4zlP8gWbsGj4BsUKCRpYs=
github.com/hashicorp/golang-lru v0.5.5-0.20210104140557-80c98217689d/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
github.com/holiman/bloomfilter/v2 v2.0.3 h1:73e0e/V0tCydx14a0SCYS/EWCxgwLZ18CZcZKVu0fao=
github.com/holiman/bloomfilter/v2 v2.0.3/go.mod h1:zpoh+gs7qcpqrHr3dB55AMiJwo0iURXE7ZOP9L9hSkA=
github.com/holiman/uint256 v1.3.2 h1:a9EgMPSC1AAaj1SZL5zIQD3WbwTuHrMGOerLjGmM/TA=
github.com/holiman/uint256 v1.3.2/go.mod h1:EOMSn4q6Nyt9P6efbI3bueV4e1b3dGlUCXeiRV4ng7E=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/huin/goupnp v1.0.2 h1:RfGLP+h3mvisuWEyybxNq5Eft3NWhHLPeUN72kpKZoI=
github.com/huin/goupnp v1.0.2/go.mod h1:0dxJBVBHqTMjIUMkESDTNgOOx/Mw5wYIfyFmdzSamkM=
github.com/huin/goutil v0.0.0-20170803182201-1ca381bf3150/go.mod h1:PpLOETDnJ0o3iZrZfqZzyLl6l7F3c6L1oWn7OICBi6o=
github.com/iden3/go-iden3-crypto v0.0.17 h1:NdkceRLJo/pI4UpcjVah4lN/a3yzxRUGXqxbWcYh9mY=
github.com/iden3/go-iden3-crypto v0.0.17/go.mod h1:dLpM4vEPJ3nDHzhWFXDjzkn1qHoBeOT/3UEhXsEsP3E=
github.com/jackpal/go-nat-pmp v1.0.2-0.20160603034137-1fa385a6f458 h1:6OvNmYgJyexcZ3pYbTI9jWx5tHo1Dee/tWbLMfPe2TA=
github.com/jackpal/go-nat-pmp v1.0.2-0.20160603034137-1fa385a6f458/go.mod h1:QPH045xvCAeXUZOxsnwmrtiCoxIr9eob+4orBN1SBKc=
github.com/jessevdk/go-flags v0.0.0-20141203071132-1679536dcc89/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=
github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
@@ -115,6 +163,7 @@ github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa02
github.com/klauspost/cpuid/v2 v2.2.5 h1:0E5MSMDEoAulmXNFquVs//DdoomxaoTY1kUhbc/qbZg=
github.com/klauspost/cpuid/v2 v2.2.5/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
github.com/knz/go-libedit v1.10.1/go.mod h1:MZTVkCWyz0oBc7JOWP3wNAzd002ZbM/5hgShxwh4x8M=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.0/go.mod h1:640gp4NfQd8pI5XOwp5fnNeVWj67G7CFk/SaSQn7NBk=
@@ -129,14 +178,22 @@ github.com/leanovate/gopter v0.2.11/go.mod h1:aK3tzZP/C+p1m3SPRE4SYZFGP7jjkuSI4f
github.com/leodido/go-urn v1.2.1/go.mod h1:zt4jvISO2HfUBqxjfIshjdMTYS56ZS/qv49ictyFfxY=
github.com/leodido/go-urn v1.2.4 h1:XlAE/cm/ms7TE/VMVoduSpNBoyc2dOxHs5MZSwAN63Q=
github.com/leodido/go-urn v1.2.4/go.mod h1:7ZrI8mTSeBSHl/UaRyKQW1qZeMgak41ANeCNaVckg+4=
github.com/mattn/go-colorable v0.1.8 h1:c1ghPdyEDarC70ftn0y+A/Ee++9zz8ljHG1b13eJ0s8=
github.com/mattn/go-colorable v0.1.8/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-runewidth v0.0.9/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI=
github.com/mattn/go-runewidth v0.0.15 h1:UNAjwbU9l54TA3KzvqLGxwWjHmMgBUVhBiTjelZgg3U=
github.com/mattn/go-runewidth v0.0.15/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/mitchellh/mapstructure v1.4.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/mitchellh/pointerstructure v1.2.0 h1:O+i9nHnXS3l/9Wu7r4NrEdwA2VFTicjUEN1uBnDo34A=
github.com/mitchellh/pointerstructure v1.2.0/go.mod h1:BRAsLI5zgXmw97Lf6s25bs8ohIXc3tViBH44KcwB2g4=
github.com/mmcloughlin/addchain v0.4.0 h1:SobOdjm2xLj1KkXN5/n0xTIWyZA2+s99UCY1iPfkHRY=
github.com/mmcloughlin/addchain v0.4.0/go.mod h1:A86O+tHqZLMNO4w6ZZ4FlVQEadcoqkyU72HC5wJ4RlU=
github.com/mmcloughlin/profile v0.1.1/go.mod h1:IhHD7q1ooxgwTgjxQYkACGA77oFTDdFVejUS1/tS/qU=
@@ -145,40 +202,59 @@ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/nxadm/tail v1.4.4 h1:DQuhQpB1tVlglWS2hLQ5OV6B5r8aGxSrPc5Qo6uTN78=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/olekukonko/tablewriter v0.0.5 h1:P2Ga83D34wi1o9J6Wh1mRuqd4mF/x/lgBS7N7AbDhec=
github.com/olekukonko/tablewriter v0.0.5/go.mod h1:hPp6KlRPjbx+hW8ykQs1w3UBbZlj6HuIJcUGPhkA7kY=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.14.0 h1:2mOpI4JVVPBN+WQRa0WKH2eXR+Ey+uK4n7Zj0aYpIQA=
github.com/onsi/ginkgo v1.14.0/go.mod h1:iSB4RoI2tjJc9BBv4NKIKWKya62Rps+oPG/Lv9klQyY=
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.10.1 h1:o0+MgICZLuZ7xjH7Vx6zS/zcu93/BEp1VwkIW1mEXCE=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
github.com/pelletier/go-toml/v2 v2.0.1/go.mod h1:r9LEWfGN8R5k0VXJ+0BkIe7MYkRdwZOjgMj2KwnJFUo=
github.com/pelletier/go-toml/v2 v2.0.6/go.mod h1:eumQOmlWiOPt5WriQQqoM5y18pDHwha2N+QD+EUNTek=
github.com/pelletier/go-toml/v2 v2.1.0 h1:FnwAJ4oYMvbT/34k9zzHuZNrhlz48GB3/s6at6/MHO4=
github.com/pelletier/go-toml/v2 v2.1.0/go.mod h1:tJU2Z3ZkXwnxa4DPO899bsyIoywizdUvyaeZurnPPDc=
github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v1.19.0 h1:ygXvpU1AoN1MhdzckN+PyD9QJOSD4x7kmXYlnfbA6JU=
github.com/prometheus/client_golang v1.19.0/go.mod h1:ZRM9uEAypZakd+q/x7+gmsvXdURP+DABIEIjnmDdp+k=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.5.0 h1:VQw1hfvPvk3Uv6Qf29VrPF32JB6rtbgI6cYPYQjL0Qw=
github.com/prometheus/client_model v0.5.0/go.mod h1:dTiFglRmd66nLR9Pv9f0mZi7B7fk5Pm3gvsjB5tr+kI=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.48.0 h1:QO8U2CdOzSn1BBsmXJXduaaW+dY/5QLjfB8svtSzKKE=
github.com/prometheus/common v0.48.0/go.mod h1:0/KsvlIEfPQCQ5I2iNSAWKPZziNCvRs5EC6ILDTlAPc=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.12.0 h1:jluTpSng7V9hY0O2R9DzzJHYb2xULk9VTR1V1R/k6Bo=
github.com/prometheus/procfs v0.12.0/go.mod h1:pcuDEFsWDnvcgNzo4EEweacyhjeA9Zk3cnaOZAZEfOo=
github.com/prometheus/tsdb v0.7.1 h1:YZcsG11NqnK4czYLrWd9mpEuAJIHVQLwdrleYfszMAA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.4 h1:8TfxU8dW6PdqD27gjM8MVNuicgxIjxpm4K7x4jp8sis=
github.com/rivo/uniseg v0.4.4/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/rjeczalik/notify v0.9.1 h1:CLCKso/QK1snAlnhNR/CNvNiFU2saUtjV0bx3EwNeCE=
github.com/rjeczalik/notify v0.9.1/go.mod h1:rKwnCoCGeuQnwBtTSPL9Dad03Vh2n40ePRrjvIXnJho=
github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
github.com/rogpeppe/go-internal v1.8.0/go.mod h1:WmiCO8CzOY8rg0OYDC4/i/2WRWAB6poM+XZ2dLUbcbE=
github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8=
github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
github.com/rs/cors v1.7.0 h1:+88SsELBHx5r+hZ8TCkggzSstaWNbDvThkVK8H6f9ik=
github.com/rs/cors v1.7.0/go.mod h1:gFx+x8UowdsKA9AchylcLynDq+nNFfI8FkUZdN/jGCU=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/scroll-tech/da-codec v0.1.3-0.20250626091118-58b899494da6 h1:vb2XLvQwCf+F/ifP6P/lfeiQrHY6+Yb/E3R4KHXLqSE=
github.com/scroll-tech/da-codec v0.1.3-0.20250626091118-58b899494da6/go.mod h1:Z6kN5u2khPhiqHyk172kGB7o38bH/nj7Ilrb/46wZGg=
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178 h1:4utngmJHXSOS5FoSdZhEV1xMRirpArbXvyoCZY9nYj0=
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178/go.mod h1:Z6kN5u2khPhiqHyk172kGB7o38bH/nj7Ilrb/46wZGg=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250626110859-cc9a1dd82de7 h1:1rN1qocsQlOyk1VCpIEF1J5pfQbLAi1pnMZSLQS37jQ=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250626110859-cc9a1dd82de7/go.mod h1:pDCZ4iGvEGmdIe4aSAGBrb7XSrKEML6/L/wEMmNxOdk=
github.com/scroll-tech/zktrie v0.8.4 h1:UagmnZ4Z3ITCk+aUq9NQZJNAwnWl4gSxsLb2Nl7IgRE=
@@ -187,9 +263,15 @@ github.com/shirou/gopsutil v3.21.11+incompatible h1:+1+c1VGhc88SSonWP6foOcLhvnKl
github.com/shirou/gopsutil v3.21.11+incompatible/go.mod h1:5b4v6he4MtMOwMlS0TUMTu2PcXUg8+E1lC7eC3UO/RA=
github.com/shopspring/decimal v1.3.1 h1:2Usl1nmF/WZucqkFZhnfFYxxxu8LG21F6nPQBE5gKV8=
github.com/shopspring/decimal v1.3.1/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o=
github.com/sourcegraph/conc v0.3.0 h1:OQTbbt6P72L20UqAkXXuLOj79LfEanQ+YQFNpLA9ySo=
github.com/sourcegraph/conc v0.3.0/go.mod h1:Sdozi7LEKbFPqYX2/J+iBAM6HpqSLTASQIKqDmF7Mt0=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/status-im/keycard-go v0.0.0-20190316090335-8537d3370df4 h1:Gb2Tyox57NRNuZ2d3rmvB3pcmbu7O1RS3m8WRx7ilrg=
github.com/status-im/keycard-go v0.0.0-20190316090335-8537d3370df4/go.mod h1:RZLeN1LMWmRsyYjvAu+I6Dm9QmlDaIIt+Y+4Kd7Tp+Q=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
@@ -216,6 +298,8 @@ github.com/tklauser/numcpus v0.9.0 h1:lmyCHtANi8aRUgkckBgoDk1nHCux3n2cgkJLXdQGPD
github.com/tklauser/numcpus v0.9.0/go.mod h1:SN6Nq1O3VychhC1npsWostA+oW+VOQTxZrS604NSRyI=
github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS4MhqMhdFk5YI=
github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08=
github.com/tyler-smith/go-bip39 v1.0.1-0.20181017060643-dbb3b84ba2ef h1:wHSqTBrZW24CsNJDfeh9Ex6Pm0Rcpc7qrgKBiL44vF4=
github.com/tyler-smith/go-bip39 v1.0.1-0.20181017060643-dbb3b84ba2ef/go.mod h1:sJ5fKU0s6JVwZjjcUEX2zFOnvq0ASQ2K9Zr6cf67kNs=
github.com/ugorji/go v1.2.7/go.mod h1:nF9osbDWLy6bDVv/Rtoh6QgnvNDpmCalQV5urGCCS6M=
github.com/ugorji/go/codec v1.2.7/go.mod h1:WGN1fab3R1fzQlVQTkfxVtIBhWDRqOviHU95kRgeqEY=
github.com/ugorji/go/codec v1.2.11 h1:BMaWp1Bb6fHwEtbplGBGJ498wD+LKlNSl25MjdZY4dU=
@@ -227,11 +311,16 @@ github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673/go.mod h1:N3UwUGtsr
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
go.uber.org/atomic v1.7.0 h1:ADUqmZGgLDDfbSL9ZmPxKTybcoEYHgpYfELNoN+7hsw=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/multierr v1.9.0 h1:7fIwc/ZtS0q++VgcfqFDxSBZVv/Xo49/SYnDFupUwlI=
go.uber.org/multierr v1.9.0/go.mod h1:X2jQV1h+kxSjClGpnseKVIxpmcjrj7MNnI0bnlfKTVQ=
golang.org/x/arch v0.0.0-20210923205945-b76863e36670/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
golang.org/x/arch v0.5.0 h1:jpGode6huXQxcskEIpOCvrU+tzo81b6+oFLUYXWtH/Y=
golang.org/x/arch v0.5.0/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
golang.org/x/crypto v0.0.0-20170930174604-9419663f5a44/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
@@ -240,7 +329,10 @@ golang.org/x/crypto v0.32.0 h1:euUpcYgM8WcP71gNpTqQCn6rC2t6ULUPiOzfWaXVVfc=
golang.org/x/crypto v0.32.0/go.mod h1:ZnnJkOaASj8g0AjIduWNlq2NRxL0PlBrbKVyZ6V/Ugc=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200520004742-59133d7f0dd7/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20200813134508-3edf25e44fcc/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20211029224645-99673261e6eb/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
@@ -250,13 +342,25 @@ golang.org/x/net v0.4.0/go.mod h1:MBQ8lrhLObU/6UmLb4fmbmk5OcyYmqtbGd/9yIeKjEE=
golang.org/x/net v0.23.0 h1:7EYJ93RZ9vYSZAIb2x3lnuvqO5zneoD6IvWjuhfxjTs=
golang.org/x/net v0.23.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.11.0 h1:GGz8+XQP4FvTTrjZPzNKTMFtSXH80RAzG+5ghFPgK9w=
golang.org/x/sync v0.11.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200814200057-3d37ad5750ed/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -269,36 +373,56 @@ golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.14.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.30.0 h1:QjkSwP/36a20jFYWkSue1YwXzLmsV5Gfq7Eiy72C1uc=
golang.org/x/sys v0.30.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.3.0/go.mod h1:q750SLmJuPmVoN1blW3UFBPREJfb1KmY3vwxfr+nFDA=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.5.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba h1:O8mE0/t419eoIwhTFpKVkHiTs/Igowgfkj25AcZrtiE=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI=
google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/natefinch/npipe.v2 v2.0.0-20160621034901-c1b8fa8bdcce h1:+JknDZhAj8YMt7GC73Ei8pv4MzjDUNPHgQWJdtMAaDU=
gopkg.in/natefinch/npipe.v2 v2.0.0-20160621034901-c1b8fa8bdcce/go.mod h1:5AcXVHNjg+BDxry382+8OKon8SEWiKktQR07RKPsv1c=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/urfave/cli.v1 v1.20.0 h1:NdAVW6RYxDif9DhDHaAortIu956m2c0v+09AZBPTbE0=
gopkg.in/urfave/cli.v1 v1.20.0/go.mod h1:vuBzUtMdQeixQj8LVd+/98pzhxNGQoyuPBlsXHOQNO0=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

View File

@@ -55,16 +55,19 @@ type Config struct {
Auth *Auth `json:"auth"`
}
// CircuitConfig circuit items.
type CircuitConfig struct {
// AssetConfig contains the assets configured for each fork; the default vk file name is "OpenVmVk.json".
type AssetConfig struct {
AssetsPath string `json:"assets_path"`
ForkName string `json:"fork_name"`
MinProverVersion string `json:"min_prover_version"`
Vkfile string `json:"vk_file,omitempty"`
MinProverVersion string `json:"min_prover_version,omitempty"`
}
// VerifierConfig load zk verifier config.
type VerifierConfig struct {
HighVersionCircuit *CircuitConfig `json:"high_version_circuit"`
MinProverVersion string `json:"min_prover_version"`
Features string `json:"features,omitempty"`
Verifiers []AssetConfig `json:"verifiers"`
}
// NewConfig returns a new instance of Config.
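For orientation (not part of the diff): a minimal, self-contained sketch of how the new verifier section could be unmarshalled, using the struct tags shown above; every concrete value (paths, fork names, versions) is made up for illustration.
package main

import (
	"encoding/json"
	"fmt"
)

// Local copies of the structs from the hunk above, limited to their json tags.
type AssetConfig struct {
	AssetsPath       string `json:"assets_path"`
	ForkName         string `json:"fork_name"`
	Vkfile           string `json:"vk_file,omitempty"`
	MinProverVersion string `json:"min_prover_version,omitempty"`
}

type VerifierConfig struct {
	MinProverVersion string        `json:"min_prover_version"`
	Features         string        `json:"features,omitempty"`
	Verifiers        []AssetConfig `json:"verifiers"`
}

func main() {
	// Illustrative values only; when vk_file is omitted the default name "OpenVmVk.json" applies.
	raw := `{
	  "min_prover_version": "v4.4.45",
	  "verifiers": [
	    {"assets_path": "assets/euclidV2", "fork_name": "euclidV2"},
	    {"assets_path": "assets/feynman", "fork_name": "feynman", "vk_file": "OpenVmVk.json"}
	  ]
	}`
	var cfg VerifierConfig
	if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("loaded %d verifier entries, minimum prover version %s\n", len(cfg.Verifiers), cfg.MinProverVersion)
}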

View File

@@ -20,11 +20,11 @@ func TestConfig(t *testing.T) {
"batch_collection_time_sec": 180,
"chunk_collection_time_sec": 180,
"verifier": {
"high_version_circuit": {
"min_prover_version": "v4.4.45",
"verifiers": [{
"assets_path": "assets",
"fork_name": "euclidV2",
"min_prover_version": "v4.4.45"
}
"fork_name": "feynman"
}]
},
"max_verifier_workers": 4
},

View File

@@ -1,12 +1,15 @@
package api
import (
"encoding/json"
"github.com/prometheus/client_golang/prometheus"
"github.com/scroll-tech/go-ethereum/log"
"github.com/scroll-tech/go-ethereum/params"
"gorm.io/gorm"
"scroll-tech/coordinator/internal/config"
"scroll-tech/coordinator/internal/logic/libzkp"
"scroll-tech/coordinator/internal/logic/verifier"
)
@@ -29,7 +32,7 @@ func InitController(cfg *config.Config, chainCfg *params.ChainConfig, db *gorm.D
log.Info("verifier created", "openVmVerifier", vf.OpenVMVkMap)
// TODO: enable this when the libzkp has been updated
/*l2cfg := cfg.L2.Endpoint
l2cfg := cfg.L2.Endpoint
if l2cfg == nil {
panic("l2geth is not specified")
}
@@ -37,9 +40,9 @@ func InitController(cfg *config.Config, chainCfg *params.ChainConfig, db *gorm.D
if err != nil {
panic(err)
}
libzkp.InitL2geth(string(l2cfgBytes))*/
libzkp.InitL2geth(string(l2cfgBytes))
Auth = NewAuthController(db, cfg, vf)
GetTask = NewGetTaskController(cfg, chainCfg, db, reg)
GetTask = NewGetTaskController(cfg, chainCfg, db, vf, reg)
SubmitProof = NewSubmitProofController(cfg, chainCfg, db, vf, reg)
}

View File

@@ -17,6 +17,7 @@ import (
"scroll-tech/coordinator/internal/config"
"scroll-tech/coordinator/internal/logic/provertask"
"scroll-tech/coordinator/internal/logic/verifier"
coordinatorType "scroll-tech/coordinator/internal/types"
)
@@ -25,13 +26,15 @@ type GetTaskController struct {
proverTasks map[message.ProofType]provertask.ProverTask
getTaskAccessCounter *prometheus.CounterVec
l2syncer *l2Syncer
}
// NewGetTaskController create a get prover task controller
func NewGetTaskController(cfg *config.Config, chainCfg *params.ChainConfig, db *gorm.DB, reg prometheus.Registerer) *GetTaskController {
chunkProverTask := provertask.NewChunkProverTask(cfg, chainCfg, db, reg)
batchProverTask := provertask.NewBatchProverTask(cfg, chainCfg, db, reg)
bundleProverTask := provertask.NewBundleProverTask(cfg, chainCfg, db, reg)
func NewGetTaskController(cfg *config.Config, chainCfg *params.ChainConfig, db *gorm.DB, verifier *verifier.Verifier, reg prometheus.Registerer) *GetTaskController {
chunkProverTask := provertask.NewChunkProverTask(cfg, chainCfg, db, verifier.ChunkVk, reg)
batchProverTask := provertask.NewBatchProverTask(cfg, chainCfg, db, verifier.BatchVk, reg)
bundleProverTask := provertask.NewBundleProverTask(cfg, chainCfg, db, verifier.BundleVk, reg)
ptc := &GetTaskController{
proverTasks: make(map[message.ProofType]provertask.ProverTask),
@@ -44,6 +47,13 @@ func NewGetTaskController(cfg *config.Config, chainCfg *params.ChainConfig, db *
ptc.proverTasks[message.ProofTypeChunk] = chunkProverTask
ptc.proverTasks[message.ProofTypeBatch] = batchProverTask
ptc.proverTasks[message.ProofTypeBundle] = bundleProverTask
if syncer, err := createL2Syncer(cfg); err != nil {
log.Crit("can not init l2 syncer", "err", err)
} else {
ptc.l2syncer = syncer
}
return ptc
}
@@ -78,6 +88,17 @@ func (ptc *GetTaskController) GetTasks(ctx *gin.Context) {
return
}
if getTaskParameter.ProverHeight == 0 {
// help update the prover height with internal l2geth
if blk, err := ptc.l2syncer.getLatestBlockNumber(ctx); err == nil {
getTaskParameter.ProverHeight = blk
} else {
nerr := fmt.Errorf("inner l2geth failure, err:%w", err)
types.RenderFailure(ctx, types.InternalServerError, nerr)
return
}
}
proofType := ptc.proofType(&getTaskParameter)
proverTask, isExist := ptc.proverTasks[proofType]
if !isExist {

View File

@@ -0,0 +1,71 @@
//go:build !mock_verifier
package api
import (
"errors"
"fmt"
"sync"
"time"
"github.com/gin-gonic/gin"
"github.com/scroll-tech/go-ethereum/ethclient"
"github.com/scroll-tech/go-ethereum/log"
"scroll-tech/coordinator/internal/config"
)
type l2Syncer struct {
l2gethClient *ethclient.Client
lastBlockNumber struct {
sync.RWMutex
data uint64
t time.Time
}
}
func createL2Syncer(cfg *config.Config) (*l2Syncer, error) {
if cfg.L2 == nil || cfg.L2.Endpoint == nil {
return nil, fmt.Errorf("l2 endpoint is not set in config")
} else {
l2gethClient, err := ethclient.Dial(cfg.L2.Endpoint.Url)
if err != nil {
return nil, fmt.Errorf("dial l2geth endpoint fail, err: %s", err)
}
return &l2Syncer{
l2gethClient: l2gethClient,
}, nil
}
}
// getLatestBlockNumber gets the latest block number, using cache if available and not expired
func (syncer *l2Syncer) getLatestBlockNumber(ctx *gin.Context) (uint64, error) {
// First check if we have a cached value that's still valid
syncer.lastBlockNumber.RLock()
if !syncer.lastBlockNumber.t.IsZero() && time.Since(syncer.lastBlockNumber.t) < time.Second*10 {
blockNumber := syncer.lastBlockNumber.data
syncer.lastBlockNumber.RUnlock()
return blockNumber, nil
}
syncer.lastBlockNumber.RUnlock()
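// Note: after RUnlock, more than one goroutine may reach the fetch below and refresh the cache concurrently; the duplicate RPC calls are harmless because the write under Lock is idempotent.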
// If not cached or expired, fetch from the client
if syncer.l2gethClient == nil {
return 0, errors.New("L2 geth client not initialized")
}
blockNumber, err := syncer.l2gethClient.BlockNumber(ctx)
if err != nil {
return 0, fmt.Errorf("failed to get latest block number: %w", err)
}
// Update the cache
syncer.lastBlockNumber.Lock()
syncer.lastBlockNumber.data = blockNumber
syncer.lastBlockNumber.t = time.Now()
syncer.lastBlockNumber.Unlock()
log.Debug("updated block height reference", "height", blockNumber)
return blockNumber, nil
}

View File

@@ -0,0 +1,20 @@
//go:build mock_verifier
package api
import (
"scroll-tech/coordinator/internal/config"
"github.com/gin-gonic/gin"
)
type l2Syncer struct{}
func createL2Syncer(_ *config.Config) (*l2Syncer, error) {
return &l2Syncer{}, nil
}
// getLatestBlockNumber returns a fixed block number under the mock_verifier build tag
func (syncer *l2Syncer) getLatestBlockNumber(_ *gin.Context) (uint64, error) {
return 99999994, nil
}

View File

@@ -24,16 +24,16 @@ type LoginLogic struct {
openVmVks map[string]struct{}
proverVersionHardForkMap map[string][]string
proverVersionHardForkMap map[string]string
}
// NewLoginLogic new a LoginLogic
func NewLoginLogic(db *gorm.DB, cfg *config.Config, vf *verifier.Verifier) *LoginLogic {
proverVersionHardForkMap := make(map[string][]string)
proverVersionHardForkMap := make(map[string]string)
var highHardForks []string
highHardForks = append(highHardForks, cfg.ProverManager.Verifier.HighVersionCircuit.ForkName)
proverVersionHardForkMap[cfg.ProverManager.Verifier.HighVersionCircuit.MinProverVersion] = highHardForks
for _, cfg := range cfg.ProverManager.Verifier.Verifiers {
proverVersionHardForkMap[cfg.ForkName] = cfg.MinProverVersion
}
return &LoginLogic{
cfg: cfg,
@@ -56,8 +56,8 @@ func (l *LoginLogic) Check(login *types.LoginParameter) error {
return errors.New("auth message verify failure")
}
if !version.CheckScrollRepoVersion(login.Message.ProverVersion, l.cfg.ProverManager.Verifier.HighVersionCircuit.MinProverVersion) {
return fmt.Errorf("incompatible prover version. please upgrade your prover, minimum allowed version: %s, actual version: %s", l.cfg.ProverManager.Verifier.HighVersionCircuit.MinProverVersion, login.Message.ProverVersion)
if !version.CheckScrollRepoVersion(login.Message.ProverVersion, l.cfg.ProverManager.Verifier.MinProverVersion) {
return fmt.Errorf("incompatible prover version. please upgrade your prover, minimum allowed version: %s, actual version: %s", l.cfg.ProverManager.Verifier.MinProverVersion, login.Message.ProverVersion)
}
vks := make(map[string]struct{})
@@ -99,9 +99,15 @@ func (l *LoginLogic) ProverHardForkName(login *types.LoginParameter) (string, er
}
proverVersion := proverVersionSplits[0]
if hardForkNames, ok := l.proverVersionHardForkMap[proverVersion]; ok {
return strings.Join(hardForkNames, ","), nil
var hardForkNames []string
for n, minVersion := range l.proverVersionHardForkMap {
if minVersion == "" || version.CheckScrollRepoVersion(proverVersion, minVersion) {
hardForkNames = append(hardForkNames, n)
}
}
if len(hardForkNames) == 0 {
return "", fmt.Errorf("invalid prover prover_version:%s", login.Message.ProverVersion)
}
return "", fmt.Errorf("invalid prover prover_version:%s", login.Message.ProverVersion)
return strings.Join(hardForkNames, ","), nil
}

View File

@@ -11,11 +11,16 @@ import "C" //nolint:typecheck
import (
"fmt"
"os"
"strings"
"unsafe"
"scroll-tech/common/types/message"
)
func init() {
C.init_tracing()
}
// Helper function to convert Go string to C string and handle cleanup
func goToCString(s string) *C.char {
return C.CString(s)
@@ -34,18 +39,10 @@ func InitVerifier(configJSON string) {
C.init_verifier(cConfig)
}
// Initialize the verifier
func InitL2geth(configJSON string) {
cConfig := goToCString(configJSON)
defer freeCString(cConfig)
C.init_l2geth(cConfig)
}
// Verify a chunk proof
func VerifyChunkProof(proofData, forkName string) bool {
cProof := goToCString(proofData)
cForkName := goToCString(forkName)
cForkName := goToCString(strings.ToLower(forkName))
defer freeCString(cProof)
defer freeCString(cForkName)
@@ -56,7 +53,7 @@ func VerifyChunkProof(proofData, forkName string) bool {
// Verify a batch proof
func VerifyBatchProof(proofData, forkName string) bool {
cProof := goToCString(proofData)
cForkName := goToCString(forkName)
cForkName := goToCString(strings.ToLower(forkName))
defer freeCString(cProof)
defer freeCString(cForkName)
@@ -67,7 +64,7 @@ func VerifyBatchProof(proofData, forkName string) bool {
// Verify a bundle proof
func VerifyBundleProof(proofData, forkName string) bool {
cProof := goToCString(proofData)
cForkName := goToCString(forkName)
cForkName := goToCString(strings.ToLower(forkName))
defer freeCString(cProof)
defer freeCString(cForkName)
@@ -96,8 +93,8 @@ func fromMessageTaskType(taskType int) int {
}
// Generate a universal task
func GenerateUniversalTask(taskType int, taskJSON, forkName string) (bool, string, string, []byte) {
return generateUniversalTask(fromMessageTaskType(taskType), taskJSON, forkName)
func GenerateUniversalTask(taskType int, taskJSON, forkName string, expectedVk []byte) (bool, string, string, []byte) {
return generateUniversalTask(fromMessageTaskType(taskType), taskJSON, strings.ToLower(forkName), expectedVk)
}
// Generate wrapped proof
@@ -127,7 +124,7 @@ func GenerateWrappedProof(proofJSON, metadata string, vkData []byte) string {
// Dumps a verification key to a file
func DumpVk(forkName, filePath string) error {
cForkName := goToCString(forkName)
cForkName := goToCString(strings.ToLower(forkName))
cFilePath := goToCString(filePath)
defer freeCString(cForkName)
defer freeCString(cFilePath)
@@ -143,3 +140,10 @@ func DumpVk(forkName, filePath string) error {
return nil
}
// Set dynamic feature flags that control libzkp runtime behavior
func SetDynamicFeature(feats string) {
cFeats := goToCString(feats)
defer freeCString(cFeats)
C.set_dynamic_feature(cFeats)
}
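A hypothetical caller-side sketch of the updated wrappers (not part of the diff): the package path is the coordinator-internal one, the feature string, fork name, task JSON, and vk bytes are made up, and the meaning of the three trailing return values (task data, metadata, expected proof hash) is taken from the header comment rather than verified here.
package main

import (
	"log"

	"scroll-tech/common/types/message"
	"scroll-tech/coordinator/internal/logic/libzkp"
)

func main() {
	// Runtime feature flags are forwarded to libzkp as a plain string (illustrative value).
	libzkp.SetDynamicFeature("some_feature")

	// Fork names are lower-cased inside the wrappers, so the capitalized form is fine here.
	// expectedVk would normally come from the verifier's per-fork vk map; an empty slice is
	// assumed to mean "no vk check".
	expectedVk := []byte{}
	ok, taskData, metadata, proofHash := libzkp.GenerateUniversalTask(
		int(message.ProofTypeChunk), `{}`, "Feynman", expectedVk)
	if !ok {
		log.Fatal("universal task generation failed")
	}
	_, _, _ = taskData, metadata, proofHash
}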

View File

@@ -8,6 +8,9 @@
#include <stddef.h> // For size_t
// Init log tracing
void init_tracing();
// Initialize the verifier with configuration
void init_verifier(char* config);
@@ -32,7 +35,13 @@ typedef struct {
// Generate a universal task based on task type and input JSON
// Returns a struct containing task data, metadata, and expected proof hash
HandlingResult gen_universal_task(int task_type, char* task, char* fork_name);
HandlingResult gen_universal_task(
int task_type,
char* task,
char* fork_name,
const unsigned char* expected_vk,
size_t expected_vk_len
);
// Release memory allocated for a HandlingResult returned by gen_universal_task
void release_task_result(HandlingResult result);
@@ -45,4 +54,7 @@ char* gen_wrapped_proof(char* proof_json, char* metadata, char* vk, size_t vk_le
// Release memory allocated for a string returned by gen_wrapped_proof
void release_string(char* string_ptr);
void set_dynamic_feature(const char* feats);
#endif /* LIBZKP_H */

View File

@@ -11,7 +11,10 @@ import (
"github.com/scroll-tech/go-ethereum/common"
)
func generateUniversalTask(taskType int, taskJSON, forkName string) (bool, string, string, []byte) {
func InitL2geth(configJSON string) {
}
func generateUniversalTask(taskType int, taskJSON, forkName string, expectedVk []byte) (bool, string, string, []byte) {
fmt.Printf("call mocked generate universal task %d, taskJson %s\n", taskType, taskJSON)
var metadata interface{}

View File

@@ -7,14 +7,29 @@ package libzkp
#include "libzkp.h"
*/
import "C" //nolint:typecheck
import "unsafe"
func generateUniversalTask(taskType int, taskJSON, forkName string) (bool, string, string, []byte) {
// InitL2geth initializes the l2geth handler used when generating universal tasks
func InitL2geth(configJSON string) {
cConfig := goToCString(configJSON)
defer freeCString(cConfig)
C.init_l2geth(cConfig)
}
func generateUniversalTask(taskType int, taskJSON, forkName string, expectedVk []byte) (bool, string, string, []byte) {
cTask := goToCString(taskJSON)
cForkName := goToCString(forkName)
defer freeCString(cTask)
defer freeCString(cForkName)
result := C.gen_universal_task(C.int(taskType), cTask, cForkName)
// Pass a pointer to the Go slice's backing array to C; the memory stays valid for the duration of the cgo call
var cVk *C.uchar
if len(expectedVk) > 0 {
cVk = (*C.uchar)(unsafe.Pointer(&expectedVk[0]))
}
result := C.gen_universal_task(C.int(taskType), cTask, cForkName, cVk, C.size_t(len(expectedVk)))
defer C.release_task_result(result)
// Check if the operation was successful

View File

@@ -36,12 +36,13 @@ type BatchProverTask struct {
}
// NewBatchProverTask new a batch collector
func NewBatchProverTask(cfg *config.Config, chainCfg *params.ChainConfig, db *gorm.DB, reg prometheus.Registerer) *BatchProverTask {
func NewBatchProverTask(cfg *config.Config, chainCfg *params.ChainConfig, db *gorm.DB, expectedVk map[string][]byte, reg prometheus.Registerer) *BatchProverTask {
bp := &BatchProverTask{
BaseProverTask: BaseProverTask{
db: db,
cfg: cfg,
chainCfg: chainCfg,
expectedVk: expectedVk,
blockOrm: orm.NewL2Block(db),
chunkOrm: orm.NewChunk(db),
batchOrm: orm.NewBatch(db),
@@ -83,10 +84,37 @@ func (bp *BatchProverTask) Assign(ctx *gin.Context, getTaskParameter *coordinato
for i := 0; i < 5; i++ {
var getTaskError error
var tmpBatchTask *orm.Batch
tmpBatchTask, getTaskError = bp.batchOrm.GetAssignedBatch(ctx.Copy(), maxActiveAttempts, maxTotalAttempts)
if getTaskError != nil {
log.Error("failed to get assigned batch proving tasks", "height", getTaskParameter.ProverHeight, "err", getTaskError)
return nil, ErrCoordinatorInternalFailure
if taskCtx.hasAssignedTask != nil {
if taskCtx.hasAssignedTask.TaskType != int16(message.ProofTypeBatch) {
return nil, fmt.Errorf("prover with publicKey %s is already assigned a task. ProverName: %s, ProverVersion: %s", taskCtx.PublicKey, taskCtx.ProverName, taskCtx.ProverVersion)
}
tmpBatchTask, getTaskError = bp.batchOrm.GetBatchByHash(ctx.Copy(), taskCtx.hasAssignedTask.TaskID)
if getTaskError != nil {
log.Error("failed to get batch has assigned to prover", "taskID", taskCtx.hasAssignedTask.TaskID, "err", getTaskError)
return nil, ErrCoordinatorInternalFailure
} else if tmpBatchTask == nil {
// if the assigned batch has been dropped, reassigning another one would cause too many issues
return nil, fmt.Errorf("prover with publicKey %s is already assigned a dropped batch. ProverName: %s, ProverVersion: %s",
taskCtx.PublicKey, taskCtx.ProverName, taskCtx.ProverVersion)
}
} else if getTaskParameter.TaskID != "" {
tmpBatchTask, getTaskError = bp.batchOrm.GetBatchByHash(ctx.Copy(), getTaskParameter.TaskID)
if getTaskError != nil {
log.Error("failed to get expected batch", "taskID", getTaskParameter.TaskID, "err", getTaskError)
return nil, ErrCoordinatorInternalFailure
} else if tmpBatchTask == nil {
return nil, fmt.Errorf("Expected task (%s) is already dropped", getTaskParameter.TaskID)
}
}
if tmpBatchTask == nil {
tmpBatchTask, getTaskError = bp.batchOrm.GetAssignedBatch(ctx.Copy(), maxActiveAttempts, maxTotalAttempts)
if getTaskError != nil {
log.Error("failed to get assigned batch proving tasks", "height", getTaskParameter.ProverHeight, "err", getTaskError)
return nil, ErrCoordinatorInternalFailure
}
}
// Why here need get again? In order to support a task can assign to multiple prover, need also assign `ProvingTaskAssigned`
@@ -114,29 +142,32 @@ func (bp *BatchProverTask) Assign(ctx *gin.Context, getTaskParameter *coordinato
return nil, nil
}
// Don't dispatch the same failing job to the same prover
proverTasks, getFailedTaskError := bp.proverTaskOrm.GetFailedProverTasksByHash(ctx.Copy(), message.ProofTypeBatch, tmpBatchTask.Hash, 2)
if getFailedTaskError != nil {
log.Error("failed to get prover tasks", "proof type", message.ProofTypeBatch.String(), "task ID", tmpBatchTask.Hash, "error", getFailedTaskError)
return nil, ErrCoordinatorInternalFailure
}
for i := 0; i < len(proverTasks); i++ {
if proverTasks[i].ProverPublicKey == taskCtx.PublicKey ||
taskCtx.ProverProviderType == uint8(coordinatorType.ProverProviderTypeExternal) && cutils.IsExternalProverNameMatch(proverTasks[i].ProverName, taskCtx.ProverName) {
log.Debug("get empty batch, the prover already failed this task", "height", getTaskParameter.ProverHeight, "task ID", tmpBatchTask.Hash, "prover name", taskCtx.ProverName, "prover public key", taskCtx.PublicKey)
return nil, nil
// we simply pick the batch that has already been assigned, so don't bother updating attempts or checking for previous failures
if taskCtx.hasAssignedTask == nil {
// Don't dispatch the same failing job to the same prover
proverTasks, getFailedTaskError := bp.proverTaskOrm.GetFailedProverTasksByHash(ctx.Copy(), message.ProofTypeBatch, tmpBatchTask.Hash, 2)
if getFailedTaskError != nil {
log.Error("failed to get prover tasks", "proof type", message.ProofTypeBatch.String(), "task ID", tmpBatchTask.Hash, "error", getFailedTaskError)
return nil, ErrCoordinatorInternalFailure
}
for i := 0; i < len(proverTasks); i++ {
if proverTasks[i].ProverPublicKey == taskCtx.PublicKey ||
taskCtx.ProverProviderType == uint8(coordinatorType.ProverProviderTypeExternal) && cutils.IsExternalProverNameMatch(proverTasks[i].ProverName, taskCtx.ProverName) {
log.Debug("get empty batch, the prover already failed this task", "height", getTaskParameter.ProverHeight, "task ID", tmpBatchTask.Hash, "prover name", taskCtx.ProverName, "prover public key", taskCtx.PublicKey)
return nil, nil
}
}
}
rowsAffected, updateAttemptsErr := bp.batchOrm.UpdateBatchAttempts(ctx.Copy(), tmpBatchTask.Index, tmpBatchTask.ActiveAttempts, tmpBatchTask.TotalAttempts)
if updateAttemptsErr != nil {
log.Error("failed to update batch attempts", "height", getTaskParameter.ProverHeight, "err", updateAttemptsErr)
return nil, ErrCoordinatorInternalFailure
}
rowsAffected, updateAttemptsErr := bp.batchOrm.UpdateBatchAttempts(ctx.Copy(), tmpBatchTask.Index, tmpBatchTask.ActiveAttempts, tmpBatchTask.TotalAttempts)
if updateAttemptsErr != nil {
log.Error("failed to update batch attempts", "height", getTaskParameter.ProverHeight, "err", updateAttemptsErr)
return nil, ErrCoordinatorInternalFailure
}
if rowsAffected == 0 {
time.Sleep(100 * time.Millisecond)
continue
if rowsAffected == 0 {
time.Sleep(100 * time.Millisecond)
continue
}
}
batchTask = tmpBatchTask
@@ -149,19 +180,24 @@ func (bp *BatchProverTask) Assign(ctx *gin.Context, getTaskParameter *coordinato
}
log.Info("start batch proof generation session", "task_id", batchTask.Hash, "public key", taskCtx.PublicKey, "prover name", taskCtx.ProverName)
proverTask := orm.ProverTask{
TaskID: batchTask.Hash,
ProverPublicKey: taskCtx.PublicKey,
TaskType: int16(message.ProofTypeBatch),
ProverName: taskCtx.ProverName,
ProverVersion: taskCtx.ProverVersion,
ProvingStatus: int16(types.ProverAssigned),
FailureType: int16(types.ProverTaskFailureTypeUndefined),
// here why need use UTC time. see scroll/common/database/db.go
AssignedAt: utils.NowUTC(),
var proverTask *orm.ProverTask
if taskCtx.hasAssignedTask == nil {
proverTask = &orm.ProverTask{
TaskID: batchTask.Hash,
ProverPublicKey: taskCtx.PublicKey,
TaskType: int16(message.ProofTypeBatch),
ProverName: taskCtx.ProverName,
ProverVersion: taskCtx.ProverVersion,
ProvingStatus: int16(types.ProverAssigned),
FailureType: int16(types.ProverTaskFailureTypeUndefined),
// why UTC time is needed here: see scroll/common/database/db.go
AssignedAt: utils.NowUTC(),
}
} else {
proverTask = taskCtx.hasAssignedTask
}
taskMsg, err := bp.formatProverTask(ctx.Copy(), &proverTask, batchTask, hardForkName)
taskMsg, err := bp.formatProverTask(ctx.Copy(), proverTask, batchTask, hardForkName)
if err != nil {
bp.recoverActiveAttempts(ctx, batchTask)
log.Error("format prover task failure", "task_id", batchTask.Hash, "err", err)
@@ -169,20 +205,23 @@ func (bp *BatchProverTask) Assign(ctx *gin.Context, getTaskParameter *coordinato
}
if getTaskParameter.Universal {
var metadata []byte
taskMsg, metadata, err = bp.applyUniversal(taskMsg)
if err != nil {
bp.recoverActiveAttempts(ctx, batchTask)
log.Error("Generate universal prover task failure", "task_id", batchTask.Hash, "type", "batch")
log.Error("Generate universal prover task failure", "task_id", batchTask.Hash, "type", "batch", "err", err)
return nil, ErrCoordinatorInternalFailure
}
proverTask.Metadata = metadata
}
// Store session info.
if err = bp.proverTaskOrm.InsertProverTask(ctx.Copy(), &proverTask); err != nil {
bp.recoverActiveAttempts(ctx, batchTask)
log.Error("insert batch prover task info fail", "task_id", batchTask.Hash, "publicKey", taskCtx.PublicKey, "err", err)
return nil, ErrCoordinatorInternalFailure
if taskCtx.hasAssignedTask == nil {
if err = bp.proverTaskOrm.InsertProverTask(ctx.Copy(), proverTask); err != nil {
bp.recoverActiveAttempts(ctx, batchTask)
log.Error("insert batch prover task info fail", "task_id", batchTask.Hash, "publicKey", taskCtx.PublicKey, "err", err)
return nil, ErrCoordinatorInternalFailure
}
}
// notice uuid is set as a side effect of InsertProverTask
taskMsg.UUID = proverTask.UUID.String()
@@ -266,13 +305,7 @@ func (bp *BatchProverTask) getBatchTaskDetail(dbBatch *orm.Batch, chunkInfos []*
taskDetail := &message.BatchTaskDetail{
ChunkInfos: chunkInfos,
ChunkProofs: chunkProofs,
}
if hardForkName == message.EuclidV2Fork {
taskDetail.ForkName = message.EuclidV2ForkNameForProver
} else {
log.Error("unsupported hard fork name", "hard_fork_name", hardForkName)
return nil, fmt.Errorf("unsupported hard fork name: %s", hardForkName)
ForkName: hardForkName,
}
dbBatchCodecVersion := encoding.CodecVersion(dbBatch.CodecVersion)

View File

@@ -33,12 +33,13 @@ type BundleProverTask struct {
}
// NewBundleProverTask new a bundle collector
func NewBundleProverTask(cfg *config.Config, chainCfg *params.ChainConfig, db *gorm.DB, reg prometheus.Registerer) *BundleProverTask {
func NewBundleProverTask(cfg *config.Config, chainCfg *params.ChainConfig, db *gorm.DB, expectedVk map[string][]byte, reg prometheus.Registerer) *BundleProverTask {
bp := &BundleProverTask{
BaseProverTask: BaseProverTask{
db: db,
chainCfg: chainCfg,
cfg: cfg,
expectedVk: expectedVk,
blockOrm: orm.NewL2Block(db),
chunkOrm: orm.NewChunk(db),
batchOrm: orm.NewBatch(db),
@@ -81,10 +82,37 @@ func (bp *BundleProverTask) Assign(ctx *gin.Context, getTaskParameter *coordinat
for i := 0; i < 5; i++ {
var getTaskError error
var tmpBundleTask *orm.Bundle
tmpBundleTask, getTaskError = bp.bundleOrm.GetAssignedBundle(ctx.Copy(), maxActiveAttempts, maxTotalAttempts)
if getTaskError != nil {
log.Error("failed to get assigned bundle proving tasks", "height", getTaskParameter.ProverHeight, "err", getTaskError)
return nil, ErrCoordinatorInternalFailure
if taskCtx.hasAssignedTask != nil {
if taskCtx.hasAssignedTask.TaskType != int16(message.ProofTypeBundle) {
return nil, fmt.Errorf("prover with publicKey %s is already assigned a task. ProverName: %s, ProverVersion: %s", taskCtx.PublicKey, taskCtx.ProverName, taskCtx.ProverVersion)
}
tmpBundleTask, getTaskError = bp.bundleOrm.GetBundleByHash(ctx.Copy(), taskCtx.hasAssignedTask.TaskID)
if getTaskError != nil {
log.Error("failed to get bundle has assigned to prover", "taskID", taskCtx.hasAssignedTask.TaskID, "err", getTaskError)
return nil, ErrCoordinatorInternalFailure
} else if tmpBundleTask == nil {
// if the assigned bundle has been dropped, reassigning another one would cause too many issues
return nil, fmt.Errorf("prover with publicKey %s is already assigned a dropped bundle. ProverName: %s, ProverVersion: %s",
taskCtx.PublicKey, taskCtx.ProverName, taskCtx.ProverVersion)
}
} else if getTaskParameter.TaskID != "" {
tmpBundleTask, getTaskError = bp.bundleOrm.GetBundleByHash(ctx.Copy(), getTaskParameter.TaskID)
if getTaskError != nil {
log.Error("failed to get expected bundle", "taskID", getTaskParameter.TaskID, "err", getTaskError)
return nil, ErrCoordinatorInternalFailure
} else if tmpBundleTask == nil {
return nil, fmt.Errorf("Expected task (%s) is already dropped", getTaskParameter.TaskID)
}
}
if tmpBundleTask == nil {
tmpBundleTask, getTaskError = bp.bundleOrm.GetAssignedBundle(ctx.Copy(), maxActiveAttempts, maxTotalAttempts)
if getTaskError != nil {
log.Error("failed to get assigned bundle proving tasks", "height", getTaskParameter.ProverHeight, "err", getTaskError)
return nil, ErrCoordinatorInternalFailure
}
}
// Why here need get again? In order to support a task can assign to multiple prover, need also assign `ProvingTaskAssigned`
@@ -112,31 +140,33 @@ func (bp *BundleProverTask) Assign(ctx *gin.Context, getTaskParameter *coordinat
return nil, nil
}
// Don't dispatch the same failing job to the same prover
proverTasks, getTaskError := bp.proverTaskOrm.GetFailedProverTasksByHash(ctx.Copy(), message.ProofTypeBundle, tmpBundleTask.Hash, 2)
if getTaskError != nil {
log.Error("failed to get prover tasks", "proof type", message.ProofTypeBundle.String(), "task ID", tmpBundleTask.Hash, "error", getTaskError)
return nil, ErrCoordinatorInternalFailure
}
for i := 0; i < len(proverTasks); i++ {
if proverTasks[i].ProverPublicKey == taskCtx.PublicKey ||
taskCtx.ProverProviderType == uint8(coordinatorType.ProverProviderTypeExternal) && cutils.IsExternalProverNameMatch(proverTasks[i].ProverName, taskCtx.ProverName) {
log.Debug("get empty bundle, the prover already failed this task", "height", getTaskParameter.ProverHeight, "task ID", tmpBundleTask.Hash, "prover name", taskCtx.ProverName, "prover public key", taskCtx.PublicKey)
return nil, nil
// we simply pick the bundle that has already been assigned, so don't bother updating attempts or checking for previous failures
if taskCtx.hasAssignedTask == nil {
// Don't dispatch the same failing job to the same prover
proverTasks, getTaskError := bp.proverTaskOrm.GetFailedProverTasksByHash(ctx.Copy(), message.ProofTypeBundle, tmpBundleTask.Hash, 2)
if getTaskError != nil {
log.Error("failed to get prover tasks", "proof type", message.ProofTypeBundle.String(), "task ID", tmpBundleTask.Hash, "error", getTaskError)
return nil, ErrCoordinatorInternalFailure
}
for i := 0; i < len(proverTasks); i++ {
if proverTasks[i].ProverPublicKey == taskCtx.PublicKey ||
taskCtx.ProverProviderType == uint8(coordinatorType.ProverProviderTypeExternal) && cutils.IsExternalProverNameMatch(proverTasks[i].ProverName, taskCtx.ProverName) {
log.Debug("get empty bundle, the prover already failed this task", "height", getTaskParameter.ProverHeight, "task ID", tmpBundleTask.Hash, "prover name", taskCtx.ProverName, "prover public key", taskCtx.PublicKey)
return nil, nil
}
}
rowsAffected, updateAttemptsErr := bp.bundleOrm.UpdateBundleAttempts(ctx.Copy(), tmpBundleTask.Hash, tmpBundleTask.ActiveAttempts, tmpBundleTask.TotalAttempts)
if updateAttemptsErr != nil {
log.Error("failed to update bundle attempts", "height", getTaskParameter.ProverHeight, "err", updateAttemptsErr)
return nil, ErrCoordinatorInternalFailure
}
if rowsAffected == 0 {
time.Sleep(100 * time.Millisecond)
continue
}
}
rowsAffected, updateAttemptsErr := bp.bundleOrm.UpdateBundleAttempts(ctx.Copy(), tmpBundleTask.Hash, tmpBundleTask.ActiveAttempts, tmpBundleTask.TotalAttempts)
if updateAttemptsErr != nil {
log.Error("failed to update bundle attempts", "height", getTaskParameter.ProverHeight, "err", updateAttemptsErr)
return nil, ErrCoordinatorInternalFailure
}
if rowsAffected == 0 {
time.Sleep(100 * time.Millisecond)
continue
}
bundleTask = tmpBundleTask
break
}
@@ -147,19 +177,24 @@ func (bp *BundleProverTask) Assign(ctx *gin.Context, getTaskParameter *coordinat
}
log.Info("start bundle proof generation session", "task index", bundleTask.Index, "public key", taskCtx.PublicKey, "prover name", taskCtx.ProverName)
proverTask := orm.ProverTask{
TaskID: bundleTask.Hash,
ProverPublicKey: taskCtx.PublicKey,
TaskType: int16(message.ProofTypeBundle),
ProverName: taskCtx.ProverName,
ProverVersion: taskCtx.ProverVersion,
ProvingStatus: int16(types.ProverAssigned),
FailureType: int16(types.ProverTaskFailureTypeUndefined),
// here why need use UTC time. see scroll/common/database/db.go
AssignedAt: utils.NowUTC(),
var proverTask *orm.ProverTask
if taskCtx.hasAssignedTask == nil {
proverTask = &orm.ProverTask{
TaskID: bundleTask.Hash,
ProverPublicKey: taskCtx.PublicKey,
TaskType: int16(message.ProofTypeBundle),
ProverName: taskCtx.ProverName,
ProverVersion: taskCtx.ProverVersion,
ProvingStatus: int16(types.ProverAssigned),
FailureType: int16(types.ProverTaskFailureTypeUndefined),
// why UTC time is needed here: see scroll/common/database/db.go
AssignedAt: utils.NowUTC(),
}
} else {
proverTask = taskCtx.hasAssignedTask
}
taskMsg, err := bp.formatProverTask(ctx.Copy(), &proverTask, hardForkName)
taskMsg, err := bp.formatProverTask(ctx.Copy(), proverTask, hardForkName)
if err != nil {
bp.recoverActiveAttempts(ctx, bundleTask)
log.Error("format bundle prover task failure", "task_id", bundleTask.Hash, "err", err)
@@ -170,7 +205,7 @@ func (bp *BundleProverTask) Assign(ctx *gin.Context, getTaskParameter *coordinat
taskMsg, metadata, err = bp.applyUniversal(taskMsg)
if err != nil {
bp.recoverActiveAttempts(ctx, bundleTask)
log.Error("Generate universal prover task failure", "task_id", bundleTask.Hash, "type", "bundle")
log.Error("Generate universal prover task failure", "task_id", bundleTask.Hash, "type", "bundle", "err", err)
return nil, ErrCoordinatorInternalFailure
}
// bundle proof require snark
@@ -179,10 +214,12 @@ func (bp *BundleProverTask) Assign(ctx *gin.Context, getTaskParameter *coordinat
}
// Store session info.
if err = bp.proverTaskOrm.InsertProverTask(ctx.Copy(), &proverTask); err != nil {
bp.recoverActiveAttempts(ctx, bundleTask)
log.Error("insert bundle prover task info fail", "task_id", bundleTask.Hash, "publicKey", taskCtx.PublicKey, "err", err)
return nil, ErrCoordinatorInternalFailure
if taskCtx.hasAssignedTask == nil {
if err = bp.proverTaskOrm.InsertProverTask(ctx.Copy(), proverTask); err != nil {
bp.recoverActiveAttempts(ctx, bundleTask)
log.Error("insert bundle prover task info fail", "task_id", bundleTask.Hash, "publicKey", taskCtx.PublicKey, "err", err)
return nil, ErrCoordinatorInternalFailure
}
}
// notice uuid is set as a side effect of InsertProverTask
taskMsg.UUID = proverTask.UUID.String()
@@ -209,9 +246,14 @@ func (bp *BundleProverTask) formatProverTask(ctx context.Context, task *orm.Prov
return nil, fmt.Errorf("failed to get batch proofs for bundle task id:%s, no batch found", task.TaskID)
}
parentBatch, err := bp.batchOrm.GetBatchByHash(ctx, batches[0].ParentBatchHash)
if err != nil {
return nil, fmt.Errorf("failed to get parent batch for batch task id:%s err:%w", task.TaskID, err)
var prevStateRoot common.Hash
// this is common in test cases: the first batch has an empty parent
if batches[0].Index > 1 {
parentBatch, err := bp.batchOrm.GetBatchByHash(ctx, batches[0].ParentBatchHash)
if err != nil {
return nil, fmt.Errorf("failed to get parent batch for batch task id:%s err:%w", task.TaskID, err)
}
prevStateRoot = common.HexToHash(parentBatch.StateRoot)
}
var batchProofs []*message.OpenVMBatchProof
@@ -225,18 +267,12 @@ func (bp *BundleProverTask) formatProverTask(ctx context.Context, task *orm.Prov
taskDetail := message.BundleTaskDetail{
BatchProofs: batchProofs,
}
if hardForkName == message.EuclidV2Fork {
taskDetail.ForkName = message.EuclidV2ForkNameForProver
} else {
log.Error("unsupported hard fork name", "hard_fork_name", hardForkName)
return nil, fmt.Errorf("unsupported hard fork name: %s", hardForkName)
ForkName: hardForkName,
}
taskDetail.BundleInfo = &message.OpenVMBundleInfo{
ChainID: bp.cfg.L2.ChainID,
PrevStateRoot: common.HexToHash(parentBatch.StateRoot),
PrevStateRoot: prevStateRoot,
PostStateRoot: common.HexToHash(batches[len(batches)-1].StateRoot),
WithdrawRoot: common.HexToHash(batches[len(batches)-1].WithdrawRoot),
NumBatches: uint32(len(batches)),

View File

@@ -33,12 +33,13 @@ type ChunkProverTask struct {
}
// NewChunkProverTask new a chunk prover task
func NewChunkProverTask(cfg *config.Config, chainCfg *params.ChainConfig, db *gorm.DB, reg prometheus.Registerer) *ChunkProverTask {
func NewChunkProverTask(cfg *config.Config, chainCfg *params.ChainConfig, db *gorm.DB, expectedVk map[string][]byte, reg prometheus.Registerer) *ChunkProverTask {
cp := &ChunkProverTask{
BaseProverTask: BaseProverTask{
db: db,
cfg: cfg,
chainCfg: chainCfg,
expectedVk: expectedVk,
chunkOrm: orm.NewChunk(db),
blockOrm: orm.NewL2Block(db),
proverTaskOrm: orm.NewProverTask(db),
@@ -79,12 +80,39 @@ func (cp *ChunkProverTask) Assign(ctx *gin.Context, getTaskParameter *coordinato
for i := 0; i < 5; i++ {
var getTaskError error
var tmpChunkTask *orm.Chunk
tmpChunkTask, getTaskError = cp.chunkOrm.GetAssignedChunk(ctx.Copy(), maxActiveAttempts, maxTotalAttempts, getTaskParameter.ProverHeight)
if getTaskError != nil {
log.Error("failed to get assigned chunk proving tasks", "height", getTaskParameter.ProverHeight, "err", getTaskError)
return nil, ErrCoordinatorInternalFailure
if taskCtx.hasAssignedTask != nil {
if taskCtx.hasAssignedTask.TaskType != int16(message.ProofTypeChunk) {
return nil, fmt.Errorf("prover with publicKey %s is already assigned a task. ProverName: %s, ProverVersion: %s", taskCtx.PublicKey, taskCtx.ProverName, taskCtx.ProverVersion)
}
log.Debug("retrieved assigned task chunk", "taskID", taskCtx.hasAssignedTask.TaskID, "prover", taskCtx.ProverName)
tmpChunkTask, getTaskError = cp.chunkOrm.GetChunkByHash(ctx.Copy(), taskCtx.hasAssignedTask.TaskID)
if getTaskError != nil {
log.Error("failed to get chunk has assigned to prover", "taskID", taskCtx.hasAssignedTask.TaskID, "err", getTaskError)
return nil, ErrCoordinatorInternalFailure
} else if tmpChunkTask == nil {
// if the assigned chunk has been dropped, it is too troublesome to assign another one here
return nil, fmt.Errorf("prover with publicKey %s is already assigned a dropped chunk. ProverName: %s, ProverVersion: %s",
taskCtx.PublicKey, taskCtx.ProverName, taskCtx.ProverVersion)
}
} else if getTaskParameter.TaskID != "" {
tmpChunkTask, getTaskError = cp.chunkOrm.GetChunkByHash(ctx.Copy(), getTaskParameter.TaskID)
if getTaskError != nil {
log.Error("failed to get expected chunk", "taskID", getTaskParameter.TaskID, "err", getTaskError)
return nil, ErrCoordinatorInternalFailure
} else if tmpChunkTask == nil {
return nil, fmt.Errorf("Expected task (%s) is already dropped", getTaskParameter.TaskID)
}
}
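// Selection priority: (1) a task this prover already holds, (2) the specific task
// requested via getTaskParameter.TaskID, (3) fall through to the normal assignment below.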
if tmpChunkTask == nil {
tmpChunkTask, getTaskError = cp.chunkOrm.GetAssignedChunk(ctx.Copy(), maxActiveAttempts, maxTotalAttempts, getTaskParameter.ProverHeight)
if getTaskError != nil {
log.Error("failed to get assigned chunk proving tasks", "height", getTaskParameter.ProverHeight, "err", getTaskError)
return nil, ErrCoordinatorInternalFailure
}
}
// Why query again here? To support assigning a task to multiple provers, chunks in `ProvingTaskAssigned`
// status must also be assignable. But a `proving_status in (1, 2)` predicate would not use the postgres index, so the SQL is split into two queries.
if tmpChunkTask == nil {
@@ -110,31 +138,33 @@ func (cp *ChunkProverTask) Assign(ctx *gin.Context, getTaskParameter *coordinato
return nil, nil
}
// Don't dispatch the same failing job to the same prover
proverTasks, getFailedTaskError := cp.proverTaskOrm.GetFailedProverTasksByHash(ctx.Copy(), message.ProofTypeChunk, tmpChunkTask.Hash, 2)
if getFailedTaskError != nil {
log.Error("failed to get prover tasks", "proof type", message.ProofTypeChunk.String(), "task ID", tmpChunkTask.Hash, "error", getFailedTaskError)
return nil, ErrCoordinatorInternalFailure
}
for i := 0; i < len(proverTasks); i++ {
if proverTasks[i].ProverPublicKey == taskCtx.PublicKey ||
taskCtx.ProverProviderType == uint8(coordinatorType.ProverProviderTypeExternal) && cutils.IsExternalProverNameMatch(proverTasks[i].ProverName, taskCtx.ProverName) {
log.Debug("get empty chunk, the prover already failed this task", "height", getTaskParameter.ProverHeight, "task ID", tmpChunkTask.Hash, "prover name", taskCtx.ProverName, "prover public key", taskCtx.PublicKey)
return nil, nil
// we simply re-deliver the chunk that has already been assigned, so there is no need to update attempts or re-check earlier failures
if taskCtx.hasAssignedTask == nil {
// Don't dispatch the same failing job to the same prover
proverTasks, getFailedTaskError := cp.proverTaskOrm.GetFailedProverTasksByHash(ctx.Copy(), message.ProofTypeChunk, tmpChunkTask.Hash, 2)
if getFailedTaskError != nil {
log.Error("failed to get prover tasks", "proof type", message.ProofTypeChunk.String(), "task ID", tmpChunkTask.Hash, "error", getFailedTaskError)
return nil, ErrCoordinatorInternalFailure
}
for i := 0; i < len(proverTasks); i++ {
if proverTasks[i].ProverPublicKey == taskCtx.PublicKey ||
taskCtx.ProverProviderType == uint8(coordinatorType.ProverProviderTypeExternal) && cutils.IsExternalProverNameMatch(proverTasks[i].ProverName, taskCtx.ProverName) {
log.Debug("get empty chunk, the prover already failed this task", "height", getTaskParameter.ProverHeight, "task ID", tmpChunkTask.Hash, "prover name", taskCtx.ProverName, "prover public key", taskCtx.PublicKey)
return nil, nil
}
}
rowsAffected, updateAttemptsErr := cp.chunkOrm.UpdateChunkAttempts(ctx.Copy(), tmpChunkTask.Index, tmpChunkTask.ActiveAttempts, tmpChunkTask.TotalAttempts)
if updateAttemptsErr != nil {
log.Error("failed to update chunk attempts", "height", getTaskParameter.ProverHeight, "err", updateAttemptsErr)
return nil, ErrCoordinatorInternalFailure
}
if rowsAffected == 0 {
time.Sleep(100 * time.Millisecond)
continue
}
}
rowsAffected, updateAttemptsErr := cp.chunkOrm.UpdateChunkAttempts(ctx.Copy(), tmpChunkTask.Index, tmpChunkTask.ActiveAttempts, tmpChunkTask.TotalAttempts)
if updateAttemptsErr != nil {
log.Error("failed to update chunk attempts", "height", getTaskParameter.ProverHeight, "err", updateAttemptsErr)
return nil, ErrCoordinatorInternalFailure
}
if rowsAffected == 0 {
time.Sleep(100 * time.Millisecond)
continue
}
chunkTask = tmpChunkTask
break
}
@@ -145,19 +175,24 @@ func (cp *ChunkProverTask) Assign(ctx *gin.Context, getTaskParameter *coordinato
}
log.Info("start chunk generation session", "task_id", chunkTask.Hash, "public key", taskCtx.PublicKey, "prover name", taskCtx.ProverName)
proverTask := orm.ProverTask{
TaskID: chunkTask.Hash,
ProverPublicKey: taskCtx.PublicKey,
TaskType: int16(message.ProofTypeChunk),
ProverName: taskCtx.ProverName,
ProverVersion: taskCtx.ProverVersion,
ProvingStatus: int16(types.ProverAssigned),
FailureType: int16(types.ProverTaskFailureTypeUndefined),
// why UTC time is needed here: see scroll/common/database/db.go
AssignedAt: utils.NowUTC(),
var proverTask *orm.ProverTask
if taskCtx.hasAssignedTask == nil {
proverTask = &orm.ProverTask{
TaskID: chunkTask.Hash,
ProverPublicKey: taskCtx.PublicKey,
TaskType: int16(message.ProofTypeChunk),
ProverName: taskCtx.ProverName,
ProverVersion: taskCtx.ProverVersion,
ProvingStatus: int16(types.ProverAssigned),
FailureType: int16(types.ProverTaskFailureTypeUndefined),
// why UTC time is needed here: see scroll/common/database/db.go
AssignedAt: utils.NowUTC(),
}
} else {
proverTask = taskCtx.hasAssignedTask
}
taskMsg, err := cp.formatProverTask(ctx.Copy(), &proverTask, chunkTask, hardForkName)
taskMsg, err := cp.formatProverTask(ctx.Copy(), proverTask, chunkTask, hardForkName)
if err != nil {
cp.recoverActiveAttempts(ctx, chunkTask)
log.Error("format prover task failure", "task_id", chunkTask.Hash, "err", err)
@@ -169,16 +204,18 @@ func (cp *ChunkProverTask) Assign(ctx *gin.Context, getTaskParameter *coordinato
taskMsg, metadata, err = cp.applyUniversal(taskMsg)
if err != nil {
cp.recoverActiveAttempts(ctx, chunkTask)
log.Error("Generate universal prover task failure", "task_id", chunkTask.Hash, "type", "chunk")
log.Error("Generate universal prover task failure", "task_id", chunkTask.Hash, "type", "chunk", "err", err)
return nil, ErrCoordinatorInternalFailure
}
proverTask.Metadata = metadata
}
if err = cp.proverTaskOrm.InsertProverTask(ctx.Copy(), &proverTask); err != nil {
cp.recoverActiveAttempts(ctx, chunkTask)
log.Error("insert chunk prover task fail", "task_id", chunkTask.Hash, "publicKey", taskCtx.PublicKey, "err", err)
return nil, ErrCoordinatorInternalFailure
if taskCtx.hasAssignedTask == nil {
if err = cp.proverTaskOrm.InsertProverTask(ctx.Copy(), proverTask); err != nil {
cp.recoverActiveAttempts(ctx, chunkTask)
log.Error("insert chunk prover task fail", "task_id", chunkTask.Hash, "publicKey", taskCtx.PublicKey, "err", err)
return nil, ErrCoordinatorInternalFailure
}
}
// notice uuid is set as a side effect of InsertProverTask
taskMsg.UUID = proverTask.UUID.String()
@@ -197,20 +234,14 @@ func (cp *ChunkProverTask) formatProverTask(ctx context.Context, task *orm.Prove
// Get block hashes.
blockHashes, dbErr := cp.blockOrm.GetL2BlockHashesByChunkHash(ctx, task.TaskID)
if dbErr != nil || len(blockHashes) == 0 {
return nil, fmt.Errorf("failed to fetch block hashes of a chunk, chunk hash:%s err:%w", task.TaskID, dbErr)
return nil, fmt.Errorf("failed to fetch block hashes of a chunk, chunk hash:%s err:%v", task.TaskID, dbErr)
}
var taskDetailBytes []byte
taskDetail := message.ChunkTaskDetail{
BlockHashes: blockHashes,
PrevMsgQueueHash: common.HexToHash(chunk.PrevL1MessageQueueHash),
}
if hardForkName == message.EuclidV2Fork {
taskDetail.ForkName = message.EuclidV2ForkNameForProver
} else {
log.Error("unsupported hard fork name", "hard_fork_name", hardForkName)
return nil, fmt.Errorf("unsupported hard fork name: %s", hardForkName)
ForkName: hardForkName,
}
var err error

View File

@@ -38,9 +38,10 @@ type ProverTask interface {
// BaseProverTask is a base prover task which contains a series of common functions
type BaseProverTask struct {
cfg *config.Config
chainCfg *params.ChainConfig
db *gorm.DB
cfg *config.Config
chainCfg *params.ChainConfig
db *gorm.DB
expectedVk map[string][]byte
batchOrm *orm.Batch
chunkOrm *orm.Chunk
@@ -57,10 +58,11 @@ type proverTaskContext struct {
ProverProviderType uint8
HardForkNames map[string]struct{}
taskType message.ProofType
chunkTask *orm.Chunk
batchTask *orm.Batch
bundleTask *orm.Bundle
taskType message.ProofType
chunkTask *orm.Chunk
batchTask *orm.Batch
bundleTask *orm.Bundle
hasAssignedTask *orm.ProverTask
}
// hardForkName gets the chunk/batch/bundle hard fork name
@@ -175,19 +177,22 @@ func (b *BaseProverTask) checkParameter(ctx *gin.Context) (*proverTaskContext, e
return nil, fmt.Errorf("public key %s is blocked from fetching tasks. ProverName: %s, ProverVersion: %s", publicKey, proverName, proverVersion)
}
isAssigned, err := b.proverTaskOrm.IsProverAssigned(ctx.Copy(), publicKey.(string))
assigned, err := b.proverTaskOrm.IsProverAssigned(ctx.Copy(), publicKey.(string))
if err != nil {
return nil, fmt.Errorf("failed to check if prover %s is assigned a task, err: %w", publicKey.(string), err)
}
if isAssigned {
return nil, fmt.Errorf("prover with publicKey %s is already assigned a task. ProverName: %s, ProverVersion: %s", publicKey, proverName, proverVersion)
}
ptc.hasAssignedTask = assigned
return &ptc, nil
}
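// Note (illustrative, not part of this change): checkParameter no longer rejects a
// prover that already holds a task; it records that task in ptc.hasAssignedTask and
// lets the caller decide whether to re-deliver it. A minimal sketch of the assumed
// caller-side pattern, mirroring the chunk/bundle Assign implementations:
//
//	ptc, err := b.checkParameter(ctx)
//	if err != nil {
//		return nil, err
//	}
//	if ptc.hasAssignedTask != nil {
//		if ptc.hasAssignedTask.TaskType != int16(message.ProofTypeChunk) {
//			return nil, fmt.Errorf("prover %s already holds a task of another type", ptc.PublicKey)
//		}
//		// re-deliver the recorded task instead of assigning a new one
//	}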
func (b *BaseProverTask) applyUniversal(schema *coordinatorType.GetTaskSchema) (*coordinatorType.GetTaskSchema, []byte, error) {
ok, uTaskData, metadata, _ := libzkp.GenerateUniversalTask(schema.TaskType, schema.TaskData, schema.HardForkName)
expectedVk, ok := b.expectedVk[schema.HardForkName]
if !ok {
return nil, nil, fmt.Errorf("no expectedVk found from hardfork %s", schema.HardForkName)
}
ok, uTaskData, metadata, _ := libzkp.GenerateUniversalTask(schema.TaskType, schema.TaskData, schema.HardForkName, expectedVk)
if !ok {
return nil, nil, fmt.Errorf("can not generate universal task, see coordinator log for the reason")
}

View File

@@ -71,6 +71,9 @@ type ProofReceiverLogic struct {
validateFailureProverTaskStatusNotOk prometheus.Counter
validateFailureProverTaskTimeout prometheus.Counter
validateFailureProverTaskHaveVerifier prometheus.Counter
proverSpeed *prometheus.GaugeVec
provingTime prometheus.Gauge
evmCyclePerGas prometheus.Gauge
ChunkTask provertask.ProverTask
BundleTask provertask.ProverTask
@@ -79,6 +82,7 @@ type ProofReceiverLogic struct {
// NewSubmitProofReceiverLogic create a proof receiver logic
func NewSubmitProofReceiverLogic(cfg *config.ProverManager, chainCfg *params.ChainConfig, db *gorm.DB, vf *verifier.Verifier, reg prometheus.Registerer) *ProofReceiverLogic {
return &ProofReceiverLogic{
chunkOrm: orm.NewChunk(db),
batchOrm: orm.NewBatch(db),
@@ -133,6 +137,18 @@ func NewSubmitProofReceiverLogic(cfg *config.ProverManager, chainCfg *params.Cha
Name: "coordinator_validate_failure_submit_have_been_verifier",
Help: "Total number of submit proof validate failure proof have been verifier.",
}),
evmCyclePerGas: promauto.With(reg).NewGauge(prometheus.GaugeOpts{
Name: "evm_circuit_cycle_per_gas",
Help: "VM cycles cost for a gas unit cost in evm execution",
}),
provingTime: promauto.With(reg).NewGauge(prometheus.GaugeOpts{
Name: "chunk_proving_time",
Help: "Wall clock time for chunk proving in second",
}),
proverSpeed: promauto.With(reg).NewGaugeVec(prometheus.GaugeOpts{
Name: "prover_speed",
Help: "Cycle against running time of prover (in mhz)",
}, []string{"type", "phase"}),
}
}
@@ -178,7 +194,20 @@ func (m *ProofReceiverLogic) HandleZkProof(ctx *gin.Context, proofParameter coor
if len(proverTask.Metadata) == 0 {
return errors.New("can not re-wrapping proof: no metadata has been recorded in advance")
}
proofParameter.Proof = libzkp.GenerateWrappedProof(proofParameter.Proof, string(proverTask.Metadata), []byte{})
var expected_vk []byte
switch message.ProofType(proofParameter.TaskType) {
case message.ProofTypeChunk:
expected_vk = m.verifier.ChunkVk[hardForkName]
case message.ProofTypeBatch:
expected_vk = m.verifier.BatchVk[hardForkName]
case message.ProofTypeBundle:
expected_vk = m.verifier.BundleVk[hardForkName]
}
if len(expected_vk) == 0 {
return errors.New("no vk specified match current hard fork, check your config")
}
proofParameter.Proof = libzkp.GenerateWrappedProof(proofParameter.Proof, string(proverTask.Metadata), expected_vk)
if proofParameter.Proof == "" {
return errors.New("can not re-wrapping proof, see coordinator log for reason")
}
@@ -191,12 +220,34 @@ func (m *ProofReceiverLogic) HandleZkProof(ctx *gin.Context, proofParameter coor
return unmarshalErr
}
success, verifyErr = m.verifier.VerifyChunkProof(chunkProof, hardForkName)
if stat := chunkProof.VmProof.Stat; stat != nil {
if g, _ := m.proverSpeed.GetMetricWithLabelValues("chunk", "exec"); g != nil && stat.ExecutionTimeMills > 0 {
g.Set(float64(stat.TotalCycle) / float64(stat.ExecutionTimeMills*1000))
}
if g, _ := m.proverSpeed.GetMetricWithLabelValues("chunk", "proving"); g != nil && stat.ProvingTimeMills > 0 {
g.Set(float64(stat.TotalCycle) / float64(stat.ProvingTimeMills*1000))
}
if chunkProof.MetaData.TotalGasUsed > 0 {
cycle_per_gas := float64(stat.TotalCycle) / float64(chunkProof.MetaData.TotalGasUsed)
m.evmCyclePerGas.Set(cycle_per_gas)
}
m.provingTime.Set(float64(stat.ProvingTimeMills) / 1000)
}
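// Unit note (editorial): TotalCycle / (TimeMills * 1000) is cycles per microsecond,
// i.e. millions of cycles per second, so the prover_speed gauge is reported in MHz.
// For example, 3,000,000,000 cycles executed in 60,000 ms gives
// 3e9 / (60,000 * 1000) = 50 MHz. evm_circuit_cycle_per_gas is simply
// TotalCycle / TotalGasUsed, and chunk_proving_time is ProvingTimeMills / 1000 seconds.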
case message.ProofTypeBatch:
batchProof := &message.OpenVMBatchProof{}
if unmarshalErr := json.Unmarshal([]byte(proofParameter.Proof), &batchProof); unmarshalErr != nil {
return unmarshalErr
}
success, verifyErr = m.verifier.VerifyBatchProof(batchProof, hardForkName)
if stat := batchProof.VmProof.Stat; stat != nil {
if g, _ := m.proverSpeed.GetMetricWithLabelValues("batch", "exec"); g != nil && stat.ExecutionTimeMills > 0 {
g.Set(float64(stat.TotalCycle) / float64(stat.ExecutionTimeMills*1000))
}
if g, _ := m.proverSpeed.GetMetricWithLabelValues("batch", "proving"); g != nil && stat.ProvingTimeMills > 0 {
g.Set(float64(stat.TotalCycle) / float64(stat.ProvingTimeMills*1000))
}
}
case message.ProofTypeBundle:
bundleProof := &message.OpenVMBundleProof{}
if unmarshalErr := json.Unmarshal([]byte(proofParameter.Proof), &bundleProof); unmarshalErr != nil {

View File

@@ -10,7 +10,13 @@ import (
// NewVerifier Sets up a mock verifier.
func NewVerifier(cfg *config.VerifierConfig) (*Verifier, error) {
return &Verifier{cfg: cfg, OpenVMVkMap: map[string]struct{}{"mock_vk": {}}}, nil
return &Verifier{
cfg: cfg,
OpenVMVkMap: map[string]struct{}{"mock_vk": {}},
ChunkVk: map[string][]byte{"euclidV2": []byte("mock_vk")},
BatchVk: map[string][]byte{"euclidV2": []byte("mock_vk")},
BundleVk: map[string][]byte{},
}, nil
}
// VerifyChunkProof returns a mock verification result for a ChunkProof.

View File

@@ -11,4 +11,7 @@ const InvalidTestProof = "this is a invalid proof"
type Verifier struct {
cfg *config.VerifierConfig
OpenVMVkMap map[string]struct{}
ChunkVk map[string][]byte
BatchVk map[string][]byte
BundleVk map[string][]byte
}

View File

@@ -4,11 +4,14 @@ package verifier
import (
"encoding/base64"
"encoding/hex"
"encoding/json"
"fmt"
"io"
"os"
"path"
"path/filepath"
"strings"
"github.com/scroll-tech/go-ethereum/log"
@@ -18,7 +21,7 @@ import (
"scroll-tech/coordinator/internal/logic/libzkp"
)
// This struct maps to `CircuitConfig` in libzkp/impl/src/verifier.rs
// This struct maps to `CircuitConfig` in libzkp/src/verifier.rs
// Defining a brand new struct here eliminates side effects in case fields
// in `*config.CircuitConfig` are changed
type rustCircuitConfig struct {
@@ -26,24 +29,28 @@ type rustCircuitConfig struct {
AssetsPath string `json:"assets_path"`
}
func newRustCircuitConfig(cfg *config.CircuitConfig) *rustCircuitConfig {
func newRustCircuitConfig(cfg config.AssetConfig) *rustCircuitConfig {
return &rustCircuitConfig{
ForkName: cfg.ForkName,
AssetsPath: cfg.AssetsPath,
}
}
// This struct maps to `VerifierConfig` in coordinator/internal/logic/libzkp/impl/src/verifier.rs
// This struct maps to `VerifierConfig` in coordinator/internal/logic/libzkp/src/verifier.rs
// Defining a brand new struct here eliminates side effects in case fields
// in `*config.VerifierConfig` are changed
type rustVerifierConfig struct {
HighVersionCircuit *rustCircuitConfig `json:"high_version_circuit"`
Circuits []*rustCircuitConfig `json:"circuits"`
}
func newRustVerifierConfig(cfg *config.VerifierConfig) *rustVerifierConfig {
return &rustVerifierConfig{
HighVersionCircuit: newRustCircuitConfig(cfg.HighVersionCircuit),
out := &rustVerifierConfig{}
for _, cfg := range cfg.Verifiers {
out.Circuits = append(out.Circuits, newRustCircuitConfig(cfg))
}
return out
}
type rustVkDump struct {
@@ -60,15 +67,23 @@ func NewVerifier(cfg *config.VerifierConfig) (*Verifier, error) {
return nil, err
}
if cfg.Features != "" {
libzkp.SetDynamicFeature(cfg.Features)
}
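// Features is a colon-separated list of dynamic switches parsed by libzkp's
// set_dynamic_feature; "legacy_witness" keeps the rkyv-based witness encoding
// (presumably for provers that predate the new bincode encoding of witnesses).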
libzkp.InitVerifier(string(configBytes))
v := &Verifier{
cfg: cfg,
OpenVMVkMap: make(map[string]struct{}),
ChunkVk: make(map[string][]byte),
BatchVk: make(map[string][]byte),
BundleVk: make(map[string][]byte),
}
if err := v.loadOpenVMVks(message.EuclidV2Fork); err != nil {
return nil, err
for _, cfg := range cfg.Verifiers {
if err := v.loadOpenVMVks(cfg); err != nil {
return nil, err
}
}
return v, nil
@@ -108,27 +123,42 @@ func (v *Verifier) VerifyBundleProof(proof *message.OpenVMBundleProof, forkName
return libzkp.VerifyBundleProof(string(buf), forkName), nil
}
func (v *Verifier) ReadVK(filePat string) (string, error) {
/*
add vk of incompatible circuit app here to avoid using them unexpectedly
25/07/15: 0.5.0rc0 is no longer compatible due to a breaking change
*/
const blocked_vks = `
rSJNNBpsxBdKlstbIIU/aYc7bHau98Qb2yjZMc5PmDhmGOolp5kYRbvF/VcWcO5HN5ujGs6S00W8pZcCoNQRLQ==,
2Lo7Cebm6SFtcsYXipkcMxIBmVY7UpoMXik/Msm7t2nyvi9EaNGsSnDnaCurscYEF+IcdjPUtVtY9EcD7IKwWg==,
D6YFHwTLZF/U2zpYJPQ3LwJZRm85yA5Vq2iFBqd3Mk4iwOUpS8sbOp3vg2+NDxhhKphgYpuUlykpdsoRhEt+cw==,
`
f, err := os.Open(filepath.Clean(filePat))
if err != nil {
return "", err
// decodeVkString tries to decode s as hex, and if that fails, as base64.
func decodeVkString(s string) ([]byte, error) {
// Try hex decoding first
if b, err := hex.DecodeString(s); err == nil {
return b, nil
}
byt, err := io.ReadAll(f)
// Fallback to base64 decoding
b, err := base64.StdEncoding.DecodeString(s)
if err != nil {
return "", err
return nil, err
}
return base64.StdEncoding.EncodeToString(byt), nil
if len(b) == 0 {
return nil, fmt.Errorf("decode vk string %s fail (empty bytes)", s)
}
return b, nil
}
func (v *Verifier) loadOpenVMVks(forkName string) error {
tempFile := path.Join(os.TempDir(), "openVmVk.json")
err := libzkp.DumpVk(forkName, tempFile)
if err != nil {
return err
}
func (v *Verifier) loadOpenVMVks(cfg config.AssetConfig) error {
f, err := os.Open(filepath.Clean(tempFile))
vkFileName := cfg.Vkfile
if vkFileName == "" {
vkFileName = "openVmVk.json"
}
vkFile := path.Join(cfg.AssetsPath, vkFileName)
f, err := os.Open(filepath.Clean(vkFile))
if err != nil {
return err
}
@@ -141,8 +171,36 @@ func (v *Verifier) loadOpenVMVks(forkName string) error {
if err := json.Unmarshal(byt, &dump); err != nil {
return err
}
if strings.Contains(blocked_vks, dump.Chunk) {
return fmt.Errorf("loaded blocked chunk vk %s", dump.Chunk)
}
if strings.Contains(blocked_vks, dump.Batch) {
return fmt.Errorf("loaded blocked batch vk %s", dump.Batch)
}
if strings.Contains(blocked_vks, dump.Bundle) {
return fmt.Errorf("loaded blocked bundle vk %s", dump.Bundle)
}
v.OpenVMVkMap[dump.Chunk] = struct{}{}
v.OpenVMVkMap[dump.Batch] = struct{}{}
v.OpenVMVkMap[dump.Bundle] = struct{}{}
log.Info("Load vks", "from", cfg.AssetsPath, "chunk", dump.Chunk, "batch", dump.Batch, "bundle", dump.Bundle)
decodedBytes, err := decodeVkString(dump.Chunk)
if err != nil {
return err
}
v.ChunkVk[cfg.ForkName] = decodedBytes
decodedBytes, err = decodeVkString(dump.Batch)
if err != nil {
return err
}
v.BatchVk[cfg.ForkName] = decodedBytes
decodedBytes, err = decodeVkString(dump.Bundle)
if err != nil {
return err
}
v.BundleVk[cfg.ForkName] = decodedBytes
return nil
}
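
An aside on the hex-first, base64-fallback decoding above (a minimal standalone sketch, not part of this change): both encodings of the same key bytes round-trip through decodeVkString-style logic, and only strings that are not valid hex fall through to base64.

package main

import (
	"bytes"
	"encoding/base64"
	"encoding/hex"
	"fmt"
)

// decode mirrors the hex-then-base64 fallback used by decodeVkString.
func decode(s string) ([]byte, error) {
	if b, err := hex.DecodeString(s); err == nil {
		return b, nil
	}
	return base64.StdEncoding.DecodeString(s)
}

func main() {
	vk := []byte{0xde, 0xad, 0xbe, 0xef}
	hexForm := hex.EncodeToString(vk)                // "deadbeef", valid hex
	b64Form := base64.StdEncoding.EncodeToString(vk) // "3q2+7w==", not valid hex
	a, _ := decode(hexForm)
	b, _ := decode(b64Form)
	fmt.Println(bytes.Equal(a, vk), bytes.Equal(b, vk)) // true true
}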

View File

@@ -29,11 +29,11 @@ func TestFFI(t *testing.T) {
as := assert.New(t)
cfg := &config.VerifierConfig{
HighVersionCircuit: &config.CircuitConfig{
AssetsPath: *assetsPathHi,
ForkName: "euclidV2",
MinProverVersion: "",
},
MinProverVersion: "",
Verifiers: []config.AssetConfig{{
AssetsPath: *assetsPathHi,
ForkName: "euclidV2",
}},
}
v, err := NewVerifier(cfg)

View File

@@ -57,17 +57,17 @@ func (*ProverTask) TableName() string {
}
// IsProverAssigned returns the task currently assigned to the prover with the given public key, or nil if none is assigned.
func (o *ProverTask) IsProverAssigned(ctx context.Context, publicKey string) (bool, error) {
func (o *ProverTask) IsProverAssigned(ctx context.Context, publicKey string) (*ProverTask, error) {
db := o.db.WithContext(ctx)
var task ProverTask
err := db.Where("prover_public_key = ? AND proving_status = ?", publicKey, types.ProverAssigned).First(&task).Error
if err != nil {
if errors.Is(err, gorm.ErrRecordNotFound) {
return false, nil
return nil, nil
}
return false, err
return nil, err
}
return true, nil
return &task, nil
}
// GetProverTasks get prover tasks
@@ -269,6 +269,24 @@ func (o *ProverTask) UpdateProverTaskProvingStatusAndFailureType(ctx context.Con
return nil
}
// UpdateProverTaskAssignedTime updates the assigned_at time of a specific ProverTask record.
func (o *ProverTask) UpdateProverTaskAssignedTime(ctx context.Context, uuid uuid.UUID, t time.Time, dbTX ...*gorm.DB) error {
db := o.db
if len(dbTX) > 0 && dbTX[0] != nil {
db = dbTX[0]
}
db = db.WithContext(ctx)
db = db.Model(&ProverTask{})
db = db.Where("uuid = ?", uuid)
updates := make(map[string]interface{})
updates["assigned_at"] = t
if err := db.Updates(updates).Error; err != nil {
return fmt.Errorf("ProverTask.UpdateProverTaskAssignedTime error: %w, uuid:%s, status: %v", err, uuid, t)
}
return nil
}
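// Editorial note: unlike InsertProverTask, this helper only touches the assigned_at
// column; a plausible (assumed, not shown in this diff) use is to restart the timeout
// window when an already-assigned task is re-delivered to the same prover.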
// UpdateProverTaskFailureType updates the prover task failure type
func (o *ProverTask) UpdateProverTaskFailureType(ctx context.Context, uuid uuid.UUID, failureType types.ProverTaskFailureType, dbTX ...*gorm.DB) error {
db := o.db

View File

@@ -79,16 +79,17 @@ func setupCoordinator(t *testing.T, proversPerSession uint8, coordinatorURL stri
tokenTimeout = 60
conf = &config.Config{
L2: &config.L2{
ChainID: 111,
ChainID: 111,
Endpoint: &config.L2Endpoint{},
},
ProverManager: &config.ProverManager{
ProversPerSession: proversPerSession,
Verifier: &config.VerifierConfig{
HighVersionCircuit: &config.CircuitConfig{
AssetsPath: "",
ForkName: "euclidV2",
MinProverVersion: "v4.4.89",
},
MinProverVersion: "v4.4.89",
Verifiers: []config.AssetConfig{{
AssetsPath: "",
ForkName: "euclidV2",
}},
},
BatchCollectionTimeSec: 10,
ChunkCollectionTimeSec: 10,
@@ -583,7 +584,8 @@ func testTimeoutProof(t *testing.T) {
err = chunkOrm.UpdateBatchHashInRange(context.Background(), 0, 100, batch.Hash)
assert.NoError(t, err)
encodeData, err := json.Marshal(message.OpenVMChunkProof{VmProof: &message.OpenVMProof{}, MetaData: struct {
ChunkInfo *message.ChunkInfo `json:"chunk_info"`
ChunkInfo *message.ChunkInfo `json:"chunk_info"`
TotalGasUsed uint64 `json:"chunk_total_gas"`
}{ChunkInfo: &message.ChunkInfo{}}})
assert.NoError(t, err)
assert.NotEmpty(t, encodeData)

crates/gpu_override/Cargo.lock generated Normal file

File diff suppressed because it is too large

View File

@@ -13,6 +13,7 @@ libzkp = { path = "../libzkp" }
alloy = { workspace = true, features = ["provider-http", "transport-http", "reqwest", "reqwest-rustls-tls", "json-rpc"] }
sbv-primitives = { workspace = true, features = ["scroll"] }
sbv-utils = { workspace = true, features = ["scroll"] }
sbv-core = { workspace = true, features = ["scroll"] }
eyre.workspace = true

View File

@@ -11,7 +11,7 @@ pub fn init(config: &str) -> eyre::Result<()> {
Ok(())
}
pub fn get_client() -> rpc_client::RpcClient<'static> {
pub fn get_client() -> impl libzkp::tasks::ChunkInterpreter {
GLOBAL_L2GETH_CLI
.get()
.expect("must has been inited")

View File

@@ -1,5 +1,5 @@
use alloy::{
providers::{Provider, ProviderBuilder, RootProvider},
providers::{Provider, ProviderBuilder},
rpc::client::ClientBuilder,
transports::layers::RetryBackoffLayer,
};
@@ -49,13 +49,13 @@ pub struct RpcConfig {
/// so it can be run in block mode (i.e. inside dynamic library without a global entry)
pub struct RpcClientCore {
/// rpc prover
provider: RootProvider<Network>,
client: alloy::rpc::client::RpcClient,
rt: tokio::runtime::Runtime,
}
#[derive(Clone, Copy)]
pub struct RpcClient<'a> {
provider: &'a RootProvider<Network>,
pub struct RpcClient<'a, T: Provider<Network>> {
provider: T,
handle: &'a tokio::runtime::Handle,
}
@@ -75,76 +75,78 @@ impl RpcClientCore {
let retry_layer = RetryBackoffLayer::new(config.max_retry, config.backoff, config.cups);
let client = ClientBuilder::default().layer(retry_layer).http(rpc);
Ok(Self {
provider: ProviderBuilder::<_, _, Network>::default().on_client(client),
rt,
})
Ok(Self { client, rt })
}
pub fn get_client(&self) -> RpcClient {
pub fn get_client(&self) -> RpcClient<'_, impl Provider<Network>> {
RpcClient {
provider: &self.provider,
provider: ProviderBuilder::<_, _, Network>::default()
.connect_client(self.client.clone()),
handle: self.rt.handle(),
}
}
}
impl ChunkInterpreter for RpcClient<'_> {
impl<T: Provider<Network>> ChunkInterpreter for RpcClient<'_, T> {
fn try_fetch_block_witness(
&self,
block_hash: sbv_primitives::B256,
prev_witness: Option<&sbv_primitives::types::BlockWitness>,
) -> Result<sbv_primitives::types::BlockWitness> {
prev_witness: Option<&sbv_core::BlockWitness>,
) -> Result<sbv_core::BlockWitness> {
async fn fetch_witness_async(
provider: &RootProvider<Network>,
provider: impl Provider<Network>,
block_hash: sbv_primitives::B256,
prev_witness: Option<&sbv_primitives::types::BlockWitness>,
) -> Result<sbv_primitives::types::BlockWitness> {
use alloy::network::primitives::BlockTransactionsKind;
use sbv_utils::{rpc::ProviderExt, witness::WitnessBuilder};
prev_witness: Option<&sbv_core::BlockWitness>,
) -> Result<sbv_core::BlockWitness> {
use sbv_utils::rpc::ProviderExt;
let chain_id = provider.get_chain_id().await?;
let block = provider
.get_block_by_hash(block_hash, BlockTransactionsKind::Full)
.await?
.ok_or_else(|| eyre::eyre!("Block not found"))?;
let number = block.header.number;
if number == 0 {
eyre::bail!("no number in header or use block 0");
}
let prev_state_root = if let Some(witness) = prev_witness {
if witness.header.number != number - 1 {
eyre::bail!(
"the ref witness is not the previous block, expected {} get {}",
number - 1,
witness.header.number,
);
}
witness.header.state_root
let (chain_id, block_num, prev_state_root) = if let Some(w) = prev_witness {
(w.chain_id, w.header.number + 1, w.header.state_root)
} else {
provider
.scroll_disk_root((number - 1).into())
let chain_id = provider.get_chain_id().await?;
let block = provider
.get_block_by_hash(block_hash)
.full()
.await?
.disk_root
.ok_or_else(|| eyre::eyre!("Block {block_hash} not found"))?;
let parent_block = provider
.get_block_by_hash(block.header.parent_hash)
.await?
.ok_or_else(|| {
eyre::eyre!(
"parent block for block {} should exist",
block.header.number
)
})?;
(
chain_id,
block.header.number,
parent_block.header.state_root,
)
};
let witness = WitnessBuilder::new()
.block(block)
.chain_id(chain_id)
.execution_witness(provider.debug_execution_witness(number.into()).await?)
.state_root(provider.scroll_disk_root(number.into()).await?.disk_root)?
.prev_state_root(prev_state_root)
.build()?;
let req = provider
.dump_block_witness(block_num)
.with_chain_id(chain_id)
.with_prev_state_root(prev_state_root);
let witness = req
.send()
.await
.transpose()
.ok_or_else(|| eyre::eyre!("Block witness {block_num} not available"))??;
Ok(witness)
}
tracing::debug!("fetch witness for {block_hash}");
self.handle
.block_on(fetch_witness_async(self.provider, block_hash, prev_witness))
self.handle.block_on(fetch_witness_async(
&self.provider,
block_hash,
prev_witness,
))
}
fn try_fetch_storage_node(
@@ -152,7 +154,7 @@ impl ChunkInterpreter for RpcClient<'_> {
node_hash: sbv_primitives::B256,
) -> Result<sbv_primitives::Bytes> {
async fn fetch_storage_node_async(
provider: &RootProvider<Network>,
provider: impl Provider<Network>,
node_hash: sbv_primitives::B256,
) -> Result<sbv_primitives::Bytes> {
let ret = provider
@@ -164,7 +166,7 @@ impl ChunkInterpreter for RpcClient<'_> {
tracing::debug!("fetch storage node for {node_hash}");
self.handle
.block_on(fetch_storage_node_async(self.provider, node_hash))
.block_on(fetch_storage_node_async(&self.provider, node_hash))
}
}
@@ -190,10 +192,10 @@ mod tests {
let client_core = RpcClientCore::create(&config).expect("Failed to create RPC client");
let client = client_core.get_client();
// latest - 1 block in 2025.6.15
// latest - 1 block in 2025.9.11
let block_hash = B256::from(
hex::const_decode_to_array(
b"0x9535a6970bc4db9031749331a214e35ed8c8a3f585f6f456d590a0bc780a1368",
b"0x093fb6bf2e556a659b35428ac447cd9f0635382fc40ffad417b5910824f9e932",
)
.unwrap(),
);
@@ -203,10 +205,10 @@ mod tests {
.try_fetch_block_witness(block_hash, None)
.expect("should success");
// latest block in 2025.6.15
// block selected in 2025.9.11
let block_hash = B256::from(
hex::const_decode_to_array(
b"0xd47088cdb6afc68aa082e633bb7da9340d29c73841668afacfb9c1e66e557af0",
b"0x77cc84dd7a4dedf6fe5fb9b443aeb5a4fb0623ad088a365d3232b7b23fc848e5",
)
.unwrap(),
);
@@ -216,26 +218,4 @@ mod tests {
println!("{}", serde_json::to_string_pretty(&wit2).unwrap());
}
#[test]
#[ignore = "Requires L2GETH_ENDPOINT environment variable"]
fn test_try_fetch_storage_node() {
let config = create_config_from_env();
let client_core = RpcClientCore::create(&config).expect("Failed to create RPC client");
let client = client_core.get_client();
// the root node (state root) of the block in unittest above
let node_hash = B256::from(
hex::const_decode_to_array(
b"0xb9e67403a2eb35afbb0475fe942918cf9a330a1d7532704c24554506be62b27c",
)
.unwrap(),
);
// This is expected to fail since we're using a dummy hash, but it tests the code path
let node = client
.try_fetch_storage_node(node_hash)
.expect("should success");
println!("{}", serde_json::to_string_pretty(&node).unwrap());
}
}

View File

@@ -6,9 +6,11 @@ edition.workspace = true
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
scroll-zkvm-types.workspace = true
scroll-zkvm-verifier-euclid.workspace = true
scroll-zkvm-verifier.workspace = true
sbv-primitives.workspace = true
alloy-primitives.workspace = true # suppress the effect of "native-keccak"
sbv-primitives = {workspace = true, features = ["scroll-compress-ratio", "scroll"]}
sbv-core = { workspace = true, features = ["scroll"] }
base64.workspace = true
serde.workspace = true
serde_derive.workspace = true
@@ -17,7 +19,8 @@ tracing.workspace = true
eyre.workspace = true
git-version = "0.3.5"
bincode = { version = "2", features = ["serde"] }
serde_stacker = "0.1"
regex = "1.11"
c-kzg = { version = "1.0", features = ["serde"] }
c-kzg = { version = "2.0", features = ["serde"] }

View File

@@ -5,12 +5,33 @@ pub use verifier::{TaskType, VerifierConfig};
mod utils;
use sbv_primitives::B256;
use scroll_zkvm_types::util::vec_as_base64;
use scroll_zkvm_types::utils::vec_as_base64;
use serde::{Deserialize, Serialize};
use serde_json::value::RawValue;
use std::path::Path;
use tasks::chunk_interpreter::{ChunkInterpreter, TryFromWithInterpreter};
/// global features: use legacy encoding for witness
static mut LEGACY_WITNESS_ENCODING: bool = false;
pub(crate) fn witness_use_legacy_mode() -> bool {
unsafe { LEGACY_WITNESS_ENCODING }
}
pub fn set_dynamic_feature(feats: &str) {
for feat_s in feats.split(':') {
match feat_s.trim().to_lowercase().as_str() {
"legacy_witness" => {
tracing::info!("set witness encoding for legacy mode");
unsafe {
// this function is only called during the initialization step
LEGACY_WITNESS_ENCODING = true;
}
}
s => tracing::warn!("unrecognized dynamic feature: {s}"),
}
}
}
/// Turn the coordinator's chunk task into a json string for formal chunk proving
/// task (with full witnesses)
pub fn checkout_chunk_task(
@@ -26,11 +47,12 @@ pub fn checkout_chunk_task(
}
/// Generate required stuff for proving tasks
/// return (pi_hash, metadata, task)
pub fn gen_universal_task(
task_type: i32,
task_json: &str,
fork_name: &str,
interpreter: Option<impl ChunkInterpreter>,
fork_name_str: &str,
expected_vk: &[u8],
) -> eyre::Result<(B256, String, String)> {
use proofs::*;
use tasks::*;
@@ -44,26 +66,48 @@ pub fn gen_universal_task(
Bundle(BundleProofMetadata),
}
let (pi_hash, metadata, u_task) = match task_type {
let (pi_hash, metadata, mut u_task) = match task_type {
x if x == TaskType::Chunk as i32 => {
let task = serde_json::from_str::<ChunkProvingTask>(task_json)?;
let mut task = serde_json::from_str::<ChunkProvingTask>(task_json)?;
// normalize the fork name field in the task
task.fork_name = task.fork_name.to_lowercase();
// always respect the fork_name_str being passed (it has already been normalized);
// if the fork_name wrapped in the task does not match, consider it a malformed task
if fork_name_str != task.fork_name.as_str() {
eyre::bail!("fork name in chunk task not match the calling arg, expected {fork_name_str}, get {}", task.fork_name);
}
let (pi_hash, metadata, u_task) =
gen_universal_chunk_task(task, fork_name.into(), interpreter)?;
utils::panic_catch(move || gen_universal_chunk_task(task, fork_name_str.into()))
.map_err(|e| eyre::eyre!("caught panic in chunk task{e}"))??;
(pi_hash, AnyMetaData::Chunk(metadata), u_task)
}
x if x == TaskType::Batch as i32 => {
let task = serde_json::from_str::<BatchProvingTask>(task_json)?;
let (pi_hash, metadata, u_task) = gen_universal_batch_task(task, fork_name.into())?;
let mut task = serde_json::from_str::<BatchProvingTask>(task_json)?;
task.fork_name = task.fork_name.to_lowercase();
if fork_name_str != task.fork_name.as_str() {
eyre::bail!("fork name in batch task not match the calling arg, expected {fork_name_str}, get {}", task.fork_name);
}
let (pi_hash, metadata, u_task) =
utils::panic_catch(move || gen_universal_batch_task(task, fork_name_str.into()))
.map_err(|e| eyre::eyre!("caught panic in chunk task{e}"))??;
(pi_hash, AnyMetaData::Batch(metadata), u_task)
}
x if x == TaskType::Bundle as i32 => {
let task = serde_json::from_str::<BundleProvingTask>(task_json)?;
let (pi_hash, metadata, u_task) = gen_universal_bundle_task(task, fork_name.into())?;
let mut task = serde_json::from_str::<BundleProvingTask>(task_json)?;
task.fork_name = task.fork_name.to_lowercase();
if fork_name_str != task.fork_name.as_str() {
eyre::bail!("fork name in bundle task not match the calling arg, expected {fork_name_str}, get {}", task.fork_name);
}
let (pi_hash, metadata, u_task) =
utils::panic_catch(move || gen_universal_bundle_task(task, fork_name_str.into()))
.map_err(|e| eyre::eyre!("caught panic in chunk task{e}"))??;
(pi_hash, AnyMetaData::Bundle(metadata), u_task)
}
_ => return Err(eyre::eyre!("unrecognized task type {task_type}")),
};
u_task.vk = Vec::from(expected_vk);
Ok((
pi_hash,
serde_json::to_string(&metadata)?,
@@ -106,8 +150,7 @@ pub fn verifier_init(config: &str) -> eyre::Result<()> {
pub fn verify_proof(proof: Vec<u8>, fork_name: &str, task_type: TaskType) -> eyre::Result<bool> {
let verifier = verifier::get_verifier(fork_name)?;
let ret = verifier.verify(task_type, proof)?;
let ret = verifier.lock().unwrap().verify(task_type, &proof)?;
Ok(ret)
}
@@ -115,7 +158,7 @@ pub fn verify_proof(proof: Vec<u8>, fork_name: &str, task_type: TaskType) -> eyr
pub fn dump_vk(fork_name: &str, file: &str) -> eyre::Result<()> {
let verifier = verifier::get_verifier(fork_name)?;
verifier.dump_vk(Path::new(file));
verifier.lock().unwrap().dump_vk(Path::new(file));
Ok(())
}

View File

@@ -7,10 +7,10 @@ use scroll_zkvm_types::{
batch::BatchInfo,
bundle::BundleInfo,
chunk::ChunkInfo,
proof::{EvmProof, OpenVmEvmProof, ProofEnum, RootProof},
proof::{EvmProof, OpenVmEvmProof, ProofEnum, StarkProof},
public_inputs::{ForkName, MultiVersionPublicInputs},
types_agg::{AggregationInput, ProgramCommitment},
util::vec_as_base64,
types_agg::AggregationInput,
utils::{serialize_vk, vec_as_base64},
};
use serde::{de::DeserializeOwned, Deserialize, Serialize};
@@ -40,7 +40,7 @@ pub struct WrappedProof<Metadata> {
}
pub trait AsRootProof {
fn as_root_proof(&self) -> &RootProof;
fn as_root_proof(&self) -> &StarkProof;
}
pub trait AsEvmProof {
@@ -61,17 +61,17 @@ pub type BatchProof = WrappedProof<BatchProofMetadata>;
pub type BundleProof = WrappedProof<BundleProofMetadata>;
impl AsRootProof for ChunkProof {
fn as_root_proof(&self) -> &RootProof {
fn as_root_proof(&self) -> &StarkProof {
self.proof
.as_root_proof()
.as_stark_proof()
.expect("batch proof use root proof")
}
}
impl AsRootProof for BatchProof {
fn as_root_proof(&self) -> &RootProof {
fn as_root_proof(&self) -> &StarkProof {
self.proof
.as_root_proof()
.as_stark_proof()
.expect("batch proof use root proof")
}
}
@@ -122,6 +122,8 @@ pub trait PersistableProof: Sized {
pub struct ChunkProofMetadata {
/// The chunk information describing the list of blocks contained within the chunk.
pub chunk_info: ChunkInfo,
/// Additional data for stats
pub chunk_total_gas: u64,
}
impl ProofMetadata for ChunkProofMetadata {
@@ -170,7 +172,7 @@ impl<Metadata> From<&WrappedProof<Metadata>> for AggregationInput {
fn from(value: &WrappedProof<Metadata>) -> Self {
Self {
public_values: value.proof.public_values(),
commitment: ProgramCommitment::deserialize(&value.vk),
commitment: serialize_vk::deserialize(&value.vk),
}
}
}
@@ -179,7 +181,7 @@ impl<Metadata: ProofMetadata> WrappedProof<Metadata> {
/// Sanity checks on the wrapped proof:
///
/// - pi_hash computed in host does in fact match pi_hash computed in guest
pub fn sanity_check(&self, fork_name: ForkName) {
pub fn pi_hash_check(&self, fork_name: ForkName) -> bool {
let proof_pi = self.proof.public_values();
let expected_pi = self
@@ -192,10 +194,11 @@ impl<Metadata: ProofMetadata> WrappedProof<Metadata> {
.map(|&v| v as u32)
.collect::<Vec<_>>();
assert_eq!(
expected_pi, proof_pi,
"pi mismatch: expected={expected_pi:?}, found={proof_pi:?}"
);
let ret = expected_pi == proof_pi;
if !ret {
tracing::warn!("pi mismatch: expected={expected_pi:?}, found={proof_pi:?}");
}
ret
}
}
@@ -213,11 +216,7 @@ impl<Metadata: ProofMetadata> PersistableProof for WrappedProof<Metadata> {
mod tests {
use base64::{prelude::BASE64_STANDARD, Engine};
use sbv_primitives::B256;
use scroll_zkvm_types::{
bundle::{BundleInfo, BundleInfoV1},
proof::EvmProof,
public_inputs::PublicInputs,
};
use scroll_zkvm_types::{bundle::BundleInfo, proof::EvmProof, public_inputs::ForkName};
use super::*;
@@ -244,7 +243,7 @@ mod tests {
fn test_dummy_proof() -> eyre::Result<()> {
// 1. Metadata
let metadata = {
let bundle_info: BundleInfoV1 = BundleInfo {
let bundle_info = BundleInfo {
chain_id: 12345,
num_batches: 12,
prev_state_root: B256::repeat_byte(1),
@@ -253,11 +252,11 @@ mod tests {
batch_hash: B256::repeat_byte(4),
withdraw_root: B256::repeat_byte(5),
msg_queue_hash: B256::repeat_byte(6),
}
.into();
let bundle_pi_hash = bundle_info.pi_hash();
encryption_key: None,
};
let bundle_pi_hash = bundle_info.pi_hash(ForkName::EuclidV1);
BundleProofMetadata {
bundle_info: bundle_info.0,
bundle_info,
bundle_pi_hash,
}
};

View File

@@ -9,30 +9,52 @@ pub use chunk::{ChunkProvingTask, ChunkTask};
pub use chunk_interpreter::ChunkInterpreter;
pub use scroll_zkvm_types::task::ProvingTask;
use crate::proofs::{BatchProofMetadata, BundleProofMetadata, ChunkProofMetadata};
use chunk_interpreter::{DummyInterpreter, TryFromWithInterpreter};
use sbv_primitives::B256;
use scroll_zkvm_types::{
chunk::ChunkInfo,
public_inputs::{ForkName, MultiVersionPublicInputs},
use crate::{
proofs::{self, BatchProofMetadata, BundleProofMetadata, ChunkProofMetadata},
utils::panic_catch,
};
use sbv_primitives::B256;
use scroll_zkvm_types::public_inputs::{ForkName, MultiVersionPublicInputs};
fn encode_task_to_witness<T: serde::Serialize>(task: &T) -> eyre::Result<Vec<u8>> {
let config = bincode::config::standard();
Ok(bincode::serde::encode_to_vec(task, config)?)
}
fn check_aggregation_proofs<Metadata>(
proofs: &[proofs::WrappedProof<Metadata>],
fork_name: ForkName,
) -> eyre::Result<()>
where
Metadata: proofs::ProofMetadata,
{
panic_catch(|| {
for w in proofs.windows(2) {
// w[1].metadata
// .pi_hash_info()
// .validate(w[0].metadata.pi_hash_info(), fork_name);
}
})
.map_err(|e| eyre::eyre!("Chunk data validation failed: {}", e))?;
Ok(())
}
/// Generate required stuff for chunk proving
pub fn gen_universal_chunk_task(
mut task: ChunkProvingTask,
task: ChunkProvingTask,
fork_name: ForkName,
interpreter: Option<impl ChunkInterpreter>,
) -> eyre::Result<(B256, ChunkProofMetadata, ProvingTask)> {
let chunk_info = if let Some(interpreter) = interpreter {
ChunkInfo::try_from_with_interpret(&mut task, interpreter)
} else {
ChunkInfo::try_from_with_interpret(&mut task, DummyInterpreter {})
}?;
let chunk_total_gas = task.stats().total_gas_used;
let chunk_info = task.precheck_and_build_metadata()?;
let proving_task = task.try_into()?;
let expected_pi_hash = chunk_info.pi_hash_by_fork(fork_name);
Ok((
expected_pi_hash,
ChunkProofMetadata { chunk_info },
ChunkProofMetadata {
chunk_info,
chunk_total_gas,
},
proving_task,
))
}

View File

@@ -4,8 +4,9 @@ use eyre::Result;
use sbv_primitives::{B256, U256};
use scroll_zkvm_types::{
batch::{
BatchHeader, BatchHeaderV6, BatchHeaderV7, BatchInfo, BatchWitness, EnvelopeV6, EnvelopeV7,
PointEvalWitness, ReferenceHeader, N_BLOB_BYTES,
build_point_eval_witness, BatchHeader, BatchHeaderV6, BatchHeaderV7, BatchHeaderV8,
BatchInfo, BatchWitness, Envelope, EnvelopeV6, EnvelopeV7, EnvelopeV8, LegacyBatchWitness,
ReferenceHeader, N_BLOB_BYTES,
},
public_inputs::ForkName,
task::ProvingTask,
@@ -23,37 +24,35 @@ use utils::{base64, point_eval};
#[serde(untagged)]
pub enum BatchHeaderV {
V6(BatchHeaderV6),
V7(BatchHeaderV7),
}
impl From<BatchHeaderV> for ReferenceHeader {
fn from(value: BatchHeaderV) -> Self {
match value {
BatchHeaderV::V6(h) => ReferenceHeader::V6(h),
BatchHeaderV::V7(h) => ReferenceHeader::V7(h),
}
}
V7_8(BatchHeaderV7),
}
impl BatchHeaderV {
pub fn batch_hash(&self) -> B256 {
match self {
BatchHeaderV::V6(h) => h.batch_hash(),
BatchHeaderV::V7(h) => h.batch_hash(),
BatchHeaderV::V7_8(h) => h.batch_hash(),
}
}
pub fn must_v6_header(&self) -> &BatchHeaderV6 {
match self {
BatchHeaderV::V6(h) => h,
BatchHeaderV::V7(_) => panic!("try to pick v7 header"),
_ => panic!("try to pick other header type"),
}
}
pub fn must_v7_header(&self) -> &BatchHeaderV7 {
match self {
BatchHeaderV::V7(h) => h,
BatchHeaderV::V6(_) => panic!("try to pick v6 header"),
BatchHeaderV::V7_8(h) => h,
_ => panic!("try to pick other header type"),
}
}
pub fn must_v8_header(&self) -> &BatchHeaderV8 {
match self {
BatchHeaderV::V7_8(h) => h,
_ => panic!("try to pick other header type"),
}
}
}
@@ -85,6 +84,12 @@ impl TryFrom<BatchProvingTask> for ProvingTask {
fn try_from(value: BatchProvingTask) -> Result<Self> {
let witness = value.build_guest_input();
let serialized_witness = if crate::witness_use_legacy_mode() {
let legacy_witness = LegacyBatchWitness::from(witness);
to_rkyv_bytes::<RancorError>(&legacy_witness)?.into_vec()
} else {
super::encode_task_to_witness(&witness)?
};
Ok(ProvingTask {
identifier: value.batch_header.batch_hash().to_string(),
@@ -92,9 +97,9 @@ impl TryFrom<BatchProvingTask> for ProvingTask {
aggregated_proofs: value
.chunk_proofs
.into_iter()
.map(|w_proof| w_proof.proof.into_root_proof().expect("expect root proof"))
.map(|w_proof| w_proof.proof.into_stark_proof().expect("expect root proof"))
.collect(),
serialized_witness: vec![to_rkyv_bytes::<RancorError>(&witness)?.into_vec()],
serialized_witness: vec![serialized_witness],
vk: Vec::new(),
})
}
@@ -104,7 +109,7 @@ impl BatchProvingTask {
fn build_guest_input(&self) -> BatchWitness {
let fork_name = self.fork_name.to_lowercase().as_str().into();
// calculate point eval needed and compare with task input
// sanity check: calculate point eval needed and compare with task input
let (kzg_commitment, kzg_proof, challenge_digest) = {
let blob = point_eval::to_blob(&self.blob_bytes);
let commitment = point_eval::blob_to_kzg_commitment(&blob);
@@ -117,21 +122,31 @@ impl BatchProvingTask {
"hardfork mismatch for da-codec@v6 header: found={fork_name:?}, expected={:?}",
ForkName::EuclidV1,
);
EnvelopeV6::from(self.blob_bytes.as_slice()).challenge_digest(versioned_hash)
EnvelopeV6::from_slice(self.blob_bytes.as_slice())
.challenge_digest(versioned_hash)
}
BatchHeaderV::V7(_) => {
assert_eq!(
fork_name,
ForkName::EuclidV2,
"hardfork mismatch for da-codec@v7 header: found={fork_name:?}, expected={:?}",
ForkName::EuclidV2,
);
BatchHeaderV::V7_8(_) => {
let padded_blob_bytes = {
let mut padded_blob_bytes = self.blob_bytes.to_vec();
padded_blob_bytes.resize(N_BLOB_BYTES, 0);
padded_blob_bytes
};
EnvelopeV7::from(padded_blob_bytes.as_slice()).challenge_digest(versioned_hash)
match fork_name {
ForkName::EuclidV2 => {
<EnvelopeV7 as Envelope>::from_slice(padded_blob_bytes.as_slice())
.challenge_digest(versioned_hash)
}
ForkName::Feynman => {
<EnvelopeV8 as Envelope>::from_slice(padded_blob_bytes.as_slice())
.challenge_digest(versioned_hash)
}
f => unreachable!(
"hardfork mismatch for da-codec@v7 header: found={}, expected={:?}",
f,
[ForkName::EuclidV2, ForkName::Feynman],
),
}
}
};
@@ -152,12 +167,16 @@ impl BatchProvingTask {
assert_eq!(p, kzg_proof);
}
let point_eval_witness = PointEvalWitness {
kzg_commitment: kzg_commitment.into_inner(),
kzg_proof: kzg_proof.into_inner(),
};
let point_eval_witness = Some(build_point_eval_witness(
kzg_commitment.into_inner(),
kzg_proof.into_inner(),
));
let reference_header = self.batch_header.clone().into();
let reference_header = match fork_name {
ForkName::EuclidV1 => ReferenceHeader::V6(*self.batch_header.must_v6_header()),
ForkName::EuclidV2 => ReferenceHeader::V7(*self.batch_header.must_v7_header()),
ForkName::Feynman => ReferenceHeader::V8(*self.batch_header.must_v8_header()),
};
BatchWitness {
fork_name,
@@ -170,84 +189,20 @@ impl BatchProvingTask {
blob_bytes: self.blob_bytes.clone(),
reference_header,
point_eval_witness,
version: 0,
}
}
pub fn precheck_and_build_metadata(&self) -> Result<BatchInfo> {
let fork_name = ForkName::from(self.fork_name.as_str());
let (parent_state_root, state_root, chain_id, withdraw_root) = (
self.chunk_proofs
.first()
.expect("at least one chunk in batch")
.metadata
.chunk_info
.prev_state_root,
self.chunk_proofs
.last()
.expect("at least one chunk in batch")
.metadata
.chunk_info
.post_state_root,
self.chunk_proofs
.last()
.expect("at least one chunk in batch")
.metadata
.chunk_info
.chain_id,
self.chunk_proofs
.last()
.expect("at least one chunk in batch")
.metadata
.chunk_info
.withdraw_root,
);
let (parent_batch_hash, prev_msg_queue_hash, post_msg_queue_hash) = match self.batch_header
{
BatchHeaderV::V6(h) => {
assert_eq!(
fork_name,
ForkName::EuclidV1,
"hardfork mismatch for da-codec@v6 header: found={fork_name:?}, expected={:?}",
ForkName::EuclidV1,
);
(h.parent_batch_hash, Default::default(), Default::default())
}
BatchHeaderV::V7(h) => {
assert_eq!(
fork_name,
ForkName::EuclidV2,
"hardfork mismatch for da-codec@v7 header: found={fork_name:?}, expected={:?}",
ForkName::EuclidV2,
);
(
h.parent_batch_hash,
self.chunk_proofs
.first()
.expect("at least one chunk in batch")
.metadata
.chunk_info
.prev_msg_queue_hash,
self.chunk_proofs
.last()
.expect("at least one chunk in batch")
.metadata
.chunk_info
.post_msg_queue_hash,
)
}
};
// for every aggregation task, there are two steps needed to build the metadata:
// 1. generate data for metadata from the witness
// 2. validate every adjacent proof pair
let witness = self.build_guest_input();
let metadata = BatchInfo::from(&witness);
let batch_hash = self.batch_header.batch_hash();
super::check_aggregation_proofs(self.chunk_proofs.as_slice(), fork_name)?;
Ok(BatchInfo {
parent_state_root,
parent_batch_hash,
state_root,
batch_hash,
chain_id,
withdraw_root,
prev_msg_queue_hash,
post_msg_queue_hash,
})
Ok(metadata)
}
}

View File

@@ -18,7 +18,7 @@ pub mod base64 {
pub mod point_eval {
use c_kzg;
use sbv_primitives::{types::eips::eip4844::BLS_MODULUS, B256 as H256, U256};
use scroll_zkvm_types::util::sha256_rv32;
use scroll_zkvm_types::utils::sha256_rv32;
/// Given the blob-envelope, translate it to a fixed size EIP-4844 blob.
///
@@ -42,7 +42,8 @@ pub mod point_eval {
/// Get the KZG commitment from an EIP-4844 blob.
pub fn blob_to_kzg_commitment(blob: &c_kzg::Blob) -> c_kzg::KzgCommitment {
c_kzg::KzgCommitment::blob_to_kzg_commitment(blob, c_kzg::ethereum_kzg_settings())
c_kzg::ethereum_kzg_settings(0)
.blob_to_kzg_commitment(blob)
.expect("blob to kzg commitment should succeed")
}
@@ -65,12 +66,9 @@ pub mod point_eval {
pub fn get_kzg_proof(blob: &c_kzg::Blob, challenge: H256) -> (c_kzg::KzgProof, U256) {
let challenge = get_x_from_challenge(challenge);
let (proof, y) = c_kzg::KzgProof::compute_kzg_proof(
blob,
&c_kzg::Bytes32::new(challenge.to_be_bytes()),
c_kzg::ethereum_kzg_settings(),
)
.expect("kzg proof should succeed");
let (proof, y) = c_kzg::ethereum_kzg_settings(0)
.compute_kzg_proof(blob, &c_kzg::Bytes32::new(challenge.to_be_bytes()))
.expect("kzg proof should succeed");
(proof, U256::from_be_slice(y.as_slice()))
}

View File

@@ -2,6 +2,7 @@ use crate::proofs::BatchProof;
use eyre::Result;
use scroll_zkvm_types::{
bundle::{BundleInfo, BundleWitness},
public_inputs::ForkName,
task::ProvingTask,
utils::{to_rkyv_bytes, RancorError},
};
@@ -40,67 +41,28 @@ impl BundleProvingTask {
fn build_guest_input(&self) -> BundleWitness {
BundleWitness {
version: 0,
batch_proofs: self.batch_proofs.iter().map(|proof| proof.into()).collect(),
batch_infos: self
.batch_proofs
.iter()
.map(|wrapped_proof| wrapped_proof.metadata.batch_info.clone())
.collect(),
fork_name: self.fork_name.to_lowercase().as_str().into(),
}
}
pub fn precheck_and_build_metadata(&self) -> Result<BundleInfo> {
use eyre::eyre;
let err_prefix = format!("metadata_with_prechecks for task_id={}", self.identifier());
let fork_name = ForkName::from(self.fork_name.as_str());
// for every aggregation task, there are two steps needed to build the metadata:
// 1. generate data for metadata from the witness
// 2. validate every adjacent proof pair
let witness = self.build_guest_input();
let metadata = BundleInfo::from(&witness);
for w in self.batch_proofs.windows(2) {
if w[1].metadata.batch_info.chain_id != w[0].metadata.batch_info.chain_id {
return Err(eyre!("{err_prefix}: chain_id mismatch"));
}
super::check_aggregation_proofs(self.batch_proofs.as_slice(), fork_name)?;
if w[1].metadata.batch_info.parent_state_root != w[0].metadata.batch_info.state_root {
return Err(eyre!("{err_prefix}: state_root not chained"));
}
if w[1].metadata.batch_info.parent_batch_hash != w[0].metadata.batch_info.batch_hash {
return Err(eyre!("{err_prefix}: batch_hash not chained"));
}
}
let (first_batch, last_batch) = (
&self
.batch_proofs
.first()
.expect("at least one batch in bundle")
.metadata
.batch_info,
&self
.batch_proofs
.last()
.expect("at least one batch in bundle")
.metadata
.batch_info,
);
let chain_id = first_batch.chain_id;
let num_batches = u32::try_from(self.batch_proofs.len()).expect("num_batches: u32");
let prev_state_root = first_batch.parent_state_root;
let prev_batch_hash = first_batch.parent_batch_hash;
let post_state_root = last_batch.state_root;
let batch_hash = last_batch.batch_hash;
let withdraw_root = last_batch.withdraw_root;
let msg_queue_hash = last_batch.post_msg_queue_hash;
Ok(BundleInfo {
chain_id,
msg_queue_hash,
num_batches,
prev_state_root,
prev_batch_hash,
post_state_root,
batch_hash,
withdraw_root,
})
Ok(metadata)
}
}
@@ -109,6 +71,12 @@ impl TryFrom<BundleProvingTask> for ProvingTask {
fn try_from(value: BundleProvingTask) -> Result<Self> {
let witness = value.build_guest_input();
let serialized_witness = if crate::witness_use_legacy_mode() {
//to_rkyv_bytes::<RancorError>(&witness)?.into_vec()
unimplemented!();
} else {
super::encode_task_to_witness(&witness)?
};
Ok(ProvingTask {
identifier: value.identifier(),
@@ -116,9 +84,9 @@ impl TryFrom<BundleProvingTask> for ProvingTask {
aggregated_proofs: value
.batch_proofs
.into_iter()
.map(|w_proof| w_proof.proof.into_root_proof().expect("expect root proof"))
.map(|w_proof| w_proof.proof.into_stark_proof().expect("expect root proof"))
.collect(),
serialized_witness: vec![to_rkyv_bytes::<RancorError>(&witness)?.to_vec()],
serialized_witness: vec![serialized_witness],
vk: Vec::new(),
})
}

View File

@@ -1,8 +1,9 @@
use super::chunk_interpreter::*;
use eyre::Result;
use sbv_primitives::{types::BlockWitness, B256};
use sbv_core::BlockWitness;
use sbv_primitives::B256;
use scroll_zkvm_types::{
chunk::{execute, ChunkInfo, ChunkWitness},
chunk::{execute, ChunkInfo, ChunkWitness, LegacyChunkWitness},
task::ProvingTask,
utils::{to_rkyv_bytes, RancorError},
};
@@ -67,12 +68,18 @@ impl TryFrom<ChunkProvingTask> for ProvingTask {
fn try_from(value: ChunkProvingTask) -> Result<Self> {
let witness = value.build_guest_input();
let serialized_witness = if crate::witness_use_legacy_mode() {
let legacy_witness = LegacyChunkWitness::from(witness);
to_rkyv_bytes::<RancorError>(&legacy_witness)?.into_vec()
} else {
super::encode_task_to_witness(&witness)?
};
Ok(ProvingTask {
identifier: value.identifier(),
fork_name: value.fork_name,
aggregated_proofs: Vec::new(),
serialized_witness: vec![to_rkyv_bytes::<RancorError>(&witness)?.to_vec()],
serialized_witness: vec![serialized_witness],
vk: Vec::new(),
})
}
@@ -84,7 +91,7 @@ impl ChunkProvingTask {
let num_txs = self
.block_witnesses
.iter()
.map(|b| b.transaction.len())
.map(|b| b.transactions.len())
.sum::<usize>();
let total_gas_used = self
.block_witnesses
@@ -119,33 +126,41 @@ impl ChunkProvingTask {
}
fn build_guest_input(&self) -> ChunkWitness {
ChunkWitness {
blocks: self.block_witnesses.to_vec(),
prev_msg_queue_hash: self.prev_msg_queue_hash,
fork_name: self.fork_name.to_lowercase().as_str().into(),
}
ChunkWitness::new(
0,
&self.block_witnesses,
self.prev_msg_queue_hash,
self.fork_name.to_lowercase().as_str().into(),
None,
)
}
fn insert_state(&mut self, node: sbv_primitives::Bytes) {
self.block_witnesses[0].states.push(node);
}
}
const MAX_FETCH_NODES_ATTEMPTS: usize = 15;
pub fn precheck_and_build_metadata(&self) -> Result<ChunkInfo> {
let witness = self.build_guest_input();
impl TryFromWithInterpreter<&mut ChunkProvingTask> for ChunkInfo {
fn try_from_with_interpret(
value: &mut ChunkProvingTask,
let ret = ChunkInfo::try_from(witness).map_err(|e| eyre::eyre!("{e}"))?;
Ok(ret)
}
/// this method checks the validity of the current task (there may be missing storage nodes)
/// and tries to fix it until everything is ok
#[deprecated]
pub fn prepare_task_via_interpret(
&mut self,
interpreter: impl ChunkInterpreter,
) -> eyre::Result<Self> {
) -> eyre::Result<()> {
use eyre::eyre;
let err_prefix = format!(
"metadata_with_prechecks for task_id={:?}",
value.identifier()
self.identifier()
);
if value.block_witnesses.is_empty() {
if self.block_witnesses.is_empty() {
return Err(eyre!(
"{err_prefix}: chunk should contain at least one block",
));
@@ -156,8 +171,10 @@ impl TryFromWithInterpreter<&mut ChunkProvingTask> for ChunkInfo {
let err_parse_re = regex::Regex::new(pattern)?;
let mut attempts = 0;
loop {
match execute(&value.build_guest_input()) {
Ok(chunk_info) => return Ok(chunk_info),
let witness = self.build_guest_input();
match execute(witness) {
Ok(_) => return Ok(()),
Err(e) => {
if let Some(caps) = err_parse_re.captures(&e) {
let hash = caps[2].to_string();
@@ -174,7 +191,7 @@ impl TryFromWithInterpreter<&mut ChunkProvingTask> for ChunkInfo {
hash.parse::<sbv_primitives::B256>().expect("should be hex");
let node = interpreter.try_fetch_storage_node(node_hash)?;
tracing::warn!("missing node fetched: {node}");
value.insert_state(node);
self.insert_state(node);
} else {
return Err(eyre!("{err_prefix}: {e}"));
}
@@ -183,3 +200,5 @@ impl TryFromWithInterpreter<&mut ChunkProvingTask> for ChunkInfo {
}
}
}
const MAX_FETCH_NODES_ATTEMPTS: usize = 15;

View File

@@ -1,5 +1,6 @@
use eyre::Result;
use sbv_primitives::{types::BlockWitness, Bytes, B256};
use sbv_core::BlockWitness;
use sbv_primitives::{Bytes, B256};
/// An interpreter which is critical in translating chunk data,
/// since we need to gather block witness and storage node data

File diff suppressed because one or more lines are too long

View File

@@ -1,10 +1,14 @@
#![allow(static_mut_refs)]
mod euclidv2;
use euclidv2::EuclidV2Verifier;
mod universal;
use eyre::Result;
use serde::{Deserialize, Serialize};
use std::{cell::OnceCell, path::Path, rc::Rc};
use std::{
collections::HashMap,
path::Path,
sync::{Arc, Mutex, OnceLock},
};
use universal::Verifier;
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum TaskType {
@@ -31,7 +35,7 @@ pub struct VKDump {
}
pub trait ProofVerifier {
fn verify(&self, task_type: TaskType, proof: Vec<u8>) -> Result<bool>;
fn verify(&self, task_type: TaskType, proof: &[u8]) -> Result<bool>;
fn dump_vk(&self, file: &Path);
}
@@ -43,36 +47,49 @@ pub struct CircuitConfig {
#[derive(Debug, Serialize, Deserialize)]
pub struct VerifierConfig {
pub high_version_circuit: CircuitConfig,
pub circuits: Vec<CircuitConfig>,
}
type HardForkName = String;
struct VerifierPair(HardForkName, Rc<Box<dyn ProofVerifier>>);
static mut VERIFIER_HIGH: OnceCell<VerifierPair> = OnceCell::new();
type VerifierType = Arc<Mutex<dyn ProofVerifier + Send>>;
static VERIFIERS: OnceLock<HashMap<HardForkName, VerifierType>> = OnceLock::new();
pub fn init(config: VerifierConfig) {
let verifier = EuclidV2Verifier::new(&config.high_version_circuit.assets_path);
unsafe {
VERIFIER_HIGH
.set(VerifierPair(
config.high_version_circuit.fork_name,
Rc::new(Box::new(verifier)),
))
.unwrap_unchecked();
let mut verifiers: HashMap<HardForkName, VerifierType> = Default::default();
for cfg in &config.circuits {
let canonical_fork_name = cfg.fork_name.to_lowercase();
let verifier = Verifier::new(&cfg.assets_path, canonical_fork_name.as_str().into());
let ret = verifiers.insert(canonical_fork_name, Arc::new(Mutex::new(verifier)));
assert!(
ret.is_none(),
"DO NOT init the same fork {} twice",
cfg.fork_name
);
tracing::info!("load verifier config for fork {}", cfg.fork_name);
}
let ret = VERIFIERS.set(verifiers).is_ok();
assert!(ret);
}
pub fn get_verifier(fork_name: &str) -> Result<Rc<Box<dyn ProofVerifier>>> {
unsafe {
if let Some(verifier) = VERIFIER_HIGH.get() {
if verifier.0 == fork_name {
return Ok(verifier.1.clone());
}
pub fn get_verifier(fork_name: &str) -> Result<Arc<Mutex<dyn ProofVerifier>>> {
if let Some(verifiers) = VERIFIERS.get() {
if let Some(verifier) = verifiers.get(fork_name) {
return Ok(verifier.clone());
}
Err(eyre::eyre!(
"failed to get verifier, key not found: {}, has {:?}",
fork_name,
verifiers.keys().collect::<Vec<_>>(),
))
} else {
Err(eyre::eyre!(
"failed to get verifier, not inited {}",
fork_name
))
}
Err(eyre::eyre!(
"failed to get verifier, key not found, {}",
fork_name
))
}
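
For orientation, a minimal usage sketch of the registry above (assuming `init` has already been called with the fork registered; `proof_bytes` is a hypothetical byte slice, not part of the diff):

```rust
// Minimal sketch: look up the verifier registered for a fork and verify a
// bundle proof. Assumes the surrounding module's items are in scope.
fn verify_bundle(proof_bytes: &[u8]) -> eyre::Result<bool> {
    let verifier = get_verifier("feynman")?;
    let guard = verifier
        .lock()
        .map_err(|_| eyre::eyre!("verifier mutex poisoned"))?;
    guard.verify(TaskType::Bundle, proof_bytes)
}
```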

View File

@@ -1,66 +0,0 @@
use super::{ProofVerifier, TaskType, VKDump};
use eyre::Result;
use crate::{
proofs::{AsRootProof, BatchProof, BundleProof, ChunkProof, IntoEvmProof},
utils::panic_catch,
};
use scroll_zkvm_verifier_euclid::verifier::{BatchVerifier, BundleVerifierEuclidV2, ChunkVerifier};
use std::{fs::File, path::Path};
pub struct EuclidV2Verifier {
chunk_verifier: ChunkVerifier,
batch_verifier: BatchVerifier,
bundle_verifier: BundleVerifierEuclidV2,
}
impl EuclidV2Verifier {
pub fn new(assets_dir: &str) -> Self {
let verifier_bin = Path::new(assets_dir).join("verifier.bin");
let config = Path::new(assets_dir).join("root-verifier-vm-config");
let exe = Path::new(assets_dir).join("root-verifier-committed-exe");
Self {
chunk_verifier: ChunkVerifier::setup(&config, &exe, &verifier_bin)
.expect("Setting up chunk verifier"),
batch_verifier: BatchVerifier::setup(&config, &exe, &verifier_bin)
.expect("Setting up batch verifier"),
bundle_verifier: BundleVerifierEuclidV2::setup(&config, &exe, &verifier_bin)
.expect("Setting up bundle verifier"),
}
}
}
impl ProofVerifier for EuclidV2Verifier {
fn verify(&self, task_type: super::TaskType, proof: Vec<u8>) -> Result<bool> {
panic_catch(|| match task_type {
TaskType::Chunk => {
let proof = serde_json::from_slice::<ChunkProof>(proof.as_slice()).unwrap();
self.chunk_verifier.verify_proof(proof.as_root_proof())
}
TaskType::Batch => {
let proof = serde_json::from_slice::<BatchProof>(proof.as_slice()).unwrap();
self.batch_verifier.verify_proof(proof.as_root_proof())
}
TaskType::Bundle => {
let proof = serde_json::from_slice::<BundleProof>(proof.as_slice()).unwrap();
self.bundle_verifier
.verify_proof_evm(&proof.into_evm_proof())
}
})
.map_err(|err_str: String| eyre::eyre!("{err_str}"))
}
fn dump_vk(&self, file: &Path) {
use base64::{prelude::BASE64_STANDARD, Engine};
let f = File::create(file).expect("Failed to open file to dump VK");
let dump = VKDump {
chunk_vk: BASE64_STANDARD.encode(self.chunk_verifier.get_app_vk()),
batch_vk: BASE64_STANDARD.encode(self.batch_verifier.get_app_vk()),
bundle_vk: BASE64_STANDARD.encode(self.bundle_verifier.get_app_vk()),
};
serde_json::to_writer(f, &dump).expect("Failed to dump VK");
}
}

View File

@@ -0,0 +1,61 @@
use super::{ProofVerifier, TaskType};
use eyre::Result;
use crate::{
proofs::{AsRootProof, BatchProof, BundleProof, ChunkProof, IntoEvmProof},
utils::panic_catch,
};
use scroll_zkvm_types::public_inputs::ForkName;
use scroll_zkvm_verifier::verifier::UniversalVerifier;
use std::path::Path;
pub struct Verifier {
verifier: UniversalVerifier,
fork: ForkName,
}
impl Verifier {
pub fn new(assets_dir: &str, fork: ForkName) -> Self {
let verifier_bin = Path::new(assets_dir);
Self {
verifier: UniversalVerifier::setup(verifier_bin).expect("Setting up chunk verifier"),
fork,
}
}
}
impl ProofVerifier for Verifier {
fn verify(&self, task_type: super::TaskType, proof: &[u8]) -> Result<bool> {
panic_catch(|| match task_type {
TaskType::Chunk => {
let proof = serde_json::from_slice::<ChunkProof>(proof).unwrap();
assert!(proof.pi_hash_check(self.fork));
self.verifier
.verify_stark_proof(proof.as_root_proof(), &proof.vk)
.unwrap()
}
TaskType::Batch => {
let proof = serde_json::from_slice::<BatchProof>(proof).unwrap();
assert!(proof.pi_hash_check(self.fork));
self.verifier
.verify_stark_proof(proof.as_root_proof(), &proof.vk)
.unwrap()
}
TaskType::Bundle => {
let proof = serde_json::from_slice::<BundleProof>(proof).unwrap();
assert!(proof.pi_hash_check(self.fork));
let vk = proof.vk.clone();
let evm_proof = proof.into_evm_proof();
self.verifier.verify_evm_proof(&evm_proof, &vk).unwrap()
}
})
.map(|_| true)
.map_err(|err_str: String| eyre::eyre!("{err_str}"))
}
fn dump_vk(&self, _file: &Path) {
panic!("dump vk has been deprecated");
}
}

View File

@@ -11,4 +11,5 @@ crate-type = ["cdylib"]
[dependencies]
libzkp = { path = "../libzkp" }
l2geth = { path = "../l2geth"}
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
tracing.workspace = true

View File

@@ -5,6 +5,47 @@ use std::ffi::{c_char, CString};
use libzkp::TaskType;
use utils::{c_char_to_str, c_char_to_vec};
use std::sync::OnceLock;
static LOG_SETTINGS: OnceLock<Result<(), String>> = OnceLock::new();
fn enable_dump() -> bool {
static ZKVM_DEBUG_DUMP: OnceLock<bool> = OnceLock::new();
*ZKVM_DEBUG_DUMP.get_or_init(|| {
std::env::var("ZKVM_DEBUG")
.or_else(|_| std::env::var("ZKVM_DEBUG_PROOF"))
.map(|s| s.to_lowercase() == "true")
.unwrap_or(false)
})
}
/// # Safety
#[no_mangle]
pub unsafe extern "C" fn init_tracing() {
use tracing_subscriber::filter::{EnvFilter, LevelFilter};
LOG_SETTINGS
.get_or_init(|| {
tracing_subscriber::fmt()
.with_env_filter(
EnvFilter::builder()
.with_default_directive(LevelFilter::INFO.into())
.from_env_lossy(),
)
.with_ansi(false)
.with_level(true)
.with_target(true)
.try_init()
.map_err(|e| format!("{e}"))?;
Ok(())
})
.clone()
.expect("Failed to initialize tracing subscriber");
tracing::info!("Tracing has been initialized normally");
}
/// # Safety
#[no_mangle]
pub unsafe extern "C" fn init_verifier(config: *const c_char) {
@@ -21,6 +62,7 @@ pub unsafe extern "C" fn init_l2geth(config: *const c_char) {
fn verify_proof(proof: *const c_char, fork_name: *const c_char, task_type: TaskType) -> c_char {
let fork_name_str = c_char_to_str(fork_name);
let proof_str = proof;
let proof = c_char_to_vec(proof);
match libzkp::verify_proof(proof, fork_name_str, task_type) {
@@ -28,7 +70,24 @@ fn verify_proof(proof: *const c_char, fork_name: *const c_char, task_type: TaskT
tracing::error!("{:?} verify failed, error: {:#}", task_type, e);
false as c_char
}
Ok(result) => result as c_char,
Ok(result) => {
if !result && enable_dump() {
use std::time::{SystemTime, UNIX_EPOCH};
// Dump the failing proof to a temporary file
let timestamp = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
let filename = format!("/tmp/proof_{}.json", timestamp);
let cstr = unsafe { std::ffi::CStr::from_ptr(proof_str) };
if let Err(e) = std::fs::write(&filename, cstr.to_bytes()) {
eprintln!("Failed to write proof to file {}: {}", filename, e);
} else {
println!("Dumped failed proof to {}", filename);
}
}
result as c_char
}
}
}
@@ -91,16 +150,14 @@ pub unsafe extern "C" fn gen_universal_task(
task_type: i32,
task: *const c_char,
fork_name: *const c_char,
expected_vk: *const u8,
expected_vk_len: usize,
) -> HandlingResult {
let mut interpreter = None;
let task_json = if task_type == TaskType::Chunk as i32 {
let pre_task_str = c_char_to_str(task);
let cli = l2geth::get_client();
match libzkp::checkout_chunk_task(pre_task_str, cli) {
Ok(str) => {
interpreter.replace(cli);
str
}
Ok(str) => str,
Err(e) => {
tracing::error!("gen_universal_task failed at pre interpret step, error: {e}");
return failed_handling_result();
@@ -109,10 +166,17 @@ pub unsafe extern "C" fn gen_universal_task(
} else {
c_char_to_str(task).to_string()
};
let ret =
libzkp::gen_universal_task(task_type, &task_json, c_char_to_str(fork_name), interpreter);
if let Ok((pi_hash, task_json, meta_json)) = ret {
let expected_vk = if expected_vk_len > 0 {
std::slice::from_raw_parts(expected_vk, expected_vk_len)
} else {
&[]
};
let ret =
libzkp::gen_universal_task(task_type, &task_json, c_char_to_str(fork_name), expected_vk);
if let Ok((pi_hash, meta_json, task_json)) = ret {
let expected_pi_hash = pi_hash.0.map(|byte| byte as c_char);
HandlingResult {
ok: true as c_char,
@@ -121,6 +185,22 @@ pub unsafe extern "C" fn gen_universal_task(
expected_pi_hash,
}
} else {
if enable_dump() {
use std::time::{SystemTime, UNIX_EPOCH};
// Dump the failing task to a temporary file
let timestamp = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap_or_default()
.as_secs();
let c_str = unsafe { std::ffi::CStr::from_ptr(fork_name) };
let filename = format!("/tmp/task_{}_{}.json", c_str.to_str().unwrap(), timestamp);
if let Err(e) = std::fs::write(&filename, task_json.as_bytes()) {
eprintln!("Failed to write task to file {}: {}", filename, e);
} else {
println!("Dumped failed task to {}", filename);
}
}
tracing::error!("gen_universal_task failed, error: {:#}", ret.unwrap_err());
failed_handling_result()
}
@@ -165,3 +245,10 @@ pub unsafe extern "C" fn release_string(ptr: *mut c_char) {
let _ = CString::from_raw(ptr);
}
}
/// # Safety
#[no_mangle]
pub unsafe extern "C" fn set_dynamic_feature(feats: *const c_char) {
let feats_str = c_char_to_str(feats);
libzkp::set_dynamic_feature(feats_str);
}

View File

@@ -1,14 +1,14 @@
[package]
name = "prover"
version = "0.1.0"
edition = "2021"
version.workspace = true
edition.workspace = true
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
scroll-zkvm-types.workspace = true
scroll-zkvm-prover-euclid.workspace = true
scroll-proving-sdk = { git = "https://github.com/scroll-tech/scroll-proving-sdk.git", branch = "refactor/scroll" }
scroll-zkvm-prover.workspace = true
scroll-proving-sdk = { git = "https://github.com/scroll-tech/scroll-proving-sdk.git", rev = "4c36ab2" }
serde.workspace = true
serde_json.workspace = true
once_cell.workspace =true
@@ -17,8 +17,9 @@ tiny-keccak = { workspace = true, features = ["sha3", "keccak"] }
eyre.workspace = true
futures = "0.3.30"
futures-util = "0.3"
reqwest = { version = "0.12.4", features = ["gzip"] }
reqwest = { version = "0.12.4", features = ["gzip", "stream"] }
reqwest-middleware = "0.3"
reqwest-retry = "0.5"
hex = "0.4.3"
@@ -30,5 +31,9 @@ sled = "0.34.7"
http = "1.1.0"
clap = { version = "4.5", features = ["derive"] }
ctor = "0.2.8"
url = "2.5.4"
url = { version = "2.5.4", features = ["serde",] }
serde_bytes = "0.11.15"
[features]
default = []
cuda = ["scroll-zkvm-prover/cuda"]

View File

@@ -0,0 +1,7 @@
{
"feynman": {
"b68fdc3f28a5ce006280980df70cd3447e56913e5bca6054603ba85f0794c23a6618ea25a7991845bbc5fd571670ee47379ba31ace92d345bca59702a0d4112d": "https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/0.5.2/chunk/",
"9a3f66370f11e3303f1a1248921025104e83253efea43a70d221cf4e15fc145bf2be2f4468d1ac4a70e7682babb1c60417e21c7633d4b55b58f44703ec82b05a": "https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/0.5.2/batch/",
"1f8627277e1c1f6e1cc70c03e6fde06929e5ea27ca5b1d56e23b235dfeda282e22c0e5294bcb1b3a9def836f8d0f18612a9860629b9497292976ca11844b7e73": "https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/0.5.2/bundle/"
}
}

View File

@@ -2,12 +2,13 @@ mod prover;
mod types;
mod zk_circuits_handler;
use clap::{ArgAction, Parser};
use clap::{ArgAction, Parser, Subcommand};
use prover::{LocalProver, LocalProverConfig};
use scroll_proving_sdk::{
prover::ProverBuilder,
prover::{types::ProofType, ProverBuilder},
utils::{get_version, init_tracing},
};
use std::{fs::File, io::BufReader, path::Path};
#[derive(Parser, Debug)]
#[command(disable_version_flag = true)]
@@ -16,6 +17,9 @@ struct Args {
#[arg(long = "config", default_value = "conf/config.json")]
config_file: String,
#[arg(long = "forkname")]
fork_name: Option<String>,
/// Version of this prover
#[arg(short, long, action = ArgAction::SetTrue)]
version: bool,
@@ -23,6 +27,24 @@ struct Args {
/// Path of log file
#[arg(long = "log.file")]
log_file: Option<String>,
#[command(subcommand)]
command: Option<Commands>,
}
#[derive(Subcommand, Debug)]
enum Commands {
Handle {
/// path to the task-set file to handle
task_path: String,
},
}
#[derive(Debug, serde::Deserialize)]
struct HandleSet {
chunks: Vec<String>,
batches: Vec<String>,
bundles: Vec<String>,
}
#[tokio::main]
@@ -38,13 +60,52 @@ async fn main() -> eyre::Result<()> {
let cfg = LocalProverConfig::from_file(args.config_file)?;
let sdk_config = cfg.sdk_config.clone();
let local_prover = LocalProver::new(cfg);
let prover = ProverBuilder::new(sdk_config, local_prover)
.build()
.await
.map_err(|e| eyre::eyre!("build prover fail: {e}"))?;
let local_prover = LocalProver::new(cfg.clone());
prover.run().await;
match args.command {
Some(Commands::Handle { task_path }) => {
let file = File::open(Path::new(&task_path))?;
let reader = BufReader::new(file);
let handle_set: HandleSet = serde_json::from_reader(reader)?;
let prover = ProverBuilder::new(sdk_config, local_prover)
.build()
.await
.map_err(|e| eyre::eyre!("build prover fail: {e}"))?;
let prover = std::sync::Arc::new(prover);
println!("Handling task set 1: chunks ...");
assert!(
prover
.clone()
.one_shot(&handle_set.chunks, ProofType::Chunk)
.await
);
println!("Done! Handling task set 2: batches ...");
assert!(
prover
.clone()
.one_shot(&handle_set.batches, ProofType::Batch)
.await
);
println!("Done! Handling task set 3: bundles ...");
assert!(
prover
.clone()
.one_shot(&handle_set.bundles, ProofType::Bundle)
.await
);
println!("All done!");
}
None => {
let prover = ProverBuilder::new(sdk_config, local_prover)
.build()
.await
.map_err(|e| eyre::eyre!("build prover fail: {e}"))?;
prover.run().await;
}
}
Ok(())
}
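
For illustration, the task-set file passed to the `handle` subcommand deserializes into `HandleSet`; a minimal sketch follows (the string entries are placeholders whose content is whatever `one_shot` expects for the respective proof type):

```rust
// Minimal sketch: shape of a task-set file as consumed by the `handle`
// subcommand. Entry values below are placeholders.
fn parse_example_task_set() -> eyre::Result<HandleSet> {
    let handle_set: HandleSet = serde_json::from_str(
        r#"{
            "chunks":  ["<chunk task>"],
            "batches": ["<batch task>"],
            "bundles": ["<bundle task>"]
        }"#,
    )?;
    Ok(handle_set)
}
```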

View File

@@ -1,6 +1,5 @@
use crate::zk_circuits_handler::{euclidV2::EuclidV2Handler, CircuitsHandler};
use crate::zk_circuits_handler::{universal::UniversalHandler, CircuitsHandler};
use async_trait::async_trait;
use base64::{prelude::BASE64_STANDARD, Engine};
use eyre::Result;
use scroll_proving_sdk::{
config::Config as SdkConfig,
@@ -9,6 +8,7 @@ use scroll_proving_sdk::{
GetVkRequest, GetVkResponse, ProveRequest, ProveResponse, QueryTaskRequest,
QueryTaskResponse, TaskStatus,
},
types::ProofType,
ProvingService,
},
};
@@ -16,11 +16,111 @@ use serde::{Deserialize, Serialize};
use std::{
collections::HashMap,
fs::File,
sync::Arc,
path::{Path, PathBuf},
sync::{Arc, LazyLock},
time::{SystemTime, UNIX_EPOCH},
};
use tokio::{runtime::Handle, sync::Mutex, task::JoinHandle};
#[derive(Clone, Serialize, Deserialize)]
pub struct AssetsLocationData {
/// the base url used to form the download url for an asset; MUST HAVE A TRAILING SLASH
pub base_url: url::Url,
#[serde(default)]
/// an alternate url for a specified vk
pub asset_detours: HashMap<String, url::Url>,
}
impl AssetsLocationData {
pub fn gen_asset_url(&self, vk_as_path: &str, proof_type: ProofType) -> Result<url::Url> {
Ok(self.base_url.join(
match proof_type {
ProofType::Chunk => format!("chunk/{vk_as_path}/"),
ProofType::Batch => format!("batch/{vk_as_path}/"),
ProofType::Bundle => format!("bundle/{vk_as_path}/"),
t => eyre::bail!("unrecognized proof type: {}", t as u8),
}
.as_str(),
)?)
}
pub fn validate(&self) -> Result<()> {
if !self.base_url.path().ends_with('/') {
eyre::bail!(
"base_url must have a trailing slash, got: {}",
self.base_url
);
}
Ok(())
}
pub async fn get_asset(
&self,
vk: &str,
url_base: &url::Url,
base_path: impl AsRef<Path>,
) -> Result<PathBuf> {
let download_files = ["app.vmexe", "openvm.toml"];
// Step 1: Create a local path for storage
let storage_path = base_path.as_ref().join(vk);
std::fs::create_dir_all(&storage_path)?;
// Step 2 & 3: Download each file if needed
let client = reqwest::Client::new();
for filename in download_files.iter() {
let local_file_path = storage_path.join(filename);
let download_url = url_base.join(filename)?;
// Check if file already exists
if local_file_path.exists() {
// Get file metadata to check size
if let Ok(metadata) = std::fs::metadata(&local_file_path) {
// Make a HEAD request to get remote file size
if let Ok(head_resp) = client.head(download_url.clone()).send().await {
if let Some(content_length) = head_resp.headers().get("content-length") {
if let Ok(remote_size) =
content_length.to_str().unwrap_or("0").parse::<u64>()
{
// If sizes match, skip download
if metadata.len() == remote_size {
println!("File {} already exists with matching size, skipping download", filename);
continue;
}
}
}
}
}
}
println!("Downloading {} from {}", filename, download_url);
let response = client.get(download_url).send().await?;
if !response.status().is_success() {
eyre::bail!(
"Failed to download {}: HTTP status {}",
filename,
response.status()
);
}
// Stream the content directly to file instead of loading into memory
let mut file = std::fs::File::create(&local_file_path)?;
let mut stream = response.bytes_stream();
use futures_util::StreamExt;
while let Some(chunk) = stream.next().await {
std::io::Write::write_all(&mut file, &chunk?)?;
}
}
// Step 4: Return the storage path
Ok(storage_path)
}
}
#[derive(Clone, Serialize, Deserialize)]
pub struct LocalProverConfig {
pub sdk_config: SdkConfig,
@@ -44,7 +144,14 @@ impl LocalProverConfig {
#[derive(Clone, Serialize, Deserialize)]
pub struct CircuitConfig {
pub hard_fork_name: String,
/// The path to save assets for a specified hard fork phase
pub workspace_path: String,
#[serde(flatten)]
/// The location data for dynamic loading
pub location_data: AssetsLocationData,
/// cached vk value to save some initial cost, for debugging only
#[serde(default)]
pub vks: HashMap<ProofType, String>,
}
pub struct LocalProver {
@@ -52,7 +159,7 @@ pub struct LocalProver {
next_task_id: u64,
current_task: Option<JoinHandle<Result<String>>>,
active_handler: Option<(String, Arc<dyn CircuitsHandler>)>,
handlers: HashMap<String, Arc<dyn CircuitsHandler>>,
}
#[async_trait]
@@ -60,27 +167,15 @@ impl ProvingService for LocalProver {
fn is_local(&self) -> bool {
true
}
async fn get_vks(&self, req: GetVkRequest) -> GetVkResponse {
let mut vks = vec![];
for hard_fork_name in self.config.circuits.keys() {
let handler = self.new_handler(hard_fork_name);
for proof_type in &req.proof_types {
let vk = handler.get_vk(*proof_type).await;
if let Some(vk) = vk {
vks.push(BASE64_STANDARD.encode(vk));
}
}
async fn get_vks(&self, _: GetVkRequest) -> GetVkResponse {
// get vk has been deprecated in new prover with dynamic asset loading scheme
GetVkResponse {
vks: vec![],
error: None,
}
GetVkResponse { vks, error: None }
}
async fn prove(&mut self, req: ProveRequest) -> ProveResponse {
self.set_active_handler(&req.hard_fork_name);
match self
.do_prove(req, self.active_handler.as_ref().unwrap().1.clone())
.await
{
match self.do_prove(req).await {
Ok(resp) => resp,
Err(e) => ProveResponse {
status: TaskStatus::Failed,
@@ -131,29 +226,91 @@ impl ProvingService for LocalProver {
}
}
static GLOBAL_ASSET_URLS: LazyLock<HashMap<String, HashMap<String, url::Url>>> =
LazyLock::new(|| {
const ASSETS_JSON: &str = include_str!("../assets_url_preset.json");
serde_json::from_str(ASSETS_JSON).expect("Failed to parse assets_url_preset.json")
});
impl LocalProver {
pub fn new(config: LocalProverConfig) -> Self {
pub fn new(mut config: LocalProverConfig) -> Self {
for (fork_name, circuit_config) in config.circuits.iter_mut() {
// validate each base url
circuit_config.location_data.validate().unwrap();
let mut template_url_mapping = GLOBAL_ASSET_URLS
.get(&fork_name.to_lowercase())
.cloned()
.unwrap_or_default();
// apply default settings in template
for (key, url) in circuit_config.location_data.asset_detours.drain() {
template_url_mapping.insert(key, url);
}
circuit_config.location_data.asset_detours = template_url_mapping;
// validate each detours url
for url in circuit_config.location_data.asset_detours.values() {
assert!(
url.path().ends_with('/'),
"url {} must be end with /",
url.as_str()
);
}
}
Self {
config,
next_task_id: 0,
current_task: None,
active_handler: None,
handlers: HashMap::new(),
}
}
async fn do_prove(
&mut self,
req: ProveRequest,
handler: Arc<dyn CircuitsHandler>,
) -> Result<ProveResponse> {
async fn do_prove(&mut self, req: ProveRequest) -> Result<ProveResponse> {
self.next_task_id += 1;
let duration = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();
let created_at = duration.as_secs() as f64 + duration.subsec_nanos() as f64 * 1e-9;
let req_clone = req.clone();
let prover_task = UniversalHandler::get_task_from_input(&req.input)?;
let vk = hex::encode(&prover_task.vk);
let handler = if let Some(handler) = self.handlers.get(&vk) {
handler.clone()
} else {
let base_config = self
.config
.circuits
.get(&req.hard_fork_name)
.ok_or_else(|| {
eyre::eyre!(
"coordinator sent unexpected forkname {}",
req.hard_fork_name
)
})?;
let url_base = if let Some(url) = base_config.location_data.asset_detours.get(&vk) {
url.clone()
} else {
base_config
.location_data
.gen_asset_url(&vk, req.proof_type)?
};
let asset_path = base_config
.location_data
.get_asset(&vk, &url_base, &base_config.workspace_path)
.await?;
let circuits_handler = Arc::new(Mutex::new(UniversalHandler::new(
&asset_path,
req.proof_type,
)?));
self.handlers.insert(vk, circuits_handler.clone());
circuits_handler
};
let handle = Handle::current();
let task_handle =
tokio::task::spawn_blocking(move || handle.block_on(handler.get_proof_data(req_clone)));
let is_evm = req.proof_type == ProofType::Bundle;
let task_handle = tokio::task::spawn_blocking(move || {
handle.block_on(handler.get_proof_data(&prover_task, is_evm))
});
self.current_task = Some(task_handle);
Ok(ProveResponse {
@@ -167,26 +324,4 @@ impl LocalProver {
..Default::default()
})
}
fn set_active_handler(&mut self, hard_fork_name: &str) {
if let Some(handler) = &self.active_handler {
if handler.0 == hard_fork_name {
return;
}
}
self.active_handler = Some((hard_fork_name.to_string(), self.new_handler(hard_fork_name)));
}
fn new_handler(&self, hard_fork_name: &str) -> Arc<dyn CircuitsHandler> {
// if we got assigned a task for an unknown hard fork, there is something wrong in the
// coordinator
let config = self.config.circuits.get(hard_fork_name).unwrap();
match hard_fork_name {
"euclidV2" => Arc::new(Arc::new(Mutex::new(EuclidV2Handler::new(
&config.workspace_path,
)))) as Arc<dyn CircuitsHandler>,
_ => unreachable!(),
}
}
}
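
A minimal sketch of how `do_prove` above resolves the asset location for a task's vk (the base URL and vk are placeholders; assumes the types defined in this file are in scope):

```rust
// Minimal sketch: a detour entry keyed by the vk hex takes precedence,
// otherwise the URL is generated as `<base_url>/chunk/<vk>/` (or batch/bundle).
fn resolve_chunk_url(loc: &AssetsLocationData, vk_hex: &str) -> eyre::Result<url::Url> {
    if let Some(detour) = loc.asset_detours.get(vk_hex) {
        return Ok(detour.clone());
    }
    loc.gen_asset_url(vk_hex, ProofType::Chunk)
}
```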

View File

@@ -1,3 +1,5 @@
#![allow(dead_code)]
use serde::{Deserialize, Deserializer, Serialize, Serializer};
#[derive(Serialize, Deserialize, Default)]

View File

@@ -1,67 +1,13 @@
//pub mod euclid;
#[allow(non_snake_case)]
pub mod euclidV2;
pub mod universal;
use async_trait::async_trait;
use eyre::Result;
use scroll_proving_sdk::prover::{proving_service::ProveRequest, ProofType};
use scroll_zkvm_prover_euclid::ProverConfig;
use std::path::Path;
use scroll_zkvm_types::ProvingTask;
#[async_trait]
pub trait CircuitsHandler: Sync + Send {
async fn get_vk(&self, task_type: ProofType) -> Option<Vec<u8>>;
async fn get_proof_data(&self, prove_request: ProveRequest) -> Result<String>;
}
#[derive(Clone, Copy)]
pub(crate) enum Phase {
EuclidV2,
}
impl Phase {
pub fn phase_spec_chunk(&self, workspace_path: &Path) -> ProverConfig {
let dir_cache = Some(workspace_path.join("cache"));
let path_app_exe = workspace_path.join("chunk/app.vmexe");
let path_app_config = workspace_path.join("chunk/openvm.toml");
let segment_len = Some((1 << 22) - 100);
ProverConfig {
dir_cache,
path_app_config,
path_app_exe,
segment_len,
..Default::default()
}
}
pub fn phase_spec_batch(&self, workspace_path: &Path) -> ProverConfig {
let dir_cache = Some(workspace_path.join("cache"));
let path_app_exe = workspace_path.join("batch/app.vmexe");
let path_app_config = workspace_path.join("batch/openvm.toml");
let segment_len = Some((1 << 22) - 100);
ProverConfig {
dir_cache,
path_app_config,
path_app_exe,
segment_len,
..Default::default()
}
}
pub fn phase_spec_bundle(&self, workspace_path: &Path) -> ProverConfig {
let dir_cache = Some(workspace_path.join("cache"));
let path_app_config = workspace_path.join("bundle/openvm.toml");
let segment_len = Some((1 << 22) - 100);
match self {
Phase::EuclidV2 => ProverConfig {
dir_cache,
path_app_config,
segment_len,
path_app_exe: workspace_path.join("bundle/app.vmexe"),
..Default::default()
},
}
}
async fn get_proof_data(&self, u_task: &ProvingTask, need_snark: bool) -> Result<String>;
}

View File

@@ -1,144 +0,0 @@
use std::{path::Path, sync::Arc};
use super::CircuitsHandler;
use anyhow::{anyhow, Result};
use async_trait::async_trait;
use scroll_proving_sdk::prover::{proving_service::ProveRequest, ProofType};
use scroll_zkvm_prover_euclid::{
task::{batch::BatchProvingTask, bundle::BundleProvingTask, chunk::ChunkProvingTask},
BatchProver, BundleProverEuclidV1, ChunkProver, ProverConfig,
};
use tokio::sync::Mutex;
pub struct EuclidHandler {
chunk_prover: ChunkProver,
batch_prover: BatchProver,
bundle_prover: BundleProverEuclidV1,
}
#[derive(Clone, Copy)]
pub(crate) enum Phase {
EuclidV1,
EuclidV2,
}
impl Phase {
pub fn as_str(&self) -> &str {
match self {
Phase::EuclidV1 => "euclidv1",
Phase::EuclidV2 => "euclidv2",
}
}
pub fn phase_spec_chunk(&self, workspace_path: &Path) -> ProverConfig {
let dir_cache = Some(workspace_path.join("cache"));
let path_app_exe = workspace_path.join("chunk/app.vmexe");
let path_app_config = workspace_path.join("chunk/openvm.toml");
let segment_len = Some((1 << 22) - 100);
ProverConfig {
dir_cache,
path_app_config,
path_app_exe,
segment_len,
..Default::default()
}
}
pub fn phase_spec_batch(&self, workspace_path: &Path) -> ProverConfig {
let dir_cache = Some(workspace_path.join("cache"));
let path_app_exe = workspace_path.join("batch/app.vmexe");
let path_app_config = workspace_path.join("batch/openvm.toml");
let segment_len = Some((1 << 22) - 100);
ProverConfig {
dir_cache,
path_app_config,
path_app_exe,
segment_len,
..Default::default()
}
}
pub fn phase_spec_bundle(&self, workspace_path: &Path) -> ProverConfig {
let dir_cache = Some(workspace_path.join("cache"));
let path_app_config = workspace_path.join("bundle/openvm.toml");
let segment_len = Some((1 << 22) - 100);
match self {
Phase::EuclidV1 => ProverConfig {
dir_cache,
path_app_config,
segment_len,
path_app_exe: workspace_path.join("bundle/app_euclidv1.vmexe"),
..Default::default()
},
Phase::EuclidV2 => ProverConfig {
dir_cache,
path_app_config,
segment_len,
path_app_exe: workspace_path.join("bundle/app.vmexe"),
..Default::default()
},
}
}
}
unsafe impl Send for EuclidHandler {}
impl EuclidHandler {
pub fn new(workspace_path: &str) -> Self {
let p = Phase::EuclidV1;
let workspace_path = Path::new(workspace_path);
let chunk_prover = ChunkProver::setup(p.phase_spec_chunk(workspace_path))
.expect("Failed to setup chunk prover");
let batch_prover = BatchProver::setup(p.phase_spec_batch(workspace_path))
.expect("Failed to setup batch prover");
let bundle_prover = BundleProverEuclidV1::setup(p.phase_spec_bundle(workspace_path))
.expect("Failed to setup bundle prover");
Self {
chunk_prover,
batch_prover,
bundle_prover,
}
}
}
#[async_trait]
impl CircuitsHandler for Arc<Mutex<EuclidHandler>> {
async fn get_vk(&self, task_type: ProofType) -> Option<Vec<u8>> {
Some(match task_type {
ProofType::Chunk => self.try_lock().unwrap().chunk_prover.get_app_vk(),
ProofType::Batch => self.try_lock().unwrap().batch_prover.get_app_vk(),
ProofType::Bundle => self.try_lock().unwrap().bundle_prover.get_app_vk(),
_ => unreachable!("Unsupported proof type"),
})
}
async fn get_proof_data(&self, prove_request: ProveRequest) -> Result<String> {
match prove_request.proof_type {
ProofType::Chunk => {
let task: ChunkProvingTask = serde_json::from_str(&prove_request.input)?;
let proof = self.try_lock().unwrap().chunk_prover.gen_proof(&task)?;
Ok(serde_json::to_string(&proof)?)
}
ProofType::Batch => {
let task: BatchProvingTask = serde_json::from_str(&prove_request.input)?;
let proof = self.try_lock().unwrap().batch_prover.gen_proof(&task)?;
Ok(serde_json::to_string(&proof)?)
}
ProofType::Bundle => {
let batch_proofs: BundleProvingTask = serde_json::from_str(&prove_request.input)?;
let proof = self
.try_lock()
.unwrap()
.bundle_prover
.gen_proof_evm(&batch_proofs)?;
Ok(serde_json::to_string(&proof)?)
}
_ => Err(anyhow!("Unsupported proof type")),
}
}
}

View File

@@ -1,73 +0,0 @@
use std::{path::Path, sync::Arc};
use super::{CircuitsHandler, Phase};
use async_trait::async_trait;
use eyre::Result;
use scroll_proving_sdk::prover::{proving_service::ProveRequest, ProofType};
use scroll_zkvm_prover_euclid::{BatchProver, BundleProverEuclidV2, ChunkProver};
use scroll_zkvm_types::ProvingTask;
use tokio::sync::Mutex;
pub struct EuclidV2Handler {
chunk_prover: ChunkProver,
batch_prover: BatchProver,
bundle_prover: BundleProverEuclidV2,
}
unsafe impl Send for EuclidV2Handler {}
impl EuclidV2Handler {
pub fn new(workspace_path: &str) -> Self {
let p = Phase::EuclidV2;
let workspace_path = Path::new(workspace_path);
let chunk_prover = ChunkProver::setup(p.phase_spec_chunk(workspace_path))
.expect("Failed to setup chunk prover");
let batch_prover = BatchProver::setup(p.phase_spec_batch(workspace_path))
.expect("Failed to setup batch prover");
let bundle_prover = BundleProverEuclidV2::setup(p.phase_spec_bundle(workspace_path))
.expect("Failed to setup bundle prover");
Self {
chunk_prover,
batch_prover,
bundle_prover,
}
}
}
#[async_trait]
impl CircuitsHandler for Arc<Mutex<EuclidV2Handler>> {
async fn get_vk(&self, task_type: ProofType) -> Option<Vec<u8>> {
Some(match task_type {
ProofType::Chunk => self.try_lock().unwrap().chunk_prover.get_app_vk(),
ProofType::Batch => self.try_lock().unwrap().batch_prover.get_app_vk(),
ProofType::Bundle => self.try_lock().unwrap().bundle_prover.get_app_vk(),
_ => unreachable!("Unsupported proof type"),
})
}
async fn get_proof_data(&self, prove_request: ProveRequest) -> Result<String> {
let u_task: ProvingTask = serde_json::from_str(&prove_request.input)?;
let proof = match prove_request.proof_type {
ProofType::Chunk => self
.try_lock()
.unwrap()
.chunk_prover
.gen_proof_universal(&u_task, false)?,
ProofType::Batch => self
.try_lock()
.unwrap()
.batch_prover
.gen_proof_universal(&u_task, false)?,
ProofType::Bundle => self
.try_lock()
.unwrap()
.bundle_prover
.gen_proof_universal(&u_task, true)?,
_ => return Err(eyre::eyre!("Unsupported proof type")),
};
Ok(serde_json::to_string(&proof)?)
}
}

View File

@@ -0,0 +1,55 @@
use std::path::Path;
use super::CircuitsHandler;
use async_trait::async_trait;
use eyre::Result;
use scroll_proving_sdk::prover::ProofType;
use scroll_zkvm_prover::{Prover, ProverConfig};
use scroll_zkvm_types::ProvingTask;
use tokio::sync::Mutex;
pub struct UniversalHandler {
prover: Prover,
}
/// Safe for current usage as a `CircuitsHandler`: the instance is protected inside a Mutex and is
/// NEVER extracted via `into_inner`.
unsafe impl Send for UniversalHandler {}
impl UniversalHandler {
pub fn new(workspace_path: impl AsRef<Path>, _proof_type: ProofType) -> Result<Self> {
let path_app_exe = workspace_path.as_ref().join("app.vmexe");
let path_app_config = workspace_path.as_ref().join("openvm.toml");
let segment_len = Some((1 << 22) - 100);
let config = ProverConfig {
path_app_config,
path_app_exe,
segment_len,
};
let prover = Prover::setup(config, None)?;
Ok(Self { prover })
}
/// get_prover returns the inner prover. Later we will replace chunk/batch/bundle_prover with the
/// universal prover; before then, bundle_prover serves as the representative one.
pub fn get_prover(&mut self) -> &mut Prover {
&mut self.prover
}
pub fn get_task_from_input(input: &str) -> Result<ProvingTask> {
Ok(serde_json::from_str(input)?)
}
}
#[async_trait]
impl CircuitsHandler for Mutex<UniversalHandler> {
async fn get_proof_data(&self, u_task: &ProvingTask, need_snark: bool) -> Result<String> {
let mut handler_self = self.lock().await;
let proof = handler_self
.get_prover()
.gen_proof_universal(u_task, need_snark)?;
Ok(serde_json::to_string(&proof)?)
}
}
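
For context, a minimal sketch of how a `UniversalHandler` is driven once the assets for a vk have been downloaded (mirrors `LocalProver::do_prove`; the asset directory and input are placeholders):

```rust
// Minimal sketch: build a handler from a downloaded asset directory and
// produce a proof for a deserialized task. `need_snark` is true only for
// bundle proofs, which are verified on L1.
async fn prove_once(asset_dir: &Path, input: &str) -> Result<String> {
    let task = UniversalHandler::get_task_from_input(input)?;
    let handler = Mutex::new(UniversalHandler::new(asset_dir, ProofType::Chunk)?);
    handler.get_proof_data(&task, false).await
}
```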

1
database/.gitignore vendored
View File

@@ -1,2 +1,3 @@
/build/bin
.idea
localdbg

View File

@@ -1083,9 +1083,7 @@ github.com/holiman/uint256 v1.2.0/go.mod h1:y4ga/t+u+Xwd7CpDgZESaRcWy0I7XMlTMA25
github.com/holiman/uint256 v1.2.4/go.mod h1:EOMSn4q6Nyt9P6efbI3bueV4e1b3dGlUCXeiRV4ng7E=
github.com/holiman/uint256 v1.3.0/go.mod h1:EOMSn4q6Nyt9P6efbI3bueV4e1b3dGlUCXeiRV4ng7E=
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
github.com/huin/goupnp v1.0.2/go.mod h1:0dxJBVBHqTMjIUMkESDTNgOOx/Mw5wYIfyFmdzSamkM=
github.com/huin/goutil v0.0.0-20170803182201-1ca381bf3150 h1:vlNjIqmUZ9CMAWsbURYl3a6wZbw7q5RHVvlXTNS/Bs8=
github.com/huin/goutil v0.0.0-20170803182201-1ca381bf3150/go.mod h1:PpLOETDnJ0o3iZrZfqZzyLl6l7F3c6L1oWn7OICBi6o=
github.com/hydrogen18/memlistener v1.0.0/go.mod h1:qEIFzExnS6016fRpRfxrExeVn2gbClQA99gQhnIcdhE=
github.com/iancoleman/strcase v0.3.0 h1:nTXanmYxhfFAMjZL34Ov6gkzEsSJZ5DbhxWjvSASxEI=
github.com/iancoleman/strcase v0.3.0/go.mod h1:iwCmte+B7n89clKwxIoIXy/HfoL7AsD47ZCWhYzw7ho=
@@ -1122,7 +1120,6 @@ github.com/intel/goresctrl v0.3.0 h1:K2D3GOzihV7xSBedGxONSlaw/un1LZgWsc9IfqipN4c
github.com/intel/goresctrl v0.3.0/go.mod h1:fdz3mD85cmP9sHD8JUlrNWAxvwM86CrbmVXltEKd7zk=
github.com/iris-contrib/jade v1.1.4/go.mod h1:EDqR+ur9piDl6DUgs6qRrlfzmlx/D5UybogqrXvJTBE=
github.com/iris-contrib/schema v0.0.6/go.mod h1:iYszG0IOsuIsfzjymw1kMzTL8YQcCWlm65f3wX8J5iA=
github.com/jackpal/go-nat-pmp v1.0.2-0.20160603034137-1fa385a6f458/go.mod h1:QPH045xvCAeXUZOxsnwmrtiCoxIr9eob+4orBN1SBKc=
github.com/jedisct1/go-minisign v0.0.0-20190909160543-45766022959e h1:UvSe12bq+Uj2hWd8aOlwPmoZ+CITRFrdit+sDGfAg8U=
github.com/jedisct1/go-minisign v0.0.0-20190909160543-45766022959e/go.mod h1:G1CVv03EnqU1wYL2dFwXxW2An0az9JTl/ZsqXQeBlkU=
github.com/jedisct1/go-minisign v0.0.0-20230811132847-661be99b8267/go.mod h1:h1nSAbGFqGVzn6Jyl1R/iCcBUHN4g+gW1u9CoBTrb9E=
@@ -1228,7 +1225,6 @@ github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaO
github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-colorable v0.1.6/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.7/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.8/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.9/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.11/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4=
github.com/mattn/go-colorable v0.1.12/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4=
@@ -1240,7 +1236,6 @@ github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNx
github.com/mattn/go-isatty v0.0.9/go.mod h1:YNRxwqDuOph6SZLI9vUUz6OYw3QyUt7WiY2yME+cCiQ=
github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcMEpPG5Rm84=
github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-isatty v0.0.17/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-runewidth v0.0.3/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mattn/go-runewidth v0.0.13/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
@@ -1413,6 +1408,7 @@ github.com/scroll-tech/da-codec v0.1.3-0.20250609113414-f33adf0904bd h1:NUol+dPt
github.com/scroll-tech/da-codec v0.1.3-0.20250609113414-f33adf0904bd/go.mod h1:gz5x3CsLy5htNTbv4PWRPBU9nSAujfx1U2XtFcXoFuk=
github.com/scroll-tech/da-codec v0.1.3-0.20250609154559-8935de62c148 h1:cyK1ifU2fRoMl8YWR9LOsZK4RvJnlG3RODgakj5I8VY=
github.com/scroll-tech/da-codec v0.1.3-0.20250609154559-8935de62c148/go.mod h1:gz5x3CsLy5htNTbv4PWRPBU9nSAujfx1U2XtFcXoFuk=
github.com/scroll-tech/da-codec v0.1.3-0.20250626091118-58b899494da6/go.mod h1:Z6kN5u2khPhiqHyk172kGB7o38bH/nj7Ilrb/46wZGg=
github.com/scroll-tech/go-ethereum v1.10.14-0.20240607130425-e2becce6a1a4/go.mod h1:byf/mZ8jLYUCnUePTicjJWn+RvKdxDn7buS6glTnMwQ=
github.com/scroll-tech/go-ethereum v1.10.14-0.20240821074444-b3fa00861e5e/go.mod h1:swB5NSp8pKNDuYsTxfR08bHS6L56i119PBx8fxvV8Cs=
github.com/scroll-tech/go-ethereum v1.10.14-0.20241010064814-3d88e870ae22/go.mod h1:r9FwtxCtybMkTbWYCyBuevT9TW3zHmOTHqD082Uh+Oo=
@@ -1454,7 +1450,6 @@ github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0
github.com/spf13/viper v1.8.1/go.mod h1:o0Pch8wJ9BVSWGQMbra6iw0oQ5oktSIBaujf1rJH9Ns=
github.com/spiffe/go-spiffe/v2 v2.1.1 h1:RT9kM8MZLZIsPTH+HKQEP5yaAk3yd/VBzlINaRjXs8k=
github.com/spiffe/go-spiffe/v2 v2.1.1/go.mod h1:5qg6rpqlwIub0JAiF1UK9IMD6BpPTmvG6yfSgDBs5lg=
github.com/status-im/keycard-go v0.0.0-20190316090335-8537d3370df4/go.mod h1:RZLeN1LMWmRsyYjvAu+I6Dm9QmlDaIIt+Y+4Kd7Tp+Q=
github.com/stefanberger/go-pkcs11uri v0.0.0-20201008174630-78d3cae3a980 h1:lIOOHPEbXzO3vnmx2gok1Tfs31Q8GQqKLc8vVqyQq/I=
github.com/stefanberger/go-pkcs11uri v0.0.0-20201008174630-78d3cae3a980/go.mod h1:AO3tvPzVZ/ayst6UlUKUv6rcPQInYe3IknH3jYhAKu8=
github.com/stoewer/go-strcase v1.2.0 h1:Z2iHWqGXH00XYgqDmNgQbIBxf3wrNq0F3feEy0ainaU=
@@ -1481,7 +1476,6 @@ github.com/tonistiigi/go-archvariant v1.0.0 h1:5LC1eDWiBNflnTF1prCiX09yfNHIxDC/a
github.com/tonistiigi/go-archvariant v1.0.0/go.mod h1:TxFmO5VS6vMq2kvs3ht04iPXtu2rUT/erOnGFYfk5Ho=
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926 h1:G3dpKMzFDjgEh2q1Z7zUUtKa8ViPtH+ocF0bE0g00O8=
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM=
github.com/tyler-smith/go-bip39 v1.0.1-0.20181017060643-dbb3b84ba2ef/go.mod h1:sJ5fKU0s6JVwZjjcUEX2zFOnvq0ASQ2K9Zr6cf67kNs=
github.com/ugorji/go v1.2.7 h1:qYhyWUUd6WbiM+C6JZAUkIJt/1WrjzNHY9+KCIjVqTo=
github.com/urfave/cli v1.22.12 h1:igJgVw1JdKH+trcLWLeLwZjU9fEfPesQ+9/e4MQ44S8=
github.com/urfave/cli v1.22.12/go.mod h1:sSBEIC79qR6OvcmsD4U3KABeOTxDqQtdDnaFuUN30b8=
@@ -1716,7 +1710,6 @@ golang.org/x/oauth2 v0.18.0 h1:09qnuIAgzdx1XplqJvW6CQqMCtGZykZWcXzPMPUusvI=
golang.org/x/oauth2 v0.18.0/go.mod h1:Wf7knwG0MPoWIMMBgFlEaSUDaKskp0dCfrlJRJXbBi8=
golang.org/x/perf v0.0.0-20230113213139-801c7ef9e5c5/go.mod h1:UBKtEnL8aqnd+0JHqZ+2qoMDwtuy6cYhhKNoHLBiTQc=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200317015054-43a5402ce75a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20200625203802-6e8e738ad208/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -1741,11 +1734,9 @@ golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200107162124-548cf772de50/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200331124033-c3d80250170d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200501052902-10377860bb8e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -1811,7 +1802,6 @@ golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20201208040808-7e3f01d25324/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.2.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180525024113-a5b4c53f6e8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=

View File

@@ -0,0 +1,22 @@
.PHONY: batch_production_submission launch_prover psql check_proving_status
export SCROLL_ZKVM_VERSION=0.4.2
PG_URL=postgres://postgres@localhost:5432/scroll
batch_production_submission:
docker compose --profile batch-production-submission up
launch_prover:
docker compose up -d
psql:
psql 'postgres://postgres@localhost:5432/scroll'
check_proving_status:
@echo "Checking proving status..."
@result=$$(psql "${PG_URL}" -t -c "SELECT proving_status = 4 AS is_status_success FROM batch ORDER BY index LIMIT 1;" | tr -d '[:space:]'); \
if [ "$$result" = "t" ]; then \
echo "✅ Prove succeeded! You're ready to submit permissionless batch and proof!"; \
else \
echo "Proof is not ready..."; \
fi

View File

@@ -0,0 +1,172 @@
# Permissionless Batches
Permissionless batches (aka enforced batches) are a feature that guarantees users can exit Scroll even if the operator is down or censoring.
It allows anyone to take over and submit a batch (permissionless batch submission) together with a proof after a certain time period has passed without a batch being finalized on L1.
Once permissionless batch mode is activated, the operator can no longer submit batches in a permissioned way. Only the security council can deactivate permissionless batch mode and reinstate the operator as the only batch submitter.
There are two types of situations to consider:
- `Permissionless batch mode is activated:` This means that finalization halted for some time. Now anyone can submit batches utilizing the [batch production toolkit](#batch-production-toolkit).
- `Permissionless batch mode is deactivated:` This means that the security council has decided to reinstate the operator as the only batch submitter. The operator needs to [recover](#operator-recovery) the sequencer and relayer to resume batch submission and the valid L2 chain.
## Batch production toolkit
The batch production toolkit is a set of tools that allow anyone to submit a batch in permissionless mode. It consists of three main components:
1. l2geth state recovery from L1
2. l2geth block production
3. production, proving and submission of batch with `docker-compose.yml`
### Prerequisites
- Unix-like OS, 32GB RAM
- Docker
- [l2geth](https://github.com/scroll-tech/go-ethereum/) or [Docker image](https://hub.docker.com/r/scrolltech/l2geth) of corresponding [version](https://docs.scroll.io/en/technology/overview/scroll-upgrades/).
- access to an Ethereum L1 RPC node (beacon node and execution client)
- ability to run a prover
- L1 account with funds to pay for the batch submission
### 1. l2geth state recovery from L1
Once permissionless mode is activated, no blocks are being produced and propagated on L2. The first step is to recover the latest state of the L2 chain from L1. This is done by running l2geth in recovery mode.
Running l2geth in recovery mode requires the following configuration:
- `--scroll` or `--scroll-sepolia` - enables Scroll Mainnet or Sepolia mode
- `--da.blob.beaconnode` - L1 RPC beacon node
- `--l1.endpoint` - L1 RPC execution client
- `--da.sync=true` - enables syncing with L1
- `--da.recovery` - enables recovery mode
- `--da.recovery.initiall1block` - initial L1 block (commit tx of initial batch)
- `--da.recovery.initialbatch` - batch where to start recovery from. Can be found on [Scrollscan Explorer](https://scrollscan.com/batches).
- `--da.recovery.l2endblock` - until which L2 block recovery should run (optional)
```bash
./build/bin/geth --scroll<-sepolia> \
--datadir "tmp/datadir" \
--gcmode archive \
--http --http.addr "0.0.0.0" --http.port 8545 --http.api "eth,net,web3,debug,scroll" --http.vhosts "*" \
--da.blob.beaconnode "<L1 RPC beacon node>" \
--l1.endpoint "<L1 RPC execution client>" \
--da.sync=true --da.recovery --da.recovery.initiall1block "<initial L1 block (commit tx of initial batch)>" --da.recovery.initialbatch "<batch where to start recovery from>" --da.recovery.l2endblock "<until which L2 block recovery should run (optional)>" \
--verbosity 3
```
### 2. l2geth block production
After the state is recovered, the next step is to produce blocks on L2. This is done by running l2geth in block production mode.
As a prerequisite, the state recovery must be completed and the latest state of the L2 chain must be available.
You also need to generate a keystore e.g. with [Clef](https://geth.ethereum.org/docs/fundamentals/account-management) to be able to sign blocks.
This key does not hold any funds, but is required for block production to work. Once you have generated the blocks you can safely discard it.
Running l2geth in block production mode requires the following configuration:
- `--scroll` or `--scroll-sepolia` - enables Scroll Mainnet or Sepolia mode
- `--da.blob.beaconnode` - L1 RPC beacon node
- `--l1.endpoint` - L1 RPC execution client
- `--da.sync=true` - enables syncing with L1
- `--da.recovery` - enables recovery mode
- `--da.recovery.produceblocks` - enables block production
- `--miner.etherbase '0xeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee' --mine` - enables mining. The address is not used, but is required for mining to work
- `--miner.gaslimit 1 --miner.gasprice 1 --miner.maxaccountsnum 100 --rpc.gascap 0 --gpo.ignoreprice 1` - gas settings for block production
```bash
./build/bin/geth --scroll<-sepolia> \
--datadir "tmp/datadir" \
--gcmode archive \
--http --http.addr "0.0.0.0" --http.port 8545 --http.api "eth,net,web3,debug,scroll" --http.vhosts "*" \
--da.blob.beaconnode "<L1 RPC beacon node>" \
--l1.endpoint "<L1 RPC execution client>" \
--da.sync=true --da.recovery --da.recovery.produceblocks \
--miner.gaslimit 1 --miner.gasprice 1 --miner.maxaccountsnum 100 --rpc.gascap 0 --gpo.ignoreprice 1 \
--miner.etherbase '0xeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee' --mine \
--verbosity 3
```
### 3. production, proving and submission of batch with `docker-compose.yml`
After the blocks are produced, the next step is to produce a batch, prove it and submit it to L1. This is done by running the `docker-compose.yml` in the `permissionless-batches` folder.
#### Producing a batch
To produce a batch you need to run the `batch-production-submission` profile in `docker-compose.yml`.
1. Fill `conf/genesis.json` with the latest genesis state from the L2 chain. The genesis for the current fork can be found [here](https://docs.scroll.io/en/technology/overview/scroll-upgrades/).
2. Make sure that `l2geth` with your locally produced blocks is running and reachable from the Docker network (e.g. `http://host.docker.internal:8545`)
3. Fill in required fields in `conf/relayer/config.json`
Run with `make batch_production_submission`.
This will produce chunks, a batch and a bundle, which will be proven in the next step.
`Success! You're ready to generate proofs!` indicates that everything is working correctly and the batch is ready to be proven.
#### Proving a batch
To prove the chunk, batch and bundle you just generated you need to run the `prover` profile in `docker-compose.yml`.
Local Proving:
1. Hardware spec for the local prover: CPU: 36+ cores, 128 GB memory; GPU: 24 GB memory (e.g. RTX 3090/3090Ti/4090/A10/L4)
2. Make sure `verifier` and `high_version_circuit` in `conf/coordinator/config.json` are correct for the [latest fork](https://docs.scroll.io/en/technology/overview/scroll-upgrades/)
3. Set the `SCROLL_ZKVM_VERSION` environment variable on `Makefile` to the correct [version](https://docs.scroll.io/en/technology/overview/scroll-upgrades/).
4. Fill in the required fields in `conf/proving-service/config.json`
Run with `make launch_prover`.
#### Batch submission
To submit the batch you need to run the `batch-production-submission` profile in `docker-compose.yml`.
1. Fill in required fields in `conf/relayer/config.json` for the sender config.
Run with `make batch_production_submission`.
This will submit the batch to L1 and finalize it. The transaction will be retried in case of failure.
**Troubleshooting**
- In case the submission fails, the relayer will print the calldata for the transaction in an error message. You can use this with `cast call --trace --rpc-url "$SCROLL_L1_DEPLOYMENT_RPC" "$L1_SCROLL_CHAIN_PROXY_ADDR" <calldata>` to see what went wrong.
- `0x4df567b9: ErrorNotInEnforcedBatchMode`: permissionless batch mode is not activated, you can't submit a batch
- `0xa5d305cc: ErrorBatchIsEmpty`: no blob was provided. This is usually returned if you do the `cast call`, permissionless mode is activated but you didn't provide a blob in the transaction.
## Operator recovery
Operator recovery needs to be run by the rollup operator to resume normal rollup operation after permissionless batch mode is deactivated. It consists of two main components:
1. l2geth recovery
2. Relayer recovery
These steps are required to resume permissioned batch submission and the valid L2 chain. They will restore the entire history of the batches submitted during permissionless mode.
### Prerequisites
- l2geth with the latest state of the L2 chain (before permissionless mode was activated)
- signer key for the sequencer according to Clique consensus
- relayer and coordinator are set up, running and up-to-date with the latest state of the L2 chain (before permissionless mode was activated)
### l2geth recovery
Running l2geth in recovery mode requires the following configuration:
- `--scroll` or `--scroll-sepolia` - enables Scroll Mainnet or Sepolia mode
- `--da.blob.beaconnode` - L1 RPC beacon node
- `--l1.endpoint` - L1 RPC execution client
- `--da.sync=true` - enables syncing with L1
- `--da.recovery` - enables recovery mode
- `--da.recovery.signblocks` - enables signing blocks with the sequencer and configured key
- `--da.recovery.initiall1block` - initial L1 block (commit tx of initial batch)
- `--da.recovery.initialbatch` - batch where to start recovery from. Can be found on [Scrollscan Explorer](https://scrollscan.com/batches).
- `--da.recovery.l2endblock` - until which L2 block recovery should run (optional)
```bash
./build/bin/geth --scroll<-sepolia> \
--datadir "tmp/datadir" \
--gcmode archive \
--http --http.addr "0.0.0.0" --http.port 8545 --http.api "eth,net,web3,debug,scroll" --http.vhosts "*" \
--da.blob.beaconnode "<L1 RPC beacon node>" \
--l1.endpoint "<L1 RPC execution client>" \
--da.sync=true --da.recovery --da.recovery.signblocks --da.recovery.initiall1block "<initial L1 block (commit tx of initial batch)>" --da.recovery.initialbatch "<batch where to start recovery from>" --da.recovery.l2endblock "<until which L2 block recovery should run (optional)>" \
--verbosity 3
```
After the recovery is finished, start the sequencer in normal operation and continue issuing L2 blocks as normal. This will resume the L2 chain, allow the relayer (after running recovery) to create new batches and allow other L2 follower nodes to sync up the valid and signed L2 chain.
### Relayer recovery
Start the relayer with the following additional top-level configuration:
```
"recovery_config": {
"enable": true
}
```
This will make the relayer recover all the chunks, batches and bundles that were submitted during permissionless mode. These batches are marked automatically as proven and finalized.
Once this process is finished, start the relayer normally without the recovery config to resume normal operation.
```
"recovery_config": {
"enable": false
}
```

View File

@@ -0,0 +1,30 @@
{
"prover_manager": {
"provers_per_session": 1,
"session_attempts": 100,
"chunk_collection_time_sec": 36000,
"batch_collection_time_sec": 2700,
"bundle_collection_time_sec": 2700,
"verifier": {
"high_version_circuit" : {
"fork_name": "euclid",
"assets_path": "/verifier/openvm/verifier",
"min_prover_version": "v4.5.7"
}
}
},
"db": {
"driver_name": "postgres",
"dsn": "postgres://db/scroll?sslmode=disable&user=postgres",
"maxOpenNum": 200,
"maxIdleNum": 20
},
"l2": {
"chain_id": 333333
},
"auth": {
"secret": "e788b62d39254928a821ac1c76b274a8c835aa1e20ecfb6f50eb10e87847de44",
"challenge_expire_duration_sec": 10,
"login_expire_duration_sec": 3600
}
}

View File

@@ -0,0 +1,76 @@
#!/usr/bin/bash
apt update
apt install -y wget curl libdigest-sha-perl
# release version
if [ -z "${SCROLL_ZKVM_VERSION}" ]; then
echo "SCROLL_ZKVM_VERSION not set"
exit 1
fi
if [ -z "${HTTP_PORT}" ]; then
echo "HTTP_PORT not set"
exit 1
fi
if [ -z "${METRICS_PORT}" ]; then
echo "METRICS_PORT not set"
exit 1
fi
case $CHAIN_ID in
"5343532222") # staging network
echo "staging network not supported"
exit 1
;;
"534353") # alpha network
echo "alpha network not supported"
exit 1
;;
esac
BASE_DOWNLOAD_DIR="/verifier"
# Ensure the base directory exists
mkdir -p "$BASE_DOWNLOAD_DIR"
# Set subdirectories
OPENVM_DIR="$BASE_DOWNLOAD_DIR/openvm"
# Create necessary directories
mkdir -p "$OPENVM_DIR/verifier"
# Define URLs for OpenVM files (No checksum verification)
OPENVM_URLS=(
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/verifier/verifier.bin"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/verifier/root-verifier-vm-config"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/verifier/root-verifier-committed-exe"
)
# Download OpenVM files (No checksum verification, but skips if file exists)
for url in "${OPENVM_URLS[@]}"; do
dest_subdir="$OPENVM_DIR/$(basename $(dirname "$url"))"
mkdir -p "$dest_subdir"
filepath="$dest_subdir/$(basename "$url")"
echo "Downloading $filepath..."
curl -o "$filepath" -L "$url"
done
mkdir -p "$HOME/.openvm"
ln -s "$OPENVM_DIR/params" "$HOME/.openvm/params"
echo "All files downloaded successfully! 🎉"
mkdir -p /usr/local/bin
wget https://github.com/ethereum/solidity/releases/download/v0.8.19/solc-static-linux -O /usr/local/bin/solc
chmod +x /usr/local/bin/solc
# Start coordinator
echo "Starting coordinator api"
RUST_BACKTRACE=1 exec coordinator_api --config /coordinator/config.json \
--genesis /coordinator/genesis.json \
--http --http.addr "0.0.0.0" --http.port ${HTTP_PORT} \
--metrics --metrics.addr "0.0.0.0" --metrics.port ${METRICS_PORT} \
--log.debug
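A hypothetical invocation of the script above; the script name and the metrics port are placeholders, `HTTP_PORT=8556` and `CHAIN_ID=333333` mirror the prover and coordinator configurations shown in this changeset, and `SCROLL_ZKVM_VERSION` must be set to the circuit release you intend to verify against:
```bash
# All values below are illustrative; substitute your own release tag and ports.
SCROLL_ZKVM_VERSION="<circuit release tag>" \
HTTP_PORT=8556 \
METRICS_PORT=8390 \
CHAIN_ID=333333 \
  bash ./start-coordinator.sh   # hypothetical name for the script above
```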

View File

@@ -0,0 +1 @@
<fill with correct genesis.json>

View File

@@ -0,0 +1,28 @@
{
"sdk_config": {
"prover_name_prefix": "local_prover",
"keys_dir": "/keys",
"db_path": "/db",
"coordinator": {
"base_url": "http://172.17.0.1:8556",
"retry_count": 10,
"retry_wait_time_sec": 10,
"connection_timeout_sec": 30
},
"l2geth": {
"endpoint": "<L2 RPC with generated blocks reachable from Docker network>"
},
"prover": {
"circuit_type": 2,
"supported_proof_types": [1,2,3],
"circuit_version": "v0.13.1"
},
"health_listener_addr": "0.0.0.0:89"
},
"circuits": {
"euclidV2": {
"hard_fork_name": "euclidV2",
"workspace_path": "/openvm"
}
}
}
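A connectivity sanity-check sketch, not part of the toolkit: before launching the prover, verify from inside its container that the coordinator `base_url` and the l2geth `endpoint` configured above are reachable.
```bash
# Report the HTTP status of the coordinator base_url (any response confirms reachability).
curl -s -o /dev/null -w "coordinator HTTP status: %{http_code}\n" http://172.17.0.1:8556
# Standard eth_blockNumber call against the configured l2geth endpoint (placeholder kept as-is).
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  "<L2 RPC with generated blocks reachable from Docker network>"
```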

View File

@@ -0,0 +1,54 @@
#!/usr/bin/bash
apt update
apt install -y wget curl
# release version
if [ -z "${SCROLL_ZKVM_VERSION}" ]; then
echo "SCROLL_ZKVM_VERSION not set"
exit 1
fi
BASE_DOWNLOAD_DIR="/openvm"
# Ensure the base directory exists
mkdir -p "$BASE_DOWNLOAD_DIR"
# Define URLs for OpenVM files (No checksum verification)
OPENVM_URLS=(
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/chunk/app.vmexe"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/chunk/openvm.toml"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/batch/app.vmexe"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/batch/openvm.toml"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/bundle/app.vmexe"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/bundle/app_euclidv1.vmexe"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/bundle/openvm.toml"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/bundle/verifier.bin"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/bundle/verifier.sol"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/bundle/digest_1.hex"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/bundle/digest_2.hex"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/bundle/digest_1_euclidv1.hex"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/bundle/digest_2_euclidv1.hex"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/params/kzg_bn254_22.srs"
"https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/params/kzg_bn254_24.srs"
)
# Download OpenVM files (no checksum verification)
for url in "${OPENVM_URLS[@]}"; do
dest_subdir="$BASE_DOWNLOAD_DIR/$(basename $(dirname "$url"))"
mkdir -p "$dest_subdir"
filepath="$dest_subdir/$(basename "$url")"
echo "Downloading $filepath..."
curl -o "$filepath" -L "$url"
done
mkdir -p "$HOME/.openvm"
ln -s "/openvm/params" "$HOME/.openvm/params"
mkdir -p /usr/local/bin
wget https://github.com/ethereum/solidity/releases/download/v0.8.19/solc-static-linux -O /usr/local/bin/solc
chmod +x /usr/local/bin/solc
mkdir -p /openvm/cache
RUST_MIN_STACK=16777216 RUST_BACKTRACE=1 exec /prover/prover --config /prover/conf/config.json
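An optional verification sketch (an assumption, not part of the script): after the downloads complete, the assets should be laid out per circuit under `/openvm`, with the KZG parameters symlinked into `$HOME/.openvm`.
```bash
# List the downloaded circuit assets and confirm the params symlink resolves.
find /openvm -maxdepth 2 -type f | sort
ls -lL "$HOME/.openvm/params"
```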

View File

@@ -0,0 +1,48 @@
{
"l1_config": {
"endpoint": "<L1 RPC execution node>"
},
"l2_config": {
"confirmations": "0x0",
"endpoint": "<L2 RPC with generated blocks reachable from Docker network>",
"relayer_config": {
"commit_sender_signer_config": {
"signer_type": "PrivateKey",
"private_key_signer_config": {
"private_key": "<the private key of L1 address to submit permissionless batch, please fund it in advance>"
}
}
},
"chunk_proposer_config": {
"propose_interval_milliseconds": 100,
"max_l2_gas_per_chunk": 20000000,
"chunk_timeout_sec": 300,
"max_uncompressed_batch_bytes_size": 4194304
},
"batch_proposer_config": {
"propose_interval_milliseconds": 1000,
"batch_timeout_sec": 300,
"max_chunks_per_batch": 45,
"max_uncompressed_batch_bytes_size": 4194304
},
"bundle_proposer_config": {
"max_batch_num_per_bundle": 20,
"bundle_timeout_sec": 36000
}
},
"db_config": {
"driver_name": "postgres",
"dsn": "postgres://172.17.0.1:5432/scroll?sslmode=disable&user=postgres",
"maxOpenNum": 200,
"maxIdleNum": 20
},
"recovery_config": {
"enable": true,
"l1_block_height": "<commit tx of last finalized batch on L1>",
"latest_finalized_batch": "<last finalized batch on L1>",
"l2_block_height_limit": "<L2 block up to which to produce batch>",
"force_latest_finalized_batch": false,
"force_l1_message_count": 0,
"submit_without_proof": false
}
}
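A pre-flight sketch (an assumption, not part of the toolkit): confirm that the Postgres instance referenced by `db_config.dsn` is reachable from the relayer container before enabling recovery.
```bash
# pg_isready ships with the postgresql-client package; host, port, user, and database mirror the dsn above.
pg_isready -h 172.17.0.1 -p 5432 -U postgres -d scroll
```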

Some files were not shown because too many files have changed in this diff.