Compare commits

...

85 Commits

Author SHA1 Message Date
Ho
14e2633ba3 Merge remote-tracking branch 'origin/develop' into coordinator_proxy 2025-12-22 17:10:13 +09:00
Ho
7de388ef1a [Fix] Accept proof submission even it has been timeout (#1764) 2025-12-12 12:18:34 +09:00
Ho
21326c25e6 Merge remote-tracking branch 'origin/develop' into coordinator_proxy 2025-12-04 19:06:09 +09:00
Morty
27dd62eac3 feat(rollup-relayer): add blob fee tolerance (#1773) 2025-12-03 21:49:17 +08:00
Ho
22479a7952 [Feat] Galileo v2 (#1771)
Co-authored-by: Péter Garamvölgyi <peter@scroll.io>
2025-12-02 11:04:57 +01:00
Péter Garamvölgyi
690bc01c41 feat: force commit batches at hardfork boundary (#1768) 2025-11-30 20:36:53 +01:00
Péter Garamvölgyi
e75d6c16a9 feat: propose chunk at hardfork boundary (#1767) 2025-11-28 17:21:51 +01:00
Péter Garamvölgyi
752e4e1117 fix: Fix blob fee overflow on rollup-relayer and gas-oracle (#1772) 2025-11-28 15:44:37 +01:00
georgehao
2ecc42e2f5 Return when total < request page (#1766) 2025-11-25 23:36:40 +08:00
georgehao
de72e2dccb remove unused check (#1765) 2025-11-25 22:12:16 +08:00
georgehao
edb51236e2 bump version (#1763) 2025-11-25 21:00:57 +08:00
georgehao
15a23478d1 fix bridge history GetL2UnclaimedWithdrawalsByAddress (#1760)
Co-authored-by: Péter Garamvölgyi <peter@scroll.io>
2025-11-25 20:59:31 +08:00
Morty
9100a0bd4a fix: ci lint (#1762) 2025-11-25 17:45:05 +08:00
Morty
0ede0cd41f feat(blob-uploader): upload blob once proposed (#1759)
Co-authored-by: yiweichi <yiweichi@users.noreply.github.com>
2025-11-25 14:07:08 +08:00
Ho
9c2bc02f64 Merge remote-tracking branch 'origin/develop' into coordinator_proxy 2025-11-24 18:41:23 +09:00
Ho
9dceae1ca2 [Feat] Galileo forking (#1753)
Co-authored-by: Rohit Narurkar <rohit.narurkar@proton.me>
2025-11-24 17:37:04 +08:00
Ho
9e5579c4cb cover client reset in test 2025-11-21 12:34:11 +09:00
Ho
ac4a72003c refactoring client 2025-11-21 12:25:54 +09:00
Ho
19447984bd fix issues 2025-11-21 10:13:39 +09:00
Ho
d66d705456 fix after merging 2025-11-21 08:37:30 +09:00
Ho
c938d6c25e Merge remote-tracking branch 'origin/develop' into coordinator_proxy 2025-11-21 08:33:55 +09:00
georgehao
235ba874c6 Update Galileo Dependency (#1752) 2025-11-17 18:18:48 +08:00
Zhang Zhuo
6bee33036f feat: the CLOAK privacy solution (#1737)
Co-authored-by: Ho <fan@scroll.io>
Co-authored-by: Rohit Narurkar <rohit.narurkar@proton.me>
Co-authored-by: Péter Garamvölgyi <peter@scroll.io>
2025-11-14 15:00:37 +01:00
Ho
cf9e3680c0 Fix login version issue 2025-11-11 19:09:06 +09:00
Ho
e9470ff7a5 update config template 2025-11-11 15:24:16 +09:00
Ho
51b1e79b31 add docker action 2025-11-11 14:28:25 +09:00
Ho
c22d9ecad1 fix goimport issue 2025-11-06 16:11:59 +09:00
Ho
e7551650b2 fix concurrent issue 2025-11-06 16:08:39 +09:00
Ho
20fde41be8 complete persistent layer and unit test 2025-11-05 22:02:14 +09:00
Ho
4df1dd8acd Merge remote-tracking branch 'origin/develop' into coordinator_proxy 2025-10-27 15:09:17 +09:00
Ho
1985e54ab3 [Feat] For prover 4.6.1 (#1742) 2025-10-24 16:18:40 +09:00
Ho
6696aac16a WIP 2025-10-23 15:23:59 +09:00
Ho
4b79e63c9b WIP: some refactors 2025-10-22 10:27:38 +09:00
Ho
ac0396db3c add persistent for running status 2025-10-22 08:31:55 +09:00
Ho
17e6c5b7ac robust prover manager 2025-10-20 22:24:04 +09:00
Ho
b6e33456fa fix issue 2025-10-20 22:02:48 +09:00
Ho
7572bf8923 fix 2025-10-20 15:21:13 +09:00
Ho
5d41788b07 + fix get task behavior
+ improve the robust of tests
2025-10-20 14:42:05 +09:00
Ho
8f8a537fba Merge remote-tracking branch 'origin/develop' into coordinator_proxy 2025-10-20 14:09:45 +09:00
Péter Garamvölgyi
bfc0fdd7ce feat: support Fusaka blob type (#1746)
Co-authored-by: jonastheis <jonastheis@users.noreply.github.com>
Co-authored-by: jonastheis <4181434+jonastheis@users.noreply.github.com>
2025-10-18 08:52:29 +02:00
Ho
b1c3a4ecc0 more log for init 2025-10-17 22:27:51 +09:00
Ho
d9a29cddce fix config issue 2025-10-17 22:26:29 +09:00
Ho
c992157eb4 Merge remote-tracking branch 'origin/develop' into coordinator_proxy 2025-10-17 16:14:52 +09:00
Jonas Theis
426c57a5fa feat(l2 relayer): enhance batch submission strategy based on backlog size (#1745) 2025-10-17 09:06:53 +08:00
Ho
404c664e10 fix unittest 2025-10-10 15:33:55 +09:00
Ho
8a15836d20 add compatibile mode and more logs 2025-10-09 14:30:43 +09:00
Ho
4365aafa9a refactor libzkp to be completely mocked out 2025-10-08 11:32:13 +09:00
Ho
6ee026fa16 depress link for libzkp 2025-10-07 11:04:04 +09:00
Ho
c79ad57fb7 finish binary 2025-10-07 10:54:41 +09:00
Ho
fa5b113248 Merge remote-tracking branch 'origin/develop' into coordinator_proxy 2025-10-07 09:53:54 +09:00
Péter Garamvölgyi
b7fdf48c30 ci: Do not push latest tag on Docker images (#1744) 2025-09-30 08:40:49 +02:00
johnsonjie
ad0c918944 update prover image cuda version (#1741) 2025-09-24 09:42:19 +08:00
Zhang Zhuo
884b050866 Merge branch 'develop' into coordinator_proxy 2025-09-19 09:39:24 +08:00
Ho
1098876183 [Feat] Update zkvm to 0.6 (use openvm 1.4) (#1736) 2025-09-19 10:29:13 +09:00
Jonas Theis
9e520e7769 feat(tx sender): add multiple write clients for more reliable tx submission (#1740) 2025-09-19 08:56:01 +08:00
Ho
1d9fa41535 Merge remote-tracking branch 'origin/develop' into coordinator_proxy 2025-09-10 20:48:38 +09:00
Ho
b7f23c6734 basic tests 2025-09-10 20:48:21 +09:00
Ho
057e22072c fix issues 2025-09-10 20:38:21 +09:00
Ho
c7b83a0784 fix issue in test 2025-09-10 13:55:45 +09:00
Ho
92ca7a6b76 improve get_task proxy 2025-09-10 13:55:38 +09:00
Ho
256c90af6f Merge remote-tracking branch 'origin/develop' into coordinator_proxy 2025-09-09 22:21:46 +09:00
Ho
50f3e1a97c fix issues from test 2025-09-09 22:21:24 +09:00
Ho
2721503657 refining 2025-09-09 20:10:18 +09:00
Ho
a04b64df03 routes 2025-09-08 22:30:51 +09:00
Ho
78dbe6cde1 controller WIP 2025-09-07 22:39:32 +09:00
Ho
9df6429d98 wip 2025-09-06 21:50:55 +09:00
Ho
e6be62f633 WIP 2025-09-05 22:31:45 +09:00
Ho
c72ee5d679 Merge remote-tracking branch 'origin/develop' into coordinator_proxy 2025-09-03 22:13:03 +09:00
Ho
4725d8a73c Merge remote-tracking branch 'origin/develop' into coordinator_proxy 2025-09-02 17:24:58 +09:00
Ho
322766f54f WIP 2025-09-02 17:22:35 +09:00
Ho
5614ec3b86 WIP 2025-09-01 10:12:16 +09:00
Ho
5a07a1652b WIP 2025-08-27 09:43:30 +09:00
Ho
64ef0f4ec0 WIP 2025-08-25 11:52:03 +09:00
Ho
321dd43af8 unit test for client 2025-08-25 11:43:50 +09:00
Ho
624a7a29b8 WIP: AI step 2025-08-25 09:35:10 +09:00
Ho
4f878d9231 AI step 2025-08-24 23:05:56 +09:00
Ho
7b3a65b35b framework for auto login 2025-08-24 22:41:17 +09:00
Ho
0d238d77a6 WIP: the structure of client manager 2025-08-24 22:32:38 +09:00
Ho
76ecdf064a add proxy config sample 2025-08-24 22:14:32 +09:00
Ho
5c6c225f76 WIP: config and client controller 2025-08-24 22:14:22 +09:00
Ho
3adb2e0a1b WIP: controller 2025-08-24 21:18:13 +09:00
Ho
412ad56a64 extend loginlogic 2025-08-24 20:43:40 +09:00
Ho
9796d16f6c WIP: update login logic and coordinator client 2025-08-24 20:32:11 +09:00
Ho
1f2b857671 add proxy_login route 2025-08-24 15:35:51 +09:00
Ho
5dbb5c5fb7 extend api for proxy 2025-08-24 14:54:54 +09:00
157 changed files with 7543 additions and 13252 deletions


@@ -29,7 +29,7 @@ jobs:
steps:
- uses: actions-rs/toolchain@v1
with:
toolchain: nightly-2025-02-14
toolchain: nightly-2025-08-18
override: true
components: rustfmt, clippy
- name: Install Go


@@ -33,7 +33,7 @@ jobs:
steps:
- uses: actions-rs/toolchain@v1
with:
toolchain: nightly-2025-02-14
toolchain: nightly-2025-08-18
override: true
components: rustfmt, clippy
- name: Install Go


@@ -51,9 +51,7 @@ jobs:
push: true
tags: |
scrolltech/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
scrolltech/${{ env.REPOSITORY }}:latest
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:latest
rollup_relayer:
runs-on:
@@ -97,9 +95,7 @@ jobs:
push: true
tags: |
scrolltech/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
scrolltech/${{ env.REPOSITORY }}:latest
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:latest
blob_uploader:
runs-on:
@@ -143,9 +139,7 @@ jobs:
push: true
tags: |
scrolltech/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
scrolltech/${{ env.REPOSITORY }}:latest
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:latest
rollup-db-cli:
runs-on:
@@ -189,9 +183,7 @@ jobs:
push: true
tags: |
scrolltech/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
scrolltech/${{ env.REPOSITORY }}:latest
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:latest
bridgehistoryapi-fetcher:
runs-on:
@@ -235,9 +227,7 @@ jobs:
push: true
tags: |
scrolltech/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
scrolltech/${{ env.REPOSITORY }}:latest
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:latest
bridgehistoryapi-api:
runs-on:
@@ -281,9 +271,7 @@ jobs:
push: true
tags: |
scrolltech/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
scrolltech/${{ env.REPOSITORY }}:latest
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:latest
bridgehistoryapi-db-cli:
runs-on:
@@ -327,9 +315,7 @@ jobs:
push: true
tags: |
scrolltech/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
scrolltech/${{ env.REPOSITORY }}:latest
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:latest
coordinator-api:
runs-on:
@@ -372,9 +358,50 @@ jobs:
push: true
tags: |
scrolltech/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
scrolltech/${{ env.REPOSITORY }}:latest
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:latest
coordinator-proxy:
runs-on:
group: scroll-reth-runner-group
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Configure AWS credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ env.AWS_REGION }}
- name: Login to Amazon ECR
id: login-ecr
uses: aws-actions/amazon-ecr-login@v2
- name: check repo and create it if not exist
env:
REPOSITORY: coordinator-proxy
run: |
aws --region ${{ env.AWS_REGION }} ecr describe-repositories --repository-names ${{ env.REPOSITORY }} && : || aws --region ${{ env.AWS_REGION }} ecr create-repository --repository-name ${{ env.REPOSITORY }}
- name: Build and push
uses: docker/build-push-action@v3
env:
ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
REPOSITORY: coordinator-proxy
IMAGE_TAG: ${{ github.ref_name }}
with:
context: .
file: ./build/dockerfiles/coordinator-proxy.Dockerfile
push: true
tags: |
scrolltech/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
coordinator-cron:
runs-on:
@@ -418,6 +445,4 @@ jobs:
push: true
tags: |
scrolltech/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
scrolltech/${{ env.REPOSITORY }}:latest
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:${{ env.IMAGE_TAG }}
${{ env.ECR_REGISTRY }}/${{ env.REPOSITORY }}:latest


@@ -22,11 +22,9 @@ on:
required: true
type: choice
options:
- nightly-2023-12-03
- nightly-2022-12-10
- 1.86.0
- nightly-2025-02-14
default: "nightly-2023-12-03"
- nightly-2025-08-18
default: "nightly-2025-08-18"
PYTHON_VERSION:
description: "Python version"
required: false
@@ -41,7 +39,8 @@ on:
options:
- "11.7.1"
- "12.2.2"
default: "11.7.1"
- "12.9.1"
default: "12.9.1"
CARGO_CHEF_TAG:
description: "Cargo chef version"
required: true

Cargo.lock (generated, 2820 lines): diff suppressed because it is too large.


@@ -14,15 +14,16 @@ edition = "2021"
homepage = "https://scroll.io"
readme = "README.md"
repository = "https://github.com/scroll-tech/scroll"
version = "4.5.8"
version = "4.7.1"
[workspace.dependencies]
scroll-zkvm-prover = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "ad0efe7" }
scroll-zkvm-verifier = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "ad0efe7" }
scroll-zkvm-types = { git = "https://github.com/scroll-tech/zkvm-prover", rev = "ad0efe7" }
scroll-zkvm-prover = { git = "https://github.com/scroll-tech/zkvm-prover", tag = "v0.7.1" }
scroll-zkvm-verifier = { git = "https://github.com/scroll-tech/zkvm-prover", tag = "v0.7.1" }
scroll-zkvm-types = { git = "https://github.com/scroll-tech/zkvm-prover", tag = "v0.7.1" }
sbv-primitives = { git = "https://github.com/scroll-tech/stateless-block-verifier", branch = "chore/openvm-1.3", features = ["scroll"] }
sbv-utils = { git = "https://github.com/scroll-tech/stateless-block-verifier", branch = "chore/openvm-1.3" }
sbv-primitives = { git = "https://github.com/scroll-tech/stateless-block-verifier", tag = "scroll-v91.2", features = ["scroll", "rkyv"] }
sbv-utils = { git = "https://github.com/scroll-tech/stateless-block-verifier", tag = "scroll-v91.2" }
sbv-core = { git = "https://github.com/scroll-tech/stateless-block-verifier", tag = "scroll-v91.2", features = ["scroll"] }
metrics = "0.23.0"
metrics-util = "0.17"
@@ -30,14 +31,14 @@ metrics-tracing-context = "0.16.0"
anyhow = "1.0"
alloy = { version = "1", default-features = false }
alloy-primitives = { version = "1.2", default-features = false, features = ["tiny-keccak"] }
alloy-primitives = { version = "1.4.1", default-features = false, features = ["tiny-keccak"] }
# also use this to trigger "serde" feature for primitives
alloy-serde = { version = "1", default-features = false }
serde = { version = "1", default-features = false, features = ["derive"] }
serde_json = { version = "1.0" }
serde_derive = "1.0"
serde_with = "3.11.0"
serde_with = "3"
itertools = "0.14"
tiny-keccak = "2.0"
tracing = "0.1"
@@ -45,22 +46,20 @@ eyre = "0.6"
once_cell = "1.20"
base64 = "0.22"
[patch.crates-io]
revm = { git = "https://github.com/scroll-tech/revm", branch = "feat/reth-v78" }
revm-bytecode = { git = "https://github.com/scroll-tech/revm", branch = "feat/reth-v78" }
revm-context = { git = "https://github.com/scroll-tech/revm", branch = "feat/reth-v78" }
revm-context-interface = { git = "https://github.com/scroll-tech/revm", branch = "feat/reth-v78" }
revm-database = { git = "https://github.com/scroll-tech/revm", branch = "feat/reth-v78" }
revm-database-interface = { git = "https://github.com/scroll-tech/revm", branch = "feat/reth-v78" }
revm-handler = { git = "https://github.com/scroll-tech/revm", branch = "feat/reth-v78" }
revm-inspector = { git = "https://github.com/scroll-tech/revm", branch = "feat/reth-v78" }
revm-interpreter = { git = "https://github.com/scroll-tech/revm", branch = "feat/reth-v78" }
revm-precompile = { git = "https://github.com/scroll-tech/revm", branch = "feat/reth-v78" }
revm-primitives = { git = "https://github.com/scroll-tech/revm", branch = "feat/reth-v78" }
revm-state = { git = "https://github.com/scroll-tech/revm", branch = "feat/reth-v78" }
ruint = { git = "https://github.com/scroll-tech/uint.git", branch = "v1.15.0" }
alloy-primitives = { git = "https://github.com/scroll-tech/alloy-core", branch = "v1.2.0" }
[patch.crates-io]
revm = { git = "https://github.com/scroll-tech/revm", tag = "scroll-v91" }
revm-bytecode = { git = "https://github.com/scroll-tech/revm", tag = "scroll-v91" }
revm-context = { git = "https://github.com/scroll-tech/revm", tag = "scroll-v91" }
revm-context-interface = { git = "https://github.com/scroll-tech/revm", tag = "scroll-v91" }
revm-database = { git = "https://github.com/scroll-tech/revm", tag = "scroll-v91" }
revm-database-interface = { git = "https://github.com/scroll-tech/revm", tag = "scroll-v91" }
revm-handler = { git = "https://github.com/scroll-tech/revm", tag = "scroll-v91" }
revm-inspector = { git = "https://github.com/scroll-tech/revm", tag = "scroll-v91" }
revm-interpreter = { git = "https://github.com/scroll-tech/revm", tag = "scroll-v91" }
revm-precompile = { git = "https://github.com/scroll-tech/revm", tag = "scroll-v91" }
revm-primitives = { git = "https://github.com/scroll-tech/revm", tag = "scroll-v91" }
revm-state = { git = "https://github.com/scroll-tech/revm", tag = "scroll-v91" }
[profile.maxperf]
inherits = "release"


@@ -1,6 +1,6 @@
.PHONY: fmt dev_docker build_test_docker run_test_docker clean update
L2GETH_TAG=scroll-v5.8.23
L2GETH_TAG=scroll-v5.9.17
help: ## Display this help message
@grep -h \


@@ -10,15 +10,18 @@ require (
github.com/go-redis/redis/v8 v8.11.5
github.com/pressly/goose/v3 v3.16.0
github.com/prometheus/client_golang v1.19.0
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178
github.com/scroll-tech/go-ethereum v1.10.14-0.20250729113104-bd8f141bb3e9
github.com/stretchr/testify v1.9.0
github.com/scroll-tech/da-codec v0.10.0
github.com/scroll-tech/go-ethereum v1.10.14-0.20251128092113-8629f088d78f
github.com/stretchr/testify v1.10.0
github.com/urfave/cli/v2 v2.25.7
golang.org/x/sync v0.11.0
gorm.io/gorm v1.25.7-0.20240204074919-46816ad31dde
)
replace github.com/scroll-tech/go-ethereum => github.com/scroll-tech/go-ethereum v1.10.14-0.20250729113104-bd8f141bb3e9 // It's a hotfix for the header hash incompatibility issue, pls change this with caution
// Hotfix for header hash incompatibility issue.
// PR: https://github.com/scroll-tech/go-ethereum/pull/1133/
// CAUTION: Requires careful handling. When upgrading go-ethereum, ensure this fix remains up-to-date in this branch.
replace github.com/scroll-tech/go-ethereum => github.com/scroll-tech/go-ethereum v1.10.14-0.20251128092359-25d5bf6b817b
require (
dario.cat/mergo v1.0.0 // indirect
@@ -30,10 +33,10 @@ require (
github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/chenzhuoyu/base64x v0.0.0-20230717121745-296ad89f973d // indirect
github.com/chenzhuoyu/iasm v0.9.0 // indirect
github.com/consensys/bavard v0.1.13 // indirect
github.com/consensys/gnark-crypto v0.13.0 // indirect
github.com/consensys/bavard v0.1.27 // indirect
github.com/consensys/gnark-crypto v0.16.0 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.3 // indirect
github.com/crate-crypto/go-kzg-4844 v1.1.0 // indirect
github.com/crate-crypto/go-eth-kzg v1.4.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/deckarep/golang-set v0.0.0-20180603214616-504e848d77ea // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
@@ -41,7 +44,7 @@ require (
github.com/docker/docker v26.1.0+incompatible // indirect
github.com/docker/go-connections v0.5.0 // indirect
github.com/edsrzf/mmap-go v1.0.0 // indirect
github.com/ethereum/c-kzg-4844 v1.0.3 // indirect
github.com/ethereum/c-kzg-4844/v2 v2.1.5 // indirect
github.com/fjl/memsize v0.0.2 // indirect
github.com/fsnotify/fsnotify v1.6.0 // indirect
github.com/gabriel-vasile/mimetype v1.4.2 // indirect
@@ -98,7 +101,7 @@ require (
github.com/shirou/gopsutil v3.21.11+incompatible // indirect
github.com/sourcegraph/conc v0.3.0 // indirect
github.com/status-im/keycard-go v0.2.0 // indirect
github.com/supranational/blst v0.3.13 // indirect
github.com/supranational/blst v0.3.15 // indirect
github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7 // indirect
github.com/tklauser/go-sysconf v0.3.14 // indirect
github.com/tklauser/numcpus v0.9.0 // indirect
@@ -110,7 +113,7 @@ require (
go.opentelemetry.io/otel/trace v1.24.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
golang.org/x/arch v0.5.0 // indirect
golang.org/x/crypto v0.24.0 // indirect
golang.org/x/crypto v0.32.0 // indirect
golang.org/x/net v0.25.0 // indirect
golang.org/x/sys v0.30.0 // indirect
golang.org/x/text v0.21.0 // indirect


@@ -53,16 +53,16 @@ github.com/chenzhuoyu/base64x v0.0.0-20230717121745-296ad89f973d h1:77cEq6EriyTZ
github.com/chenzhuoyu/base64x v0.0.0-20230717121745-296ad89f973d/go.mod h1:8EPpVsBuRksnlj1mLy4AWzRNQYxauNi62uWcE3to6eA=
github.com/chenzhuoyu/iasm v0.9.0 h1:9fhXjVzq5hUy2gkhhgHl95zG2cEAhw9OSGs8toWWAwo=
github.com/chenzhuoyu/iasm v0.9.0/go.mod h1:Xjy2NpN3h7aUqeqM+woSuuvxmIe6+DDsiNLIrkAmYog=
github.com/consensys/bavard v0.1.13 h1:oLhMLOFGTLdlda/kma4VOJazblc7IM5y5QPd2A/YjhQ=
github.com/consensys/bavard v0.1.13/go.mod h1:9ItSMtA/dXMAiL7BG6bqW2m3NdSEObYWoH223nGHukI=
github.com/consensys/gnark-crypto v0.13.0 h1:VPULb/v6bbYELAPTDFINEVaMTTybV5GLxDdcjnS+4oc=
github.com/consensys/gnark-crypto v0.13.0/go.mod h1:wKqwsieaKPThcFkHe0d0zMsbHEUWFmZcG7KBCse210o=
github.com/consensys/bavard v0.1.27 h1:j6hKUrGAy/H+gpNrpLU3I26n1yc+VMGmd6ID5+gAhOs=
github.com/consensys/bavard v0.1.27/go.mod h1:k/zVjHHC4B+PQy1Pg7fgvG3ALicQw540Crag8qx+dZs=
github.com/consensys/gnark-crypto v0.16.0 h1:8Dl4eYmUWK9WmlP1Bj6je688gBRJCJbT8Mw4KoTAawo=
github.com/consensys/gnark-crypto v0.16.0/go.mod h1:Ke3j06ndtPTVvo++PhGNgvm+lgpLvzbcE2MqljY7diU=
github.com/containerd/continuity v0.4.3 h1:6HVkalIp+2u1ZLH1J/pYX2oBVXlJZvh1X1A7bEZ9Su8=
github.com/containerd/continuity v0.4.3/go.mod h1:F6PTNCKepoxEaXLQp3wDAjygEnImnZ/7o4JzpodfroQ=
github.com/cpuguy83/go-md2man/v2 v2.0.3 h1:qMCsGGgs+MAzDFyp9LpAe1Lqy/fY/qCovCm0qnXZOBM=
github.com/cpuguy83/go-md2man/v2 v2.0.3/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/crate-crypto/go-kzg-4844 v1.1.0 h1:EN/u9k2TF6OWSHrCCDBBU6GLNMq88OspHHlMnHfoyU4=
github.com/crate-crypto/go-kzg-4844 v1.1.0/go.mod h1:JolLjpSff1tCCJKaJx4psrlEdlXuJEC996PL3tTAFks=
github.com/crate-crypto/go-eth-kzg v1.4.0 h1:WzDGjHk4gFg6YzV0rJOAsTK4z3Qkz5jd4RE3DAvPFkg=
github.com/crate-crypto/go-eth-kzg v1.4.0/go.mod h1:J9/u5sWfznSObptgfa92Jq8rTswn6ahQWEuiLHOjCUI=
github.com/davecgh/go-spew v0.0.0-20171005155431-ecdeabc65495/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
@@ -88,8 +88,8 @@ github.com/elastic/go-sysinfo v1.11.1 h1:g9mwl05njS4r69TisC+vwHWTSKywZFYYUu3so3T
github.com/elastic/go-sysinfo v1.11.1/go.mod h1:6KQb31j0QeWBDF88jIdWSxE8cwoOB9tO4Y4osN7Q70E=
github.com/elastic/go-windows v1.0.1 h1:AlYZOldA+UJ0/2nBuqWdo90GFCgG9xuyw9SYzGUtJm0=
github.com/elastic/go-windows v1.0.1/go.mod h1:FoVvqWSun28vaDQPbj2Elfc0JahhPB7WQEGa3c814Ss=
github.com/ethereum/c-kzg-4844 v1.0.3 h1:IEnbOHwjixW2cTvKRUlAAUOeleV7nNM/umJR+qy4WDs=
github.com/ethereum/c-kzg-4844 v1.0.3/go.mod h1:VewdlzQmpT5QSrVhbBuGoCdFJkpaJlO1aQputP83wc0=
github.com/ethereum/c-kzg-4844/v2 v2.1.5 h1:aVtoLK5xwJ6c5RiqO8g8ptJ5KU+2Hdquf6G3aXiHh5s=
github.com/ethereum/c-kzg-4844/v2 v2.1.5/go.mod h1:u59hRTTah4Co6i9fDWtiCjTrblJv0UwsqZKCc0GfgUs=
github.com/fjl/memsize v0.0.2 h1:27txuSD9or+NZlnOWdKUxeBzTAUkWCVh+4Gf2dWFOzA=
github.com/fjl/memsize v0.0.2/go.mod h1:VvhXpOYNQvB+uIk2RvXzuaQtkQJzzIx6lSBe1xv7hi0=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
@@ -214,8 +214,8 @@ github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/leanovate/gopter v0.2.9 h1:fQjYxZaynp97ozCzfOyOuAGOU4aU/z37zf/tOujFk7c=
github.com/leanovate/gopter v0.2.9/go.mod h1:U2L/78B+KVFIx2VmW6onHJQzXtFb+p5y3y2Sh+Jxxv8=
github.com/leanovate/gopter v0.2.11 h1:vRjThO1EKPb/1NsDXuDrzldR28RLkBflWYcU9CvzWu4=
github.com/leanovate/gopter v0.2.11/go.mod h1:aK3tzZP/C+p1m3SPRE4SYZFGP7jjkuSI4f7Xvpt0S9c=
github.com/leodido/go-urn v1.2.4 h1:XlAE/cm/ms7TE/VMVoduSpNBoyc2dOxHs5MZSwAN63Q=
github.com/leodido/go-urn v1.2.4/go.mod h1:7ZrI8mTSeBSHl/UaRyKQW1qZeMgak41ANeCNaVckg+4=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
@@ -309,10 +309,10 @@ github.com/rs/cors v1.7.0 h1:+88SsELBHx5r+hZ8TCkggzSstaWNbDvThkVK8H6f9ik=
github.com/rs/cors v1.7.0/go.mod h1:gFx+x8UowdsKA9AchylcLynDq+nNFfI8FkUZdN/jGCU=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178 h1:4utngmJHXSOS5FoSdZhEV1xMRirpArbXvyoCZY9nYj0=
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178/go.mod h1:Z6kN5u2khPhiqHyk172kGB7o38bH/nj7Ilrb/46wZGg=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250729113104-bd8f141bb3e9 h1:u371VK8eOU2Z/0SVf5KDI3eJc8msHSpJbav4do/8n38=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250729113104-bd8f141bb3e9/go.mod h1:pDCZ4iGvEGmdIe4aSAGBrb7XSrKEML6/L/wEMmNxOdk=
github.com/scroll-tech/da-codec v0.10.0 h1:IPHxyTyXTWPV0Q+DZ08cod2fWkhUvrfysmj/VBpB+WU=
github.com/scroll-tech/da-codec v0.10.0/go.mod h1:MBlIP4wCXPcUDZ/Ci2B7n/2IbVU1WBo9OTFTZ5ffE0U=
github.com/scroll-tech/go-ethereum v1.10.14-0.20251128092359-25d5bf6b817b h1:pMQKnroJoS/FeL1aOWkz7/u1iBHUP8PWjZstNuzoUGE=
github.com/scroll-tech/go-ethereum v1.10.14-0.20251128092359-25d5bf6b817b/go.mod h1:Aa/kD1XB+OV/7rRxMQrjcPCB4b0pKyLH0gsTrtuHi38=
github.com/scroll-tech/zktrie v0.8.4 h1:UagmnZ4Z3ITCk+aUq9NQZJNAwnWl4gSxsLb2Nl7IgRE=
github.com/scroll-tech/zktrie v0.8.4/go.mod h1:XvNo7vAk8yxNyTjBDj5WIiFzYW4bx/gJ78+NK6Zn6Uk=
github.com/segmentio/asm v1.2.0 h1:9BQrFxC+YOHJlTlHGkTrFWf59nbL3XnCoFLTwDCI7ys=
@@ -341,10 +341,10 @@ github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/supranational/blst v0.3.13 h1:AYeSxdOMacwu7FBmpfloBz5pbFXDmJL33RuwnKtmTjk=
github.com/supranational/blst v0.3.13/go.mod h1:jZJtfjgudtNl4en1tzwPIV3KjUnQUvG3/j+w+fVonLw=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/supranational/blst v0.3.15 h1:rd9viN6tfARE5wv3KZJ9H8e1cg0jXW8syFCcsbHa76o=
github.com/supranational/blst v0.3.15/go.mod h1:jZJtfjgudtNl4en1tzwPIV3KjUnQUvG3/j+w+fVonLw=
github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7 h1:epCh84lMvA70Z7CTTCmYQn2CKbY8j86K7/FAIr141uY=
github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7/go.mod h1:q4W45IWZaF22tdD+VEXcAWRA037jwmWEB5VWYORlTpc=
github.com/tklauser/go-sysconf v0.3.14 h1:g5vzr9iPFFz24v2KZXs/pvpvh8/V9Fw6vQK5ZZb78yU=
@@ -387,8 +387,8 @@ golang.org/x/arch v0.5.0/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
golang.org/x/crypto v0.0.0-20170930174604-9419663f5a44/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.24.0 h1:mnl8DM0o513X8fdIkmyFE/5hTYxbwYOjDS/+rK6qpRI=
golang.org/x/crypto v0.24.0/go.mod h1:Z1PMYSOR5nyMcyAVAIQSKCDwalqy85Aqn1x3Ws4L5DM=
golang.org/x/crypto v0.32.0 h1:euUpcYgM8WcP71gNpTqQCn6rC2t6ULUPiOzfWaXVVfc=
golang.org/x/crypto v0.32.0/go.mod h1:ZnnJkOaASj8g0AjIduWNlq2NRxL0PlBrbKVyZ6V/Ugc=
golang.org/x/mod v0.17.0 h1:zY54UmvipHiNd+pm+m0x9KhZ9hl1/7QNMyxXbc6ICqA=
golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=


@@ -361,7 +361,6 @@ func getTxHistoryInfoFromBridgeBatchDepositMessage(message *orm.BridgeBatchDepos
func (h *HistoryLogic) getCachedTxsInfo(ctx context.Context, cacheKey string, pageNum, pageSize uint64) ([]*types.TxHistoryInfo, uint64, bool, error) {
start := int64((pageNum - 1) * pageSize)
end := start + int64(pageSize) - 1
total, err := h.redis.ZCard(ctx, cacheKey).Result()
if err != nil {
log.Error("failed to get zcard result", "error", err)
@@ -372,6 +371,10 @@ func (h *HistoryLogic) getCachedTxsInfo(ctx context.Context, cacheKey string, pa
return nil, 0, false, nil
}
if start >= total {
return nil, 0, false, nil
}
values, err := h.redis.ZRevRange(ctx, cacheKey, start, end).Result()
if err != nil {
log.Error("failed to get zrange result", "error", err)
@@ -450,5 +453,6 @@ func (h *HistoryLogic) processAndCacheTxHistoryInfo(ctx context.Context, cacheKe
log.Error("cache miss after write, expect hit", "cached key", cacheKey, "page", page, "page size", pageSize, "error", err)
return nil, 0, err
}
return pagedTxs, total, nil
}
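The start >= total guard added above fixes the cache pagination path: previously a request whose start offset lay beyond the cached total fell through to ZRevRange, which returns an empty slice, so an empty page could be served instead of falling back to the database. A minimal self-contained sketch of the corrected bounds check, assuming go-redis v8 as used by this module (key and helper names are illustrative):

package cache

import (
	"context"

	"github.com/go-redis/redis/v8"
)

// pageFromCache pages a Redis sorted set newest-first and reports a
// cache miss when the requested page starts past the cached total.
func pageFromCache(ctx context.Context, rdb *redis.Client, key string, pageNum, pageSize uint64) ([]string, int64, bool, error) {
	start := int64((pageNum - 1) * pageSize)
	end := start + int64(pageSize) - 1

	total, err := rdb.ZCard(ctx, key).Result()
	if err != nil {
		return nil, 0, false, err
	}
	if total == 0 || start >= total {
		// Nothing cached, or the requested page is out of range:
		// report a miss rather than serving an empty page as a hit.
		return nil, 0, false, nil
	}
	values, err := rdb.ZRevRange(ctx, key, start, end).Result()
	if err != nil {
		return nil, 0, false, err
	}
	return values, total, true, nil
}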


@@ -157,7 +157,7 @@ func (c *CrossMessage) GetL2UnclaimedWithdrawalsByAddress(ctx context.Context, s
db = db.Where("tx_status in (?)", []types.TxStatusType{types.TxStatusTypeSent, types.TxStatusTypeFailedRelayed, types.TxStatusTypeRelayTxReverted})
db = db.Where("sender = ?", sender)
db = db.Order("block_timestamp desc")
db = db.Limit(500)
db = db.Limit(10000)
if err := db.Find(&messages).Error; err != nil {
return nil, fmt.Errorf("failed to get L2 claimable withdrawal messages by sender address, sender: %v, error: %w", sender, err)
}


@@ -0,0 +1,26 @@
# Download Go dependencies
FROM scrolltech/go-rust-builder:go-1.22.12-rust-nightly-2025-02-14 as base
WORKDIR /src
COPY go.work* ./
COPY ./rollup/go.* ./rollup/
COPY ./common/go.* ./common/
COPY ./coordinator/go.* ./coordinator/
COPY ./database/go.* ./database/
COPY ./tests/integration-test/go.* ./tests/integration-test/
COPY ./bridge-history-api/go.* ./bridge-history-api/
RUN go mod download -x
# Build coordinator proxy
FROM base as builder
COPY . .
RUN cd ./coordinator && CGO_LDFLAGS="-Wl,--no-as-needed -ldl" make coordinator_proxy && mv ./build/bin/coordinator_proxy /bin/coordinator_proxy
# Pull coordinator proxy into a second stage deploy ubuntu container
FROM ubuntu:20.04
ENV CGO_LDFLAGS="-Wl,--no-as-needed -ldl"
RUN apt update && apt install vim netcat-openbsd net-tools curl jq -y
COPY --from=builder /bin/coordinator_proxy /bin/
RUN /bin/coordinator_proxy --version
WORKDIR /app
ENTRYPOINT ["/bin/coordinator_proxy"]


@@ -0,0 +1,8 @@
assets/
contracts/
docs/
l2geth/
rpc-gateway/
*target/*
permissionless-batches/conf/


@@ -12,10 +12,11 @@ require (
github.com/gin-gonic/gin v1.9.1
github.com/mattn/go-colorable v0.1.13
github.com/mattn/go-isatty v0.0.20
github.com/mitchellh/mapstructure v1.5.0
github.com/modern-go/reflect2 v1.0.2
github.com/orcaman/concurrent-map v1.0.0
github.com/prometheus/client_golang v1.19.0
github.com/scroll-tech/go-ethereum v1.10.14-0.20250625112225-a67863c65587
github.com/scroll-tech/go-ethereum v1.10.14-0.20251128092113-8629f088d78f
github.com/stretchr/testify v1.10.0
github.com/testcontainers/testcontainers-go v0.30.0
github.com/testcontainers/testcontainers-go/modules/compose v0.30.0
@@ -64,7 +65,7 @@ require (
github.com/containerd/typeurl/v2 v2.1.1 // indirect
github.com/cpuguy83/dockercfg v0.3.1 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.4 // indirect
github.com/crate-crypto/go-kzg-4844 v1.1.0 // indirect
github.com/crate-crypto/go-eth-kzg v1.4.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/deckarep/golang-set v0.0.0-20180603214616-504e848d77ea // indirect
github.com/distribution/reference v0.5.0 // indirect
@@ -79,7 +80,7 @@ require (
github.com/docker/go-units v0.5.0 // indirect
github.com/edsrzf/mmap-go v1.0.0 // indirect
github.com/emicklei/go-restful/v3 v3.10.1 // indirect
github.com/ethereum/c-kzg-4844 v1.0.3 // indirect
github.com/ethereum/c-kzg-4844/v2 v2.1.5 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/fjl/memsize v0.0.2 // indirect
github.com/fsnotify/fsevents v0.1.1 // indirect
@@ -147,7 +148,6 @@ require (
github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b // indirect
github.com/miekg/pkcs11 v1.1.1 // indirect
github.com/mitchellh/copystructure v1.2.0 // indirect
github.com/mitchellh/mapstructure v1.5.0 // indirect
github.com/mitchellh/pointerstructure v1.2.0 // indirect
github.com/mitchellh/reflectwalk v1.0.2 // indirect
github.com/mmcloughlin/addchain v0.4.0 // indirect
@@ -184,7 +184,7 @@ require (
github.com/rjeczalik/notify v0.9.1 // indirect
github.com/rs/cors v1.7.0 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178 // indirect
github.com/scroll-tech/da-codec v0.10.0 // indirect
github.com/scroll-tech/zktrie v0.8.4 // indirect
github.com/secure-systems-lab/go-securesystemslib v0.4.0 // indirect
github.com/serialx/hashring v0.0.0-20190422032157-8b2912629002 // indirect
@@ -198,7 +198,7 @@ require (
github.com/spf13/pflag v1.0.5 // indirect
github.com/spf13/viper v1.4.0 // indirect
github.com/status-im/keycard-go v0.2.0 // indirect
github.com/supranational/blst v0.3.13 // indirect
github.com/supranational/blst v0.3.15 // indirect
github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7 // indirect
github.com/theupdateframework/notary v0.7.0 // indirect
github.com/tilt-dev/fsnotify v1.4.8-0.20220602155310-fff9c274a375 // indirect


@@ -155,8 +155,8 @@ github.com/cpuguy83/dockercfg v0.3.1 h1:/FpZ+JaygUR/lZP2NlFI2DVfrOEMAIKP5wWEJdoY
github.com/cpuguy83/dockercfg v0.3.1/go.mod h1:sugsbF4//dDlL/i+S+rtpIWp+5h0BHJHfjj5/jFyUJc=
github.com/cpuguy83/go-md2man/v2 v2.0.4 h1:wfIWP927BUkWJb2NmU/kNDYIBTh/ziUX91+lVfRxZq4=
github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/crate-crypto/go-kzg-4844 v1.1.0 h1:EN/u9k2TF6OWSHrCCDBBU6GLNMq88OspHHlMnHfoyU4=
github.com/crate-crypto/go-kzg-4844 v1.1.0/go.mod h1:JolLjpSff1tCCJKaJx4psrlEdlXuJEC996PL3tTAFks=
github.com/crate-crypto/go-eth-kzg v1.4.0 h1:WzDGjHk4gFg6YzV0rJOAsTK4z3Qkz5jd4RE3DAvPFkg=
github.com/crate-crypto/go-eth-kzg v1.4.0/go.mod h1:J9/u5sWfznSObptgfa92Jq8rTswn6ahQWEuiLHOjCUI=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creack/pty v1.1.17/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4=
github.com/creack/pty v1.1.18 h1:n56/Zwd5o6whRC5PMGretI4IdRLlmBXYNjScPaBgsbY=
@@ -214,8 +214,8 @@ github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7
github.com/envoyproxy/protoc-gen-validate v1.0.2 h1:QkIBuU5k+x7/QXPvPPnWXWlCdaBFApVqftFV6k087DA=
github.com/envoyproxy/protoc-gen-validate v1.0.2/go.mod h1:GpiZQP3dDbg4JouG/NNS7QWXpgx6x8QiMKdmN72jogE=
github.com/erikstmartin/go-testdb v0.0.0-20160219214506-8d10e4a1bae5/go.mod h1:a2zkGnVExMxdzMo3M0Hi/3sEU+cWnZpSni0O6/Yb/P0=
github.com/ethereum/c-kzg-4844 v1.0.3 h1:IEnbOHwjixW2cTvKRUlAAUOeleV7nNM/umJR+qy4WDs=
github.com/ethereum/c-kzg-4844 v1.0.3/go.mod h1:VewdlzQmpT5QSrVhbBuGoCdFJkpaJlO1aQputP83wc0=
github.com/ethereum/c-kzg-4844/v2 v2.1.5 h1:aVtoLK5xwJ6c5RiqO8g8ptJ5KU+2Hdquf6G3aXiHh5s=
github.com/ethereum/c-kzg-4844/v2 v2.1.5/go.mod h1:u59hRTTah4Co6i9fDWtiCjTrblJv0UwsqZKCc0GfgUs=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/fjl/memsize v0.0.2 h1:27txuSD9or+NZlnOWdKUxeBzTAUkWCVh+4Gf2dWFOzA=
@@ -636,10 +636,10 @@ github.com/rs/cors v1.7.0 h1:+88SsELBHx5r+hZ8TCkggzSstaWNbDvThkVK8H6f9ik=
github.com/rs/cors v1.7.0/go.mod h1:gFx+x8UowdsKA9AchylcLynDq+nNFfI8FkUZdN/jGCU=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178 h1:4utngmJHXSOS5FoSdZhEV1xMRirpArbXvyoCZY9nYj0=
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178/go.mod h1:Z6kN5u2khPhiqHyk172kGB7o38bH/nj7Ilrb/46wZGg=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250625112225-a67863c65587 h1:wG1+gb+K4iLtxAHhiAreMdIjP5x9hB64duraN2+u1QU=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250625112225-a67863c65587/go.mod h1:YyfB2AyAtphlbIuDQgaxc2b9mo0zE4EBA1+qtXvzlmg=
github.com/scroll-tech/da-codec v0.10.0 h1:IPHxyTyXTWPV0Q+DZ08cod2fWkhUvrfysmj/VBpB+WU=
github.com/scroll-tech/da-codec v0.10.0/go.mod h1:MBlIP4wCXPcUDZ/Ci2B7n/2IbVU1WBo9OTFTZ5ffE0U=
github.com/scroll-tech/go-ethereum v1.10.14-0.20251128092113-8629f088d78f h1:j6SjP98MoWFFX9TwB1/nFYEkayqHQsrtE66Ll2C+oT0=
github.com/scroll-tech/go-ethereum v1.10.14-0.20251128092113-8629f088d78f/go.mod h1:Aa/kD1XB+OV/7rRxMQrjcPCB4b0pKyLH0gsTrtuHi38=
github.com/scroll-tech/zktrie v0.8.4 h1:UagmnZ4Z3ITCk+aUq9NQZJNAwnWl4gSxsLb2Nl7IgRE=
github.com/scroll-tech/zktrie v0.8.4/go.mod h1:XvNo7vAk8yxNyTjBDj5WIiFzYW4bx/gJ78+NK6Zn6Uk=
github.com/secure-systems-lab/go-securesystemslib v0.4.0 h1:b23VGrQhTA8cN2CbBw7/FulN9fTtqYUdS5+Oxzt+DUE=
@@ -707,8 +707,8 @@ github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/supranational/blst v0.3.13 h1:AYeSxdOMacwu7FBmpfloBz5pbFXDmJL33RuwnKtmTjk=
github.com/supranational/blst v0.3.13/go.mod h1:jZJtfjgudtNl4en1tzwPIV3KjUnQUvG3/j+w+fVonLw=
github.com/supranational/blst v0.3.15 h1:rd9viN6tfARE5wv3KZJ9H8e1cg0jXW8syFCcsbHa76o=
github.com/supranational/blst v0.3.15/go.mod h1:jZJtfjgudtNl4en1tzwPIV3KjUnQUvG3/j+w+fVonLw=
github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7 h1:epCh84lMvA70Z7CTTCmYQn2CKbY8j86K7/FAIr141uY=
github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7/go.mod h1:q4W45IWZaF22tdD+VEXcAWRA037jwmWEB5VWYORlTpc=
github.com/testcontainers/testcontainers-go v0.30.0 h1:jmn/XS22q4YRrcMwWg0pAwlClzs/abopbsBzrepyc4E=


@@ -34,7 +34,7 @@ services:
# Sets up the genesis configuration for the go-ethereum client from a JSON file.
geth-genesis:
image: "ethereum/client-go:v1.13.14"
image: "ethereum/client-go:v1.14.0"
command: --datadir=/data/execution init /data/execution/genesis.json
volumes:
- data:/data
@@ -80,7 +80,7 @@ services:
# Runs the go-ethereum execution client with the specified, unlocked account and necessary
# APIs to allow for proof-of-stake consensus via Prysm.
geth:
image: "ethereum/client-go:v1.13.14"
image: "ethereum/client-go:v1.14.0"
command:
- --http
- --http.api=eth,net,web3


@@ -1,4 +1,4 @@
FROM ethereum/client-go:v1.13.14
FROM ethereum/client-go:v1.14.0
COPY password /l1geth/
COPY genesis.json /l1geth/


@@ -10,6 +10,7 @@ import (
"time"
"github.com/scroll-tech/go-ethereum/ethclient"
"github.com/scroll-tech/go-ethereum/rpc"
"github.com/testcontainers/testcontainers-go"
"github.com/testcontainers/testcontainers-go/modules/compose"
"github.com/testcontainers/testcontainers-go/modules/postgres"
@@ -166,13 +167,13 @@ func (t *TestcontainerApps) GetPoSL1EndPoint() (string, error) {
return contrainer.PortEndpoint(context.Background(), "8545/tcp", "http")
}
// GetPoSL1Client returns a ethclient by dialing running PoS L1 client
func (t *TestcontainerApps) GetPoSL1Client() (*ethclient.Client, error) {
// GetPoSL1Client returns a raw rpc client by dialing the L1 node
func (t *TestcontainerApps) GetPoSL1Client() (*rpc.Client, error) {
endpoint, err := t.GetPoSL1EndPoint()
if err != nil {
return nil, err
}
return ethclient.Dial(endpoint)
return rpc.Dial(endpoint)
}
// GetDBEndPoint returns the endpoint of the running postgres container
@@ -220,11 +221,20 @@ func (t *TestcontainerApps) GetGormDBClient() (*gorm.DB, error) {
// GetL2GethClient returns a ethclient by dialing running L2Geth
func (t *TestcontainerApps) GetL2GethClient() (*ethclient.Client, error) {
rpcCli, err := t.GetL2Client()
if err != nil {
return nil, err
}
return ethclient.NewClient(rpcCli), nil
}
// GetL2GethClient returns a rpc client by dialing running L2Geth
func (t *TestcontainerApps) GetL2Client() (*rpc.Client, error) {
endpoint, err := t.GetL2GethEndPoint()
if err != nil {
return nil, err
}
client, err := ethclient.Dial(endpoint)
client, err := rpc.Dial(endpoint)
if err != nil {
return nil, err
}
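For context on this change: in go-ethereum, ethclient.Client is a typed wrapper around the raw *rpc.Client, which is why GetL2GethClient above can layer the typed API over the raw handle returned by GetL2Client. A minimal sketch of that relationship (endpoint and function name are illustrative):

package testutil

import (
	"github.com/scroll-tech/go-ethereum/ethclient"
	"github.com/scroll-tech/go-ethereum/rpc"
)

// dialBoth opens one RPC connection and exposes it both ways: the raw
// client for arbitrary JSON-RPC calls, the ethclient for the typed API.
func dialBoth(endpoint string) (*rpc.Client, *ethclient.Client, error) {
	rpcCli, err := rpc.Dial(endpoint)
	if err != nil {
		return nil, nil, err
	}
	return rpcCli, ethclient.NewClient(rpcCli), nil
}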


@@ -3,7 +3,6 @@ package testcontainers
import (
"testing"
"github.com/scroll-tech/go-ethereum/ethclient"
"github.com/stretchr/testify/assert"
"gorm.io/gorm"
)
@@ -14,7 +13,6 @@ func TestNewTestcontainerApps(t *testing.T) {
err error
endpoint string
gormDBclient *gorm.DB
ethclient *ethclient.Client
)
testApps := NewTestcontainerApps()
@@ -32,17 +30,17 @@ func TestNewTestcontainerApps(t *testing.T) {
endpoint, err = testApps.GetL2GethEndPoint()
assert.NoError(t, err)
assert.NotEmpty(t, endpoint)
ethclient, err = testApps.GetL2GethClient()
l2RawClient, err := testApps.GetL2Client()
assert.NoError(t, err)
assert.NotNil(t, ethclient)
assert.NotNil(t, l2RawClient)
assert.NoError(t, testApps.StartPoSL1Container())
endpoint, err = testApps.GetPoSL1EndPoint()
assert.NoError(t, err)
assert.NotEmpty(t, endpoint)
ethclient, err = testApps.GetPoSL1Client()
l1RawClient, err := testApps.GetPoSL1Client()
assert.NoError(t, err)
assert.NotNil(t, ethclient)
assert.NotNil(t, l1RawClient)
assert.NoError(t, testApps.StartWeb3SignerContainer(1))
endpoint, err = testApps.GetWeb3SignerEndpoint()


@@ -39,10 +39,12 @@ const (
// ChunkTaskDetail is a type containing ChunkTask detail for chunk task.
type ChunkTaskDetail struct {
Version uint8 `json:"version"`
// use one of the string of "euclidv1" / "euclidv2"
ForkName string `json:"fork_name"`
BlockHashes []common.Hash `json:"block_hashes"`
PrevMsgQueueHash common.Hash `json:"prev_msg_queue_hash"`
PostMsgQueueHash common.Hash `json:"post_msg_queue_hash"`
}
// it is a hex encoded big with fixed length on 48 bytes
@@ -90,40 +92,59 @@ func (e *Byte48) UnmarshalJSON(input []byte) error {
// BatchTaskDetail is a type containing BatchTask detail.
type BatchTaskDetail struct {
Version uint8 `json:"version"`
// use one of the string of "euclidv1" / "euclidv2"
ForkName string `json:"fork_name"`
ChunkInfos []*ChunkInfo `json:"chunk_infos"`
ChunkProofs []*OpenVMChunkProof `json:"chunk_proofs"`
BatchHeader interface{} `json:"batch_header"`
BlobBytes []byte `json:"blob_bytes"`
KzgProof Byte48 `json:"kzg_proof,omitempty"`
KzgCommitment Byte48 `json:"kzg_commitment,omitempty"`
ChallengeDigest common.Hash `json:"challenge_digest,omitempty"`
ForkName string `json:"fork_name"`
ChunkProofs []*OpenVMChunkProof `json:"chunk_proofs"`
BatchHeader interface{} `json:"batch_header"`
BlobBytes []byte `json:"blob_bytes"`
KzgProof *Byte48 `json:"kzg_proof,omitempty"`
KzgCommitment *Byte48 `json:"kzg_commitment,omitempty"`
// ChallengeDigest should be a common.Hash type if it is not nil
ChallengeDigest interface{} `json:"challenge_digest,omitempty"`
}
// BundleTaskDetail consists of all the information required to describe the task to generate a proof for a bundle of batches.
type BundleTaskDetail struct {
Version uint8 `json:"version"`
// use one of the string of "euclidv1" / "euclidv2"
ForkName string `json:"fork_name"`
BatchProofs []*OpenVMBatchProof `json:"batch_proofs"`
BundleInfo *OpenVMBundleInfo `json:"bundle_info,omitempty"`
}
type RawBytes []byte
func (r RawBytes) MarshalJSON() ([]byte, error) {
if r == nil {
return []byte("null"), nil
}
// Marshal the []byte as a JSON array of numbers
rn := make([]uint16, len(r))
for i := range r {
rn[i] = uint16(r[i])
}
return json.Marshal(rn)
}
// ChunkInfo is for calculating pi_hash for chunk
type ChunkInfo struct {
ChainID uint64 `json:"chain_id"`
PrevStateRoot common.Hash `json:"prev_state_root"`
PostStateRoot common.Hash `json:"post_state_root"`
WithdrawRoot common.Hash `json:"withdraw_root"`
DataHash common.Hash `json:"data_hash"`
IsPadding bool `json:"is_padding"`
TxBytes []byte `json:"tx_bytes"`
ChainID uint64 `json:"chain_id"`
PrevStateRoot common.Hash `json:"prev_state_root"`
PostStateRoot common.Hash `json:"post_state_root"`
WithdrawRoot common.Hash `json:"withdraw_root"`
DataHash common.Hash `json:"data_hash"`
IsPadding bool `json:"is_padding"`
// TxBytes []byte `json:"tx_bytes"`
TxBytesHash common.Hash `json:"tx_data_digest"`
PrevMsgQueueHash common.Hash `json:"prev_msg_queue_hash"`
PostMsgQueueHash common.Hash `json:"post_msg_queue_hash"`
TxDataLength uint64 `json:"tx_data_length"`
InitialBlockNumber uint64 `json:"initial_block_number"`
BlockCtxs []BlockContextV2 `json:"block_ctxs"`
PrevBlockhash common.Hash `json:"prev_blockhash"`
PostBlockhash common.Hash `json:"post_blockhash"`
EncryptionKey RawBytes `json:"encryption_key"`
}
// BlockContextV2 is the block context for euclid v2
@@ -186,6 +207,7 @@ type OpenVMBatchInfo struct {
ChainID uint64 `json:"chain_id"`
PrevMsgQueueHash common.Hash `json:"prev_msg_queue_hash"`
PostMsgQueueHash common.Hash `json:"post_msg_queue_hash"`
EncryptionKey RawBytes `json:"encryption_key"`
}
// BatchProof includes the proof info that are required for batch verification and rollup.
@@ -246,6 +268,7 @@ type OpenVMBundleInfo struct {
PrevBatchHash common.Hash `json:"prev_batch_hash"`
BatchHash common.Hash `json:"batch_hash"`
MsgQueueHash common.Hash `json:"msg_queue_hash"`
EncryptionKey RawBytes `json:"encryption_key"`
}
// OpenVMBundleProof includes the proof info that are required for verification of a bundle of batch proofs.
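The RawBytes helper introduced earlier in this file exists because encoding/json serializes a plain []byte as a base64 string, whereas the fields above want a JSON array of numbers; widening each byte to uint16 before marshaling forces the array encoding (a []uint8 slice would round-trip back to base64). A small self-contained demonstration of that behavior:

package main

import (
	"encoding/json"
	"fmt"
)

// RawBytes reproduced from the diff above for a runnable demo.
type RawBytes []byte

func (r RawBytes) MarshalJSON() ([]byte, error) {
	if r == nil {
		return []byte("null"), nil
	}
	rn := make([]uint16, len(r))
	for i := range r {
		rn[i] = uint16(r[i])
	}
	return json.Marshal(rn)
}

func main() {
	plain, _ := json.Marshal([]byte{1, 2, 255}) // default []byte -> "AQL/"
	arr, _ := json.Marshal(RawBytes{1, 2, 255}) // RawBytes -> [1,2,255]
	fmt.Println(string(plain), string(arr))
}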


@@ -4,6 +4,7 @@ import (
"net/http"
"github.com/gin-gonic/gin"
"github.com/mitchellh/mapstructure"
)
// Response the response schema
@@ -13,6 +14,19 @@ type Response struct {
Data interface{} `json:"data"`
}
func (resp *Response) DecodeData(out interface{}) error {
// Decode generically unmarshaled JSON (map[string]any, []any) into a typed struct
// honoring `json` tags and allowing weak type conversions.
dec, err := mapstructure.NewDecoder(&mapstructure.DecoderConfig{
TagName: "json",
Result: out,
})
if err != nil {
return err
}
return dec.Decode(resp.Data)
}
// RenderJSON renders response with json
func RenderJSON(ctx *gin.Context, errCode int, err error, data interface{}) {
var errMsg string
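DecodeData above re-types the generically unmarshaled Data field (a map[string]interface{} once encoding/json has produced the Response) into a caller-supplied struct, reusing the struct's own json tags via mapstructure's TagName option. A minimal usage sketch; the token payload type is illustrative, and the errcode/errmsg field names on Response are assumed, as only the Data field is visible in this diff:

package main

import (
	"encoding/json"
	"fmt"

	"github.com/mitchellh/mapstructure"
)

// Response mirrors the schema above; errcode/errmsg names are assumed.
type Response struct {
	ErrCode int         `json:"errcode"`
	ErrMsg  string      `json:"errmsg"`
	Data    interface{} `json:"data"`
}

// loginData is an illustrative payload type, not taken from the diff.
type loginData struct {
	Token string `json:"token"`
}

func main() {
	var resp Response
	_ = json.Unmarshal([]byte(`{"errcode":0,"errmsg":"","data":{"token":"abc"}}`), &resp)

	var out loginData
	dec, _ := mapstructure.NewDecoder(&mapstructure.DecoderConfig{
		TagName: "json", // honor the same tags encoding/json uses
		Result:  &out,
	})
	_ = dec.Decode(resp.Data) // Data is a map[string]interface{} here
	fmt.Println(out.Token)    // prints "abc"
}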


@@ -5,7 +5,7 @@ import (
"runtime/debug"
)
var tag = "v4.5.46"
var tag = "v4.7.10"
var commit = func() string {
if info, ok := debug.ReadBuildInfo(); ok {


@@ -34,11 +34,21 @@ coordinator_cron:
coordinator_tool:
go build -ldflags "-X scroll-tech/common/version.ZkVersion=${ZK_VERSION}" -o $(PWD)/build/bin/coordinator_tool ./cmd/tool
coordinator_proxy:
go build -ldflags "-X scroll-tech/common/version.ZkVersion=${ZK_VERSION}" -tags="mock_prover mock_verifier" -o $(PWD)/build/bin/coordinator_proxy ./cmd/proxy
localsetup: coordinator_api ## Local setup: build coordinator_api, copy config, and setup releases
mkdir -p build/bin/conf
@echo "Copying configuration files..."
cp -r $(PWD)/conf $(PWD)/build/bin/
@if [ -f "$(PWD)/conf/config.template.json" ]; then \
SRC="$(PWD)/conf/config.template.json"; \
else \
SRC="$(CURDIR)/conf/config.json"; \
fi; \
cp -fL "$$SRC" "$(CURDIR)/build/bin/conf/config.template.json"
@echo "Setting up releases..."
cd $(PWD)/build && bash setup_releases.sh
cd $(CURDIR)/build && bash setup_releases.sh
#coordinator_api_skip_libzkp:


@@ -6,8 +6,11 @@ if [ -z "${SCROLL_ZKVM_VERSION}" ]; then
exit 1
fi
# default fork name from env or "galileo"
SCROLL_FORK_NAME="${SCROLL_FORK_NAME:-galileov2}"
# set ASSET_DIR by reading from config.json
CONFIG_FILE="bin/conf/config.json"
CONFIG_FILE="bin/conf/config.template.json"
if [ ! -f "$CONFIG_FILE" ]; then
echo "Config file $CONFIG_FILE not found"
exit 1
@@ -28,7 +31,13 @@ for ((i=0; i<$VERIFIER_COUNT; i++)); do
# extract assets_path for current verifier
ASSETS_PATH=$(jq -r ".prover_manager.verifier.verifiers[$i].assets_path" "$CONFIG_FILE")
FORK_NAME=$(jq -r ".prover_manager.verifier.verifiers[$i].fork_name" "$CONFIG_FILE")
# skip if this verifier's fork doesn't match the target fork
if [ "$FORK_NAME" != "$SCROLL_FORK_NAME" ]; then
echo "Expect $SCROLL_FORK_NAME, skip current fork ($FORK_NAME)"
continue
fi
if [ "$ASSETS_PATH" = "null" ]; then
echo "Warning: Could not find assets_path for verifier $i, skipping..."
continue
@@ -53,6 +62,7 @@ for ((i=0; i<$VERIFIER_COUNT; i++)); do
# assets for verifier-only mode
echo "Downloading assets for $FORK_NAME to $ASSET_DIR..."
wget https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/verifier/verifier.bin -O ${ASSET_DIR}/verifier.bin
wget https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/verifier/root_verifier_vk -O ${ASSET_DIR}/root_verifier_vk
wget https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/$SCROLL_ZKVM_VERSION/verifier/openVmVk.json -O ${ASSET_DIR}/openVmVk.json
echo "Completed downloading assets for $FORK_NAME"


@@ -91,11 +91,13 @@ func (c *CoordinatorApp) MockConfig(store bool) error {
ProversPerSession: 1,
Verifier: &coordinatorConfig.VerifierConfig{
MinProverVersion: "v4.4.89",
Verifiers: []coordinatorConfig.AssetConfig{{
AssetsPath: "",
ForkName: "feynman",
Verifiers: []coordinatorConfig.AssetConfig{
{
AssetsPath: "",
ForkName: "galileo",
},
},
}},
},
BatchCollectionTimeSec: 60,
ChunkCollectionTimeSec: 60,
SessionAttempts: 10,


@@ -0,0 +1,122 @@
package app
import (
"context"
"errors"
"fmt"
"net/http"
"os"
"os/signal"
"time"
"github.com/gin-gonic/gin"
"github.com/prometheus/client_golang/prometheus"
"github.com/scroll-tech/go-ethereum/log"
"github.com/urfave/cli/v2"
"gorm.io/gorm"
"scroll-tech/common/database"
"scroll-tech/common/observability"
"scroll-tech/common/utils"
"scroll-tech/common/version"
"scroll-tech/coordinator/internal/config"
"scroll-tech/coordinator/internal/controller/proxy"
"scroll-tech/coordinator/internal/route"
)
var app *cli.App
func init() {
// Set up coordinator app info.
app = cli.NewApp()
app.Action = action
app.Name = "coordinator proxy"
app.Usage = "Proxy for multiple Scroll L2 Coordinators"
app.Version = version.Version
app.Flags = append(app.Flags, utils.CommonFlags...)
app.Flags = append(app.Flags, apiFlags...)
app.Before = func(ctx *cli.Context) error {
return utils.LogSetup(ctx)
}
// Register `coordinator-test` app for integration-test.
utils.RegisterSimulation(app, utils.CoordinatorAPIApp)
}
func action(ctx *cli.Context) error {
cfgFile := ctx.String(utils.ConfigFileFlag.Name)
cfg, err := config.NewProxyConfig(cfgFile)
if err != nil {
log.Crit("failed to load config file", "config file", cfgFile, "error", err)
}
var db *gorm.DB
if dbCfg := cfg.ProxyManager.DB; dbCfg != nil {
log.Info("Apply persistent storage", "via", cfg.ProxyManager.DB.DSN)
db, err = database.InitDB(cfg.ProxyManager.DB)
if err != nil {
log.Crit("failed to init db connection", "err", err)
}
defer func() {
if err = database.CloseDB(db); err != nil {
log.Error("can not close db connection", "error", err)
}
}()
observability.Server(ctx, db)
}
registry := prometheus.DefaultRegisterer
apiSrv := server(ctx, cfg, db, registry)
log.Info(
"Start coordinator api successfully.",
"version", version.Version,
)
// Catch CTRL-C to ensure a graceful shutdown.
interrupt := make(chan os.Signal, 1)
signal.Notify(interrupt, os.Interrupt)
// Wait until the interrupt signal is received from an OS signal.
<-interrupt
log.Info("start shutdown coordinator proxy server ...")
closeCtx, cancelExit := context.WithTimeout(context.Background(), 5*time.Second)
defer cancelExit()
if err = apiSrv.Shutdown(closeCtx); err != nil {
log.Warn("shutdown coordinator proxy server failure", "error", err)
return nil
}
<-closeCtx.Done()
log.Info("coordinator proxy server exiting success")
return nil
}
func server(ctx *cli.Context, cfg *config.ProxyConfig, db *gorm.DB, reg prometheus.Registerer) *http.Server {
router := gin.New()
proxy.InitController(cfg, db, reg)
route.ProxyRoute(router, cfg, reg)
port := ctx.String(httpPortFlag.Name)
srv := &http.Server{
Addr: fmt.Sprintf(":%s", port),
Handler: router,
ReadHeaderTimeout: time.Minute,
}
go func() {
if runServerErr := srv.ListenAndServe(); runServerErr != nil && !errors.Is(runServerErr, http.ErrServerClosed) {
log.Crit("run coordinator proxy http server failure", "error", runServerErr)
}
}()
return srv
}
// Run coordinator.
func Run() {
// RunApp the coordinator.
if err := app.Run(os.Args); err != nil {
_, _ = fmt.Fprintln(os.Stderr, err)
os.Exit(1)
}
}
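One subtlety in the shutdown path of action() above: http.Server.Shutdown already blocks until in-flight requests drain or the context expires, and cancelExit only runs via defer on return, so the trailing <-closeCtx.Done() keeps the process alive for the remainder of the 5-second budget even after a clean shutdown. A self-contained restatement of the sequence, using the standard library logger for brevity:

package main

import (
	"context"
	"log"
	"net/http"
	"time"
)

func shutdown(srv *http.Server) {
	closeCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Shutdown returns as soon as in-flight requests drain,
	// or with an error once the context expires.
	if err := srv.Shutdown(closeCtx); err != nil {
		log.Printf("shutdown failure: %v", err)
		return
	}
	// As in action() above, this blocks until the full 5s elapse,
	// even when Shutdown already returned cleanly.
	<-closeCtx.Done()
	log.Println("server exited")
}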


@@ -0,0 +1,30 @@
package app
import "github.com/urfave/cli/v2"
var (
apiFlags = []cli.Flag{
// http flags
&httpEnabledFlag,
&httpListenAddrFlag,
&httpPortFlag,
}
// httpEnabledFlag enable rpc server.
httpEnabledFlag = cli.BoolFlag{
Name: "http",
Usage: "Enable the HTTP-RPC server",
Value: false,
}
// httpListenAddrFlag set the http address.
httpListenAddrFlag = cli.StringFlag{
Name: "http.addr",
Usage: "HTTP-RPC server listening interface",
Value: "localhost",
}
// httpPortFlag set http.port.
httpPortFlag = cli.IntFlag{
Name: "http.port",
Usage: "HTTP-RPC server listening port",
Value: 8590,
}
)


@@ -0,0 +1,7 @@
package main
import "scroll-tech/coordinator/cmd/proxy/app"
func main() {
app.Run()
}


@@ -36,7 +36,7 @@ func verify(cCtx *cli.Context) error {
return fmt.Errorf("error reading file: %w", err)
}
vf, err := verifier.NewVerifier(cfg.ProverManager.Verifier)
vf, err := verifier.NewVerifier(cfg.ProverManager.Verifier, cfg.L2.ValidiumMode)
if err != nil {
return err
}


@@ -10,13 +10,18 @@
"min_prover_version": "v4.4.45",
"verifiers": [
{
"assets_path": "assets",
"fork_name": "euclidV2"
"features": "legacy_witness:openvm_13",
"assets_path": "assets_feynman",
"fork_name": "feynman"
},
{
"assets_path": "assets",
"fork_name": "feynman"
}
"fork_name": "galileo"
},
{
"assets_path": "assets_v2",
"fork_name": "galileoV2"
}
]
}
},


@@ -0,0 +1,31 @@
{
"proxy_manager": {
"proxy_cli": {
"proxy_name": "proxy_name",
"secret": "client private key"
},
"auth": {
"secret": "proxy secret key",
"challenge_expire_duration_sec": 3600,
"login_expire_duration_sec": 3600
},
"verifier": {
"min_prover_version": "v4.4.45",
"verifiers": []
},
"db": {
"driver_name": "postgres",
"dsn": "postgres://localhost/scroll?sslmode=disable",
"maxOpenNum": 200,
"maxIdleNum": 20
}
},
"coordinators": {
"sepolia": {
"base_url": "http://localhost:8555",
"retry_count": 10,
"retry_wait_time_sec": 10,
"connection_timeout_sec": 30
}
}
}
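A hypothetical sketch of consuming this sample through the config.NewProxyConfig entry point used by the proxy app above. Only NewProxyConfig and the ProxyManager.DB/DSN fields appear in this diff; the Coordinators map and its BaseURL field are assumptions for illustration:

package main

import (
	"github.com/scroll-tech/go-ethereum/log"

	"scroll-tech/coordinator/internal/config"
)

func loadProxyConfig() {
	cfg, err := config.NewProxyConfig("conf/config.json")
	if err != nil {
		log.Crit("failed to load config file", "error", err)
	}
	// Assumed field names, mirroring the "coordinators" section above.
	for name, c := range cfg.Coordinators {
		log.Info("upstream coordinator", "name", name, "base_url", c.BaseURL)
	}
	// ProxyManager.DB and DSN are confirmed by the proxy app diff.
	if cfg.ProxyManager.DB != nil {
		log.Info("persistent storage enabled", "dsn", cfg.ProxyManager.DB.DSN)
	}
}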


@@ -9,8 +9,8 @@ require (
github.com/google/uuid v1.6.0
github.com/mitchellh/mapstructure v1.5.0
github.com/prometheus/client_golang v1.19.0
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178
github.com/scroll-tech/go-ethereum v1.10.14-0.20250626110859-cc9a1dd82de7
github.com/scroll-tech/da-codec v0.10.0
github.com/scroll-tech/go-ethereum v1.10.14-0.20251128092113-8629f088d78f
github.com/shopspring/decimal v1.3.1
github.com/stretchr/testify v1.10.0
github.com/urfave/cli/v2 v2.25.7
@@ -54,11 +54,11 @@ require (
github.com/consensys/bavard v0.1.29 // indirect
github.com/consensys/gnark-crypto v0.16.0 // indirect
github.com/cpuguy83/go-md2man/v2 v2.0.3 // indirect
github.com/crate-crypto/go-kzg-4844 v1.1.0 // indirect
github.com/crate-crypto/go-eth-kzg v1.4.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/deckarep/golang-set v0.0.0-20180603214616-504e848d77ea // indirect
github.com/edsrzf/mmap-go v1.0.0 // indirect
github.com/ethereum/c-kzg-4844 v1.0.3 // indirect
github.com/ethereum/c-kzg-4844/v2 v2.1.5 // indirect
github.com/fjl/memsize v0.0.2 // indirect
github.com/gballet/go-libpcsclite v0.0.0-20190607065134-2772fd86a8ff // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
@@ -92,7 +92,7 @@ require (
github.com/shirou/gopsutil v3.21.11+incompatible // indirect
github.com/sourcegraph/conc v0.3.0 // indirect
github.com/status-im/keycard-go v0.0.0-20190316090335-8537d3370df4 // indirect
github.com/supranational/blst v0.3.13 // indirect
github.com/supranational/blst v0.3.15 // indirect
github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7 // indirect
github.com/tklauser/go-sysconf v0.3.14 // indirect
github.com/tklauser/numcpus v0.9.0 // indirect


@@ -47,8 +47,8 @@ github.com/consensys/gnark-crypto v0.16.0 h1:8Dl4eYmUWK9WmlP1Bj6je688gBRJCJbT8Mw
github.com/consensys/gnark-crypto v0.16.0/go.mod h1:Ke3j06ndtPTVvo++PhGNgvm+lgpLvzbcE2MqljY7diU=
github.com/cpuguy83/go-md2man/v2 v2.0.3 h1:qMCsGGgs+MAzDFyp9LpAe1Lqy/fY/qCovCm0qnXZOBM=
github.com/cpuguy83/go-md2man/v2 v2.0.3/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/crate-crypto/go-kzg-4844 v1.1.0 h1:EN/u9k2TF6OWSHrCCDBBU6GLNMq88OspHHlMnHfoyU4=
github.com/crate-crypto/go-kzg-4844 v1.1.0/go.mod h1:JolLjpSff1tCCJKaJx4psrlEdlXuJEC996PL3tTAFks=
github.com/crate-crypto/go-eth-kzg v1.4.0 h1:WzDGjHk4gFg6YzV0rJOAsTK4z3Qkz5jd4RE3DAvPFkg=
github.com/crate-crypto/go-eth-kzg v1.4.0/go.mod h1:J9/u5sWfznSObptgfa92Jq8rTswn6ahQWEuiLHOjCUI=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v0.0.0-20171005155431-ecdeabc65495/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -59,8 +59,8 @@ github.com/deckarep/golang-set v0.0.0-20180603214616-504e848d77ea/go.mod h1:93vs
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/edsrzf/mmap-go v1.0.0 h1:CEBF7HpRnUCSJgGUb5h1Gm7e3VkmVDrR8lvWVLtrOFw=
github.com/edsrzf/mmap-go v1.0.0/go.mod h1:YO35OhQPt3KJa3ryjFM5Bs14WD66h8eGKpfaBNrHW5M=
github.com/ethereum/c-kzg-4844 v1.0.3 h1:IEnbOHwjixW2cTvKRUlAAUOeleV7nNM/umJR+qy4WDs=
github.com/ethereum/c-kzg-4844 v1.0.3/go.mod h1:VewdlzQmpT5QSrVhbBuGoCdFJkpaJlO1aQputP83wc0=
github.com/ethereum/c-kzg-4844/v2 v2.1.5 h1:aVtoLK5xwJ6c5RiqO8g8ptJ5KU+2Hdquf6G3aXiHh5s=
github.com/ethereum/c-kzg-4844/v2 v2.1.5/go.mod h1:u59hRTTah4Co6i9fDWtiCjTrblJv0UwsqZKCc0GfgUs=
github.com/fjl/memsize v0.0.2 h1:27txuSD9or+NZlnOWdKUxeBzTAUkWCVh+4Gf2dWFOzA=
github.com/fjl/memsize v0.0.2/go.mod h1:VvhXpOYNQvB+uIk2RvXzuaQtkQJzzIx6lSBe1xv7hi0=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
@@ -253,10 +253,10 @@ github.com/rs/cors v1.7.0 h1:+88SsELBHx5r+hZ8TCkggzSstaWNbDvThkVK8H6f9ik=
github.com/rs/cors v1.7.0/go.mod h1:gFx+x8UowdsKA9AchylcLynDq+nNFfI8FkUZdN/jGCU=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178 h1:4utngmJHXSOS5FoSdZhEV1xMRirpArbXvyoCZY9nYj0=
github.com/scroll-tech/da-codec v0.1.3-0.20250826112206-b4cce5c5d178/go.mod h1:Z6kN5u2khPhiqHyk172kGB7o38bH/nj7Ilrb/46wZGg=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250626110859-cc9a1dd82de7 h1:1rN1qocsQlOyk1VCpIEF1J5pfQbLAi1pnMZSLQS37jQ=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250626110859-cc9a1dd82de7/go.mod h1:pDCZ4iGvEGmdIe4aSAGBrb7XSrKEML6/L/wEMmNxOdk=
github.com/scroll-tech/da-codec v0.10.0 h1:IPHxyTyXTWPV0Q+DZ08cod2fWkhUvrfysmj/VBpB+WU=
github.com/scroll-tech/da-codec v0.10.0/go.mod h1:MBlIP4wCXPcUDZ/Ci2B7n/2IbVU1WBo9OTFTZ5ffE0U=
github.com/scroll-tech/go-ethereum v1.10.14-0.20251128092113-8629f088d78f h1:j6SjP98MoWFFX9TwB1/nFYEkayqHQsrtE66Ll2C+oT0=
github.com/scroll-tech/go-ethereum v1.10.14-0.20251128092113-8629f088d78f/go.mod h1:Aa/kD1XB+OV/7rRxMQrjcPCB4b0pKyLH0gsTrtuHi38=
github.com/scroll-tech/zktrie v0.8.4 h1:UagmnZ4Z3ITCk+aUq9NQZJNAwnWl4gSxsLb2Nl7IgRE=
github.com/scroll-tech/zktrie v0.8.4/go.mod h1:XvNo7vAk8yxNyTjBDj5WIiFzYW4bx/gJ78+NK6Zn6Uk=
github.com/shirou/gopsutil v3.21.11+incompatible h1:+1+c1VGhc88SSonWP6foOcLhvnKlUeu/erjjvaPEYiI=
@@ -282,8 +282,8 @@ github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/supranational/blst v0.3.13 h1:AYeSxdOMacwu7FBmpfloBz5pbFXDmJL33RuwnKtmTjk=
github.com/supranational/blst v0.3.13/go.mod h1:jZJtfjgudtNl4en1tzwPIV3KjUnQUvG3/j+w+fVonLw=
github.com/supranational/blst v0.3.15 h1:rd9viN6tfARE5wv3KZJ9H8e1cg0jXW8syFCcsbHa76o=
github.com/supranational/blst v0.3.15/go.mod h1:jZJtfjgudtNl4en1tzwPIV3KjUnQUvG3/j+w+fVonLw=
github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7 h1:epCh84lMvA70Z7CTTCmYQn2CKbY8j86K7/FAIr141uY=
github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7/go.mod h1:q4W45IWZaF22tdD+VEXcAWRA037jwmWEB5VWYORlTpc=
github.com/tidwall/gjson v1.14.3 h1:9jvXn7olKEHU1S9vwoMGliaT8jq1vJ7IH/n9zD9Dnlw=

View File

@@ -36,8 +36,9 @@ type L2Endpoint struct {
// L2 loads l2geth configuration items.
type L2 struct {
// l2geth chain_id.
ChainID uint64 `json:"chain_id"`
Endpoint *L2Endpoint `json:"l2geth"`
ChainID uint64 `json:"chain_id"`
Endpoint *L2Endpoint `json:"l2geth"`
ValidiumMode bool `json:"validium_mode"`
}
// Auth loads auth configuration items.
@@ -47,20 +48,28 @@ type Auth struct {
LoginExpireDurationSec int `json:"login_expire_duration_sec"`
}
// Sequencer holds the sequencer-controlled data
type Sequencer struct {
DecryptionKey string `json:"decryption_key"`
}
// Config loads configuration items.
type Config struct {
ProverManager *ProverManager `json:"prover_manager"`
DB *database.Config `json:"db"`
L2 *L2 `json:"l2"`
Auth *Auth `json:"auth"`
Sequencer *Sequencer `json:"sequencer"`
}
// AssetConfig contains the assets configured for each fork; the default vk file name is "OpenVmVk.json".
type AssetConfig struct {
AssetsPath string `json:"assets_path"`
Version uint8 `json:"version,omitempty"`
ForkName string `json:"fork_name"`
Vkfile string `json:"vk_file,omitempty"`
MinProverVersion string `json:"min_prover_version,omitempty"`
Features string `json:"features,omitempty"`
}
// VerifierConfig loads the zk verifier config.

View File
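The new Features field and the omitempty tags line up with the verifiers array shown in the config template earlier. A minimal, self-contained sketch of that mapping (the struct is copied from this hunk; the sample JSON is abbreviated from the template):

package main

import (
	"encoding/json"
	"fmt"
)

type AssetConfig struct {
	AssetsPath       string `json:"assets_path"`
	Version          uint8  `json:"version,omitempty"`
	ForkName         string `json:"fork_name"`
	Vkfile           string `json:"vk_file,omitempty"`
	MinProverVersion string `json:"min_prover_version,omitempty"`
	Features         string `json:"features,omitempty"`
}

func main() {
	raw := `[
		{"features": "legacy_witness:openvm_13", "assets_path": "assets_feynman", "fork_name": "feynman"},
		{"assets_path": "assets", "fork_name": "galileo"},
		{"assets_path": "assets_v2", "fork_name": "galileoV2"}
	]`
	var verifiers []AssetConfig
	if err := json.Unmarshal([]byte(raw), &verifiers); err != nil {
		panic(err)
	}
	for _, v := range verifiers {
		fmt.Printf("fork=%s assets=%s features=%q\n", v.ForkName, v.AssetsPath, v.Features)
	}
}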

@@ -23,7 +23,7 @@ func TestConfig(t *testing.T) {
"min_prover_version": "v4.4.45",
"verifiers": [{
"assets_path": "assets",
"fork_name": "feynman"
"fork_name": "galileo"
}]
},
"max_verifier_workers": 4
@@ -35,13 +35,17 @@ func TestConfig(t *testing.T) {
"maxIdleNum": 20
},
"l2": {
"chain_id": 111
"chain_id": 111,
"validium_mode": false
},
"auth": {
"secret": "prover secret key",
"challenge_expire_duration_sec": 3600,
"login_expire_duration_sec": 3600
}
},
"sequencer": {
"decryption_key": "sequencer decryption key"
}
}`
t.Run("Success Case", func(t *testing.T) {

View File

@@ -0,0 +1,74 @@
package config
import (
"encoding/json"
"os"
"path/filepath"
"scroll-tech/common/database"
"scroll-tech/common/utils"
)
// ProxyManager loads the proxy configuration items.
type ProxyManager struct {
// Zk verifier config help to confine the connected prover.
Verifier *VerifierConfig `json:"verifier"`
Client *ProxyClient `json:"proxy_cli"`
Auth *Auth `json:"auth"`
DB *database.Config `json:"db,omitempty"`
}
func (m *ProxyManager) Normalize() {
if m.Client.Secret == "" {
m.Client.Secret = m.Auth.Secret
}
if m.Client.ProxyVersion == "" {
m.Client.ProxyVersion = m.Verifier.MinProverVersion
}
}
// ProxyClient is the configuration for connecting to an upstream as a client
type ProxyClient struct {
ProxyName string `json:"proxy_name"`
ProxyVersion string `json:"proxy_version,omitempty"`
Secret string `json:"secret,omitempty"`
}
// UpStream holds the configuration for one upstream coordinator
type UpStream struct {
BaseUrl string `json:"base_url"`
RetryCount uint `json:"retry_count"`
RetryWaitTime uint `json:"retry_wait_time_sec"`
ConnectionTimeoutSec uint `json:"connection_timeout_sec"`
CompatibileMode bool `json:"compatible_mode,omitempty"`
}
// ProxyConfig loads the proxy configuration items.
type ProxyConfig struct {
ProxyManager *ProxyManager `json:"proxy_manager"`
ProxyName string `json:"proxy_name"`
Coordinators map[string]*UpStream `json:"coordinators"`
}
// NewProxyConfig returns a new instance of ProxyConfig.
func NewProxyConfig(file string) (*ProxyConfig, error) {
buf, err := os.ReadFile(filepath.Clean(file))
if err != nil {
return nil, err
}
cfg := &ProxyConfig{}
err = json.Unmarshal(buf, cfg)
if err != nil {
return nil, err
}
// Override config with environment variables
err = utils.OverrideConfigWithEnv(cfg, "SCROLL_COORDINATOR_PROXY")
if err != nil {
return nil, err
}
return cfg, nil
}

View File
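Normalize wires in two fallbacks: the proxy client's secret defaults to the auth secret, and its reported version defaults to the verifier's minimum prover version. A runnable sketch with trimmed copies of the structs (field values taken from the config template above):

package main

import "fmt"

// Trimmed copies of the structs from this file, just enough to show the
// Normalize fallbacks.
type Auth struct{ Secret string }
type VerifierConfig struct{ MinProverVersion string }
type ProxyClient struct{ ProxyName, ProxyVersion, Secret string }

type ProxyManager struct {
	Verifier *VerifierConfig
	Client   *ProxyClient
	Auth     *Auth
}

func (m *ProxyManager) Normalize() {
	if m.Client.Secret == "" {
		m.Client.Secret = m.Auth.Secret
	}
	if m.Client.ProxyVersion == "" {
		m.Client.ProxyVersion = m.Verifier.MinProverVersion
	}
}

func main() {
	m := &ProxyManager{
		Verifier: &VerifierConfig{MinProverVersion: "v4.4.45"},
		Client:   &ProxyClient{ProxyName: "proxy_name"},
		Auth:     &Auth{Secret: "proxy secret key"},
	}
	m.Normalize()
	fmt.Println(m.Client.Secret, m.Client.ProxyVersion) // proxy secret key v4.4.45
}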

@@ -19,28 +19,56 @@ type AuthController struct {
loginLogic *auth.LoginLogic
}
// NewAuthControllerWithLogic returns an AuthController instance built from an existing LoginLogic
func NewAuthController(db *gorm.DB, cfg *config.Config, vf *verifier.Verifier) *AuthController {
func NewAuthControllerWithLogic(loginLogic *auth.LoginLogic) *AuthController {
return &AuthController{
loginLogic: auth.NewLoginLogic(db, cfg, vf),
loginLogic: loginLogic,
}
}
// Login the api controller for login
// NewAuthController returns an AuthController instance
func NewAuthController(db *gorm.DB, cfg *config.Config, vf *verifier.Verifier) *AuthController {
return &AuthController{
loginLogic: auth.NewLoginLogic(db, cfg.ProverManager.Verifier, vf),
}
}
// Login the api controller for login, used as the Authenticator in JWT
// It can work in two modes: the full process for a normal login or, when the
// login request is posted by a proxy, a simpler process that logs in a client
func (a *AuthController) Login(c *gin.Context) (interface{}, error) {
// check if the login is posted by a proxy
var viaProxy bool
if proverType, proverTypeExist := c.Get(types.ProverProviderTypeKey); proverTypeExist {
proverType := uint8(proverType.(float64))
viaProxy = proverType == types.ProverProviderTypeProxy
}
var login types.LoginParameter
if err := c.ShouldBind(&login); err != nil {
return "", fmt.Errorf("missing the public_key, err:%w", err)
}
// check login parameter's token is equal to bearer token, the Authorization must be existed
// if not exist, the jwt token will intercept it
brearToken := c.GetHeader("Authorization")
if brearToken != "Bearer "+login.Message.Challenge {
return "", errors.New("check challenge failure for the not equal challenge string")
// if not posted by a proxy, run the normal login flow
if !viaProxy {
// check that the login parameter's token equals the bearer token; the Authorization
// header must exist (if it does not, the jwt middleware intercepts the request)
bearerToken := c.GetHeader("Authorization")
if bearerToken != "Bearer "+login.Message.Challenge {
return "", errors.New("challenge check failure: challenge string mismatch")
}
if err := auth.VerifyMsg(&login); err != nil {
return "", err
}
// check the challenge is used, if used, return failure
if err := a.loginLogic.InsertChallengeString(c, login.Message.Challenge); err != nil {
return "", fmt.Errorf("login insert challenge string failure:%w", err)
}
}
if err := a.loginLogic.Check(&login); err != nil {
if err := a.loginLogic.CompatibilityCheck(&login); err != nil {
return "", fmt.Errorf("check the login parameter failure: %w", err)
}
@@ -49,11 +77,6 @@ func (a *AuthController) Login(c *gin.Context) (interface{}, error) {
return "", fmt.Errorf("prover hard fork name failure:%w", err)
}
// check the challenge is used, if used, return failure
if err := a.loginLogic.InsertChallengeString(c, login.Message.Challenge); err != nil {
return "", fmt.Errorf("login insert challenge string failure:%w", err)
}
returnData := types.LoginParameterWithHardForkName{
HardForkName: hardForkNames,
LoginParameter: login,
@@ -85,10 +108,6 @@ func (a *AuthController) IdentityHandler(c *gin.Context) interface{} {
c.Set(types.ProverName, proverName)
}
if publicKey, ok := claims[types.PublicKey]; ok {
c.Set(types.PublicKey, publicKey)
}
if proverVersion, ok := claims[types.ProverVersion]; ok {
c.Set(types.ProverVersion, proverVersion)
}
@@ -101,5 +120,9 @@ func (a *AuthController) IdentityHandler(c *gin.Context) interface{} {
c.Set(types.ProverProviderTypeKey, providerType)
}
if publicKey, ok := claims[types.PublicKey]; ok {
return publicKey
}
return nil
}

View File

@@ -24,7 +24,9 @@ var (
// InitController inits Controller with database
func InitController(cfg *config.Config, chainCfg *params.ChainConfig, db *gorm.DB, reg prometheus.Registerer) {
vf, err := verifier.NewVerifier(cfg.ProverManager.Verifier)
validiumMode := cfg.L2.ValidiumMode
vf, err := verifier.NewVerifier(cfg.ProverManager.Verifier, validiumMode)
if err != nil {
panic("proof receiver new verifier failure")
}

View File

@@ -0,0 +1,150 @@
package proxy
import (
"context"
"fmt"
"sync"
"time"
jwt "github.com/appleboy/gin-jwt/v2"
"github.com/gin-gonic/gin"
"github.com/scroll-tech/go-ethereum/log"
"scroll-tech/coordinator/internal/config"
"scroll-tech/coordinator/internal/controller/api"
"scroll-tech/coordinator/internal/logic/auth"
"scroll-tech/coordinator/internal/logic/verifier"
"scroll-tech/coordinator/internal/types"
)
// AuthController is login API
type AuthController struct {
apiLogin *api.AuthController
clients Clients
proverMgr *ProverManager
}
const upstreamConnTimeout = time.Second * 5
const LoginParamCache = "login_param"
const ProverTypesKey = "prover_types"
const SignatureKey = "prover_signature"
// NewAuthController returns an AuthController instance for the proxy
func NewAuthController(cfg *config.ProxyConfig, clients Clients, proverMgr *ProverManager) *AuthController {
// use a dummy Verifier to create login logic (we do not use any information in verifier)
dummyVf := verifier.Verifier{
OpenVMVkMap: make(map[string]struct{}),
}
loginLogic := auth.NewLoginLogicWithSimpleDeduplicator(cfg.ProxyManager.Verifier, &dummyVf)
authController := &AuthController{
apiLogin: api.NewAuthControllerWithLogic(loginLogic),
clients: clients,
proverMgr: proverMgr,
}
return authController
}
// Login extends the Login handler in the api controller
func (a *AuthController) Login(c *gin.Context) (interface{}, error) {
loginRes, err := a.apiLogin.Login(c)
if err != nil {
return nil, err
}
loginParam := loginRes.(types.LoginParameterWithHardForkName)
if loginParam.LoginParameter.Message.ProverProviderType == types.ProverProviderTypeProxy {
return nil, fmt.Errorf("proxy do not support recursive login")
}
session := a.proverMgr.GetOrCreate(loginParam.PublicKey)
log.Debug("start handling login", "cli", loginParam.Message.ProverName)
loginCtx, cf := context.WithTimeout(context.Background(), upstreamConnTimeout)
var wg sync.WaitGroup
for _, cli := range a.clients {
wg.Add(1)
go func(cli Client) {
defer wg.Done()
if err := session.ProxyLogin(loginCtx, cli, &loginParam.LoginParameter); err != nil {
log.Error("proxy login failed during token cache update",
"userKey", loginParam.PublicKey,
"upstream", cli.Name(),
"error", err)
}
}(cli)
}
go func(cliName string) {
wg.Wait()
cf()
log.Debug("first login attempt has completed", "cli", cliName)
}(loginParam.Message.ProverName)
return loginParam.LoginParameter, nil
}
// PayloadFunc returns jwt.MapClaims with the full login identity: public key, prover name/version, provider type, prover types, and signature.
func (a *AuthController) PayloadFunc(data interface{}) jwt.MapClaims {
v, ok := data.(types.LoginParameter)
if !ok {
log.Error("PayloadFunc received unexpected type", "type", fmt.Sprintf("%T", data))
return jwt.MapClaims{}
}
return jwt.MapClaims{
types.PublicKey: v.PublicKey,
types.ProverName: v.Message.ProverName,
types.ProverVersion: v.Message.ProverVersion,
types.ProverProviderTypeKey: v.Message.ProverProviderType,
SignatureKey: v.Signature,
ProverTypesKey: v.Message.ProverTypes,
}
}
// IdentityHandler rebuilds the LoginParameter from the JWT claims and caches it in the request context
func (a *AuthController) IdentityHandler(c *gin.Context) interface{} {
claims := jwt.ExtractClaims(c)
loginParam := &types.LoginParameter{}
if proverName, ok := claims[types.ProverName]; ok {
loginParam.Message.ProverName, _ = proverName.(string)
}
if proverVersion, ok := claims[types.ProverVersion]; ok {
loginParam.Message.ProverVersion, _ = proverVersion.(string)
}
if providerType, ok := claims[types.ProverProviderTypeKey]; ok {
num, _ := providerType.(float64)
loginParam.Message.ProverProviderType = types.ProverProviderType(num)
}
if signature, ok := claims[SignatureKey]; ok {
loginParam.Signature, _ = signature.(string)
}
if proverTypes, ok := claims[ProverTypesKey]; ok {
arr, _ := proverTypes.([]any)
for _, elm := range arr {
num, _ := elm.(float64)
loginParam.Message.ProverTypes = append(loginParam.Message.ProverTypes, types.ProverType(num))
}
}
if publicKey, ok := claims[types.PublicKey]; ok {
loginParam.PublicKey, _ = publicKey.(string)
}
if loginParam.PublicKey != "" {
c.Set(LoginParamCache, loginParam)
c.Set(types.ProverName, loginParam.Message.ProverName)
// public key will also be set, since public_key is specified as the identity key
return loginParam.PublicKey
}
return nil
}

View File
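The float64 casts in Login and IdentityHandler follow from how JWT claims travel through encoding/json: decoding into interface{} turns every JSON number into float64 and every array into []interface{}. A stdlib-only illustration:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Claims as written by PayloadFunc: typed Go values.
	claims := map[string]interface{}{
		"prover_provider_type": uint8(3),
		"prover_types":         []int{1, 2},
	}
	buf, _ := json.Marshal(claims)

	// Claims as they come back out of the token: every number is float64 and
	// every array is []interface{} — hence the float64 and []any casts above.
	var decoded map[string]interface{}
	_ = json.Unmarshal(buf, &decoded)
	fmt.Printf("%T\n", decoded["prover_provider_type"]) // float64
	fmt.Printf("%T\n", decoded["prover_types"])         // []interface {}
}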

@@ -0,0 +1,246 @@
//nolint:errcheck,bodyclose // body is closed in the following handleHttpResp call
package proxy
import (
"bytes"
"context"
"encoding/json"
"fmt"
"net/http"
"time"
"github.com/scroll-tech/go-ethereum/common"
"github.com/scroll-tech/go-ethereum/crypto"
ctypes "scroll-tech/common/types"
"scroll-tech/coordinator/internal/config"
"scroll-tech/coordinator/internal/types"
)
type ProxyCli interface {
Login(ctx context.Context, genLogin func(string) (*types.LoginParameter, error)) (*ctypes.Response, error)
ProxyLogin(ctx context.Context, param *types.LoginParameter) (*ctypes.Response, error)
Token() string
Reset()
}
type ProverCli interface {
GetTask(ctx context.Context, param *types.GetTaskParameter) (*ctypes.Response, error)
SubmitProof(ctx context.Context, param *types.SubmitProofParameter) (*ctypes.Response, error)
}
// Client wraps an http client with a preset host for coordinator API calls
type upClient struct {
httpClient *http.Client
baseURL string
loginToken string
compatibileMode bool
resetFromMgr func()
}
// NewClient creates a new Client with the specified host
func newUpClient(cfg *config.UpStream) *upClient {
return &upClient{
httpClient: &http.Client{
Timeout: time.Duration(cfg.ConnectionTimeoutSec) * time.Second,
},
baseURL: cfg.BaseUrl,
compatibileMode: cfg.CompatibileMode,
}
}
func (c *upClient) Reset() {
if c.resetFromMgr != nil {
c.resetFromMgr()
}
}
func (c *upClient) Token() string {
return c.loginToken
}
// loginSchema is a parsable schema for challenge/login response data
type loginSchema struct {
Time string `json:"time"`
Token string `json:"token"`
}
// Login performs the complete login process: get challenge then login
func (c *upClient) Login(ctx context.Context, genLogin func(string) (*types.LoginParameter, error)) (*ctypes.Response, error) {
// Step 1: Get challenge
url := fmt.Sprintf("%s/coordinator/v1/challenge", c.baseURL)
req, err := http.NewRequest("GET", url, nil)
if err != nil {
return nil, fmt.Errorf("failed to create challenge request: %w", err)
}
challengeResp, err := c.httpClient.Do(req)
if err != nil {
return nil, fmt.Errorf("failed to get challenge: %w", err)
}
parsedResp, err := handleHttpResp(challengeResp)
if err != nil {
return nil, err
} else if parsedResp.ErrCode != 0 {
return nil, fmt.Errorf("challenge failed: %d (%s)", parsedResp.ErrCode, parsedResp.ErrMsg)
}
// Step 2: Parse challenge response
var challengeSchema loginSchema
if err := parsedResp.DecodeData(&challengeSchema); err != nil {
return nil, fmt.Errorf("failed to parse challenge response: %w", err)
}
// Step 3: Use the token from challenge as Bearer token for login
url = fmt.Sprintf("%s/coordinator/v1/login", c.baseURL)
param, err := genLogin(challengeSchema.Token)
if err != nil {
return nil, fmt.Errorf("failed to setup login parameter: %w", err)
}
jsonData, err := json.Marshal(param)
if err != nil {
return nil, fmt.Errorf("failed to marshal login parameter: %w", err)
}
req, err = http.NewRequest("POST", url, bytes.NewBuffer(jsonData))
if err != nil {
return nil, fmt.Errorf("failed to create login request: %w", err)
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Authorization", "Bearer "+challengeSchema.Token)
loginResp, err := c.httpClient.Do(req)
if err != nil {
return nil, fmt.Errorf("failed to perform login request: %w", err)
}
return handleHttpResp(loginResp)
}
func handleHttpResp(resp *http.Response) (*ctypes.Response, error) {
if resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusUnauthorized {
defer resp.Body.Close()
var respWithData ctypes.Response
// Note: Body is consumed after decoding, caller should not read it again
if err := json.NewDecoder(resp.Body).Decode(&respWithData); err == nil {
return &respWithData, nil
} else {
return nil, fmt.Errorf("login parsing expected response failed: %v", err)
}
}
return nil, fmt.Errorf("login request failed with status: %d", resp.StatusCode)
}
func (c *upClient) proxyLoginCompatibleMode(ctx context.Context, param *types.LoginParameter) (*ctypes.Response, error) {
mimePrivK, err := buildPrivateKey([]byte(param.PublicKey))
if err != nil {
return nil, err
}
mimePkHex := common.Bytes2Hex(crypto.CompressPubkey(&mimePrivK.PublicKey))
genLoginParam := func(challenge string) (*types.LoginParameter, error) {
// Create login parameter with proxy settings
loginParam := &types.LoginParameter{
Message: param.Message,
PublicKey: mimePkHex,
}
loginParam.Message.Challenge = challenge
// Sign the message with the private key
if err := loginParam.SignWithKey(mimePrivK); err != nil {
return nil, fmt.Errorf("failed to sign login parameter: %w", err)
}
return loginParam, nil
}
return c.Login(ctx, genLoginParam)
}
// ProxyLogin makes a POST request to /v1/proxy_login with LoginParameter
func (c *upClient) ProxyLogin(ctx context.Context, param *types.LoginParameter) (*ctypes.Response, error) {
if c.compatibileMode {
return c.proxyLoginCompatibleMode(ctx, param)
}
url := fmt.Sprintf("%s/coordinator/v1/proxy_login", c.baseURL)
jsonData, err := json.Marshal(param)
if err != nil {
return nil, fmt.Errorf("failed to marshal proxy login parameter: %w", err)
}
req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonData))
if err != nil {
return nil, fmt.Errorf("failed to create proxy login request: %w", err)
}
req.Header.Set("Content-Type", "application/json")
req.Header.Set("Authorization", "Bearer "+c.loginToken)
proxyLoginResp, err := c.httpClient.Do(req)
if err != nil {
return nil, fmt.Errorf("failed to perform proxy login request: %w", err)
}
return handleHttpResp(proxyLoginResp)
}
// GetTask makes a POST request to /v1/get_task with GetTaskParameter
func (c *upClient) GetTask(ctx context.Context, param *types.GetTaskParameter) (*ctypes.Response, error) {
url := fmt.Sprintf("%s/coordinator/v1/get_task", c.baseURL)
jsonData, err := json.Marshal(param)
if err != nil {
return nil, fmt.Errorf("failed to marshal get task parameter: %w", err)
}
req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonData))
if err != nil {
return nil, fmt.Errorf("failed to create get task request: %w", err)
}
req.Header.Set("Content-Type", "application/json")
if c.loginToken != "" {
req.Header.Set("Authorization", "Bearer "+c.loginToken)
}
resp, err := c.httpClient.Do(req)
if err != nil {
return nil, err
}
return handleHttpResp(resp)
}
// SubmitProof makes a POST request to /v1/submit_proof with SubmitProofParameter
func (c *upClient) SubmitProof(ctx context.Context, param *types.SubmitProofParameter) (*ctypes.Response, error) {
url := fmt.Sprintf("%s/coordinator/v1/submit_proof", c.baseURL)
jsonData, err := json.Marshal(param)
if err != nil {
return nil, fmt.Errorf("failed to marshal submit proof parameter: %w", err)
}
req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewBuffer(jsonData))
if err != nil {
return nil, fmt.Errorf("failed to create submit proof request: %w", err)
}
req.Header.Set("Content-Type", "application/json")
if c.loginToken != "" {
req.Header.Set("Authorization", "Bearer "+c.loginToken)
}
resp, err := c.httpClient.Do(req)
if err != nil {
return nil, err
}
return handleHttpResp(resp)
}

View File
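The Login method implements a two-step handshake: fetch a challenge token, then present it as the Bearer token on the login call. A self-contained sketch of that flow against a toy upstream (the errcode/errmsg/data envelope fields are an assumption mirroring the ctypes.Response that handleHttpResp decodes):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// Assumed response envelope; field names are an assumption for this sketch.
type response struct {
	ErrCode int             `json:"errcode"`
	ErrMsg  string          `json:"errmsg"`
	Data    json.RawMessage `json:"data"`
}

func main() {
	// Toy upstream: GET /challenge hands out a token, POST /login accepts it
	// back as the Bearer header — the same two-step flow as upClient.Login.
	mux := http.NewServeMux()
	mux.HandleFunc("/coordinator/v1/challenge", func(w http.ResponseWriter, r *http.Request) {
		_ = json.NewEncoder(w).Encode(map[string]any{"errcode": 0, "data": map[string]string{"token": "challenge-123"}})
	})
	mux.HandleFunc("/coordinator/v1/login", func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("Authorization") != "Bearer challenge-123" {
			w.WriteHeader(http.StatusUnauthorized)
			return
		}
		_ = json.NewEncoder(w).Encode(map[string]any{"errcode": 0, "data": map[string]string{"token": "session-456"}})
	})
	srv := httptest.NewServer(mux)
	defer srv.Close()

	// Step 1: fetch the challenge token.
	resp, err := http.Get(srv.URL + "/coordinator/v1/challenge")
	if err != nil {
		panic(err)
	}
	var challenge response
	_ = json.NewDecoder(resp.Body).Decode(&challenge)
	resp.Body.Close()
	var data struct {
		Token string `json:"token"`
	}
	_ = json.Unmarshal(challenge.Data, &data)

	// Step 2: present the challenge token as the Bearer token on login.
	req, _ := http.NewRequest("POST", srv.URL+"/coordinator/v1/login", strings.NewReader("{}"))
	req.Header.Set("Authorization", "Bearer "+data.Token)
	loginResp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	var login response
	_ = json.NewDecoder(loginResp.Body).Decode(&login)
	loginResp.Body.Close()
	fmt.Println("login data:", string(login.Data)) // {"token":"session-456"}
}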

@@ -0,0 +1,220 @@
package proxy
import (
"context"
"crypto/ecdsa"
"fmt"
"sync"
"time"
"github.com/scroll-tech/go-ethereum/common"
"github.com/scroll-tech/go-ethereum/crypto"
"github.com/scroll-tech/go-ethereum/log"
"scroll-tech/common/version"
"scroll-tech/coordinator/internal/config"
"scroll-tech/coordinator/internal/types"
)
type Client interface {
// Client returns a client that accesses the upstream coordinator with the
// specified identity (login token), so a prover can contact the coordinator as itself
Client(string) ProverCli
// ClientAsProxy returns the client that accesses the upstream as the proxy itself
ClientAsProxy(context.Context) ProxyCli
Name() string
}
type ClientManager struct {
name string
cliCfg *config.ProxyClient
cfg *config.UpStream
privKey *ecdsa.PrivateKey
cachedCli struct {
sync.RWMutex
cli *upClient
completionCtx context.Context
}
}
// buildPrivateKey deterministically derives a valid ECDSA private key from arbitrary input bytes
func buildPrivateKey(inputBytes []byte) (*ecdsa.PrivateKey, error) {
// Try appending bytes from 0x0 to 0x20 until we get a valid private key
for appendByte := byte(0x0); appendByte <= 0x20; appendByte++ {
// Append the byte to input
extendedBytes := append(inputBytes, appendByte)
// Calculate 256-bit hash
hash := crypto.Keccak256(extendedBytes)
// Try to create private key from hash
if k, err := crypto.ToECDSA(hash); err == nil {
return k, nil
}
}
return nil, fmt.Errorf("failed to generate valid private key from input bytes")
}
func NewClientManager(name string, cliCfg *config.ProxyClient, cfg *config.UpStream) (*ClientManager, error) {
log.Info("init client", "name", name, "upcfg", cfg.BaseUrl, "compatible mode", cfg.CompatibileMode)
privKey, err := buildPrivateKey([]byte(cliCfg.Secret))
if err != nil {
return nil, err
}
return &ClientManager{
name: name,
privKey: privKey,
cfg: cfg,
cliCfg: cliCfg,
}, nil
}
type ctxKeyType string
const loginCliKey ctxKeyType = "cli"
func (cliMgr *ClientManager) doLogin(ctx context.Context, loginCli *upClient) {
if cliMgr.cfg.CompatibileMode {
loginCli.loginToken = "dummy"
log.Info("Skip login process for compatible mode")
return
}
// Calculate wait time between 2 seconds and cfg.RetryWaitTime
minWait := 2 * time.Second
waitDuration := time.Duration(cliMgr.cfg.RetryWaitTime) * time.Second
if waitDuration < minWait {
waitDuration = minWait
}
for {
log.Info("proxy attempting login to upstream coordinator", "name", cliMgr.name)
loginResp, err := loginCli.Login(ctx, cliMgr.genLoginParam)
if err == nil && loginResp.ErrCode == 0 {
var loginResult loginSchema
err = loginResp.DecodeData(&loginResult)
if err != nil {
log.Error("login parsing data fail", "error", err)
} else {
loginCli.loginToken = loginResult.Token
log.Info("login to upstream coordinator successful", "name", cliMgr.name, "time", loginResult.Time)
// TODO: we need to parse time if we start making use of it
return
}
} else if err != nil {
log.Error("login process fail", "error", err)
} else {
log.Error("login get fail resp", "code", loginResp.ErrCode, "msg", loginResp.ErrMsg)
}
log.Info("login to upstream coordinator failed, retrying", "name", cliMgr.name, "error", err, "waitDuration", waitDuration)
timer := time.NewTimer(waitDuration)
select {
case <-ctx.Done():
timer.Stop()
return
case <-timer.C:
// Continue to next retry
}
}
}
func (cliMgr *ClientManager) Name() string {
return cliMgr.name
}
func (cliMgr *ClientManager) Client(token string) ProverCli {
loginCli := newUpClient(cliMgr.cfg)
loginCli.loginToken = token
return loginCli
}
func (cliMgr *ClientManager) ClientAsProxy(ctx context.Context) ProxyCli {
cliMgr.cachedCli.RLock()
if cliMgr.cachedCli.cli != nil {
defer cliMgr.cachedCli.RUnlock()
return cliMgr.cachedCli.cli
}
cliMgr.cachedCli.RUnlock()
cliMgr.cachedCli.Lock()
if cliMgr.cachedCli.cli != nil {
defer cliMgr.cachedCli.Unlock()
return cliMgr.cachedCli.cli
}
var completionCtx context.Context
// Check if completion context is set
if cliMgr.cachedCli.completionCtx != nil {
completionCtx = cliMgr.cachedCli.completionCtx
} else {
// Set new completion context and launch login goroutine
ctx, completionDone := context.WithCancel(context.TODO())
loginCli := newUpClient(cliMgr.cfg)
loginCli.resetFromMgr = func() {
cliMgr.cachedCli.Lock()
if cliMgr.cachedCli.cli == loginCli {
log.Info("cached client cleared", "name", cliMgr.name)
cliMgr.cachedCli.cli = nil
}
cliMgr.cachedCli.Unlock()
}
completionCtx = context.WithValue(ctx, loginCliKey, loginCli)
cliMgr.cachedCli.completionCtx = completionCtx
// Launch keep-login goroutine
go func() {
defer completionDone()
cliMgr.doLogin(context.Background(), loginCli)
cliMgr.cachedCli.Lock()
cliMgr.cachedCli.cli = loginCli
cliMgr.cachedCli.completionCtx = nil
cliMgr.cachedCli.Unlock()
}()
}
cliMgr.cachedCli.Unlock()
// Wait for completion or request cancellation
select {
case <-ctx.Done():
return nil
case <-completionCtx.Done():
cli := completionCtx.Value(loginCliKey).(*upClient)
return cli
}
}
func (cliMgr *ClientManager) genLoginParam(challenge string) (*types.LoginParameter, error) {
// Generate public key string
publicKeyHex := common.Bytes2Hex(crypto.CompressPubkey(&cliMgr.privKey.PublicKey))
// Create login parameter with proxy settings
loginParam := &types.LoginParameter{
Message: types.Message{
Challenge: challenge,
ProverName: cliMgr.cliCfg.ProxyName,
ProverVersion: version.Version,
ProverProviderType: types.ProverProviderTypeProxy,
ProverTypes: []types.ProverType{}, // Default empty
VKs: []string{}, // Default empty
},
PublicKey: publicKeyHex,
}
// Sign the message with the private key
if err := loginParam.SignWithKey(cliMgr.privKey); err != nil {
return nil, fmt.Errorf("failed to sign login parameter: %w", err)
}
return loginParam, nil
}

View File
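buildPrivateKey makes the proxy's signing identity deterministic: the same configured secret always yields the same key pair, so restarts do not change the proxy's public key. A standalone sketch of the derivation (imports swapped to the upstream go-ethereum packages, which expose the same API as the scroll-tech fork used here):

package main

import (
	"crypto/ecdsa"
	"fmt"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
)

// derive repeats the loop in buildPrivateKey above: append a byte 0x00..0x20
// until the keccak hash of the extended input is a valid secp256k1 scalar.
func derive(secret []byte) (*ecdsa.PrivateKey, error) {
	for b := byte(0x00); b <= 0x20; b++ {
		hash := crypto.Keccak256(append(secret, b))
		if k, err := crypto.ToECDSA(hash); err == nil {
			return k, nil
		}
	}
	return nil, fmt.Errorf("no valid key found")
}

func main() {
	k1, _ := derive([]byte("client private key"))
	k2, _ := derive([]byte("client private key"))
	// Deterministic: the same secret always yields the same public key.
	fmt.Println(common.Bytes2Hex(crypto.CompressPubkey(&k1.PublicKey)) ==
		common.Bytes2Hex(crypto.CompressPubkey(&k2.PublicKey))) // true
}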

@@ -0,0 +1,44 @@
package proxy
import (
"github.com/prometheus/client_golang/prometheus"
"gorm.io/gorm"
"scroll-tech/coordinator/internal/config"
)
var (
// GetTask the prover task controller
GetTask *GetTaskController
// SubmitProof the submit proof controller
SubmitProof *SubmitProofController
// Auth the auth controller
Auth *AuthController
)
// Clients manages a set of thread-safe clients for requesting upstream
// coordinators
type Clients map[string]Client
// InitController inits Controller with database
func InitController(cfg *config.ProxyConfig, db *gorm.DB, reg prometheus.Registerer) {
// normalize cfg
cfg.ProxyManager.Normalize()
clients := make(map[string]Client)
for nm, upCfg := range cfg.Coordinators {
cli, err := NewClientManager(nm, cfg.ProxyManager.Client, upCfg)
if err != nil {
panic("create new client fail")
}
clients[cli.Name()] = cli
}
proverManager := NewProverManagerWithPersistent(100, db)
priorityManager := NewPriorityUpstreamManagerPersistent(db)
Auth = NewAuthController(cfg, clients, proverManager)
GetTask = NewGetTaskController(cfg, clients, proverManager, priorityManager, reg)
SubmitProof = NewSubmitProofController(cfg, clients, proverManager, priorityManager, reg)
}

View File

@@ -0,0 +1,229 @@
package proxy
import (
"fmt"
"math/rand"
"sync"
"github.com/gin-gonic/gin"
"github.com/prometheus/client_golang/prometheus"
"github.com/scroll-tech/go-ethereum/log"
"gorm.io/gorm"
"scroll-tech/common/types"
"scroll-tech/coordinator/internal/config"
coordinatorType "scroll-tech/coordinator/internal/types"
)
func getSessionData(ctx *gin.Context) (string, string) {
publicKeyData, publicKeyExist := ctx.Get(coordinatorType.PublicKey)
publicKey, castOk := publicKeyData.(string)
if !publicKeyExist || !castOk {
nerr := fmt.Errorf("no public key binding: %v", publicKeyData)
log.Warn("get_task parameter fail", "error", nerr)
types.RenderFailure(ctx, types.ErrCoordinatorParameterInvalidNo, nerr)
return "", ""
}
publicNameData, publicNameExist := ctx.Get(coordinatorType.ProverName)
publicName, castOk := publicNameData.(string)
if !publicNameExist || !castOk {
log.Error("no public name binding for unknown reason, but we still forward with name = 'unknown'", "data", publicNameData)
publicName = "unknown"
}
return publicKey, publicName
}
// PriorityUpstreamManager manages priority upstream mappings with thread safety
type PriorityUpstreamManager struct {
sync.RWMutex
*proverPriorityPersist
data map[string]string
}
// NewPriorityUpstreamManager creates a new PriorityUpstreamManager
func NewPriorityUpstreamManager() *PriorityUpstreamManager {
return &PriorityUpstreamManager{
data: make(map[string]string),
}
}
// NewPriorityUpstreamManagerPersistent creates a PriorityUpstreamManager backed by a persistent store
func NewPriorityUpstreamManagerPersistent(db *gorm.DB) *PriorityUpstreamManager {
return &PriorityUpstreamManager{
data: make(map[string]string),
proverPriorityPersist: NewProverPriorityPersist(db),
}
}
// Get retrieves the priority upstream for a given key
func (p *PriorityUpstreamManager) Get(key string) (string, bool) {
p.RLock()
value, exists := p.data[key]
p.RUnlock()
if !exists {
if v, err := p.proverPriorityPersist.Get(key); err != nil {
log.Error("persistent priority record read failure", "error", err, "key", key)
} else if v != "" {
log.Debug("restore record from persistent layer", "key", key, "value", v)
return v, true
}
}
return value, exists
}
// Set sets the priority upstream for a given key
func (p *PriorityUpstreamManager) Set(key, value string) {
defer func() {
if err := p.proverPriorityPersist.Update(key, value); err != nil {
log.Error("update priority record failure", "error", err, "key", key, "value", value)
}
}()
p.Lock()
defer p.Unlock()
p.data[key] = value
}
// Delete removes the priority upstream for a given key
func (p *PriorityUpstreamManager) Delete(key string) {
defer func() {
if err := p.proverPriorityPersist.Del(key); err != nil {
log.Error("delete priority record failure", "error", err, "key", key)
}
}()
p.Lock()
defer p.Unlock()
delete(p.data, key)
}
// GetTaskController the get prover task api controller
type GetTaskController struct {
proverMgr *ProverManager
clients Clients
priorityUpstream *PriorityUpstreamManager
//workingRnd *rand.Rand
//getTaskAccessCounter *prometheus.CounterVec
}
// NewGetTaskController create a get prover task controller
func NewGetTaskController(cfg *config.ProxyConfig, clients Clients, proverMgr *ProverManager, priorityMgr *PriorityUpstreamManager, reg prometheus.Registerer) *GetTaskController {
// TODO: implement proxy get task controller initialization
return &GetTaskController{
priorityUpstream: priorityMgr,
proverMgr: proverMgr,
clients: clients,
}
}
// func (ptc *GetTaskController) incGetTaskAccessCounter(ctx *gin.Context) error {
// // TODO: implement proxy get task access counter
// return nil
// }
// GetTasks gets an assigned chunk/batch task
func (ptc *GetTaskController) GetTasks(ctx *gin.Context) {
var getTaskParameter coordinatorType.GetTaskParameter
if err := ctx.ShouldBind(&getTaskParameter); err != nil {
nerr := fmt.Errorf("prover task parameter invalid, err:%w", err)
types.RenderFailure(ctx, types.ErrCoordinatorParameterInvalidNo, nerr)
return
}
publicKey, proverName := getSessionData(ctx)
if publicKey == "" {
return
}
session := ptc.proverMgr.Get(publicKey)
if session == nil {
nerr := fmt.Errorf("can not get session for prover %s", proverName)
types.RenderFailure(ctx, types.InternalServerError, nerr)
return
}
getTask := func(cli Client) (error, int) {
log.Debug("Start get task", "up", cli.Name(), "cli", proverName)
upStream := cli.Name()
resp, err := session.GetTask(ctx, &getTaskParameter, cli)
if err != nil {
log.Error("Upstream error for get task", "error", err, "up", upStream, "cli", proverName)
return err, types.ErrCoordinatorGetTaskFailure
} else if resp.ErrCode != types.ErrCoordinatorEmptyProofData {
if resp.ErrCode != 0 {
// simply dispatch the error from upstream to prover
log.Error("Upstream has error resp for get task", "code", resp.ErrCode, "msg", resp.ErrMsg, "up", upStream, "cli", proverName)
return fmt.Errorf("upstream failure %s:", resp.ErrMsg), resp.ErrCode
}
var task coordinatorType.GetTaskSchema
if err = resp.DecodeData(&task); err == nil {
task.TaskID = formUpstreamWithTaskName(upStream, task.TaskID)
ptc.priorityUpstream.Set(publicKey, upStream)
log.Debug("Upstream get task", "up", upStream, "cli", proverName, "taskID", task.TaskID, "taskType", task.TaskType)
types.RenderSuccess(ctx, &task)
return nil, 0
} else {
log.Error("Upstream has wrong data for get task", "error", err, "up", upStream, "cli", proverName)
return fmt.Errorf("decode task fail: %v", err), types.InternalServerError
}
}
return nil, resp.ErrCode
}
// if the priority upstream is set, try it first, until we get a task resp or an empty-task resp
priorityUpstream, exist := ptc.priorityUpstream.Get(publicKey)
if exist {
cli := ptc.clients[priorityUpstream]
log.Debug("Try get task from priority stream", "up", priorityUpstream, "cli", proverName)
if cli != nil {
err, code := getTask(cli)
if err != nil {
types.RenderFailure(ctx, code, err)
return
} else if code == 0 {
// get task done and rendered, return
return
}
// only continue if we got an empty task (the task has been removed upstream)
log.Debug("can not get priority task from upstream", "up", priorityUpstream, "cli", proverName)
} else {
log.Warn("A upstream is removed or lost for some reason while running", "up", priorityUpstream, "cli", proverName)
}
}
ptc.priorityUpstream.Delete(publicKey)
// Create a slice to hold the keys
keys := make([]string, 0, len(ptc.clients))
for k := range ptc.clients {
keys = append(keys, k)
}
// Shuffle the keys using a local RNG (avoid deprecated rand.Seed)
rand.Shuffle(len(keys), func(i, j int) {
keys[i], keys[j] = keys[j], keys[i]
})
// Iterate over the shuffled keys
for _, n := range keys {
if err, code := getTask(ptc.clients[n]); err == nil && code == 0 {
// get task done
return
}
}
log.Debug("get no task from upstream", "cli", proverName)
// if all get task failed, throw empty proof resp
types.RenderFailure(ctx, types.ErrCoordinatorEmptyProofData, fmt.Errorf("get empty prover task"))
}

View File
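GetTasks tries the remembered priority upstream first and only then falls back to the remaining upstreams in random order, using rand.Shuffle over the map keys (no seeding needed on Go 1.20+). A condensed sketch of that selection strategy (the names and the boolean "has task" stand-in are illustrative):

package main

import (
	"fmt"
	"math/rand"
)

// pick mirrors the upstream-selection order of GetTasks: the remembered
// priority upstream first, then the remaining upstreams in shuffled order.
// hasTask stands in for a get_task call that returned a task.
func pick(hasTask map[string]bool, priority string) (string, bool) {
	if _, known := hasTask[priority]; known && hasTask[priority] {
		return priority, true
	}
	keys := make([]string, 0, len(hasTask))
	for k := range hasTask {
		keys = append(keys, k)
	}
	// Shuffle with a plain rand.Shuffle; rand.Seed is unnecessary on Go 1.20+.
	rand.Shuffle(len(keys), func(i, j int) { keys[i], keys[j] = keys[j], keys[i] })
	for _, k := range keys {
		if hasTask[k] {
			return k, true
		}
	}
	return "", false
}

func main() {
	ups := map[string]bool{"sepolia": false, "mainnet": true}
	fmt.Println(pick(ups, "sepolia")) // mainnet true (falls back past sepolia)
}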

@@ -0,0 +1,125 @@
package proxy
import (
"time"
"gorm.io/gorm"
"gorm.io/gorm/clause"
"scroll-tech/coordinator/internal/types"
)
type proverDataPersist struct {
db *gorm.DB
}
// NewProverDataPersist creates a persistence instance backed by a gorm DB.
func NewProverDataPersist(db *gorm.DB) *proverDataPersist {
return &proverDataPersist{db: db}
}
// gorm model mapping to table `prover_sessions`
type proverSessionRecord struct {
PublicKey string `gorm:"column:public_key;not null"`
Upstream string `gorm:"column:upstream;not null"`
UpToken string `gorm:"column:up_token;not null"`
Expired time.Time `gorm:"column:expired;not null"`
}
func (proverSessionRecord) TableName() string { return "prover_sessions" }
// priority_upstream model
type priorityUpstreamRecord struct {
PublicKey string `gorm:"column:public_key;not null"`
Upstream string `gorm:"column:upstream;not null"`
}
func (priorityUpstreamRecord) TableName() string { return "priority_upstream" }
// Get retrieves the proverSession for a given user key; returns nil if it does not exist yet
func (p *proverDataPersist) Get(userKey string) (*proverSession, error) {
if p == nil || p.db == nil {
return nil, nil
}
var rows []proverSessionRecord
if err := p.db.Where("public_key = ?", userKey).Find(&rows).Error; err != nil || len(rows) == 0 {
return nil, err
}
ret := &proverSession{
proverToken: make(map[string]loginToken),
}
for _, r := range rows {
ls := &types.LoginSchema{
Token: r.UpToken,
Time: r.Expired,
}
ret.proverToken[r.Upstream] = loginToken{LoginSchema: ls}
}
return ret, nil
}
func (p *proverDataPersist) Update(userKey, up string, login *types.LoginSchema) error {
if p == nil || p.db == nil || login == nil {
return nil
}
rec := proverSessionRecord{
PublicKey: userKey,
Upstream: up,
UpToken: login.Token,
Expired: login.Time,
}
return p.db.Clauses(
clause.OnConflict{
Columns: []clause.Column{{Name: "public_key"}, {Name: "upstream"}},
DoUpdates: clause.AssignmentColumns([]string{"up_token", "expired"}),
},
).Create(&rec).Error
}
type proverPriorityPersist struct {
db *gorm.DB
}
func NewProverPriorityPersist(db *gorm.DB) *proverPriorityPersist {
return &proverPriorityPersist{db: db}
}
func (p *proverPriorityPersist) Get(userKey string) (string, error) {
if p == nil || p.db == nil {
return "", nil
}
var rec priorityUpstreamRecord
if err := p.db.Where("public_key = ?", userKey).First(&rec).Error; err != nil {
if err != gorm.ErrRecordNotFound {
return "", err
} else {
return "", nil
}
}
return rec.Upstream, nil
}
func (p *proverPriorityPersist) Update(userKey, up string) error {
if p == nil || p.db == nil {
return nil
}
rec := priorityUpstreamRecord{PublicKey: userKey, Upstream: up}
return p.db.Clauses(
clause.OnConflict{
Columns: []clause.Column{{Name: "public_key"}},
DoUpdates: clause.Assignments(map[string]interface{}{"upstream": up}),
},
).Create(&rec).Error
}
func (p *proverPriorityPersist) Del(userKey string) error {
if p == nil || p.db == nil {
return nil
}
return p.db.Where("public_key = ?", userKey).Delete(&priorityUpstreamRecord{}).Error
}

View File
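Both Update methods rely on gorm's OnConflict clause, i.e. an INSERT that turns into an UPDATE when the key columns already exist. A runnable sketch against an in-memory SQLite database (the primaryKey tag stands in for the unique index the real migration is assumed to create):

package main

import (
	"fmt"

	"gorm.io/driver/sqlite"
	"gorm.io/gorm"
	"gorm.io/gorm/clause"
)

// Toy table with the same shape as priority_upstream above; the primary key
// supplies the uniqueness that ON CONFLICT needs.
type priorityUpstreamRecord struct {
	PublicKey string `gorm:"column:public_key;primaryKey"`
	Upstream  string `gorm:"column:upstream;not null"`
}

func (priorityUpstreamRecord) TableName() string { return "priority_upstream" }

func main() {
	db, err := gorm.Open(sqlite.Open("file::memory:"), &gorm.Config{})
	if err != nil {
		panic(err)
	}
	_ = db.AutoMigrate(&priorityUpstreamRecord{})

	upsert := func(key, up string) {
		rec := priorityUpstreamRecord{PublicKey: key, Upstream: up}
		// Same clause as Update above: INSERT ... ON CONFLICT (public_key)
		// DO UPDATE SET upstream = ...
		db.Clauses(clause.OnConflict{
			Columns:   []clause.Column{{Name: "public_key"}},
			DoUpdates: clause.Assignments(map[string]interface{}{"upstream": up}),
		}).Create(&rec)
	}
	upsert("pk1", "sepolia")
	upsert("pk1", "mainnet") // second call updates; it does not duplicate

	var rec priorityUpstreamRecord
	db.First(&rec, "public_key = ?", "pk1")
	fmt.Println(rec.Upstream) // mainnet
}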

@@ -0,0 +1,285 @@
package proxy
import (
"context"
"fmt"
"math"
"sync"
"gorm.io/gorm"
"github.com/scroll-tech/go-ethereum/log"
ctypes "scroll-tech/common/types"
"scroll-tech/coordinator/internal/types"
)
type ProverManager struct {
sync.RWMutex
data map[string]*proverSession
willDeprecatedData map[string]*proverSession
sizeLimit int
persistent *proverDataPersist
}
func NewProverManager(size int) *ProverManager {
return &ProverManager{
data: make(map[string]*proverSession),
willDeprecatedData: make(map[string]*proverSession),
sizeLimit: size,
}
}
func NewProverManagerWithPersistent(size int, db *gorm.DB) *ProverManager {
return &ProverManager{
data: make(map[string]*proverSession),
willDeprecatedData: make(map[string]*proverSession),
sizeLimit: size,
persistent: NewProverDataPersist(db),
}
}
// Get retrieves the proverSession for a given user key; returns nil if it does not exist yet
func (m *ProverManager) Get(userKey string) (ret *proverSession) {
defer func() {
if ret == nil {
var err error
ret, err = m.persistent.Get(userKey)
if err != nil {
log.Error("Get persistent layer for prover tokens fail", "error", err)
} else if ret != nil {
log.Debug("restore record from persistent", "key", userKey, "token", ret.proverToken)
ret.persistent = m.persistent
}
}
if ret != nil {
m.Lock()
m.data[userKey] = ret
m.Unlock()
}
}()
m.RLock()
defer m.RUnlock()
if r, existed := m.data[userKey]; existed {
return r
} else {
return m.willDeprecatedData[userKey]
}
}
func (m *ProverManager) GetOrCreate(userKey string) *proverSession {
if ret := m.Get(userKey); ret != nil {
return ret
}
m.Lock()
defer m.Unlock()
ret := &proverSession{
proverToken: make(map[string]loginToken),
persistent: m.persistent,
}
if len(m.data) >= m.sizeLimit {
m.willDeprecatedData = m.data
m.data = make(map[string]*proverSession)
}
m.data[userKey] = ret
return ret
}
type loginToken struct {
*types.LoginSchema
phase uint
}
// proverSession tracks the per-upstream login tokens of a single prover
type proverSession struct {
persistent *proverDataPersist
sync.RWMutex
proverToken map[string]loginToken
completionCtx context.Context
}
func (c *proverSession) maintainLogin(ctx context.Context, cliMgr Client, up string, param *types.LoginParameter, phase uint) (result loginToken, nerr error) {
c.Lock()
curPhase := c.proverToken[up].phase
if c.completionCtx != nil {
waitctx := c.completionCtx
c.Unlock()
select {
case <-waitctx.Done():
return c.maintainLogin(ctx, cliMgr, up, param, phase)
case <-ctx.Done():
nerr = fmt.Errorf("ctx fail")
return
}
}
if phase < curPhase {
// outdated login phase, give up
log.Debug("drop outdated proxy login attempt", "upstream", up, "cli", param.Message.ProverName, "phase", phase, "now", curPhase)
defer c.Unlock()
return c.proverToken[up], nil
}
// occupy the update slot
completeCtx, cf := context.WithCancel(ctx)
defer cf()
c.completionCtx = completeCtx
defer func() {
c.Lock()
c.completionCtx = nil
if result.LoginSchema != nil {
c.proverToken[up] = result
log.Info("maintain login status", "upstream", up, "cli", param.Message.ProverName, "phase", curPhase+1)
}
c.Unlock()
if nerr != nil {
log.Error("maintain login fail", "error", nerr, "upstream", up, "cli", param.Message.ProverName, "phase", curPhase)
}
}()
c.Unlock()
log.Debug("start proxy login process", "upstream", up, "cli", param.Message.ProverName)
cli := cliMgr.ClientAsProxy(ctx)
if cli == nil {
nerr = fmt.Errorf("get upstream cli fail")
return
}
resp, err := cli.ProxyLogin(ctx, param)
if err != nil {
nerr = fmt.Errorf("proxylogin fail: %v", err)
return
}
if resp.ErrCode == ctypes.ErrJWTTokenExpired {
log.Info("up stream has expired, renew upstream connection", "up", up)
cli.Reset()
cli = cliMgr.ClientAsProxy(ctx)
if cli == nil {
nerr = fmt.Errorf("get upstream cli fail (secondary try)")
return
}
// like the SDK, we try one more time if the upstream token has expired
resp, err = cli.ProxyLogin(ctx, param)
if err != nil {
nerr = fmt.Errorf("proxylogin fail: %v", err)
return
}
}
if resp.ErrCode != 0 {
nerr = fmt.Errorf("upstream fail: %d (%s)", resp.ErrCode, resp.ErrMsg)
return
}
var loginResult loginSchema
if err := resp.DecodeData(&loginResult); err != nil {
nerr = err
return
}
log.Debug("Proxy login done", "upstream", up, "cli", param.Message.ProverName)
result = loginToken{
LoginSchema: &types.LoginSchema{
Token: loginResult.Token,
},
phase: curPhase + 1,
}
return
}
// const expireTolerant = 10 * time.Minute
// ProxyLogin makes a POST request to /v1/proxy_login with LoginParameter
func (c *proverSession) ProxyLogin(ctx context.Context, cli Client, param *types.LoginParameter) error {
up := cli.Name()
c.RLock()
existedToken := c.proverToken[up]
c.RUnlock()
newtoken, err := c.maintainLogin(ctx, cli, up, param, math.MaxUint)
if newtoken.phase > existedToken.phase {
if err := c.persistent.Update(param.PublicKey, up, newtoken.LoginSchema); err != nil {
log.Error("Update persistent layer for prover tokens fail", "error", err)
}
}
return err
}
// GetTask makes a POST request to /v1/get_task with GetTaskParameter
func (c *proverSession) GetTask(ctx context.Context, param *types.GetTaskParameter, cliMgr Client) (*ctypes.Response, error) {
up := cliMgr.Name()
c.RLock()
log.Debug("call get task", "up", up, "tokens", c.proverToken)
token := c.proverToken[up]
c.RUnlock()
if token.LoginSchema != nil {
resp, err := cliMgr.Client(token.Token).GetTask(ctx, param)
if err != nil {
return nil, err
}
if resp.ErrCode != ctypes.ErrJWTTokenExpired {
return resp, nil
}
}
// like the SDK, we try one more time if the upstream token has expired
// get param from ctx
loginParam, ok := ctx.Value(LoginParamCache).(*types.LoginParameter)
if !ok {
return nil, fmt.Errorf("Unexpected error, no loginparam ctx value")
}
newToken, err := c.maintainLogin(ctx, cliMgr, up, loginParam, token.phase)
if err != nil {
return nil, fmt.Errorf("update prover token fail: %v", err)
}
return cliMgr.Client(newToken.Token).GetTask(ctx, param)
}
// SubmitProof makes a POST request to /v1/submit_proof with SubmitProofParameter
func (c *proverSession) SubmitProof(ctx context.Context, param *types.SubmitProofParameter, cliMgr Client) (*ctypes.Response, error) {
up := cliMgr.Name()
c.RLock()
token := c.proverToken[up]
c.RUnlock()
if token.LoginSchema != nil {
resp, err := cliMgr.Client(token.Token).SubmitProof(ctx, param)
if err != nil {
return nil, err
}
if resp.ErrCode != ctypes.ErrJWTTokenExpired {
return resp, nil
}
}
// like the SDK, we try one more time if the upstream token has expired
// get param from ctx
loginParam, ok := ctx.Value(LoginParamCache).(*types.LoginParameter)
if !ok {
return nil, fmt.Errorf("Unexpected error, no loginparam ctx value")
}
newToken, err := c.maintainLogin(ctx, cliMgr, up, loginParam, token.phase)
if err != nil {
return nil, fmt.Errorf("update prover token fail: %v", err)
}
return cliMgr.Client(newToken.Token).SubmitProof(ctx, param)
}

View File
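maintainLogin combines two guards: a completion context so concurrent callers wait for the in-flight login instead of issuing their own, and a phase counter so a caller holding a stale token skips a refresh that another goroutine already finished. A condensed sketch of that single-flight pattern (a channel stands in for the context; names are illustrative):

package main

import (
	"fmt"
	"sync"
)

// refresher condenses maintainLogin's concurrency scheme: one refresh in
// flight at a time, plus a phase counter that lets callers skip a refresh
// another goroutine already completed.
type refresher struct {
	mu    sync.Mutex
	phase uint
	done  chan struct{} // non-nil while a refresh is in flight
	token string
}

func (r *refresher) get(seenPhase uint, fetch func() string) string {
	r.mu.Lock()
	if r.done != nil { // someone is refreshing: wait, then re-evaluate
		done := r.done
		r.mu.Unlock()
		<-done
		return r.get(seenPhase, fetch)
	}
	if r.phase > seenPhase { // already refreshed since we saw the old token
		t := r.token
		r.mu.Unlock()
		return t
	}
	done := make(chan struct{})
	r.done = done
	r.mu.Unlock()

	t := fetch() // slow network call, performed outside the lock

	r.mu.Lock()
	r.token, r.phase, r.done = t, r.phase+1, nil
	r.mu.Unlock()
	close(done)
	return t
}

func main() {
	r := &refresher{}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// All four callers see phase 0; only one fetch actually runs.
			fmt.Println(r.get(0, func() string { return "token-1" }))
		}()
	}
	wg.Wait()
}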

@@ -0,0 +1,107 @@
package proxy
import (
"testing"
)
// TestProverManagerGetAndCreate validates basic creation and retrieval semantics.
func TestProverManagerGetAndCreate(t *testing.T) {
pm := NewProverManager(2)
if got := pm.Get("user1"); got != nil {
t.Fatalf("expected nil for non-existent key, got: %+v", got)
}
sess1 := pm.GetOrCreate("user1")
if sess1 == nil {
t.Fatalf("expected non-nil session from GetOrCreate")
}
// Should be stable on subsequent Get
if got := pm.Get("user1"); got != sess1 {
t.Fatalf("expected same session pointer on Get, got different instance: %p vs %p", got, sess1)
}
}
// TestProverManagerRolloverAndPromotion verifies rollover when sizeLimit is reached
// and that old entries are accessible and promoted back to active data map.
func TestProverManagerRolloverAndPromotion(t *testing.T) {
pm := NewProverManager(2)
s1 := pm.GetOrCreate("u1")
s2 := pm.GetOrCreate("u2")
if s1 == nil || s2 == nil {
t.Fatalf("expected sessions to be created for u1/u2")
}
// Precondition: data should contain 2 entries, no deprecated yet.
pm.RLock()
if len(pm.data) != 2 {
pm.RUnlock()
t.Fatalf("expected data len=2 before rollover, got %d", len(pm.data))
}
if len(pm.willDeprecatedData) != 0 {
pm.RUnlock()
t.Fatalf("expected willDeprecatedData len=0 before rollover, got %d", len(pm.willDeprecatedData))
}
pm.RUnlock()
// Trigger rollover by creating a third key.
s3 := pm.GetOrCreate("u3")
if s3 == nil {
t.Fatalf("expected session for u3 after rollover")
}
// After rollover: current data should only have u3, deprecated should hold u1 and u2.
pm.RLock()
if len(pm.data) != 1 {
pm.RUnlock()
t.Fatalf("expected data len=1 after rollover (only u3), got %d", len(pm.data))
}
if _, ok := pm.data["u3"]; !ok {
pm.RUnlock()
t.Fatalf("expected 'u3' to be in active data after rollover")
}
if len(pm.willDeprecatedData) != 2 {
pm.RUnlock()
t.Fatalf("expected willDeprecatedData len=2 after rollover, got %d", len(pm.willDeprecatedData))
}
pm.RUnlock()
// Accessing an old key should return the same pointer and promote it to active data map.
got1 := pm.Get("u1")
if got1 != s1 {
t.Fatalf("expected same pointer for u1 after promotion, got %p want %p", got1, s1)
}
// The promotion should add it to active data (without enforcing size limit on promotion).
pm.RLock()
if _, ok := pm.data["u1"]; !ok {
pm.RUnlock()
t.Fatalf("expected 'u1' to be present in active data after promotion")
}
if len(pm.data) != 2 {
// Now should contain u3 and u1
pm.RUnlock()
t.Fatalf("expected data len=2 after promotion of u1, got %d", len(pm.data))
}
pm.RUnlock()
// Access the other deprecated key and ensure behavior is consistent.
got2 := pm.Get("u2")
if got2 != s2 {
t.Fatalf("expected same pointer for u2 after promotion, got %p want %p", got2, s2)
}
pm.RLock()
if _, ok := pm.data["u2"]; !ok {
pm.RUnlock()
t.Fatalf("expected 'u2' to be present in active data after promotion")
}
// Note: promotion does not enforce sizeLimit, so data can grow beyond sizeLimit after promotions.
if len(pm.data) != 3 {
pm.RUnlock()
t.Fatalf("expected data len=3 after promoting both u1 and u2, got %d", len(pm.data))
}
pm.RUnlock()
}

View File

@@ -0,0 +1,94 @@
package proxy
import (
"fmt"
"strings"
"github.com/gin-gonic/gin"
"github.com/prometheus/client_golang/prometheus"
"github.com/scroll-tech/go-ethereum/log"
"scroll-tech/common/types"
"scroll-tech/coordinator/internal/config"
coordinatorType "scroll-tech/coordinator/internal/types"
)
// SubmitProofController the submit proof api controller
type SubmitProofController struct {
proverMgr *ProverManager
clients Clients
priorityUpstream *PriorityUpstreamManager
}
// NewSubmitProofController create the submit proof api controller instance
func NewSubmitProofController(cfg *config.ProxyConfig, clients Clients, proverMgr *ProverManager, priorityMgr *PriorityUpstreamManager, reg prometheus.Registerer) *SubmitProofController {
return &SubmitProofController{
proverMgr: proverMgr,
clients: clients,
priorityUpstream: priorityMgr,
}
}
func upstreamFromTaskName(taskID string) (string, string) {
parts, rest, found := strings.Cut(taskID, ":")
if found {
return parts, rest
}
return "", parts
}
func formUpstreamWithTaskName(upstream string, taskID string) string {
return fmt.Sprintf("%s:%s", upstream, taskID)
}
// SubmitProof forwards the prover's proof to the owning upstream coordinator
func (spc *SubmitProofController) SubmitProof(ctx *gin.Context) {
var submitParameter coordinatorType.SubmitProofParameter
if err := ctx.ShouldBind(&submitParameter); err != nil {
nerr := fmt.Errorf("prover submitProof parameter invalid, err:%w", err)
types.RenderFailure(ctx, types.ErrCoordinatorParameterInvalidNo, nerr)
return
}
publicKey, proverName := getSessionData(ctx)
if publicKey == "" {
return
}
session := spc.proverMgr.Get(publicKey)
if session == nil {
nerr := fmt.Errorf("can not get session for prover %s", proverName)
types.RenderFailure(ctx, types.InternalServerError, nerr)
return
}
upstream, realTaskID := upstreamFromTaskName(submitParameter.TaskID)
cli, existed := spc.clients[upstream]
if !existed {
log.Warn("A upstream for submitting is removed or lost for some reason while running", "up", upstream)
nerr := fmt.Errorf("Invalid upstream name (%s) from taskID %s", upstream, submitParameter.TaskID)
types.RenderFailure(ctx, types.ErrCoordinatorParameterInvalidNo, nerr)
return
}
log.Debug("Start submitting", "up", upstream, "cli", proverName, "id", realTaskID, "status", submitParameter.Status)
submitParameter.TaskID = realTaskID
resp, err := session.SubmitProof(ctx, &submitParameter, cli)
if err != nil {
log.Error("Upstream has error resp for submit", "error", err, "up", upstream, "cli", proverName, "taskID", realTaskID)
types.RenderFailure(ctx, types.ErrCoordinatorGetTaskFailure, err)
return
} else if resp.ErrCode != 0 {
log.Error("Upstream has error resp for get task", "code", resp.ErrCode, "msg", resp.ErrMsg, "up", upstream, "cli", proverName, "taskID", realTaskID)
// simply dispatch the error from upstream to prover
types.RenderFailure(ctx, resp.ErrCode, fmt.Errorf("%s", resp.ErrMsg))
return
} else {
log.Debug("Submit proof to upstream", "up", upstream, "cli", proverName, "taskID", realTaskID)
spc.priorityUpstream.Delete(publicKey)
types.RenderSuccess(ctx, resp.Data)
return
}
}

View File
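The proxy namespaces task IDs as "<upstream>:<taskID>" when handing tasks out and strips the prefix again on submission; strings.Cut splits at the first colon, so task IDs that themselves contain a colon survive the round trip. A compact demonstration of the two helpers above:

package main

import (
	"fmt"
	"strings"
)

// form and split copy formUpstreamWithTaskName and upstreamFromTaskName
// above, renamed for a standalone example.
func form(upstream, taskID string) string {
	return fmt.Sprintf("%s:%s", upstream, taskID)
}

func split(taskID string) (upstream, realID string) {
	before, rest, found := strings.Cut(taskID, ":")
	if found {
		return before, rest
	}
	return "", before // no prefix: the task did not come through the proxy
}

func main() {
	up, id := split(form("sepolia", "chunk-42"))
	fmt.Println(up, id) // sepolia chunk-42

	// strings.Cut splits at the first ':', so IDs containing ':' round-trip.
	up, id = split(form("mainnet", "bundle:7"))
	fmt.Println(up, id) // mainnet bundle:7
}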

@@ -1,6 +1,7 @@
package auth
import (
"context"
"errors"
"fmt"
"strings"
@@ -19,45 +20,72 @@ import (
// LoginLogic the auth logic
type LoginLogic struct {
cfg *config.Config
challengeOrm *orm.Challenge
cfg *config.VerifierConfig
deduplicator ChallengeDeduplicator
openVmVks map[string]struct{}
proverVersionHardForkMap map[string]string
}
type ChallengeDeduplicator interface {
InsertChallenge(ctx context.Context, challengeString string) error
}
type SimpleDeduplicator struct {
}
func (s *SimpleDeduplicator) InsertChallenge(ctx context.Context, challengeString string) error {
return nil
}
// NewLoginLogicWithSimpleDeduplicator creates a LoginLogic that does not use the db to deduplicate challenges
func NewLoginLogicWithSimpleDeduplicator(vcfg *config.VerifierConfig, vf *verifier.Verifier) *LoginLogic {
return newLoginLogic(&SimpleDeduplicator{}, vcfg, vf)
}
// NewLoginLogic creates a LoginLogic
func NewLoginLogic(db *gorm.DB, cfg *config.Config, vf *verifier.Verifier) *LoginLogic {
func NewLoginLogic(db *gorm.DB, vcfg *config.VerifierConfig, vf *verifier.Verifier) *LoginLogic {
return newLoginLogic(orm.NewChallenge(db), vcfg, vf)
}
func newLoginLogic(deduplicator ChallengeDeduplicator, vcfg *config.VerifierConfig, vf *verifier.Verifier) *LoginLogic {
proverVersionHardForkMap := make(map[string]string)
for _, cfg := range cfg.ProverManager.Verifier.Verifiers {
for _, cfg := range vcfg.Verifiers {
proverVersionHardForkMap[cfg.ForkName] = cfg.MinProverVersion
}
return &LoginLogic{
cfg: cfg,
cfg: vcfg,
openVmVks: vf.OpenVMVkMap,
challengeOrm: orm.NewChallenge(db),
deduplicator: deduplicator,
proverVersionHardForkMap: proverVersionHardForkMap,
}
}
// InsertChallengeString insert and check the challenge string is existed
func (l *LoginLogic) InsertChallengeString(ctx *gin.Context, challenge string) error {
return l.challengeOrm.InsertChallenge(ctx.Copy(), challenge)
}
func (l *LoginLogic) Check(login *types.LoginParameter) error {
// VerifyMsg verifies the signature of the login message
func VerifyMsg(login *types.LoginParameter) error {
verify, err := login.Verify()
if err != nil || !verify {
log.Error("auth message verify failure", "prover_name", login.Message.ProverName,
"prover_version", login.Message.ProverVersion, "message", login.Message)
return errors.New("auth message verify failure")
}
return nil
}
if !version.CheckScrollRepoVersion(login.Message.ProverVersion, l.cfg.ProverManager.Verifier.MinProverVersion) {
return fmt.Errorf("incompatible prover version. please upgrade your prover, minimum allowed version: %s, actual version: %s", l.cfg.ProverManager.Verifier.MinProverVersion, login.Message.ProverVersion)
// InsertChallengeString inserts the challenge string and rejects it if it already exists
func (l *LoginLogic) InsertChallengeString(ctx *gin.Context, challenge string) error {
return l.deduplicator.InsertChallenge(ctx.Copy(), challenge)
}
// CompatibilityCheck checks whether the login client is compatible with the coordinator settings
func (l *LoginLogic) CompatibilityCheck(login *types.LoginParameter) error {
if !version.CheckScrollRepoVersion(login.Message.ProverVersion, l.cfg.MinProverVersion) {
return fmt.Errorf("incompatible prover version. please upgrade your prover, minimum allowed version: %s, actual version: %s", l.cfg.MinProverVersion, login.Message.ProverVersion)
}
vks := make(map[string]struct{})
@@ -65,27 +93,32 @@ func (l *LoginLogic) Check(login *types.LoginParameter) error {
vks[vk] = struct{}{}
}
// new coordinator / proxy do not check vks while login, code only for backward compatibility
if len(vks) != 0 {
for _, vk := range login.Message.VKs {
if _, ok := vks[vk]; !ok {
log.Error("vk inconsistency", "prover vk", vk, "prover name", login.Message.ProverName,
"prover_version", login.Message.ProverVersion, "message", login.Message)
if !version.CheckScrollProverVersion(login.Message.ProverVersion) {
return fmt.Errorf("incompatible prover version. please upgrade your prover, expect version: %s, actual version: %s",
version.Version, login.Message.ProverVersion)
}
// if the prover reports the same prover version, the vk mismatch must come from its params or config files
return errors.New("incompatible vk. please check your params files or config files")
}
}
}
switch login.Message.ProverProviderType {
case types.ProverProviderTypeInternal:
case types.ProverProviderTypeExternal:
case types.ProverProviderTypeProxy:
case types.ProverProviderTypeUndefined:
// for backward compatibility, treat an undefined provider type as internal
login.Message.ProverProviderType = types.ProverProviderTypeInternal
default:
log.Error("invalid prover_provider_type", "value", login.Message.ProverProviderType, "prover name", login.Message.ProverName, "prover version", login.Message.ProverVersion)
return errors.New("invalid prover provider type")
}
return nil

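A minimal usage sketch of the two constructors above (the wrapper function and its statelessProxy flag are illustrative, not part of this change): the proxy path skips db-backed challenge deduplication via SimpleDeduplicator, while the coordinator path keeps the orm-backed replay check.

func newLoginLogicForDeployment(db *gorm.DB, vcfg *config.VerifierConfig, vf *verifier.Verifier, statelessProxy bool) *LoginLogic {
	if statelessProxy {
		// no db round-trip: every challenge is treated as unseen
		return NewLoginLogicWithSimpleDeduplicator(vcfg, vf)
	}
	// orm-backed deduplication rejects replayed challenge strings
	return NewLoginLogic(db, vcfg, vf)
}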

@@ -1,3 +1,5 @@
//go:build !mock_verifier
package libzkp
/*
@@ -13,8 +15,6 @@ import (
"os"
"strings"
"unsafe"
"scroll-tech/common/types/message"
)
func init() {
@@ -72,31 +72,6 @@ func VerifyBundleProof(proofData, forkName string) bool {
return result != 0
}
// TaskType enum values matching the Rust enum
const (
TaskTypeChunk = 0
TaskTypeBatch = 1
TaskTypeBundle = 2
)
func fromMessageTaskType(taskType int) int {
switch message.ProofType(taskType) {
case message.ProofTypeChunk:
return TaskTypeChunk
case message.ProofTypeBatch:
return TaskTypeBatch
case message.ProofTypeBundle:
return TaskTypeBundle
default:
panic(fmt.Sprintf("unsupported proof type: %d", taskType))
}
}
// Generate a universal task
func GenerateUniversalTask(taskType int, taskJSON, forkName string, expectedVk []byte) (bool, string, string, []byte) {
return generateUniversalTask(fromMessageTaskType(taskType), taskJSON, strings.ToLower(forkName), expectedVk)
}
// Generate wrapped proof
func GenerateWrappedProof(proofJSON, metadata string, vkData []byte) string {
cProofJSON := goToCString(proofJSON)
@@ -140,3 +115,20 @@ func DumpVk(forkName, filePath string) error {
return nil
}
// UniversalTaskCompatibilityFix calls the universal task compatibility fix function in the native library
func UniversalTaskCompatibilityFix(taskJSON string) (string, error) {
cTaskJSON := goToCString(taskJSON)
defer freeCString(cTaskJSON)
resultPtr := C.univ_task_compatibility_fix(cTaskJSON)
if resultPtr == nil {
return "", fmt.Errorf("univ_task_compatibility_fix failed")
}
// Convert result to Go string and free C memory
result := C.GoString(resultPtr)
C.release_string(resultPtr)
return result, nil
}


@@ -0,0 +1,57 @@
//go:build mock_verifier
package libzkp
import (
"encoding/json"
)
// // InitVerifier is a no-op in the mock.
// func InitVerifier(configJSON string) {}
// // VerifyChunkProof returns a fixed success in the mock.
// func VerifyChunkProof(proofData, forkName string) bool {
// return true
// }
// // VerifyBatchProof returns a fixed success in the mock.
// func VerifyBatchProof(proofData, forkName string) bool {
// return true
// }
// // VerifyBundleProof returns a fixed success in the mock.
// func VerifyBundleProof(proofData, forkName string) bool {
// return true
// }
func UniversalTaskCompatibilityFix(taskJSON string) (string, error) {
panic("should not run here")
}
// GenerateWrappedProof returns a fixed dummy proof string in the mock.
func GenerateWrappedProof(proofJSON, metadata string, vkData []byte) string {
payload := struct {
Metadata json.RawMessage `json:"metadata"`
Proof json.RawMessage `json:"proof"`
GitVersion string `json:"git_version"`
}{
Metadata: json.RawMessage(metadata),
Proof: json.RawMessage(proofJSON),
GitVersion: "mock-git-version",
}
out, err := json.Marshal(payload)
if err != nil {
panic(err)
}
return string(out)
}
// DumpVk is a no-op and returns nil in the mock.
func DumpVk(forkName, filePath string) error {
return nil
}
// SetDynamicFeature is a no-op in the mock.
func SetDynamicFeature(feats string) {}


@@ -40,7 +40,9 @@ HandlingResult gen_universal_task(
char* task,
char* fork_name,
const unsigned char* expected_vk,
size_t expected_vk_len,
const unsigned char* decryption_key,
size_t decryption_key_len
);
// Release memory allocated for a HandlingResult returned by gen_universal_task
@@ -54,4 +56,7 @@ char* gen_wrapped_proof(char* proof_json, char* metadata, char* vk, size_t vk_le
// Release memory allocated for a string returned by gen_wrapped_proof
void release_string(char* string_ptr);
// Universal task compatibility fix function
char* univ_task_compatibility_fix(char* task_json);
#endif /* LIBZKP_H */


@@ -0,0 +1,27 @@
package libzkp
import (
"fmt"
"scroll-tech/common/types/message"
)
// TaskType enum values matching the Rust enum
const (
TaskTypeChunk = 0
TaskTypeBatch = 1
TaskTypeBundle = 2
)
func fromMessageTaskType(taskType int) int {
switch message.ProofType(taskType) {
case message.ProofTypeChunk:
return TaskTypeChunk
case message.ProofTypeBatch:
return TaskTypeBatch
case message.ProofTypeBundle:
return TaskTypeBundle
default:
panic(fmt.Sprintf("unsupported proof type: %d", taskType))
}
}
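// For illustration (a sketch, not part of this change): the exported wrappers
// pass the converted value across the FFI boundary, e.g.
//   fromMessageTaskType(int(message.ProofTypeChunk))  == TaskTypeChunk  (0)
//   fromMessageTaskType(int(message.ProofTypeBundle)) == TaskTypeBundle (2)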


@@ -5,6 +5,7 @@ package libzkp
import (
"encoding/json"
"fmt"
"strings"
"scroll-tech/common/types/message"
@@ -14,7 +15,11 @@ import (
func InitL2geth(configJSON string) {
}
func GenerateUniversalTask(taskType int, taskJSON, forkName string, expectedVk []byte, decryptionKey []byte) (bool, string, string, []byte) {
return generateUniversalTask(fromMessageTaskType(taskType), taskJSON, strings.ToLower(forkName), expectedVk, decryptionKey)
}
func generateUniversalTask(taskType int, taskJSON, forkName string, expectedVk []byte, decryptionKey []byte) (bool, string, string, []byte) {
fmt.Printf("call mocked generate universal task %d, taskJson %s\n", taskType, taskJSON)
var metadata interface{}


@@ -7,7 +7,10 @@ package libzkp
#include "libzkp.h"
*/
import "C" //nolint:typecheck
import "unsafe"
import (
"strings"
"unsafe"
)
// Initialize the handler for universal task
func InitL2geth(configJSON string) {
@@ -17,7 +20,12 @@ func InitL2geth(configJSON string) {
C.init_l2geth(cConfig)
}
// Generate a universal task
func GenerateUniversalTask(taskType int, taskJSON, forkName string, expectedVk []byte, decryptionKey []byte) (bool, string, string, []byte) {
return generateUniversalTask(fromMessageTaskType(taskType), taskJSON, strings.ToLower(forkName), expectedVk, decryptionKey)
}
func generateUniversalTask(taskType int, taskJSON, forkName string, expectedVk []byte, decryptionKey []byte) (bool, string, string, []byte) {
cTask := goToCString(taskJSON)
cForkName := goToCString(forkName)
defer freeCString(cTask)
@@ -29,7 +37,13 @@ func generateUniversalTask(taskType int, taskJSON, forkName string, expectedVk [
cVk = (*C.uchar)(unsafe.Pointer(&expectedVk[0]))
}
// Pass a pointer to the decryption key slice's backing array, if any
var cDk *C.uchar
if len(decryptionKey) > 0 {
cDk = (*C.uchar)(unsafe.Pointer(&decryptionKey[0]))
}
result := C.gen_universal_task(C.int(taskType), cTask, cForkName, cVk, C.size_t(len(expectedVk)), cDk, C.size_t(len(decryptionKey)))
defer C.release_task_result(result)
// Check if the operation was successful


@@ -213,6 +213,14 @@ func (bp *BatchProverTask) Assign(ctx *gin.Context, getTaskParameter *coordinato
return nil, ErrCoordinatorInternalFailure
}
proverTask.Metadata = metadata
if isCompatibilityFixingVersion(taskCtx.ProverVersion) {
log.Info("Apply compatibility fixing for prover", "version", taskCtx.ProverVersion)
if err := fixCompatibility(taskMsg); err != nil {
log.Error("apply compatibility failure", "err", err)
return nil, ErrCoordinatorInternalFailure
}
}
}
// Store session info.
@@ -249,36 +257,21 @@ func (bp *BatchProverTask) formatProverTask(ctx context.Context, task *orm.Prove
}
var chunkProofs []*message.OpenVMChunkProof
// var chunkInfos []*message.ChunkInfo
for _, chunk := range chunks {
var proof message.OpenVMChunkProof
if encodeErr := json.Unmarshal(chunk.Proof, &proof); encodeErr != nil {
return nil, fmt.Errorf("Chunk.GetProofsByBatchHash unmarshal proof error: %w, batch hash: %v, chunk hash: %v", encodeErr, task.TaskID, chunk.Hash)
}
chunkProofs = append(chunkProofs, &proof)
}
taskDetail, err := bp.getBatchTaskDetail(batch, chunkProofs, hardForkName)
if err != nil {
return nil, fmt.Errorf("failed to get batch task detail, taskID:%s err:%w", task.TaskID, err)
}
taskBytesWithchunkProofs, err := json.Marshal(taskDetail)
if err != nil {
return nil, fmt.Errorf("failed to marshal chunk proofs, taskID:%s err:%w", task.TaskID, err)
}
@@ -286,7 +279,7 @@ func (bp *BatchProverTask) formatProverTask(ctx context.Context, task *orm.Prove
taskMsg := &coordinatorType.GetTaskSchema{
TaskID: task.TaskID,
TaskType: int(message.ProofTypeBatch),
TaskData: string(taskBytesWithchunkProofs),
HardForkName: hardForkName,
}
@@ -301,38 +294,59 @@ func (bp *BatchProverTask) recoverActiveAttempts(ctx *gin.Context, batchTask *or
}
}
func (bp *BatchProverTask) getBatchTaskDetail(dbBatch *orm.Batch, chunkProofs []*message.OpenVMChunkProof, hardForkName string) (*message.BatchTaskDetail, error) {
// Get the version byte.
version, err := bp.version(hardForkName)
if err != nil {
return nil, fmt.Errorf("failed to decode version byte: %w", err)
}
taskDetail := &message.BatchTaskDetail{
Version: version,
ChunkProofs: chunkProofs,
ForkName: hardForkName,
}
if !bp.validiumMode() {
dbBatchCodecVersion := encoding.CodecVersion(dbBatch.CodecVersion)
switch dbBatchCodecVersion {
case 0:
log.Warn("the codec version is 0, if it is not under integration test we have encountered an error here")
return taskDetail, nil
case encoding.CodecV3, encoding.CodecV4, encoding.CodecV6, encoding.CodecV7, encoding.CodecV8, encoding.CodecV9, encoding.CodecV10:
default:
return nil, fmt.Errorf("Unsupported codec version <%d>", dbBatchCodecVersion)
}
codec, err := encoding.CodecFromVersion(encoding.CodecVersion(dbBatch.CodecVersion))
if err != nil {
return nil, fmt.Errorf("failed to get codec from version %d, err: %w", dbBatch.CodecVersion, err)
}
batchHeader, decodeErr := codec.NewDABatchFromBytes(dbBatch.BatchHeader)
if decodeErr != nil {
return nil, fmt.Errorf("failed to decode batch header version %d: %w", dbBatch.CodecVersion, decodeErr)
}
taskDetail.BatchHeader = batchHeader
taskDetail.ChallengeDigest = common.HexToHash(dbBatch.ChallengeDigest)
// Memory layout of `BlobDataProof`: used in Codec.BlobDataProofForPointEvaluation()
// | z | y | kzg_commitment | kzg_proof |
// |---------|---------|----------------|-----------|
// | bytes32 | bytes32 | bytes48 | bytes48 |
taskDetail.KzgProof = &message.Byte48{Big: hexutil.Big(*new(big.Int).SetBytes(dbBatch.BlobDataProof[112:160]))}
taskDetail.KzgCommitment = &message.Byte48{Big: hexutil.Big(*new(big.Int).SetBytes(dbBatch.BlobDataProof[64:112]))}
} else {
log.Info("Apply validium mode for batch proving task")
codec := cutils.FromVersion(version)
batchHeader, decodeErr := codec.DABatchForTaskFromBytes(dbBatch.BatchHeader)
if decodeErr != nil {
return nil, fmt.Errorf("failed to decode batch header version %d: %w", dbBatch.CodecVersion, decodeErr)
}
batchHeader.SetHash(common.HexToHash(dbBatch.Hash))
taskDetail.BatchHeader = batchHeader
}
return taskDetail, nil
}
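// A quick sanity check of the BlobDataProof offsets used above (illustrative only):
//   z:              blobDataProof[0:32]
//   y:              blobDataProof[32:64]
//   kzg_commitment: blobDataProof[64:112]  -> taskDetail.KzgCommitment
//   kzg_proof:      blobDataProof[112:160] -> taskDetail.KzgProof
// for a total of 32 + 32 + 48 + 48 = 160 bytes.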


@@ -211,6 +211,14 @@ func (bp *BundleProverTask) Assign(ctx *gin.Context, getTaskParameter *coordinat
// bundle proofs require snark
taskMsg.UseSnark = true
proverTask.Metadata = metadata
if isCompatibilityFixingVersion(taskCtx.ProverVersion) {
log.Info("Apply compatibility fixing for prover", "version", taskCtx.ProverVersion)
if err := fixCompatibility(taskMsg); err != nil {
log.Error("apply compatibility failure", "err", err)
return nil, ErrCoordinatorInternalFailure
}
}
}
// Store session info.
@@ -265,7 +273,14 @@ func (bp *BundleProverTask) formatProverTask(ctx context.Context, task *orm.Prov
batchProofs = append(batchProofs, &proof)
}
// Get the version byte.
version, err := bp.version(hardForkName)
if err != nil {
return nil, fmt.Errorf("failed to decode version byte: %w", err)
}
taskDetail := message.BundleTaskDetail{
Version: version,
BatchProofs: batchProofs,
ForkName: hardForkName,
}


@@ -237,14 +237,21 @@ func (cp *ChunkProverTask) formatProverTask(ctx context.Context, task *orm.Prove
return nil, fmt.Errorf("failed to fetch block hashes of a chunk, chunk hash:%s err:%v", task.TaskID, dbErr)
}
// Get the version byte.
version, err := cp.version(hardForkName)
if err != nil {
return nil, fmt.Errorf("failed to decode version byte: %w", err)
}
var taskDetailBytes []byte
taskDetail := message.ChunkTaskDetail{
Version: version,
BlockHashes: blockHashes,
PrevMsgQueueHash: common.HexToHash(chunk.PrevL1MessageQueueHash),
PostMsgQueueHash: common.HexToHash(chunk.PostL1MessageQueueHash),
ForkName: hardForkName,
}
taskDetailBytes, err = json.Marshal(taskDetail)
if err != nil {
return nil, fmt.Errorf("failed to marshal block hashes hash:%s, err:%w", task.TaskID, err)


@@ -1,6 +1,7 @@
package provertask
import (
"encoding/hex"
"errors"
"fmt"
"strings"
@@ -14,11 +15,13 @@ import (
"gorm.io/gorm"
"scroll-tech/common/types/message"
"scroll-tech/common/version"
"scroll-tech/coordinator/internal/config"
"scroll-tech/coordinator/internal/logic/libzkp"
"scroll-tech/coordinator/internal/orm"
coordinatorType "scroll-tech/coordinator/internal/types"
"scroll-tech/coordinator/internal/utils"
)
var (
@@ -65,6 +68,17 @@ type proverTaskContext struct {
hasAssignedTask *orm.ProverTask
}
func (b *BaseProverTask) version(hardForkName string) (uint8, error) {
return utils.Version(hardForkName, b.validiumMode())
}
// validiumMode induces different behavior in task generation:
// + skip the point_evaluation part in batch task
// + encode batch header with codec in utils instead of da-codec
func (b *BaseProverTask) validiumMode() bool {
return b.cfg.L2.ValidiumMode
}
// hardForkName get the chunk/batch/bundle hard fork name
func (b *BaseProverTask) hardForkName(ctx *gin.Context, taskCtx *proverTaskContext) (string, error) {
switch {
@@ -192,7 +206,16 @@ func (b *BaseProverTask) applyUniversal(schema *coordinatorType.GetTaskSchema) (
return nil, nil, fmt.Errorf("no expectedVk found from hardfork %s", schema.HardForkName)
}
var decryptionKey []byte
if b.cfg.L2.ValidiumMode {
var err error
decryptionKey, err = hex.DecodeString(b.cfg.Sequencer.DecryptionKey)
if err != nil {
return nil, nil, fmt.Errorf("sequencer decryption key hex-decoding failed")
}
}
ok, uTaskData, metadata, _ := libzkp.GenerateUniversalTask(schema.TaskType, schema.TaskData, schema.HardForkName, expectedVk, decryptionKey)
if !ok {
return nil, nil, fmt.Errorf("can not generate universal task, see coordinator log for the reason")
}
@@ -201,6 +224,23 @@ func (b *BaseProverTask) applyUniversal(schema *coordinatorType.GetTaskSchema) (
return schema, []byte(metadata), nil
}
const CompatibilityVersion = "4.5.43"
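// isCompatibilityFixingVersion reports whether the prover version is older than
// CompatibilityVersion (i.e. CheckScrollRepoVersion fails), in which case the
// universal task is rewritten by fixCompatibility below.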
func isCompatibilityFixingVersion(ver string) bool {
return !version.CheckScrollRepoVersion(ver, CompatibilityVersion)
}
func fixCompatibility(schema *coordinatorType.GetTaskSchema) error {
fixedTask, err := libzkp.UniversalTaskCompatibilityFix(schema.TaskData)
if err != nil {
return err
}
schema.TaskData = fixedTask
return nil
}
func newGetTaskCounterVec(factory promauto.Factory, taskType string) *prometheus.CounterVec {
getTaskCounterInitOnce.Do(func() {
getTaskCounterVec = factory.NewCounterVec(prometheus.CounterOpts{


@@ -155,7 +155,7 @@ func NewSubmitProofReceiverLogic(cfg *config.ProverManager, chainCfg *params.Cha
// HandleZkProof handle a ZkProof submitted from a prover.
// For now only proving/verifying error will lead to setting status as skipped.
// db/unmarshal errors will not because they are errors on the business logic side.
func (m *ProofReceiverLogic) HandleZkProof(ctx *gin.Context, proofParameter coordinatorType.SubmitProofParameter) (rerr error) {
m.proofReceivedTotal.Inc()
pk := ctx.GetString(coordinatorType.PublicKey)
if len(pk) == 0 {
@@ -172,6 +172,18 @@ func (m *ProofReceiverLogic) HandleZkProof(ctx *gin.Context, proofParameter coor
return ErrValidatorFailureProverTaskEmpty
}
defer func() {
if rerr != nil && types.ProverProveStatus(proverTask.ProvingStatus) == types.ProverAssigned {
// trigger a last-chance closing of current task if some routine had missed it
log.Warn("last chance proof recover triggerred",
"proofID", proofParameter.TaskID,
"err", rerr,
)
m.proofRecover(ctx.Copy(), proverTask, types.ProverTaskFailureTypeUndefined, proofParameter)
}
}()
proofTime := time.Since(proverTask.CreatedAt)
proofTimeSec := uint64(proofTime.Seconds())
@@ -311,6 +323,20 @@ func (m *ProofReceiverLogic) validator(ctx context.Context, proverTask *orm.Prov
}
}()
// Internally we override the timeout failure:
// if the prover task's FailureType is SessionInfoFailureTimeout, the submitted proof is late, but we still accept it
if types.ProverProveStatus(proverTask.ProvingStatus) == types.ProverProofInvalid &&
types.ProverTaskFailureType(proverTask.FailureType) == types.ProverTaskFailureTypeTimeout {
m.validateFailureProverTaskTimeout.Inc()
proverTask.ProvingStatus = int16(types.ProverAssigned)
proofTime := time.Since(proverTask.CreatedAt)
proofTimeSec := uint64(proofTime.Seconds())
log.Warn("proof submit proof have timeout", "hash", proofParameter.TaskID, "taskType", proverTask.TaskType,
"proverName", proverTask.ProverName, "proverPublicKey", pk, "proofTime", proofTimeSec)
}
// Ensure this prover is eligible to participate in the prover task.
if types.ProverProveStatus(proverTask.ProvingStatus) == types.ProverProofValid ||
types.ProverProveStatus(proverTask.ProvingStatus) == types.ProverProofInvalid {
@@ -328,9 +354,6 @@ func (m *ProofReceiverLogic) validator(ctx context.Context, proverTask *orm.Prov
return ErrValidatorFailureProverTaskCannotSubmitTwice
}
proofTime := time.Since(proverTask.CreatedAt)
proofTimeSec := uint64(proofTime.Seconds())
if proofParameter.Status != int(coordinatorType.StatusOk) {
// Temporarily replace "panic" with "pa-nic" to prevent triggering the alert based on logs.
failureMsg := strings.Replace(proofParameter.FailureMsg, "panic", "pa-nic", -1)
@@ -346,14 +369,6 @@ func (m *ProofReceiverLogic) validator(ctx context.Context, proverTask *orm.Prov
return ErrValidatorFailureProofMsgStatusNotOk
}
// if prover task FailureType is SessionInfoFailureTimeout, the submit proof is timeout, need skip it
if types.ProverTaskFailureType(proverTask.FailureType) == types.ProverTaskFailureTypeTimeout {
m.validateFailureProverTaskTimeout.Inc()
log.Info("proof submit proof have timeout, skip this submit proof", "hash", proofParameter.TaskID, "taskType", proverTask.TaskType,
"proverName", proverTask.ProverName, "proverPublicKey", pk, "proofTime", proofTimeSec)
return ErrValidatorFailureProofTimeout
}
// store the proof to prover task
if updateTaskProofErr := m.updateProverTaskProof(ctx, proverTask, proofParameter); updateTaskProofErr != nil {
log.Warn("update prover task proof failure", "hash", proofParameter.TaskID, "proverPublicKey", pk,
@@ -368,6 +383,7 @@ func (m *ProofReceiverLogic) validator(ctx context.Context, proverTask *orm.Prov
"taskType", proverTask.TaskType, "proverName", proverTask.ProverName, "proverPublicKey", pk)
return ErrValidatorFailureTaskHaveVerifiedSuccess
}
return nil
}
@@ -384,7 +400,7 @@ func (m *ProofReceiverLogic) closeProofTask(ctx context.Context, proverTask *orm
log.Info("proof close task update proof status", "hash", proverTask.TaskID, "proverPublicKey", proverTask.ProverPublicKey,
"taskType", message.ProofType(proverTask.TaskType).String(), "status", types.ProvingTaskVerified.String())
if err := m.updateProofStatus(ctx, proverTask, proofParameter, types.ProverProofValid, types.ProverTaskFailureType(proverTask.FailureType), proofTimeSec); err != nil {
log.Error("failed to updated proof status ProvingTaskVerified", "hash", proverTask.TaskID, "proverPublicKey", proverTask.ProverPublicKey, "error", err)
return err
}
@@ -445,6 +461,9 @@ func (m *ProofReceiverLogic) updateProofStatus(ctx context.Context, proverTask *
if err != nil {
return err
}
// sync status and failure type into proverTask
proverTask.ProvingStatus = int16(status)
proverTask.FailureType = int16(failureType)
if status == types.ProverProofValid && message.ProofType(proofParameter.TaskType) == message.ProofTypeChunk {
if checkReadyErr := m.checkAreAllChunkProofsReady(ctx, proverTask.TaskID); checkReadyErr != nil {


@@ -9,7 +9,7 @@ import (
)
// NewVerifier Sets up a mock verifier.
func NewVerifier(cfg *config.VerifierConfig) (*Verifier, error) {
func NewVerifier(cfg *config.VerifierConfig, _ bool) (*Verifier, error) {
return &Verifier{
cfg: cfg,
OpenVMVkMap: map[string]struct{}{"mock_vk": {}},


@@ -19,20 +19,36 @@ import (
"scroll-tech/coordinator/internal/config"
"scroll-tech/coordinator/internal/logic/libzkp"
"scroll-tech/coordinator/internal/utils"
)
// This struct maps to `CircuitConfig` in libzkp/src/verifier.rs.
// Defining a brand-new struct here eliminates side effects in case fields
// in `*config.CircuitConfig` are changed.
type rustCircuitConfig struct {
Version uint `json:"version"`
ForkName string `json:"fork_name"`
AssetsPath string `json:"assets_path"`
Features string `json:"features,omitempty"`
}
var validiumMode bool
func newRustCircuitConfig(cfg config.AssetConfig) *rustCircuitConfig {
ver := cfg.Version
if ver == 0 {
var err error
ver, err = utils.Version(cfg.ForkName, validiumMode)
if err != nil {
panic(err)
}
}
return &rustCircuitConfig{
ForkName: cfg.ForkName,
Version: uint(ver),
AssetsPath: cfg.AssetsPath,
ForkName: cfg.ForkName,
Features: cfg.Features,
}
}
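// For illustration (assuming an assets path of "/assets"): a galileov2 entry with
// no explicit version would be marshaled for the Rust side roughly as
//   {"version":10,"fork_name":"galileov2","assets_path":"/assets"}
// since utils.Version("galileov2", false) yields STF version 10 in domain 0.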
@@ -60,7 +76,8 @@ type rustVkDump struct {
}
// NewVerifier Sets up a rust ffi to call verify.
func NewVerifier(cfg *config.VerifierConfig, useValidiumMode bool) (*Verifier, error) {
validiumMode = useValidiumMode
verifierConfig := newRustVerifierConfig(cfg)
configBytes, err := json.Marshal(verifierConfig)
if err != nil {


@@ -14,7 +14,7 @@ import (
)
// ChallengeMiddleware jwt challenge middleware
func ChallengeMiddleware(auth *config.Auth) *jwt.GinJWTMiddleware {
jwtMiddleware, err := jwt.New(&jwt.GinJWTMiddleware{
Authenticator: func(c *gin.Context) (interface{}, error) {
return nil, nil
@@ -30,8 +30,8 @@ func ChallengeMiddleware(conf *config.Config) *jwt.GinJWTMiddleware {
}
},
Unauthorized: unauthorized,
Key: []byte(auth.Secret),
Timeout: time.Second * time.Duration(auth.ChallengeExpireDurationSec),
TokenLookup: "header: Authorization, query: token, cookie: jwt",
TokenHeadName: "Bearer",
TimeFunc: time.Now,


@@ -4,22 +4,57 @@ import (
"time"
jwt "github.com/appleboy/gin-jwt/v2"
"github.com/gin-gonic/gin"
"github.com/scroll-tech/go-ethereum/log"
"scroll-tech/coordinator/internal/config"
"scroll-tech/coordinator/internal/controller/api"
"scroll-tech/coordinator/internal/controller/proxy"
"scroll-tech/coordinator/internal/types"
)
func nonIdendityAuthorizator(data interface{}, _ *gin.Context) bool {
return data != nil
}
// LoginMiddleware jwt auth middleware
func LoginMiddleware(auth *config.Auth) *jwt.GinJWTMiddleware {
jwtMiddleware, err := jwt.New(&jwt.GinJWTMiddleware{
PayloadFunc: api.Auth.PayloadFunc,
IdentityHandler: api.Auth.IdentityHandler,
IdentityKey: types.PublicKey,
Key: []byte(auth.Secret),
Timeout: time.Second * time.Duration(auth.LoginExpireDurationSec),
Authenticator: api.Auth.Login,
Authorizator: nonIdendityAuthorizator,
Unauthorized: unauthorized,
TokenLookup: "header: Authorization, query: token, cookie: jwt",
TokenHeadName: "Bearer",
TimeFunc: time.Now,
LoginResponse: loginResponse,
})
if err != nil {
log.Crit("new jwt middleware panic", "error", err)
}
if errInit := jwtMiddleware.MiddlewareInit(); errInit != nil {
log.Crit("init jwt middleware panic", "error", errInit)
}
return jwtMiddleware
}
// ProxyLoginMiddleware jwt auth middleware for proxy login
func ProxyLoginMiddleware(auth *config.Auth) *jwt.GinJWTMiddleware {
jwtMiddleware, err := jwt.New(&jwt.GinJWTMiddleware{
PayloadFunc: proxy.Auth.PayloadFunc,
IdentityHandler: proxy.Auth.IdentityHandler,
IdentityKey: types.PublicKey,
Key: []byte(auth.Secret),
Timeout: time.Second * time.Duration(auth.LoginExpireDurationSec),
Authenticator: proxy.Auth.Login,
Authorizator: nonIdendityAuthorizator,
Unauthorized: unauthorized,
TokenLookup: "header: Authorization, query: token, cookie: jwt",
TokenHeadName: "Bearer",


@@ -28,8 +28,8 @@ func TestMain(m *testing.M) {
defer func() {
if testApps != nil {
testApps.Free()
tearDownEnv(t)
}
}()
m.Run()
}


@@ -8,6 +8,7 @@ import (
"scroll-tech/coordinator/internal/config"
"scroll-tech/coordinator/internal/controller/api"
"scroll-tech/coordinator/internal/controller/proxy"
"scroll-tech/coordinator/internal/middleware"
)
@@ -25,16 +26,45 @@ func Route(router *gin.Engine, cfg *config.Config, reg prometheus.Registerer) {
func v1(router *gin.RouterGroup, conf *config.Config) {
r := router.Group("/v1")
challengeMiddleware := middleware.ChallengeMiddleware(conf.Auth)
r.GET("/challenge", challengeMiddleware.LoginHandler)
loginMiddleware := middleware.LoginMiddleware(conf.Auth)
r.POST("/login", challengeMiddleware.MiddlewareFunc(), loginMiddleware.LoginHandler)
// need jwt token api
r.Use(loginMiddleware.MiddlewareFunc())
{
r.POST("/proxy_login", loginMiddleware.LoginHandler)
r.POST("/get_task", api.GetTask.GetTasks)
r.POST("/submit_proof", api.SubmitProof.SubmitProof)
}
}
// ProxyRoute registers routes for the coordinator proxy
func ProxyRoute(router *gin.Engine, cfg *config.ProxyConfig, reg prometheus.Registerer) {
router.Use(gin.Recovery())
observability.Use(router, "coordinator", reg)
r := router.Group("coordinator")
v1_proxy(r, cfg)
}
func v1_proxy(router *gin.RouterGroup, conf *config.ProxyConfig) {
r := router.Group("/v1")
challengeMiddleware := middleware.ChallengeMiddleware(conf.ProxyManager.Auth)
r.GET("/challenge", challengeMiddleware.LoginHandler)
loginMiddleware := middleware.ProxyLoginMiddleware(conf.ProxyManager.Auth)
r.POST("/login", challengeMiddleware.MiddlewareFunc(), loginMiddleware.LoginHandler)
// need jwt token api
r.Use(loginMiddleware.MiddlewareFunc())
{
r.POST("/get_task", proxy.GetTask.GetTasks)
r.POST("/submit_proof", proxy.SubmitProof.SubmitProof)
}
}


@@ -64,6 +64,8 @@ func (r ProverProviderType) String() string {
return "prover provider type internal"
case ProverProviderTypeExternal:
return "prover provider type external"
case ProverProviderTypeProxy:
return "prover provider type proxy"
default:
return fmt.Sprintf("prover provider type: %d", r)
}
@@ -76,4 +78,6 @@ const (
ProverProviderTypeInternal
// ProverProviderTypeExternal is an external prover provider type
ProverProviderTypeExternal
// ProverProviderTypeProxy is a proxy prover provider type
ProverProviderTypeProxy = 3
)


@@ -0,0 +1,48 @@
package types
import (
"encoding/json"
"reflect"
"testing"
"scroll-tech/common/types"
)
func TestResponseDecodeData_GetTaskSchema(t *testing.T) {
// Arrange: build a dummy payload and wrap it in Response
in := GetTaskSchema{
UUID: "uuid-123",
TaskID: "task-abc",
TaskType: 1,
UseSnark: true,
TaskData: "dummy-data",
HardForkName: "cancun",
}
resp := types.Response{
ErrCode: 0,
ErrMsg: "",
Data: in,
}
// Act: JSON round-trip the Response to simulate real HTTP encoding/decoding
b, err := json.Marshal(resp)
if err != nil {
t.Fatalf("marshal response: %v", err)
}
var decoded types.Response
if err := json.Unmarshal(b, &decoded); err != nil {
t.Fatalf("unmarshal response: %v", err)
}
var out GetTaskSchema
if err := decoded.DecodeData(&out); err != nil {
t.Fatalf("DecodeData error: %v", err)
}
// Assert: structs match after decode
if !reflect.DeepEqual(in, out) {
t.Fatalf("decoded struct mismatch:\nwant: %+v\n got: %+v", in, out)
}
}


@@ -0,0 +1,91 @@
package utils
import (
"encoding/binary"
"fmt"
"github.com/scroll-tech/go-ethereum/common"
)
type CodecVersion uint8
const (
daBatchValidiumEncodedLength = 137
)
type DABatch interface {
SetHash(common.Hash)
}
type daBatchValidiumV1 struct {
Version CodecVersion `json:"version"`
BatchIndex uint64 `json:"batch_index"`
ParentBatchHash common.Hash `json:"parent_batch_hash"`
PostStateRoot common.Hash `json:"post_state_root"`
WithDrawRoot common.Hash `json:"withdraw_root"`
Commitment common.Hash `json:"commitment"`
}
type daBatchValidium struct {
V1 *daBatchValidiumV1 `json:"V1,omitempty"`
BatchHash common.Hash `json:"batch_hash"`
}
func (da *daBatchValidium) SetHash(h common.Hash) {
da.BatchHash = h
}
func FromVersion(v uint8) CodecVersion {
return CodecVersion(v & STFVersionMask)
}
func (c CodecVersion) DABatchForTaskFromBytes(b []byte) (DABatch, error) {
switch c {
case 1:
if v1, err := decodeDABatchV1(b); err == nil {
return &daBatchValidium{
V1: v1,
}, nil
} else {
return nil, err
}
default:
return nil, fmt.Errorf("unknown codec type %d", c)
}
}
func decodeDABatchV1(data []byte) (*daBatchValidiumV1, error) {
if len(data) != daBatchValidiumEncodedLength {
return nil, fmt.Errorf("invalid data length for DABatchV7, expected %d bytes but got %d", daBatchValidiumEncodedLength, len(data))
}
const (
versionSize = 1
indexSize = 8
hashSize = 32
)
// Offsets (same as encodeBatchHeaderValidium)
versionOffset := 0
indexOffset := versionOffset + versionSize
parentHashOffset := indexOffset + indexSize
stateRootOffset := parentHashOffset + hashSize
withdrawRootOffset := stateRootOffset + hashSize
commitmentOffset := withdrawRootOffset + hashSize
version := CodecVersion(data[versionOffset])
batchIndex := binary.BigEndian.Uint64(data[indexOffset : indexOffset+indexSize])
parentBatchHash := common.BytesToHash(data[parentHashOffset : parentHashOffset+hashSize])
postStateRoot := common.BytesToHash(data[stateRootOffset : stateRootOffset+hashSize])
withdrawRoot := common.BytesToHash(data[withdrawRootOffset : withdrawRootOffset+hashSize])
commitment := common.BytesToHash(data[commitmentOffset : commitmentOffset+hashSize])
return &daBatchValidiumV1{
Version: version,
BatchIndex: batchIndex,
ParentBatchHash: parentBatchHash,
PostStateRoot: postStateRoot,
WithDrawRoot: withdrawRoot,
Commitment: commitment,
}, nil
}
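// Length sanity check for the decoder above (illustrative only): one version byte,
// an 8-byte big-endian batch index, and four 32-byte hashes give
//   1 + 8 + 4*32 = 137 = daBatchValidiumEncodedLength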


@@ -0,0 +1,42 @@
package utils
import (
"errors"
"strings"
)
const (
DomainOffset = 6
STFVersionMask = (1 << DomainOffset) - 1
)
// Version gets the version byte for the chain instance
//
// TODO: This is not foolproof and does not cover all scenarios.
func Version(hardForkName string, ValidiumMode bool) (uint8, error) {
var domain, stfVersion uint8
if ValidiumMode {
domain = 1
stfVersion = 1
} else {
domain = 0
switch canonicalName := strings.ToLower(hardForkName); canonicalName {
case "euclidv1":
stfVersion = 6
case "euclidv2":
stfVersion = 7
case "feynman":
stfVersion = 8
case "galileo":
stfVersion = 9
case "galileov2":
stfVersion = 10
default:
return 0, errors.New("unknown fork name " + canonicalName)
}
}
return (domain << DomainOffset) + stfVersion, nil
}
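// Worked examples of the packing above (illustrative only; the domain occupies the
// top two bits of the byte, the STF version the low six):
//   Version("galileov2", false) -> (0 << 6) + 10 = 10 (0b00001010)
//   Version(<any fork>, true)   -> (1 << 6) + 1  = 65 (0b01000001)
// FromVersion then masks with STFVersionMask, recovering STF version 1 from 65.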


@@ -30,12 +30,14 @@ import (
"scroll-tech/coordinator/internal/config"
"scroll-tech/coordinator/internal/controller/api"
"scroll-tech/coordinator/internal/controller/cron"
"scroll-tech/coordinator/internal/controller/proxy"
"scroll-tech/coordinator/internal/orm"
"scroll-tech/coordinator/internal/route"
)
var (
conf *config.Config
proxyConf *config.ProxyConfig
testApps *testcontainers.TestcontainerApps
@@ -51,6 +53,9 @@ var (
chunk *encoding.Chunk
batch *encoding.Batch
tokenTimeout int
envSet bool
portUsed map[int64]struct{}
)
func TestMain(m *testing.M) {
@@ -63,18 +68,44 @@ func TestMain(m *testing.M) {
}
func randomURL() string {
return randmURLBatch(1)[0]
}
// Generate a batch of random localhost URLs with different ports, similar to randomURL.
func randmURLBatch(n int) []string {
if n <= 0 {
return nil
}
urls := make([]string, 0, n)
if portUsed == nil {
portUsed = make(map[int64]struct{})
}
for len(urls) < n {
id, _ := rand.Int(rand.Reader, big.NewInt(2000-1))
port := 20000 + 2000 + id.Int64()
if _, exist := portUsed[port]; exist {
continue
}
portUsed[port] = struct{}{}
urls = append(urls, fmt.Sprintf("localhost:%d", port))
}
return urls
}
func setupCoordinatorDb(t *testing.T) {
var err error
assert.NotNil(t, db, "setEnv must be called before")
// db, err = testApps.GetGormDBClient()
// assert.NoError(t, err)
sqlDB, err := db.DB()
assert.NoError(t, err)
assert.NoError(t, migrate.ResetDB(sqlDB))
}
func launchCoordinator(t *testing.T, proversPerSession uint8, coordinatorURL string) (*cron.Collector, *http.Server) {
assert.NotNil(t, db, "db must be set")
tokenTimeout = 60
conf = &config.Config{
@@ -114,6 +145,7 @@ func setupCoordinator(t *testing.T, proversPerSession uint8, coordinatorURL stri
EuclidV2Time: new(uint64),
}, db, nil)
route.Route(router, conf, nil)
t.Log("coordinator server url", coordinatorURL)
srv := &http.Server{
Addr: coordinatorURL,
Handler: router,
@@ -129,10 +161,80 @@ func setupCoordinator(t *testing.T, proversPerSession uint8, coordinatorURL stri
return proofCollector, srv
}
func setupCoordinator(t *testing.T, proversPerSession uint8, coordinatorURL string) (*cron.Collector, *http.Server) {
setupCoordinatorDb(t)
return launchCoordinator(t, proversPerSession, coordinatorURL)
}
func setupProxyDb(t *testing.T) {
assert.NotNil(t, db, "setEnv must be called before")
sqlDB, err := db.DB()
assert.NoError(t, err)
assert.NoError(t, migrate.ResetModuleDB(sqlDB, "proxy"))
}
func launchProxy(t *testing.T, proxyURL string, coordinatorURL []string, usePersistent bool) *http.Server {
var err error
assert.NoError(t, err)
coordinators := make(map[string]*config.UpStream)
for i, n := range coordinatorURL {
coordinators[fmt.Sprintf("coordinator_%d", i)] = testProxyUpStreamCfg(n)
}
tokenTimeout = 60
proxyConf = &config.ProxyConfig{
ProxyName: "test_proxy",
ProxyManager: &config.ProxyManager{
Verifier: &config.VerifierConfig{
MinProverVersion: "v4.4.89",
Verifiers: []config.AssetConfig{{
AssetsPath: "",
ForkName: "euclidV2",
}},
},
Client: testProxyClientCfg(),
Auth: &config.Auth{
Secret: "proxy",
ChallengeExpireDurationSec: tokenTimeout,
LoginExpireDurationSec: tokenTimeout,
},
},
Coordinators: coordinators,
}
router := gin.New()
if usePersistent {
proxy.InitController(proxyConf, db, nil)
} else {
proxy.InitController(proxyConf, nil, nil)
}
route.ProxyRoute(router, proxyConf, nil)
t.Log("proxy server url", proxyURL)
srv := &http.Server{
Addr: proxyURL,
Handler: router,
}
go func() {
runErr := srv.ListenAndServe()
if runErr != nil && !errors.Is(runErr, http.ErrServerClosed) {
assert.NoError(t, runErr)
}
}()
time.Sleep(time.Second * 2)
return srv
}
func setEnv(t *testing.T) {
if envSet {
t.Log("SetEnv is re-entried")
return
}
var err error
version.Version = "v4.4.89"
version.Version = "v4.5.45"
glogger := log.NewGlogHandler(log.StreamHandler(os.Stderr, log.LogfmtFormat()))
glogger.Verbosity(log.LvlInfo)
@@ -146,6 +248,7 @@ func setEnv(t *testing.T) {
sqlDB, err := db.DB()
assert.NoError(t, err)
assert.NoError(t, migrate.ResetDB(sqlDB))
assert.NoError(t, migrate.MigrateModule(sqlDB, "proxy"))
batchOrm = orm.NewBatch(db)
chunkOrm = orm.NewChunk(db)
@@ -169,6 +272,7 @@ func setEnv(t *testing.T) {
assert.NoError(t, err)
batch = &encoding.Batch{Chunks: []*encoding.Chunk{chunk}}
envSet = true
}
func TestApis(t *testing.T) {


@@ -34,6 +34,8 @@ type mockProver struct {
privKey *ecdsa.PrivateKey
proofType message.ProofType
coordinatorURL string
token string
useCacheToken bool
}
func newMockProver(t *testing.T, proverName string, coordinatorURL string, proofType message.ProofType, version string) *mockProver {
@@ -50,6 +52,14 @@ func newMockProver(t *testing.T, proverName string, coordinatorURL string, proof
return prover
}
func (r *mockProver) resetConnection(coordinatorURL string) {
r.coordinatorURL = coordinatorURL
}
func (r *mockProver) setUseCacheToken(enable bool) {
r.useCacheToken = enable
}
// connectToCoordinator performs the challenge/login handshake with the prover manager and returns the auth token.
func (r *mockProver) connectToCoordinator(t *testing.T, proverTypes []types.ProverType) (string, int, string) {
challengeString := r.challenge(t)
@@ -115,6 +125,7 @@ func (r *mockProver) login(t *testing.T, challengeString string, proverTypes []t
assert.NoError(t, err)
assert.Equal(t, http.StatusOK, resp.StatusCode())
assert.Empty(t, result.ErrMsg)
r.token = loginData.Token
return loginData.Token, 0, ""
}
@@ -144,11 +155,14 @@ func (r *mockProver) healthCheckFailure(t *testing.T) bool {
func (r *mockProver) getProverTask(t *testing.T, proofType message.ProofType) (*types.GetTaskSchema, int, string) {
// get task from coordinator
if !r.useCacheToken || r.token == "" {
token, errCode, errMsg := r.connectToCoordinator(t, []types.ProverType{types.MakeProverType(proofType)})
if errCode != 0 {
return nil, errCode, errMsg
}
assert.NotEmpty(t, token)
assert.Equal(t, token, r.token)
}
type response struct {
ErrCode int `json:"errcode"`
@@ -160,7 +174,7 @@ func (r *mockProver) getProverTask(t *testing.T, proofType message.ProofType) (*
client := resty.New()
resp, err := client.R().
SetHeader("Content-Type", "application/json").
SetHeader("Authorization", fmt.Sprintf("Bearer %s", token)).
SetHeader("Authorization", fmt.Sprintf("Bearer %s", r.token)).
SetBody(map[string]interface{}{"universal": true, "prover_height": 100, "task_types": []int{int(proofType)}}).
SetResult(&result).
Post("http://" + r.coordinatorURL + "/coordinator/v1/get_task")
@@ -174,11 +188,14 @@ func (r *mockProver) getProverTask(t *testing.T, proofType message.ProofType) (*
//nolint:unparam
func (r *mockProver) tryGetProverTask(t *testing.T, proofType message.ProofType) (int, string) {
// get task from coordinator
if !r.useCacheToken || r.token == "" {
token, errCode, errMsg := r.connectToCoordinator(t, []types.ProverType{types.MakeProverType(proofType)})
if errCode != 0 {
return errCode, errMsg
}
assert.NotEmpty(t, token)
assert.Equal(t, token, r.token)
}
type response struct {
ErrCode int `json:"errcode"`
@@ -190,8 +207,8 @@ func (r *mockProver) tryGetProverTask(t *testing.T, proofType message.ProofType)
client := resty.New()
resp, err := client.R().
SetHeader("Content-Type", "application/json").
SetHeader("Authorization", fmt.Sprintf("Bearer %s", token)).
SetBody(map[string]interface{}{"prover_height": 100, "task_type": int(proofType), "universal": true}).
SetHeader("Authorization", fmt.Sprintf("Bearer %s", r.token)).
SetBody(map[string]interface{}{"prover_height": 100, "task_types": []int{int(proofType)}, "universal": true}).
SetResult(&result).
Post("http://" + r.coordinatorURL + "/coordinator/v1/get_task")
assert.NoError(t, err)
@@ -249,10 +266,13 @@ func (r *mockProver) submitProof(t *testing.T, proverTaskSchema *types.GetTaskSc
Universal: true,
}
if !r.useCacheToken || r.token == "" {
token, authErrCode, errMsg := r.connectToCoordinator(t, []types.ProverType{types.MakeProverType(message.ProofType(proverTaskSchema.TaskType))})
assert.Equal(t, authErrCode, 0)
assert.Equal(t, errMsg, "")
assert.NotEmpty(t, token)
assert.Equal(t, token, r.token)
}
submitProofData, err := json.Marshal(submitProof)
assert.NoError(t, err)
@@ -262,7 +282,7 @@ func (r *mockProver) submitProof(t *testing.T, proverTaskSchema *types.GetTaskSc
client := resty.New()
resp, err := client.R().
SetHeader("Content-Type", "application/json").
SetHeader("Authorization", fmt.Sprintf("Bearer %s", token)).
SetHeader("Authorization", fmt.Sprintf("Bearer %s", r.token)).
SetBody(string(submitProofData)).
SetResult(&result).
Post("http://" + r.coordinatorURL + "/coordinator/v1/submit_proof")


@@ -0,0 +1,297 @@
package test
import (
"context"
"fmt"
"net/http"
"strings"
"testing"
"time"
"github.com/scroll-tech/da-codec/encoding"
"github.com/stretchr/testify/assert"
"scroll-tech/common/types"
"scroll-tech/common/types/message"
"scroll-tech/common/version"
"scroll-tech/coordinator/internal/config"
"scroll-tech/coordinator/internal/controller/proxy"
)
func testProxyClientCfg() *config.ProxyClient {
return &config.ProxyClient{
Secret: "test-secret-key",
ProxyName: "test-proxy",
ProxyVersion: version.Version,
}
}
var testCompatibileMode bool
func testProxyUpStreamCfg(coordinatorURL string) *config.UpStream {
return &config.UpStream{
BaseUrl: fmt.Sprintf("http://%s", coordinatorURL),
RetryWaitTime: 3,
ConnectionTimeoutSec: 30,
CompatibileMode: testCompatibileMode,
}
}
func testProxyClient(t *testing.T) {
// Setup coordinator and http server.
coordinatorURL := randomURL()
proofCollector, httpHandler := setupCoordinator(t, 1, coordinatorURL)
defer func() {
proofCollector.Stop()
assert.NoError(t, httpHandler.Shutdown(context.Background()))
}()
cliCfg := testProxyClientCfg()
upCfg := testProxyUpStreamCfg(coordinatorURL)
clientManager, err := proxy.NewClientManager("test_coordinator", cliCfg, upCfg)
assert.NoError(t, err)
assert.NotNil(t, clientManager)
// Create context with timeout
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
// Test Client method
client := clientManager.ClientAsProxy(ctx)
// Client should not be nil if login succeeds
// Note: This might be nil if the coordinator is not properly set up for proxy authentication
// but the test validates that the Client method completes without panic
assert.NotNil(t, client)
token1 := client.Token()
assert.NotEmpty(t, token1)
t.Logf("Client token: %s (%v)", token1, client)
if !upCfg.CompatibileMode {
time.Sleep(time.Second * 2)
client.Reset()
client = clientManager.ClientAsProxy(ctx)
assert.NotNil(t, client)
token2 := client.Token()
assert.NotEmpty(t, token2)
t.Logf("Client token (sec): %s (%v)", token2, client)
assert.NotEqual(t, token1, token2, "token should not be identical")
}
}
func testProxyHandshake(t *testing.T) {
// Setup proxy http server.
proxyURL := randomURL()
proxyHttpHandler := launchProxy(t, proxyURL, []string{}, false)
defer func() {
assert.NoError(t, proxyHttpHandler.Shutdown(context.Background()))
}()
chunkProver := newMockProver(t, "prover_chunk_test", proxyURL, message.ProofTypeChunk, version.Version)
assert.True(t, chunkProver.healthCheckSuccess(t))
}
func testProxyGetTask(t *testing.T) {
// Setup coordinator and http server.
urls := randmURLBatch(2)
coordinatorURL := urls[0]
collector, httpHandler := setupCoordinator(t, 3, coordinatorURL)
defer func() {
collector.Stop()
assert.NoError(t, httpHandler.Shutdown(context.Background()))
}()
proxyURL := urls[1]
proxyHttpHandler := launchProxy(t, proxyURL, []string{coordinatorURL}, false)
defer func() {
assert.NoError(t, proxyHttpHandler.Shutdown(context.Background()))
}()
chunkProver := newMockProver(t, "prover_chunk_test", proxyURL, message.ProofTypeChunk, version.Version)
chunkProver.setUseCacheToken(true)
code, _ := chunkProver.tryGetProverTask(t, message.ProofTypeChunk)
assert.Equal(t, int(types.ErrCoordinatorEmptyProofData), code)
err := l2BlockOrm.InsertL2Blocks(context.Background(), []*encoding.Block{block1, block2})
assert.NoError(t, err)
dbChunk, err := chunkOrm.InsertChunk(context.Background(), chunk)
assert.NoError(t, err)
err = l2BlockOrm.UpdateChunkHashInRange(context.Background(), 0, 100, dbChunk.Hash)
assert.NoError(t, err)
task, code, msg := chunkProver.getProverTask(t, message.ProofTypeChunk)
assert.Empty(t, code)
if code == 0 {
t.Log("get task id", task.TaskID)
} else {
t.Log("get task error msg", msg)
}
}
func testProxyProof(t *testing.T) {
urls := randmURLBatch(3)
coordinatorURL0 := urls[0]
setupCoordinatorDb(t)
collector0, httpHandler0 := launchCoordinator(t, 3, coordinatorURL0)
defer func() {
collector0.Stop()
httpHandler0.Shutdown(context.Background())
}()
coordinatorURL1 := urls[1]
collector1, httpHandler1 := launchCoordinator(t, 3, coordinatorURL1)
defer func() {
collector1.Stop()
httpHandler1.Shutdown(context.Background())
}()
coordinators := map[string]*http.Server{
"coordinator_0": httpHandler0,
"coordinator_1": httpHandler1,
}
proxyURL := urls[2]
proxyHttpHandler := launchProxy(t, proxyURL, []string{coordinatorURL0, coordinatorURL1}, false)
defer func() {
assert.NoError(t, proxyHttpHandler.Shutdown(context.Background()))
}()
err := l2BlockOrm.InsertL2Blocks(context.Background(), []*encoding.Block{block1, block2})
assert.NoError(t, err)
dbChunk, err := chunkOrm.InsertChunk(context.Background(), chunk)
assert.NoError(t, err)
err = l2BlockOrm.UpdateChunkHashInRange(context.Background(), 0, 100, dbChunk.Hash)
assert.NoError(t, err)
chunkProver := newMockProver(t, "prover_chunk_test", proxyURL, message.ProofTypeChunk, version.Version)
chunkProver.setUseCacheToken(true)
task, code, msg := chunkProver.getProverTask(t, message.ProofTypeChunk)
assert.Empty(t, code)
if code == 0 {
t.Log("get task", task)
parts, _, _ := strings.Cut(task.TaskID, ":")
// first close the coordinators that did not dispatch the task, so that if we
// submit to the wrong target there is a chance the submission fails (hitting a closed coordinator)
for n, srv := range coordinators {
if n != parts {
t.Log("close coordinator", n)
assert.NoError(t, srv.Shutdown(context.Background()))
}
}
exceptProofStatus := verifiedSuccess
chunkProver.submitProof(t, task, exceptProofStatus, types.Success)
} else {
t.Log("get task error msg", msg)
}
// verify proof status
var (
tick = time.Tick(1500 * time.Millisecond)
tickStop = time.Tick(time.Minute)
)
var (
chunkProofStatus types.ProvingStatus
chunkActiveAttempts int16
chunkMaxAttempts int16
)
for {
select {
case <-tick:
chunkProofStatus, err = chunkOrm.GetProvingStatusByHash(context.Background(), dbChunk.Hash)
assert.NoError(t, err)
if chunkProofStatus == types.ProvingTaskVerified {
return
}
chunkActiveAttempts, chunkMaxAttempts, err = chunkOrm.GetAttemptsByHash(context.Background(), dbChunk.Hash)
assert.NoError(t, err)
assert.Equal(t, 1, int(chunkMaxAttempts))
assert.Equal(t, 0, int(chunkActiveAttempts))
case <-tickStop:
t.Error("failed to check proof status", "chunkProofStatus", chunkProofStatus.String())
return
}
}
}
func testProxyPersistent(t *testing.T) {
urls := randmURLBatch(4)
coordinatorURL0 := urls[0]
setupCoordinatorDb(t)
collector0, httpHandler0 := launchCoordinator(t, 3, coordinatorURL0)
defer func() {
collector0.Stop()
httpHandler0.Shutdown(context.Background())
}()
coordinatorURL1 := urls[1]
collector1, httpHandler1 := launchCoordinator(t, 3, coordinatorURL1)
defer func() {
collector1.Stop()
httpHandler1.Shutdown(context.Background())
}()
setupProxyDb(t)
proxyURL1 := urls[2]
proxyHttpHandler := launchProxy(t, proxyURL1, []string{coordinatorURL0, coordinatorURL1}, true)
defer func() {
assert.NoError(t, proxyHttpHandler.Shutdown(context.Background()))
}()
proxyURL2 := urls[3]
proxyHttpHandler2 := launchProxy(t, proxyURL2, []string{coordinatorURL0, coordinatorURL1}, true)
defer func() {
assert.NoError(t, proxyHttpHandler2.Shutdown(context.Background()))
}()
err := l2BlockOrm.InsertL2Blocks(context.Background(), []*encoding.Block{block1, block2})
assert.NoError(t, err)
dbChunk, err := chunkOrm.InsertChunk(context.Background(), chunk)
assert.NoError(t, err)
err = l2BlockOrm.UpdateChunkHashInRange(context.Background(), 0, 100, dbChunk.Hash)
assert.NoError(t, err)
chunkProver := newMockProver(t, "prover_chunk_test", proxyURL1, message.ProofTypeChunk, version.Version)
chunkProver.setUseCacheToken(true)
task, _, _ := chunkProver.getProverTask(t, message.ProofTypeChunk)
assert.NotNil(t, task)
taskFrom, _, _ := strings.Cut(task.TaskID, ":")
t.Log("get task from coordinator:", taskFrom)
chunkProver.resetConnection(proxyURL2)
task, _, _ = chunkProver.getProverTask(t, message.ProofTypeChunk)
assert.NotNil(t, task)
taskFrom2, _, _ := strings.Cut(task.TaskID, ":")
assert.Equal(t, taskFrom, taskFrom2)
}
func TestProxyClient(t *testing.T) {
testCompatibileMode = false
// Set up the test environment.
setEnv(t)
t.Run("TestProxyClient", testProxyClient)
t.Run("TestProxyHandshake", testProxyHandshake)
t.Run("TestProxyGetTask", testProxyGetTask)
t.Run("TestProxyValidProof", testProxyProof)
t.Run("testProxyPersistent", testProxyPersistent)
}
func TestProxyClientCompatibleMode(t *testing.T) {
testCompatibileMode = true
// Set up the test environment.
setEnv(t)
t.Run("TestProxyClient", testProxyClient)
t.Run("TestProxyHandshake", testProxyHandshake)
t.Run("TestProxyGetTask", testProxyGetTask)
t.Run("TestProxyValidProof", testProxyProof)
t.Run("testProxyPersistent", testProxyPersistent)
}
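The tests above lean on the proxy prefixing the upstream coordinator name to the task ID; a minimal sketch of that convention (the helper is hypothetical, mirroring the strings.Cut calls in the tests):

// splitProxyTaskID splits "<coordinator_name>:<upstream_task_id>" so a later
// submit_proof can be routed back to the coordinator that assigned the task.
func splitProxyTaskID(taskID string) (coordinator, upstreamTaskID string) {
	coordinator, upstreamTaskID, _ = strings.Cut(taskID, ":")
	return
}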


@@ -1,45 +0,0 @@
[patch."https://github.com/openvm-org/openvm.git"]
openvm-build = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
openvm-circuit = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
openvm-continuations = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
openvm-instructions ={ git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
openvm-native-circuit = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
openvm-native-compiler = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
openvm-native-recursion = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
openvm-native-transpiler = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
openvm-rv32im-transpiler = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
openvm-sdk = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false, features = ["parallel", "bench-metrics", "evm-prove"] }
openvm-transpiler = { git = "ssh://git@github.com/scroll-tech/openvm-gpu.git", branch = "patch-v1.3.0-pipe", default-features = false }
[patch."https://github.com/openvm-org/stark-backend.git"]
openvm-stark-backend = { git = "ssh://git@github.com/scroll-tech/openvm-stark-gpu.git", branch = "main", features = ["gpu"] }
openvm-stark-sdk = { git = "ssh://git@github.com/scroll-tech/openvm-stark-gpu.git", branch = "main", features = ["gpu"] }
[patch."https://github.com/Plonky3/Plonky3.git"]
p3-air = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-field = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-commit = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-matrix = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-baby-bear = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", features = [
"nightly-features",
], tag = "v0.2.1" }
p3-koala-bear = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-util = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-challenger = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-dft = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-fri = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-goldilocks = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-keccak = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-keccak-air = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-blake3 = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-mds = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-merkle-tree = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-monty-31 = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-poseidon = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-poseidon2 = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-poseidon2-air = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-symmetric = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-uni-stark = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }
p3-maybe-rayon = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" } # the "parallel" feature is NOT on by default to allow single-threaded benchmarking
p3-bn254-fr = { git = "ssh://git@github.com/scroll-tech/plonky3-gpu.git", tag = "v0.2.1" }

File diff suppressed because it is too large


@@ -1,23 +0,0 @@
.PHONY: build update clean
ZKVM_COMMIT ?= freebuild
PLONKY3_GPU_VERSION=$(shell ./print_plonky3gpu_version.sh | sed -n '2p')
$(info PLONKY3_GPU_VERSION is ${PLONKY3_GPU_VERSION})
GIT_REV ?= $(shell git rev-parse --short HEAD)
GO_TAG ?= $(shell grep "var tag = " ../../common/version/version.go | cut -d "\"" -f2)
ZK_VERSION=${ZKVM_COMMIT}-${PLONKY3_GPU_VERSION}
$(info ZK_GPU_VERSION is ${ZK_VERSION})
clean:
cargo clean -Z unstable-options --release -p prover --lockfile-path ./Cargo.lock
# build gpu prover, never touch lock file
build:
GO_TAG=${GO_TAG} GIT_REV=${GIT_REV} ZK_VERSION=${ZK_VERSION} cargo build -Z unstable-options --release -p prover --lockfile-path ./Cargo.lock
version:
echo ${GO_TAG}-${GIT_REV}-${ZK_VERSION}
# update Cargo.lock when the override config has been updated
#update:
# GO_TAG=${GO_TAG} GIT_REV=${GIT_REV} ZK_VERSION=${ZK_VERSION} cargo build -Z unstable-options --release -p prover --lockfile-path ./Cargo.lock


@@ -1,10 +0,0 @@
#!/bin/bash
higher_plonky3_item=`grep "plonky3-gpu" ./Cargo.lock | sort | uniq | awk -F "[#=]" '{print $3" "$4}' | sort -k 1 | tail -n 1`
higher_version=`echo $higher_plonky3_item | awk '{print $1}'`
higher_commit=`echo $higher_plonky3_item | cut -d ' ' -f2 | cut -c-7`
echo "$higher_version"
echo "$higher_commit"


@@ -13,6 +13,7 @@ libzkp = { path = "../libzkp" }
alloy = { workspace = true, features = ["provider-http", "transport-http", "reqwest", "reqwest-rustls-tls", "json-rpc"] }
sbv-primitives = { workspace = true, features = ["scroll"] }
sbv-utils = { workspace = true, features = ["scroll"] }
sbv-core = { workspace = true, features = ["scroll"] }
eyre.workspace = true


@@ -11,7 +11,7 @@ pub fn init(config: &str) -> eyre::Result<()> {
Ok(())
}
pub fn get_client() -> rpc_client::RpcClient<'static> {
pub fn get_client() -> impl libzkp::tasks::ChunkInterpreter {
GLOBAL_L2GETH_CLI
.get()
.expect("must has been inited")


@@ -1,11 +1,11 @@
use alloy::{
providers::{Provider, ProviderBuilder, RootProvider},
providers::{Provider, ProviderBuilder},
rpc::client::ClientBuilder,
transports::layers::RetryBackoffLayer,
};
use eyre::Result;
use libzkp::tasks::ChunkInterpreter;
use sbv_primitives::types::Network;
use sbv_primitives::types::{consensus::TxL1Message, Network};
use serde::{Deserialize, Serialize};
fn default_max_retry() -> u32 {
@@ -49,13 +49,13 @@ pub struct RpcConfig {
/// so it can be run in blocking mode (i.e. inside a dynamic library without a global entry)
pub struct RpcClientCore {
/// rpc provider
provider: RootProvider<Network>,
client: alloy::rpc::client::RpcClient,
rt: tokio::runtime::Runtime,
}
#[derive(Clone, Copy)]
pub struct RpcClient<'a> {
provider: &'a RootProvider<Network>,
pub struct RpcClient<'a, T: Provider<Network>> {
provider: T,
handle: &'a tokio::runtime::Handle,
}
@@ -75,80 +75,78 @@ impl RpcClientCore {
let retry_layer = RetryBackoffLayer::new(config.max_retry, config.backoff, config.cups);
let client = ClientBuilder::default().layer(retry_layer).http(rpc);
Ok(Self {
provider: ProviderBuilder::<_, _, Network>::default().connect_client(client),
rt,
})
Ok(Self { client, rt })
}
pub fn get_client(&self) -> RpcClient {
pub fn get_client(&self) -> RpcClient<'_, impl Provider<Network>> {
RpcClient {
provider: &self.provider,
provider: ProviderBuilder::<_, _, Network>::default()
.connect_client(self.client.clone()),
handle: self.rt.handle(),
}
}
}
impl ChunkInterpreter for RpcClient<'_> {
impl<T: Provider<Network>> ChunkInterpreter for RpcClient<'_, T> {
fn try_fetch_block_witness(
&self,
block_hash: sbv_primitives::B256,
prev_witness: Option<&sbv_primitives::types::BlockWitness>,
) -> Result<sbv_primitives::types::BlockWitness> {
prev_witness: Option<&sbv_core::BlockWitness>,
) -> Result<sbv_core::BlockWitness> {
async fn fetch_witness_async(
provider: &RootProvider<Network>,
provider: impl Provider<Network>,
block_hash: sbv_primitives::B256,
prev_witness: Option<&sbv_primitives::types::BlockWitness>,
) -> Result<sbv_primitives::types::BlockWitness> {
use sbv_utils::{rpc::ProviderExt, witness::WitnessBuilder};
prev_witness: Option<&sbv_core::BlockWitness>,
) -> Result<sbv_core::BlockWitness> {
use sbv_utils::rpc::ProviderExt;
let chain_id = provider.get_chain_id().await?;
let (chain_id, block_num, prev_state_root) = if let Some(w) = prev_witness {
(w.chain_id, w.header.number + 1, w.header.state_root)
} else {
let chain_id = provider.get_chain_id().await?;
let block = provider
.get_block_by_hash(block_hash)
.full()
.await?
.ok_or_else(|| eyre::eyre!("Block {block_hash} not found"))?;
let block = provider
.get_block_by_hash(block_hash)
.full()
.await?
.ok_or_else(|| eyre::eyre!("Block {block_hash} not found"))?;
let parent_block = provider
.get_block_by_hash(block.header.parent_hash)
.await?
.ok_or_else(|| {
eyre::eyre!(
"parent block for block {} should exist",
block.header.number
)
})?;
let number = block.header.number;
let parent_hash = block.header.parent_hash;
if number == 0 {
eyre::bail!("no number in header or use block 0");
}
let mut witness_builder = WitnessBuilder::new()
.block(block)
.chain_id(chain_id)
.execution_witness(provider.debug_execution_witness(number.into()).await?);
let prev_state_root = match prev_witness {
Some(witness) => {
if witness.header.number != number - 1 {
eyre::bail!(
"the ref witness is not the previous block, expected {} get {}",
number - 1,
witness.header.number,
);
}
witness.header.state_root
}
None => {
let parent_block = provider
.get_block_by_hash(parent_hash)
.await?
.expect("parent block should exist");
parent_block.header.state_root
}
(
chain_id,
block.header.number,
parent_block.header.state_root,
)
};
witness_builder = witness_builder.prev_state_root(prev_state_root);
Ok(witness_builder.build()?)
let req = provider
.dump_block_witness(block_num)
.with_chain_id(chain_id)
.with_prev_state_root(prev_state_root);
let witness = req
.send()
.await
.transpose()
.ok_or_else(|| eyre::eyre!("Block witness {block_num} not available"))??;
Ok(witness)
}
tracing::debug!("fetch witness for {block_hash}");
self.handle
.block_on(fetch_witness_async(self.provider, block_hash, prev_witness))
self.handle.block_on(fetch_witness_async(
&self.provider,
block_hash,
prev_witness,
))
}
fn try_fetch_storage_node(
@@ -156,7 +154,7 @@ impl ChunkInterpreter for RpcClient<'_> {
node_hash: sbv_primitives::B256,
) -> Result<sbv_primitives::Bytes> {
async fn fetch_storage_node_async(
provider: &RootProvider<Network>,
provider: impl Provider<Network>,
node_hash: sbv_primitives::B256,
) -> Result<sbv_primitives::Bytes> {
let ret = provider
@@ -168,7 +166,41 @@ impl ChunkInterpreter for RpcClient<'_> {
tracing::debug!("fetch storage node for {node_hash}");
self.handle
.block_on(fetch_storage_node_async(self.provider, node_hash))
.block_on(fetch_storage_node_async(&self.provider, node_hash))
}
fn try_fetch_l1_msgs(&self, block_number: u64) -> Result<Vec<TxL1Message>> {
async fn fetch_l1_msgs(
provider: impl Provider<Network>,
block_number: u64,
) -> Result<Vec<TxL1Message>> {
let block_number_hex = format!("0x{:x}", block_number);
#[derive(Deserialize, Debug)]
#[serde(untagged)]
enum NullOrVec {
Null, // matches JSON `null`
Vec(Vec<TxL1Message>), // matches JSON array
}
Ok(
match provider
.client()
.request::<_, NullOrVec>(
"scroll_getL1MessagesInBlock",
(block_number_hex, "synced"),
)
.await?
{
NullOrVec::Null => Vec::new(),
NullOrVec::Vec(r) => r,
},
)
}
tracing::debug!("fetch L1 msgs for {block_number}");
self.handle
.block_on(fetch_l1_msgs(&self.provider, block_number))
}
}
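
The untagged enum above is what lets the RPC call tolerate l2geth returning JSON `null` instead of an empty array. A self-contained sketch of the same serde pattern (names here are illustrative, not part of this change):

#[derive(serde::Deserialize, Debug)]
#[serde(untagged)]
enum NullOrItems<T> {
    Null,          // matches JSON `null`
    Items(Vec<T>), // matches a JSON array
}

fn normalize<T>(v: NullOrItems<T>) -> Vec<T> {
    match v {
        NullOrItems::Null => Vec::new(),
        NullOrItems::Items(items) => items,
    }
}

#[test]
fn null_or_items_sketch() {
    // serde tries the variants in declaration order
    let v: NullOrItems<u64> = serde_json::from_str("null").unwrap();
    assert!(normalize(v).is_empty());
    let v: NullOrItems<u64> = serde_json::from_str("[1,2]").unwrap();
    assert_eq!(normalize(v), vec![1, 2]);
}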
@@ -194,10 +226,10 @@ mod tests {
let client_core = RpcClientCore::create(&config).expect("Failed to create RPC client");
let client = client_core.get_client();
// latest - 1 block in 2025.6.15
// latest - 1 block in 2025.9.11
let block_hash = B256::from(
hex::const_decode_to_array(
b"0x9535a6970bc4db9031749331a214e35ed8c8a3f585f6f456d590a0bc780a1368",
b"0x093fb6bf2e556a659b35428ac447cd9f0635382fc40ffad417b5910824f9e932",
)
.unwrap(),
);
@@ -207,10 +239,10 @@ mod tests {
.try_fetch_block_witness(block_hash, None)
.expect("should success");
// latest block in 2025.6.15
// block selected in 2025.9.11
let block_hash = B256::from(
hex::const_decode_to_array(
b"0xd47088cdb6afc68aa082e633bb7da9340d29c73841668afacfb9c1e66e557af0",
b"0x77cc84dd7a4dedf6fe5fb9b443aeb5a4fb0623ad088a365d3232b7b23fc848e5",
)
.unwrap(),
);
@@ -223,23 +255,13 @@ mod tests {
#[test]
#[ignore = "Requires L2GETH_ENDPOINT environment variable"]
fn test_try_fetch_storage_node() {
fn test_try_fetch_l1_messages() {
let config = create_config_from_env();
let client_core = RpcClientCore::create(&config).expect("Failed to create RPC client");
let client = client_core.get_client();
// the root node (state root) of the block in unittest above
let node_hash = B256::from(
hex::const_decode_to_array(
b"0xb9e67403a2eb35afbb0475fe942918cf9a330a1d7532704c24554506be62b27c",
)
.unwrap(),
);
let msgs = client.try_fetch_l1_msgs(32).expect("should succeed");
// This is expected to fail since we're using a dummy hash, but it tests the code path
let node = client
.try_fetch_storage_node(node_hash)
.expect("should success");
println!("{}", serde_json::to_string_pretty(&node).unwrap());
println!("{}", serde_json::to_string_pretty(&msgs).unwrap());
}
}


@@ -5,11 +5,12 @@ edition.workspace = true
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
scroll-zkvm-types.workspace = true
scroll-zkvm-types = { workspace = true, features = ["scroll"] }
scroll-zkvm-verifier.workspace = true
alloy-primitives.workspace = true # suppress the effect of "native-keccak"
sbv-primitives = {workspace = true, features = ["scroll-compress-ratio", "scroll"]}
sbv-primitives = {workspace = true, features = ["scroll-compress-info", "scroll"]}
sbv-core = { workspace = true, features = ["scroll"] }
base64.workspace = true
serde.workspace = true
serde_derive.workspace = true
@@ -18,6 +19,7 @@ tracing.workspace = true
eyre.workspace = true
git-version = "0.3.5"
bincode = { version = "2", features = ["serde"] }
serde_stacker = "0.1"
regex = "1.11"
c-kzg = { version = "2.0", features = ["serde"] }


@@ -1,28 +1,112 @@
pub mod proofs;
pub mod tasks;
pub use tasks::ProvingTaskExt;
pub mod verifier;
use verifier::HardForkName;
pub use verifier::{TaskType, VerifierConfig};
mod utils;
use sbv_primitives::B256;
use scroll_zkvm_types::utils::vec_as_base64;
use scroll_zkvm_types::{utils::vec_as_base64, version::Version};
use serde::{Deserialize, Serialize};
use serde_json::value::RawValue;
use std::path::Path;
use std::{collections::HashMap, path::Path, sync::OnceLock};
use tasks::chunk_interpreter::{ChunkInterpreter, TryFromWithInterpreter};
pub(crate) fn witness_use_legacy_mode(fork_name: &str) -> eyre::Result<bool> {
ADDITIONAL_FEATURES
.get()
.and_then(|features| features.get(fork_name))
.map(|cfg| cfg.legacy_witness_encoding)
.ok_or_else(|| {
eyre::eyre!(
"can not find features setting for unrecognized fork {}",
fork_name
)
})
}
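
Call sites branch on this flag to pick the witness encoding per fork (see the `into_proving_task_with_precheck` bodies later in this diff). A hedged sketch of the call-site shape, with a hypothetical fork name:

// Sketch only: unknown forks surface as errors because ADDITIONAL_FEATURES
// has no entry for them.
match witness_use_legacy_mode("feynman") {
    Ok(true) => { /* rkyv-encode a Legacy*Witness for older provers */ }
    Ok(false) => { /* bincode-encode the current witness type */ }
    Err(e) => tracing::warn!("fork not configured: {e}"),
}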
#[derive(Debug, Default, Clone)]
struct FeatureOptions {
legacy_witness_encoding: bool,
for_openvm_13_prover: bool,
}
static ADDITIONAL_FEATURES: OnceLock<HashMap<HardForkName, FeatureOptions>> = OnceLock::new();
impl FeatureOptions {
pub fn new(feats: &str) -> Self {
let mut ret: Self = Default::default();
for feat_s in feats.split(':') {
match feat_s.trim().to_lowercase().as_str() {
"legacy_witness" => {
tracing::info!("set witness encoding for legacy mode");
ret.legacy_witness_encoding = true;
}
"openvm_13" => {
tracing::info!("set prover should use openvm 13");
ret.for_openvm_13_prover = true;
}
s => tracing::warn!("unrecognized dynamic feature: {s}"),
}
}
ret
}
}
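
A test-style sketch of the parser's behaviour (crate-internal, illustrative values):

#[test]
fn feature_options_parsing_sketch() {
    // both known features enabled via the colon-separated string
    let opts = FeatureOptions::new("legacy_witness:openvm_13");
    assert!(opts.legacy_witness_encoding);
    assert!(opts.for_openvm_13_prover);
    // unrecognized names are logged with a warning and otherwise ignored
    let opts = FeatureOptions::new("something_else");
    assert!(!opts.legacy_witness_encoding && !opts.for_openvm_13_prover);
}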
/// Turn the coordinator's chunk task into a JSON string for the formal chunk proving
/// task (with full witnesses)
pub fn checkout_chunk_task(
task_json: &str,
decryption_key: Option<&[u8]>,
interpreter: impl ChunkInterpreter,
) -> eyre::Result<String> {
let chunk_task = serde_json::from_str::<tasks::ChunkTask>(task_json)?;
let ret = serde_json::to_string(&tasks::ChunkProvingTask::try_from_with_interpret(
chunk_task,
interpreter,
)?)?;
Ok(ret)
Ok(serde_json::to_string(
&tasks::ChunkProvingTask::try_from_with_interpret(chunk_task, decryption_key, interpreter)?,
)?)
}
/// Convert the universal task JSON into a form compatible with the old prover
pub fn univ_task_compatibility_fix(task_json: &str) -> eyre::Result<String> {
use scroll_zkvm_types::proof::VmInternalStarkProof;
let task: tasks::ProvingTask = serde_json::from_str(task_json)?;
let aggregated_proofs: Vec<VmInternalStarkProof> = task
.aggregated_proofs
.into_iter()
.map(|proof| VmInternalStarkProof {
proofs: proof.proofs,
public_values: proof.public_values,
})
.collect();
#[derive(Serialize)]
struct CompatibleProvingTask {
/// serialized witness which should be written into stdin first
pub serialized_witness: Vec<Vec<u8>>,
/// aggregated proofs carried by babybear fields, should be written into stdin
/// following `serialized_witness`
pub aggregated_proofs: Vec<VmInternalStarkProof>,
/// Specified fork name
pub fork_name: String,
/// The vk of the app which is expected to prove this task
pub vk: Vec<u8>,
/// An identifier assigned by the coordinator; it should be kept identical for the
/// same task (for example, using chunk, batch and bundle hashes)
pub identifier: String,
}
let compatible_u_task = CompatibleProvingTask {
serialized_witness: task.serialized_witness,
aggregated_proofs,
fork_name: task.fork_name,
vk: task.vk,
identifier: task.identifier,
};
Ok(serde_json::to_string(&compatible_u_task)?)
}
/// Generate required stuff for proving tasks
@@ -32,7 +116,6 @@ pub fn gen_universal_task(
task_json: &str,
fork_name_str: &str,
expected_vk: &[u8],
interpreter: Option<impl ChunkInterpreter>,
) -> eyre::Result<(B256, String, String)> {
use proofs::*;
use tasks::*;
@@ -51,36 +134,56 @@ pub fn gen_universal_task(
let mut task = serde_json::from_str::<ChunkProvingTask>(task_json)?;
// normalize the fork name field in the task
task.fork_name = task.fork_name.to_lowercase();
let version = Version::from(task.version);
// always respect the fork_name_str (which has been normalized) being passed;
// if the fork_name wrapped in the task does not match, consider it a malformed task
if fork_name_str != task.fork_name.as_str() {
eyre::bail!("fork name in chunk task does not match the calling arg, expected {fork_name_str}, got {}", task.fork_name);
}
let (pi_hash, metadata, u_task) = utils::panic_catch(move || {
gen_universal_chunk_task(task, fork_name_str.into(), interpreter)
})
.map_err(|e| eyre::eyre!("caught panic in chunk task{e}"))??;
if fork_name_str != version.fork.as_str() {
eyre::bail!(
"given task version, expected fork={fork_name_str}, got={version_fork}",
version_fork = version.fork.as_str()
);
}
let (pi_hash, metadata, u_task) =
utils::panic_catch(move || gen_universal_chunk_task(task))
.map_err(|e| eyre::eyre!("caught panic in chunk task{e}"))??;
(pi_hash, AnyMetaData::Chunk(metadata), u_task)
}
x if x == TaskType::Batch as i32 => {
let mut task = serde_json::from_str::<BatchProvingTask>(task_json)?;
task.fork_name = task.fork_name.to_lowercase();
let version = Version::from(task.version);
if fork_name_str != task.fork_name.as_str() {
eyre::bail!("fork name in batch task not match the calling arg, expected {fork_name_str}, get {}", task.fork_name);
}
if fork_name_str != version.fork.as_str() {
eyre::bail!(
"given task version, expected fork={fork_name_str}, got={version_fork}",
version_fork = version.fork.as_str()
);
}
let (pi_hash, metadata, u_task) =
utils::panic_catch(move || gen_universal_batch_task(task, fork_name_str.into()))
utils::panic_catch(move || gen_universal_batch_task(task))
.map_err(|e| eyre::eyre!("caught panic in chunk task{e}"))??;
(pi_hash, AnyMetaData::Batch(metadata), u_task)
}
x if x == TaskType::Bundle as i32 => {
let mut task = serde_json::from_str::<BundleProvingTask>(task_json)?;
task.fork_name = task.fork_name.to_lowercase();
let version = Version::from(task.version);
if fork_name_str != task.fork_name.as_str() {
eyre::bail!("fork name in bundle task not match the calling arg, expected {fork_name_str}, get {}", task.fork_name);
}
if fork_name_str != version.fork.as_str() {
eyre::bail!(
"given task version, expected fork={fork_name_str}, got={version_fork}",
version_fork = version.fork.as_str()
);
}
let (pi_hash, metadata, u_task) =
utils::panic_catch(move || gen_universal_bundle_task(task, fork_name_str.into()))
utils::panic_catch(move || gen_universal_bundle_task(task))
.map_err(|e| eyre::eyre!("caught panic in chunk task{e}"))??;
(pi_hash, AnyMetaData::Bundle(metadata), u_task)
}
@@ -88,11 +191,26 @@ pub fn gen_universal_task(
};
u_task.vk = Vec::from(expected_vk);
let fork_name = u_task.fork_name.clone();
let mut u_task_ext = ProvingTaskExt::new(u_task);
// set additional settings from global features
if let Some(cfg) = ADDITIONAL_FEATURES
.get()
.and_then(|features| features.get(&fork_name))
{
u_task_ext.use_openvm_13 = cfg.for_openvm_13_prover;
} else {
tracing::warn!(
"can not found features setting for unrecognized fork {}",
fork_name
);
}
Ok((
pi_hash,
serde_json::to_string(&metadata)?,
serde_json::to_string(&u_task)?,
serde_json::to_string(&u_task_ext)?,
))
}
@@ -123,7 +241,26 @@ pub fn gen_wrapped_proof(proof_json: &str, metadata: &str, vk: &[u8]) -> eyre::R
/// init verifier
pub fn verifier_init(config: &str) -> eyre::Result<()> {
let cfg: VerifierConfig = serde_json::from_str(config)?;
ADDITIONAL_FEATURES
.set(HashMap::from_iter(cfg.circuits.iter().map(|config| {
tracing::info!(
"start setting features [{:?}] for fork {}",
config.features,
config.fork_name
);
(
config.fork_name.to_lowercase(),
config
.features
.as_ref()
.map(|features| FeatureOptions::new(features.as_str()))
.unwrap_or_default(),
)
})))
.map_err(|c| eyre::eyre!("Fail to init additional features: {c:?}"))?;
verifier::init(cfg);
Ok(())
}


@@ -8,9 +8,10 @@ use scroll_zkvm_types::{
bundle::BundleInfo,
chunk::ChunkInfo,
proof::{EvmProof, OpenVmEvmProof, ProofEnum, StarkProof},
public_inputs::{ForkName, MultiVersionPublicInputs},
types_agg::{AggregationInput, ProgramCommitment},
utils::vec_as_base64,
public_inputs::MultiVersionPublicInputs,
types_agg::AggregationInput,
utils::{serialize_vk, vec_as_base64},
version,
};
use serde::{de::DeserializeOwned, Deserialize, Serialize};
@@ -139,8 +140,6 @@ impl ProofMetadata for ChunkProofMetadata {
pub struct BatchProofMetadata {
/// The batch information describing the list of chunks.
pub batch_info: BatchInfo,
/// The [`scroll_zkvm_types::batch::BatchHeader`]'s digest.
pub batch_hash: B256,
}
impl ProofMetadata for BatchProofMetadata {
@@ -172,7 +171,7 @@ impl<Metadata> From<&WrappedProof<Metadata>> for AggregationInput {
fn from(value: &WrappedProof<Metadata>) -> Self {
Self {
public_values: value.proof.public_values(),
commitment: ProgramCommitment::deserialize(&value.vk),
commitment: serialize_vk::deserialize(&value.vk),
}
}
}
@@ -181,13 +180,13 @@ impl<Metadata: ProofMetadata> WrappedProof<Metadata> {
/// Sanity checks on the wrapped proof:
///
/// - pi_hash computed in host does in fact match pi_hash computed in guest
pub fn pi_hash_check(&self, fork_name: ForkName) -> bool {
pub fn pi_hash_check(&self, ver: version::Version) -> bool {
let proof_pi = self.proof.public_values();
let expected_pi = self
.metadata
.pi_hash_info()
.pi_hash_by_fork(fork_name)
.pi_hash_by_version(ver)
.0
.as_ref()
.iter()
@@ -216,7 +215,7 @@ impl<Metadata: ProofMetadata> PersistableProof for WrappedProof<Metadata> {
mod tests {
use base64::{prelude::BASE64_STANDARD, Engine};
use sbv_primitives::B256;
use scroll_zkvm_types::{bundle::BundleInfo, proof::EvmProof, public_inputs::ForkName};
use scroll_zkvm_types::{bundle::BundleInfo, proof::EvmProof};
use super::*;
@@ -252,8 +251,9 @@ mod tests {
batch_hash: B256::repeat_byte(4),
withdraw_root: B256::repeat_byte(5),
msg_queue_hash: B256::repeat_byte(6),
encryption_key: None,
};
let bundle_pi_hash = bundle_info.pi_hash(ForkName::EuclidV1);
let bundle_pi_hash = bundle_info.pi_hash_euclidv1();
BundleProofMetadata {
bundle_info,
bundle_pi_hash,


@@ -10,46 +10,62 @@ pub use chunk_interpreter::ChunkInterpreter;
pub use scroll_zkvm_types::task::ProvingTask;
use crate::{
proofs::{self, BatchProofMetadata, BundleProofMetadata, ChunkProofMetadata},
proofs::{BatchProofMetadata, BundleProofMetadata, ChunkProofMetadata},
utils::panic_catch,
};
use sbv_primitives::B256;
use scroll_zkvm_types::public_inputs::{ForkName, MultiVersionPublicInputs};
use scroll_zkvm_types::public_inputs::{MultiVersionPublicInputs, Version};
fn check_aggregation_proofs<Metadata>(
proofs: &[proofs::WrappedProof<Metadata>],
fork_name: ForkName,
) -> eyre::Result<()>
where
Metadata: proofs::ProofMetadata,
{
fn encode_task_to_witness<T: serde::Serialize>(task: &T) -> eyre::Result<Vec<u8>> {
let config = bincode::config::standard();
Ok(bincode::serde::encode_to_vec(task, config)?)
}
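
For reference, the prover-side decoding counterpart would look like the sketch below (assuming the same bincode standard config; the function name is hypothetical):

fn decode_witness_from_task<T: serde::de::DeserializeOwned>(bytes: &[u8]) -> eyre::Result<T> {
    let config = bincode::config::standard();
    // decode_from_slice returns the value plus the number of bytes consumed
    let (value, _consumed) = bincode::serde::decode_from_slice(bytes, config)?;
    Ok(value)
}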
fn check_aggregation_proofs<Metadata: MultiVersionPublicInputs>(
metadata: &[Metadata],
version: Version,
) -> eyre::Result<()> {
panic_catch(|| {
for w in proofs.windows(2) {
w[1].metadata
.pi_hash_info()
.validate(w[0].metadata.pi_hash_info(), fork_name);
for w in metadata.windows(2) {
w[1].validate(&w[0], version);
}
})
.map_err(|e| eyre::eyre!("Chunk data validation failed: {}", e))?;
.map_err(|e| eyre::eyre!("Metadata validation failed: {}", e))?;
Ok(())
}
#[derive(serde::Deserialize, serde::Serialize)]
pub struct ProvingTaskExt {
#[serde(flatten)]
task: ProvingTask,
#[serde(default)]
pub use_openvm_13: bool,
}
impl From<ProvingTaskExt> for ProvingTask {
fn from(wrap_t: ProvingTaskExt) -> Self {
wrap_t.task
}
}
impl ProvingTaskExt {
pub fn new(task: ProvingTask) -> Self {
Self {
task,
use_openvm_13: false,
}
}
}
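
Because of `#[serde(flatten)]`, the extension serializes as the plain task plus one extra boolean, so older consumers that ignore unknown fields keep working. A sketch of the wire shape, assuming `ProvingTask` exposes exactly the five fields used elsewhere in this diff (values illustrative):

#[test]
fn proving_task_ext_wire_shape_sketch() {
    let task = ProvingTask {
        identifier: "1-4".into(),
        fork_name: "feynman".into(),
        aggregated_proofs: Vec::new(),
        serialized_witness: Vec::new(),
        vk: Vec::new(),
    };
    let json = serde_json::to_string(&ProvingTaskExt::new(task)).unwrap();
    // the flattened task fields sit beside the new flag
    assert!(json.contains("\"use_openvm_13\":false"));
}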
/// Generate required stuff for chunk proving
pub fn gen_universal_chunk_task(
mut task: ChunkProvingTask,
fork_name: ForkName,
interpreter: Option<impl ChunkInterpreter>,
task: ChunkProvingTask,
) -> eyre::Result<(B256, ChunkProofMetadata, ProvingTask)> {
if let Some(interpreter) = interpreter {
task.prepare_task_via_interpret(interpreter)?;
}
let chunk_total_gas = task.stats().total_gas_used;
let chunk_info = task.precheck_and_build_metadata()?;
let proving_task = task.try_into()?;
let expected_pi_hash = chunk_info.pi_hash_by_fork(fork_name);
let (proving_task, chunk_info, chunk_pi_hash) = task.into_proving_task_with_precheck()?;
Ok((
expected_pi_hash,
chunk_pi_hash,
ChunkProofMetadata {
chunk_info,
chunk_total_gas,
@@ -61,18 +77,11 @@ pub fn gen_universal_chunk_task(
/// Generate required stuff for batch proving
pub fn gen_universal_batch_task(
task: BatchProvingTask,
fork_name: ForkName,
) -> eyre::Result<(B256, BatchProofMetadata, ProvingTask)> {
let batch_info = task.precheck_and_build_metadata()?;
let proving_task = task.try_into()?;
let expected_pi_hash = batch_info.pi_hash_by_fork(fork_name);
let (proving_task, batch_info, batch_pi_hash) = task.into_proving_task_with_precheck()?;
Ok((
expected_pi_hash,
BatchProofMetadata {
batch_info,
batch_hash: expected_pi_hash,
},
batch_pi_hash,
BatchProofMetadata { batch_info },
proving_task,
))
}
@@ -80,17 +89,13 @@ pub fn gen_universal_batch_task(
/// Generate required stuff for bundle proving
pub fn gen_universal_bundle_task(
task: BundleProvingTask,
fork_name: ForkName,
) -> eyre::Result<(B256, BundleProofMetadata, ProvingTask)> {
let bundle_info = task.precheck_and_build_metadata()?;
let proving_task = task.try_into()?;
let expected_pi_hash = bundle_info.pi_hash_by_fork(fork_name);
let (proving_task, bundle_info, bundle_pi_hash) = task.into_proving_task_with_precheck()?;
Ok((
expected_pi_hash,
bundle_pi_hash,
BundleProofMetadata {
bundle_info,
bundle_pi_hash: expected_pi_hash,
bundle_pi_hash,
},
proving_task,
))


@@ -1,58 +1,92 @@
use crate::proofs::ChunkProof;
use c_kzg::Bytes48;
use eyre::Result;
use sbv_primitives::{B256, U256};
use scroll_zkvm_types::{
batch::{
BatchHeader, BatchHeaderV6, BatchHeaderV7, BatchHeaderV8, BatchInfo, BatchWitness,
Envelope, EnvelopeV6, EnvelopeV7, EnvelopeV8, PointEvalWitness, ReferenceHeader,
ToArchievedWitness, N_BLOB_BYTES,
build_point_eval_witness, BatchHeader, BatchHeaderV6, BatchHeaderV7, BatchHeaderValidium,
BatchInfo, BatchWitness, Envelope, EnvelopeV6, EnvelopeV7, LegacyBatchWitness,
ReferenceHeader, N_BLOB_BYTES,
},
public_inputs::ForkName,
chunk::ChunkInfo,
public_inputs::{ForkName, MultiVersionPublicInputs, Version},
task::ProvingTask,
utils::{to_rkyv_bytes, RancorError},
version::{Codec, Domain, STFVersion},
};
use crate::proofs::ChunkProof;
mod utils;
use utils::{base64, point_eval};
/// Define a variant batch header type; since BatchHeaderV6 cannot
/// be decoded as V7, we always get a correct deserialization.
/// Notice: the V6 header MUST be put above V7 since an untagged enum
/// tries to decode each definition in order
#[derive(Clone, serde::Deserialize, serde::Serialize)]
pub struct BatchHeaderValidiumWithHash {
#[serde(flatten)]
header: BatchHeaderValidium,
batch_hash: B256,
}
/// Parse header types passed from golang side and adapt to the
/// definition in zkvm-prover's types
/// We distinguish the header type on the golang side according to the STF
/// version, i.e. v6, v7-v10 (current), and validium,
/// and adapt it to the corresponding batch header type used in zkvm-prover's witness
/// definition, i.e. v6, v7 (current), and validium
#[derive(Clone, serde::Deserialize, serde::Serialize)]
#[serde(untagged)]
#[allow(non_camel_case_types)]
pub enum BatchHeaderV {
/// Header for validium mode.
Validium(BatchHeaderValidiumWithHash),
/// Header for scroll's STF version v6.
V6(BatchHeaderV6),
V7_8(BatchHeaderV7),
/// Header for scroll's STF versions v7 - v10.
///
/// Since the codec essentially is unchanged for the above STF versions, we do not define new
/// variants, instead re-using the [`BatchHeaderV7`] variant.
V7_to_V10(BatchHeaderV7),
}
impl core::fmt::Display for BatchHeaderV {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
match self {
BatchHeaderV::V6(_) => write!(f, "V6"),
BatchHeaderV::V7_to_V10(_) => write!(f, "V7 - V10"),
BatchHeaderV::Validium(_) => write!(f, "Validium"),
}
}
}
impl BatchHeaderV {
pub fn batch_hash(&self) -> B256 {
match self {
BatchHeaderV::V6(h) => h.batch_hash(),
BatchHeaderV::V7_8(h) => h.batch_hash(),
BatchHeaderV::V7_to_V10(h) => h.batch_hash(),
BatchHeaderV::Validium(h) => h.header.batch_hash(),
}
}
pub fn must_v6_header(&self) -> &BatchHeaderV6 {
pub fn to_zkvm_batch_header_v6(&self) -> &BatchHeaderV6 {
match self {
BatchHeaderV::V6(h) => h,
_ => panic!("try to pick other header type"),
_ => unreachable!("A header of {} is considered to be v6", self),
}
}
pub fn must_v7_header(&self) -> &BatchHeaderV7 {
pub fn to_zkvm_batch_header_v7_to_v10(&self) -> &BatchHeaderV7 {
match self {
BatchHeaderV::V7_8(h) => h,
_ => panic!("try to pick other header type"),
BatchHeaderV::V7_to_V10(h) => h,
_ => unreachable!(
"A header of {} is considered to be in [v7, v8, v9, v10]",
self
),
}
}
pub fn must_v8_header(&self) -> &BatchHeaderV8 {
pub fn to_zkvm_batch_header_validium(&self) -> &BatchHeaderValidium {
match self {
BatchHeaderV::V7_8(h) => h,
_ => panic!("try to pick other header type"),
BatchHeaderV::Validium(h) => &h.header,
_ => unreachable!("A header of {} is considered to be validium", self),
}
}
}
@@ -61,6 +95,8 @@ impl BatchHeaderV {
/// is compatible with both pre-euclidv2 and euclidv2
#[derive(Clone, serde::Deserialize, serde::Serialize)]
pub struct BatchProvingTask {
/// The version of the chunks in the batch, as per [`Version`].
pub version: u8,
/// Chunk proofs for the contiguous list of chunks within the batch.
pub chunk_proofs: Vec<ChunkProof>,
/// The [`BatchHeaderV6/V7`], as computed on-chain for this batch.
@@ -79,128 +115,253 @@ pub struct BatchProvingTask {
pub fork_name: String,
}
impl TryFrom<BatchProvingTask> for ProvingTask {
type Error = eyre::Error;
impl BatchProvingTask {
pub fn into_proving_task_with_precheck(self) -> Result<(ProvingTask, BatchInfo, B256)> {
let (witness, metadata, batch_pi_hash) = self.precheck()?;
let serialized_witness = if crate::witness_use_legacy_mode(&self.fork_name)? {
let legacy_witness = LegacyBatchWitness::from(witness);
to_rkyv_bytes::<RancorError>(&legacy_witness)?.into_vec()
} else {
super::encode_task_to_witness(&witness)?
};
fn try_from(value: BatchProvingTask) -> Result<Self> {
let witness = value.build_guest_input();
Ok(ProvingTask {
identifier: value.batch_header.batch_hash().to_string(),
fork_name: value.fork_name,
aggregated_proofs: value
let proving_task = ProvingTask {
identifier: self.batch_header.batch_hash().to_string(),
fork_name: self.fork_name,
aggregated_proofs: self
.chunk_proofs
.into_iter()
.map(|w_proof| w_proof.proof.into_stark_proof().expect("expect root proof"))
.collect(),
serialized_witness: vec![to_rkyv_bytes::<RancorError>(&witness)?.into_vec()],
serialized_witness: vec![serialized_witness],
vk: Vec::new(),
})
};
Ok((proving_task, metadata, batch_pi_hash))
}
}
impl BatchProvingTask {
fn build_guest_input(&self) -> BatchWitness {
let fork_name = self.fork_name.to_lowercase().as_str().into();
fn build_guest_input(&self, version: Version) -> BatchWitness {
tracing::info!(
"Handling batch task for input, version byte {}, Version data: {:?}",
self.version,
version
);
// sanity check: the parsed header type must match the version
match &self.batch_header {
BatchHeaderV::Validium(_) => assert!(
version.is_validium(),
"version {:?} is not match with parsed header, get validium header but version is not validium", version,
),
BatchHeaderV::V6(_) => assert_eq!(version.fork, ForkName::EuclidV1,
"hardfork mismatch for da-codec@v6 header: found={:?}, expected={:?}",
version.fork,
ForkName::EuclidV1,
),
BatchHeaderV::V7_to_V10(_) => assert!(
matches!(version.fork, ForkName::EuclidV2 | ForkName::Feynman | ForkName::Galileo | ForkName::GalileoV2),
"hardfork mismatch for da-codec@v7/8/9/10 header: found={}, expected={:?}",
version.fork,
[ForkName::EuclidV2, ForkName::Feynman, ForkName::Galileo, ForkName::GalileoV2],
),
}
// sanity check: calculate point eval needed and compare with task input
let (kzg_commitment, kzg_proof, challenge_digest) = {
let blob = point_eval::to_blob(&self.blob_bytes);
let commitment = point_eval::blob_to_kzg_commitment(&blob);
let versioned_hash = point_eval::get_versioned_hash(&commitment);
let challenge_digest = match &self.batch_header {
BatchHeaderV::V6(_) => {
assert_eq!(
fork_name,
ForkName::EuclidV1,
"hardfork mismatch for da-codec@v6 header: found={fork_name:?}, expected={:?}",
ForkName::EuclidV1,
);
EnvelopeV6::from_slice(self.blob_bytes.as_slice())
.challenge_digest(versioned_hash)
}
BatchHeaderV::V7_8(_) => {
let padded_blob_bytes = {
let mut padded_blob_bytes = self.blob_bytes.to_vec();
padded_blob_bytes.resize(N_BLOB_BYTES, 0);
padded_blob_bytes
};
let point_eval_witness = if !version.is_validium() {
// sanity check: calculate point eval needed and compare with task input
let (kzg_commitment, kzg_proof, challenge_digest) = {
let blob = point_eval::to_blob(&self.blob_bytes);
let commitment = point_eval::blob_to_kzg_commitment(&blob);
let versioned_hash = point_eval::get_versioned_hash(&commitment);
match fork_name {
ForkName::EuclidV2 => {
<EnvelopeV7 as Envelope>::from_slice(padded_blob_bytes.as_slice())
.challenge_digest(versioned_hash)
}
ForkName::Feynman => {
<EnvelopeV8 as Envelope>::from_slice(padded_blob_bytes.as_slice())
.challenge_digest(versioned_hash)
}
f => unreachable!(
"hardfork mismatch for da-codec@v7 header: found={}, expected={:?}",
f,
[ForkName::EuclidV2, ForkName::Feynman],
),
let padded_blob_bytes = {
let mut padded_blob_bytes = self.blob_bytes.to_vec();
padded_blob_bytes.resize(N_BLOB_BYTES, 0);
padded_blob_bytes
};
let challenge_digest = match version.codec {
Codec::V6 => {
// notice: v6 does not use padded blob bytes
<EnvelopeV6 as Envelope>::from_slice(self.blob_bytes.as_slice())
.challenge_digest(versioned_hash)
}
}
Codec::V7 => <EnvelopeV7 as Envelope>::from_slice(padded_blob_bytes.as_slice())
.challenge_digest(versioned_hash),
};
let (proof, _) = point_eval::get_kzg_proof(&blob, challenge_digest);
(commitment.to_bytes(), proof.to_bytes(), challenge_digest)
};
let (proof, _) = point_eval::get_kzg_proof(&blob, challenge_digest);
if let Some(k) = self.kzg_commitment {
assert_eq!(k, kzg_commitment);
}
(commitment.to_bytes(), proof.to_bytes(), challenge_digest)
if let Some(c) = self.challenge_digest {
assert_eq!(c, U256::from_be_bytes(challenge_digest.0));
}
if let Some(p) = self.kzg_proof {
assert_eq!(p, kzg_proof);
}
Some(build_point_eval_witness(
kzg_commitment.into_inner(),
kzg_proof.into_inner(),
))
} else {
assert!(self.kzg_proof.is_none(), "domain=validium has no blob-da");
assert!(
self.kzg_commitment.is_none(),
"domain=validium has no blob-da"
);
assert!(
self.challenge_digest.is_none(),
"domain=validium has no blob-da"
);
match &self.batch_header {
BatchHeaderV::Validium(h) => assert_eq!(
h.header.batch_hash(),
h.batch_hash,
"calculated batch hash match which from coordinator"
),
_ => panic!("unexpected header type"),
}
None
};
if let Some(k) = self.kzg_commitment {
assert_eq!(k, kzg_commitment);
}
if let Some(c) = self.challenge_digest {
assert_eq!(c, U256::from_be_bytes(challenge_digest.0));
}
if let Some(p) = self.kzg_proof {
assert_eq!(p, kzg_proof);
}
let point_eval_witness = PointEvalWitness {
kzg_commitment: kzg_commitment.into_inner(),
kzg_proof: kzg_proof.into_inner(),
let reference_header = match (version.domain, version.stf_version) {
(Domain::Scroll, STFVersion::V6) => {
ReferenceHeader::V6(*self.batch_header.to_zkvm_batch_header_v6())
}
// The da-codec for STF versions v7, v8, v9, v10 is identical. In zkvm-prover we do not
// create additional variants to indicate the identical behaviour of codec. Instead we
// add a separate variant for the STF version.
//
// We handle the different STF versions here but build the same batch header, since
// that type does not change. The batch header's version byte constructed in the
// coordinator actually defines the STF version (v7, v8 or v9, v10) and we can derive
// the hard-fork (e.g. feynman or galileo) and the codec from the version
// byte.
//
// Refer [`scroll_zkvm_types::public_inputs::Version`].
(
Domain::Scroll,
STFVersion::V7 | STFVersion::V8 | STFVersion::V9 | STFVersion::V10,
) => ReferenceHeader::V7_V8_V9(*self.batch_header.to_zkvm_batch_header_v7_to_v10()),
(Domain::Validium, STFVersion::V1) => {
ReferenceHeader::Validium(*self.batch_header.to_zkvm_batch_header_validium())
}
(domain, stf_version) => {
unreachable!("unsupported domain={domain:?},stf-version={stf_version:?}")
}
};
let reference_header = match fork_name {
ForkName::EuclidV1 => ReferenceHeader::V6(*self.batch_header.must_v6_header()),
ForkName::EuclidV2 => ReferenceHeader::V7(*self.batch_header.must_v7_header()),
ForkName::Feynman => ReferenceHeader::V8(*self.batch_header.must_v8_header()),
};
// patch: ensure the block_hash fields are ZERO for the scroll domain
let chunk_infos = self
.chunk_proofs
.iter()
.map(|p| {
if version.domain == Domain::Scroll {
ChunkInfo {
prev_blockhash: B256::ZERO,
post_blockhash: B256::ZERO,
..p.metadata.chunk_info.clone()
}
} else {
p.metadata.chunk_info.clone()
}
})
.collect();
BatchWitness {
fork_name,
version: version.as_version_byte(),
fork_name: version.fork,
chunk_proofs: self.chunk_proofs.iter().map(|proof| proof.into()).collect(),
chunk_infos: self
.chunk_proofs
.iter()
.map(|p| p.metadata.chunk_info.clone())
.collect(),
chunk_infos,
blob_bytes: self.blob_bytes.clone(),
reference_header,
point_eval_witness,
}
}
pub fn precheck_and_build_metadata(&self) -> Result<BatchInfo> {
let fork_name = ForkName::from(self.fork_name.as_str());
pub fn precheck(&self) -> Result<(BatchWitness, BatchInfo, B256)> {
// for every aggregation task, there are two steps needed to build the metadata:
// 1. generate data for metadata from the witness
// 2. validate every adjacent proof pair
let witness = self.build_guest_input();
let archieved = ToArchievedWitness::create(&witness)
.map_err(|e| eyre::eyre!("archieve batch witness fail: {e}"))?;
let archieved_witness = archieved
.access()
.map_err(|e| eyre::eyre!("access archieved batch witness fail: {e}"))?;
let metadata: BatchInfo = archieved_witness.into();
let version = Version::from(self.version);
let witness = self.build_guest_input(version);
let metadata = BatchInfo::from(&witness);
super::check_aggregation_proofs(
witness.chunk_infos.as_slice(),
Version::from(self.version),
)?;
let pi_hash = metadata.pi_hash_by_version(version);
super::check_aggregation_proofs(self.chunk_proofs.as_slice(), fork_name)?;
Ok(metadata)
Ok((witness, metadata, pi_hash))
}
}
#[test]
fn test_deserde_batch_header_v_validium() {
use std::str::FromStr;
// Top-level JSON: flattened enum tag "V1" + batch_hash
let json = r#"{
"V1": {
"version": 1,
"batch_index": 42,
"parent_batch_hash": "0x1111111111111111111111111111111111111111111111111111111111111111",
"post_state_root": "0x2222222222222222222222222222222222222222222222222222222222222222",
"withdraw_root": "0x3333333333333333333333333333333333333333333333333333333333333333",
"commitment": "0x4444444444444444444444444444444444444444444444444444444444444444"
},
"batch_hash": "0x5555555555555555555555555555555555555555555555555555555555555555"
}"#;
let parsed: BatchHeaderV = serde_json::from_str(json).expect("deserialize BatchHeaderV");
match parsed {
BatchHeaderV::Validium(v) => {
// Check the batch_hash field
let expected_batch_hash = B256::from_str(
"0x5555555555555555555555555555555555555555555555555555555555555555",
)
.unwrap();
assert_eq!(v.batch_hash, expected_batch_hash);
// Check the inner header variant and fields
match v.header {
BatchHeaderValidium::V1(h) => {
assert_eq!(h.version, 1);
assert_eq!(h.batch_index, 42);
let p = B256::from_str(
"0x1111111111111111111111111111111111111111111111111111111111111111",
)
.unwrap();
let s = B256::from_str(
"0x2222222222222222222222222222222222222222222222222222222222222222",
)
.unwrap();
let w = B256::from_str(
"0x3333333333333333333333333333333333333333333333333333333333333333",
)
.unwrap();
let c = B256::from_str(
"0x4444444444444444444444444444444444444444444444444444444444444444",
)
.unwrap();
assert_eq!(h.parent_batch_hash, p);
assert_eq!(h.post_state_root, s);
assert_eq!(h.withdraw_root, w);
assert_eq!(h.commitment, c);
// Sanity: computed batch hash equals the provided one (if method available)
// assert_eq!(v.header.batch_hash(), expected_batch_hash);
}
}
}
_ => panic!("expected validium header variant"),
}
}


@@ -1,16 +1,22 @@
use crate::proofs::BatchProof;
use eyre::Result;
use sbv_primitives::B256;
use scroll_zkvm_types::{
bundle::{BundleInfo, BundleWitness, ToArchievedWitness},
public_inputs::ForkName,
bundle::{BundleInfo, BundleWitness, LegacyBundleWitness},
public_inputs::{MultiVersionPublicInputs, Version},
task::ProvingTask,
utils::{to_rkyv_bytes, RancorError},
};
use crate::proofs::BatchProof;
/// Message indicating a sanity check failure.
const BUNDLE_SANITY_MSG: &str = "bundle must have at least one batch";
#[derive(Clone, serde::Deserialize, serde::Serialize)]
pub struct BundleProvingTask {
/// The version of batches in the bundle.
pub version: u8,
/// The STARK proofs of each batch in the bundle.
pub batch_proofs: Vec<BatchProof>,
/// for sanity check
pub bundle_info: Option<BundleInfo>,
@@ -19,6 +25,30 @@ pub struct BundleProvingTask {
}
impl BundleProvingTask {
pub fn into_proving_task_with_precheck(self) -> Result<(ProvingTask, BundleInfo, B256)> {
let (witness, bundle_info, bundle_pi_hash) = self.precheck()?;
let serialized_witness = if crate::witness_use_legacy_mode(&self.fork_name)? {
let legacy = LegacyBundleWitness::from(witness);
to_rkyv_bytes::<RancorError>(&legacy)?.into_vec()
} else {
super::encode_task_to_witness(&witness)?
};
let proving_task = ProvingTask {
identifier: self.identifier(),
fork_name: self.fork_name,
aggregated_proofs: self
.batch_proofs
.into_iter()
.map(|w_proof| w_proof.proof.into_stark_proof().expect("expect root proof"))
.collect(),
serialized_witness: vec![serialized_witness],
vk: Vec::new(),
};
Ok((proving_task, bundle_info, bundle_pi_hash))
}
fn identifier(&self) -> String {
assert!(!self.batch_proofs.is_empty(), "{BUNDLE_SANITY_MSG}",);
@@ -27,64 +57,45 @@ impl BundleProvingTask {
.first()
.expect(BUNDLE_SANITY_MSG)
.metadata
.batch_info
.batch_hash,
self.batch_proofs
.last()
.expect(BUNDLE_SANITY_MSG)
.metadata
.batch_info
.batch_hash,
);
format!("{first}-{last}")
}
fn build_guest_input(&self) -> BundleWitness {
fn build_guest_input(&self, version: Version) -> BundleWitness {
BundleWitness {
version: version.as_version_byte(),
batch_proofs: self.batch_proofs.iter().map(|proof| proof.into()).collect(),
batch_infos: self
.batch_proofs
.iter()
.map(|wrapped_proof| wrapped_proof.metadata.batch_info.clone())
.collect(),
fork_name: self.fork_name.to_lowercase().as_str().into(),
fork_name: version.fork,
}
}
pub fn precheck_and_build_metadata(&self) -> Result<BundleInfo> {
let fork_name = ForkName::from(self.fork_name.as_str());
fn precheck(&self) -> Result<(BundleWitness, BundleInfo, B256)> {
// for every aggregation task, there are two steps needed to build the metadata:
// 1. generate data for metadata from the witness
// 2. validate every adjacent proof pair
let witness = self.build_guest_input();
let archieved = ToArchievedWitness::create(&witness)
.map_err(|e| eyre::eyre!("archieve bundle witness fail: {e}"))?;
let archieved_witness = archieved
.access()
.map_err(|e| eyre::eyre!("access archieved bundle witness fail: {e}"))?;
let metadata: BundleInfo = archieved_witness.into();
let version = Version::from(self.version);
let witness = self.build_guest_input(version);
let metadata = BundleInfo::from(&witness);
super::check_aggregation_proofs(
witness.batch_infos.as_slice(),
Version::from(self.version),
)?;
let pi_hash = metadata.pi_hash_by_version(version);
super::check_aggregation_proofs(self.batch_proofs.as_slice(), fork_name)?;
Ok(metadata)
}
}
impl TryFrom<BundleProvingTask> for ProvingTask {
type Error = eyre::Error;
fn try_from(value: BundleProvingTask) -> Result<Self> {
let witness = value.build_guest_input();
Ok(ProvingTask {
identifier: value.identifier(),
fork_name: value.fork_name,
aggregated_proofs: value
.batch_proofs
.into_iter()
.map(|w_proof| w_proof.proof.into_stark_proof().expect("expect root proof"))
.collect(),
serialized_witness: vec![witness.rkyv_serialize(None)?.to_vec()],
vk: Vec::new(),
})
Ok((witness, metadata, pi_hash))
}
}


@@ -1,18 +1,26 @@
use super::chunk_interpreter::*;
use eyre::Result;
use sbv_primitives::{types::BlockWitness, B256};
use sbv_core::BlockWitness;
use sbv_primitives::{types::consensus::BlockHeader, B256};
use scroll_zkvm_types::{
chunk::{execute, ChunkInfo, ChunkWitness, ToArchievedWitness},
chunk::{execute, ChunkInfo, ChunkWitness, LegacyChunkWitness, ValidiumInputs},
public_inputs::{MultiVersionPublicInputs, Version},
task::ProvingTask,
utils::{to_rkyv_bytes, RancorError},
};
use super::chunk_interpreter::*;
/// The type aligned with the coordinator's definition
#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
pub struct ChunkTask {
/// The version for the chunk, as per [`Version`].
pub version: u8,
/// block hashes for a series of blocks
pub block_hashes: Vec<B256>,
/// The on-chain L1 msg queue hash before applying L1 msg txs from the chunk.
pub prev_msg_queue_hash: B256,
/// The on-chain L1 msg queue hash after applying L1 msg txs from the chunk (for validation)
pub post_msg_queue_hash: B256,
/// Specified fork name
pub fork_name: String,
}
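
For orientation, a coordinator payload matching this struct deserializes as in the sketch below (all values illustrative; the hashes are just 32 zero bytes):

#[test]
fn chunk_task_deserialize_sketch() {
    let zero = "0x0000000000000000000000000000000000000000000000000000000000000000";
    let json = format!(
        r#"{{"version":9,"block_hashes":["{zero}"],"prev_msg_queue_hash":"{zero}","post_msg_queue_hash":"{zero}","fork_name":"galileo"}}"#
    );
    let task: ChunkTask = serde_json::from_str(&json).expect("should deserialize");
    assert_eq!(task.block_hashes.len(), 1);
    assert_eq!(task.fork_name, "galileo");
}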
@@ -20,6 +28,7 @@ pub struct ChunkTask {
impl TryFromWithInterpreter<ChunkTask> for ChunkProvingTask {
fn try_from_with_interpret(
value: ChunkTask,
decryption_key: Option<&[u8]>,
interpreter: impl ChunkInterpreter,
) -> Result<Self> {
let mut block_witnesses = Vec::new();
@@ -29,10 +38,28 @@ impl TryFromWithInterpreter<ChunkTask> for ChunkProvingTask {
block_witnesses.push(witness);
}
let validium_txs = if Version::from(value.version).is_validium() {
let mut validium_txs = Vec::new();
for block_number in block_witnesses.iter().map(|w| w.header.number()) {
validium_txs.push(interpreter.try_fetch_l1_msgs(block_number)?);
}
validium_txs
} else {
vec![]
};
let validium_inputs = decryption_key.map(|secret_key| ValidiumInputs {
validium_txs,
secret_key: secret_key.into(),
});
Ok(Self {
version: value.version,
block_witnesses,
prev_msg_queue_hash: value.prev_msg_queue_hash,
post_msg_queue_hash: value.post_msg_queue_hash,
fork_name: value.fork_name,
validium_inputs,
})
}
}
@@ -46,12 +73,18 @@ const CHUNK_SANITY_MSG: &str = "chunk must have at least one block";
/// - {first_block_number}-{last_block_number}
#[derive(Clone, Debug, serde::Deserialize, serde::Serialize)]
pub struct ChunkProvingTask {
/// The version for the chunk, as per [Version][scroll_zkvm_types::version::Version].
pub version: u8,
/// Witnesses for every block in the chunk.
pub block_witnesses: Vec<BlockWitness>,
/// The on-chain L1 msg queue hash before applying L1 msg txs from the chunk.
pub prev_msg_queue_hash: B256,
/// The on-chain L1 msg queue hash after applying L1 msg txs from the chunk (for validation)
pub post_msg_queue_hash: B256,
/// Specified fork name
pub fork_name: String,
/// Optional inputs in case of domain=validium.
pub validium_inputs: Option<ValidiumInputs>,
}
#[derive(Clone, Debug)]
@@ -61,29 +94,13 @@ pub struct ChunkDetails {
pub total_gas_used: u64,
}
impl TryFrom<ChunkProvingTask> for ProvingTask {
type Error = eyre::Error;
fn try_from(value: ChunkProvingTask) -> Result<Self> {
let witness = value.build_guest_input();
Ok(ProvingTask {
identifier: value.identifier(),
fork_name: value.fork_name,
aggregated_proofs: Vec::new(),
serialized_witness: vec![witness.rkyv_serialize(None)?.to_vec()],
vk: Vec::new(),
})
}
}
impl ChunkProvingTask {
pub fn stats(&self) -> ChunkDetails {
let num_blocks = self.block_witnesses.len();
let num_txs = self
.block_witnesses
.iter()
.map(|b| b.transaction.len())
.map(|b| b.transactions.len())
.sum::<usize>();
let total_gas_used = self
.block_witnesses
@@ -98,6 +115,26 @@ impl ChunkProvingTask {
}
}
pub fn into_proving_task_with_precheck(self) -> Result<(ProvingTask, ChunkInfo, B256)> {
let (witness, chunk_info, chunk_pi_hash) = self.precheck()?;
let serialized_witness = if crate::witness_use_legacy_mode(&self.fork_name)? {
let legacy_witness = LegacyChunkWitness::from(witness);
to_rkyv_bytes::<RancorError>(&legacy_witness)?.into_vec()
} else {
super::encode_task_to_witness(&witness)?
};
let proving_task = ProvingTask {
identifier: self.identifier(),
fork_name: self.fork_name,
aggregated_proofs: Vec::new(),
serialized_witness: vec![serialized_witness],
vk: Vec::new(),
};
Ok((proving_task, chunk_info, chunk_pi_hash))
}
fn identifier(&self) -> String {
assert!(!self.block_witnesses.is_empty(), "{CHUNK_SANITY_MSG}",);
@@ -117,32 +154,42 @@ impl ChunkProvingTask {
format!("{first}-{last}")
}
fn build_guest_input(&self) -> ChunkWitness {
ChunkWitness::new(
&self.block_witnesses,
self.prev_msg_queue_hash,
self.fork_name.to_lowercase().as_str().into(),
)
fn build_guest_input(&self, version: Version) -> ChunkWitness {
if version.is_validium() {
assert!(self.validium_inputs.is_some());
ChunkWitness::new(
version.as_version_byte(),
&self.block_witnesses,
self.prev_msg_queue_hash,
version.fork,
self.validium_inputs.clone(),
)
} else {
ChunkWitness::new_scroll(
version.as_version_byte(),
&self.block_witnesses,
self.prev_msg_queue_hash,
version.fork,
)
}
}
fn insert_state(&mut self, node: sbv_primitives::Bytes) {
self.block_witnesses[0].states.push(node);
}
pub fn precheck_and_build_metadata(&self) -> Result<ChunkInfo> {
let witness = self.build_guest_input();
let archieved = ToArchievedWitness::create(&witness)
.map_err(|e| eyre::eyre!("archieve chunk witness fail: {e}"))?;
let archieved_witness = archieved
.access()
.map_err(|e| eyre::eyre!("access archieved chunk witness fail: {e}"))?;
let ret = ChunkInfo::try_from(archieved_witness).map_err(|e| eyre::eyre!("{e}"))?;
Ok(ret)
fn precheck(&self) -> Result<(ChunkWitness, ChunkInfo, B256)> {
let version = Version::from(self.version);
let witness = self.build_guest_input(version);
let chunk_info = ChunkInfo::try_from(witness.clone()).map_err(|e| eyre::eyre!("{e}"))?;
assert_eq!(chunk_info.post_msg_queue_hash, self.post_msg_queue_hash);
let chunk_pi_hash = chunk_info.pi_hash_by_version(version);
Ok((witness, chunk_info, chunk_pi_hash))
}
/// this method checks the validity of the current task (there may be missing storage nodes)
/// and tries fixing it until everything is ok
#[deprecated]
pub fn prepare_task_via_interpret(
&mut self,
interpreter: impl ChunkInterpreter,
@@ -165,14 +212,9 @@ impl ChunkProvingTask {
let err_parse_re = regex::Regex::new(pattern)?;
let mut attempts = 0;
loop {
let witness = self.build_guest_input();
let archieved = ToArchievedWitness::create(&witness)
.map_err(|e| eyre::eyre!("archieve chunk witness fail: {e}"))?;
let archieved_witness = archieved
.access()
.map_err(|e| eyre::eyre!("access archieved chunk witness fail: {e}"))?;
let witness = self.build_guest_input(Version::euclid_v2());
match execute(archieved_witness) {
match execute(witness) {
Ok(_) => return Ok(()),
Err(e) => {
if let Some(caps) = err_parse_re.captures(&e) {


@@ -1,5 +1,6 @@
use eyre::Result;
use sbv_primitives::{types::BlockWitness, Bytes, B256};
use sbv_core::BlockWitness;
use sbv_primitives::{types::consensus::TxL1Message, Bytes, B256};
/// An interpreter which is critical in translating chunk data,
/// since we need to fetch block witness and storage node data
@@ -12,13 +13,22 @@ pub trait ChunkInterpreter {
) -> Result<BlockWitness> {
Err(eyre::eyre!("no implement"))
}
fn try_fetch_storage_node(&self, _node_hash: B256) -> Result<Bytes> {
Err(eyre::eyre!("no implement"))
}
fn try_fetch_l1_msgs(&self, _block_number: u64) -> Result<Vec<TxL1Message>> {
Err(eyre::eyre!("no implement"))
}
}
pub trait TryFromWithInterpreter<T>: Sized {
fn try_from_with_interpret(value: T, intepreter: impl ChunkInterpreter) -> Result<Self>;
fn try_from_with_interpret(
value: T,
decryption_key: Option<&[u8]>,
interpreter: impl ChunkInterpreter,
) -> Result<Self>;
}
pub struct DummyInterpreter {}
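
Since every trait method has a default `Err` body, a no-op interpreter needs no method bodies at all; presumably the empty impl is all `DummyInterpreter` requires (sketch, the actual impl is not shown in this diff):

// Every call on DummyInterpreter then fails fast with "not implemented".
impl ChunkInterpreter for DummyInterpreter {}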


@@ -41,8 +41,11 @@ pub trait ProofVerifier {
#[derive(Debug, Serialize, Deserialize)]
pub struct CircuitConfig {
pub version: u8,
pub fork_name: String,
pub assets_path: String,
#[serde(default)]
pub features: Option<String>,
}
#[derive(Debug, Serialize, Deserialize)]
@@ -50,7 +53,7 @@ pub struct VerifierConfig {
pub circuits: Vec<CircuitConfig>,
}
type HardForkName = String;
pub(crate) type HardForkName = String;
type VerifierType = Arc<Mutex<dyn ProofVerifier + Send>>;
static VERIFIERS: OnceLock<HashMap<HardForkName, VerifierType>> = OnceLock::new();
@@ -61,14 +64,18 @@ pub fn init(config: VerifierConfig) {
for cfg in &config.circuits {
let canonical_fork_name = cfg.fork_name.to_lowercase();
let verifier = Verifier::new(&cfg.assets_path, canonical_fork_name.as_str().into());
let verifier = Verifier::new(&cfg.assets_path, cfg.version);
let ret = verifiers.insert(canonical_fork_name, Arc::new(Mutex::new(verifier)));
assert!(
ret.is_none(),
"DO NOT init the same fork {} twice",
cfg.fork_name
);
tracing::info!("load verifier config for fork {}", cfg.fork_name);
tracing::info!(
"load verifier config for fork {} (ver {})",
cfg.fork_name,
cfg.version
);
}
let ret = VERIFIERS.set(verifiers).is_ok();


@@ -6,22 +6,22 @@ use crate::{
proofs::{AsRootProof, BatchProof, BundleProof, ChunkProof, IntoEvmProof},
utils::panic_catch,
};
use scroll_zkvm_types::public_inputs::ForkName;
use scroll_zkvm_types::version::Version;
use scroll_zkvm_verifier::verifier::UniversalVerifier;
use std::path::Path;
pub struct Verifier {
verifier: UniversalVerifier,
fork: ForkName,
version: Version,
}
impl Verifier {
pub fn new(assets_dir: &str, fork: ForkName) -> Self {
let verifier_bin = Path::new(assets_dir).join("verifier.bin");
pub fn new(assets_dir: &str, ver_n: u8) -> Self {
let verifier_bin = Path::new(assets_dir);
Self {
verifier: UniversalVerifier::setup(&verifier_bin).expect("Setting up chunk verifier"),
fork,
verifier: UniversalVerifier::setup(verifier_bin).expect("Setting up universal verifier"),
version: Version::from(ver_n),
}
}
}
@@ -31,17 +31,21 @@ impl ProofVerifier for Verifier {
panic_catch(|| match task_type {
TaskType::Chunk => {
let proof = serde_json::from_slice::<ChunkProof>(proof).unwrap();
assert!(proof.pi_hash_check(self.fork));
UniversalVerifier::verify_stark_proof(proof.as_root_proof(), &proof.vk).unwrap()
assert!(proof.pi_hash_check(self.version));
self.verifier
.verify_stark_proof(proof.as_root_proof(), &proof.vk)
.unwrap()
}
TaskType::Batch => {
let proof = serde_json::from_slice::<BatchProof>(proof).unwrap();
assert!(proof.pi_hash_check(self.fork));
UniversalVerifier::verify_stark_proof(proof.as_root_proof(), &proof.vk).unwrap()
assert!(proof.pi_hash_check(self.version));
self.verifier
.verify_stark_proof(proof.as_root_proof(), &proof.vk)
.unwrap()
}
TaskType::Bundle => {
let proof = serde_json::from_slice::<BundleProof>(proof).unwrap();
assert!(proof.pi_hash_check(self.fork));
assert!(proof.pi_hash_check(self.version));
let vk = proof.vk.clone();
let evm_proof = proof.into_evm_proof();
self.verifier.verify_evm_proof(&evm_proof, &vk).unwrap()


@@ -152,18 +152,30 @@ pub unsafe extern "C" fn gen_universal_task(
fork_name: *const c_char,
expected_vk: *const u8,
expected_vk_len: usize,
decryption_key: *const u8,
decryption_key_len: usize,
) -> HandlingResult {
let mut interpreter = None;
let task_json = if task_type == TaskType::Chunk as i32 {
let pre_task_str = c_char_to_str(task);
let cli = l2geth::get_client();
match libzkp::checkout_chunk_task(pre_task_str, cli) {
Ok(str) => {
interpreter.replace(cli);
str
let decryption_key = if decryption_key_len > 0 {
if decryption_key_len != 32 {
tracing::error!(
"gen_universal_task received {}-byte decryption key; expected 32",
decryption_key_len
);
return failed_handling_result();
}
Some(std::slice::from_raw_parts(
decryption_key,
decryption_key_len,
))
} else {
None
};
match libzkp::checkout_chunk_task(pre_task_str, decryption_key, cli) {
Ok(str) => str,
Err(e) => {
println!("gen_universal_task failed at pre interpret step, error: {e}");
tracing::error!("gen_universal_task failed at pre interpret step, error: {e}");
return failed_handling_result();
}
@@ -178,13 +190,8 @@ pub unsafe extern "C" fn gen_universal_task(
&[]
};
let ret = libzkp::gen_universal_task(
task_type,
&task_json,
c_char_to_str(fork_name),
expected_vk,
interpreter,
);
let ret =
libzkp::gen_universal_task(task_type, &task_json, c_char_to_str(fork_name), expected_vk);
if let Ok((pi_hash, meta_json, task_json)) = ret {
let expected_pi_hash = pi_hash.0.map(|byte| byte as c_char);
@@ -248,6 +255,19 @@ pub unsafe extern "C" fn gen_wrapped_proof(
}
}
/// # Safety
#[no_mangle]
pub unsafe extern "C" fn univ_task_compatibility_fix(task_json: *const c_char) -> *mut c_char {
let task_json_str = c_char_to_str(task_json);
match libzkp::univ_task_compatibility_fix(task_json_str) {
Ok(result) => CString::new(result).unwrap().into_raw(),
Err(e) => {
tracing::error!("univ_task_compability_fix failed, error: {:#}", e);
std::ptr::null_mut()
}
}
}
/// # Safety
#[no_mangle]
pub unsafe extern "C" fn release_string(ptr: *mut c_char) {


@@ -8,7 +8,8 @@ edition.workspace = true
[dependencies]
scroll-zkvm-types.workspace = true
scroll-zkvm-prover.workspace = true
scroll-proving-sdk = { git = "https://github.com/scroll-tech/scroll-proving-sdk.git", rev = "4c36ab2" }
libzkp = { path = "../libzkp"}
scroll-proving-sdk = { git = "https://github.com/scroll-tech/scroll-proving-sdk.git", rev = "05648db" }
serde.workspace = true
serde_json.workspace = true
once_cell.workspace = true
@@ -33,3 +34,7 @@ clap = { version = "4.5", features = ["derive"] }
ctor = "0.2.8"
url = { version = "2.5.4", features = ["serde",] }
serde_bytes = "0.11.15"
[features]
default = []
cuda = ["scroll-zkvm-prover/cuda"]
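The new `cuda` feature does nothing locally except forward to `scroll-zkvm-prover/cuda`, the standard Cargo pattern for letting the leaf binary opt the whole dependency tree into GPU proving (built with `cargo build --features cuda`). An illustrative sketch of how such a flag is typically consumed; the function below is not from this codebase:

```rust
// Compiled only when the crate is built with `--features cuda`; the real
// backend switch presumably happens inside scroll-zkvm-prover.
#[cfg(feature = "cuda")]
fn proving_backend() -> &'static str {
    "gpu"
}

#[cfg(not(feature = "cuda"))]
fn proving_backend() -> &'static str {
    "cpu"
}

fn main() {
    println!("proving backend: {}", proving_backend());
}
```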


@@ -2,6 +2,9 @@
"feynman": {
"b68fdc3f28a5ce006280980df70cd3447e56913e5bca6054603ba85f0794c23a6618ea25a7991845bbc5fd571670ee47379ba31ace92d345bca59702a0d4112d": "https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/0.5.2/chunk/",
"9a3f66370f11e3303f1a1248921025104e83253efea43a70d221cf4e15fc145bf2be2f4468d1ac4a70e7682babb1c60417e21c7633d4b55b58f44703ec82b05a": "https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/0.5.2/batch/",
"1f8627277e1c1f6e1cc70c03e6fde06929e5ea27ca5b1d56e23b235dfeda282e22c0e5294bcb1b3a9def836f8d0f18612a9860629b9497292976ca11844b7e73": "https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/0.5.2/bundle/"
"1f8627277e1c1f6e1cc70c03e6fde06929e5ea27ca5b1d56e23b235dfeda282e22c0e5294bcb1b3a9def836f8d0f18612a9860629b9497292976ca11844b7e73": "https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/0.5.2/bundle/",
"7eb91f1885cc7a63cc848928f043fa56bf747161a74cd933d88c0456b90643346618ea25a7991845bbc5fd571670ee47379ba31ace92d345bca59702a0d4112d": "https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/0.6.0-rc.1/chunk/",
"dc653e7416628c612fa4d80b4724002bad4fde3653aef7316b80df0c19740a1bf2be2f4468d1ac4a70e7682babb1c60417e21c7633d4b55b58f44703ec82b05a": "https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/0.6.0-rc.1/batch/",
"14de1c74b663ed3c99acb03e90a5753b5923233c5c590864ad7746570297d16722c0e5294bcb1b3a9def836f8d0f18612a9860629b9497292976ca11844b7e73": "https://circuit-release.s3.us-west-2.amazonaws.com/scroll-zkvm/releases/0.6.0-rc.1/bundle/"
}
}
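This config maps hex-encoded verifying keys to the S3 prefixes of the matching circuit releases, so the same "feynman" fork can serve both the 0.5.2 and 0.6.0-rc.1 assets and a prover can fetch whatever a task's vk demands. A minimal lookup sketch, assuming `serde_json` (already a dependency above) and an illustrative truncated key:

```rust
use std::collections::HashMap;

// fork name -> (hex-encoded vk -> circuit asset URL prefix)
type ReleaseMap = HashMap<String, HashMap<String, String>>;

fn asset_url<'a>(releases: &'a ReleaseMap, fork: &str, vk_hex: &str) -> Option<&'a str> {
    releases.get(fork)?.get(vk_hex).map(String::as_str)
}

fn main() {
    // Truncated, illustrative key; the real keys are the 128-hex-char vks above.
    let json = r#"{"feynman": {"b68f": "https://example.com/releases/0.5.2/chunk/"}}"#;
    let releases: ReleaseMap = serde_json::from_str(json).unwrap();
    assert_eq!(
        asset_url(&releases, "feynman", "b68f"),
        Some("https://example.com/releases/0.5.2/chunk/")
    );
}
```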


@@ -12,6 +12,7 @@ use scroll_proving_sdk::{
         ProvingService,
     },
 };
+use scroll_zkvm_types::ProvingTask;
 use serde::{Deserialize, Serialize};
 use std::{
     collections::HashMap,
@@ -143,7 +144,6 @@ impl LocalProverConfig {
 #[derive(Clone, Serialize, Deserialize)]
 pub struct CircuitConfig {
-    pub hard_fork_name: String,
     /// The path to save assets for a specified hard fork phase
     pub workspace_path: String,
     #[serde(flatten)]
@@ -273,6 +273,8 @@ impl LocalProver {
         let created_at = duration.as_secs() as f64 + duration.subsec_nanos() as f64 * 1e-9;
         let prover_task = UniversalHandler::get_task_from_input(&req.input)?;
+        let is_openvm_13 = prover_task.use_openvm_13;
+        let prover_task: ProvingTask = prover_task.into();
         let vk = hex::encode(&prover_task.vk);
         let handler = if let Some(handler) = self.handlers.get(&vk) {
             handler.clone()
@@ -300,7 +302,7 @@ impl LocalProver {
             .await?;
         let circuits_handler = Arc::new(Mutex::new(UniversalHandler::new(
             &asset_path,
-            req.proof_type,
+            is_openvm_13,
         )?));
         self.handlers.insert(vk, circuits_handler.clone());
         circuits_handler
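`LocalProver` keys its circuit handlers by the task's hex-encoded vk: the first request for a vk builds a handler (downloading assets as needed), and later requests reuse it. A stripped-down sketch of that cache shape; `Handler` and `handler_for` are stand-ins:

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Stand-in for UniversalHandler: construction is assumed expensive
// (it loads circuit assets), which is why instances are cached by vk.
#[allow(dead_code)]
struct Handler {
    vk_hex: String,
}

#[derive(Default)]
struct LocalProver {
    handlers: HashMap<String, Arc<Handler>>,
}

impl LocalProver {
    fn handler_for(&mut self, vk_hex: &str) -> Arc<Handler> {
        if let Some(h) = self.handlers.get(vk_hex) {
            return h.clone(); // cache hit: reuse the handler built earlier
        }
        let h = Arc::new(Handler { vk_hex: vk_hex.to_owned() });
        self.handlers.insert(vk_hex.to_owned(), h.clone());
        h
    }
}

fn main() {
    let mut prover = LocalProver::default();
    let first = prover.handler_for("b68f");
    let second = prover.handler_for("b68f");
    assert!(Arc::ptr_eq(&first, &second)); // same vk, same cached handler
}
```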


@@ -1,3 +1,5 @@
+#![allow(dead_code)]
+
 use serde::{Deserialize, Deserializer, Serialize, Serializer};
 
 #[derive(Serialize, Deserialize, Default)]


@@ -2,9 +2,8 @@ use std::path::Path;
 use super::CircuitsHandler;
 use async_trait::async_trait;
 use base64::{prelude::BASE64_STANDARD, Engine};
 use eyre::Result;
-use scroll_proving_sdk::prover::ProofType;
+use libzkp::ProvingTaskExt;
 use scroll_zkvm_prover::{Prover, ProverConfig};
 use scroll_zkvm_types::ProvingTask;
 use tokio::sync::Mutex;
@@ -12,32 +11,33 @@ pub struct UniversalHandler {
     prover: Prover,
 }
 
 /// Safe for current usage as a `CircuitsHandler`: the handler is protected inside a
 /// Mutex and the instance is NEVER extracted via `into_inner`.
 unsafe impl Send for UniversalHandler {}
 
 impl UniversalHandler {
-    pub fn new(workspace_path: impl AsRef<Path>, proof_type: ProofType) -> Result<Self> {
+    pub fn new(workspace_path: impl AsRef<Path>, is_openvm_v13: bool) -> Result<Self> {
         let path_app_exe = workspace_path.as_ref().join("app.vmexe");
         let path_app_config = workspace_path.as_ref().join("openvm.toml");
-        let segment_len = Some((1 << 22) - 100);
+        let segment_len = Some((1 << 21) - 100);
         let config = ProverConfig {
             path_app_config,
             path_app_exe,
             segment_len,
+            is_openvm_v13,
         };
-        let use_evm = proof_type == ProofType::Bundle;
-        let prover = Prover::setup(config, use_evm, None)?;
+        let prover = Prover::setup(config, None)?;
         Ok(Self { prover })
     }
 
-    /// get_prover get the inner prover, later we would replace chunk/batch/bundle_prover with
-    /// universal prover, before that, use bundle_prover as the represent one
-    pub fn get_prover(&self) -> &Prover {
-        &self.prover
+    pub fn get_prover(&mut self) -> &mut Prover {
+        &mut self.prover
     }
 
-    pub fn get_task_from_input(input: &str) -> Result<ProvingTask> {
+    pub fn get_task_from_input(input: &str) -> Result<ProvingTaskExt> {
         Ok(serde_json::from_str(input)?)
     }
 }
@@ -45,14 +45,7 @@ impl UniversalHandler {
 #[async_trait]
 impl CircuitsHandler for Mutex<UniversalHandler> {
     async fn get_proof_data(&self, u_task: &ProvingTask, need_snark: bool) -> Result<String> {
-        let handler_self = self.lock().await;
-        if need_snark && handler_self.prover.evm_prover.is_none() {
-            eyre::bail!(
-                "do not init prover for evm (vk: {})",
-                BASE64_STANDARD.encode(handler_self.get_prover().get_app_vk())
-            )
-        }
+        let mut handler_self = self.lock().await;
         let proof = handler_self
             .get_prover()
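Two details worth noting: the segment length halves from (1 << 22) - 100 = 4,194,204 to (1 << 21) - 100 = 2,097,052 cycles, and `get_prover` now returns `&mut Prover`, so callers must hold the tokio `Mutex` guard mutably. A self-contained sketch of that locking pattern, assuming tokio with the `sync`, `macros`, and `rt-multi-thread` features; `Prover` and `Handler` here are stand-ins:

```rust
use tokio::sync::Mutex;

// Stand-in prover: proving mutates internal state, hence the &mut receiver.
struct Prover {
    proofs_generated: u32,
}

impl Prover {
    fn prove(&mut self, task: &str) -> String {
        self.proofs_generated += 1;
        format!("proof({task})#{}", self.proofs_generated)
    }
}

struct Handler {
    prover: Prover,
}

impl Handler {
    // Mirrors the new signature: mutable access to the inner prover.
    fn get_prover(&mut self) -> &mut Prover {
        &mut self.prover
    }
}

#[tokio::main]
async fn main() {
    let handler = Mutex::new(Handler { prover: Prover { proofs_generated: 0 } });
    // The guard must be mut now; the Mutex is what makes sharing safe.
    let mut guard = handler.lock().await;
    let proof = guard.get_prover().prove("chunk-1");
    assert_eq!(proof, "proof(chunk-1)#1");
}
```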


@@ -8,7 +8,7 @@ require (
 	github.com/jmoiron/sqlx v1.3.5
 	github.com/lib/pq v1.10.9
 	github.com/pressly/goose/v3 v3.16.0
-	github.com/scroll-tech/go-ethereum v1.10.14-0.20250305151038-478940e79601
+	github.com/scroll-tech/go-ethereum v1.10.14-0.20251128092113-8629f088d78f
 	github.com/stretchr/testify v1.10.0
 	github.com/urfave/cli/v2 v2.25.7
 )
@@ -34,11 +34,9 @@ require (
github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 // indirect
go.opentelemetry.io/otel/trace v1.24.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
golang.org/x/crypto v0.32.0 // indirect
golang.org/x/net v0.25.0 // indirect
golang.org/x/sync v0.11.0 // indirect
golang.org/x/sys v0.30.0 // indirect
golang.org/x/text v0.21.0 // indirect
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20231127180814-3a041ad873d4 // indirect
google.golang.org/protobuf v1.33.0 // indirect
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect


@@ -121,8 +121,8 @@ github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU
 github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
 github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
 github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
-github.com/scroll-tech/go-ethereum v1.10.14-0.20250305151038-478940e79601 h1:NEsjCG6uSvLRBlsP3+x6PL1kM+Ojs3g8UGotIPgJSz8=
-github.com/scroll-tech/go-ethereum v1.10.14-0.20250305151038-478940e79601/go.mod h1:OblWe1+QrZwdpwO0j/LY3BSGuKT3YPUFBDQQgvvfStQ=
+github.com/scroll-tech/go-ethereum v1.10.14-0.20251128092113-8629f088d78f h1:j6SjP98MoWFFX9TwB1/nFYEkayqHQsrtE66Ll2C+oT0=
+github.com/scroll-tech/go-ethereum v1.10.14-0.20251128092113-8629f088d78f/go.mod h1:Aa/kD1XB+OV/7rRxMQrjcPCB4b0pKyLH0gsTrtuHi38=
 github.com/segmentio/asm v1.2.0 h1:9BQrFxC+YOHJlTlHGkTrFWf59nbL3XnCoFLTwDCI7ys=
 github.com/segmentio/asm v1.2.0/go.mod h1:BqMnlJP91P8d+4ibuonYZw9mfnzI9HfxselHZr5aAcs=
 github.com/sethvargo/go-retry v0.2.4 h1:T+jHEQy/zKJf5s95UkguisicE0zuF9y7+/vgz08Ocec=

@@ -9,13 +9,14 @@ import (
"github.com/pressly/goose/v3"
)
//go:embed migrations/*.sql
//go:embed migrations
var embedMigrations embed.FS
// MigrationsDir migration dir
const MigrationsDir string = "migrations"
func init() {
// note goose ignore ono-sql files by default so we do not need to specify *.sql
goose.SetBaseFS(embedMigrations)
goose.SetSequential(true)
goose.SetTableName("scroll_migrations")
@@ -24,6 +25,41 @@ func init() {
 	goose.SetVerbose(verbose)
 }
 
+// MigrateModule migrates the db used by another module, tracked in a
+// module-specific goose table. SQL files for that module must live in a
+// sub-directory of `MigrationsDir` named after the module.
+func MigrateModule(db *sql.DB, moduleName string) error {
+	goose.SetTableName(moduleName + "_migrations")
+	defer func() {
+		goose.SetTableName("scroll_migrations")
+	}()
+	return goose.Up(db, MigrationsDir+"/"+moduleName, goose.WithAllowMissing())
+}
+
+// RollbackModule rolls back the specified module to the given version
+func RollbackModule(db *sql.DB, moduleName string, version *int64) error {
+	goose.SetTableName(moduleName + "_migrations")
+	defer func() {
+		goose.SetTableName("scroll_migrations")
+	}()
+	moduleDir := MigrationsDir + "/" + moduleName
+	if version != nil {
+		return goose.DownTo(db, moduleDir, *version)
+	}
+	return goose.Down(db, moduleDir)
+}
+
+// ResetModuleDB cleans and re-migrates the db for a module.
+func ResetModuleDB(db *sql.DB, moduleName string) error {
+	if err := RollbackModule(db, moduleName, new(int64)); err != nil {
+		return err
+	}
+	return MigrateModule(db, moduleName)
+}
+
 // Migrate migrates the db
 func Migrate(db *sql.DB) error {
 	//return goose.Up(db, MIGRATIONS_DIR, goose.WithAllowMissing())
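These helpers let a sub-module share the database while tracking its own schema version in a `<module>_migrations` goose table, with its SQL under `migrations/<module>`. A hedged usage sketch in Go; the module name "proxy", the DSN, and the import path for this migrate package are all assumptions:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // Postgres driver, per the go.mod above

	// Assumed import path for the migrate package shown in this diff;
	// adjust to wherever migrate.go actually lives in the repo.
	"scroll-tech/common/database/migrate"
)

func main() {
	// Illustrative DSN; real services read this from config.
	db, err := sql.Open("postgres", "postgres://localhost:5432/scroll?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Apply the proxy module's migrations from migrations/proxy, tracked in
	// a dedicated proxy_migrations table instead of scroll_migrations.
	if err := migrate.MigrateModule(db, "proxy"); err != nil {
		log.Fatal(err)
	}
}
```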


@@ -0,0 +1,30 @@
+-- +goose Up
+-- +goose StatementBegin
+create table prover_sessions
+(
+    public_key TEXT NOT NULL,
+    upstream   TEXT NOT NULL,
+    up_token   TEXT NOT NULL,
+    expired    TIMESTAMP(0) NOT NULL,
+    constraint uk_prover_sessions_public_key_upstream unique (public_key, upstream)
+);
+
+create index idx_prover_sessions_expired on prover_sessions (expired);
+
+create table priority_upstream
+(
+    public_key  TEXT NOT NULL,
+    upstream    TEXT NOT NULL,
+    update_time TIMESTAMP(0) NOT NULL DEFAULT now()
+);
+
+create unique index idx_priority_upstream_public_key on priority_upstream (public_key);
+-- +goose StatementEnd
+
+-- +goose Down
+-- +goose StatementBegin
+drop index if exists idx_prover_sessions_expired;
+drop table if exists prover_sessions;
+drop table if exists priority_upstream;
+-- +goose StatementEnd
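The schema keeps one session row per (public_key, upstream) pair with an expiry, plus a single preferred upstream per prover. A hedged sketch of the obvious read path against it; the query and function are illustrative, only the table and column names come from the migration:

```go
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/lib/pq"
)

// lookupSession returns the upstream token for a prover's unexpired session,
// or sql.ErrNoRows if none exists. Illustrative; only the table and column
// names come from the migration above.
func lookupSession(db *sql.DB, publicKey, upstream string) (string, error) {
	var token string
	err := db.QueryRow(
		`SELECT up_token FROM prover_sessions
		 WHERE public_key = $1 AND upstream = $2 AND expired > $3`,
		publicKey, upstream, time.Now(),
	).Scan(&token)
	return token, err
}

func main() {
	db, err := sql.Open("postgres", "postgres://localhost:5432/scroll?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	token, err := lookupSession(db, "prover-pubkey-hex", "https://coordinator-1.example.com")
	if err != nil {
		log.Fatal(err) // sql.ErrNoRows when no live session exists
	}
	log.Println("session token:", token)
}
```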


@@ -731,8 +731,9 @@ github.com/cockroachdb/tokenbucket v0.0.0-20230807174530-cc333fc44b06/go.mod h1:
github.com/codegangsta/inject v0.0.0-20150114235600-33e0aa1cb7c0/go.mod h1:4Zcjuz89kmFXt9morQgcfYZAYZ5n8WHjt81YYWIwtTM=
github.com/compose-spec/compose-go v1.20.0 h1:h4ZKOst1EF/DwZp7dWkb+wbTVE4nEyT9Lc89to84Ol4=
github.com/compose-spec/compose-go v1.20.0/go.mod h1:+MdqXV4RA7wdFsahh/Kb8U0pAJqkg7mr4PM9tFKU8RM=
github.com/consensys/bavard v0.1.27/go.mod h1:k/zVjHHC4B+PQy1Pg7fgvG3ALicQw540Crag8qx+dZs=
github.com/consensys/bavard v0.1.13/go.mod h1:9ItSMtA/dXMAiL7BG6bqW2m3NdSEObYWoH223nGHukI=
github.com/consensys/gnark-crypto v0.12.1/go.mod h1:v2Gy7L/4ZRosZ7Ivs+9SfUDr0f5UlG+EM5t7MPHiLuY=
github.com/consensys/gnark-crypto v0.13.0/go.mod h1:wKqwsieaKPThcFkHe0d0zMsbHEUWFmZcG7KBCse210o=
github.com/container-orchestrated-devices/container-device-interface v0.6.1 h1:mz77uJoP8im/4Zins+mPqt677ZMaflhoGaYrRAl5jvA=
github.com/container-orchestrated-devices/container-device-interface v0.6.1/go.mod h1:40T6oW59rFrL/ksiSs7q45GzjGlbvxnA4xaK6cyq+kA=
github.com/containerd/aufs v1.0.0 h1:2oeJiwX5HstO7shSrPZjrohJZLzK36wvpdmzDRkL/LY=
@@ -792,6 +793,7 @@ github.com/deckarep/golang-set v1.8.0/go.mod h1:5nI87KwE7wgsBU1F4GKAw2Qod7p5kyS3
github.com/deckarep/golang-set/v2 v2.6.0/go.mod h1:VAky9rY/yGXJOLEDv3OMci+7wtDpOF4IN+y82NBOac4=
github.com/decred/dcrd/crypto/blake256 v1.0.0 h1:/8DMNYp9SGi5f0w7uCm6d6M4OU2rGFK09Y2A4Xv7EE0=
github.com/decred/dcrd/crypto/blake256 v1.0.0/go.mod h1:sQl2p6Y26YV+ZOcSTP6thNdn47hh8kt6rqSlvmrXFAc=
github.com/decred/dcrd/crypto/blake256 v1.0.1/go.mod h1:2OfgNZ5wDpcsFmHmCK5gZTPcCXqlm2ArzUIkw9czNJo=
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.0.1 h1:YLtO71vCjJRCBcrPMtQ9nqBsqpA1m5sE92cU+pd5Mcc=
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.0.1/go.mod h1:hyedUtir6IdtD/7lIxGeCxkaw7y45JueMRL4DIyJDKs=
github.com/deepmap/oapi-codegen v1.6.0 h1:w/d1ntwh91XI0b/8ja7+u5SvA4IFfM0UNNLmiDR1gg0=
@@ -1199,6 +1201,7 @@ github.com/labstack/gommon v0.3.0 h1:JEeO0bvc78PKdyHxloTKiF8BD5iGrH8T6MSeGvSgob0
github.com/labstack/gommon v0.3.0/go.mod h1:MULnywXg0yavhxWKc+lOruYdAhDwPK9wf0OL7NoOu+k=
github.com/labstack/gommon v0.3.1/go.mod h1:uW6kP17uPlLJsD3ijUYn3/M5bAxtlZhMI6m3MFxTMTM=
github.com/labstack/gommon v0.4.0/go.mod h1:uW6kP17uPlLJsD3ijUYn3/M5bAxtlZhMI6m3MFxTMTM=
github.com/leanovate/gopter v0.2.9/go.mod h1:U2L/78B+KVFIx2VmW6onHJQzXtFb+p5y3y2Sh+Jxxv8=
github.com/lestrrat-go/backoff/v2 v2.0.8 h1:oNb5E5isby2kiro9AgdHLv5N5tint1AnDVVf2E2un5A=
github.com/lestrrat-go/backoff/v2 v2.0.8/go.mod h1:rHP/q/r9aT27n24JQLa7JhSQZCKBBOiM/uP402WwN8Y=
github.com/lestrrat-go/blackmagic v1.0.0 h1:XzdxDbuQTz0RZZEmdU7cnQxUtFUzgCSPq8RCz4BxIi4=
@@ -1409,11 +1412,15 @@ github.com/scroll-tech/da-codec v0.1.3-0.20250609113414-f33adf0904bd/go.mod h1:g
github.com/scroll-tech/da-codec v0.1.3-0.20250609154559-8935de62c148 h1:cyK1ifU2fRoMl8YWR9LOsZK4RvJnlG3RODgakj5I8VY=
github.com/scroll-tech/da-codec v0.1.3-0.20250609154559-8935de62c148/go.mod h1:gz5x3CsLy5htNTbv4PWRPBU9nSAujfx1U2XtFcXoFuk=
github.com/scroll-tech/da-codec v0.1.3-0.20250626091118-58b899494da6/go.mod h1:Z6kN5u2khPhiqHyk172kGB7o38bH/nj7Ilrb/46wZGg=
github.com/scroll-tech/da-codec v0.1.3-0.20250825071838-cddc263e5ef6/go.mod h1:Z6kN5u2khPhiqHyk172kGB7o38bH/nj7Ilrb/46wZGg=
github.com/scroll-tech/ecies-go/v2 v2.0.10-beta.1/go.mod h1:A+pHaITd+ogBm4Rk35xebF9OPiyMYlFlgqBOiY5PSjg=
github.com/scroll-tech/go-ethereum v1.10.14-0.20240607130425-e2becce6a1a4/go.mod h1:byf/mZ8jLYUCnUePTicjJWn+RvKdxDn7buS6glTnMwQ=
github.com/scroll-tech/go-ethereum v1.10.14-0.20240821074444-b3fa00861e5e/go.mod h1:swB5NSp8pKNDuYsTxfR08bHS6L56i119PBx8fxvV8Cs=
github.com/scroll-tech/go-ethereum v1.10.14-0.20241010064814-3d88e870ae22/go.mod h1:r9FwtxCtybMkTbWYCyBuevT9TW3zHmOTHqD082Uh+Oo=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250206083728-ea43834c198f/go.mod h1:Ik3OBLl7cJxPC+CFyCBYNXBPek4wpdzkWehn/y5qLM8=
github.com/scroll-tech/go-ethereum v1.10.14-0.20250225152658-bcfdb48dd939/go.mod h1:AgU8JJxC7+nfs7R7ma35AU7dMAGW7wCw3dRZRefIKyQ=
github.com/scroll-tech/go-ethereum v1.10.14-0.20251128092359-25d5bf6b817b h1:pMQKnroJoS/FeL1aOWkz7/u1iBHUP8PWjZstNuzoUGE=
github.com/scroll-tech/go-ethereum v1.10.14-0.20251128092359-25d5bf6b817b/go.mod h1:Aa/kD1XB+OV/7rRxMQrjcPCB4b0pKyLH0gsTrtuHi38=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529 h1:nn5Wsu0esKSJiIVhscUtVbo7ada43DJhG55ua/hjS5I=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/segmentio/kafka-go v0.1.0/go.mod h1:X6itGqS9L4jDletMsxZ7Dz+JFWxM6JHfPOCvTvk+EJo=
@@ -1611,6 +1618,7 @@ golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOM
golang.org/x/crypto v0.22.0/go.mod h1:vr6Su+7cTlO45qkww3VDJlzDn0ctJvRgYbC2NvXHt+M=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/crypto v0.26.0/go.mod h1:GY7jblb9wI+FOo5y8/S2oY4zWP07AkOJ4+jxCqdqn54=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/exp v0.0.0-20180321215751-8460e604b9de/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20180807140117-3d87b88a115f/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190125153040-c74c464bbbf2/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
@@ -1632,6 +1640,7 @@ golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTk
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3 h1:XQyxROzUlZH+WIQwySDgnISgOivlhjIEwaQaJEJrrN0=
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de h1:5hukYrvBGR8/eNkX5mdUezrA6JiaEZDtJb9Ei+1LlBs=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f h1:J5lckAjkw6qYlOZNj90mLYNTEKDvWeuc1yieZ8qUzUE=
golang.org/x/lint v0.0.0-20191125180803-fdd1cda4f05f/go.mod h1:5qLYkcX4OjUUV8bRuDixDT3tpyyb+LUpUlRWLxfhWrs=
@@ -1781,6 +1790,7 @@ golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.21.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.24.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.25.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/telemetry v0.0.0-20240228155512-f48c80bd79b2 h1:IRJeR9r1pYWsHKTRe/IInb7lYvbBVIqOgsX/u0mbOWY=
golang.org/x/telemetry v0.0.0-20240228155512-f48c80bd79b2/go.mod h1:TeRTkGYfJXctD9OcfyVLyj2J3IxLnKwHJR8f4D8a3YE=
@@ -1789,6 +1799,7 @@ golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U=
golang.org/x/term v0.15.0/go.mod h1:BDl952bC7+uMoWR75FIrCDx79TPU9oHkTZ9yRbYOrX0=
golang.org/x/term v0.18.0/go.mod h1:ILwASektA3OnRv7amZ1xhE/KTR+u50pbXfZ03+6Nx58=
golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
golang.org/x/term v0.27.0/go.mod h1:iMsnZpn0cago0GOrHO2+Y7u7JPn5AylBrcoWkElMTSM=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=

Some files were not shown because too many files have changed in this diff.