Compare commits

..

15 Commits

Author SHA1 Message Date
Kasey Kirkham
edbf5d11d1 refactor to match updated design 2023-11-02 13:32:55 -05:00
Kasey Kirkham
1148d96313 WIP: changes for 3531 2023-11-01 13:13:11 -05:00
Kasey Kirkham
daf5e9b286 get rid of double referencing in error types 2023-10-30 09:15:51 -05:00
Kasey Kirkham
309a863358 errors.As wants a ref 2023-10-27 17:32:09 -05:00
Kasey Kirkham
15b88ed7f2 drop the cache if the kzg commitment fails 2023-10-27 17:22:16 -05:00
Kasey Kirkham
4b4855d65c return custom missing/mismatch errors; unit tests 2023-10-27 17:10:07 -05:00
Kasey Kirkham
03564ee93b more tests 2023-10-26 19:35:01 -05:00
Kasey Kirkham
96541c2c6c gazelle and test fixes 2023-10-26 18:54:11 -05:00
Kasey Kirkham
0bc55339a7 remove partial saving/checking 2023-10-26 16:44:22 -05:00
Kasey Kirkham
2618852090 lint 2023-10-26 11:07:37 -05:00
Kasey Kirkham
fb3e42c1b3 moving things around and comments 2023-10-26 10:24:03 -05:00
Kasey Kirkham
0e2a9d0d82 simplify 2023-10-26 10:24:03 -05:00
Kasey Kirkham
6573cccf1d significant rewrite for db cache blocking variant 2023-10-26 10:24:03 -05:00
Kasey Kirkham
a89727f38c same cache usable for init and reg sync
the goal isn't for the 2 flavors of sync to share the same cache,
because they have different needs, but for them to follow the same
procedure and share as much code as possible.
2023-10-26 10:24:03 -05:00
Kasey Kirkham
5e1fb15c40 DA check before blob saving for init-sync 2023-10-26 10:24:03 -05:00
487 changed files with 20413 additions and 21637 deletions
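The commit "same cache usable for init and reg sync" above captures the design intent: initial sync and regular sync keep separate caches but run the same data-availability procedure through one shared seam. Below is a minimal sketch of that seam, inferred from the das.AvailabilityStore parameter that onBlockBatch gains further down in this compare; only IsDataAvailable is visible here, so everything else (including the no-op mock the updated tests pass in) is an assumption, not the branch's actual code.

package das

import (
    "context"

    "github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
    "github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
)

// AvailabilityStore is the shared interface both sync flavors call into, as
// used by onBlockBatch later in this compare. Only IsDataAvailable appears in
// the visible diff; any additional methods are assumptions.
type AvailabilityStore interface {
    // IsDataAvailable returns nil once every BlobSidecar committed to by the
    // block is available, or an error describing what is missing.
    IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
}

// MockAvailabilityStore mirrors the value the updated tests pass in: a no-op
// implementation that always reports the data as available.
type MockAvailabilityStore struct{}

func (MockAvailabilityStore) IsDataAvailable(_ context.Context, _ primitives.Slot, _ blocks.ROBlock) error {
    return nil
}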

View File

@@ -27,7 +27,6 @@ build:minimal --@io_bazel_rules_go//go/config:tags=minimal
# Release flags
build:release --compilation_mode=opt
build:release --stamp
build:release --define pgo_enabled=1
# Build binary with cgo symbolizer for debugging / profiling.
build:cgo_symbolizer --copt=-g

View File

@@ -144,11 +144,6 @@ config_setting(
values = {"define": "coverage_enabled=1"},
)
config_setting(
name = "pgo_enabled",
values = {"define": "pgo_enabled=1"},
)
common_files = {
"//:LICENSE.md": "LICENSE.md",
"//:README.md": "README.md",

View File

@@ -1,53 +1,45 @@
# Terms of Use
Effective as of November 2, 2023
## Terms of Use
By downloading, accessing or using the Prysm implementation (“Prysm”), you (referenced herein as “you” or the “user”) certify that you have read and agreed to the terms and conditions below (the “Terms”) which form a binding contract between you and Offchain Labs, Inc. (as successor in interest to Prysmatic Labs LLC) (referenced herein as “Offchain Labs”, “we” or “us”). If you do not agree to the Terms, do not download or use Prysm. Additionally, the Terms of Use available at https://arbitrum.io/tos (or any successor site, the “OCL Terms of Use”) are hereby incorporated by reference into these Terms. In the event of any conflict between provisions set forth herein and those set forth in the OCL Terms of Use, the provisions set forth herein shall control.
Effective as of Oct 14, 2020
## About Prysm
By downloading, accessing or using the Prysm implementation (“Prysm”), you (referenced herein as “you” or the “user”) certify that you have read and agreed to the terms and conditions below (the “Terms”) which form a binding contract between you and Prysmatic Labs (referenced herein as “we” or “us”). If you do not agree to the Terms, do not download or use Prysm.
Prysm is a client implementation for the Ethereum blockchains consensus protocol. To participate in the network, a user must send ETH from the Ethereum mainnet blockchain to a validator deposit smart contract on Ethereum mainnet. Validators participate in proposing and voting on blocks in the protocol, and the network applies rewards/penalties based on their behavior. A detailed set of installation and usage instructions as well as breakdowns of each individual component are available in the official documentation portal, however, we do not warrant the accuracy, completeness or usefulness of this documentation. Any reliance you place on such information is strictly at your own risk.
### About Prysm
Prysm is a client implementation for Ethereum consensus protocol for a proof-of-stake blockchain. To participate in the network, a user must send ETH from the Eth1.0 chain into a validator deposit contract, which will queue in the user as a validator in the system. Validators participate in proposing and voting on blocks in the protocol, and the network applies rewards/penalties based on their behavior. A detailed set of installation and usage instructions as well as breakdowns of each individual component are available in the official documentation portal, however, we do not warrant the accuracy, completeness or usefulness of this documentation. Any reliance you place on such information is strictly at your own risk.
## Licensing Terms
Prysm is an open-source software program licensed pursuant to the GNU General Public License v3.0.
The Offchain Labs name, the term “Prysm” and all related names, logos, product and service names, designs and slogans are trademarks of Offchain Labs or its affiliates and/or licensors. You must not use such marks without our prior written permission.
PLEASE READ THESE TERMS CAREFULLY, AS THE OCL TERMS OF USE INCORPORATED BY REFERENCE HEREIN CONTAIN AN AGREEMENT TO ARBITRATE AND OTHER IMPORTANT INFORMATION REGARDING YOUR LEGAL RIGHTS, REMEDIES, AND OBLIGATIONS. THE AGREEMENT TO ARBITRATE REQUIRES (WITH LIMITED EXCEPTION) THAT YOU SUBMIT CLAIMS YOU HAVE AGAINST US TO BINDING AND FINAL ARBITRATION, AND FURTHER (1) YOU WILL ONLY BE PERMITTED TO PURSUE CLAIMS AGAINST OFFCHAIN LABS ON AN INDIVIDUAL BASIS, NOT AS A PLAINTIFF OR CLASS MEMBER IN ANY CLASS OR REPRESENTATIVE ACTION OR PROCEEDING, (2) YOU WILL ONLY BE PERMITTED TO SEEK RELIEF (INCLUDING MONETARY, INJUNCTIVE, AND DECLARATORY RELIEF) ON AN INDIVIDUAL BASIS, AND (3) YOU MAY NOT BE ABLE TO HAVE ANY CLAIMS YOU HAVE AGAINST US RESOLVED BY A JURY OR IN A COURT OF LAW.
### Licensing Terms
Prysm is a fully open-source software program licensed pursuant to the GNU General Public License v3.0.
## Risks of Operating Prysm
The Prysmatic Labs name, the term “Prysm” and all related names, logos, product and service names, designs and slogans are trademarks of Prysmatic Labs or its affiliates and/or licensors. You must not use such marks without our prior written permission.
The use of Prysm and acting as a validator on the Ethereum network can lead to loss of money, tokens and value. Ethereum is still an experimental system and ETH remains a risky investment. You alone are responsible for your actions on Prysm, including the security of your ETH and meeting any applicable minimum system requirements.
### Risks of Operating Prysm
The use of Prysm and acting as a validator on the Ethereum network can lead to loss of money. Ethereum is still an experimental system and ETH remains a risky investment. You alone are responsible for your actions on Prysm including the security of your ETH and meeting any applicable minimum system requirements.
Use of Prysm and the ability to receive rewards or penalties may be affected at any time by mistakes made by the user or other users, software problems such as bugs, errors, incorrectly constructed transactions, unsafe cryptographic libraries or malware affecting the network, technical failures in the hardware of a user, security problems experienced by a user and/or actions or inactions of third parties and/or events experienced by third parties, among other risks. We cannot and do not guarantee that any user of Prysm will make money, that the Prysm network will operate in accordance with the documentation or that transactions will be effective or secure.
YOU ACKNOWLEDGE THAT WE ARE NOT RESPONSIBLE FOR ANY RISKS ASSOCIATED WITH YOUR USE OF PRYSM, AND CANNOT BE HELD LIABLE FOR ANY RESULTING LOSSES THAT YOU EXPERIENCE WHILE ACCESSING OR USING PRYSM.
We make no claims that Prysm is appropriate or permitted for use in any specific jurisdiction. Access to Prysm may not be legal by certain persons or in certain jurisdictions or countries. If you access Prysm, you do so on your own initiative and are responsible for compliance with local laws.
BY ACCESSING AND USING PRYSM, YOU REPRESENT AND WARRANT THAT YOU UNDERSTAND THE INHERENT RISKS ASSOCIATED WITH USING CRYPTOGRAPHIC AND BLOCKCHAIN-BASED SYSTEMS, AND THAT YOU HAVE A WORKING KNOWLEDGE OF THE USAGE AND INTRICACIES OF DIGITAL ASSETS, SUCH AS THOSE FOLLOWING THE ETHEREUM TOKEN STANDARD (ERC-20). YOU FURTHER UNDERSTAND THAT THE MARKETS FOR DIGITAL ASSETS ARE HIGHLY VOLATILE DUE TO VARIOUS FACTORS, INCLUDING ADOPTION, SPECULATION, TECHNOLOGY, SECURITY, AND REGULATION. YOU ACKNOWLEDGE AND ACCEPT THAT THE COST AND SPEED OF TRANSACTING WITH CRYPTOGRAPHIC AND BLOCKCHAIN-BASED SYSTEMS SUCH AS ETHEREUM ARE VARIABLE AND MAY INCREASE DRAMATICALLY AT ANY TIME. YOU UNDERSTAND THAT ANYONE CAN CREATE A TOKEN, INCLUDING FAKE VERSIONS OF EXISTING TOKENS AND TOKENS THAT FALSELY CLAIM TO REPRESENT PROJECTS, AND ACKNOWLEDGE AND ACCEPT THE RISK THAT YOU MAY MISTAKENLY INTERACT WITH THOSE OR OTHER TOKENS. YOU FURTHER ACKNOWLEDGE THAT WE ARE NOT RESPONSIBLE FOR ANY OF THE VARIABLES OR RISKS DESCRIBED IN THESE TERMS. YOU UNDERSTAND AND AGREE TO ASSUME FULL RESPONSIBILITY FOR ALL OF THE RISKS OF ACCESSING AND USING PRYSM. YOU ARE SOLELY RESPONSIBLE FOR YOUR WALLETS, FOR SAFEGUARDING THE ASSOCIATED PRIVATE KEY AND FOR ANY ACTIVITY THAT OCCURS USING YOUR WALLET. WITHOUT LIMITING THE FOREGOING, YOU ALSO UNDERSTAND THAT THERE MAY BE TAX AND REGULATORY RISKS RELATED TO USING PRYSM. IT IS YOUR SOLE RESPONSIBILITY TO DETERMINE WHETHER, AND TO WHAT EXTENT, ANY TAXES APPLY TO ANY TRANSACTIONS YOU CONDUCT IN CONNECTION WITH YOUR USE OF PRYSM, AND TO WITHHOLD, COLLECT, REPORT AND REMIT THE CORRECT AMOUNTS OF TAXES TO THE APPROPRIATE TAX AUTHORITIES. DIGITAL ASSETS, BLOCKCHAIN TECHNOLOGY, AND ANY RELATED SOFTWARE AND SERVICES ARE ALSO SUBJECT TO LEGAL AND REGULATORY UNCERTAINTY IN THE UNITED STATES AND OTHER JURISDICTIONS. YOU UNDERSTAND THAT LEGISLATIVE AND REGULATORY CHANGES OR ACTIONS MAY ADVERSELY AFFECT THE USAGE, TRANSFERABILITY, TRANSACTABILITY AND ACCESSIBILITY RELATED TO PRYSM.
Some Internet plans will charge an additional amount for any excess upload bandwidth used that isn't included in the plan and may terminate your connection without warning because of overuse. We advise that you check whether your Internet connection is subjected to such limitations and monitor your bandwidth use so that you can stop Prysm before you reach your upload limit.
We make no claims that Prysm is appropriate or permitted for use in any specific jurisdiction. Access to Prysm may not be legal by certain persons or in certain jurisdictions or countries. If you access Prysm, you do so on your own initiative and are responsible for compliance with all Applicable Law (as defined below), including, without limitation, for the avoidance of doubt, local laws.
### Warranty Disclaimer
PRYSM IS PROVIDED ON AN “AS-IS” BASIS AND MAY INCLUDE ERRORS, OMISSIONS, OR OTHER INACCURACIES. PRYSMATIC LABS AND ITS CONTRIBUTORS MAKE NO REPRESENTATIONS OR WARRANTIES ABOUT PRYSM FOR ANY PURPOSE, AND HEREBY EXPRESSLY DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT OR ANY OTHER IMPLIED WARRANTY UNDER THE UNIFORM COMPUTER INFORMATION TRANSACTIONS ACT AS ENACTED BY ANY STATE. WE ALSO MAKE NO REPRESENTATIONS OR WARRANTIES THAT PRYSM WILL OPERATE ERROR-FREE, UNINTERRUPTED, OR IN A MANNER THAT WILL MEET YOUR REQUIREMENTS AND/OR NEEDS. THEREFORE, YOU ASSUME THE ENTIRE RISK REGARDING THE QUALITY AND/OR PERFORMANCE OF PRYSM AND ANY TRANSACTIONS ENTERED INTO THEREON.
Some Internet plans will charge additional amounts for bandwidth or any excess upload bandwidth used that isn't included in the plan and may terminate your connection without warning because of overuse. We advise that you check whether your Internet connection is subjected to any such limitations and monitor your bandwidth use and upload volumes.
### Limitation of Liability
In no event will Prysmatic Labs or any of its contributors be liable, whether in contract, warranty, tort (including negligence, whether active, passive or imputed), product liability, strict liability or other theory, breach of statutory duty or otherwise arising out of, or in connection with, your use of Prysm, for any direct, indirect, incidental, special or consequential damages (including any loss of profits or data, business interruption or other pecuniary loss, or damage, loss or other compromise of data, in each case whether direct, indirect, incidental, special or consequential) arising out of use Prysm, even if we or other users have been advised of the possibility of such damages. The foregoing limitations and disclaimers shall apply to the maximum extent permitted by applicable law, even if any remedy fails of its essential purpose. You acknowledge and agree that the limitations of liability afforded us hereunder constitute a material and actual inducement and condition to entering into these Terms, and are reasonable, fair and equitable in scope to protect our legitimate interests in light of the fact that we are not receiving consideration from you for providing Prysm.
## Warranty Disclaimer
### Indemnification
To the maximum extent permitted by law, you will defend, indemnify and hold Prysmatic Labs and its contributors harmless from and against any and all claims, actions, suits, investigations, or proceedings by any third party (including any party or purported party to or beneficiary or purported beneficiary of any transaction on Prysm), as well as any and all losses, liabilities,
damages, costs, and expenses (including reasonable attorneys' fees) arising out of, accruing from, or in any way related to (i) your breach of the terms of this Agreement, (ii) any transaction, or the failure to occur of any transaction on Prysm, and (iii) your negligence, fraud, or willful misconduct.
PRYSM IS PROVIDED ON AN “AS-IS” BASIS AND MAY INCLUDE ERRORS, OMISSIONS, OR OTHER INACCURACIES. WITHOUT LIMITING ANYTHING SET FORTH ELSEWHERE IN THESE TERMS, OFFCHAIN LABS AND ITS CONTRIBUTORS MAKE NO REPRESENTATIONS OR WARRANTIES ABOUT PRYSM FOR ANY PURPOSE, AND HEREBY EXPRESSLY DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT OR ANY OTHER IMPLIED WARRANTY UNDER THE UNIFORM COMPUTER INFORMATION TRANSACTIONS ACT AS ENACTED BY ANY STATE OR OTHER GOVERNMENTAL AUTHORITY. WE ALSO MAKE NO REPRESENTATIONS OR WARRANTIES THAT PRYSM WILL OPERATE ERROR-FREE, UNINTERRUPTED, OR IN A MANNER THAT WILL MEET YOUR REQUIREMENTS AND/OR NEEDS. THEREFORE, YOU ASSUME THE ENTIRE RISK REGARDING THE QUALITY AND/OR PERFORMANCE OF PRYSM AND ANY TRANSACTIONS ENTERED INTO THEREON.
### Compliance with Laws and Tax Obligations
Your use of Prysm is subject to all applicable laws of any governmental authority, including, without limitation, federal, state and foreign securities laws, tax laws, tariff and trade laws, ordinances, judgments, decrees, injunctions, writs and orders or like actions of any governmental authority and rules, regulations, orders, interpretations, licenses, and permits of any federal,
regional, state, county, municipal or other governmental authority and you agree to comply with all such laws in your use of Prysm. The users of Prysm are solely responsible to determinate what, if any, taxes apply to their ETH transactions. The owners of, or contributors to, Prysm are not responsible for determining the taxes that apply to ETH transactions.
## Limitation of Liability
### Miscellaneous
These Terms will be construed and enforced in accordance with the laws of the state of Illinois as applied to agreements entered into and completely performed in Illinois. You agree to the personal jurisdiction by and venue in Illinois and waive any objection to such jurisdiction or venue.
IN NO EVENT WILL OFFCHAIN LABS OR ANY OF ITS AFFILIATES OR ITS OR ANY SUCH AFFILIATES DIRECTORS, OFFICERS, EMPLOYEES, AGENTS, OR REPRESENTATIVES OR ANY CONTRIBUTORS (COLLECTIVELY, THE “OCL PARTIES”) BE LIABLE, WHETHER IN CONTRACT, WARRANTY, TORT (INCLUDING NEGLIGENCE, WHETHER ACTIVE, PASSIVE OR IMPUTED), PRODUCT LIABILITY, STRICT LIABILITY OR OTHER THEORY, BREACH OF STATUTORY DUTY OR OTHERWISE ARISING OUT OF, OR IN CONNECTION WITH, YOUR USE OF PRYSM, FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL OR CONSEQUENTIAL DAMAGES (INCLUDING ANY LOSS OF PROFITS OR DATA, BUSINESS INTERRUPTION OR OTHER PECUNIARY LOSS, OR DAMAGE, LOSS OR OTHER COMPROMISE OF DATA, IN EACH CASE WHETHER DIRECT, INDIRECT, INCIDENTAL, SPECIAL OR CONSEQUENTIAL) ARISING OUT OF USE PRYSM, EVEN IF WE OR OTHER USERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. The foregoing limitations and disclaimers shall apply to the maximum extent permitted by Applicable Law, even if any remedy fails of its essential purpose. You acknowledge and agree that the limitations of liability afforded us hereunder constitute a material and actual inducement and condition to entering into these Terms, and are reasonable, fair and equitable in scope to protect our legitimate interests in light of the fact that we are not receiving consideration from you for providing Prysm.
We reserve the right to revise these Terms, and your rights and obligations are at all times subject to the then-current Terms provided on Prysm. Your continued use of Prysm constitutes acceptance of such revised Terms.
## Indemnification
To the maximum extent permitted by Applicable Law, you will defend, indemnify and hold each OCL Party harmless from and against any and all claims, actions, suits, investigations, or proceedings by any third party (including any party or purported party to or beneficiary or purported beneficiary of any transaction or other activity on Prysm), as well as any and all losses, liabilities, damages, costs, and expenses (including reasonable attorneys' fees and costs) arising out of, accruing from, or in any way related to (i) your breach of the terms of this Agreement, (ii) any transaction, or the failure to occur of any transaction on Prysm, and (iii) your negligence, fraud, or willful misconduct.
## Compliance with Laws
Your use of Prysm is subject to all applicable laws of any governmental authority, including, without limitation, federal, state and foreign securities laws, tax laws, tariff and trade laws, ordinances, judgments, decrees, injunctions, writs and orders or like actions of any governmental authority and rules, regulations, orders, interpretations, licenses, and permits of any federal, regional, state, county, municipal or other governmental authority (collectively, “Applicable Law”) and you agree to comply with all such Applicable Law in your use of Prysm. The users of Prysm are solely responsible to determinate what, if any, taxes apply to their ETH transactions. The owners of, or contributors to, Prysm are not responsible for determining the taxes that apply to ETH transactions.
## Miscellaneous
These Terms will be governed by the laws of the State of Delaware without regard to its conflict of law provisions. With respect to any disputes or claims not subject to arbitration, as set forth in the OCL Terms of Use, you and Offchain Labs submit to the personal and exclusive jurisdiction of the state and federal courts located within New York, New York and waive any objection to such jurisdiction and venue. The failure of Offchain Labs to exercise or enforce any right or provision of these Terms will not constitute a waiver of such right or provision.
We reserve the right to revise these Terms, and your rights and obligations are at all times subject to the then-current Terms provided on Prysm. Your use of Prysm following any such revision to these Terms constitutes acceptance of such revised Terms.
These Terms constitute the entire agreement between you and Offchain Labs regarding use of Prysm and will supersede all prior agreements whether, written or oral. No usage of trade or other regular practice or method of dealing between the parties will be used to modify, interpret, supplement, or alter the terms of these Terms.
If any portion of these Terms is held invalid or unenforceable, such invalidity or enforceability will not affect the other provisions of these Terms, which will remain in full force and effect, and the invalid or unenforceable portion will be given effect to the greatest extent possible. The failure of a party to require performance of any provision will not affect that party's right to require performance at any time thereafter, nor will a waiver of any breach or default of these Terms or any provision of these Terms constitute a waiver of any subsequent breach or default or a waiver of the provision itself.
These Terms constitute the entire agreement between you and Prysmatic Labs regarding use of Prysm and will supersede all prior agreements whether, written or oral. No usage of trade or other regular practice or method of dealing between the parties will be used to modify, interpret, supplement, or alter the terms of these Terms.
If any portion of these Terms is held invalid or unenforceable, such invalidity or enforceability will not affect the other provisions of these Terms, which will remain in full force and effect, and the invalid or unenforceable portion will be given effect to the greatest extent possible. The failure of a party to require performance of any provision will not affect that party's right to require performance at any time thereafter, nor will a waiver of any breach or default of these Terms or any provision of these Terms constitute a waiver of any subsequent breach or default or a waiver of the provision itself.

View File

@@ -27,23 +27,7 @@ http_archive(
load("@hermetic_cc_toolchain//toolchain:defs.bzl", zig_toolchains = "toolchains")
# Temporarily use a nightly build until 0.12.0 is released.
# See: https://github.com/prysmaticlabs/prysm/issues/13130
zig_toolchains(
host_platform_sha256 = {
"linux-aarch64": "45afb8e32adde825165f4f293fcea9ecea503f7f9ec0e9bf4435afe70e67fb70",
"linux-x86_64": "f136c6a8a0f6adcb057d73615fbcd6f88281b3593f7008d5f7ed514ff925c02e",
"macos-aarch64": "05d995853c05243151deff47b60bdc2674f1e794a939eaeca0f42312da031cee",
"macos-x86_64": "721754ba5a50f31e8a1f0e1a74cace26f8246576878ac4a8591b0ee7b6db1fc1",
"windows-x86_64": "93f5248b2ea8c5ee8175e15b1384e133edc1cd49870b3ea259062a2e04164343",
},
url_formats = [
"https://ziglang.org/builds/zig-{host_platform}-{version}.{_ext}",
"https://mirror.bazel.build/ziglang.org/builds/zig-{host_platform}-{version}.{_ext}",
"https://prysmaticlabs.com/mirror/ziglang.org/builds/zig-{host_platform}-{version}.{_ext}",
],
version = "0.12.0-dev.1349+fa022d1ec",
)
zig_toolchains()
# Register zig sdk toolchains with support for Ubuntu 20.04 (Focal Fossa) which has an EOL date of April, 2025.
# For ubuntu glibc support, see https://launchpad.net/ubuntu/+source/glibc
@@ -263,7 +247,9 @@ filegroup(
url = "https://github.com/ethereum/EIPs/archive/5480440fe51742ed23342b68cf106cefd427e39d.tar.gz",
)
consensus_spec_version = "v1.4.0-beta.3"
consensus_spec_test_version = "v1.4.0-beta.2-hotfix"
consensus_spec_version = "v1.4.0-beta.2"
bls_test_version = "v0.1.1"
@@ -279,8 +265,8 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "67ae5b8fc368853da23d4297e480a4b7f4722fb970d1c7e2b6a5b7faef9cb907",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_version,
sha256 = "99770a001189f66204a4ef79161c8002bcbbcbd8236f1c6479bd5b83a3c68d42",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/general.tar.gz" % consensus_spec_test_version,
)
http_archive(
@@ -295,8 +281,8 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "82474f29fff4abd09fb1e71bafa98827e2573cf0ad02cf119610961831dc3bb5",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_version,
sha256 = "56763f6492ee137108271007d62feef60d8e3f1698e53dee4bc4b07e55f7326b",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/minimal.tar.gz" % consensus_spec_test_version,
)
http_archive(
@@ -311,8 +297,8 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "60e4b6eb6c341daab7ee5614a8e3f28567247c504c593b951bfe919622c8ef8f",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_version,
sha256 = "bc1cac1a991cdc7426efea14385dcf215df85ed3f0572b824ad6a1d7ca0c89ad",
url = "https://github.com/ethereum/consensus-spec-tests/releases/download/%s/mainnet.tar.gz" % consensus_spec_test_version,
)
http_archive(
@@ -326,7 +312,7 @@ filegroup(
visibility = ["//visibility:public"],
)
""",
sha256 = "fdab9756c93a250219ff6a10d5a9faee1e2e6878a14508410409e307362c6991",
sha256 = "c5898001aaab2a5bb38a39ff9d17a52f1f9befcc26e63752cbf556040f0c884e",
strip_prefix = "consensus-specs-" + consensus_spec_version[1:],
url = "https://github.com/ethereum/consensus-specs/archive/refs/tags/%s.tar.gz" % consensus_spec_version,
)

View File

@@ -14,9 +14,7 @@ go_library(
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/rpc/apimiddleware:go_default_library",
"//beacon-chain/rpc/eth/beacon:go_default_library",
"//beacon-chain/rpc/eth/config:go_default_library",
"//beacon-chain/rpc/eth/shared:go_default_library",
"//beacon-chain/rpc/prysm/beacon:go_default_library",
"//beacon-chain/state:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
@@ -24,6 +22,7 @@ go_library(
"//encoding/ssz/detect:go_default_library",
"//io/file:go_default_library",
"//network/forks:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",

View File

@@ -13,17 +13,17 @@ import (
"strconv"
"text/template"
"github.com/prysmaticlabs/prysm/v4/api/client"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/beacon"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
"github.com/prysmaticlabs/prysm/v4/network/forks"
v1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/api/client"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/apimiddleware"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/beacon"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/config"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/shared"
apibeacon "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/prysm/beacon"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/network/forks"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
log "github.com/sirupsen/logrus"
)
@@ -32,7 +32,7 @@ const (
getSignedBlockPath = "/eth/v2/beacon/blocks"
getBlockRootPath = "/eth/v1/beacon/blocks/{{.Id}}/root"
getForkForStatePath = "/eth/v1/beacon/states/{{.Id}}/fork"
getWeakSubjectivityPath = "/prysm/v1/beacon/weak_subjectivity"
getWeakSubjectivityPath = "/eth/v1/beacon/weak_subjectivity"
getForkSchedulePath = "/eth/v1/config/fork_schedule"
getConfigSpecPath = "/eth/v1/config/spec"
getStatePath = "/eth/v2/debug/beacon/states"
@@ -179,12 +179,12 @@ func (c *Client) GetForkSchedule(ctx context.Context) (forks.OrderedSchedule, er
}
// GetConfigSpec retrieve the current configs of the network used by the beacon node.
func (c *Client) GetConfigSpec(ctx context.Context) (*config.GetSpecResponse, error) {
func (c *Client) GetConfigSpec(ctx context.Context) (*v1.SpecResponse, error) {
body, err := c.Get(ctx, getConfigSpecPath)
if err != nil {
return nil, errors.Wrap(err, "error requesting configSpecPath")
}
fsr := &config.GetSpecResponse{}
fsr := &v1.SpecResponse{}
err = json.Unmarshal(body, fsr)
if err != nil {
return nil, err
@@ -259,16 +259,16 @@ func (c *Client) GetWeakSubjectivity(ctx context.Context) (*WeakSubjectivityData
if err != nil {
return nil, err
}
v := &apibeacon.GetWeakSubjectivityResponse{}
v := &apimiddleware.WeakSubjectivityResponse{}
err = json.Unmarshal(body, v)
if err != nil {
return nil, err
}
epoch, err := strconv.ParseUint(v.Data.WsCheckpoint.Epoch, 10, 64)
epoch, err := strconv.ParseUint(v.Data.Checkpoint.Epoch, 10, 64)
if err != nil {
return nil, err
}
blockRoot, err := hexutil.Decode(v.Data.WsCheckpoint.Root)
blockRoot, err := hexutil.Decode(v.Data.Checkpoint.Root)
if err != nil {
return nil, err
}

View File

@@ -20,6 +20,8 @@ go_library(
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//monitoring/tracing:go_default_library",
"//network:go_default_library",
"//network/authorization:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",

View File

@@ -19,6 +19,8 @@ import (
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v4/monitoring/tracing"
"github.com/prysmaticlabs/prysm/v4/network"
"github.com/prysmaticlabs/prysm/v4/network/authorization"
v1 "github.com/prysmaticlabs/prysm/v4/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
@@ -102,7 +104,8 @@ type Client struct {
// `host` is the base host + port used to construct request urls. This value can be
// a URL string, or NewClient will assume an http endpoint if just `host:port` is used.
func NewClient(host string, opts ...ClientOpt) (*Client, error) {
u, err := urlForHost(host)
endpoint := covertEndPoint(host)
u, err := urlForHost(endpoint.Url)
if err != nil {
return nil, err
}
@@ -118,7 +121,8 @@ func NewClient(host string, opts ...ClientOpt) (*Client, error) {
func urlForHost(h string) (*url.URL, error) {
// try to parse as url (being permissive)
if u, err := url.Parse(h); err == nil && u.Host != "" {
u, err := url.Parse(h)
if err == nil && u.Host != "" {
return u, nil
}
// try to parse as host:port
@@ -136,7 +140,7 @@ func (c *Client) NodeURL() string {
type reqOption func(*http.Request)
// do is a generic, opinionated request function to reduce boilerplate amongst the methods in this package api/client/builder.
// do is a generic, opinionated request function to reduce boilerplate amongst the methods in this package api/client/builder/types.go.
func (c *Client) do(ctx context.Context, method string, path string, body io.Reader, opts ...reqOption) (res []byte, err error) {
ctx, span := trace.StartSpan(ctx, "builder.client.do")
defer func() {
@@ -453,3 +457,12 @@ func non200Err(response *http.Response) error {
return errors.Wrap(ErrNotOK, fmt.Sprintf("unsupported error code: %d", response.StatusCode))
}
}
func covertEndPoint(ep string) network.Endpoint {
return network.Endpoint{
Url: ep,
Auth: network.AuthorizationData{ // Auth is not used for builder.
Method: authorization.None,
Value: "",
}}
}
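Per the doc comment above, NewClient accepts either a full URL or a bare host:port, in which case an http endpoint is assumed. A minimal usage sketch under that reading; the address below is a placeholder and the import path is inferred from the "api/client/builder" package referenced in the comments.

package main

import (
    "fmt"
    "log"

    builder "github.com/prysmaticlabs/prysm/v4/api/client/builder"
)

func main() {
    // A bare host:port is accepted; NewClient assumes an http endpoint when no
    // scheme is provided. The address here is purely illustrative.
    c, err := builder.NewClient("localhost:18550")
    if err != nil {
        log.Fatalf("could not construct builder client: %v", err)
    }
    fmt.Println("builder endpoint:", c.NodeURL())
}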

View File

@@ -117,7 +117,7 @@ type VersionResponse struct {
Version string `json:"version"`
}
// ExecHeaderResponse is a JSON representation of the builder API header response for Bellatrix.
// ExecHeaderResponse is a JSON representation of the builder API header response for Bellatrix.
type ExecHeaderResponse struct {
Version string `json:"version"`
Data struct {

View File

@@ -8,6 +8,7 @@ go_library(
deps = [
"//api/client:go_default_library",
"//validator/rpc:go_default_library",
"//validator/rpc/apimiddleware:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -9,6 +9,7 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/api/client"
"github.com/prysmaticlabs/prysm/v4/validator/rpc"
"github.com/prysmaticlabs/prysm/v4/validator/rpc/apimiddleware"
)
const (
@@ -41,14 +42,14 @@ func (c *Client) GetValidatorPubKeys(ctx context.Context) ([]string, error) {
if err != nil {
return nil, err
}
if len(jsonlocal.Data) == 0 && len(jsonremote.Data) == 0 {
if len(jsonlocal.Keystores) == 0 && len(jsonremote.Data) == 0 {
return nil, errors.New("there are no local keys or remote keys on the validator")
}
hexKeys := make(map[string]bool)
for index := range jsonlocal.Data {
hexKeys[jsonlocal.Data[index].ValidatingPubkey] = true
for index := range jsonlocal.Keystores {
hexKeys[jsonlocal.Keystores[index].ValidatingPubkey] = true
}
for index := range jsonremote.Data {
hexKeys[jsonremote.Data[index].Pubkey] = true
@@ -61,12 +62,12 @@ func (c *Client) GetValidatorPubKeys(ctx context.Context) ([]string, error) {
}
// GetLocalValidatorKeys calls the keymanager APIs for local validator keys
func (c *Client) GetLocalValidatorKeys(ctx context.Context) (*rpc.ListKeystoresResponse, error) {
func (c *Client) GetLocalValidatorKeys(ctx context.Context) (*apimiddleware.ListKeystoresResponseJson, error) {
localBytes, err := c.Get(ctx, localKeysPath, client.WithAuthorizationToken(c.Token()))
if err != nil {
return nil, err
}
jsonlocal := &rpc.ListKeystoresResponse{}
jsonlocal := &apimiddleware.ListKeystoresResponseJson{}
if err := json.Unmarshal(localBytes, jsonlocal); err != nil {
return nil, errors.Wrap(err, "failed to parse local keystore list")
}

View File

@@ -24,7 +24,6 @@ go_library(
"@org_golang_google_grpc//:go_default_library",
"@org_golang_google_grpc//connectivity:go_default_library",
"@org_golang_google_grpc//credentials:go_default_library",
"@org_golang_google_grpc//credentials/insecure:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)

View File

@@ -6,6 +6,8 @@ import (
"fmt"
"net"
"net/http"
"path"
"strings"
"time"
"github.com/gorilla/mux"
@@ -17,7 +19,6 @@ import (
"google.golang.org/grpc"
"google.golang.org/grpc/connectivity"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/credentials/insecure"
)
var _ runtime.Service = (*Gateway)(nil)
@@ -177,6 +178,24 @@ func (g *Gateway) corsMiddleware(h http.Handler) http.Handler {
return c.Handler(h)
}
const swaggerDir = "proto/prysm/v1alpha1/"
// SwaggerServer returns swagger specification files located under "/swagger/"
func SwaggerServer() http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if !strings.HasSuffix(r.URL.Path, ".swagger.json") {
log.Debugf("Not found: %s", r.URL.Path)
http.NotFound(w, r)
return
}
log.Debugf("Serving %s\n", r.URL.Path)
p := strings.TrimPrefix(r.URL.Path, "/swagger/")
p = path.Join(swaggerDir, p)
http.ServeFile(w, r, p)
}
}
// dial the gRPC server.
func (g *Gateway) dial(ctx context.Context, network, addr string) (*grpc.ClientConn, error) {
switch network {
@@ -192,21 +211,19 @@ func (g *Gateway) dial(ctx context.Context, network, addr string) (*grpc.ClientC
// dialTCP creates a client connection via TCP.
// "addr" must be a valid TCP address with a port number.
func (g *Gateway) dialTCP(ctx context.Context, addr string) (*grpc.ClientConn, error) {
var security grpc.DialOption
security := grpc.WithInsecure()
if len(g.cfg.remoteCert) > 0 {
creds, err := credentials.NewClientTLSFromFile(g.cfg.remoteCert, "")
if err != nil {
return nil, err
}
security = grpc.WithTransportCredentials(creds)
} else {
// Use insecure credentials when there's no remote cert provided.
security = grpc.WithTransportCredentials(insecure.NewCredentials())
}
opts := []grpc.DialOption{
security,
grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(int(g.cfg.maxCallRecvMsgSize))),
}
return grpc.DialContext(ctx, addr, opts...)
}
@@ -223,7 +240,7 @@ func (g *Gateway) dialUnix(ctx context.Context, addr string) (*grpc.ClientConn,
return d(addr, 0)
}
opts := []grpc.DialOption{
grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithInsecure(),
grpc.WithContextDialer(f),
grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(int(g.cfg.maxCallRecvMsgSize))),
}
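The SwaggerServer handler added above serves only paths ending in .swagger.json from swaggerDir, under the "/swagger/" prefix, and returns 404 for anything else. A hedged sketch of mounting it on the gorilla/mux router this file already imports; the gateway import path is an assumption, not something this diff shows.

package main

import (
    "log"
    "net/http"

    "github.com/gorilla/mux"
    // Assumed import path for the gateway package shown in the diff above.
    "github.com/prysmaticlabs/prysm/v4/api/gateway"
)

func main() {
    r := mux.NewRouter()
    // SwaggerServer returns an http.HandlerFunc that serves *.swagger.json
    // files out of the swaggerDir constant; other paths get a 404.
    r.PathPrefix("/swagger/").Handler(gateway.SwaggerServer())
    log.Fatal(http.ListenAndServe(":8080", r))
}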

View File

@@ -10,10 +10,7 @@ go_library(
],
importpath = "github.com/prysmaticlabs/prysm/v4/async",
visibility = ["//visibility:public"],
deps = [
"//time/slots:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
deps = ["@com_github_sirupsen_logrus//:go_default_library"],
)
go_test(
@@ -27,7 +24,6 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//config/params:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",

View File

@@ -7,7 +7,6 @@ import (
"runtime"
"time"
"github.com/prysmaticlabs/prysm/v4/time/slots"
log "github.com/sirupsen/logrus"
)
@@ -30,21 +29,3 @@ func RunEvery(ctx context.Context, period time.Duration, f func()) {
}
}()
}
func RunWithTickerAndInterval(ctx context.Context, genesis time.Time, intervals []time.Duration, f func()) {
funcName := runtime.FuncForPC(reflect.ValueOf(f).Pointer()).Name()
ticker := slots.NewSlotTickerWithIntervals(genesis, intervals)
go func() {
for {
select {
case <-ticker.C():
log.WithField("function", funcName).Trace("running")
f()
case <-ctx.Done():
log.WithField("function", funcName).Debug("context is closed, exiting")
ticker.Done()
return
}
}
}()
}

View File

@@ -7,7 +7,6 @@ import (
"time"
"github.com/prysmaticlabs/prysm/v4/async"
"github.com/prysmaticlabs/prysm/v4/config/params"
)
func TestEveryRuns(t *testing.T) {
@@ -39,46 +38,3 @@ func TestEveryRuns(t *testing.T) {
t.Error("Counter incremented after stop")
}
}
func TestEveryRunsWithTickerAndInterval(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
params.SetupTestConfigCleanup(t)
newCfg := params.BeaconConfig()
newCfg.SecondsPerSlot = 1
params.OverrideBeaconConfig(newCfg)
i := int32(0)
genTime := time.Now()
async.RunWithTickerAndInterval(ctx, genTime, []time.Duration{100 * time.Millisecond, 200 * time.Millisecond}, func() {
atomic.AddInt32(&i, 1)
})
// Sleep for a bit and ensure the value has increased.
time.Sleep(1150 * time.Millisecond)
if atomic.LoadInt32(&i) != 1 {
t.Errorf("Counter failed to increment with ticker: Got %d instead of %d", atomic.LoadInt32(&i), 1)
}
// Sleep for a bit and ensure the value has increased.
time.Sleep(200 * time.Millisecond)
if atomic.LoadInt32(&i) != 2 {
t.Errorf("Counter failed to increment with ticker: Got %d instead of %d", atomic.LoadInt32(&i), 2)
}
cancel()
// Sleep for a bit to let the cancel take place.
time.Sleep(100 * time.Millisecond)
last := atomic.LoadInt32(&i)
// Sleep for a bit and ensure the value has not increased.
time.Sleep(200 * time.Millisecond)
if atomic.LoadInt32(&i) != last {
t.Error("Counter incremented after stop")
}
}

beacon-chain/BUILD.bazel (new file, 19 lines added)
View File

@@ -0,0 +1,19 @@
load("//tools:target_migration.bzl", "moved_targets")
moved_targets(
[
":push_images_debug",
":push_images_alpine",
":push_images",
":image_bundle_debug",
":image_debug",
":image_bundle_alpine",
":image_bundle",
":image_with_creation_time",
":image_alpine",
":image",
":go_default_test",
":beacon-chain",
],
"//cmd/beacon-chain",
)

View File

@@ -49,8 +49,8 @@ go_library(
"//beacon-chain/core/signing:go_default_library",
"//beacon-chain/core/time:go_default_library",
"//beacon-chain/core/transition:go_default_library",
"//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/db/filters:go_default_library",
"//beacon-chain/db/kv:go_default_library",
"//beacon-chain/execution:go_default_library",
@@ -70,7 +70,6 @@ go_library(
"//config/params:go_default_library",
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/forkchoice:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/payload-attribute:go_default_library",
"//consensus-types/primitives:go_default_library",

View File

@@ -12,10 +12,10 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/forkchoice"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpbv1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"go.opencensus.io/trace"
@@ -45,7 +45,7 @@ type ForkchoiceFetcher interface {
HighestReceivedBlockSlot() primitives.Slot
ReceivedBlocksLastEpoch() (uint64, error)
InsertNode(context.Context, state.BeaconState, [32]byte) error
ForkChoiceDump(context.Context) (*forkchoice.Dump, error)
ForkChoiceDump(context.Context) (*ethpbv1.ForkChoiceDump, error)
NewSlot(context.Context, primitives.Slot) error
ProposerBoost() [32]byte
}
@@ -530,10 +530,3 @@ func (s *Service) recoverStateSummary(ctx context.Context, blockRoot [32]byte) (
func (s *Service) BlockBeingSynced(root [32]byte) bool {
return s.blockBeingSynced.isSyncing(root)
}
// RecentBlockSlot returns block slot form fork choice store
func (s *Service) RecentBlockSlot(root [32]byte) (primitives.Slot, error) {
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.Slot(root)
}

View File

@@ -4,8 +4,8 @@ import (
"context"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/consensus-types/forkchoice"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
ethpbv1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
)
// CachedHeadRoot returns the corresponding value from Forkchoice
@@ -51,7 +51,7 @@ func (s *Service) InsertNode(ctx context.Context, st state.BeaconState, root [32
}
// ForkChoiceDump returns the corresponding value from forkchoice
func (s *Service) ForkChoiceDump(ctx context.Context) (*forkchoice.Dump, error) {
func (s *Service) ForkChoiceDump(ctx context.Context) (*ethpbv1.ForkChoiceDump, error) {
s.cfg.ForkChoiceStore.RLock()
defer s.cfg.ForkChoiceStore.RUnlock()
return s.cfg.ForkChoiceStore.ForkChoiceDump(ctx)

View File

@@ -30,8 +30,6 @@ var (
ErrNotCheckpoint = errors.New("not a checkpoint in forkchoice")
)
var errMaxBlobsExceeded = errors.New("Expected commitments in block exceeds MAX_BLOBS_PER_BLOCK")
// An invalid block is the block that fails state transition based on the core protocol rules.
// The beacon node shall not be accepting nor building blocks that branch off from an invalid block.
// Some examples of invalid blocks are:

View File

@@ -157,11 +157,6 @@ func (s *Service) notifyForkchoiceUpdate(ctx context.Context, arg *notifyForkcho
if hasAttr && payloadID != nil {
var pId [8]byte
copy(pId[:], payloadID[:])
log.WithFields(logrus.Fields{
"blockRoot": fmt.Sprintf("%#x", bytesutil.Trunc(arg.headRoot[:])),
"headSlot": headBlk.Slot(),
"payloadID": fmt.Sprintf("%#x", bytesutil.Trunc(payloadID[:])),
}).Info("Forkchoice updated with payload attributes for proposal")
s.cfg.ProposerSlotIndexCache.SetProposerAndPayloadIDs(nextSlot, proposerId, pId, arg.headRoot)
} else if hasAttr && payloadID == nil && !features.Get().PrepareAllPayloads {
log.WithFields(logrus.Fields{

File diff suppressed because one or more lines are too long

View File

@@ -2,8 +2,10 @@ package kzg
import (
"fmt"
"strings"
GoKZG "github.com/crate-crypto/go-kzg-4844"
"github.com/pkg/errors"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
)
@@ -12,7 +14,7 @@ import (
// - Expected KZG commitments match the number of blobs in the block
// - That the number of proofs match the number of blobs
// - That the proofs are verified against the KZG commitments
func IsDataAvailable(commitments [][]byte, sidecars []*ethpb.DeprecatedBlobSidecar) error {
func IsDataAvailable(commitments [][]byte, sidecars []*ethpb.BlobSidecar) error {
if len(commitments) != len(sidecars) {
return fmt.Errorf("could not check data availability, expected %d commitments, obtained %d",
len(commitments), len(sidecars))
@@ -31,6 +33,61 @@ func IsDataAvailable(commitments [][]byte, sidecars []*ethpb.DeprecatedBlobSidec
return kzgContext.VerifyBlobKZGProofBatch(blobs, cmts, proofs)
}
var ErrKzgProofFailed = errors.New("failed to prove commitment to BlobSidecar Blob data")
type KzgProofError struct {
failed [][48]byte
}
func NewKzgProofError(failed [][48]byte) *KzgProofError {
return &KzgProofError{failed: failed}
}
func (e *KzgProofError) Error() string {
cmts := make([]string, len(e.failed))
for i := range e.failed {
cmts[i] = fmt.Sprintf("%#x", e.failed[i])
}
return fmt.Sprintf("%s: bad commitments=%s", ErrKzgProofFailed.Error(), strings.Join(cmts, ","))
}
func (e *KzgProofError) Failed() [][48]byte {
return e.failed
}
func (e *KzgProofError) Unwrap() error {
return ErrKzgProofFailed
}
// BisectBlobSidecarKzgProofs tries to batch prove the given sidecars against their own specified commitment.
// The caller is responsible for ensuring that the commitments match those specified by the block.
// If the batch fails, it will then try to verify the proofs one-by-one.
// If an error is returned, it will be a custom error of type KzgProofError that provides access
// to the list of commitments that failed.
func BisectBlobSidecarKzgProofs(sidecars []*ethpb.BlobSidecar) error {
if len(sidecars) == 0 {
return nil
}
blobs := make([]GoKZG.Blob, len(sidecars))
cmts := make([]GoKZG.KZGCommitment, len(sidecars))
proofs := make([]GoKZG.KZGProof, len(sidecars))
for i, sidecar := range sidecars {
blobs[i] = bytesToBlob(sidecar.Blob)
cmts[i] = bytesToCommitment(sidecar.KzgCommitment)
proofs[i] = bytesToKZGProof(sidecar.KzgProof)
}
if err := kzgContext.VerifyBlobKZGProofBatch(blobs, cmts, proofs); err == nil {
return nil
}
failed := make([][48]byte, 0, len(blobs))
for i := range blobs {
if err := kzgContext.VerifyBlobKZGProof(blobs[i], cmts[i], proofs[i]); err != nil {
failed = append(failed, cmts[i])
}
}
return NewKzgProofError(failed)
}
func bytesToBlob(blob []byte) (ret GoKZG.Blob) {
copy(ret[:], blob)
return
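BisectBlobSidecarKzgProofs reports failures as a *KzgProofError, and the commit "errors.As wants a ref" above is a reminder that errors.As needs a pointer to the concrete type. A brief hedged sketch of a caller inspecting which commitments failed; the kzg and proto import paths are taken from imports that appear elsewhere in this compare.

package blobverify

import (
    "errors"
    "log"

    "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/kzg"
    ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
)

func verifySidecars(sidecars []*ethpb.BlobSidecar) {
    if err := kzg.BisectBlobSidecarKzgProofs(sidecars); err != nil {
        // errors.As wants a ref: pass a pointer to the concrete error type.
        var proofErr *kzg.KzgProofError
        if errors.As(err, &proofErr) {
            for _, cmt := range proofErr.Failed() {
                log.Printf("commitment failed KZG proof verification: %#x", cmt)
            }
        }
        // errors.Is also works, since KzgProofError.Unwrap returns ErrKzgProofFailed.
        if errors.Is(err, kzg.ErrKzgProofFailed) {
            log.Println("at least one blob sidecar failed its KZG proof")
        }
    }
}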

View File

@@ -9,7 +9,7 @@ import (
)
func TestIsDataAvailable(t *testing.T) {
sidecars := make([]*ethpb.DeprecatedBlobSidecar, 0)
sidecars := make([]*ethpb.BlobSidecar, 0)
commitments := make([][]byte, 0)
require.NoError(t, IsDataAvailable(commitments, sidecars))
}

View File

@@ -147,3 +147,15 @@ func logPayload(block interfaces.ReadOnlyBeaconBlock) error {
log.WithFields(fields).Debug("Synced new payload")
return nil
}
func logBlobSidecar(scs []*ethpb.BlobSidecar, startTime time.Time) {
if len(scs) == 0 {
return
}
log.WithFields(logrus.Fields{
"count": len(scs),
"slot": scs[0].Slot,
"block": hex.EncodeToString(scs[0].BlockRoot),
"validationTime": time.Since(startTime),
}).Debug("Synced new blob sidecars")
}

View File

@@ -5,7 +5,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/execution"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/operations/attestations"
@@ -165,8 +164,6 @@ func WithFinalizedStateAtStartUp(st state.BeaconState) Option {
}
}
// WithClockSychronizer sets the ClockSetter/ClockWaiter values to be used by services that need to block until
// the genesis timestamp is known (ClockWaiter) or which determine the genesis timestamp (ClockSetter).
func WithClockSynchronizer(gs *startup.ClockSynchronizer) Option {
return func(s *Service) error {
s.clockSetter = gs
@@ -175,18 +172,9 @@ func WithClockSynchronizer(gs *startup.ClockSynchronizer) Option {
}
}
// WithSyncComplete sets a channel that is used to notify blockchain service that the node has synced to head.
func WithSyncComplete(c chan struct{}) Option {
return func(s *Service) error {
s.syncComplete = c
return nil
}
}
// WithBlobStorage sets the blob storage backend for the blockchain service.
func WithBlobStorage(b *filesystem.BlobStorage) Option {
return func(s *Service) error {
s.blobStorage = b
return nil
}
}

View File

@@ -2,7 +2,6 @@ package blockchain
import (
"context"
"sync"
"testing"
"time"
@@ -146,61 +145,6 @@ func TestStore_OnAttestation_Ok_DoublyLinkedTree(t *testing.T) {
require.NoError(t, service.OnAttestation(ctx, att[0], 0))
}
func TestService_GetAttPreState_Concurrency(t *testing.T) {
service, _ := minimalTestService(t)
ctx := context.Background()
s, err := util.NewBeaconState()
require.NoError(t, err)
ckRoot := bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength)
err = s.SetFinalizedCheckpoint(&ethpb.Checkpoint{Root: ckRoot})
require.NoError(t, err)
val := &ethpb.Validator{PublicKey: bytesutil.PadTo([]byte("foo"), 48), WithdrawalCredentials: bytesutil.PadTo([]byte("bar"), fieldparams.RootLength)}
err = s.SetValidators([]*ethpb.Validator{val})
require.NoError(t, err)
err = s.SetBalances([]uint64{0})
require.NoError(t, err)
r := [32]byte{'g'}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, r))
cp1 := &ethpb.Checkpoint{Epoch: 1, Root: ckRoot}
require.NoError(t, service.cfg.BeaconDB.SaveState(ctx, s, bytesutil.ToBytes32([]byte{'A'})))
require.NoError(t, service.cfg.BeaconDB.SaveStateSummary(ctx, &ethpb.StateSummary{Root: ckRoot}))
st, root, err := prepareForkchoiceState(ctx, 100, [32]byte(cp1.Root), [32]byte{}, [32]byte{'R'}, cp1, cp1)
require.NoError(t, err)
require.NoError(t, service.cfg.ForkChoiceStore.InsertNode(ctx, st, root))
var wg sync.WaitGroup
errChan := make(chan error, 1000)
for i := 0; i < 1000; i++ {
wg.Add(1)
go func() {
defer wg.Done()
cp1 := &ethpb.Checkpoint{Epoch: 1, Root: ckRoot}
_, err := service.getAttPreState(ctx, cp1)
if err != nil {
errChan <- err
}
}()
}
go func() {
wg.Wait()
close(errChan)
}()
select {
case <-time.After(10 * time.Second):
t.Fatal("Test timed out")
case err, ok := <-errChan:
if ok && err != nil {
require.ErrorContains(t, "not a checkpoint in forkchoice", err)
}
}
}
func TestStore_SaveCheckpointState(t *testing.T) {
service, tr := minimalTestService(t)
ctx := tr.ctx

View File

@@ -6,17 +6,18 @@ import (
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
coreTime "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
consensusblocks "github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
@@ -162,7 +163,7 @@ func getStateVersionAndPayload(st state.BeaconState) (int, interfaces.ExecutionD
return preStateVersion, preStateHeader, nil
}
func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlock) error {
func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlock, avs das.AvailabilityStore) error {
ctx, span := trace.StartSpan(ctx, "blockChain.onBlockBatch")
defer span.End()
@@ -265,7 +266,7 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return err
}
}
if err := s.databaseDACheck(ctx, b); err != nil {
if err := avs.IsDataAvailable(ctx, s.CurrentSlot(), b); err != nil {
return errors.Wrap(err, "could not validate blob data availability")
}
args := &forkchoicetypes.BlockAndCheckpoints{Block: b.Block(),
@@ -333,37 +334,6 @@ func (s *Service) onBlockBatch(ctx context.Context, blks []consensusblocks.ROBlo
return s.saveHeadNoDB(ctx, lastB, lastBR, preState, !isValidPayload)
}
func commitmentsToCheck(b consensusblocks.ROBlock, current primitives.Slot) [][]byte {
if b.Version() < version.Deneb {
return nil
}
// We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
if !params.WithinDAPeriod(slots.ToEpoch(b.Block().Slot()), slots.ToEpoch(current)) {
return nil
}
kzgCommitments, err := b.Block().Body().BlobKzgCommitments()
if err != nil {
return nil
}
return kzgCommitments
}
func (s *Service) databaseDACheck(ctx context.Context, b consensusblocks.ROBlock) error {
commitments := commitmentsToCheck(b, s.CurrentSlot())
if len(commitments) == 0 {
return nil
}
missing, err := missingIndices(s.blobStorage, b.Root(), commitments)
if err != nil {
return err
}
if len(missing) == 0 {
return nil
}
// TODO: don't worry that this error isn't informative, it will be superceded by a detailed sidecar cache error.
return errors.New("not all kzg commitments are available")
}
func (s *Service) updateEpochBoundaryCaches(ctx context.Context, st state.BeaconState) error {
e := coreTime.CurrentEpoch(st)
if err := helpers.UpdateCommitteeCache(ctx, st, e); err != nil {
@@ -533,43 +503,11 @@ func (s *Service) runLateBlockTasks() {
}
}
// missingIndices uses the expected commitments from the block to determine
// which BlobSidecar indices would need to be in the database for DA success.
// It returns a map where each key represents a missing BlobSidecar index.
// An empty map means we have all indices; a non-empty map can be used to compare incoming
// BlobSidecars against the set of known missing sidecars.
func missingIndices(bs *filesystem.BlobStorage, root [32]byte, expected [][]byte) (map[uint64]struct{}, error) {
if len(expected) == 0 {
return nil, nil
}
if len(expected) > fieldparams.MaxBlobsPerBlock {
return nil, errMaxBlobsExceeded
}
indices, err := bs.Indices(root)
if err != nil {
return nil, err
}
missing := make(map[uint64]struct{}, len(expected))
for i := range expected {
ui := uint64(i)
if len(expected[i]) > 0 {
if !indices[i] {
missing[ui] = struct{}{}
}
}
}
return missing, nil
}
// isDataAvailable blocks until all BlobSidecars committed to in the block are available,
// or an error or context cancellation occurs. A nil result means that the data availability check is successful.
// The function will first check the database to see if all sidecars have been persisted. If any
// sidecars are missing, it will then read from the blobNotifier channel for the given root until the channel is
// closed, the context hits cancellation/timeout, or notifications have been received for all the missing sidecars.
func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed interfaces.ReadOnlySignedBeaconBlock) error {
if signed.Version() < version.Deneb {
return nil
}
t := time.Now()
block := signed.Block()
if block == nil {
@@ -588,35 +526,59 @@ func (s *Service) isDataAvailable(ctx context.Context, root [32]byte, signed int
if err != nil {
return errors.Wrap(err, "could not get KZG commitments")
}
// expected is the number of kzg commitments observed in the block.
expected := len(kzgCommitments)
if expected == 0 {
return nil
}
// get a map of BlobSidecar indices that are not currently available.
missing, err := missingIndices(s.blobStorage, root, kzgCommitments)
if err != nil {
return err
}
// If there are no missing indices, all BlobSidecars are available.
if len(missing) == 0 {
return nil
// Read first from db in case we have the blobs
sidecars, err := s.cfg.BeaconDB.BlobSidecarsByRoot(ctx, root)
switch {
case err == nil:
if len(sidecars) >= expected {
s.blobNotifiers.delete(root)
if err := kzg.IsDataAvailable(kzgCommitments, sidecars); err != nil {
log.WithField("root", fmt.Sprintf("%#x", root)).Warn("removing blob sidecars with invalid proofs")
if err2 := s.cfg.BeaconDB.DeleteBlobSidecars(ctx, root); err2 != nil {
log.WithError(err2).Error("could not delete sidecars")
}
return err
}
logBlobSidecar(sidecars, t)
return nil
}
case errors.Is(err, db.ErrNotFound):
// If the blob sidecars haven't arrived yet, the subsequent code will wait for them.
// Note: The system will not exit with an error in this scenario.
default:
log.WithError(err).Error("could not get blob sidecars from DB")
}
// The gossip handler for blobs writes the index of each verified blob referencing the given
// root to the channel returned by blobNotifiers.forRoot.
found := map[uint64]struct{}{}
for _, sc := range sidecars {
found[sc.Index] = struct{}{}
}
nc := s.blobNotifiers.forRoot(root)
for {
select {
case idx := <-nc:
// Delete each index seen in the notification channel.
delete(missing, idx)
// Read from the channel until there are no more missing sidecars.
if len(missing) > 0 {
found[idx] = struct{}{}
if len(found) != expected {
continue
}
// Once all sidecars have been observed, clean up the notification channel.
s.blobNotifiers.delete(root)
sidecars, err := s.cfg.BeaconDB.BlobSidecarsByRoot(ctx, root)
if err != nil {
return errors.Wrap(err, "could not get blob sidecars")
}
if err := kzg.IsDataAvailable(kzgCommitments, sidecars); err != nil {
log.WithField("root", fmt.Sprintf("%#x", root)).Warn("removing blob sidecars with invalid proofs")
if err2 := s.cfg.BeaconDB.DeleteBlobSidecars(ctx, root); err2 != nil {
log.WithError(err2).Error("could not delete sidecars")
}
return err
}
logBlobSidecar(sidecars, t)
return nil
case <-ctx.Done():
return errors.Wrap(ctx.Err(), "context deadline waiting for blob sidecars")

View File

@@ -1,7 +1,6 @@
package blockchain
import (
"bytes"
"context"
"fmt"
"math/big"
@@ -17,8 +16,8 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/signing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem"
testDB "github.com/prysmaticlabs/prysm/v4/beacon-chain/db/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/execution"
mockExecution "github.com/prysmaticlabs/prysm/v4/beacon-chain/execution/testing"
@@ -40,7 +39,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
prysmTime "github.com/prysmaticlabs/prysm/v4/time"
"github.com/prysmaticlabs/prysm/v4/time/slots"
logTest "github.com/sirupsen/logrus/hooks/test"
)
@@ -69,7 +67,7 @@ func TestStore_OnBlockBatch(t *testing.T) {
require.NoError(t, err)
blks = append(blks, rwsb)
}
err := service.onBlockBatch(ctx, blks)
err := service.onBlockBatch(ctx, blks, &das.MockAvailabilityStore{})
require.NoError(t, err)
jcp := service.CurrentJustifiedCheckpt()
jroot := bytesutil.ToBytes32(jcp.Root)
@@ -99,7 +97,7 @@ func TestStore_OnBlockBatch_NotifyNewPayload(t *testing.T) {
require.NoError(t, service.saveInitSyncBlock(ctx, rwsb.Root(), wsb))
blks = append(blks, rwsb)
}
require.NoError(t, service.onBlockBatch(ctx, blks))
require.NoError(t, service.onBlockBatch(ctx, blks, &das.MockAvailabilityStore{}))
}
func TestCachedPreState_CanGetFromStateSummary(t *testing.T) {
@@ -1946,7 +1944,7 @@ func TestNoViableHead_Reboot(t *testing.T) {
rwsb, err := consensusblocks.NewROBlock(wsb)
require.NoError(t, err)
// We use onBlockBatch here because the valid chain is missing in forkchoice
require.NoError(t, service.onBlockBatch(ctx, []consensusblocks.ROBlock{rwsb}))
require.NoError(t, service.onBlockBatch(ctx, []consensusblocks.ROBlock{rwsb}, &das.MockAvailabilityStore{}))
// Check that the head is now VALID and the node is not optimistic
require.Equal(t, genesisRoot, service.ensureRootNotZeros(service.cfg.ForkChoiceStore.CachedHeadRoot()))
headRoot, err = service.HeadRoot(ctx)
@@ -2048,167 +2046,3 @@ func driftGenesisTime(s *Service, slot, delay int64) {
offset := slot*int64(params.BeaconConfig().SecondsPerSlot) - delay
s.SetGenesisTime(time.Unix(time.Now().Unix()-offset, 0))
}
func Test_commitmentsToCheck(t *testing.T) {
windowSlots, err := slots.EpochEnd(params.BeaconNetworkConfig().MinEpochsForBlobsSidecarsRequest)
require.NoError(t, err)
commits := [][]byte{
bytesutil.PadTo([]byte("a"), 48),
bytesutil.PadTo([]byte("b"), 48),
bytesutil.PadTo([]byte("c"), 48),
bytesutil.PadTo([]byte("d"), 48),
}
cases := []struct {
name string
commits [][]byte
block func(*testing.T) consensusblocks.ROBlock
slot primitives.Slot
}{
{
name: "pre deneb",
block: func(t *testing.T) consensusblocks.ROBlock {
bb := util.NewBeaconBlockBellatrix()
sb, err := consensusblocks.NewSignedBeaconBlock(bb)
require.NoError(t, err)
rb, err := consensusblocks.NewROBlock(sb)
require.NoError(t, err)
return rb
},
},
{
name: "commitments within da",
block: func(t *testing.T) consensusblocks.ROBlock {
d := util.NewBeaconBlockDeneb()
d.Block.Body.BlobKzgCommitments = commits
d.Block.Slot = 100
sb, err := consensusblocks.NewSignedBeaconBlock(d)
require.NoError(t, err)
rb, err := consensusblocks.NewROBlock(sb)
require.NoError(t, err)
return rb
},
commits: commits,
slot: 100,
},
{
name: "commitments outside da",
block: func(t *testing.T) consensusblocks.ROBlock {
d := util.NewBeaconBlockDeneb()
// block is from slot 0, "current slot" is window size +1 (so outside the window)
d.Block.Body.BlobKzgCommitments = commits
sb, err := consensusblocks.NewSignedBeaconBlock(d)
require.NoError(t, err)
rb, err := consensusblocks.NewROBlock(sb)
require.NoError(t, err)
return rb
},
slot: windowSlots + 1,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
b := c.block(t)
co := commitmentsToCheck(b, c.slot)
require.Equal(t, len(c.commits), len(co))
for i := 0; i < len(c.commits); i++ {
require.Equal(t, true, bytes.Equal(c.commits[i], co[i]))
}
})
}
}
func TestMissingIndices(t *testing.T) {
cases := []struct {
name string
expected [][]byte
present []uint64
result map[uint64]struct{}
root [32]byte
err error
}{
{
name: "zero len",
},
{
name: "expected exceeds max",
expected: fakeCommitments(fieldparams.MaxBlobsPerBlock + 1),
err: errMaxBlobsExceeded,
},
{
name: "first missing",
expected: fakeCommitments(fieldparams.MaxBlobsPerBlock),
present: []uint64{1, 2, 3, 4, 5},
result: fakeResult([]uint64{0}),
},
{
name: "all missing",
expected: fakeCommitments(fieldparams.MaxBlobsPerBlock),
result: fakeResult([]uint64{0, 1, 2, 3, 4, 5}),
},
{
name: "none missing",
expected: fakeCommitments(fieldparams.MaxBlobsPerBlock),
present: []uint64{0, 1, 2, 3, 4, 5},
result: fakeResult([]uint64{}),
},
{
name: "one commitment, missing",
expected: fakeCommitments(1),
present: []uint64{},
result: fakeResult([]uint64{0}),
},
{
name: "3 commitments, 1 missing",
expected: fakeCommitments(3),
present: []uint64{1},
result: fakeResult([]uint64{0, 2}),
},
{
name: "3 commitments, none missing",
expected: fakeCommitments(3),
present: []uint64{0, 1, 2},
result: fakeResult([]uint64{}),
},
{
name: "3 commitments, all missing",
expected: fakeCommitments(3),
present: []uint64{},
result: fakeResult([]uint64{0, 1, 2}),
},
}
for _, c := range cases {
bm, bs := filesystem.NewEphemeralBlobStorageWithMocker(t)
t.Run(c.name, func(t *testing.T) {
require.NoError(t, bm.CreateFakeIndices(c.root, c.present))
missing, err := missingIndices(bs, c.root, c.expected)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return
}
require.NoError(t, err)
require.Equal(t, len(c.result), len(missing))
for key := range c.result {
m, ok := missing[key]
require.Equal(t, true, ok)
require.Equal(t, c.result[key], m)
}
})
}
}
func fakeCommitments(n int) [][]byte {
f := make([][]byte, n)
for i := range f {
f[i] = make([]byte, 48)
}
return f
}
func fakeResult(missing []uint64) map[uint64]struct{} {
r := make(map[uint64]struct{}, len(missing))
for i := range missing {
r[missing[i]] = struct{}{}
}
return r
}

View File

@@ -3,21 +3,21 @@ package blockchain
import (
"context"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
)
// sendNewBlobEvent sends a message to the BlobNotifier channel that the blob
// for the block root `root` is ready in the database.
func (s *Service) sendNewBlobEvent(root [32]byte, index uint64) {
s.blobNotifiers.notifyIndex(root, index)
s.blobNotifiers.forRoot(root) <- index
}
// ReceiveBlob saves the blob to database and sends the new event
func (s *Service) ReceiveBlob(ctx context.Context, b blocks.VerifiedROBlob) error {
if err := s.blobStorage.Save(b); err != nil {
func (s *Service) ReceiveBlob(ctx context.Context, b *ethpb.BlobSidecar) error {
if err := s.cfg.BeaconDB.SaveBlobSidecar(ctx, []*ethpb.BlobSidecar{b}); err != nil {
return err
}
s.sendNewBlobEvent(b.BlockRoot(), b.Index)
s.sendNewBlobEvent([32]byte(b.BlockRoot), b.Index)
return nil
}

View File

@@ -12,6 +12,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
coreTime "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v4/config/features"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
@@ -34,7 +35,7 @@ var epochsSinceFinalitySaveHotStateDB = primitives.Epoch(100)
// BlockReceiver interface defines the methods of chain service for receiving and processing new blocks.
type BlockReceiver interface {
ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySignedBeaconBlock, blockRoot [32]byte) error
ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock) error
ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityStore) error
HasBlock(ctx context.Context, root [32]byte) bool
RecentBlockSlot(root [32]byte) (primitives.Slot, error)
BlockBeingSynced([32]byte) bool
@@ -43,7 +44,7 @@ type BlockReceiver interface {
// BlobReceiver interface defines the methods of chain service for receiving new
// blobs
type BlobReceiver interface {
ReceiveBlob(context.Context, blocks.VerifiedROBlob) error
ReceiveBlob(context.Context, *ethpb.BlobSidecar) error
}
// SlashingReceiver interface defines the methods of chain service for receiving validated slashing over the wire.
@@ -191,7 +192,7 @@ func (s *Service) ReceiveBlock(ctx context.Context, block interfaces.ReadOnlySig
// ReceiveBlockBatch processes the whole block batch at once, assuming the block batch is linear, transitioning
// the state, performing batch verification of all collected signatures and then performing the appropriate
// actions for a block post-transition.
func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock) error {
func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock, avs das.AvailabilityStore) error {
ctx, span := trace.StartSpan(ctx, "blockChain.ReceiveBlockBatch")
defer span.End()
@@ -199,7 +200,7 @@ func (s *Service) ReceiveBlockBatch(ctx context.Context, blocks []blocks.ROBlock
defer s.cfg.ForkChoiceStore.Unlock()
// Apply state transition on the incoming newly received block batches, one by one.
if err := s.onBlockBatch(ctx, blocks); err != nil {
if err := s.onBlockBatch(ctx, blocks, avs); err != nil {
err := errors.Wrap(err, "could not process block in batch")
tracing.AnnotateError(span, err)
return err
@@ -258,6 +259,11 @@ func (s *Service) HasBlock(ctx context.Context, root [32]byte) bool {
return s.hasBlockInInitSyncOrDB(ctx, root)
}
// RecentBlockSlot returns the block slot from the fork choice store
func (s *Service) RecentBlockSlot(root [32]byte) (primitives.Slot, error) {
return s.cfg.ForkChoiceStore.Slot(root)
}
// ReceiveAttesterSlashing receives an attester slashing and inserts it to forkchoice
func (s *Service) ReceiveAttesterSlashing(ctx context.Context, slashing *ethpb.AttesterSlashing) {
s.cfg.ForkChoiceStore.Lock()

View File

@@ -7,6 +7,7 @@ import (
"time"
blockchainTesting "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/operations/voluntaryexits"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
@@ -240,7 +241,7 @@ func TestService_ReceiveBlockBatch(t *testing.T) {
require.NoError(t, err)
rwsb, err := blocks.NewROBlock(wsb)
require.NoError(t, err)
err = s.ReceiveBlockBatch(ctx, []blocks.ROBlock{rwsb})
err = s.ReceiveBlockBatch(ctx, []blocks.ROBlock{rwsb}, &das.MockAvailabilityStore{})
if tt.wantedErr != "" {
assert.ErrorContains(t, tt.wantedErr, err)
} else {

View File

@@ -20,7 +20,6 @@ import (
coreTime "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/time"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/execution"
f "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice"
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
@@ -64,7 +63,6 @@ type Service struct {
syncComplete chan struct{}
blobNotifiers *blobNotifierMap
blockBeingSynced *currentlySyncingBlock
blobStorage *filesystem.BlobStorage
}
// config options for the service.
@@ -96,35 +94,6 @@ var ErrMissingClockSetter = errors.New("blockchain Service initialized without a
type blobNotifierMap struct {
sync.RWMutex
notifiers map[[32]byte]chan uint64
seenIndex map[[32]byte][fieldparams.MaxBlobsPerBlock]bool
}
// notifyIndex notifies a blob by its index for a given root.
// It uses internal maps to keep track of seen indices and notifier channels.
func (bn *blobNotifierMap) notifyIndex(root [32]byte, idx uint64) {
if idx >= fieldparams.MaxBlobsPerBlock {
return
}
bn.Lock()
seen := bn.seenIndex[root]
if seen[idx] {
bn.Unlock()
return
}
seen[idx] = true
bn.seenIndex[root] = seen
// Retrieve or create the notifier channel for the given root.
c, ok := bn.notifiers[root]
if !ok {
c = make(chan uint64, fieldparams.MaxBlobsPerBlock)
bn.notifiers[root] = c
}
bn.Unlock()
c <- idx
}
func (bn *blobNotifierMap) forRoot(root [32]byte) chan uint64 {
@@ -141,7 +110,6 @@ func (bn *blobNotifierMap) forRoot(root [32]byte) chan uint64 {
func (bn *blobNotifierMap) delete(root [32]byte) {
bn.Lock()
defer bn.Unlock()
delete(bn.seenIndex, root)
delete(bn.notifiers, root)
}
@@ -158,7 +126,6 @@ func NewService(ctx context.Context, opts ...Option) (*Service, error) {
ctx, cancel := context.WithCancel(ctx)
bn := &blobNotifierMap{
notifiers: make(map[[32]byte]chan uint64),
seenIndex: make(map[[32]byte][fieldparams.MaxBlobsPerBlock]bool),
}
srv := &Service{
ctx: ctx,

View File

@@ -25,7 +25,6 @@ import (
state_native "github.com/prysmaticlabs/prysm/v4/beacon-chain/state/state-native"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state/stategen"
"github.com/prysmaticlabs/prysm/v4/config/features"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
consensusblocks "github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
@@ -446,10 +445,11 @@ func BenchmarkHasBlockForkChoiceStore_DoublyLinkedTree(b *testing.B) {
s := &Service{
cfg: &config{ForkChoiceStore: doublylinkedtree.New(), BeaconDB: beaconDB},
}
blk := util.NewBeaconBlock()
blk := &ethpb.SignedBeaconBlock{Block: &ethpb.BeaconBlock{Body: &ethpb.BeaconBlockBody{}}}
r, err := blk.Block.HashTreeRoot()
require.NoError(b, err)
beaconState, err := util.NewBeaconState()
bs := &ethpb.BeaconState{FinalizedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)}, CurrentJustifiedCheckpoint: &ethpb.Checkpoint{Root: make([]byte, 32)}}
beaconState, err := state_native.InitializeFromProtoPhase0(bs)
require.NoError(b, err)
require.NoError(b, s.cfg.ForkChoiceStore.InsertNode(ctx, beaconState, r))
@@ -514,48 +514,3 @@ func (s *MockClockSetter) SetClock(g *startup.Clock) error {
s.G = g
return s.Err
}
func TestNotifyIndex(t *testing.T) {
// Initialize a blobNotifierMap
bn := &blobNotifierMap{
seenIndex: make(map[[32]byte][fieldparams.MaxBlobsPerBlock]bool),
notifiers: make(map[[32]byte]chan uint64),
}
// Sample root and index
var root [32]byte
copy(root[:], "exampleRoot")
// Test notifying a new index
bn.notifyIndex(root, 1)
if !bn.seenIndex[root][1] {
t.Errorf("Index was not marked as seen")
}
// Test that a new channel is created
if _, ok := bn.notifiers[root]; !ok {
t.Errorf("Notifier channel was not created")
}
// Test notifying an already seen index
bn.notifyIndex(root, 1)
if len(bn.notifiers[root]) > 1 {
t.Errorf("Notifier channel should not receive multiple messages for the same index")
}
// Test notifying a new index again
bn.notifyIndex(root, 2)
if !bn.seenIndex[root][2] {
t.Errorf("Index was not marked as seen")
}
// Test that the notifier channel receives the index
select {
case idx := <-bn.notifiers[root]:
if idx != 1 {
t.Errorf("Received index on channel is incorrect")
}
default:
t.Errorf("Notifier channel did not receive the index")
}
}

View File

@@ -56,7 +56,7 @@ func (mb *mockBroadcaster) BroadcastSyncCommitteeMessage(_ context.Context, _ ui
return nil
}
func (mb *mockBroadcaster) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.BlobSidecar) error {
func (mb *mockBroadcaster) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.SignedBlobSidecar) error {
mb.broadcastCalled = true
return nil
}

View File

@@ -17,6 +17,7 @@ go_library(
"//beacon-chain/core/feed/operation:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/das:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/forkchoice:go_default_library",
"//beacon-chain/state:go_default_library",
@@ -24,11 +25,11 @@ go_library(
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/forkchoice:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",

View File

@@ -16,6 +16,7 @@ import (
opfeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/operation"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/das"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
@@ -23,11 +24,11 @@ import (
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
forkchoice2 "github.com/prysmaticlabs/prysm/v4/consensus-types/forkchoice"
"github.com/prysmaticlabs/prysm/v4/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
enginev1 "github.com/prysmaticlabs/prysm/v4/proto/engine/v1"
ethpbv1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/sirupsen/logrus"
)
@@ -72,7 +73,7 @@ type ChainService struct {
OptimisticRoots map[[32]byte]bool
BlockSlot primitives.Slot
SyncingRoot [32]byte
Blobs []blocks.VerifiedROBlob
Blobs []*ethpb.BlobSidecar
}
func (s *ChainService) Ancestor(ctx context.Context, root []byte, slot primitives.Slot) ([]byte, error) {
@@ -207,7 +208,7 @@ func (s *ChainService) ReceiveBlockInitialSync(ctx context.Context, block interf
}
// ReceiveBlockBatch processes blocks in batches from initial-sync.
func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBlock) error {
func (s *ChainService) ReceiveBlockBatch(ctx context.Context, blks []blocks.ROBlock, avs das.AvailabilityStore) error {
if s.State == nil {
return ErrNilState
}
@@ -574,7 +575,7 @@ func (s *ChainService) InsertNode(ctx context.Context, st state.BeaconState, roo
}
// ForkChoiceDump mocks the same method in the chain service
func (s *ChainService) ForkChoiceDump(ctx context.Context) (*forkchoice2.Dump, error) {
func (s *ChainService) ForkChoiceDump(ctx context.Context) (*ethpbv1.ForkChoiceDump, error) {
if s.ForkChoiceStore != nil {
return s.ForkChoiceStore.ForkChoiceDump(ctx)
}
@@ -613,7 +614,7 @@ func (c *ChainService) BlockBeingSynced(root [32]byte) bool {
}
// ReceiveBlob implements the same method in the chain service
func (c *ChainService) ReceiveBlob(_ context.Context, b blocks.VerifiedROBlob) error {
func (c *ChainService) ReceiveBlob(_ context.Context, b *ethpb.BlobSidecar) error {
c.Blobs = append(c.Blobs, b)
return nil
}

View File

@@ -0,0 +1,50 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"availability.go",
"blocking.go",
"cache.go",
"error.go",
"iface.go",
"mock.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/das",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/blockchain/kzg:go_default_library",
"//beacon-chain/db:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"availability_test.go",
"cache_test.go",
],
embed = [":go_default_library"],
deps = [
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -0,0 +1,158 @@
package das
import (
"context"
errors "github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/kzg"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
"github.com/prysmaticlabs/prysm/v4/time/slots"
log "github.com/sirupsen/logrus"
)
type kzgBatch func([][]byte, []*ethpb.BlobSidecar) error
type LazilyPersistentStore struct {
db BlobsDB
cache *cache
verifyKZG kzgBatch
}
var _ AvailabilityStore = &LazilyPersistentStore{}
func NewLazilyPersistentStore(db BlobsDB) *LazilyPersistentStore {
return &LazilyPersistentStore{
db: db,
cache: newCache(),
verifyKZG: kzg.IsDataAvailable,
}
}
// Persist adds blobs to the working blob cache (in-memory or disk backed is an implementation
// detail). Blobs stored in this cache will be persisted for at least as long as the node is
// running. Once IsDataAvailable succeeds, all blobs referenced by the given block are guaranteed
// to be persisted for the remainder of the retention period.
func (s *LazilyPersistentStore) Persist(ctx context.Context, current primitives.Slot, sc ...*ethpb.BlobSidecar) ([]*ethpb.BlobSidecar, error) {
if len(sc) == 0 {
return nil, nil
}
var key cacheKey
var entry *cacheEntry
persisted := make([]*ethpb.BlobSidecar, 0, len(sc))
for i := range sc {
if !params.WithinDAPeriod(slots.ToEpoch(sc[i].Slot), slots.ToEpoch(current)) {
continue
}
if sc[i].Index >= fieldparams.MaxBlobsPerBlock {
log.WithField("block_root", sc[i].BlockRoot).WithField("index", sc[i].Index).Error("discarding BlobSidecar with index >= MaxBlobsPerBlock")
continue
}
skey := keyFromSidecar(sc[i])
if key != skey {
key = skey
entry = s.cache.ensure(key)
}
if entry.stash(sc[i]) {
persisted = append(persisted, sc[i])
}
}
return persisted, nil
}
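// Illustrative sketch only: how a caller (e.g. a gossip handler) might use
// Persist. The function name and bindings below are assumptions made for the
// example and do not appear elsewhere in this change.
func examplePersistFromGossip(ctx context.Context, store *LazilyPersistentStore, current primitives.Slot, sc *ethpb.BlobSidecar) (bool, error) {
	persisted, err := store.Persist(ctx, current, sc)
	if err != nil {
		return false, err
	}
	// An empty result means the sidecar was a duplicate, had an out-of-range
	// index, or fell outside the retention window, and was therefore ignored.
	return len(persisted) > 0, nil
}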
// IsDataAvailable returns nil if all the commitments in the given block are persisted to the db and have been verified.
// - BlobSidecars already in the db are assumed to have been previously verified against the block.
// - BlobSidecars waiting for verification in the cache will be persisted to the db after verification.
// - When BlobSidecars are written to the db, their cache entries are cleared.
// - BlobSidecar cache entries with commitments that do not match the block will be evicted.
// - BlobSidecar cache entries with commitments that fail proof verification will be evicted.
func (s *LazilyPersistentStore) IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
blockCommitments := commitmentsToCheck(b, current)
if len(blockCommitments) == 0 {
return nil
}
key := keyFromBlock(b)
entry := s.cache.ensure(key)
// holding the lock over the course of the DA check simplifies everything
entry.Lock()
defer entry.Unlock()
if err := s.daCheck(ctx, b.Root(), blockCommitments, entry); err != nil {
return err
}
// If there is no error, DA has been successful, so we can clean up the cache.
s.cache.delete(key)
return nil
}
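// Illustrative sketch only (names are assumptions for the example): the
// intended call order during block processing is to Persist any sidecars on
// hand, then let IsDataAvailable confirm every commitment via the cache or db.
func exampleDACheck(ctx context.Context, avs AvailabilityStore, current primitives.Slot, b blocks.ROBlock, scs []*ethpb.BlobSidecar) error {
	if _, err := avs.Persist(ctx, current, scs...); err != nil {
		return err
	}
	// Returns MissingIndicesError or CommitmentMismatchError when the cache and
	// db cannot account for every commitment in the block.
	return avs.IsDataAvailable(ctx, current, b)
}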
func (s *LazilyPersistentStore) daCheck(ctx context.Context, root [32]byte, blockCommitments [][]byte, entry *cacheEntry) error {
sidecars, cacheErr := entry.filter(root, blockCommitments)
if cacheErr == nil {
if err := s.verifyKZG(blockCommitments, sidecars); err != nil {
s.cache.delete(keyFromSidecar(sidecars[0]))
return err
}
// We have all the committed sidecars in cache, and they all have valid proofs.
// If flushing them to backing storage succeeds, then we can confirm DA.
return s.db.SaveBlobSidecar(ctx, sidecars)
}
// Before returning the cache error, check if we have the data in the db.
dbidx, err := s.persisted(ctx, root, entry)
// persisted() accounts for db.ErrNotFound, so this is a real database error.
if err != nil {
return err
}
notInDb, err := dbidx.missing(blockCommitments)
// This is a database integrity sanity check - it should never fail.
if err != nil {
return err
}
// All commitments were found in the db, due to a previous successful DA check.
if len(notInDb) == 0 {
return nil
}
return cacheErr
}
// persisted populates the db cache, which contains a mapping from Index->KzgCommitment for BlobSidecars previously verified
// (proof verification) and saved to the backend.
func (s *LazilyPersistentStore) persisted(ctx context.Context, root [32]byte, entry *cacheEntry) (dbidx, error) {
if entry.dbidxInitialized() {
return entry.dbidx(), nil
}
sidecars, err := s.db.BlobSidecarsByRoot(ctx, root)
if err != nil {
if errors.Is(err, db.ErrNotFound) {
// No BlobSidecars, initialize with empty idx.
return entry.ensureDbidx(), nil
}
return entry.dbidx(), err
}
// Ensure all sidecars found in the db are represented in the cache and return the cache value.
return entry.ensureDbidx(sidecars...), nil
}
func commitmentsToCheck(b blocks.ROBlock, current primitives.Slot) [][]byte {
if b.Version() < version.Deneb {
return nil
}
// We are only required to check within MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
if !params.WithinDAPeriod(slots.ToEpoch(b.Block().Slot()), slots.ToEpoch(current)) {
return nil
}
kzgCommitments, err := b.Block().Body().BlobKzgCommitments()
if err != nil {
return nil
}
return kzgCommitments
}

View File

@@ -0,0 +1,318 @@
package das
import (
"bytes"
"context"
"testing"
errors "github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
"github.com/prysmaticlabs/prysm/v4/time/slots"
)
func Test_commitmentsToCheck(t *testing.T) {
windowSlots, err := slots.EpochEnd(params.BeaconNetworkConfig().MinEpochsForBlobsSidecarsRequest)
require.NoError(t, err)
commits := [][]byte{
bytesutil.PadTo([]byte("a"), 48),
bytesutil.PadTo([]byte("b"), 48),
bytesutil.PadTo([]byte("c"), 48),
bytesutil.PadTo([]byte("d"), 48),
}
cases := []struct {
name string
commits [][]byte
block func(*testing.T) blocks.ROBlock
slot primitives.Slot
}{
{
name: "pre deneb",
block: func(t *testing.T) blocks.ROBlock {
bb := util.NewBeaconBlockBellatrix()
sb, err := blocks.NewSignedBeaconBlock(bb)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
require.NoError(t, err)
return rb
},
},
{
name: "commitments within da",
block: func(t *testing.T) blocks.ROBlock {
d := util.NewBeaconBlockDeneb()
d.Block.Body.BlobKzgCommitments = commits
d.Block.Slot = 100
sb, err := blocks.NewSignedBeaconBlock(d)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
require.NoError(t, err)
return rb
},
commits: commits,
slot: 100,
},
{
name: "commitments outside da",
block: func(t *testing.T) blocks.ROBlock {
d := util.NewBeaconBlockDeneb()
// block is from slot 0, "current slot" is window size +1 (so outside the window)
d.Block.Body.BlobKzgCommitments = commits
sb, err := blocks.NewSignedBeaconBlock(d)
require.NoError(t, err)
rb, err := blocks.NewROBlock(sb)
require.NoError(t, err)
return rb
},
slot: windowSlots + 1,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
b := c.block(t)
co := commitmentsToCheck(b, c.slot)
require.Equal(t, len(c.commits), len(co))
for i := 0; i < len(c.commits); i++ {
require.Equal(t, true, bytes.Equal(c.commits[i], co[i]))
}
})
}
}
func daAlwaysSucceeds(_ [][]byte, _ []*ethpb.BlobSidecar) error {
return nil
}
type mockDA struct {
t *testing.T
cmts [][]byte
scs []*ethpb.BlobSidecar
err error
}
func (m *mockDA) expectedArguments(gotC [][]byte, gotSC []*ethpb.BlobSidecar) error {
require.Equal(m.t, len(m.cmts), len(gotC))
require.Equal(m.t, len(gotSC), len(m.scs))
for i := range m.cmts {
require.Equal(m.t, true, bytes.Equal(m.cmts[i], gotC[i]))
}
for i := range m.scs {
require.Equal(m.t, m.scs[i], gotSC[i])
}
return m.err
}
func TestLazilyPersistent_Missing(t *testing.T) {
ctx := context.Background()
db := &mockBlobsDB{}
as := NewLazilyPersistentStore(db)
blk, scs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)
expectedCommitments, err := blk.Block().Body().BlobKzgCommitments()
require.NoError(t, err)
vf := &mockDA{
t: t,
cmts: expectedCommitments,
scs: scs,
}
as.verifyKZG = vf.expectedArguments
// Only one commitment persisted, should return error with other indices
_, err = as.Persist(ctx, 1, scs[2])
require.NoError(t, err)
err = as.IsDataAvailable(ctx, 1, blk)
require.NotNil(t, err)
missingErr := MissingIndicesError{}
require.Equal(t, true, errors.As(err, &missingErr))
require.DeepEqual(t, []uint64{0, 1}, missingErr.Missing())
// All but one persisted, return missing idx
_, err = as.Persist(ctx, 1, scs[0])
require.NoError(t, err)
err = as.IsDataAvailable(ctx, 1, blk)
require.NotNil(t, err)
require.Equal(t, true, errors.As(err, &missingErr))
require.DeepEqual(t, []uint64{1}, missingErr.Missing())
// All persisted, return nil
_, err = as.Persist(ctx, 1, scs[1])
require.NoError(t, err)
require.NoError(t, as.IsDataAvailable(ctx, 1, blk))
}
func TestLazilyPersistent_Mismatch(t *testing.T) {
ctx := context.Background()
db := &mockBlobsDB{}
as := NewLazilyPersistentStore(db)
blk, scs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)
vf := &mockDA{
t: t,
err: errors.New("kzg check should not run"),
}
as.verifyKZG = vf.expectedArguments
scs[0].KzgCommitment = bytesutil.PadTo([]byte("nope"), 48)
// Only one commitment persisted, should return error with other indices
_, err := as.Persist(ctx, 1, scs[0])
require.NoError(t, err)
err = as.IsDataAvailable(ctx, 1, blk)
require.NotNil(t, err)
mismatchErr := CommitmentMismatchError{}
require.Equal(t, true, errors.As(err, &mismatchErr))
require.DeepEqual(t, []uint64{0}, mismatchErr.Mismatch())
// the next time we call the DA check, the mismatched commitment should be evicted
err = as.IsDataAvailable(ctx, 1, blk)
require.NotNil(t, err)
missingErr := MissingIndicesError{}
require.Equal(t, true, errors.As(err, &missingErr))
require.DeepEqual(t, []uint64{0, 1, 2}, missingErr.Missing())
}
func TestPersisted(t *testing.T) {
ctx := context.Background()
blk, scs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)
db := &mockBlobsDB{
BlobSidecarsByRootCallback: func(ctx context.Context, beaconBlockRoot [32]byte, indices ...uint64) ([]*ethpb.BlobSidecar, error) {
return scs, nil
},
}
vf := &mockDA{
t: t,
err: errors.New("kzg check should not run"),
}
as := NewLazilyPersistentStore(db)
as.verifyKZG = vf.expectedArguments
entry := &cacheEntry{}
dbidx, err := as.persisted(ctx, blk.Root(), entry)
require.NoError(t, err)
require.Equal(t, false, dbidx[0] == nil)
for i := range scs {
require.Equal(t, *dbidx[i], bytesutil.ToBytes48(scs[i].KzgCommitment))
}
expectedCommitments, err := blk.Block().Body().BlobKzgCommitments()
require.NoError(t, err)
missing, err := dbidx.missing(expectedCommitments)
require.NoError(t, err)
require.Equal(t, 0, len(missing))
// test that the caching is working by returning the wrong set of sidecars
// and making sure that dbidx still thinks none are missing
db = &mockBlobsDB{
BlobSidecarsByRootCallback: func(ctx context.Context, beaconBlockRoot [32]byte, indices ...uint64) ([]*ethpb.BlobSidecar, error) {
return scs[1:], nil
},
}
as = NewLazilyPersistentStore(db)
// note, using the same entry value
dbidx, err = as.persisted(ctx, blk.Root(), entry)
require.NoError(t, err)
// same assertions should pass as when all sidecars returned by db
missing, err = dbidx.missing(expectedCommitments)
require.NoError(t, err)
require.Equal(t, 0, len(missing))
// do it again, but with a fresh cache entry - we should see a missing sidecar
newEntry := &cacheEntry{}
dbidx, err = as.persisted(ctx, blk.Root(), newEntry)
require.NoError(t, err)
missing, err = dbidx.missing(expectedCommitments)
require.NoError(t, err)
require.Equal(t, 1, len(missing))
// only element in missing should be the zero index
require.Equal(t, uint64(0), missing[0])
}
func TestLazilyPersistent_DBFallback(t *testing.T) {
ctx := context.Background()
blk, scs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 3)
// Generate a second sidecar at index 0 so we can mess with its commitment
_, scscp := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 1)
db := &mockBlobsDB{
BlobSidecarsByRootCallback: func(ctx context.Context, beaconBlockRoot [32]byte, indices ...uint64) ([]*ethpb.BlobSidecar, error) {
return scs, nil
},
}
vf := &mockDA{
t: t,
err: errors.New("kzg check should not run"),
}
as := NewLazilyPersistentStore(db)
as.verifyKZG = vf.expectedArguments
// Set up the mismatched commit, but we don't expect this to error because
// the db contains the sidecars.
scscp[0].BlockRoot = scs[0].BlockRoot
scscp[0].KzgCommitment = bytesutil.PadTo([]byte("nope"), 48)
_, err := as.Persist(ctx, 1, scscp[0])
require.NoError(t, err)
// This should pass since the db is giving us all the right sidecars
require.NoError(t, as.IsDataAvailable(ctx, 1, blk))
// now using an empty db, we should fail
as.db = &mockBlobsDB{}
// but we should have pruned, so we'll get a missing error, not mismatch
err = as.IsDataAvailable(ctx, 1, blk)
require.NotNil(t, err)
missingErr := MissingIndicesError{}
require.Equal(t, true, errors.As(err, &missingErr))
require.DeepEqual(t, []uint64{0, 1, 2}, missingErr.Missing())
// put the bad value back in the cache
persisted, err := as.Persist(ctx, 1, scscp[0])
require.NoError(t, err)
require.Equal(t, 1, len(persisted))
// now we'll get a mismatch error
err = as.IsDataAvailable(ctx, 1, blk)
require.NotNil(t, err)
mismatchErr := CommitmentMismatchError{}
require.Equal(t, true, errors.As(err, &mismatchErr))
require.DeepEqual(t, []uint64{0}, mismatchErr.Mismatch())
}
func TestLazyPersistOnceCommitted(t *testing.T) {
ctx := context.Background()
_, scs := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 6)
as := NewLazilyPersistentStore(&mockBlobsDB{})
// stashes as expected
p, err := as.Persist(ctx, 1, scs...)
require.NoError(t, err)
require.Equal(t, 6, len(p))
// ignores duplicates
p, err = as.Persist(ctx, 1, scs...)
require.NoError(t, err)
require.Equal(t, 0, len(p))
// ignores index out of bound
scs[0].Index = 6
p, err = as.Persist(ctx, 1, scs[0])
require.NoError(t, err)
require.Equal(t, 0, len(p))
_, more := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, 4)
// ignores sidecars before the retention period
slotOOB, err := slots.EpochStart(params.BeaconNetworkConfig().MinEpochsForBlobsSidecarsRequest)
require.NoError(t, err)
p, err = as.Persist(ctx, 32+slotOOB, more[0])
require.NoError(t, err)
require.Equal(t, 0, len(p))
// doesn't ignore new sidecars with a different block root
p, err = as.Persist(ctx, 1, more...)
require.NoError(t, err)
require.Equal(t, 4, len(p))
}

View File

@@ -0,0 +1,93 @@
package das
import (
"context"
"sync"
errors "github.com/pkg/errors"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
)
// AsyncStore wraps an AvailabilityStore and blocks in IsDataAvailable until the data availability check can complete.
// If the context given to IsDataAvailable is cancelled, the result of IsDataAvailable will be ctx.Err().
type AsyncStore struct {
notif *idxNotifiers
s AvailabilityStore
}
func (bs *AsyncStore) PersistOnceCommitted(ctx context.Context, current primitives.Slot, sc ...*ethpb.BlobSidecar) ([]*ethpb.BlobSidecar, error) {
if len(sc) < 1 {
return nil, nil
}
seen := bs.notif.ensure(keyFromSidecar(sc[0]))
persisted, err := bs.s.Persist(ctx, current, sc...)
if err != nil {
return nil, err
}
for i := range persisted {
seen <- persisted[i].Index
}
return persisted, nil
}
func (bs *AsyncStore) IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
key := keyFromBlock(b)
for {
err := bs.s.IsDataAvailable(ctx, current, b)
if err == nil {
return nil
}
mie := MissingIndicesError{}
if !errors.As(err, &mie) {
return err
}
waitFor := make(map[uint64]struct{})
for _, m := range mie.Missing() {
waitFor[m] = struct{}{}
}
if err := waitForIndices(ctx, bs.notif.ensure(key), waitFor); err != nil {
return err
}
bs.notif.reset(key)
}
}
func waitForIndices(ctx context.Context, idxSeen chan uint64, waitFor map[uint64]struct{}) error {
for {
select {
case <-ctx.Done():
return ctx.Err()
case idx := <-idxSeen:
delete(waitFor, idx)
if len(waitFor) == 0 {
return nil
}
continue
}
}
}
type idxNotifiers struct {
sync.RWMutex
entries map[cacheKey]chan uint64
}
func (c *idxNotifiers) ensure(key cacheKey) chan uint64 {
c.Lock()
defer c.Unlock()
e, ok := c.entries[key]
if !ok {
e = make(chan uint64, fieldparams.MaxBlobsPerBlock)
c.entries[key] = e
}
return e
}
func (c *idxNotifiers) reset(key cacheKey) {
c.Lock()
defer c.Unlock()
delete(c.entries, key)
}
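A brief sketch of how the two halves of AsyncStore are meant to be driven, under the assumption that an *AsyncStore value has already been constructed (no constructor appears in this diff); the function names below are illustrative only.
// Producer side, e.g. fed by gossip: stash the sidecar and wake any waiter for its block root.
func deliverSidecar(ctx context.Context, async *AsyncStore, current primitives.Slot, sc *ethpb.BlobSidecar) error {
	_, err := async.PersistOnceCommitted(ctx, current, sc)
	return err
}
// Consumer side, e.g. initial-sync: blocks until every index reported missing
// has been delivered, or returns ctx.Err() on cancellation/timeout.
func awaitAvailability(ctx context.Context, async *AsyncStore, current primitives.Slot, b blocks.ROBlock) error {
	return async.IsDataAvailable(ctx, current, b)
}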

beacon-chain/das/cache.go Normal file
View File

@@ -0,0 +1,184 @@
package das
import (
"bytes"
"fmt"
"sync"
errors "github.com/pkg/errors"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
log "github.com/sirupsen/logrus"
)
var (
errDBCommitmentMismatch = errors.New("blob/block commitment mismatch")
)
// cacheKey includes the slot so that we can easily iterate through the cache and compare
// slots for eviction purposes. Whether the input is the block or the sidecar, we always have
// the root+slot when interacting with the cache, so it isn't an inconvenience to use both.
type cacheKey struct {
slot primitives.Slot
root [32]byte
}
type cache struct {
sync.RWMutex
entries map[cacheKey]*cacheEntry
}
func newCache() *cache {
return &cache{entries: make(map[cacheKey]*cacheEntry)}
}
// keyFromSidecar is a convenience method for constructing a cacheKey from a BlobSidecar value.
func keyFromSidecar(sc *ethpb.BlobSidecar) cacheKey {
return cacheKey{slot: sc.Slot, root: bytesutil.ToBytes32(sc.BlockRoot)}
}
// keyFromBlock is a convenience method for constructing a cacheKey from a ROBlock value.
func keyFromBlock(b blocks.ROBlock) cacheKey {
return cacheKey{slot: b.Block().Slot(), root: b.Root()}
}
// ensure returns the entry for the given key, creating it if it isn't already present.
func (c *cache) ensure(key cacheKey) *cacheEntry {
c.Lock()
defer c.Unlock()
e, ok := c.entries[key]
if !ok {
e = &cacheEntry{}
c.entries[key] = e
}
return e
}
// delete removes the cache entry from the cache.
func (c *cache) delete(key cacheKey) {
c.Lock()
defer c.Unlock()
delete(c.entries, key)
}
// dbidx is a compact representation of the set of BlobSidecars in the database for a given root,
// organized as a map from BlobSidecar.Index->BlobSidecar.KzgCommitment.
// This representation is convenient for comparison to a block's commitments.
type dbidx [fieldparams.MaxBlobsPerBlock]*[48]byte
// missing compares the set of BlobSidecars observed in the backing store to the set of commitments
// observed in a block - cmts is the BlobKzgCommitments field from a block.
func (idx dbidx) missing(cmts [][]byte) ([]uint64, error) {
m := make([]uint64, 0, len(cmts))
for i := range cmts {
if idx[i] == nil {
m = append(m, uint64(i))
continue
}
c := *idx[i]
if c != bytesutil.ToBytes48(cmts[i]) {
return nil, errors.Wrapf(errDBCommitmentMismatch,
"index=%d, db=%#x, block=%#x", i, c, cmts[i])
}
}
return m, nil
}
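// A small worked example (a sketch, not part of this change): with only index 0
// recorded from the db and a block carrying three commitments, missing reports
// indices 1 and 2; a conflicting commitment at a recorded index would instead
// surface errDBCommitmentMismatch.
func exampleDbidxMissing(cmts [][]byte) ([]uint64, error) {
	var idx dbidx
	c0 := bytesutil.ToBytes48(cmts[0])
	idx[0] = &c0
	return idx.missing(cmts) // []uint64{1, 2} when len(cmts) == 3 and cmts[0] matches
}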
// cacheEntry represents 2 different types of caches for a given block.
// scs is a fixed-length cache of BlobSidecars.
// dbx is a compact representation of BlobSidecars observed in the backing store.
// dbx assumes that all writes to the backing store go through the same cache.
type cacheEntry struct {
sync.RWMutex
scs [fieldparams.MaxBlobsPerBlock]*ethpb.BlobSidecar
dbx dbidx
dbRead bool
}
// stash adds an item to the in-memory cache of BlobSidecars.
// Only the first BlobSidecar of a given Index will be kept in the cache.
// The return value represents whether the given BlobSidecar was stashed.
// A false value means there was already a BlobSidecar with the given Index.
func (e *cacheEntry) stash(sc *ethpb.BlobSidecar) bool {
if e.scs[sc.Index] == nil {
e.scs[sc.Index] = sc
return true
}
return false
}
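// Illustrative sketch only: stash is first-write-wins per index, so a duplicate
// sidecar is reported as not stashed.
func exampleStash() (bool, bool) {
	e := &cacheEntry{}
	sc := &ethpb.BlobSidecar{Index: 0, KzgCommitment: make([]byte, 48)}
	first := e.stash(sc)  // true: index 0 was empty
	second := e.stash(sc) // false: index 0 already occupied
	return first, second
}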
func (e *cacheEntry) dbidx() dbidx {
return e.dbx
}
func (e *cacheEntry) dbidxInitialized() bool {
return e.dbRead
}
// filter evicts sidecars that are not committed to by the block and returns custom
// errors if the cache is missing any of the commitments, or if the commitments in
// the cache do not match those found in the block. If err is nil, then all expected
// commitments were found in the cache and the sidecar slice return value can be used
// to perform a DA check against the cached sidecars.
func (e *cacheEntry) filter(root [32]byte, blkCmts [][]byte) ([]*ethpb.BlobSidecar, error) {
// Evict any blobs that are out of range.
for i := len(blkCmts); i < fieldparams.MaxBlobsPerBlock; i++ {
if e.scs[i] == nil {
continue
}
log.WithField("block_root", root).
WithField("index", i).
WithField("cached_commitment", fmt.Sprintf("%#x", e.scs[i].KzgCommitment)).
Warn("Evicting BlobSidecar with index > maximum blob commitment")
e.scs[i] = nil
}
// Generate a MissingIndicesError for any missing indices.
// Generate a CommitmentMismatchError for any mismatched commitments.
missing := make([]uint64, 0, len(blkCmts))
mismatch := make([]uint64, 0, len(blkCmts))
for i := range blkCmts {
if e.scs[i] == nil {
missing = append(missing, uint64(i))
continue
}
if !bytes.Equal(blkCmts[i], e.scs[i].KzgCommitment) {
mismatch = append(mismatch, uint64(i))
log.WithField("block_root", root).
WithField("index", i).
WithField("expected_commitment", fmt.Sprintf("%#x", blkCmts[i])).
WithField("cached_commitment", fmt.Sprintf("%#x", e.scs[i].KzgCommitment)).
Error("Evicting BlobSidecar with incorrect commitment")
e.scs[i] = nil
continue
}
}
if len(mismatch) > 0 {
return nil, NewCommitmentMismatchError(mismatch)
}
if len(missing) > 0 {
return nil, NewMissingIndicesError(missing)
}
return e.scs[0:len(blkCmts)], nil
}
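// Illustrative sketch only: with nothing stashed, filter reports every block
// commitment as missing, which is what drives the retry/wait loop in callers.
func exampleFilterEmpty(root [32]byte, blkCmts [][]byte) ([]*ethpb.BlobSidecar, error) {
	e := &cacheEntry{}
	return e.filter(root, blkCmts) // nil, MissingIndicesError listing 0..len(blkCmts)-1
}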
// ensureDbidx updates the db cache representation to include the given BlobSidecars.
func (e *cacheEntry) ensureDbidx(scs ...*ethpb.BlobSidecar) dbidx {
if !e.dbRead {
e.dbRead = true
}
for i := range scs {
if scs[i].Index >= fieldparams.MaxBlobsPerBlock {
continue
}
// Don't overwrite.
if e.dbx[scs[i].Index] != nil {
continue
}
c := bytesutil.ToBytes48(scs[i].KzgCommitment)
e.dbx[scs[i].Index] = &c
}
return e.dbx
}
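For orientation only, a minimal sketch of the intended entry lifecycle inside LazilyPersistentStore (the helper below is an illustration, not part of this change): entries are created on demand, populated from gossip and the db, and dropped once the DA check succeeds.
func exampleCacheLifecycle(c *cache, sc *ethpb.BlobSidecar, fromDB []*ethpb.BlobSidecar) {
	key := keyFromSidecar(sc)
	entry := c.ensure(key)       // create-or-get the entry for this block
	entry.stash(sc)              // remember the gossiped sidecar (index assumed in range)
	entry.ensureDbidx(fromDB...) // record what the db already holds
	c.delete(key)                // evict after a successful DA check
}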

View File

@@ -0,0 +1,122 @@
package das
import (
"testing"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/require"
)
func TestCacheEnsureDelete(t *testing.T) {
c := newCache()
require.Equal(t, 0, len(c.entries))
root := bytesutil.ToBytes32([]byte("root"))
slot := primitives.Slot(1234)
k := cacheKey{root: root, slot: slot}
entry := c.ensure(k)
require.Equal(t, 1, len(c.entries))
require.Equal(t, c.entries[k], entry)
c.delete(k)
require.Equal(t, 0, len(c.entries))
var nilEntry *cacheEntry
require.Equal(t, nilEntry, c.entries[k])
}
func TestNewEntry(t *testing.T) {
entry := &cacheEntry{}
require.Equal(t, false, entry.dbidxInitialized())
entry.ensureDbidx()
require.Equal(t, true, entry.dbidxInitialized())
}
func TestDbidxBounds(t *testing.T) {
scs := generateMinimalBlobSidecars(2)
entry := &cacheEntry{}
entry.ensureDbidx(scs...)
//require.Equal(t, 2, len(entry.dbidx()))
for i := range scs {
require.Equal(t, bytesutil.ToBytes48(scs[i].KzgCommitment), *entry.dbidx()[i])
}
var nilPtr *[48]byte
// test that duplicate sidecars are ignored
orig := entry.dbidx()
copy(scs[0].KzgCommitment[0:4], []byte("derp"))
edited := bytesutil.ToBytes48(scs[0].KzgCommitment)
require.Equal(t, false, *entry.dbidx()[0] == edited)
entry.ensureDbidx(scs[0])
for i := 2; i < fieldparams.MaxBlobsPerBlock; i++ {
require.Equal(t, entry.dbidx()[i], nilPtr)
}
require.Equal(t, entry.dbidx(), orig)
// test that excess sidecars are discarded
oob := generateMinimalBlobSidecars(fieldparams.MaxBlobsPerBlock + 1)
entry = &cacheEntry{}
entry.ensureDbidx(oob...)
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(entry.dbidx()))
}
func TestDbidxMissing(t *testing.T) {
scs := generateMinimalBlobSidecars(6)
missing := []uint64{0, 1, 2, 3, 4, 5}
blockCommits := commitsForSidecars(scs)
cases := []struct {
name string
nMissing int
err error
}{
{
name: "all missing",
nMissing: len(scs),
},
{
name: "none missing",
nMissing: 0,
},
{
name: "2 missing",
nMissing: 2,
},
{
name: "3 missing",
nMissing: 3,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
l := len(scs)
entry := &cacheEntry{}
d := entry.ensureDbidx(scs[0 : l-c.nMissing]...)
m, err := d.missing(blockCommits)
if c.err == nil {
require.NoError(t, err)
}
require.DeepEqual(t, m, missing[l-c.nMissing:])
require.Equal(t, c.nMissing, len(m))
})
}
}
func commitsForSidecars(scs []*ethpb.BlobSidecar) [][]byte {
m := make([][]byte, len(scs))
for i := range scs {
m[i] = scs[i].KzgCommitment
}
return m
}
func generateMinimalBlobSidecars(n int) []*ethpb.BlobSidecar {
scs := make([]*ethpb.BlobSidecar, n)
for i := 0; i < n; i++ {
scs[i] = &ethpb.BlobSidecar{
Index: uint64(i),
KzgCommitment: bytesutil.PadTo([]byte{byte(i)}, 48),
}
}
return scs
}

beacon-chain/das/error.go Normal file
View File

@@ -0,0 +1,76 @@
package das
import (
"fmt"
"strings"
errors "github.com/pkg/errors"
)
var (
errDAIncomplete = errors.New("some BlobSidecars are not available at this time")
errDAEquivocated = errors.New("cache contains BlobSidecars that do not match block commitments")
errMixedRoots = errors.New("BlobSidecars must all be for the same block")
)
// The following errors are exported so that gossip verification can use errors.Is to determine the correct pubsub.ValidationResult.
var (
// ErrInvalidInclusionProof is returned when the inclusion proof check on the BlobSidecar fails.
ErrInvalidInclusionProof = errors.New("BlobSidecar inclusion proof is invalid")
// ErrInvalidBlockSignature is returned when the BlobSidecar.SignedBeaconBlockHeader signature cannot be verified against the block root.
ErrInvalidBlockSignature = errors.New("SignedBeaconBlockHeader signature could not be verified")
// ErrInvalidCommitment is returned when the kzg_commitment cannot be verified against the kzg_proof and blob.
ErrInvalidCommitment = errors.New("BlobSidecar.kzg_commitment verification failed")
)
func NewMissingIndicesError(missing []uint64) MissingIndicesError {
return MissingIndicesError{indices: missing}
}
type MissingIndicesError struct {
indices []uint64
}
var _ error = MissingIndicesError{}
func (m MissingIndicesError) Error() string {
is := make([]string, 0, len(m.indices))
for i := range m.indices {
is = append(is, fmt.Sprintf("%d", m.indices[i]))
}
return fmt.Sprintf("%s at indices %s", errDAIncomplete.Error(), strings.Join(is, ","))
}
func (m MissingIndicesError) Missing() []uint64 {
return m.indices
}
func (m MissingIndicesError) Unwrap() error {
return errDAIncomplete
}
func NewCommitmentMismatchError(mismatch []uint64) CommitmentMismatchError {
return CommitmentMismatchError{mismatch: mismatch}
}
type CommitmentMismatchError struct {
mismatch []uint64
}
var _ error = CommitmentMismatchError{}
func (m CommitmentMismatchError) Error() string {
is := make([]string, 0, len(m.mismatch))
for i := range m.mismatch {
is = append(is, fmt.Sprintf("%d", m.mismatch[i]))
}
return fmt.Sprintf("%s at indices %s", errDAEquivocated.Error(), strings.Join(is, ","))
}
func (m CommitmentMismatchError) Mismatch() []uint64 {
return m.mismatch
}
func (m CommitmentMismatchError) Unwrap() error {
return errDAEquivocated
}
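For illustration, a hedged example of how a caller might branch on these error types with errors.As; the retry/reject policy shown is an assumption for the sketch, not something this change prescribes.
func classifyDAError(err error) string {
	var missing MissingIndicesError
	if errors.As(err, &missing) {
		return "retry" // wait for the listed indices to arrive
	}
	var mismatch CommitmentMismatchError
	if errors.As(err, &mismatch) {
		return "reject" // cached sidecars contradict the block's commitments
	}
	return "fail"
}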

beacon-chain/das/iface.go Normal file
View File

@@ -0,0 +1,22 @@
package das
import (
"context"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
)
// BlobsDB specifies the persistence store methods needed by the AvailabilityStore.
type BlobsDB interface {
BlobSidecarsByRoot(ctx context.Context, beaconBlockRoot [32]byte, indices ...uint64) ([]*ethpb.BlobSidecar, error)
SaveBlobSidecar(ctx context.Context, sidecars []*ethpb.BlobSidecar) error
}
// AvailabilityStore describes a component that can verify and save sidecars for a given block, and confirm previously
// verified and saved sidecars.
type AvailabilityStore interface {
IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
Persist(ctx context.Context, current primitives.Slot, sc ...*ethpb.BlobSidecar) ([]*ethpb.BlobSidecar, error)
}
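As a rough illustration of how the interface is expected to be satisfied for both flavors of sync, a caller would build a BlobsDB-backed store and hand it to ReceiveBlockBatch/onBlockBatch; the helper name below is an assumption for the sketch.
func availabilityStoreForSync(bdb BlobsDB) AvailabilityStore {
	// LazilyPersistentStore caches gossiped sidecars and only writes them to
	// the BlobsDB once the block's commitments have been verified.
	return NewLazilyPersistentStore(bdb)
}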

beacon-chain/das/mock.go Normal file
View File

@@ -0,0 +1,51 @@
package das
import (
"context"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
)
type MockAvailabilityStore struct {
VerifyAvailabilityCallback func(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error
PersistBlobsCallback func(ctx context.Context, current primitives.Slot, sc ...*ethpb.BlobSidecar) ([]*ethpb.BlobSidecar, error)
}
var _ AvailabilityStore = &MockAvailabilityStore{}
func (m *MockAvailabilityStore) IsDataAvailable(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
if m.VerifyAvailabilityCallback != nil {
return m.VerifyAvailabilityCallback(ctx, current, b)
}
return nil
}
func (m *MockAvailabilityStore) Persist(ctx context.Context, current primitives.Slot, sc ...*ethpb.BlobSidecar) ([]*ethpb.BlobSidecar, error) {
if m.PersistBlobsCallback != nil {
return m.PersistBlobsCallback(ctx, current, sc...)
}
return sc, nil
}
type mockBlobsDB struct {
BlobSidecarsByRootCallback func(ctx context.Context, root [32]byte, indices ...uint64) ([]*ethpb.BlobSidecar, error)
SaveBlobSidecarCallback func(ctx context.Context, sidecars []*ethpb.BlobSidecar) error
}
var _ BlobsDB = &mockBlobsDB{}
func (b *mockBlobsDB) BlobSidecarsByRoot(ctx context.Context, root [32]byte, indices ...uint64) ([]*ethpb.BlobSidecar, error) {
if b.BlobSidecarsByRootCallback != nil {
return b.BlobSidecarsByRootCallback(ctx, root, indices...)
}
return nil, nil
}
func (b *mockBlobsDB) SaveBlobSidecar(ctx context.Context, sidecars []*ethpb.BlobSidecar) error {
if b.SaveBlobSidecarCallback != nil {
return b.SaveBlobSidecarCallback(ctx, sidecars)
}
return nil
}
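A small usage sketch for tests (the helper name is an assumption): the callback simulates a DA check that is still waiting on index 0, which lets a test exercise the missing-blob path without a real store.
func newWaitingMock() *MockAvailabilityStore {
	return &MockAvailabilityStore{
		VerifyAvailabilityCallback: func(ctx context.Context, current primitives.Slot, b blocks.ROBlock) error {
			return NewMissingIndicesError([]uint64{0})
		},
	}
}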

View File

@@ -1,11 +1,6 @@
package db
import (
"errors"
"os"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/kv"
)
import "github.com/prysmaticlabs/prysm/v4/beacon-chain/db/kv"
// ErrNotFound can be used to determine if an error from a method in the database package
// represents a "not found" error. These often require different handling than a low-level
@@ -24,9 +19,3 @@ var ErrNotFoundBackfillBlockRoot = kv.ErrNotFoundBackfillBlockRoot
// ErrNotFoundGenesisBlockRoot means no genesis block root was found, indicating the db was not initialized with genesis
var ErrNotFoundGenesisBlockRoot = kv.ErrNotFoundGenesisBlockRoot
// IsNotFound allows callers to treat errors from a flat-file database, where the file record is missing,
// as equivalent to db.ErrNotFound.
func IsNotFound(err error) bool {
return errors.Is(err, ErrNotFound) || os.IsNotExist(err)
}

View File

@@ -1,37 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"blob.go",
"ephemeral.go",
],
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem",
visibility = ["//visibility:public"],
deps = [
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types/blocks:go_default_library",
"//io/file:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/logging:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_spf13_afero//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["blob_test.go"],
embed = [":go_default_library"],
deps = [
"//beacon-chain/verification:go_default_library",
"//config/fieldparams:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_spf13_afero//:go_default_library",
],
)

View File

@@ -1,179 +0,0 @@
package filesystem
import (
"fmt"
"os"
"path"
"strconv"
"strings"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/verification"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v4/io/file"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/runtime/logging"
log "github.com/sirupsen/logrus"
"github.com/spf13/afero"
)
var (
errIndexOutOfBounds = errors.New("blob index in file name > MaxBlobsPerBlock")
)
const (
sszExt = "ssz"
partExt = "part"
directoryPermissions = 0700
)
// NewBlobStorage creates a new instance of the BlobStorage object. Note that the implementation of BlobStorage may
// attempt to hold a file lock to guarantee exclusive control of the blob storage directory, so this should only be
// initialized once per beacon node.
func NewBlobStorage(base string) (*BlobStorage, error) {
base = path.Clean(base)
if err := file.MkdirAll(base); err != nil {
return nil, err
}
fs := afero.NewBasePathFs(afero.NewOsFs(), base)
return &BlobStorage{fs: fs}, nil
}
// BlobStorage is the concrete implementation of the filesystem backend for saving and retrieving BlobSidecars.
type BlobStorage struct {
fs afero.Fs
}
// Save persists the given blob sidecar to the filesystem store.
func (bs *BlobStorage) Save(sidecar blocks.VerifiedROBlob) error {
fname := namerForSidecar(sidecar)
sszPath := fname.path()
exists, err := afero.Exists(bs.fs, sszPath)
if err != nil {
return err
}
if exists {
log.WithFields(logging.BlobFields(sidecar.ROBlob)).Debug("ignoring a duplicate blob sidecar Save attempt")
return nil
}
// Serialize the ethpb.BlobSidecar to binary data using SSZ.
sidecarData, err := sidecar.MarshalSSZ()
if err != nil {
return errors.Wrap(err, "failed to serialize sidecar data")
}
if err := bs.fs.MkdirAll(fname.dir(), directoryPermissions); err != nil {
return err
}
partPath := fname.partPath()
// Create a partial file and write the serialized data to it.
partialFile, err := bs.fs.Create(partPath)
if err != nil {
return errors.Wrap(err, "failed to create partial file")
}
_, err = partialFile.Write(sidecarData)
if err != nil {
closeErr := partialFile.Close()
if closeErr != nil {
return closeErr
}
return errors.Wrap(err, "failed to write to partial file")
}
err = partialFile.Close()
if err != nil {
return err
}
// Atomically rename the partial file to its final name.
err = bs.fs.Rename(partPath, sszPath)
if err != nil {
return errors.Wrap(err, "failed to rename partial file to final name")
}
return nil
}
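The write-to-.part-then-rename dance above is a standard atomic-write pattern. Here is a generic stdlib sketch of the same idea, assuming the partial and final files live in the same directory so the rename is atomic on POSIX filesystems:

```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

// writeAtomically writes data to <dir>/<name>.part and then renames it to
// <dir>/<name>, so readers never observe a half-written file.
func writeAtomically(dir, name string, data []byte) error {
	part := filepath.Join(dir, name+".part")
	if err := os.WriteFile(part, data, 0o600); err != nil {
		return err
	}
	return os.Rename(part, filepath.Join(dir, name))
}

func main() {
	dir, err := os.MkdirTemp("", "blobs")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)
	if err := writeAtomically(dir, "0.ssz", []byte("ssz bytes")); err != nil {
		log.Fatal(err)
	}
}
```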
// Get retrieves a single BlobSidecar by its root and index.
// Since BlobStorage only writes blobs that have undergone full verification, the return
// value is always a VerifiedROBlob.
func (bs *BlobStorage) Get(root [32]byte, idx uint64) (blocks.VerifiedROBlob, error) {
expected := blobNamer{root: root, index: idx}
encoded, err := afero.ReadFile(bs.fs, expected.path())
var v blocks.VerifiedROBlob
if err != nil {
return v, err
}
s := &ethpb.BlobSidecar{}
if err := s.UnmarshalSSZ(encoded); err != nil {
return v, err
}
ro, err := blocks.NewROBlobWithRoot(s, root)
if err != nil {
return blocks.VerifiedROBlob{}, err
}
return verification.BlobSidecarNoop(ro)
}
// Indices generates a bitmap representing which BlobSidecar.Index values are present on disk for a given root.
// This value can be compared to the commitments observed in a block to determine which indices need to be found
// on the network to confirm data availability.
func (bs *BlobStorage) Indices(root [32]byte) ([fieldparams.MaxBlobsPerBlock]bool, error) {
var mask [fieldparams.MaxBlobsPerBlock]bool
rootDir := blobNamer{root: root}.dir()
entries, err := afero.ReadDir(bs.fs, rootDir)
if err != nil {
if os.IsNotExist(err) {
return mask, nil
}
return mask, err
}
for i := range entries {
if entries[i].IsDir() {
continue
}
name := entries[i].Name()
if !strings.HasSuffix(name, sszExt) {
continue
}
parts := strings.Split(name, ".")
if len(parts) != 2 {
continue
}
u, err := strconv.ParseUint(parts[0], 10, 64)
if err != nil {
return mask, errors.Wrapf(err, "unexpected directory entry breaks listing, %s", parts[0])
}
if u >= fieldparams.MaxBlobsPerBlock {
return mask, errIndexOutOfBounds
}
mask[u] = true
}
return mask, nil
}
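A sketch of how a caller might use this bitmap for the data-availability check described in the comment above, assuming MaxBlobsPerBlock is 6 (the value used elsewhere in this diff) and that the commitment count is read from the block body:

```go
package main

import "fmt"

// missingIndices returns the sidecar indices that are committed to by the
// block but not yet present on disk.
func missingIndices(onDisk [6]bool, commitmentCount int) []uint64 {
	var missing []uint64
	for i := 0; i < commitmentCount && i < len(onDisk); i++ {
		if !onDisk[i] {
			missing = append(missing, uint64(i))
		}
	}
	return missing
}

func main() {
	mask := [6]bool{true, false, true}   // indices 0 and 2 already stored
	fmt.Println(missingIndices(mask, 4)) // [1 3]: fetch these from peers
}
```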
type blobNamer struct {
root [32]byte
index uint64
}
func namerForSidecar(sc blocks.VerifiedROBlob) blobNamer {
return blobNamer{root: sc.BlockRoot(), index: sc.Index}
}
func (p blobNamer) dir() string {
return fmt.Sprintf("%#x", p.root)
}
func (p blobNamer) fname(ext string) string {
return path.Join(p.dir(), fmt.Sprintf("%d.%s", p.index, ext))
}
func (p blobNamer) partPath() string {
return p.fname(partExt)
}
func (p blobNamer) path() string {
return p.fname(sszExt)
}
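For reference, a small sketch of the on-disk layout these helpers produce: one directory per block root (0x-prefixed hex), one <index>.ssz file per sidecar, plus a transient <index>.part file while a Save is in flight:

```go
package main

import (
	"fmt"
	"path"
)

func main() {
	root := [32]byte{0xab, 0xcd}
	idx := uint64(3)
	dir := fmt.Sprintf("%#x", root) // same formatting as blobNamer.dir above
	fmt.Println(path.Join(dir, fmt.Sprintf("%d.%s", idx, "ssz")))
	// prints 0xabcd…0000/3.ssz (64 hex characters in the directory name)
}
```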

View File

@@ -1,100 +0,0 @@
package filesystem
import (
"bytes"
"testing"
ssz "github.com/prysmaticlabs/fastssz"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/verification"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/spf13/afero"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
)
func TestBlobStorage_SaveBlobData(t *testing.T) {
_, sidecars := util.GenerateTestDenebBlockWithSidecar(t, [32]byte{}, 1, fieldparams.MaxBlobsPerBlock)
testSidecars, err := verification.BlobSidecarSliceNoop(sidecars)
require.NoError(t, err)
t.Run("no error for duplicate", func(t *testing.T) {
fs, bs := NewEphemeralBlobStorageWithFs(t)
existingSidecar := testSidecars[0]
blobPath := namerForSidecar(existingSidecar).path()
// Serialize the existing BlobSidecar to binary data.
existingSidecarData, err := ssz.MarshalSSZ(existingSidecar)
require.NoError(t, err)
require.NoError(t, bs.Save(existingSidecar))
// No error when attempting to write twice.
require.NoError(t, bs.Save(existingSidecar))
content, err := afero.ReadFile(fs, blobPath)
require.NoError(t, err)
require.Equal(t, true, bytes.Equal(existingSidecarData, content))
// Deserialize the BlobSidecar from the saved file data.
savedSidecar := &ethpb.BlobSidecar{}
err = savedSidecar.UnmarshalSSZ(content)
require.NoError(t, err)
// Compare the original Sidecar and the saved Sidecar.
require.DeepSSZEqual(t, existingSidecar.BlobSidecar, savedSidecar)
})
t.Run("indices", func(t *testing.T) {
bs := NewEphemeralBlobStorage(t)
sc := testSidecars[2]
require.NoError(t, bs.Save(sc))
actualSc, err := bs.Get(sc.BlockRoot(), sc.Index)
require.NoError(t, err)
expectedIdx := [fieldparams.MaxBlobsPerBlock]bool{false, false, true}
actualIdx, err := bs.Indices(actualSc.BlockRoot())
require.NoError(t, err)
require.Equal(t, expectedIdx, actualIdx)
})
t.Run("round trip write then read", func(t *testing.T) {
bs := NewEphemeralBlobStorage(t)
err := bs.Save(testSidecars[0])
require.NoError(t, err)
expected := testSidecars[0]
actual, err := bs.Get(expected.BlockRoot(), expected.Index)
require.NoError(t, err)
require.DeepSSZEqual(t, expected, actual)
})
}
func TestBlobIndicesBounds(t *testing.T) {
fs, bs := NewEphemeralBlobStorageWithFs(t)
root := [32]byte{}
okIdx := uint64(fieldparams.MaxBlobsPerBlock - 1)
writeFakeSSZ(t, fs, root, okIdx)
indices, err := bs.Indices(root)
require.NoError(t, err)
var expected [fieldparams.MaxBlobsPerBlock]bool
expected[okIdx] = true
for i := range expected {
require.Equal(t, expected[i], indices[i])
}
oobIdx := uint64(fieldparams.MaxBlobsPerBlock)
writeFakeSSZ(t, fs, root, oobIdx)
_, err = bs.Indices(root)
require.ErrorIs(t, err, errIndexOutOfBounds)
}
func writeFakeSSZ(t *testing.T, fs afero.Fs, root [32]byte, idx uint64) {
namer := blobNamer{root: root, index: idx}
require.NoError(t, fs.MkdirAll(namer.dir(), 0700))
fh, err := fs.Create(namer.path())
require.NoError(t, err)
_, err = fh.Write([]byte("derp"))
require.NoError(t, err)
require.NoError(t, fh.Close())
}

View File

@@ -1,53 +0,0 @@
package filesystem
import (
"testing"
"github.com/spf13/afero"
)
// NewEphemeralBlobStorage should only be used for tests.
// The instance of BlobStorage returned is backed by an in-memory virtual filesystem,
// improving test performance and simplifying cleanup.
func NewEphemeralBlobStorage(_ testing.TB) *BlobStorage {
return &BlobStorage{fs: afero.NewMemMapFs()}
}
// NewEphemeralBlobStorageWithFs can be used by tests that want access to the virtual filesystem
// in order to interact with it outside the parameters of the BlobStorage API.

func NewEphemeralBlobStorageWithFs(_ testing.TB) (afero.Fs, *BlobStorage) {
fs := afero.NewMemMapFs()
return fs, &BlobStorage{fs: fs}
}
type BlobMocker struct {
fs afero.Fs
bs *BlobStorage
}
// CreateFakeIndices creates empty blob sidecar files at the expected path for the given
// root and indices to influence the result of Indices().
func (bm *BlobMocker) CreateFakeIndices(root [32]byte, indices []uint64) error {
for i := range indices {
n := blobNamer{root: root, index: indices[i]}
if err := bm.fs.MkdirAll(n.dir(), directoryPermissions); err != nil {
return err
}
f, err := bm.fs.Create(n.path())
if err != nil {
return err
}
if err := f.Close(); err != nil {
return err
}
}
return nil
}
// NewEphemeralBlobStorageWithMocker returns a *BlobMocker value in addition to the BlobStorage value.
// BlobMocker encapsulates things like blob path construction to avoid leaking implementation details.
func NewEphemeralBlobStorageWithMocker(_ testing.TB) (*BlobMocker, *BlobStorage) {
fs := afero.NewMemMapFs()
bs := &BlobStorage{fs: fs}
return &BlobMocker{fs: fs, bs: bs}, bs
}
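A hypothetical in-package test sketch showing how the mocker can fake on-disk indices without writing real SSZ payloads (the test name is an assumption):

```go
package filesystem

import (
	"testing"

	"github.com/prysmaticlabs/prysm/v4/testing/require"
)

func TestIndicesWithMocker(t *testing.T) {
	bm, bs := NewEphemeralBlobStorageWithMocker(t)
	root := [32]byte{0x01}
	require.NoError(t, bm.CreateFakeIndices(root, []uint64{0, 2}))

	mask, err := bs.Indices(root)
	require.NoError(t, err)
	require.Equal(t, true, mask[0])
	require.Equal(t, false, mask[1])
	require.Equal(t, true, mask[2])
}
```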

View File

@@ -56,8 +56,8 @@ type ReadOnlyDatabase interface {
RegistrationByValidatorID(ctx context.Context, id primitives.ValidatorIndex) (*ethpb.ValidatorRegistrationV1, error)
// Blob operations.
BlobSidecarsByRoot(ctx context.Context, beaconBlockRoot [32]byte, indices ...uint64) ([]*ethpb.DeprecatedBlobSidecar, error)
BlobSidecarsBySlot(ctx context.Context, slot primitives.Slot, indices ...uint64) ([]*ethpb.DeprecatedBlobSidecar, error)
BlobSidecarsByRoot(ctx context.Context, beaconBlockRoot [32]byte, indices ...uint64) ([]*ethpb.BlobSidecar, error)
BlobSidecarsBySlot(ctx context.Context, slot primitives.Slot, indices ...uint64) ([]*ethpb.BlobSidecar, error)
// origin checkpoint sync support
OriginCheckpointBlockRoot(ctx context.Context) ([32]byte, error)
@@ -95,7 +95,7 @@ type NoHeadAccessDatabase interface {
SaveRegistrationsByValidatorIDs(ctx context.Context, ids []primitives.ValidatorIndex, regs []*ethpb.ValidatorRegistrationV1) error
// Blob operations.
SaveBlobSidecar(ctx context.Context, sidecars []*ethpb.DeprecatedBlobSidecar) error
SaveBlobSidecar(ctx context.Context, sidecars []*ethpb.BlobSidecar) error
DeleteBlobSidecars(ctx context.Context, beaconBlockRoot [32]byte) error
CleanUpDirtyStates(ctx context.Context, slotsPerArchivedPoint primitives.Slot) error

View File

@@ -124,7 +124,6 @@ go_test(
"@com_github_ethereum_go_ethereum//common:go_default_library",
"@com_github_golang_snappy//:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",
"@io_bazel_rules_go//go/tools/bazel:go_default_library",
"@io_etcd_go_bbolt//:go_default_library",

View File

@@ -55,7 +55,7 @@ func (rk blobRotatingKey) BlockRoot() []byte {
// 3. Begin the save algorithm: If the incoming blob has a slot bigger than the saved slot at the spot
// in the rotating keys buffer, we overwrite all elements for that slot. Otherwise, we merge the blob with an existing one.
// Trying to replace a newer blob with an older one is an error.
func (s *Store) SaveBlobSidecar(ctx context.Context, scs []*ethpb.DeprecatedBlobSidecar) error {
func (s *Store) SaveBlobSidecar(ctx context.Context, scs []*ethpb.BlobSidecar) error {
if len(scs) == 0 {
return errEmptySidecar
}
@@ -123,7 +123,7 @@ func (s *Store) SaveBlobSidecar(ctx context.Context, scs []*ethpb.DeprecatedBlob
// validUniqueSidecars ensures that all sidecars have the same slot, parent root, block root, and proposer index, and
// there are no more than MAX_BLOBS_PER_BLOCK sidecars.
func validUniqueSidecars(scs []*ethpb.DeprecatedBlobSidecar) ([]*ethpb.DeprecatedBlobSidecar, error) {
func validUniqueSidecars(scs []*ethpb.BlobSidecar) ([]*ethpb.BlobSidecar, error) {
if len(scs) == 0 {
return nil, errEmptySidecar
}
@@ -167,7 +167,7 @@ func validUniqueSidecars(scs []*ethpb.DeprecatedBlobSidecar) ([]*ethpb.Deprecate
}
// sortSidecars sorts the sidecars by their index.
func sortSidecars(scs []*ethpb.DeprecatedBlobSidecar) {
func sortSidecars(scs []*ethpb.BlobSidecar) {
sort.Slice(scs, func(i, j int) bool {
return scs[i].Index < scs[j].Index
})
@@ -178,7 +178,7 @@ func sortSidecars(scs []*ethpb.DeprecatedBlobSidecar) {
// Otherwise, the result will be filtered to only include the specified indices.
// An error will result if an invalid index is specified.
// The bucket size is bounded by 131072 entries. That's the most blobs a node will keep before rotating it out.
func (s *Store) BlobSidecarsByRoot(ctx context.Context, root [32]byte, indices ...uint64) ([]*ethpb.DeprecatedBlobSidecar, error) {
func (s *Store) BlobSidecarsByRoot(ctx context.Context, root [32]byte, indices ...uint64) ([]*ethpb.BlobSidecar, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.BlobSidecarsByRoot")
defer span.End()
@@ -207,7 +207,38 @@ func (s *Store) BlobSidecarsByRoot(ctx context.Context, root [32]byte, indices .
return filterForIndices(sc, indices...)
}
func filterForIndices(sc *ethpb.BlobSidecars, indices ...uint64) ([]*ethpb.DeprecatedBlobSidecar, error) {
func (s *Store) BlobIndicesAvailable(ctx context.Context, root [32]byte) ([]uint64, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.BlobIndicesAvailable")
defer span.End()
var enc []byte
if err := s.db.View(func(tx *bolt.Tx) error {
c := tx.Bucket(blobsBucket).Cursor()
for k, v := c.First(); k != nil; k, v = c.Next() {
if bytes.HasSuffix(k, root[:]) {
enc = v
break
}
}
return nil
}); err != nil {
return nil, err
}
if enc == nil {
return nil, ErrNotFound
}
scs := &ethpb.BlobSidecars{}
if err := decode(ctx, enc, scs); err != nil {
return nil, err
}
idxs := make([]uint64, len(scs.Sidecars))
for i := range scs.Sidecars {
idxs[i] = scs.Sidecars[i].Index
}
return idxs, nil
}
func filterForIndices(sc *ethpb.BlobSidecars, indices ...uint64) ([]*ethpb.BlobSidecar, error) {
if len(indices) == 0 {
return sc.Sidecars, nil
}
@@ -215,7 +246,7 @@ func filterForIndices(sc *ethpb.BlobSidecars, indices ...uint64) ([]*ethpb.Depre
// in ascending order from eg 0..3, without gaps. This allows us to assume the indices argument
// maps 1:1 with indices in the BlobSidecars storage object.
maxIdx := uint64(len(sc.Sidecars)) - 1
sidecars := make([]*ethpb.DeprecatedBlobSidecar, len(indices))
sidecars := make([]*ethpb.BlobSidecar, len(indices))
for i, idx := range indices {
if idx > maxIdx {
return nil, errors.Wrapf(ErrNotFound, "BlobSidecars missing index: index %d", idx)
@@ -230,7 +261,7 @@ func filterForIndices(sc *ethpb.BlobSidecars, indices ...uint64) ([]*ethpb.Depre
// Otherwise, the result will be filtered to only include the specified indices.
// An error will result if an invalid index is specified.
// The bucket size is bounded by 131072 entries. That's the most blobs a node will keep before rotating it out.
func (s *Store) BlobSidecarsBySlot(ctx context.Context, slot types.Slot, indices ...uint64) ([]*ethpb.DeprecatedBlobSidecar, error) {
func (s *Store) BlobSidecarsBySlot(ctx context.Context, slot types.Slot, indices ...uint64) ([]*ethpb.BlobSidecar, error) {
ctx, span := trace.StartSpan(ctx, "BeaconDB.BlobSidecarsBySlot")
defer span.End()
@@ -281,7 +312,7 @@ func (s *Store) DeleteBlobSidecars(ctx context.Context, beaconBlockRoot [32]byte
// We define a blob sidecar key as: bytes(slot_to_rotating_buffer(blob.slot)) ++ bytes(blob.slot) ++ blob.block_root
// where slot_to_rotating_buffer(slot) = slot % MAX_SLOTS_TO_PERSIST_BLOBS.
func blobSidecarKey(blob *ethpb.DeprecatedBlobSidecar) blobRotatingKey {
func blobSidecarKey(blob *ethpb.BlobSidecar) blobRotatingKey {
key := slotKey(blob.Slot)
key = append(key, bytesutil.SlotToBytesBigEndian(blob.Slot)...)
key = append(key, blob.BlockRoot...)
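A worked example of the rotating key described in the comment above. The 8-byte big-endian slot encoding and the 131072-slot bound come from the surrounding comments and bytesutil usage, but the exact slotKey encoding is an assumption:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

const maxSlotsToPersistBlobs = 131072 // bucket bound mentioned in the comments above

// rotatingKey sketches bytes(slot % maxSlotsToPersistBlobs) ++ bytes(slot) ++ blockRoot.
func rotatingKey(slot uint64, root [32]byte) []byte {
	key := make([]byte, 0, 8+8+32)
	key = binary.BigEndian.AppendUint64(key, slot%maxSlotsToPersistBlobs)
	key = binary.BigEndian.AppendUint64(key, slot)
	return append(key, root[:]...)
}

func main() {
	// Slots 1 and 131073 share the same rotating prefix, so the newer slot
	// overwrites the older entry when it arrives.
	fmt.Printf("%x\n", rotatingKey(1, [32]byte{0xaa}))
	fmt.Printf("%x\n", rotatingKey(131073, [32]byte{0xaa}))
}
```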

View File

@@ -22,7 +22,7 @@ import (
bolt "go.etcd.io/bbolt"
)
func equalBlobSlices(expect []*ethpb.DeprecatedBlobSidecar, got []*ethpb.DeprecatedBlobSidecar) error {
func equalBlobSlices(expect []*ethpb.BlobSidecar, got []*ethpb.BlobSidecar) error {
if len(expect) != len(got) {
return fmt.Errorf("mismatched lengths, expect=%d, got=%d", len(expect), len(got))
}
@@ -80,7 +80,7 @@ func TestStore_BlobSidecars(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.BlobSidecar{sc}))
}
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
@@ -94,7 +94,7 @@ func TestStore_BlobSidecars(t *testing.T) {
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
// we'll request indices 0 and 3, so make a slice with those indices for comparison
expect := make([]*ethpb.DeprecatedBlobSidecar, 2)
expect := make([]*ethpb.BlobSidecar, 2)
expect[0] = scs[0]
expect[1] = scs[3]
@@ -136,7 +136,7 @@ func TestStore_BlobSidecars(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.BlobSidecar{sc}))
}
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
got, err := db.BlobSidecarsBySlot(ctx, scs[0].Slot)
@@ -150,7 +150,7 @@ func TestStore_BlobSidecars(t *testing.T) {
require.Equal(t, fieldparams.MaxBlobsPerBlock, len(scs))
// we'll request indices 0 and 3, so make a slice with those indices for comparison
expect := make([]*ethpb.DeprecatedBlobSidecar, 2)
expect := make([]*ethpb.BlobSidecar, 2)
expect[0] = scs[0]
expect[1] = scs[3]
@@ -191,11 +191,11 @@ func TestStore_BlobSidecars(t *testing.T) {
for i := 0; i < fieldparams.MaxBlobsPerBlock; i++ {
scs[i].Slot = primitives.Slot(i)
scs[i].BlockRoot = bytesutil.PadTo([]byte{byte(i)}, 32)
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{scs[i]}))
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.BlobSidecar{scs[i]}))
br := bytesutil.ToBytes32(scs[i].BlockRoot)
saved, err := db.BlobSidecarsByRoot(ctx, br)
require.NoError(t, err)
require.NoError(t, equalBlobSlices([]*ethpb.DeprecatedBlobSidecar{scs[i]}, saved))
require.NoError(t, equalBlobSlices([]*ethpb.BlobSidecar{scs[i]}, saved))
}
})
t.Run("saving a new blob for rotation (batch)", func(t *testing.T) {
@@ -226,7 +226,7 @@ func TestStore_BlobSidecars(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.BlobSidecar{sc}))
}
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
@@ -238,7 +238,7 @@ func TestStore_BlobSidecars(t *testing.T) {
sc.Slot = sc.Slot + newRetentionSlot
}
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.BlobSidecar{sc}))
}
_, err = db.BlobSidecarsBySlot(ctx, 100)
@@ -264,7 +264,7 @@ func TestStore_BlobSidecars(t *testing.T) {
sc.Slot = sc.Slot + newRetentionSlot
}
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.BlobSidecar{sc}))
}
_, err = db.BlobSidecarsBySlot(ctx, 100)
@@ -278,7 +278,7 @@ func TestStore_BlobSidecars(t *testing.T) {
db := setupDB(t)
scs := generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock)
for _, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.BlobSidecar{sc}))
}
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
require.NoError(t, err)
@@ -304,8 +304,8 @@ func TestStore_BlobSidecars(t *testing.T) {
eScs := generateEquivocatingBlobSidecars(t, fieldparams.MaxBlobsPerBlock/2)
for i, sc := range scs {
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{sc}))
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{eScs[i]}))
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.BlobSidecar{sc}))
require.NoError(t, db.SaveBlobSidecar(ctx, []*ethpb.BlobSidecar{eScs[i]}))
}
got, err := db.BlobSidecarsByRoot(ctx, bytesutil.ToBytes32(scs[0].BlockRoot))
@@ -318,15 +318,15 @@ func TestStore_BlobSidecars(t *testing.T) {
})
}
func generateBlobSidecars(t *testing.T, n uint64) []*ethpb.DeprecatedBlobSidecar {
blobSidecars := make([]*ethpb.DeprecatedBlobSidecar, n)
func generateBlobSidecars(t *testing.T, n uint64) []*ethpb.BlobSidecar {
blobSidecars := make([]*ethpb.BlobSidecar, n)
for i := uint64(0); i < n; i++ {
blobSidecars[i] = generateBlobSidecar(t, i)
}
return blobSidecars
}
func generateBlobSidecar(t *testing.T, index uint64) *ethpb.DeprecatedBlobSidecar {
func generateBlobSidecar(t *testing.T, index uint64) *ethpb.BlobSidecar {
blob := make([]byte, 131072)
_, err := rand.Read(blob)
require.NoError(t, err)
@@ -336,7 +336,7 @@ func generateBlobSidecar(t *testing.T, index uint64) *ethpb.DeprecatedBlobSideca
kzgProof := make([]byte, 48)
_, err = rand.Read(kzgProof)
require.NoError(t, err)
return &ethpb.DeprecatedBlobSidecar{
return &ethpb.BlobSidecar{
BlockRoot: bytesutil.PadTo([]byte{'a'}, 32),
Index: index,
Slot: 100,
@@ -348,15 +348,15 @@ func generateBlobSidecar(t *testing.T, index uint64) *ethpb.DeprecatedBlobSideca
}
}
func generateEquivocatingBlobSidecars(t *testing.T, n uint64) []*ethpb.DeprecatedBlobSidecar {
blobSidecars := make([]*ethpb.DeprecatedBlobSidecar, n)
func generateEquivocatingBlobSidecars(t *testing.T, n uint64) []*ethpb.BlobSidecar {
blobSidecars := make([]*ethpb.BlobSidecar, n)
for i := uint64(0); i < n; i++ {
blobSidecars[i] = generateEquivocatingBlobSidecar(t, i)
}
return blobSidecars
}
func generateEquivocatingBlobSidecar(t *testing.T, index uint64) *ethpb.DeprecatedBlobSidecar {
func generateEquivocatingBlobSidecar(t *testing.T, index uint64) *ethpb.BlobSidecar {
blob := make([]byte, 131072)
_, err := rand.Read(blob)
require.NoError(t, err)
@@ -367,7 +367,7 @@ func generateEquivocatingBlobSidecar(t *testing.T, index uint64) *ethpb.Deprecat
_, err = rand.Read(kzgProof)
require.NoError(t, err)
return &ethpb.DeprecatedBlobSidecar{
return &ethpb.BlobSidecar{
BlockRoot: bytesutil.PadTo([]byte{'c'}, 32),
Index: index,
Slot: 100,
@@ -382,16 +382,16 @@ func generateEquivocatingBlobSidecar(t *testing.T, index uint64) *ethpb.Deprecat
func Test_validUniqueSidecars_validation(t *testing.T) {
tests := []struct {
name string
scs []*ethpb.DeprecatedBlobSidecar
scs []*ethpb.BlobSidecar
err error
}{
{name: "empty", scs: []*ethpb.DeprecatedBlobSidecar{}, err: errEmptySidecar},
{name: "empty", scs: []*ethpb.BlobSidecar{}, err: errEmptySidecar},
{name: "too many sidecars", scs: generateBlobSidecars(t, fieldparams.MaxBlobsPerBlock+1), err: errBlobSidecarLimit},
{name: "invalid slot", scs: []*ethpb.DeprecatedBlobSidecar{{Slot: 1}, {Slot: 2}}, err: errBlobSlotMismatch},
{name: "invalid proposer index", scs: []*ethpb.DeprecatedBlobSidecar{{ProposerIndex: 1}, {ProposerIndex: 2}}, err: errBlobProposerMismatch},
{name: "invalid root", scs: []*ethpb.DeprecatedBlobSidecar{{BlockRoot: []byte{1}}, {BlockRoot: []byte{2}}}, err: errBlobRootMismatch},
{name: "invalid parent root", scs: []*ethpb.DeprecatedBlobSidecar{{BlockParentRoot: []byte{1}}, {BlockParentRoot: []byte{2}}}, err: errBlobParentMismatch},
{name: "happy path", scs: []*ethpb.DeprecatedBlobSidecar{{Index: 0}, {Index: 1}}},
{name: "invalid slot", scs: []*ethpb.BlobSidecar{{Slot: 1}, {Slot: 2}}, err: errBlobSlotMismatch},
{name: "invalid proposer index", scs: []*ethpb.BlobSidecar{{ProposerIndex: 1}, {ProposerIndex: 2}}, err: errBlobProposerMismatch},
{name: "invalid root", scs: []*ethpb.BlobSidecar{{BlockRoot: []byte{1}}, {BlockRoot: []byte{2}}}, err: errBlobRootMismatch},
{name: "invalid parent root", scs: []*ethpb.BlobSidecar{{BlockParentRoot: []byte{1}}, {BlockParentRoot: []byte{2}}}, err: errBlobParentMismatch},
{name: "happy path", scs: []*ethpb.BlobSidecar{{Index: 0}, {Index: 1}}},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
@@ -408,43 +408,43 @@ func Test_validUniqueSidecars_validation(t *testing.T) {
func Test_validUniqueSidecars_dedup(t *testing.T) {
cases := []struct {
name string
scs []*ethpb.DeprecatedBlobSidecar
expected []*ethpb.DeprecatedBlobSidecar
scs []*ethpb.BlobSidecar
expected []*ethpb.BlobSidecar
err error
}{
{
name: "duplicate sidecar",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 1}},
expected: []*ethpb.DeprecatedBlobSidecar{{Index: 1}},
scs: []*ethpb.BlobSidecar{{Index: 1}, {Index: 1}},
expected: []*ethpb.BlobSidecar{{Index: 1}},
},
{
name: "single sidecar",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}},
expected: []*ethpb.DeprecatedBlobSidecar{{Index: 1}},
scs: []*ethpb.BlobSidecar{{Index: 1}},
expected: []*ethpb.BlobSidecar{{Index: 1}},
},
{
name: "multiple duplicates",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 2}, {Index: 3}, {Index: 3}},
expected: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}},
scs: []*ethpb.BlobSidecar{{Index: 1}, {Index: 2}, {Index: 2}, {Index: 3}, {Index: 3}},
expected: []*ethpb.BlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}},
},
{
name: "ok number after de-dupe, > 6 before",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 2}, {Index: 2}, {Index: 2}, {Index: 3}, {Index: 3}},
expected: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}},
scs: []*ethpb.BlobSidecar{{Index: 1}, {Index: 2}, {Index: 2}, {Index: 2}, {Index: 2}, {Index: 3}, {Index: 3}},
expected: []*ethpb.BlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}},
},
{
name: "max unique, no dupes",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}, {Index: 4}, {Index: 5}, {Index: 6}},
expected: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}, {Index: 4}, {Index: 5}, {Index: 6}},
scs: []*ethpb.BlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}, {Index: 4}, {Index: 5}, {Index: 6}},
expected: []*ethpb.BlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}, {Index: 4}, {Index: 5}, {Index: 6}},
},
{
name: "too many unique",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}, {Index: 4}, {Index: 5}, {Index: 6}, {Index: 7}},
scs: []*ethpb.BlobSidecar{{Index: 1}, {Index: 2}, {Index: 3}, {Index: 4}, {Index: 5}, {Index: 6}, {Index: 7}},
err: errBlobSidecarLimit,
},
{
name: "too many unique with dupes",
scs: []*ethpb.DeprecatedBlobSidecar{{Index: 1}, {Index: 1}, {Index: 1}, {Index: 2}, {Index: 3}, {Index: 4}, {Index: 5}, {Index: 6}, {Index: 7}},
scs: []*ethpb.BlobSidecar{{Index: 1}, {Index: 1}, {Index: 1}, {Index: 2}, {Index: 3}, {Index: 4}, {Index: 5}, {Index: 6}, {Index: 7}},
err: errBlobSidecarLimit,
},
}
@@ -462,7 +462,7 @@ func Test_validUniqueSidecars_dedup(t *testing.T) {
}
func TestStore_sortSidecars(t *testing.T) {
scs := []*ethpb.DeprecatedBlobSidecar{
scs := []*ethpb.BlobSidecar{
{Index: 6},
{Index: 4},
{Index: 2},
@@ -480,7 +480,7 @@ func TestStore_sortSidecars(t *testing.T) {
func BenchmarkStore_BlobSidecarsByRoot(b *testing.B) {
s := setupDB(b)
ctx := context.Background()
require.NoError(b, s.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{
require.NoError(b, s.SaveBlobSidecar(ctx, []*ethpb.BlobSidecar{
{BlockRoot: bytesutil.PadTo([]byte{'a'}, 32), Slot: 0},
}))
@@ -490,7 +490,7 @@ func BenchmarkStore_BlobSidecarsByRoot(b *testing.B) {
r := make([]byte, 32)
_, err := rand.Read(r)
require.NoError(b, err)
scs := []*ethpb.DeprecatedBlobSidecar{
scs := []*ethpb.BlobSidecar{
{BlockRoot: r, Slot: primitives.Slot(i)},
}
k := blobSidecarKey(scs[0])
@@ -502,7 +502,7 @@ func BenchmarkStore_BlobSidecarsByRoot(b *testing.B) {
})
require.NoError(b, err)
require.NoError(b, s.SaveBlobSidecar(ctx, []*ethpb.DeprecatedBlobSidecar{
require.NoError(b, s.SaveBlobSidecar(ctx, []*ethpb.BlobSidecar{
{BlockRoot: bytesutil.PadTo([]byte{'b'}, 32), Slot: 131071},
}))
@@ -529,7 +529,7 @@ func Test_checkEpochsForBlobSidecarsRequestBucket(t *testing.T) {
}
func TestBlobRotatingKey(t *testing.T) {
k := blobSidecarKey(&ethpb.DeprecatedBlobSidecar{
k := blobSidecarKey(&ethpb.BlobSidecar{
Slot: 1,
BlockRoot: []byte{2},
})

View File

@@ -11,7 +11,7 @@ import (
var maxEpochsToPersistBlobs = params.BeaconNetworkConfig().MinEpochsForBlobsSidecarsRequest
// ConfigureBlobRetentionEpoch sets the epoch for blob retention based on command-line context. It sets the local config `maxEpochsToPersistBlobs`.
// ConfigureBlobRetentionEpoch sets the for blob retention based on command-line context. It sets the local config `maxEpochsToPersistBlobs`.
// If the flag is not set, the spec default `MinEpochsForBlobsSidecarsRequest` is used.
// An error if the input epoch is smaller than the spec default value.
func ConfigureBlobRetentionEpoch(cliCtx *cli.Context) error {

View File

@@ -35,5 +35,5 @@ func TestConfigureBlobRetentionEpoch(t *testing.T) {
require.NoError(t, set.Set(flags.BlobRetentionEpoch.Name, strconv.FormatUint(minEpochsForSidecarRequest-1, 10)))
cliCtx = cli.NewContext(&app, set, nil)
err := ConfigureBlobRetentionEpoch(cliCtx)
require.ErrorContains(t, "blob-retention-epochs smaller than spec default", err)
require.ErrorContains(t, "extend-blob-retention-epoch smaller than spec default", err)
}
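A pure-logic sketch of the validation this test exercises and the ConfigureBlobRetentionEpoch comment describes. The real function reads the flag from cliCtx and mutates maxEpochsToPersistBlobs, so the uint64 signature and error wording here are assumptions:

```go
package main

import (
	"errors"
	"fmt"
)

// resolveBlobRetention returns the spec default when the flag is unset and
// rejects any requested value below that default.
func resolveBlobRetention(flagSet bool, requested, specDefault uint64) (uint64, error) {
	if !flagSet {
		return specDefault, nil
	}
	if requested < specDefault {
		return 0, errors.New("blob retention epochs smaller than spec default")
	}
	return requested, nil
}

func main() {
	got, err := resolveBlobRetention(false, 0, 4096) // 4096: deneb MIN_EPOCHS_FOR_BLOB_SIDECARS_REQUESTS
	fmt.Println(got, err)                            // 4096 <nil>
	_, err = resolveBlobRetention(true, 100, 4096)
	fmt.Println(err) // blob retention epochs smaller than spec default
}
```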

View File

@@ -1,12 +1,7 @@
package kv
import (
"io"
"os"
"testing"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/sirupsen/logrus"
)
func init() {
@@ -15,9 +10,3 @@ func init() {
panic(err)
}
}
func TestMain(m *testing.M) {
logrus.SetLevel(logrus.DebugLevel)
logrus.SetOutput(io.Discard)
os.Exit(m.Run())
}

View File

@@ -5,6 +5,7 @@ go_library(
srcs = [
"block_cache.go",
"block_reader.go",
"check_transition_config.go",
"deposit.go",
"engine_client.go",
"errors.go",
@@ -81,6 +82,7 @@ go_test(
srcs = [
"block_cache_test.go",
"block_reader_test.go",
"check_transition_config_test.go",
"deposit_test.go",
"engine_client_fuzz_test.go",
"engine_client_test.go",
@@ -94,6 +96,7 @@ go_test(
embed = [":go_default_library"],
deps = [
"//async/event:go_default_library",
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/cache/depositcache:go_default_library",
"//beacon-chain/core/feed:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
@@ -137,5 +140,6 @@ go_test(
"@com_github_prometheus_client_golang//prometheus:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)

View File

@@ -0,0 +1,168 @@
package execution
import (
"context"
"errors"
"math"
"math/big"
"time"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/holiman/uint256"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/blocks"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/network"
pb "github.com/prysmaticlabs/prysm/v4/proto/engine/v1"
"github.com/prysmaticlabs/prysm/v4/time/slots"
"github.com/sirupsen/logrus"
)
var (
checkTransitionPollingInterval = time.Second * 10
logTtdInterval = time.Minute
configMismatchLog = "Configuration mismatch between your execution client and Prysm. " +
"Please check your execution client and restart it with the proper configuration. If this is not done, " +
"your node will not be able to complete the proof-of-stake transition"
needsEnginePortLog = "Could not check execution client configuration. " +
"You are probably connecting to your execution client on the wrong port. For the Ethereum " +
"merge, you will need to connect to your " +
"execution client on port 8551 rather than 8545. This is known as the 'engine API' port and needs to be " +
"authenticated if connecting via HTTP. See our documentation on how to set this up here " +
"https://docs.prylabs.network/docs/execution-node/authentication"
)
// Checks the transition configuration between Prysm and the connected execution node to ensure
// there are no differences in terminal total difficulty and terminal block hash.
// If there are any discrepancies, we must log errors to ensure users can resolve
// the problem and be ready for the merge transition.
func (s *Service) checkTransitionConfiguration(
ctx context.Context, blockNotifications chan *feed.Event,
) {
// If Bellatrix fork epoch is not set, we do not run this check.
if params.BeaconConfig().BellatrixForkEpoch == math.MaxUint64 {
return
}
i := new(big.Int)
i.SetString(params.BeaconConfig().TerminalTotalDifficulty, 10)
ttd := new(uint256.Int)
ttd.SetFromBig(i)
cfg := &pb.TransitionConfiguration{
TerminalTotalDifficulty: ttd.Hex(),
TerminalBlockHash: params.BeaconConfig().TerminalBlockHash[:],
TerminalBlockNumber: big.NewInt(0).Bytes(), // A value of 0 is recommended in the request.
}
err := s.ExchangeTransitionConfiguration(ctx, cfg)
if err != nil {
switch {
case errors.Is(err, ErrConfigMismatch):
log.WithError(err).Fatal(configMismatchLog)
case errors.Is(err, ErrMethodNotFound):
log.WithError(err).Error(needsEnginePortLog)
default:
log.WithError(err).Error("Could not check configuration values between execution and consensus client")
}
}
// We poll the execution client to see if the transition configuration has changed.
// This serves as a heartbeat to ensure the execution client and Prysm are ready for the
// Bellatrix hard-fork transition.
ticker := time.NewTicker(checkTransitionPollingInterval)
logTtdTicker := time.NewTicker(logTtdInterval)
hasTtdReached := false
defer ticker.Stop()
defer logTtdTicker.Stop()
sub := s.cfg.stateNotifier.StateFeed().Subscribe(blockNotifications)
defer sub.Unsubscribe()
for {
select {
case <-ctx.Done():
return
case <-sub.Err():
return
case ev := <-blockNotifications:
data, ok := ev.Data.(*statefeed.BlockProcessedData)
if !ok {
continue
}
isExecutionBlock, err := blocks.IsExecutionBlock(data.SignedBlock.Block().Body())
if err != nil {
log.WithError(err).Debug("Could not check whether signed block is execution block")
continue
}
if isExecutionBlock {
log.Debug("PoS transition is complete, no longer checking for configuration changes")
return
}
case tm := <-ticker.C:
ctx, cancel := context.WithDeadline(ctx, tm.Add(network.DefaultRPCHTTPTimeout))
err = s.ExchangeTransitionConfiguration(ctx, cfg)
s.handleExchangeConfigurationError(err)
cancel()
case <-logTtdTicker.C:
currentEpoch := slots.ToEpoch(slots.CurrentSlot(s.chainStartData.GetGenesisTime()))
if currentEpoch >= params.BeaconConfig().BellatrixForkEpoch && !hasTtdReached {
hasTtdReached, err = s.logTtdStatus(ctx, ttd)
if err != nil {
log.WithError(err).Error("Could not log ttd status")
}
}
}
}
}
// We check if there is a configuration mismatch error between the execution client
// and the Prysm beacon node. If so, we need to log errors in the node as it cannot successfully
// complete the merge transition for the Bellatrix hard fork.
func (s *Service) handleExchangeConfigurationError(err error) {
if err == nil {
// If the exchange transition configuration check succeeded, we clear
// the run error of the service if we had previously set it to ErrConfigMismatch.
if errors.Is(s.runError, ErrConfigMismatch) {
s.runError = nil
}
return
}
// If the error is a configuration mismatch, we set a runtime error in the service.
if errors.Is(err, ErrConfigMismatch) {
s.runError = err
log.WithError(err).Error(configMismatchLog)
return
} else if errors.Is(err, ErrMethodNotFound) {
log.WithError(err).Error(needsEnginePortLog)
return
}
log.WithError(err).Error("Could not check configuration values between execution and consensus client")
}
// Logs the terminal total difficulty status.
func (s *Service) logTtdStatus(ctx context.Context, ttd *uint256.Int) (bool, error) {
latest, err := s.LatestExecutionBlock(ctx)
switch {
case errors.Is(err, hexutil.ErrEmptyString):
return false, nil
case err != nil:
return false, err
case latest == nil:
return false, errors.New("latest block is nil")
case latest.TotalDifficulty == "":
return false, nil
default:
}
latestTtd, err := hexutil.DecodeBig(latest.TotalDifficulty)
if err != nil {
return false, err
}
if latestTtd.Cmp(ttd.ToBig()) >= 0 {
return true, nil
}
log.WithFields(logrus.Fields{
"latestDifficulty": latestTtd.String(),
"terminalDifficulty": ttd.ToBig().String(),
"network": params.BeaconConfig().ConfigName,
}).Info("Ready for The Merge")
totalTerminalDifficulty.Set(float64(latestTtd.Uint64()))
return false, nil
}
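The comparison in logTtdStatus reduces to a big-integer compare between the hex totalDifficulty reported by the execution client and the decimal TerminalTotalDifficulty string in the beacon config. A minimal sketch:

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum/common/hexutil"
)

// reachedTTD reports whether the execution chain's latest total difficulty
// (hex quantity) has reached the configured terminal total difficulty
// (decimal string), mirroring the check in logTtdStatus above.
func reachedTTD(latestHex, configuredDecimal string) (bool, error) {
	latest, err := hexutil.DecodeBig(latestHex)
	if err != nil {
		return false, err
	}
	ttd, ok := new(big.Int).SetString(configuredDecimal, 10)
	if !ok {
		return false, fmt.Errorf("invalid terminal total difficulty: %q", configuredDecimal)
	}
	return latest.Cmp(ttd) >= 0, nil
}

func main() {
	ok, err := reachedTTD("0x12345678", "305419896") // 0x12345678 == 305419896
	fmt.Println(ok, err)                             // true <nil>
}
```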

View File

@@ -0,0 +1,260 @@
package execution
import (
"context"
"encoding/json"
"errors"
"math/big"
"net/http"
"net/http/httptest"
"testing"
"time"
"github.com/ethereum/go-ethereum/common"
gethtypes "github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/rpc"
"github.com/holiman/uint256"
mockChain "github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
mocks "github.com/prysmaticlabs/prysm/v4/beacon-chain/execution/testing"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/blocks"
pb "github.com/prysmaticlabs/prysm/v4/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
"google.golang.org/protobuf/proto"
)
func Test_checkTransitionConfiguration(t *testing.T) {
params.SetupTestConfigCleanup(t)
cfg := params.BeaconConfig().Copy()
cfg.BellatrixForkEpoch = 0
params.OverrideBeaconConfig(cfg)
hook := logTest.NewGlobal()
t.Run("context canceled", func(t *testing.T) {
ctx := context.Background()
m := &mocks.EngineClient{}
m.Err = errors.New("something went wrong")
srv := setupTransitionConfigTest(t)
srv.cfg.stateNotifier = &mockChain.MockStateNotifier{}
checkTransitionPollingInterval = time.Millisecond
ctx, cancel := context.WithCancel(ctx)
go srv.checkTransitionConfiguration(ctx, make(chan *feed.Event, 1))
<-time.After(100 * time.Millisecond)
cancel()
require.LogsContain(t, hook, "Could not check configuration values")
})
t.Run("block containing execution payload exits routine", func(t *testing.T) {
ctx := context.Background()
m := &mocks.EngineClient{}
m.Err = errors.New("something went wrong")
srv := setupTransitionConfigTest(t)
srv.cfg.stateNotifier = &mockChain.MockStateNotifier{}
checkTransitionPollingInterval = time.Millisecond
ctx, cancel := context.WithCancel(ctx)
exit := make(chan bool)
notification := make(chan *feed.Event)
go func() {
srv.checkTransitionConfiguration(ctx, notification)
exit <- true
}()
payload := emptyPayload()
payload.GasUsed = 21000
wrappedBlock, err := blocks.NewSignedBeaconBlock(&ethpb.SignedBeaconBlockBellatrix{
Block: &ethpb.BeaconBlockBellatrix{
Body: &ethpb.BeaconBlockBodyBellatrix{
ExecutionPayload: payload,
},
}},
)
require.NoError(t, err)
notification <- &feed.Event{
Data: &statefeed.BlockProcessedData{
SignedBlock: wrappedBlock,
},
Type: statefeed.BlockProcessed,
}
<-exit
cancel()
require.LogsContain(t, hook, "PoS transition is complete, no longer checking")
})
}
func TestService_handleExchangeConfigurationError(t *testing.T) {
hook := logTest.NewGlobal()
t.Run("clears existing service error", func(t *testing.T) {
srv := setupTransitionConfigTest(t)
srv.isRunning = true
srv.runError = ErrConfigMismatch
srv.handleExchangeConfigurationError(nil)
require.Equal(t, true, srv.Status() == nil)
})
t.Run("does not clear existing service error if wrong kind", func(t *testing.T) {
srv := setupTransitionConfigTest(t)
srv.isRunning = true
err := errors.New("something else went wrong")
srv.runError = err
srv.handleExchangeConfigurationError(nil)
require.ErrorIs(t, err, srv.Status())
})
t.Run("sets service error on config mismatch", func(t *testing.T) {
srv := setupTransitionConfigTest(t)
srv.isRunning = true
srv.handleExchangeConfigurationError(ErrConfigMismatch)
require.Equal(t, ErrConfigMismatch, srv.Status())
require.LogsContain(t, hook, configMismatchLog)
})
t.Run("does not set service error if unrelated problem", func(t *testing.T) {
srv := setupTransitionConfigTest(t)
srv.isRunning = true
srv.handleExchangeConfigurationError(errors.New("foo"))
require.Equal(t, true, srv.Status() == nil)
require.LogsContain(t, hook, "Could not check configuration values")
})
}
func setupTransitionConfigTest(t testing.TB) *Service {
fix := fixtures()
request, ok := fix["TransitionConfiguration"].(*pb.TransitionConfiguration)
require.Equal(t, true, ok)
resp, ok := proto.Clone(request).(*pb.TransitionConfiguration)
require.Equal(t, true, ok)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
// Change the terminal block hash.
h := common.BytesToHash([]byte("foo"))
resp.TerminalBlockHash = h[:]
respJSON := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": resp,
}
require.NoError(t, json.NewEncoder(w).Encode(respJSON))
}))
defer srv.Close()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
defer rpcClient.Close()
service := &Service{
cfg: &config{},
}
service.rpcClient = rpcClient
return service
}
func TestService_logTtdStatus(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
resp := &pb.ExecutionBlock{
Header: gethtypes.Header{
ParentHash: common.Hash{},
UncleHash: common.Hash{},
Coinbase: common.Address{},
Root: common.Hash{},
TxHash: common.Hash{},
ReceiptHash: common.Hash{},
Bloom: gethtypes.Bloom{},
Difficulty: big.NewInt(1),
Number: big.NewInt(2),
GasLimit: 3,
GasUsed: 4,
Time: 5,
Extra: nil,
MixDigest: common.Hash{},
Nonce: gethtypes.BlockNonce{},
BaseFee: big.NewInt(6),
},
TotalDifficulty: "0x12345678",
}
respJSON := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": resp,
}
require.NoError(t, json.NewEncoder(w).Encode(respJSON))
}))
defer srv.Close()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
defer rpcClient.Close()
service := &Service{
cfg: &config{},
}
service.rpcClient = rpcClient
ttd := new(uint256.Int)
reached, err := service.logTtdStatus(context.Background(), ttd.SetUint64(24343))
require.NoError(t, err)
require.Equal(t, true, reached)
reached, err = service.logTtdStatus(context.Background(), ttd.SetUint64(323423484))
require.NoError(t, err)
require.Equal(t, false, reached)
}
func TestService_logTtdStatus_NotSyncedClient(t *testing.T) {
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
resp := (*pb.ExecutionBlock)(nil) // Nil response when a client is not synced
respJSON := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": resp,
}
require.NoError(t, json.NewEncoder(w).Encode(respJSON))
}))
defer srv.Close()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
defer rpcClient.Close()
service := &Service{
cfg: &config{},
}
service.rpcClient = rpcClient
ttd := new(uint256.Int)
reached, err := service.logTtdStatus(context.Background(), ttd.SetUint64(24343))
require.ErrorContains(t, "missing required field 'parentHash' for Header", err)
require.Equal(t, false, reached)
}
func emptyPayload() *pb.ExecutionPayload {
return &pb.ExecutionPayload{
ParentHash: make([]byte, fieldparams.RootLength),
FeeRecipient: make([]byte, fieldparams.FeeRecipientLength),
StateRoot: make([]byte, fieldparams.RootLength),
ReceiptsRoot: make([]byte, fieldparams.RootLength),
LogsBloom: make([]byte, fieldparams.LogsBloomLength),
PrevRandao: make([]byte, fieldparams.RootLength),
BaseFeePerGas: make([]byte, fieldparams.RootLength),
BlockHash: make([]byte, fieldparams.RootLength),
Transactions: make([][]byte, 0),
ExtraData: make([]byte, 0),
}
}
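The tests above all rely on the same stubbing pattern: an httptest server returning a canned JSON-RPC envelope, dialed through go-ethereum's rpc client. A minimal standalone sketch of that pattern (the method name and result value are placeholders):

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	// Stub server: every request receives the same canned result.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		_ = json.NewEncoder(w).Encode(map[string]interface{}{
			"jsonrpc": "2.0",
			"id":      1, // matches the first (and only) request id issued by the client
			"result":  "0x1",
		})
	}))
	defer srv.Close()

	client, err := rpc.DialHTTP(srv.URL)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	var result string
	if err := client.CallContext(context.Background(), &result, "eth_chainId"); err != nil {
		panic(err)
	}
	fmt.Println(result) // 0x1
}
```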

View File

@@ -39,6 +39,7 @@ var (
ForkchoiceUpdatedMethodV2,
GetPayloadMethod,
GetPayloadMethodV2,
ExchangeTransitionConfigurationMethod,
GetPayloadBodiesByHashV1,
GetPayloadBodiesByRangeV1,
}
@@ -61,6 +62,8 @@ const (
// GetPayloadMethodV2 v2 request string for JSON-RPC.
GetPayloadMethodV2 = "engine_getPayloadV2"
GetPayloadMethodV3 = "engine_getPayloadV3"
// ExchangeTransitionConfigurationMethod v1 request string for JSON-RPC.
ExchangeTransitionConfigurationMethod = "engine_exchangeTransitionConfigurationV1"
// BlockByHashMethod request string for JSON-RPC.
BlockByHashMethod = "eth_getBlockByHash"
// BlockByNumberMethod request string for JSON-RPC.
@@ -103,6 +106,9 @@ type EngineCaller interface {
ctx context.Context, state *pb.ForkchoiceState, attrs payloadattribute.Attributer,
) (*pb.PayloadIDBytes, []byte, error)
GetPayload(ctx context.Context, payloadId [8]byte, slot primitives.Slot) (interfaces.ExecutionData, *pb.BlobsBundle, bool, error)
ExchangeTransitionConfiguration(
ctx context.Context, cfg *pb.TransitionConfiguration,
) error
ExecutionBlockByHash(ctx context.Context, hash common.Hash, withTxs bool) (*pb.ExecutionBlock, error)
GetTerminalBlockHash(ctx context.Context, transitionTime uint64) ([]byte, bool, error)
}
@@ -293,6 +299,51 @@ func (s *Service) GetPayload(ctx context.Context, payloadId [8]byte, slot primit
return ed, nil, false, nil
}
// ExchangeTransitionConfiguration calls the engine_exchangeTransitionConfigurationV1 method via JSON-RPC.
func (s *Service) ExchangeTransitionConfiguration(
ctx context.Context, cfg *pb.TransitionConfiguration,
) error {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.ExchangeTransitionConfiguration")
defer span.End()
// We set terminal block number to 0 as the parameter is not set on the consensus layer.
zeroBigNum := big.NewInt(0)
cfg.TerminalBlockNumber = zeroBigNum.Bytes()
d := time.Now().Add(defaultEngineTimeout)
ctx, cancel := context.WithDeadline(ctx, d)
defer cancel()
result := &pb.TransitionConfiguration{}
if err := s.rpcClient.CallContext(ctx, result, ExchangeTransitionConfigurationMethod, cfg); err != nil {
return handleRPCError(err)
}
// We surface an error to the user if the local configuration settings do not match
// the values returned by the execution node.
cfgTerminalHash := params.BeaconConfig().TerminalBlockHash[:]
if !bytes.Equal(cfgTerminalHash, result.TerminalBlockHash) {
return errors.Wrapf(
ErrConfigMismatch,
"got %#x from execution node, wanted %#x",
result.TerminalBlockHash,
cfgTerminalHash,
)
}
ttdCfg := params.BeaconConfig().TerminalTotalDifficulty
ttdResult, err := hexutil.DecodeBig(result.TerminalTotalDifficulty)
if err != nil {
return errors.Wrap(err, "could not decode received terminal total difficulty")
}
if ttdResult.String() != ttdCfg {
return errors.Wrapf(
ErrConfigMismatch,
"got %s from execution node, wanted %s",
ttdResult.String(),
ttdCfg,
)
}
return nil
}
func (s *Service) ExchangeCapabilities(ctx context.Context) ([]string, error) {
ctx, span := trace.StartSpan(ctx, "powchain.engine-api-client.ExchangeCapabilities")
defer span.End()

View File

@@ -13,6 +13,7 @@ import (
"github.com/ethereum/go-ethereum/beacon/engine"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/ethereum/go-ethereum/core/types"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/execution"
@@ -71,6 +72,47 @@ func FuzzForkChoiceResponse(f *testing.F) {
})
}
func FuzzExchangeTransitionConfiguration(f *testing.F) {
valHash := common.Hash([32]byte{0xFF, 0x01})
ttd := hexutil.Big(*big.NewInt(math.MaxInt))
seed := &engine.TransitionConfigurationV1{
TerminalTotalDifficulty: &ttd,
TerminalBlockHash: valHash,
TerminalBlockNumber: hexutil.Uint64(math.MaxUint64),
}
output, err := json.Marshal(seed)
assert.NoError(f, err)
f.Add(output)
f.Fuzz(func(t *testing.T, jsonBlob []byte) {
gethResp := &engine.TransitionConfigurationV1{}
prysmResp := &pb.TransitionConfiguration{}
gethErr := json.Unmarshal(jsonBlob, gethResp)
prysmErr := json.Unmarshal(jsonBlob, prysmResp)
assert.Equal(t, gethErr != nil, prysmErr != nil, fmt.Sprintf("geth and prysm unmarshaller return inconsistent errors. %v and %v", gethErr, prysmErr))
// Nothing to marshal if we have an error.
if gethErr != nil {
return
}
gethBlob, gethErr := json.Marshal(gethResp)
prysmBlob, prysmErr := json.Marshal(prysmResp)
if gethErr != nil {
t.Errorf("%s %s", gethResp.TerminalTotalDifficulty.String(), prysmResp.TerminalTotalDifficulty)
}
assert.Equal(t, gethErr != nil, prysmErr != nil, fmt.Sprintf("geth and prysm unmarshaller return inconsistent errors. %v and %v", gethErr, prysmErr))
if gethErr != nil {
t.Errorf("%s %s", gethResp.TerminalTotalDifficulty.String(), prysmResp.TerminalTotalDifficulty)
}
newGethResp := &engine.TransitionConfigurationV1{}
newGethErr := json.Unmarshal(prysmBlob, newGethResp)
assert.NoError(t, newGethErr)
newGethResp2 := &engine.TransitionConfigurationV1{}
newGethErr = json.Unmarshal(gethBlob, newGethResp2)
assert.NoError(t, newGethErr)
})
}
func FuzzExecutionPayload(f *testing.F) {
logsBloom := [256]byte{'j', 'u', 'n', 'k'}
execData := &engine.ExecutionPayloadEnvelope{

View File

@@ -33,6 +33,7 @@ import (
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/prysmaticlabs/prysm/v4/testing/util"
logTest "github.com/sirupsen/logrus/hooks/test"
"google.golang.org/protobuf/proto"
)
var (
@@ -134,6 +135,12 @@ func TestClient_IPC(t *testing.T) {
require.NoError(t, err)
require.DeepEqual(t, bytesutil.ToBytes32(want.LatestValidHash), bytesutil.ToBytes32(latestValidHash))
})
t.Run(ExchangeTransitionConfigurationMethod, func(t *testing.T) {
want, ok := fix["TransitionConfiguration"].(*pb.TransitionConfiguration)
require.Equal(t, true, ok)
err := srv.ExchangeTransitionConfiguration(ctx, want)
require.NoError(t, err)
})
t.Run(BlockByNumberMethod, func(t *testing.T) {
want, ok := fix["ExecutionBlock"].(*pb.ExecutionBlock)
require.Equal(t, true, ok)
@@ -667,6 +674,44 @@ func TestClient_HTTP(t *testing.T) {
require.NoError(t, err)
require.DeepEqual(t, want, resp)
})
t.Run(ExchangeTransitionConfigurationMethod, func(t *testing.T) {
want, ok := fix["TransitionConfiguration"].(*pb.TransitionConfiguration)
require.Equal(t, true, ok)
encodedReq, err := json.Marshal(want)
require.NoError(t, err)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
enc, err := io.ReadAll(r.Body)
require.NoError(t, err)
jsonRequestString := string(enc)
// We expect the JSON string RPC request contains the right arguments.
require.Equal(t, true, strings.Contains(
jsonRequestString, string(encodedReq),
))
resp := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": want,
}
err = json.NewEncoder(w).Encode(resp)
require.NoError(t, err)
}))
defer srv.Close()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
defer rpcClient.Close()
client := &Service{}
client.rpcClient = rpcClient
// We call the RPC method via HTTP and expect a proper result.
err = client.ExchangeTransitionConfiguration(ctx, want)
require.NoError(t, err)
})
t.Run(BlockByHashMethod, func(t *testing.T) {
arg := common.BytesToHash([]byte("foo"))
want, ok := fix["ExecutionBlock"].(*pb.ExecutionBlock)
@@ -1145,6 +1190,78 @@ func Test_tDStringToUint256(t *testing.T) {
require.ErrorContains(t, "hex number > 256 bits", err)
}
func TestExchangeTransitionConfiguration(t *testing.T) {
fix := fixtures()
ctx := context.Background()
t.Run("wrong terminal block hash", func(t *testing.T) {
request, ok := fix["TransitionConfiguration"].(*pb.TransitionConfiguration)
require.Equal(t, true, ok)
resp, ok := proto.Clone(request).(*pb.TransitionConfiguration)
require.Equal(t, true, ok)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
// Change the terminal block hash.
h := common.BytesToHash([]byte("foo"))
resp.TerminalBlockHash = h[:]
respJSON := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": resp,
}
require.NoError(t, json.NewEncoder(w).Encode(respJSON))
}))
defer srv.Close()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
defer rpcClient.Close()
service := &Service{}
service.rpcClient = rpcClient
err = service.ExchangeTransitionConfiguration(ctx, request)
require.Equal(t, true, errors.Is(err, ErrConfigMismatch))
})
t.Run("wrong terminal total difficulty", func(t *testing.T) {
request, ok := fix["TransitionConfiguration"].(*pb.TransitionConfiguration)
require.Equal(t, true, ok)
resp, ok := proto.Clone(request).(*pb.TransitionConfiguration)
require.Equal(t, true, ok)
srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
defer func() {
require.NoError(t, r.Body.Close())
}()
// Change the terminal block hash.
resp.TerminalTotalDifficulty = "0x1"
respJSON := map[string]interface{}{
"jsonrpc": "2.0",
"id": 1,
"result": resp,
}
require.NoError(t, json.NewEncoder(w).Encode(respJSON))
}))
defer srv.Close()
rpcClient, err := rpc.DialHTTP(srv.URL)
require.NoError(t, err)
defer rpcClient.Close()
service := &Service{}
service.rpcClient = rpcClient
err = service.ExchangeTransitionConfiguration(ctx, request)
require.Equal(t, true, errors.Is(err, ErrConfigMismatch))
})
}
type customError struct {
code int
timeout bool
@@ -1432,6 +1549,13 @@ func fixtures() map[string]interface{} {
},
PayloadId: &id,
}
b, _ := new(big.Int).SetString(params.BeaconConfig().TerminalTotalDifficulty, 10)
ttd, _ := uint256.FromBig(b)
transitionCfg := &pb.TransitionConfiguration{
TerminalBlockHash: params.BeaconConfig().TerminalBlockHash[:],
TerminalTotalDifficulty: ttd.Hex(),
TerminalBlockNumber: big.NewInt(0).Bytes(),
}
validStatus := &pb.PayloadStatus{
Status: pb.PayloadStatus_VALID,
LatestValidHash: foo[:],
@@ -1474,6 +1598,7 @@ func fixtures() map[string]interface{} {
"ForkchoiceUpdatedSyncingResponse": forkChoiceSyncingResp,
"ForkchoiceUpdatedAcceptedResponse": forkChoiceAcceptedResp,
"ForkchoiceUpdatedInvalidResponse": forkChoiceInvalidResp,
"TransitionConfiguration": transitionCfg,
}
}
@@ -1780,6 +1905,17 @@ func (*testEngineService) GetPayloadV2(
return item
}
func (*testEngineService) ExchangeTransitionConfigurationV1(
_ context.Context, _ *pb.TransitionConfiguration,
) *pb.TransitionConfiguration {
fix := fixtures()
item, ok := fix["TransitionConfiguration"].(*pb.TransitionConfiguration)
if !ok {
panic("not found")
}
return item
}
func (*testEngineService) ForkchoiceUpdatedV1(
_ context.Context, _ *pb.ForkchoiceState, _ *pb.PayloadAttributes,
) *ForkchoiceUpdatedResponse {

View File

@@ -6,6 +6,10 @@ import (
)
var (
totalTerminalDifficulty = promauto.NewGauge(prometheus.GaugeOpts{
Name: "total_terminal_difficulty",
Help: "The total terminal difficulty of the execution chain before merge",
})
newPayloadLatency = promauto.NewHistogram(
prometheus.HistogramOpts{
Name: "new_payload_v1_latency_milliseconds",

View File

@@ -22,6 +22,7 @@ import (
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache/depositsnapshot"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/core/transition"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
@@ -241,6 +242,9 @@ func (s *Service) Start() {
// Poll the execution client connection and fallback if errors occur.
s.pollConnectionStatus(s.ctx)
// Check transition configuration for the engine API client in the background.
go s.checkTransitionConfiguration(s.ctx, make(chan *feed.Event, 1))
go s.run(s.ctx.Done())
}
@@ -750,10 +754,6 @@ func (s *Service) initializeEth1Data(ctx context.Context, eth1DataInDB *ethpb.ET
}
}
} else {
if eth1DataInDB.Trie == nil && eth1DataInDB.DepositSnapshot != nil {
return errors.Errorf("trying to use old deposit trie after migration to the new trie. "+
"Run with the --%s flag to resume normal operations.", features.EnableEIP4881.Name)
}
s.depositTrie, err = trie.CreateTrieFromProto(eth1DataInDB.Trie)
}
if err != nil {

View File

@@ -83,6 +83,11 @@ func (e *EngineClient) GetPayload(_ context.Context, _ [8]byte, s primitives.Slo
return p, nil, e.BuilderOverride, e.ErrGetPayload
}
// ExchangeTransitionConfiguration --
func (e *EngineClient) ExchangeTransitionConfiguration(_ context.Context, _ *pb.TransitionConfiguration) error {
return e.Err
}
// LatestExecutionBlock --
func (e *EngineClient) LatestExecutionBlock(_ context.Context) (*pb.ExecutionBlock, error) {
return e.ExecutionBlock, e.ErrLatestExecBlock

View File

@@ -17,8 +17,8 @@ go_library(
"//beacon-chain/forkchoice/types:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types/forkchoice:go_default_library",
"//consensus-types/primitives:go_default_library",
"//proto/eth/v1:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)

View File

@@ -31,9 +31,9 @@ go_library(
"//config/features:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/forkchoice:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
@@ -69,11 +69,11 @@ go_test(
"//beacon-chain/state/state-native:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/forkchoice:go_default_library",
"//consensus-types/primitives:go_default_library",
"//crypto/hash:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/eth/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",

View File

@@ -12,9 +12,9 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
"github.com/prysmaticlabs/prysm/v4/config/params"
forkchoice2 "github.com/prysmaticlabs/prysm/v4/consensus-types/forkchoice"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v4/encoding/bytesutil"
v1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
"github.com/prysmaticlabs/prysm/v4/time/slots"
@@ -475,7 +475,7 @@ func (f *ForkChoice) CommonAncestor(ctx context.Context, r1 [32]byte, r2 [32]byt
// `blocks`. This slice must be ordered from child to parent. It includes all
// blocks **except** the first one (that is the one with the highest slot
// number). All blocks are assumed to be a strict chain
// where blocks[i].Parent = blocks[i+1]. Also, we assume that the parent of the
// where blocks[i].Parent = blocks[i+1]. Also we assume that the parent of the
// last block in this list is already included in forkchoice store.
func (f *ForkChoice) InsertChain(ctx context.Context, chain []*forkchoicetypes.BlockAndCheckpoints) error {
if len(chain) == 0 {
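The ordering contract spelled out in this doc comment — child first, each element's parent equal to the next element's root, with the parent of the last element already in the store — can be illustrated with a small self-contained sketch; chainLink below is a hypothetical stand-in for forkchoicetypes.BlockAndCheckpoints, not Prysm's type:

package main

import "fmt"

// chainLink is a hypothetical stand-in for forkchoicetypes.BlockAndCheckpoints,
// trimmed to the two fields the ordering contract cares about.
type chainLink struct {
	Root   [32]byte
	Parent [32]byte
}

// orderChildToParent reverses a parent-to-child slice into the child-to-parent
// order that InsertChain documents: out[i].Parent == out[i+1].Root, highest
// slot first, with the parent of the last element already known to forkchoice.
func orderChildToParent(parentToChild []chainLink) []chainLink {
	out := make([]chainLink, len(parentToChild))
	for i, link := range parentToChild {
		out[len(parentToChild)-1-i] = link
	}
	return out
}

func main() {
	a, b, c := [32]byte{'a'}, [32]byte{'b'}, [32]byte{'c'}
	// Blocks as discovered oldest-to-newest; a's parent is assumed to already
	// be in the store, matching the assumption in the doc comment.
	discovered := []chainLink{{Root: b, Parent: a}, {Root: c, Parent: b}}
	for _, l := range orderChildToParent(discovered) {
		fmt.Printf("%c -> parent %c\n", l.Root[0], l.Parent[0])
	}
	// Prints: c -> parent b, then b -> parent a.
}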
@@ -554,24 +554,24 @@ func (f *ForkChoice) UnrealizedJustifiedPayloadBlockHash() [32]byte {
}
// ForkChoiceDump returns a full dump of forkchoice.
func (f *ForkChoice) ForkChoiceDump(ctx context.Context) (*forkchoice2.Dump, error) {
jc := &ethpb.Checkpoint{
func (f *ForkChoice) ForkChoiceDump(ctx context.Context) (*v1.ForkChoiceDump, error) {
jc := &v1.Checkpoint{
Epoch: f.store.justifiedCheckpoint.Epoch,
Root: f.store.justifiedCheckpoint.Root[:],
}
ujc := &ethpb.Checkpoint{
ujc := &v1.Checkpoint{
Epoch: f.store.unrealizedJustifiedCheckpoint.Epoch,
Root: f.store.unrealizedJustifiedCheckpoint.Root[:],
}
fc := &ethpb.Checkpoint{
fc := &v1.Checkpoint{
Epoch: f.store.finalizedCheckpoint.Epoch,
Root: f.store.finalizedCheckpoint.Root[:],
}
ufc := &ethpb.Checkpoint{
ufc := &v1.Checkpoint{
Epoch: f.store.unrealizedFinalizedCheckpoint.Epoch,
Root: f.store.unrealizedFinalizedCheckpoint.Root[:],
}
nodes := make([]*forkchoice2.Node, 0, f.NodeCount())
nodes := make([]*v1.ForkChoiceNode, 0, f.NodeCount())
var err error
if f.store.treeRootNode != nil {
nodes, err = f.store.treeRootNode.nodeTreeDump(ctx, nodes)
@@ -583,7 +583,7 @@ func (f *ForkChoice) ForkChoiceDump(ctx context.Context) (*forkchoice2.Dump, err
if f.store.headNode != nil {
headRoot = f.store.headNode.root
}
resp := &forkchoice2.Dump{
resp := &v1.ForkChoiceDump{
JustifiedCheckpoint: jc,
UnrealizedJustifiedCheckpoint: ujc,
FinalizedCheckpoint: fc,

View File

@@ -780,9 +780,9 @@ func TestForkChoiceIsViableForCheckpoint(t *testing.T) {
require.NoError(t, err)
require.Equal(t, true, viable)
st, bRoot, err := prepareForkchoiceState(ctx, 1, [32]byte{'b'}, root, [32]byte{'B'}, 0, 0)
st, broot, err := prepareForkchoiceState(ctx, 1, [32]byte{'b'}, root, [32]byte{'B'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, bRoot))
require.NoError(t, f.InsertNode(ctx, st, broot))
// Epoch start
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: root})
@@ -794,55 +794,55 @@ func TestForkChoiceIsViableForCheckpoint(t *testing.T) {
require.Equal(t, false, viable)
// No Children but impossible checkpoint
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: bRoot})
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: broot})
require.NoError(t, err)
require.Equal(t, false, viable)
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: bRoot, Epoch: 1})
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: broot, Epoch: 1})
require.NoError(t, err)
require.Equal(t, true, viable)
st, cRoot, err := prepareForkchoiceState(ctx, 2, [32]byte{'c'}, bRoot, [32]byte{'C'}, 0, 0)
st, croot, err := prepareForkchoiceState(ctx, 2, [32]byte{'c'}, broot, [32]byte{'C'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, cRoot))
require.NoError(t, f.InsertNode(ctx, st, croot))
// Children in same epoch
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: bRoot})
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: broot})
require.NoError(t, err)
require.Equal(t, false, viable)
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: bRoot, Epoch: 1})
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: broot, Epoch: 1})
require.NoError(t, err)
require.Equal(t, false, viable)
st, dRoot, err := prepareForkchoiceState(ctx, params.BeaconConfig().SlotsPerEpoch, [32]byte{'d'}, bRoot, [32]byte{'D'}, 0, 0)
st, droot, err := prepareForkchoiceState(ctx, params.BeaconConfig().SlotsPerEpoch, [32]byte{'d'}, broot, [32]byte{'D'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, dRoot))
require.NoError(t, f.InsertNode(ctx, st, droot))
// Children in next epoch but boundary
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: bRoot})
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: broot})
require.NoError(t, err)
require.Equal(t, false, viable)
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: bRoot, Epoch: 1})
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: broot, Epoch: 1})
require.NoError(t, err)
require.Equal(t, false, viable)
// Boundary block
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: dRoot, Epoch: 1})
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: droot, Epoch: 1})
require.NoError(t, err)
require.Equal(t, true, viable)
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: dRoot, Epoch: 0})
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: droot, Epoch: 0})
require.NoError(t, err)
require.Equal(t, false, viable)
// Children in next epoch
st, eRoot, err := prepareForkchoiceState(ctx, params.BeaconConfig().SlotsPerEpoch+1, [32]byte{'e'}, bRoot, [32]byte{'E'}, 0, 0)
st, eroot, err := prepareForkchoiceState(ctx, params.BeaconConfig().SlotsPerEpoch+1, [32]byte{'e'}, broot, [32]byte{'E'}, 0, 0)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, st, eRoot))
require.NoError(t, f.InsertNode(ctx, st, eroot))
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: bRoot, Epoch: 1})
viable, err = f.IsViableForCheckpoint(&forkchoicetypes.Checkpoint{Root: broot, Epoch: 1})
require.NoError(t, err)
require.Equal(t, true, viable)
}

View File

@@ -6,8 +6,8 @@ import (
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v4/config/params"
forkchoice2 "github.com/prysmaticlabs/prysm/v4/consensus-types/forkchoice"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
v1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
"github.com/prysmaticlabs/prysm/v4/time/slots"
)
@@ -151,7 +151,7 @@ func (n *Node) arrivedAfterOrphanCheck(genesisTime uint64) (bool, error) {
}
// nodeTreeDump appends to the given list all the nodes descending from this one
func (n *Node) nodeTreeDump(ctx context.Context, nodes []*forkchoice2.Node) ([]*forkchoice2.Node, error) {
func (n *Node) nodeTreeDump(ctx context.Context, nodes []*v1.ForkChoiceNode) ([]*v1.ForkChoiceNode, error) {
if ctx.Err() != nil {
return nil, ctx.Err()
}
@@ -159,7 +159,7 @@ func (n *Node) nodeTreeDump(ctx context.Context, nodes []*forkchoice2.Node) ([]*
if n.parent != nil {
parentRoot = n.parent.root
}
thisNode := &forkchoice2.Node{
thisNode := &v1.ForkChoiceNode{
Slot: n.slot,
BlockRoot: n.root[:],
ParentRoot: parentRoot[:],
@@ -174,9 +174,9 @@ func (n *Node) nodeTreeDump(ctx context.Context, nodes []*forkchoice2.Node) ([]*
Timestamp: n.timestamp,
}
if n.optimistic {
thisNode.Validity = forkchoice2.Optimistic
thisNode.Validity = v1.ForkChoiceNodeValidity_OPTIMISTIC
} else {
thisNode.Validity = forkchoice2.Valid
thisNode.Validity = v1.ForkChoiceNodeValidity_VALID
}
nodes = append(nodes, thisNode)
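Per its doc comment, nodeTreeDump appends this node and then every descendant into the caller's slice; a minimal sketch of that append-and-recurse shape, using a hypothetical node type instead of Prysm's and assuming the rest of the function (not shown in this hunk) recurses into children:

package main

import "fmt"

// node is a hypothetical, trimmed-down stand-in for forkchoice's *Node.
type node struct {
	root     [32]byte
	children []*node
}

// treeDump appends this node's root, then every descendant's, reusing the
// caller-provided slice the same way nodeTreeDump reuses its nodes argument.
func treeDump(n *node, out [][32]byte) [][32]byte {
	out = append(out, n.root)
	for _, c := range n.children {
		out = treeDump(c, out)
	}
	return out
}

func main() {
	leaf := &node{root: [32]byte{'c'}}
	mid := &node{root: [32]byte{'b'}, children: []*node{leaf}}
	root := &node{root: [32]byte{'a'}, children: []*node{mid}}
	for _, r := range treeDump(root, make([][32]byte, 0)) {
		fmt.Printf("%c ", r[0]) // a b c
	}
	fmt.Println()
}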

View File

@@ -5,8 +5,8 @@ import (
"testing"
"github.com/prysmaticlabs/prysm/v4/config/params"
"github.com/prysmaticlabs/prysm/v4/consensus-types/forkchoice"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
v1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
)
@@ -86,7 +86,7 @@ func TestNode_UpdateBestDescendant_NonViableChild(t *testing.T) {
func TestNode_UpdateBestDescendant_ViableChild(t *testing.T) {
f := setup(1, 1)
ctx := context.Background()
// Input child is the best descendant
// Input child is best descendant
state, blkRoot, err := prepareForkchoiceState(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
@@ -99,7 +99,7 @@ func TestNode_UpdateBestDescendant_ViableChild(t *testing.T) {
func TestNode_UpdateBestDescendant_HigherWeightChild(t *testing.T) {
f := setup(1, 1)
ctx := context.Background()
// Input child is the best descendant
// Input child is best descendant
state, blkRoot, err := prepareForkchoiceState(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
@@ -119,7 +119,7 @@ func TestNode_UpdateBestDescendant_HigherWeightChild(t *testing.T) {
func TestNode_UpdateBestDescendant_LowerWeightChild(t *testing.T) {
f := setup(1, 1)
ctx := context.Background()
// Input child is the best descendant
// Input child is best descendant
state, blkRoot, err := prepareForkchoiceState(ctx, 1, indexToHash(1), params.BeaconConfig().ZeroHash, params.BeaconConfig().ZeroHash, 1, 1)
require.NoError(t, err)
require.NoError(t, f.InsertNode(ctx, state, blkRoot))
@@ -237,7 +237,7 @@ func TestNode_SetFullyValidated(t *testing.T) {
require.NoError(t, err)
require.Equal(t, false, opt)
respNodes := make([]*forkchoice.Node, 0)
respNodes := make([]*v1.ForkChoiceNode, 0)
respNodes, err = f.store.treeRootNode.nodeTreeDump(ctx, respNodes)
require.NoError(t, err)
require.Equal(t, len(respNodes), f.NodeCount())

View File

@@ -156,9 +156,9 @@ func TestForkChoice_BoostProposerRoot_PreventsExAnteAttack(t *testing.T) {
// Ancestors have the added weights of their children. Genesis is a special exception at 0 weight,
require.Equal(t, f.store.treeRootNode.weight, uint64(0))
// Proposer boost score with these test parameters is 8
// Proposer boost score with this tests parameters is 8
// Each of the nodes received one attestation accounting for 10.
// Node D is the only one with proposer boost still applied:
// Node D is the only one with a proposer boost still applied:
//
// (1: 48) -> (2: 38) -> (3: 10)
// \--------------->(4: 18)
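The expected numbers in this comment can be reproduced by hand, assuming one 10-point attestation per node and the 8-point proposer boost still attached to node 4 only:

package main

import "fmt"

func main() {
	const att, boost = uint64(10), uint64(8)
	w4 := att + boost   // 18: own attestation plus the remaining proposer boost
	w3 := att           // 10: attestation only
	w2 := att + w3 + w4 // 38: own attestation plus both children's weights
	w1 := att + w2      // 48: own attestation plus node 2's subtree
	fmt.Println(w1, w2, w3, w4) // 48 38 10 18
}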

View File

@@ -28,7 +28,7 @@ const orphanLateBlockProposingEarly = 2
// 3- The beacon node is serving a validator that will propose during the next
// slot.
//
// This function only applies a heuristic to decide if the beacon will update
// This function only applies an heuristic to decide if the beacon will update
// the engine's view of head with the parent block or the incoming block. It
// does not guarantee an attempted reorg. This will only be decided later at
// proposal time by calling GetProposerHead.

View File

@@ -61,7 +61,7 @@ func (s *Store) head(ctx context.Context) ([32]byte, error) {
}
// insert registers a new block node to the fork choice store's node list.
// It then updates the new node's parent with the best child and descendant node.
// It then updates the new node's parent with best child and descendant node.
func (s *Store) insert(ctx context.Context,
slot primitives.Slot,
root, parentRoot, payloadHash [fieldparams.RootLength]byte,

View File

@@ -82,7 +82,7 @@ func TestStore_Head_Itself(t *testing.T) {
require.NoError(t, err)
require.NoError(t, f.InsertNode(context.Background(), state, blkRoot))
// Since the justified node does not have a best descendant, the best node
// Since the justified node does not have a best descendant so the best node
// is itself.
f.store.justifiedCheckpoint = &forkchoicetypes.Checkpoint{Epoch: 0, Root: indexToHash(1)}
h, err := f.store.head(context.Background())
@@ -213,7 +213,7 @@ func TestStore_Prune_ReturnEarly(t *testing.T) {
require.Equal(t, nodeCount, f.NodeCount())
}
// This unit test starts with a simple branch like this
// This unit tests starts with a simple branch like this
//
// - 1
// /

View File

@@ -298,7 +298,7 @@ func TestStore_PullTips_Heuristics(t *testing.T) {
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
// Check that the justification point is not the parent's.
// This test checks that the heuristics in pullTips did not apply and
// This tests that the heuristics in pullTips did not apply and
// the test continues to compute a bogus unrealized
// justification
require.Equal(tt, primitives.Epoch(1), f.store.nodeByRoot[[32]byte{'h'}].unrealizedJustifiedEpoch)
@@ -315,7 +315,7 @@ func TestStore_PullTips_Heuristics(t *testing.T) {
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
// Check that the justification point is not the parent's.
// This test checks that the heuristics in pullTips did not apply and
// This tests that the heuristics in pullTips did not apply and
// the test continues to compute a bogus unrealized
// justification
require.Equal(tt, primitives.Epoch(1), f.store.nodeByRoot[[32]byte{'h'}].unrealizedJustifiedEpoch)
@@ -331,7 +331,7 @@ func TestStore_PullTips_Heuristics(t *testing.T) {
require.NoError(tt, err)
require.NoError(tt, f.InsertNode(ctx, st, root))
// Check that the justification point is not the parent's.
// This test checks that the heuristics in pullTips did not apply and
// This tests that the heuristics in pullTips did not apply and
// the test continues to compute a bogus unrealized
// justification
require.Equal(tt, primitives.Epoch(2), f.store.nodeByRoot[[32]byte{'h'}].unrealizedJustifiedEpoch)

View File

@@ -6,8 +6,8 @@ import (
forkchoicetypes "github.com/prysmaticlabs/prysm/v4/beacon-chain/forkchoice/types"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/state"
fieldparams "github.com/prysmaticlabs/prysm/v4/config/fieldparams"
forkchoice2 "github.com/prysmaticlabs/prysm/v4/consensus-types/forkchoice"
"github.com/prysmaticlabs/prysm/v4/consensus-types/primitives"
v1 "github.com/prysmaticlabs/prysm/v4/proto/eth/v1"
)
// BalancesByRooter is a handler to obtain the effective balances of the state
@@ -62,7 +62,7 @@ type Getter interface {
NodeCount() int
HighestReceivedBlockSlot() primitives.Slot
ReceivedBlocksLastEpoch() (uint64, error)
ForkChoiceDump(context.Context) (*forkchoice2.Dump, error)
ForkChoiceDump(context.Context) (*v1.ForkChoiceDump, error)
Weight(root [32]byte) (uint64, error)
Tips() ([][32]byte, []primitives.Slot)
IsOptimistic(root [32]byte) (bool, error)

View File

@@ -52,8 +52,13 @@ func DefaultConfig(enableDebugRPCEndpoints bool, httpModules string) MuxConfig {
}
if flags.EnableHTTPEthAPI(httpModules) {
ethRegistrations := []gateway.PbHandlerRegistration{
ethpbservice.RegisterBeaconChainHandler,
ethpbservice.RegisterBeaconValidatorHandler,
ethpbservice.RegisterEventsHandler,
}
if enableDebugRPCEndpoints {
ethRegistrations = append(ethRegistrations, ethpbservice.RegisterBeaconDebugHandler)
}
ethMux := gwruntime.NewServeMux(
gwruntime.WithMarshalerOption(gwruntime.MIMEWildcard, &gwruntime.HTTPBodyMarshaler{
Marshaler: &gwruntime.JSONPb{

View File

@@ -9,12 +9,39 @@ import (
)
func TestDefaultConfig(t *testing.T) {
t.Run("Without debug endpoints", func(t *testing.T) {
cfg := DefaultConfig(false, "eth,prysm")
assert.NotNil(t, cfg.EthPbMux.Mux)
require.Equal(t, 2, len(cfg.EthPbMux.Patterns))
assert.Equal(t, "/internal/eth/v1/", cfg.EthPbMux.Patterns[0])
assert.Equal(t, "/internal/eth/v2/", cfg.EthPbMux.Patterns[1])
assert.Equal(t, 3, len(cfg.EthPbMux.Registrations))
assert.NotNil(t, cfg.V1AlphaPbMux.Mux)
require.Equal(t, 2, len(cfg.V1AlphaPbMux.Patterns))
assert.Equal(t, "/eth/v1alpha1/", cfg.V1AlphaPbMux.Patterns[0])
assert.Equal(t, "/eth/v1alpha2/", cfg.V1AlphaPbMux.Patterns[1])
assert.Equal(t, 4, len(cfg.V1AlphaPbMux.Registrations))
})
t.Run("With debug endpoints", func(t *testing.T) {
cfg := DefaultConfig(true, "eth,prysm")
assert.NotNil(t, cfg.EthPbMux.Mux)
require.Equal(t, 2, len(cfg.EthPbMux.Patterns))
assert.Equal(t, "/internal/eth/v1/", cfg.EthPbMux.Patterns[0])
assert.Equal(t, "/internal/eth/v2/", cfg.EthPbMux.Patterns[1])
assert.Equal(t, 4, len(cfg.EthPbMux.Registrations))
assert.NotNil(t, cfg.V1AlphaPbMux.Mux)
require.Equal(t, 2, len(cfg.V1AlphaPbMux.Patterns))
assert.Equal(t, "/eth/v1alpha1/", cfg.V1AlphaPbMux.Patterns[0])
assert.Equal(t, "/eth/v1alpha2/", cfg.V1AlphaPbMux.Patterns[1])
assert.Equal(t, 5, len(cfg.V1AlphaPbMux.Registrations))
})
t.Run("Without Prysm API", func(t *testing.T) {
cfg := DefaultConfig(true, "eth")
assert.NotNil(t, cfg.EthPbMux.Mux)
require.Equal(t, 2, len(cfg.EthPbMux.Patterns))
assert.Equal(t, "/internal/eth/v1/", cfg.EthPbMux.Patterns[0])
assert.Equal(t, 1, len(cfg.EthPbMux.Registrations))
assert.Equal(t, 4, len(cfg.EthPbMux.Registrations))
assert.Equal(t, (*gateway.PbMux)(nil), cfg.V1AlphaPbMux)
})
t.Run("Without Eth API", func(t *testing.T) {

View File

@@ -24,7 +24,6 @@ go_library(
"//beacon-chain/cache/depositcache:go_default_library",
"//beacon-chain/cache/depositsnapshot:go_default_library",
"//beacon-chain/db:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/db/kv:go_default_library",
"//beacon-chain/db/slasherkv:go_default_library",
"//beacon-chain/deterministic-genesis:go_default_library",
@@ -86,7 +85,6 @@ go_test(
"//beacon-chain/blockchain:go_default_library",
"//beacon-chain/builder:go_default_library",
"//beacon-chain/core/feed/state:go_default_library",
"//beacon-chain/db/filesystem:go_default_library",
"//beacon-chain/execution:go_default_library",
"//beacon-chain/execution/testing:go_default_library",
"//beacon-chain/monitor:go_default_library",

View File

@@ -40,7 +40,7 @@ func configureHistoricalSlasher(cliCtx *cli.Context) error {
return err
}
cmdConfig := cmd.Get()
// Allow up to 4096 attestations at a time to be requested from the beacon node.
// Allow up to 4096 attestations at a time to be requested from the beacon nde.
cmdConfig.MaxRPCPageSize = int(params.BeaconConfig().SlotsPerEpoch.Mul(params.BeaconConfig().MaxAttestations)) // lint:ignore uintcast -- Page size should not exceed int64 with these constants.
cmd.Init(cmdConfig)
log.Warnf(

View File

@@ -26,7 +26,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache/depositcache"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/cache/depositsnapshot"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/kv"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/slasherkv"
interopcoldstart "github.com/prysmaticlabs/prysm/v4/beacon-chain/deterministic-genesis"
@@ -113,7 +112,6 @@ type BeaconNode struct {
forkChoicer forkchoice.ForkChoicer
clockWaiter startup.ClockWaiter
initialSyncComplete chan struct{}
BlobStorage *filesystem.BlobStorage
}
// New creates a new node instance, sets up configuration options, and registers
@@ -202,7 +200,6 @@ func New(cliCtx *cli.Context, opts ...Option) (*BeaconNode, error) {
if err != nil {
return nil, err
}
log.Debugln("Starting DB")
if err := beacon.startDB(cliCtx, depositAddress); err != nil {
return nil, err
@@ -248,8 +245,12 @@ func New(cliCtx *cli.Context, opts ...Option) (*BeaconNode, error) {
return nil, err
}
log.Debugln("Registering Blockchain Service")
if err := beacon.registerBlockchainService(beacon.forkChoicer, synchronizer, beacon.initialSyncComplete); err != nil {
bcOpts := []blockchain.Option{
blockchain.WithForkChoiceStore(beacon.forkChoicer),
blockchain.WithClockSynchronizer(synchronizer),
blockchain.WithSyncComplete(beacon.initialSyncComplete),
}
if err := beacon.registerBlockchainService(bcOpts); err != nil {
return nil, err
}
@@ -610,7 +611,8 @@ func (b *BeaconNode) registerAttestationPool() error {
return b.services.RegisterService(s)
}
func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *startup.ClockSynchronizer, syncComplete chan struct{}) error {
func (b *BeaconNode) registerBlockchainService(required []blockchain.Option) error {
log.Debugln("Registering Blockchain Service")
var web3Service *execution.Service
if err := b.services.FetchService(&web3Service); err != nil {
return err
@@ -621,10 +623,9 @@ func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *st
return err
}
opts := append(b.serviceFlagOpts.blockchainFlagOpts, required...)
// skipcq: CRT-D0001
opts := append(
b.serviceFlagOpts.blockchainFlagOpts,
blockchain.WithForkChoiceStore(fc),
opts = append(opts,
blockchain.WithDatabase(b.db),
blockchain.WithDepositCache(b.depositCache),
blockchain.WithChainStartFetcher(web3Service),
@@ -640,9 +641,6 @@ func (b *BeaconNode) registerBlockchainService(fc forkchoice.ForkChoicer, gs *st
blockchain.WithSlasherAttestationsFeed(b.slasherAttestationsFeed),
blockchain.WithFinalizedStateAtStartUp(b.finalizedStateAtStartUp),
blockchain.WithProposerIdsCache(b.proposerIdsCache),
blockchain.WithClockSynchronizer(gs),
blockchain.WithSyncComplete(syncComplete),
blockchain.WithBlobStorage(b.BlobStorage),
)
blockchainService, err := blockchain.NewService(b.ctx, opts...)
@@ -721,7 +719,6 @@ func (b *BeaconNode) registerSyncService(initialSyncComplete chan struct{}) erro
regularsync.WithClockWaiter(b.clockWaiter),
regularsync.WithInitialSyncComplete(initialSyncComplete),
regularsync.WithStateNotifier(b),
regularsync.WithBlobStorage(b.BlobStorage),
)
return b.services.RegisterService(rs)
}
@@ -846,7 +843,6 @@ func (b *BeaconNode) registerRPCService(router *mux.Router) error {
ForkchoiceFetcher: chainService,
FinalizationFetcher: chainService,
BlockReceiver: chainService,
BlobReceiver: chainService,
AttestationReceiver: chainService,
GenesisTimeFetcher: chainService,
GenesisFetcher: chainService,

View File

@@ -13,7 +13,6 @@ import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/builder"
statefeed "github.com/prysmaticlabs/prysm/v4/beacon-chain/core/feed/state"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/execution"
mockExecution "github.com/prysmaticlabs/prysm/v4/beacon-chain/execution/testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/monitor"
@@ -71,8 +70,7 @@ func TestNodeStart_Ok(t *testing.T) {
ctx := cli.NewContext(&app, set, nil)
node, err := New(ctx, WithBlockchainFlagOptions([]blockchain.Option{}),
WithBuilderFlagOptions([]builder.Option{}),
WithExecutionChainOptions([]execution.Option{}),
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)))
WithExecutionChainOptions([]execution.Option{}))
require.NoError(t, err)
node.services = &runtime.ServiceRegistry{}
go func() {
@@ -150,12 +148,11 @@ func TestClearDB(t *testing.T) {
set.String("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A", "fee recipient")
require.NoError(t, set.Set("suggested-fee-recipient", "0x6e35733c5af9B61374A128e6F85f553aF09ff89A"))
context := cli.NewContext(&app, set, nil)
options := []Option{
WithExecutionChainOptions([]execution.Option{execution.WithHttpEndpoint(endpoint)}),
WithBlobStorage(filesystem.NewEphemeralBlobStorage(t)),
}
_, err = New(context, options...)
_, err = New(context, WithExecutionChainOptions([]execution.Option{
execution.WithHttpEndpoint(endpoint),
}))
require.NoError(t, err)
require.LogsContain(t, hook, "Removing database")
}

View File

@@ -3,7 +3,6 @@ package node
import (
"github.com/prysmaticlabs/prysm/v4/beacon-chain/blockchain"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/builder"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/db/filesystem"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/execution"
)
@@ -33,11 +32,3 @@ func WithBuilderFlagOptions(opts []builder.Option) Option {
return nil
}
}
// WithBlobStorage sets the BlobStorage backend for the BeaconNode
func WithBlobStorage(bs *filesystem.BlobStorage) Option {
return func(bn *BeaconNode) error {
bn.BlobStorage = bs
return nil
}
}
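These Option constructors follow Go's functional options pattern, which is also what the registerBlockchainService refactor above leans on when it appends required options to the flag-derived ones. A generic sketch with hypothetical names — none of these identifiers are Prysm APIs:

package main

import "fmt"

// service is a hypothetical component configured via functional options,
// mirroring the shape of blockchain.Option and node.Option in these hunks.
type service struct {
	endpoint string
	retries  int
}

// Option mutates a service during construction and may reject bad input.
type Option func(*service) error

func WithEndpoint(e string) Option {
	return func(s *service) error { s.endpoint = e; return nil }
}

func WithRetries(n int) Option {
	return func(s *service) error {
		if n < 0 {
			return fmt.Errorf("retries must be non-negative, got %d", n)
		}
		s.retries = n
		return nil
	}
}

// newService applies caller-supplied options over defaults, the same way the
// node code appends required blockchain options to the flag-derived ones.
func newService(opts ...Option) (*service, error) {
	s := &service{retries: 1}
	for _, o := range opts {
		if err := o(s); err != nil {
			return nil, err
		}
	}
	return s, nil
}

func main() {
	s, err := newService(WithEndpoint("http://localhost:8551"), WithRetries(3))
	fmt.Println(s, err) // &{http://localhost:8551 3} <nil>
}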

View File

@@ -32,6 +32,7 @@ go_test(
name = "go_default_test",
srcs = [
"aggregated_test.go",
"benchmark_test.go",
"block_test.go",
"forkchoice_test.go",
"seen_bits_test.go",

View File

@@ -0,0 +1,20 @@
package kv_test
import (
"testing"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/operations/attestations/kv"
ethpb "github.com/prysmaticlabs/prysm/v4/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
)
func BenchmarkAttCaches(b *testing.B) {
ac := kv.NewAttCaches()
att := &ethpb.Attestation{}
for i := 0; i < b.N; i++ {
assert.NoError(b, ac.SaveUnaggregatedAttestation(att))
assert.NoError(b, ac.DeleteAggregatedAttestation(att))
}
}
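Outside of Bazel, this benchmark can also be exercised with the stock Go tooling, e.g. go test -run='^$' -bench=BenchmarkAttCaches -benchmem ./beacon-chain/operations/attestations/kv from the repository root (the package path follows the import path shown above).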

View File

@@ -204,7 +204,7 @@ func (s *Service) broadcastSyncCommittee(ctx context.Context, subnet uint64, sMs
// BroadcastBlob broadcasts a blob to the p2p network, the message is assumed to be
// broadcasted to the current fork and to the input subnet.
func (s *Service) BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.BlobSidecar) error {
func (s *Service) BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.SignedBlobSidecar) error {
ctx, span := trace.StartSpan(ctx, "p2p.BroadcastBlob")
defer span.End()
if blob == nil {
@@ -223,7 +223,7 @@ func (s *Service) BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.
return nil
}
func (s *Service) broadcastBlob(ctx context.Context, subnet uint64, blobSidecar *ethpb.BlobSidecar, forkDigest [4]byte) {
func (s *Service) broadcastBlob(ctx context.Context, subnet uint64, blobSidecar *ethpb.SignedBlobSidecar, forkDigest [4]byte) {
_, span := trace.StartSpan(ctx, "p2p.broadcastBlob")
defer span.End()
ctx = trace.NewContext(context.Background(), span) // clear parent context / deadline.

View File

@@ -469,18 +469,18 @@ func TestService_BroadcastBlob(t *testing.T) {
}),
}
header := util.HydrateSignedBeaconHeader(&ethpb.SignedBeaconBlockHeader{})
commitmentInclusionProof := make([][]byte, 17)
for i := range commitmentInclusionProof {
commitmentInclusionProof[i] = bytesutil.PadTo([]byte{}, 32)
}
blobSidecar := &ethpb.BlobSidecar{
Index: 1,
Blob: bytesutil.PadTo([]byte{'C'}, fieldparams.BlobLength),
KzgCommitment: bytesutil.PadTo([]byte{'D'}, fieldparams.BLSPubkeyLength),
KzgProof: bytesutil.PadTo([]byte{'E'}, fieldparams.BLSPubkeyLength),
SignedBlockHeader: header,
CommitmentInclusionProof: commitmentInclusionProof,
blobSidecar := &ethpb.SignedBlobSidecar{
Message: &ethpb.BlobSidecar{
BlockRoot: bytesutil.PadTo([]byte{'A'}, fieldparams.RootLength),
Index: 1,
Slot: 2,
BlockParentRoot: bytesutil.PadTo([]byte{'B'}, fieldparams.RootLength),
ProposerIndex: 3,
Blob: bytesutil.PadTo([]byte{'C'}, fieldparams.BlobLength),
KzgCommitment: bytesutil.PadTo([]byte{'D'}, fieldparams.BLSPubkeyLength),
KzgProof: bytesutil.PadTo([]byte{'E'}, fieldparams.BLSPubkeyLength),
},
Signature: bytesutil.PadTo([]byte{'F'}, fieldparams.BLSSignatureLength),
}
subnet := uint64(0)
@@ -508,7 +508,7 @@ func TestService_BroadcastBlob(t *testing.T) {
incomingMessage, err := sub.Next(ctx)
require.NoError(t, err)
result := &ethpb.BlobSidecar{}
result := &ethpb.SignedBlobSidecar{}
require.NoError(t, p.Encoding().DecodeGossip(incomingMessage.Data, result))
require.DeepEqual(t, result, blobSidecar)
}(t)
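The 17-element CommitmentInclusionProof in this hunk lines up with Deneb's KZG_COMMITMENT_INCLUSION_PROOF_DEPTH of 17; one way to read that constant is 4 levels for the 12-field BeaconBlockBody, 1 for the commitment list's length mix-in, and ceil(log2(MAX_BLOB_COMMITMENTS_PER_BLOCK = 4096)) = 12 for the list itself, i.e. 4 + 1 + 12 = 17.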

View File

@@ -21,7 +21,7 @@ var gossipTopicMappings = map[string]proto.Message{
SyncContributionAndProofSubnetTopicFormat: &ethpb.SignedContributionAndProof{},
SyncCommitteeSubnetTopicFormat: &ethpb.SyncCommitteeMessage{},
BlsToExecutionChangeSubnetTopicFormat: &ethpb.SignedBLSToExecutionChange{},
BlobSubnetTopicFormat: &ethpb.BlobSidecar{},
BlobSubnetTopicFormat: &ethpb.SignedBlobSidecar{},
}
// GossipTopicMappings is a function to return the assigned data type
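A topic-to-prototype table like this is typically consumed by cloning the registered prototype and unmarshalling the gossip payload into the clone, so the shared prototype is never mutated. A hypothetical, self-contained sketch — the topic string and message type below are illustrative, not Prysm's:

package main

import (
	"fmt"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/timestamppb"
)

var topicPrototypes = map[string]proto.Message{
	"/example/topic": &timestamppb.Timestamp{},
}

// decodeFor clones the prototype registered for the topic and decodes into it.
func decodeFor(topic string, data []byte) (proto.Message, error) {
	prototype, ok := topicPrototypes[topic]
	if !ok {
		return nil, fmt.Errorf("no prototype registered for topic %s", topic)
	}
	msg := proto.Clone(prototype) // fresh instance per incoming message
	if err := proto.Unmarshal(data, msg); err != nil {
		return nil, err
	}
	return msg, nil
}

func main() {
	raw, _ := proto.Marshal(timestamppb.Now())
	msg, err := decodeFor("/example/topic", raw)
	fmt.Println(msg, err)
}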

View File

@@ -35,7 +35,7 @@ type Broadcaster interface {
Broadcast(context.Context, proto.Message) error
BroadcastAttestation(ctx context.Context, subnet uint64, att *ethpb.Attestation) error
BroadcastSyncCommitteeMessage(ctx context.Context, subnet uint64, sMsg *ethpb.SyncCommitteeMessage) error
BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.BlobSidecar) error
BroadcastBlob(ctx context.Context, subnet uint64, blob *ethpb.SignedBlobSidecar) error
}
// SetStreamHandler configures p2p to handle streams of a certain topic ID.

View File

@@ -144,7 +144,7 @@ func (_ *FakeP2P) BroadcastSyncCommitteeMessage(_ context.Context, _ uint64, _ *
}
// BroadcastBlob -- fake.
func (_ *FakeP2P) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.BlobSidecar) error {
func (_ *FakeP2P) BroadcastBlob(_ context.Context, _ uint64, _ *ethpb.SignedBlobSidecar) error {
return nil
}

View File

@@ -43,7 +43,7 @@ func (m *MockBroadcaster) BroadcastSyncCommitteeMessage(_ context.Context, _ uin
}
// BroadcastBlob broadcasts a blob for mock.
func (m *MockBroadcaster) BroadcastBlob(context.Context, uint64, *ethpb.BlobSidecar) error {
func (m *MockBroadcaster) BroadcastBlob(context.Context, uint64, *ethpb.SignedBlobSidecar) error {
m.BroadcastCalled.Store(true)
return nil
}

View File

@@ -178,7 +178,7 @@ func (p *TestP2P) BroadcastSyncCommitteeMessage(_ context.Context, _ uint64, _ *
}
// BroadcastBlob broadcasts a blob for mock.
func (p *TestP2P) BroadcastBlob(context.Context, uint64, *ethpb.BlobSidecar) error {
func (p *TestP2P) BroadcastBlob(context.Context, uint64, *ethpb.SignedBlobSidecar) error {
p.BroadcastCalled.Store(true)
return nil
}

View File

@@ -28,15 +28,12 @@ go_library(
"//beacon-chain/rpc/eth/beacon:go_default_library",
"//beacon-chain/rpc/eth/blob:go_default_library",
"//beacon-chain/rpc/eth/builder:go_default_library",
"//beacon-chain/rpc/eth/config:go_default_library",
"//beacon-chain/rpc/eth/debug:go_default_library",
"//beacon-chain/rpc/eth/events:go_default_library",
"//beacon-chain/rpc/eth/light-client:go_default_library",
"//beacon-chain/rpc/eth/node:go_default_library",
"//beacon-chain/rpc/eth/rewards:go_default_library",
"//beacon-chain/rpc/eth/validator:go_default_library",
"//beacon-chain/rpc/lookup:go_default_library",
"//beacon-chain/rpc/prysm/beacon:go_default_library",
"//beacon-chain/rpc/prysm/node:go_default_library",
"//beacon-chain/rpc/prysm/v1alpha1/beacon:go_default_library",
"//beacon-chain/rpc/prysm/v1alpha1/debug:go_default_library",
@@ -76,7 +73,6 @@ go_test(
deps = [
"//beacon-chain/blockchain/testing:go_default_library",
"//beacon-chain/execution/testing:go_default_library",
"//beacon-chain/startup:go_default_library",
"//beacon-chain/sync/initial-sync/testing:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",

View File

@@ -12,10 +12,14 @@ go_library(
importpath = "github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/apimiddleware",
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"//api/gateway/apimiddleware:go_default_library",
"//api/grpc:go_default_library",
"//beacon-chain/rpc/eth/events:go_default_library",
"//config/params:go_default_library",
"//consensus-types/primitives:go_default_library",
"//network/http:go_default_library",
"//proto/eth/v2:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
@@ -33,12 +37,16 @@ go_test(
],
embed = [":go_default_library"],
deps = [
"//api:go_default_library",
"//api/gateway/apimiddleware:go_default_library",
"//api/grpc:go_default_library",
"//beacon-chain/rpc/eth/events:go_default_library",
"//config/params:go_default_library",
"//proto/eth/v2:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"//time/slots:go_default_library",
"@com_github_gogo_protobuf//types:go_default_library",
"@com_github_r3labs_sse_v2//:go_default_library",
],
)

View File

@@ -2,17 +2,178 @@ package apimiddleware
import (
"bytes"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strconv"
"strings"
"github.com/prysmaticlabs/prysm/v4/api"
"github.com/prysmaticlabs/prysm/v4/api/gateway/apimiddleware"
"github.com/prysmaticlabs/prysm/v4/api/grpc"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/events"
http2 "github.com/prysmaticlabs/prysm/v4/network/http"
"github.com/prysmaticlabs/prysm/v4/runtime/version"
"github.com/r3labs/sse/v2"
)
type sszConfig struct {
fileName string
responseJson SszResponse
}
func handleProduceBlockSSZ(m *apimiddleware.ApiProxyMiddleware, endpoint apimiddleware.Endpoint, w http.ResponseWriter, req *http.Request) (handled bool) {
config := sszConfig{
fileName: "produce_beacon_block.ssz",
responseJson: &VersionedSSZResponseJson{},
}
return handleGetSSZ(m, endpoint, w, req, config)
}
func handleProduceBlindedBlockSSZ(
m *apimiddleware.ApiProxyMiddleware,
endpoint apimiddleware.Endpoint,
w http.ResponseWriter,
req *http.Request,
) (handled bool) {
config := sszConfig{
fileName: "produce_blinded_beacon_block.ssz",
responseJson: &VersionedSSZResponseJson{},
}
return handleGetSSZ(m, endpoint, w, req, config)
}
func handleGetSSZ(
m *apimiddleware.ApiProxyMiddleware,
endpoint apimiddleware.Endpoint,
w http.ResponseWriter,
req *http.Request,
config sszConfig,
) (handled bool) {
ssz := http2.SszRequested(req)
if !ssz {
return false
}
if errJson := prepareSSZRequestForProxying(m, endpoint, req); errJson != nil {
apimiddleware.WriteError(w, errJson, nil)
return true
}
grpcResponse, errJson := m.ProxyRequest(req)
if errJson != nil {
apimiddleware.WriteError(w, errJson, nil)
return true
}
grpcResponseBody, errJson := apimiddleware.ReadGrpcResponseBody(grpcResponse.Body)
if errJson != nil {
apimiddleware.WriteError(w, errJson, nil)
return true
}
respHasError, errJson := apimiddleware.HandleGrpcResponseError(endpoint.Err, grpcResponse, grpcResponseBody, w)
if errJson != nil {
apimiddleware.WriteError(w, errJson, nil)
return true
}
if respHasError {
return true
}
if errJson := apimiddleware.DeserializeGrpcResponseBodyIntoContainer(grpcResponseBody, config.responseJson); errJson != nil {
apimiddleware.WriteError(w, errJson, nil)
return true
}
respVersion, responseSsz, errJson := serializeMiddlewareResponseIntoSSZ(config.responseJson)
if errJson != nil {
apimiddleware.WriteError(w, errJson, nil)
return true
}
if errJson := writeSSZResponseHeaderAndBody(grpcResponse, w, responseSsz, respVersion, config.fileName); errJson != nil {
apimiddleware.WriteError(w, errJson, nil)
return true
}
if errJson := apimiddleware.Cleanup(grpcResponse.Body); errJson != nil {
apimiddleware.WriteError(w, errJson, nil)
return true
}
return true
}
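From the client side, these handlers only engage when SSZ is requested. Assuming SszRequested keys off an octet-stream Accept header and that api.VersionHeader is the standard Eth-Consensus-Version header (neither helper nor constant is shown in this diff), a request against the produce-block route could look roughly like the sketch below; the port, slot, and randao_reveal value are illustrative placeholders:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Illustrative URL: default gateway port and dummy query values assumed.
	url := "http://localhost:3500/eth/v2/validator/blocks/123?randao_reveal=0xabc"
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		panic(err)
	}
	// Advertise that we want the SSZ-encoded response rather than JSON.
	req.Header.Set("Accept", "application/octet-stream")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// The version header is set by writeSSZResponseHeaderAndBody above.
	fmt.Printf("status %d, %d SSZ bytes, version %q\n",
		resp.StatusCode, len(body), resp.Header.Get("Eth-Consensus-Version"))
}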
func prepareSSZRequestForProxying(m *apimiddleware.ApiProxyMiddleware, endpoint apimiddleware.Endpoint, req *http.Request) apimiddleware.ErrorJson {
req.URL.Scheme = "http"
req.URL.Host = m.GatewayAddress
req.RequestURI = ""
if errJson := apimiddleware.HandleURLParameters(endpoint.Path, req, endpoint.RequestURLLiterals); errJson != nil {
return errJson
}
if errJson := apimiddleware.HandleQueryParameters(req, endpoint.RequestQueryParams); errJson != nil {
return errJson
}
// We have to add new segments after handling parameters because it changes URL segment indexing.
req.URL.Path = "/internal" + req.URL.Path + "/ssz"
return nil
}
func preparePostedSSZData(req *http.Request) apimiddleware.ErrorJson {
buf, err := io.ReadAll(req.Body)
if err != nil {
return apimiddleware.InternalServerErrorWithMessage(err, "could not read body")
}
j := SszRequestJson{Data: base64.StdEncoding.EncodeToString(buf)}
data, err := json.Marshal(j)
if err != nil {
return apimiddleware.InternalServerErrorWithMessage(err, "could not prepare POST data")
}
req.Body = io.NopCloser(bytes.NewBuffer(data))
req.ContentLength = int64(len(data))
req.Header.Set("Content-Type", api.JsonMediaType)
return nil
}
func serializeMiddlewareResponseIntoSSZ(respJson SszResponse) (version string, ssz []byte, errJson apimiddleware.ErrorJson) {
// Serialize the SSZ part of the deserialized value.
data, err := base64.StdEncoding.DecodeString(respJson.SSZData())
if err != nil {
return "", nil, apimiddleware.InternalServerErrorWithMessage(err, "could not decode response body into base64")
}
return strings.ToLower(respJson.SSZVersion()), data, nil
}
func writeSSZResponseHeaderAndBody(grpcResp *http.Response, w http.ResponseWriter, respSsz []byte, respVersion, fileName string) apimiddleware.ErrorJson {
var statusCodeHeader string
for h, vs := range grpcResp.Header {
// We don't want to expose any gRPC metadata in the HTTP response, so we skip forwarding metadata headers.
if strings.HasPrefix(h, "Grpc-Metadata") {
if h == "Grpc-Metadata-"+grpc.HttpCodeMetadataKey {
statusCodeHeader = vs[0]
}
} else {
for _, v := range vs {
w.Header().Set(h, v)
}
}
}
w.Header().Set("Content-Length", strconv.Itoa(len(respSsz)))
w.Header().Set("Content-Type", api.OctetStreamMediaType)
w.Header().Set("Content-Disposition", "attachment; filename="+fileName)
w.Header().Set(api.VersionHeader, respVersion)
if statusCodeHeader != "" {
code, err := strconv.Atoi(statusCodeHeader)
if err != nil {
return apimiddleware.InternalServerErrorWithMessage(err, "could not parse status code")
}
w.WriteHeader(code)
} else {
w.WriteHeader(grpcResp.StatusCode)
}
if _, err := io.Copy(w, io.NopCloser(bytes.NewReader(respSsz))); err != nil {
return apimiddleware.InternalServerErrorWithMessage(err, "could not write response message")
}
return nil
}
func handleEvents(m *apimiddleware.ApiProxyMiddleware, _ apimiddleware.Endpoint, w http.ResponseWriter, req *http.Request) (handled bool) {
sseClient := sse.NewClient("http://" + m.GatewayAddress + "/internal" + req.URL.RequestURI())
sseClient.Headers["Grpc-Timeout"] = "0S"

View File

@@ -4,16 +4,165 @@ import (
"bytes"
"context"
"encoding/json"
"net/http"
"net/http/httptest"
"strings"
"testing"
"time"
"github.com/prysmaticlabs/prysm/v4/api"
"github.com/prysmaticlabs/prysm/v4/api/gateway/apimiddleware"
"github.com/prysmaticlabs/prysm/v4/api/grpc"
"github.com/prysmaticlabs/prysm/v4/beacon-chain/rpc/eth/events"
"github.com/prysmaticlabs/prysm/v4/testing/assert"
"github.com/prysmaticlabs/prysm/v4/testing/require"
"github.com/r3labs/sse/v2"
)
type testSSZResponseJson struct {
Version string `json:"version"`
ExecutionOptimistic bool `json:"execution_optimistic"`
Finalized bool `json:"finalized"`
Data string `json:"data"`
}
func (t testSSZResponseJson) SSZVersion() string {
return t.Version
}
func (t testSSZResponseJson) SSZOptimistic() bool {
return t.ExecutionOptimistic
}
func (t testSSZResponseJson) SSZData() string {
return t.Data
}
func (t testSSZResponseJson) SSZFinalized() bool {
return t.Finalized
}
func TestPrepareSSZRequestForProxying(t *testing.T) {
middleware := &apimiddleware.ApiProxyMiddleware{
GatewayAddress: "http://apimiddleware.example",
}
endpoint := apimiddleware.Endpoint{
Path: "http://foo.example",
}
var body bytes.Buffer
request := httptest.NewRequest("GET", "http://foo.example", &body)
errJson := prepareSSZRequestForProxying(middleware, endpoint, request)
require.Equal(t, true, errJson == nil)
assert.Equal(t, "/internal/ssz", request.URL.Path)
}
func TestPreparePostedSszData(t *testing.T) {
var body bytes.Buffer
body.Write([]byte("body"))
request := httptest.NewRequest("POST", "http://foo.example", &body)
preparePostedSSZData(request)
assert.Equal(t, int64(19), request.ContentLength)
assert.Equal(t, api.JsonMediaType, request.Header.Get("Content-Type"))
}
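The asserted ContentLength of 19 checks out by hand, assuming SszRequestJson (not shown in this diff) marshals its Data field under a "data" key:

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

func main() {
	// Mirrors preparePostedSSZData: base64-encode the raw body and wrap it in
	// a JSON envelope. base64("body") is "Ym9keQ==", eight characters, so the
	// envelope {"data":"Ym9keQ=="} comes out to 19 bytes.
	payload, err := json.Marshal(map[string]string{
		"data": base64.StdEncoding.EncodeToString([]byte("body")),
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(payload), len(payload)) // {"data":"Ym9keQ=="} 19
}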
func TestSerializeMiddlewareResponseIntoSSZ(t *testing.T) {
t.Run("ok", func(t *testing.T) {
j := testSSZResponseJson{
Version: "Version",
Data: "Zm9v",
}
v, ssz, errJson := serializeMiddlewareResponseIntoSSZ(j)
require.Equal(t, true, errJson == nil)
assert.DeepEqual(t, []byte("foo"), ssz)
assert.Equal(t, "version", v)
})
t.Run("invalid_data", func(t *testing.T) {
j := testSSZResponseJson{
Version: "Version",
Data: "invalid",
}
_, _, errJson := serializeMiddlewareResponseIntoSSZ(j)
require.Equal(t, false, errJson == nil)
assert.Equal(t, true, strings.Contains(errJson.Msg(), "could not decode response body into base64"))
assert.Equal(t, http.StatusInternalServerError, errJson.StatusCode())
})
}
func TestWriteSSZResponseHeaderAndBody(t *testing.T) {
responseSsz := []byte("ssz")
version := "version"
fileName := "test.ssz"
t.Run("ok", func(t *testing.T) {
response := &http.Response{
Header: http.Header{
"Foo": []string{"foo"},
"Grpc-Metadata-" + grpc.HttpCodeMetadataKey: []string{"204"},
},
}
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
errJson := writeSSZResponseHeaderAndBody(response, writer, responseSsz, version, fileName)
require.Equal(t, true, errJson == nil)
v, ok := writer.Header()["Foo"]
require.Equal(t, true, ok, "header not found")
require.Equal(t, 1, len(v), "wrong number of header values")
assert.Equal(t, "foo", v[0])
v, ok = writer.Header()["Content-Length"]
require.Equal(t, true, ok, "header not found")
require.Equal(t, 1, len(v), "wrong number of header values")
assert.Equal(t, "3", v[0])
v, ok = writer.Header()["Content-Type"]
require.Equal(t, true, ok, "header not found")
require.Equal(t, 1, len(v), "wrong number of header values")
assert.Equal(t, api.OctetStreamMediaType, v[0])
v, ok = writer.Header()["Content-Disposition"]
require.Equal(t, true, ok, "header not found")
require.Equal(t, 1, len(v), "wrong number of header values")
assert.Equal(t, "attachment; filename=test.ssz", v[0])
v, ok = writer.Header()[api.VersionHeader]
require.Equal(t, true, ok, "header not found")
require.Equal(t, 1, len(v), "wrong number of header values")
assert.Equal(t, "version", v[0])
assert.Equal(t, 204, writer.Code)
})
t.Run("no_grpc_status_code_header", func(t *testing.T) {
response := &http.Response{
Header: http.Header{},
StatusCode: 204,
}
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
errJson := writeSSZResponseHeaderAndBody(response, writer, responseSsz, version, fileName)
require.Equal(t, true, errJson == nil)
assert.Equal(t, 204, writer.Code)
})
t.Run("invalid_status_code", func(t *testing.T) {
response := &http.Response{
Header: http.Header{
"Foo": []string{"foo"},
"Grpc-Metadata-" + grpc.HttpCodeMetadataKey: []string{"invalid"},
},
}
responseSsz := []byte("ssz")
writer := httptest.NewRecorder()
writer.Body = &bytes.Buffer{}
errJson := writeSSZResponseHeaderAndBody(response, writer, responseSsz, version, fileName)
require.Equal(t, false, errJson == nil)
assert.Equal(t, true, strings.Contains(errJson.Msg(), "could not parse status code"))
assert.Equal(t, http.StatusInternalServerError, errJson.StatusCode())
})
}
func TestReceiveEvents(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
ch := make(chan *sse.Event)

Some files were not shown because too many files have changed in this diff.