Compare commits

..

306 Commits

Author SHA1 Message Date
Preston Van Loon
0b3ce46d28 Minor runtime fixes (#2961) 2019-07-14 10:23:23 -07:00
Preston Van Loon
d0ef6dd7f3 Minor fixes for runtime (#2960)
* Minor fixes for runtime

* use comments

* goimports

* revert beacon-chain/core/state/state.go and fix comment for lint

* fix test too
2019-07-13 23:00:42 -04:00
Nishant Das
5be922abe4 Add Tests for Genesis Deposits Caching (#2952)
* remove old method and replace with an improved one

* add new files

* gaz

* add test

* added all these tests

* gaz

* Apply suggestions from code review

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* fix merkle proof error

* fix config
2019-07-13 15:33:45 -04:00
Preston Van Loon
97b9176411 Fix deposit input data (#2956)
* fix deposit input data

* fix deposit input data

* gaz and build fix
2019-07-13 15:06:36 -04:00
Preston Van Loon
76ca77bc1b fix breakages from #2957 (#2958)
* fix breakages from #2957

* oops
2019-07-13 14:53:59 -04:00
Preston Van Loon
78d17f07c6 remove unused chainstart param (#2957) 2019-07-13 13:05:13 -04:00
Preston Van Loon
882c1e6bf3 cleanup deposit contract slightly 2019-07-13 11:16:49 -04:00
Preston Van Loon
4b8490a4dd add skip reason 2019-07-13 10:38:57 -04:00
Raul Jordan
9daff21027 Add Back Eth1Data After Bad Merge (#2953)
* eth1data rpc endpoint

* first version

* comment added

* gazelle fix

* new function to go once over the deposit array

* fix tests

* export DepositContainer

* terence feedback

* move structure declaration

* binary search

* fix block into Block

* preston feedback

* keep slice sorted to remove overhead in retrieval

* merge changes

* feedback

* update to the latest go-ssz

* revert change

* changes to fit new ssz

* revert merge reversion

* go fmt, goimports, duplicate string

* exception for lint unused doesParentExist

* feedback changes

* latesteth1data to eth1data

* goimports and stop exposing Eth1Data

* revert unneeded change

* remove exposure of DepositContainer

* feedback and fixes

* fix workspace duplicate dependency

* greatest number of deposits at current height

* add count votes function

* change method name

* revert back to latesteth1data

* latesteth1data

* preston feedback

* separate function, add tests, fix bug

* stop exposing voteCountMap

* eth1data comment fix

* preston feedback

* fix tests

* new proto files

* workspace to default version of ssz

* new ssz

* change test size

* marshalled -> marshaled

* everything passing again
2019-07-12 14:33:46 -05:00
terence tsao
e5648f46c1 Static check on branch spec-v0.6 (#2946)
* Ran staticcheck and fixed the important complaints

* commit

* commit

* Create Test Runner for Genesis State Spec Tests (#2940)

* genesis change

* integrate changes

* bodyroot

* remove unused code

* HasChainStarted

* added isValidGenesisState to ProcessLog

* state fix

* fix gazelle

* uint64 timestamp

* SetupInitialDeposits adds proof

* remove unneeded parts of test

* deposithash

* merkleproof from spec utils

* Revert "merkleproof from spec utils"

This reverts commit 1b0a124352.

* fix test failures

* chain started and hashtree root in tests

* simple eth2genesistime

* eth2 genesis time

* fix zero time

* add comment

* remove eth1data and feedback

* fix build issues

* main changes: add fields and methods to track active validator count

* gaz

* fix test

* fix more tests

* improve test utils

* Start genesis spec tests

* shift spec method to state package, improve test setup

* Add Genesis validity spec test

* Bazel

* fixed log processing tests

* remove log

* gaz

* fix invalid metric

* use json tags

* fix up latest changes

* Fix most of test errors

* Attempts to see what's wrong with genesis validity

* Fix merge

* skip minimal

* fix state test

* new commit

* fix nishant comment

* gaz
2019-07-12 11:32:44 -05:00
Ivan Martinez
0d8165aa98 Create Test Runner for Genesis State Spec Tests (#2940)
* genesis change

* integrate changes

* bodyroot

* remove unused code

* HasChainStarted

* added isValidGenesisState to ProcessLog

* state fix

* fix gazelle

* uint64 timestamp

* SetupInitialDeposits adds proof

* remove unneeded parts of test

* deposithash

* merkleproof from spec utils

* Revert "merkleproof from spec utils"

This reverts commit 1b0a124352.

* fix test failures

* chain started and hashtree root in tests

* simple eth2genesistime

* eth2 genesis time

* fix zero time

* add comment

* remove eth1data and feedback

* fix build issues

* main changes: add fields and methods to track active validator count

* gaz

* fix test

* fix more tests

* improve test utils

* Start genesis spec tests

* shift spec method to state package, improve test setup

* Add Genesis validity spec test

* Bazel

* fixed log processing tests

* remove log

* gaz

* fix invalid metric

* use json tags

* fix up latest changes

* Fix most of test errors

* Attempts to see what's wrong with genesis validity

* Fix merge

* skip minimal

* fix state test

* new commit

* fix nishant comment

* gaz
2019-07-12 11:19:46 -05:00
Nishant Das
971d8bdcce Remove Block Signing Root (#2945)
* replace with hash tree root

* Revert "replace with hash tree root"

This reverts commit 77d8f16a16.

* replace with signing root instead

* remove one more ref
2019-07-12 09:13:29 -04:00
Preston Van Loon
a5d483aa50 Merge branch 'spec-v0.6' of github.com:prysmaticlabs/prysm into spec-v0.6 2019-07-12 01:17:18 -04:00
Preston Van Loon
cdd2e35152 use better tag names, not latest 2019-07-12 01:17:08 -04:00
shayzluf
5f9f496f64 Genesis trigger (#2905)
* genesis change

* integrate changes

* bodyroot

* remove unused code

* HasChainStarted

* added isValidGenesisState to ProcessLog

* state fix

* fix gazelle

* uint64 timestamp

* SetupInitialDeposits adds proof

* remove unneeded parts of test

* deposithash

* merkleproof from spec utils

* Revert "merkleproof from spec utils"

This reverts commit 1b0a124352.

* fix test failures

* chain started and hashtree root in tests

* simple eth2genesistime

* eth2 genesis time

* fix zero time

* add comment

* remove eth1data and feedback

* fix build issues

* main changes: add fields and methods to track active validator count

* gaz

* fix test

* fix more tests

* improve test utils

* shift spec method to state package, improve test setup

* fixed log processing tests

* remove log

* gaz

* fix invalid metric
2019-07-11 16:10:37 -05:00
terence tsao
1514f7849a Final updates spec tests (#2901)
* set up tests

* need to reorder pbs

* figuring out how to squeeze in multiple fields in a tag for pb

* Added max tags and regenerated pb.go

* updated to new standard

* New bitfield types in proto (#2915)

* Cast bytes to correct bitfield, failing tests now though

* Add forked gogo/protobuf until https://github.com/gogo/protobuf/pull/582

* remove newline

* use proper override for gogo-protobuf

* fix a few tags

* forgot to include custody bits and Slashable not used

* playing with tags idea, can revert this commit later

* fixes after merge

* reset caches before test

* all epoch tests pass

* gazelle
2019-07-11 15:35:09 -05:00
Preston Van Loon
35de6cc816 fix bes flag 2019-07-11 10:33:26 -04:00
terence tsao
67c083460e Add ssz compatibility tests for signing root (#2931)
* Added tests for signing root

* imports

* fix lint on travis
2019-07-11 09:26:21 -05:00
Preston Van Loon
7c4b5eba09 add backend service 2019-07-11 10:17:34 -04:00
Ivan Martinez
921b56522d Perform Mutesting In core/state Package (#2923)
* Perform mutesting on validator

* Mutesting in helpers package

* Mutested eth1data

* Perform mutesting in epoch and state packages

* Fix voluntary exits test

* Fix typo

* Fix comments

* Fix formatting

* Fix error message

* Handle missing errors

* Handle all errors

* Perform Mutesting In State Package

* Fix block roots size

* Remove comment

* Fix error
2019-07-11 08:39:16 -04:00
terence tsao
ddbecc81bf Epoch Process Slashings Spec Tests (#2930)
* updated justification bits, tests passing OK

* regen pb.go, clarify bit operations

* justification and finalization tests; failing

* Add wrapper, so we call the correct method

* checkpoint

* Update tar ref

* TestSlashingsMinimal/small_penalty still failing

* Use bigint instead of () and float

* Revert a bad merge from workspace

* Fmt

* add note about https://github.com/ethereum/eth2.0-specs/issues/1284

* improve tests

* gaz
2019-07-11 08:37:48 -04:00
Preston Van Loon
58ca70acfa Add ssz regression tests for investigation of test failures in PR #2828 (#2935)
* Adding regression tests for investigation

* add another example

* goimports

* add quick comment about test case 0
2019-07-10 23:38:37 -05:00
Preston Van Loon
ec1aa897bf satisfy deprecation warning 2019-07-10 23:55:40 -04:00
Preston Van Loon
2b57cf3bac Merge branch 'spec-v0.6' of github.com:prysmaticlabs/prysm into spec-v0.6 2019-07-10 23:19:16 -04:00
Preston Van Loon
8b47f2ad1a update bazel jobs flag 2019-07-10 23:19:11 -04:00
Preston Van Loon
fbfbb126a1 update bazel jobs flag 2019-07-10 23:18:42 -04:00
terence tsao
b04d4c466a Block operations spec tests (#2828)
* update PrevEpoch

* debugging proposer slashing tests

* fmt

* add deposit tests

* Added skeleton for attestation minimal test

* remove bazel runfiles thing

* add deposits

* proposer slashing test is done

* comment

* complete test, some failing cases

* sig verify on

* refactor slightly to support mainnet and minimal

* included mainnet stuff

* Add block header tests

* voluntary exit done

* transfer done

* new changes

* fix all tests

* update domain functions

* fmt

* fixed lint

* fixed all the tests

* fixed a transfer bug

* finished attester slashing tests and fixed a few bugs

* started fixing...

* cleaned up exit and proposr slashing tests

* attester slashing passing

* refactored deposit tests

* remove yamls, update ssz

* Added todo for invalid sig

* gazelle

* deposits test done!

* transfer tests done and pass!

* fix attesting indices bug

* temporarily disabled signature verification

* cleaned up most of the block ops, except for att

* update committee AttestingIndices

* oops, I don't know how or why I changed this file

* fixed all the rpc tests

* 6 more failing packages

* test max transfer in state package

* replace hashproto with treehash in package blockchain

* gazelle

* fix test

* fix test again

* fixed transition test, 2 more left

* expect an error in attestation tests

* Handle panic when no votes in aggregate attestation

* clear cache

* Add differ, add logging, tests pass yay

* remove todo, add tag

* fixed TestReceiveBlock_RemovesPendingDeposits

* TestAttestationMinimal/success_since_max_epochs_per_crosslink fails now...

* handle panics

* Transfer tests were disabled in https://github.com/ethereum/eth2.0-specs/pull/1238

* more fixes after merge, updating block_operations.yaml.go to match yaml

* figuring out how to squeeze in multiple fields in a tag for pb

* Added max tags and regenerated pb.go

* updated to new standard

* New bitfield types in proto (#2915)

* Cast bytes to correct bitfield, failing tests now though

* Add forked gogo/protobuf until https://github.com/gogo/protobuf/pull/582

* remove newline

* fix references and test panic

* change to proto objects from custom types

* fix panics in tests

* use proper override for gogo-protobuf

* fix a few tags

* fix tests

* forgot to include custody bits and Slashable not used

* fix tests

* sort again

* Update yaml struct to use pb

* Update workspace to use latest ssz

* All tests fail

* Use the latest go-ssz commit

* All pass except for state (too long to test)

* Update test.proto

* Added rest of the tests

* use 1 justification bits

* minor fixes

* wrong proto.Equal

* fix tag test, apply @rauljordan's suggestion

* add IsEmpty and use ssz struct

* inverted logic

* update zero hash

* change zero hash to conform with spec

* goimports

* add test for zero hash

* Revert "Update zero hash to sha256().digest() (#2932)"

This reverts commit b926ae0667.

* update ssz, fix import, shard big test

* checkpoint

* fix compress validator

* update go-ssz

* missing import

* missing import

* tests now pass

* been a good day

* update test size

* fix lint

* imports and remove unused const
2019-07-11 08:34:27 +05:30
Preston Van Loon
ee826991a3 try minimal 2019-07-10 22:12:12 -04:00
Preston Van Loon
9e589a6b62 build without the bytes test 2019-07-10 22:08:08 -04:00
Preston Van Loon
afc2bad4d7 Fix compress validator (#2936)
* fix compress validator

* update go-ssz
2019-07-10 15:50:35 -05:00
Nishant Das
18d7f909e2 Revert "Update zero hash to sha256().digest() (#2932)" (#2933)
This reverts commit b926ae0667.
2019-07-10 08:13:35 -04:00
Preston Van Loon
b926ae0667 Update zero hash to sha256().digest() (#2932)
* update zero hash

* change zero hash to conform with spec

* goimports

* add test for zero hash
2019-07-09 22:29:14 -04:00
Preston Van Loon
9eb22c4f52 delete unused file 2019-07-09 19:59:48 -04:00
terence tsao
af142a4119 Fix SSZ Compatibility Test (#2924)
* figuring out how to squeeze in multiple fields in a tag for pb

* Added max tags and regenerated pb.go

* updated to new standard

* New bitfield types in proto (#2915)

* Cast bytes to correct bitfield, failing tests now though

* Add forked gogo/protobuf until https://github.com/gogo/protobuf/pull/582

* remove newline

* use proper override for gogo-protobuf

* fix a few tags

* forgot to include custody bits and Slashable not used

* Update yaml struct to use pb

* Update workspace to use latest ssz

* All tests fail

* Use the latest go-ssz commit

* All pass except for state (too long to test)

* Update test.proto

* Added rest of the tests

* use 1 justification bits

* fix tag test, apply @rauljordan's suggestion

* add IsEmpty and use ssz struct
2019-07-09 19:47:48 -04:00
Preston Van Loon
c6372c109e Merge branch 'master' of github.com:prysmaticlabs/prysm into spec-v0.6 2019-07-09 16:24:11 -04:00
Preston Van Loon
a9d88725fe update go-ssz 2019-07-09 16:07:21 -04:00
Preston Van Loon
0acf164ebd Justification spec tests (#2896) 2019-07-09 10:08:04 -07:00
terence tsao
af0d9d1c63 Update Validator Workflow (#2906) 2019-07-09 09:36:22 -07:00
terence tsao
f34ef8c54c Fixed a few incorrect proto fields (#2926) 2019-07-08 14:18:04 -07:00
Terence Tsao
320e7affd6 Regen pb.go 2019-07-08 11:13:38 -07:00
Terence Tsao
e9931a6fe0 No more space between ssz-size numbers 2019-07-08 11:12:24 -07:00
terence tsao
c029fed293 Add Max Size Tag for Protobuf Fields (#2908) 2019-07-08 10:42:05 -07:00
terence tsao
4b4441ed8e done with updates (#2890) 2019-07-08 11:14:36 +05:30
Ivan Martinez
8773bf881a Perform Mutesting in Epoch and State Packages (#2913) 2019-07-07 13:13:22 -07:00
Nishant Das
1cf1e081ac Fix Spec tests (#2907)
* fix panics

* handle failed transitions

* remove log

* fix to protos

* new changes

* remove i

* change ssz commit

* new changes

* update epoch tests

* fix epoch testing

* fix shuffle tests

* fix test
2019-07-07 10:51:01 -04:00
Nishant Das
5ae798549a Update BLS Domain (#2916)
* change from integer to byte slice

* add test

* fix func for bytes4
2019-07-07 10:41:28 -04:00
terence tsao
175ad3c9b1 s/volundary/voluntary (#2914) 2019-07-06 10:44:13 -07:00
Ivan Martinez
064f6cba4d Perform Mutesting in Helpers Package (#2912)
* Perform mutesting on validator

* Mutesting in helpers package

* Mutested eth1data
2019-07-06 12:17:58 -04:00
Preston Van Loon
add9647b7b update WORKSPACE to use 0.8 spec tests 2019-07-04 16:09:54 -04:00
Nishant Das
2e145dd3ae Update Deposit Contract (#2903)
* update to new contract

* fix references

* fix tests

* fix some more tests

* fix local deposit trie

* gaz

* shays review

* more changes
2019-07-04 16:17:22 +05:30
Preston Van Loon
3cb5203f2b Remove deprecated fields from attestation data (#2892)
* fix broken tests

* remove comment

* fixes
2019-07-03 18:58:21 -04:00
Nishant Das
8674cc3d26 Implement Compact Committee Root (#2897)
* add tags

* add function

* add new code

* add function

* add all new changes

* lint

* add tests

* fix tests

* fix more tests

* fix all outstanding tests

* gaz

* Update beacon-chain/core/helpers/committee.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* comment
2019-07-03 19:41:27 +05:30
Nishant Das
79c7307095 Slashing Penalty Calculation Change (#2889)
* lint

* change config val

* add max helper

* changes to slashing and process slashing, add a min function for integers

* gaz

* fix failing tests

* fix test

* fixed all tests

* Change Yaml tag

* lint

* remove gc hack

* fix test

* gaz

* preston's comments

* change failing field

* fix and regen proto

* lint
2019-07-03 08:30:22 +05:30
terence tsao
be35f0297b reorder proto fields (#2902) 2019-07-02 21:06:57 -04:00
Ivan Martinez
4ba541dea4 Move Block Operation Length Checks to ProcessOperations (#2900)
* Move block operation length checks to ProcessOperations

* Write tests for each length check in ProcessOperations

* Remove unneeded test

* Move checks to a new function

* Move duplicate check back into ProcessOperations
2019-07-03 02:30:10 +09:00
Ivan Martinez
f612b816f6 Make Constants Explicit and Minor Cleanups (#2898)
* Rename outdated configs, make constants explicitly declared

* Remove activate_validator, not needed

* Remove GenesisSlot and GenesisEpoch

* Remove unused import
2019-07-03 01:22:55 +09:00
terence tsao
24f630d70b updated all the helper pseudocodes (#2895) 2019-07-02 07:40:33 -07:00
Preston Van Loon
2d09a1e539 Update justification bits (#2894) 2019-07-01 14:32:17 -07:00
terence tsao
61fafd0523 Update Freeze Spec Simplification Section - part 1 (#2893)
* finished changes to attesting_indices

* removed index_count <= 2**40 requirement

* lint

* reverted index_count <= 2**40 check

* added short cut len(a) > len(b)
2019-07-01 14:58:09 -04:00
Nishant Das
de5df510e4 Change Base Reward Factor (#2888) 2019-07-01 06:54:21 -07:00
terence tsao
5af9774a41 Update State Transition Function (#2867) 2019-07-01 06:52:44 -07:00
Preston Van Loon
69e46a9628 fix build 2019-07-01 02:36:12 -04:00
Preston Van Loon
d3cbad3608 update latest sha for spec tests 2019-07-01 02:10:43 -04:00
Preston Van Loon
878540bfc5 update readme target 2019-07-01 00:58:20 -04:00
Preston Van Loon
8b024d58ac Mutation testing fixes for beacon-chain/core/helpers/committee.go (#2870)
* Add some fixes for mutation testing on blocks.go

* working on mutation testing fo committee.go

* gofmt

* goimports
2019-06-30 21:21:24 -04:00
terence tsao
059c1daff8 Update Configs for Freeze (#2876)
* update configs

* updated minimal configs

* almost there

* all tests passing except for spec tests

* better comment for MinGenesisTime

* done, ready for review

* rm seconds per day

* feedback
2019-06-30 21:17:23 -04:00
Preston Van Loon
4b94d88f3d Removes some deprecated fields from protobuf (#2877)
* search and replace checkpoints

* fix tests, except spec tests
2019-06-30 21:11:58 -04:00
terence tsao
06aae90f15 Align Protobuf Type Names (#2872) 2019-06-30 13:59:35 -07:00
Preston Van Loon
74a2198a22 Spec freeze release candidate spectests 2019-06-30 16:48:52 -04:00
Preston Van Loon
230810533e gaz 2019-06-30 12:20:34 -04:00
Preston Van Loon
237f0db6de Fix sizes 2019-06-30 12:20:20 -04:00
Preston Van Loon
697a3dcb04 Merge branch 'master' of github.com:prysmaticlabs/prysm into spec-v0.6 2019-06-30 12:10:53 -04:00
Preston Van Loon
db5463e7fa Add some fixes for mutation testing on blocks.go (#2869) 2019-06-29 14:40:20 -07:00
Preston Van Loon
b87b730f60 Mutation testing fixes for beacon-chain/core/helpers/attestation.go (#2868)
* mutation testing for attestation.go

* new line

* lint

* revert fmt.Errorf deletion

* gofmt
2019-06-29 13:34:14 -04:00
Nishant Das
c5a4778ef9 Block Processing Sanity Spec Tests (#2817)
* update PrevEpoch

* add new changes

* shift to blocks package

* add more changes

* new changes

* updated pb with size tags

* add new changes

* fix errors

* uncomment code

* more changes

* add new changes

* rename and lint

* gaz

* more changes

* process slot SigningRoot instead of HashTreeRoot

* ensure yaml generated structs work

* block sanity all passing

* minimal and mainnet all pass

* remove commented code

* fix one test

* fix all tests

* fix again

* no state comparison

* matching spec

* change target viz

* comments gazelle

* clear caches before test cases

* latest attempts

* clean up test format

* remove debugging log, remove yaml

* unskip attestation

* remove skip, check post state, diff state diffs

* handle err

* add bug fixes

* fixed one more bug

* fixed churn limit bug

* change hashProto to HashTreeRoot

* all tests pass :)

* fix all tests

* gaz

* add regression tests

* fix test bug
2019-06-29 20:06:02 +05:30
terence tsao
816d14f31b removed old chaintest, simulated backend and state generator (#2863) 2019-06-26 16:44:56 -07:00
Nishant Das
598916566e Attesting Indices Fix (#2862)
* add change

* fix one test

* fix all tests

* add test

* clear cache
2019-06-26 09:39:29 -05:00
shayzluf
50e63f0cf0 eth1data rpc endpoint (#2733)
* eth1data rpc endpoint

* first version

* comment added

* gazelle fix

* new function to go once over the deposit array

* fix tests

* export DepositContainer

* terence feedback

* move structure declaration

* binary search

* fix block into Block

* preston feedback

* keep slice sorted to remove overhead in retrieval

* merge changes

* feedback

* update to the latest go-ssz

* revert change

* changes to fit new ssz

* revert merge reversion

* go fmt, goimports, duplicate string

* exception for lint unused doesParentExist

* feedback changes

* latesteth1data to eth1data

* goimports and stop exposing Eth1Data

* revert unneeded change

* remove exposure of DepositContainer

* feedback and fixes

* fix workspace duplicate dependency

* greatest number of deposits at current height

* add count votes function

* change method name

* revert back to latesteth1data

* latesteth1data

* preston feedback

* separate function, add tests, fix bug

* stop exposing voteCountMap

* eth1data comment fix

* preston feedback

* fix tests

* new proto files

* workspace to default version of ssz

* new ssz

* change test size

* marshalled -> marshaled
2019-06-26 04:19:54 +03:00
terence tsao
50eae63112 Slot processing spec test (#2813) 2019-06-25 11:12:11 -07:00
Nishant Das
e083bb5086 Merge branch 'master' into spec-v0.6 2019-06-25 23:40:28 +05:30
Nishant Das
7bc42b3c6b BLS spec tests (#2826) (#2856)
* bls spec tests

* add more bls tests

* use ioutil instead of bazel runfiles

* dont read bytes

* skip tests that overflow uint64

* manually fix input data

* add tests

* lint and gaz

* add all new changes

* some refactoring, cleanup, remove new API methods that only exist for tests

* gaz

* Remove yamls, skip test
2019-06-25 23:33:02 +05:30
Preston Van Loon
d4be71cbe0 remove yamls from branch 2019-06-25 12:52:47 -04:00
Preston Van Loon
bfd50c3f10 update workspace spec sha 2019-06-25 12:35:39 -04:00
terence tsao
12bc617b62 SSZ compatibility test for protobufs (#2839) 2019-06-25 08:45:02 -07:00
shayzluf
c081179cdd Shuffle tests revisited (#2829)
* first commit

* remove old files, add log

* remove duplicate yaml testing code

* reduce visibility

* nishant feedback changes

* skip TestFromYaml_Pass

* added tags to bazel build

* gazelle fix

* remove unused vars

* add back config

* remove config handling

* remove unused var

* gazelle fix
2019-06-25 13:35:17 +05:30
Nishant Das
f59563cfbe Remove Deposit Index (#2851) 2019-06-24 17:08:38 -07:00
terence tsao
c6f119bd08 Epoch processing spec tests (#2814) 2019-06-24 14:54:28 -07:00
Terence Tsao
85ae64855c Merge branch 'master' into spec-v0.6 2019-06-24 14:39:27 -07:00
Terence Tsao
4a487fca6a sync with master 2019-06-24 14:00:11 -07:00
terence tsao
2b20455037 Update types PB with Size Tags (#2840) 2019-06-23 11:17:09 -07:00
terence tsao
1def1d8d32 Block Processing Bug Fixes (#2836) 2019-06-23 10:07:34 -07:00
terence tsao
540b568bd3 Add ConvertToPb to package testutil (#2838) 2019-06-21 22:37:09 -07:00
Preston Van Loon
ea9de682f9 Revert "Add Functions for Compressed and Uncompressed HashG2 With Domain (#2833)" (#2835)
This reverts commit 7fb2ebf3f1.
2019-06-21 19:35:56 +05:30
Nishant Das
7fb2ebf3f1 Add Functions for Compressed and Uncompressed HashG2 With Domain (#2833)
* add tests

* gaz

* lint
2019-06-21 08:43:03 +05:30
terence tsao
35b67a3dcf Update Domain Related Functions (#2832) 2019-06-20 09:44:58 -07:00
terence tsao
b4b00fd41c Revert "Align Protobuf Type Names (#2825)" (#2827)
This reverts commit 882d067144.
2019-06-19 11:13:48 +05:30
Preston Van Loon
650383d117 gofmt 2019-06-18 23:11:18 -04:00
terence tsao
882d067144 Align Protobuf Type Names (#2825) 2019-06-18 17:22:39 -07:00
Preston Van Loon
06d1161bd2 Merge branch 'master' of github.com:prysmaticlabs/prysm into spec-v0.6 2019-06-18 18:00:58 -04:00
Nishant Das
7563db0538 Add Config YAML for Spec Tests (#2818) 2019-06-18 13:48:33 -07:00
Raul Jordan
f61688a08c Fix Final Missing Items in Block Processing v0.6 (#2710)
* override config successfully

* passes processing

* add signing root helper

* blockchain tests pass

* tests blocked by signing root

* lint

* fix references

* fix protos

* proper use of signing root

* only few failing tests now

* fix final test

* tests passing

* lint and imports

* rem unused

* Update beacon-chain/core/blocks/block_operations.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* lint

* Update beacon-chain/attestation/service_test.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Update beacon-chain/db/block_test.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* rename to hash tree root

* rename decode to unmarshal

* fix

* use latest ssz

* all tests passing

* lint

* fmt
2019-06-18 11:22:24 -05:00
terence tsao
55ea83066c Update BlockRoot to BlockHash (#2816) 2019-06-17 19:42:36 -07:00
Raul Jordan
eecec35b03 Merge branch 'master' into spec-v0.6 2019-06-17 10:24:34 -05:00
terence tsao
b2ddee1ae8 Optimize Committee Assignment RPC (#2787) 2019-06-17 05:56:50 -07:00
Ivan Martinez
ae0a501646 Refactor Deposit Flow and Cleanup Tests (#2788)
* More WIP on cleaning deposit flow

* Fix tests

* Cleanup and imports

* run gazelle

* Move deposit to block_operations

* gazelle

* Update beacon-chain/core/blocks/block_operations.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Fix docs

* Remove unneeded calculations

* Fix tests

* Fix tests finally (?)
2019-06-13 10:00:25 -07:00
Preston Van Loon
2dfc2de155 Merge branch 'master' of github.com:prysmaticlabs/prysm into spec-v0.6 2019-06-12 16:27:06 -04:00
terence tsao
8078917d8c Minor Updates to 0.7 (#2795) 2019-06-12 08:18:15 -07:00
Ivan Martinez
d07ed6427e Update config and function parameters to v0.7 (#2791) 2019-06-12 06:16:55 -07:00
terence tsao
f02afa20ab Update Attester Server RPC Calls (#2773) 2019-06-11 09:57:24 -07:00
Preston Van Loon
5217749c4f Merge branch 'master' into spec-v0.6 2019-06-10 16:14:51 -04:00
Preston Van Loon
5ff4139f0e unstaged changes 2019-06-10 13:55:20 -04:00
Preston Van Loon
5bb7ad31a8 Merge branch 'master' into spec-v0.6 2019-06-10 13:46:33 -04:00
Preston Van Loon
69d1cb40ac Merge remote-tracking branch 'origin/new-v0.6' into spec-v0.6 2019-06-10 13:20:10 -04:00
nisdas
fda0310e42 update to latest go-ssz 2019-06-10 22:57:19 +08:00
nisdas
01c646b2bc Revert "temp change for go-ssz"
This reverts commit 3411cb9d6d.
2019-06-10 22:23:46 +08:00
nisdas
64162a51fa fix test 2019-06-10 16:50:04 +08:00
nisdas
3411cb9d6d temp change for go-ssz 2019-06-10 16:22:45 +08:00
nisdas
3c6c3b9999 fix build files 2019-06-10 16:22:16 +08:00
nisdas
d48e6b17d0 fix all genesis errors 2019-06-10 16:10:06 +08:00
nisdas
9798f83dc6 fix genesis issue for slot ticker 2019-06-10 16:05:13 +08:00
nisdas
5f9c2edebd fix genesis issue 2019-06-10 16:03:47 +08:00
nisdas
8d9960d912 fix errors 2019-06-10 16:02:58 +08:00
nisdas
ca427dcb66 add back protos and mocks 2019-06-10 15:23:46 +08:00
nisdas
df00a73b0c add back deleted stuff that was deleted accidentally 2019-06-10 15:05:31 +08:00
nisdas
d9719b2a83 add back deleted stuff 2019-06-10 14:44:22 +08:00
nisdas
26d275568d updating rpc package 2019-06-10 14:41:51 +08:00
nisdas
7a0f4c0ed2 lint 2019-06-10 13:51:23 +08:00
Terence Tsao
bd0f18281a started fixing blockchain package 2019-06-09 09:06:41 -07:00
Terence Tsao
8b2184dd9c removed shared/forkutils 2019-06-09 08:55:23 -07:00
Terence Tsao
eb76044fff all core tests passing! 2019-06-09 08:48:48 -07:00
Terence Tsao
f01dc69a35 started fixing core 2019-06-09 07:30:16 -07:00
Terence Tsao
03abcd5497 regenerated protos 2019-06-09 06:49:41 -07:00
nisdas
0e035c787e regen protos and mocks 2019-06-09 20:48:06 +08:00
nisdas
e3d4cff19e lint 2019-06-09 19:01:34 +08:00
Nishant Das
4e82811264 Update Genesis State Function to v0.6 (#2465)
* add pseudocode

* make changes

* fix all tests

* fix tests
2019-06-09 18:58:19 +08:00
terence tsao
76a21207bb Update Shard Helpers for 0.6 (#2497)
* ValidatorStatus Estimating Activation RPC Server (#2469)

* fix spacing

* working on position in queue

* fmt

* spacing

* feedback

* tests

* rename

* Only Perform Initial Sync With a Single Peer (#2471)

* fix spacing

* use send instead of broadcast in initial sync

* Fix Estimation of Deposit Inclusion Slot in ValidatorActivationStatus (#2472)

* fix spacing

* fix time estimates

* correct slot estimation

* naming

* Update beacon-chain/rpc/validator_server.go

Co-Authored-By: rauljordan <raul@prysmaticlabs.com>

* SSZ web api for decoding input data (#2473)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* standardize slot numbers (#2475)

* Add CORS for ssz api (#2476)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* needed CORS

* Allow Client to Retrieve Multiple Validator Statuses (#2474)

* multiple validator statuses

* gazelle

* context

* fixing bugs

* remove old way of checking

* fix logging

* make activation queue more accurate

* fix rpc test

* add test

* fix remaining tests

* lint

* comment

* review comments

* Update Prysm README (#2477)

* README updated

* readme updates

* no err throw (#2479)

* Fix Status Nil Pointer Error (#2480)

* no err throw

* nil errors

* 3.175 (#2482)

* Better Error Message if Failing to Exit Initial Sync (#2483)

* no err throw

* nil errors

* better error on init sync

* Only Log Active Balances (#2485)

* only log active balance

* dont need ()

* change logging (#2487)

* fix chainstart waiting on rpc server (#2488)

* shift ticker to after activation (#2489)

* Add drain script (#2418)

* Add drain script

* Fix script to drain contracts from newest to oldest

* Add README

* remove comments

* Only after block 400k, look up by deposit event

* issue warn log on disconnecting peer instead of error (#2491)

* Display Only Active Validator Data (#2490)

* Fix Validator Status Field in RPC Server (#2492)

* fix status of key

* status test fix

* fmt

* Estimate the Time Till Follow Distance Is Completed (#2486)

* use estimation instead

* fix test

* fixing another test

* fix tests and preston's comments

* remove unused var

* fix condition

* Revert "fix condition"

This reverts commit dee0e3112c.

* dont return error

* add production config for testnet release (#2493)

* Lookup Validator Index in State in Status Check (#2494)

* state lookup

* refactor duplicate code

* refactor with mapping

* fix broken tests

* finish refactor

* merged master

* updated EpochCommitteeCount and fixed tests

* implemented ShardDelta

* test for ShardDelta

* implemented EpochStartShard

* added epoch out of bound test

* test for accurate start shard

* lint
2019-06-09 18:56:25 +08:00
Nishant Das
185dade934 Update Deposit Contract (#2648)
* lint

* add new contract

* change version

* remove log

* generating abi and binary files

* fix tests

* update to current version

* new changes

* add new hash function

* save hashed nodes

* add more things

* new method

* add update to trie

* new stuff

* gaz

* more stuff

* finally fixed build

* remove deposit data

* Revert "remove deposit data"

This reverts commit 9085409e91.

* more changes

* lint and gaz

* lint
2019-06-09 18:53:28 +08:00
terence tsao
bf441a8ec8 Cache Active Validator Indices, Count, and Balances (#2737) 2019-06-09 18:45:35 +08:00
nisdas
ccbe0720fd Merge remote-tracking branch 'remotes/origin/master' into fixMess 2019-06-09 18:35:03 +08:00
terence tsao
0b476749fb Update RPC end point for Proposer (#2767)
* add RequestBlock

* run mockgen

* implemented RequestBlock

* updated proto definitions

* updated tests

* updated validator attest tests

* done

* comment

* todo issue

* removed unused proto
2019-06-09 17:47:48 +08:00
terence tsao
f2c31004de Clean up Old RPC Endpoints (#2763) 2019-06-09 17:47:31 +08:00
terence tsao
da0e6bb28b Tidying up Godoc for Core Package (#2762) 2019-06-09 17:47:13 +08:00
terence tsao
32975baa3c Benchmark Process Block with Attestations (#2758) 2019-06-09 17:46:56 +08:00
Ivan Martinez
4a0b3710ef Add graffiti and update generate seed (#2759) 2019-06-09 17:46:54 +08:00
Ivan Martinez
82dc94e023 Cleanup and Docs update (#2756) 2019-06-09 17:46:35 +08:00
terence tsao
348579780e Optimize Process Eth1 Data Vote (#2754) 2019-06-09 17:46:21 +08:00
terence tsao
8373d75033 removed deprecated configs (#2755) 2019-06-09 17:46:00 +08:00
terence tsao
76fa1e5adb Optimize Base Reward Calculation (#2753)
* benchmark process epoch

* revert prof.out

* Add some optimizations

* beware where we use ActiveValidatorIndices...

* revert extra file

* gaz

* quick commit to get feedback

* revert extra file

* started fixing tests

* fixed broken TestProcessCrosslink_NoUpdate

* gaz

* cache randao seed

* fixed all the tests

* fmt and lint

* spacing

* Added todo

* lint

* revert binary file

* started regression test

* basic tests done

* using a fifo for active indices cache

* using a fifo for active count cache

* using a fifo for total balance cache

* using a fifo for active balance cache

* using a fifo for start shard cache

* using a fifo for seed cache

* gaz

* clean up

* fixing tests

* fixed all the core tests

* fixed all the tests!!!

* lint

* comment

* rm'ed commented code

* cache size to 1000 should be good enough

* optimized base reward

* revert binary file

* Added comments to calculate adjusted quotient outside
2019-06-09 17:45:42 +08:00
Ivan Martinez
b899305a69 Fix most of skipped tests (#2735) 2019-06-09 17:32:48 +08:00
Ivan Martinez
b88d7a841e Remove Deprecated Validator Protobuf (#2727)
* Remove deprecated validator protos

* Fix to comments
2019-06-09 17:32:23 +08:00
Nishant Das
0c33dc472f change hash function (#2732) 2019-06-09 17:31:41 +08:00
Nishant Das
ae11795617 Refactor Deposit Contract Test Setup (#2731)
* add new package

* fix all tests

* lint
2019-06-09 17:31:15 +08:00
terence tsao
94cb85d847 Optimize Shuffled Indices Cache (#2728) 2019-06-09 17:30:40 +08:00
terence tsao
87ff6a0773 Fixed Skipped Attestation Tests (#2730) 2019-06-09 17:30:27 +08:00
terence tsao
d9a109ad37 Benchmark Compute Committee (#2698) 2019-06-09 17:30:06 +08:00
terence tsao
013aee6958 Remove Committee Cache (#2729) 2019-06-09 17:29:55 +08:00
Nishant Das
0f43b261bd Fix Deposit Trie (#2686)
* remove deposit data

* remove deposit input

* fix references

* remove deposit helpers

* fix all refs

* gaz

* regen proto

* fix all tests

* remove deposit data deprecated field

* fix remaining references

* fix all tests

* fix lint

* new tests with contract

* gaz

* more tests

* fixed bugs

* new test

* finally fixed it

* gaz

* fix test
2019-06-09 17:29:50 +08:00
Nishant Das
d29a2be222 Remove Latest Block (#2721)
* lint

* remove latest block

* lint

* add proto

* fix build
2019-06-09 17:28:53 +08:00
shayzluf
ee3a38821a YAML shuffle tests for v0.6 (#2667)
* new shuffle tests

* added comment for exported function

* fix format and print

* added config files handling

* gazelle fix

* shuffle test debugging

* added shuffle list and benchmark

* hash function addition from nishant code

* gazelle fix

* remove unused function

* few minor changes

* add test to test protos optimization

* test a bigger list

* remove commented code

* small changes

* fix spec test and test indices to pass

* remove empty line

* abstraction of repeated code and comment arrangement

* terence change requests

* fix new test

* add small comment for better readability

* change from unshuffle to shuffle

* comment

* better comment

* fix all tests
2019-06-09 17:28:20 +08:00
terence tsao
1fb02639b0 Cache Shuffled Validator Indices (#2682) 2019-06-09 17:27:16 +08:00
terence tsao
13c9432d37 Remove Deprecated Attestation Proto Fields (#2723) 2019-06-09 17:25:39 +08:00
terence tsao
1cbe622859 Remove Deprecated Beacon Block Proto Fields (#2717) 2019-06-09 17:12:15 +08:00
terence tsao
b0db321a57 Remove Deprecated Protobuf Crosslink/Slashing/Block Fields (#2714) 2019-06-09 17:11:27 +08:00
terence tsao
98f8331649 Remove Deprecated Protobuf State Fields (#2713) 2019-06-09 17:05:42 +08:00
Nishant Das
131823511e Remove Obsolete Deposit Proto Objects (#2673)
* remove deposit data

* remove deposit input

* fix references

* remove deposit helpers

* fix all refs

* gaz

* regen proto

* fix all tests

* remove deposit data deprecated field

* fix remaining references

* fix all tests

* fix lint

* regen proto

* fix test
2019-06-09 17:03:59 +08:00
terence tsao
5fd97e9ed2 Fixed Skipped Tests for RPC Server (#2712) 2019-06-09 17:00:28 +08:00
Raul Jordan
52ef370817 Process Block Attestations v0.6 (#2650)
* attestation.go is ready, tests not

* ready for review

* fixing linter issues

* modified Crosslink and AttestationData proto fields to spec 2.0, marked deprecated fields

* gazelle

* fixed tests

* fixed error

* add att processing:

* process atts

* error msg

* finish process attestations logic

* spacing

* ssz move

* inclusion delay failure passing

* more attestation tests

* more att tests passing

* more tests

* ffg data mismatching test

* ffg tests complete

* gofmt

* fix testing to match attestation updates

* ssz

* lint
2019-06-09 16:59:22 +08:00
terence tsao
1877c2e441 cleaned up skipped tests for core processing (#2697) 2019-06-09 16:56:46 +08:00
terence tsao
66a922dab2 Update CommitteeAssignment (#2693) 2019-06-09 16:56:42 +08:00
terence tsao
c25ed02bb3 Implement Process Epoch for 0.6 (#2675)
* can't find process j f functions

* implemented process_epoch

* tests done

* lint

* nishant's feedback

* stupid goland replace

* goimports
2019-06-09 16:55:36 +08:00
Raul Jordan
ebff47bde1 Update Slot Processing and State Transition v0.6 (#2664)
* edit state transition to add slot processing logic, reorder logic

* fix build

* lint

* tests passing

* spacing

* tests pass

* imports

* passing tests
2019-06-09 16:52:48 +08:00
terence tsao
5c1410b224 Implement process_rewards_and_penalties for 0.6 (#2665)
* update process crosslink and update existing tests

* added a test case to cover no crosslink changes

* more test

* preston's feedback

* spellings

* ValidatorStatus Estimating Activation RPC Server (#2469)

* fix spacing

* working on position in queue

* fmt

* spacing

* feedback

* tests

* rename

* Only Perform Initial Sync With a Single Peer (#2471)

* fix spacing

* use send instead of broadcast in initial sync

* Fix Estimation of Deposit Inclusion Slot in ValidatorActivationStatus (#2472)

* fix spacing

* fix time estimates

* correct slot estimation

* naming

* Update beacon-chain/rpc/validator_server.go

Co-Authored-By: rauljordan <raul@prysmaticlabs.com>

* SSZ web api for decoding input data (#2473)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* standardize slot numbers (#2475)

* Add CORS for ssz api (#2476)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* needed CORS

* Allow Client to Retrieve Multiple Validator Statuses (#2474)

* multiple validator statuses

* gazelle

* context

* fixing bugs

* remove old way of checking

* fix logging

* make activation queue more accurate

* fix rpc test

* add test

* fix remaining tests

* lint

* comment

* review comments

* Update Prysm README (#2477)

* README updated

* readme updates

* no err throw (#2479)

* Fix Status Nil Pointer Error (#2480)

* no err throw

* nil errors

* 3.175 (#2482)

* Better Error Message if Failing to Exit Initial Sync (#2483)

* no err throw

* nil errors

* better error on init sync

* Only Log Active Balances (#2485)

* only log active balance

* dont need ()

* change logging (#2487)

* fix chainstart waiting on rpc server (#2488)

* shift ticker to after activation (#2489)

* Add drain script (#2418)

* Add drain script

* Fix script to drain contracts from newest to oldest

* Add README

* remove comments

* Only after block 400k, look up by deposit event

* issue warn log on disconnecting peer instead of error (#2491)

* Display Only Active Validator Data (#2490)

* Fix Validator Status Field in RPC Server (#2492)

* fix status of key

* status test fix

* fmt

* Estimate the Time Till Follow Distance Is Completed (#2486)

* use estimation instead

* fix test

* fixing another test

* fix tests and preston's comments

* remove unused var

* fix condition

* Revert "fix condition"

This reverts commit dee0e3112c.

* dont return error

* add production config for testnet release (#2493)

* Lookup Validator Index in State in Status Check (#2494)

* state lookup

* refactor duplicate code

* refactor with mapping

* fix broken tests

* finish refactor

* merged master

* Starting, I need get_epoch_start_shard

* updated EpochCommitteeCount and fixed tests

* implemented ShardDelta

* test for ShardDelta

* implemented EpochStartShard

* added epoch out of bound test

* test for accurate start shard

* lint

* need to use changes from latest crosslinks

* added BaseReward and totalActiveBalance

* added test for base reward

* implemented process_attestation_delta

* comments

* comments

* merged master

* all tests passing

* start testing

* done

* merged master

* fixed tests

* tests, more to come

* tests done

* lint

* spaces over tabs

* addressed shay's feedback

* starting but need to merge a few things...

* tests

* fmt
2019-06-09 16:49:58 +08:00
terence tsao
f747de5dc8 Implement Attestation Delta for v0.6 (#2646)
* update process crosslink and update existing tests

* added a test case to cover no crosslink changes

* more test

* preston's feedback

* spellings

* ValidatorStatus Estimating Activation RPC Server (#2469)

* fix spacing

* working on position in queue

* fmt

* spacing

* feedback

* tests

* rename

* Only Perform Initial Sync With a Single Peer (#2471)

* fix spacing

* use send instead of broadcast in initial sync

* Fix Estimation of Deposit Inclusion Slot in ValidatorActivationStatus (#2472)

* fix spacing

* fix time estimates

* correct slot estimation

* naming

* Update beacon-chain/rpc/validator_server.go

Co-Authored-By: rauljordan <raul@prysmaticlabs.com>

* SSZ web api for decoding input data (#2473)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* standardize slot numbers (#2475)

* Add CORS for ssz api (#2476)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* needed CORS

* Allow Client to Retrieve Multiple Validator Statuses (#2474)

* multiple validator statuses

* gazelle

* context

* fixing bugs

* remove old way of checking

* fix logging

* make activation queue more accurate

* fix rpc test

* add test

* fix remaining tests

* lint

* comment

* review comments

* Update Prysm README (#2477)

* README updated

* readme updates

* no err throw (#2479)

* Fix Status Nil Pointer Error (#2480)

* no err throw

* nil errors

* 3.175 (#2482)

* Better Error Message if Failing to Exit Initial Sync (#2483)

* no err throw

* nil errors

* better error on init sync

* Only Log Active Balances (#2485)

* only log active balance

* dont need ()

* change logging (#2487)

* fix chainstart waiting on rpc server (#2488)

* shift ticker to after activation (#2489)

* Add drain script (#2418)

* Add drain script

* Fix script to drain contracts from newest to oldest

* Add README

* remove comments

* Only after block 400k, look up by deposit event

* issue warn log on disconnecting peer instead of error (#2491)

* Display Only Active Validator Data (#2490)

* Fix Validator Status Field in RPC Server (#2492)

* fix status of key

* status test fix

* fmt

* Estimate the Time Till Follow Distance Is Completed (#2486)

* use estimation instead

* fix test

* fixing another test

* fix tests and preston's comments

* remove unused var

* fix condition

* Revert "fix condition"

This reverts commit dee0e3112c.

* dont return error

* add production config for testnet release (#2493)

* Lookup Validator Index in State in Status Check (#2494)

* state lookup

* refactor duplicate code

* refactor with mapping

* fix broken tests

* finish refactor

* merged master

* Starting, I need get_epoch_start_shard

* updated EpochCommitteeCount and fixed tests

* implemented ShardDelta

* test for ShardDelta

* implemented EpochStartShard

* added epoch out of bound test

* test for accurate start shard

* lint

* need to use changes from latest crosslinks

* added BaseReward and totalActiveBalance

* added test for base reward

* implemented process_attestation_delta

* comments

* comments

* merged master

* all tests passing

* start testing

* done

* merged master

* fixed tests

* tests, more to come

* tests done

* lint

* spaces over tabs

* addressed shay's feedback

* merged master
2019-06-09 16:49:53 +08:00
terence tsao
518e129d46 Putting Crosslink Delta Back (#2654)
* update process crosslink and update existing tests

* added a test case to cover no crosslink changes

* more test

* preston's feedback

* spellings

* ValidatorStatus Estimating Activation RPC Server (#2469)

* fix spacing

* working on position in queue

* fmt

* spacing

* feedback

* tests

* rename

* Only Perform Initial Sync With a Single Peer (#2471)

* fix spacing

* use send instead of broadcast in initial sync

* Fix Estimation of Deposit Inclusion Slot in ValidatorActivationStatus (#2472)

* fix spacing

* fix time estimates

* correct slot estimation

* naming

* Update beacon-chain/rpc/validator_server.go

Co-Authored-By: rauljordan <raul@prysmaticlabs.com>

* SSZ web api for decoding input data (#2473)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* standardize slot numbers (#2475)

* Add CORS for ssz api (#2476)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* needed CORS

* Allow Client to Retrieve Multiple Validator Statuses (#2474)

* multiple validator statuses

* gazelle

* context

* fixing bugs

* remove old way of checking

* fix logging

* make activation queue more accurate

* fix rpc test

* add test

* fix remaining tests

* lint

* comment

* review comments

* Update Prysm README (#2477)

* README updated

* readme updates

* no err throw (#2479)

* Fix Status Nil Pointer Error (#2480)

* no err throw

* nil errors

* 3.175 (#2482)

* Better Error Message if Failing to Exit Initial Sync (#2483)

* no err throw

* nil errors

* better error on init sync

* Only Log Active Balances (#2485)

* only log active balance

* dont need ()

* change logging (#2487)

* fix chainstart waiting on rpc server (#2488)

* shift ticker to after activation (#2489)

* Add drain script (#2418)

* Add drain script

* Fix script to drain contracts from newest to oldest

* Add README

* remove comments

* Only after block 400k, look up by deposit event

* issue warn log on disconnecting peer instead of error (#2491)

* Display Only Active Validator Data (#2490)

* Fix Validator Status Field in RPC Server (#2492)

* fix status of key

* status test fix

* fmt

* Estimate the Time Till Follow Distance Is Completed (#2486)

* use estimation instead

* fix test

* fixing another test

* fix tests and preston's comments

* remove unused var

* fix condition

* Revert "fix condition"

This reverts commit dee0e3112c.

* dont return error

* add production config for testnet release (#2493)

* Lookup Validator Index in State in Status Check (#2494)

* state lookup

* refactor duplicate code

* refactor with mapping

* fix broken tests

* finish refactor

* merged master

* Starting, I need get_epoch_start_shard

* updated EpochCommitteeCount and fixed tests

* implemented ShardDelta

* test for ShardDelta

* implemented EpochStartShard

* added epoch out of bound test

* test for accurate start shard

* lint

* need to use changes from latest crosslinks

* added BaseReward and totalActiveBalance

* added test for base reward

* merged master

* all tests passing

* start testing

* done

* fixed tests

* addressed shay's feedback
2019-06-09 16:48:04 +08:00
Raul Jordan
c504931c3b Follow up on process block header v0.6 (#2666) 2019-06-09 16:47:26 +08:00
terence tsao
53dcc5e8d7 Add Process Registry for Epoch Processing (#2668)
* update update-process-registry

* added back the old tests

* fmt

* gaz
2019-06-09 16:46:56 +08:00
shayzluf
9ecd5d4c4e add ProcessBlockHeader v0.6 (#2534)
* add ProcessBlockHeader

* function has all its dependencies in place

* arrange the basic ok test

* gazelle and skip test update

* skip wrong sig test

* fmt imports and change requests

* goimports fmt

* map for struct fields to be location independent

* reorder protobuf fields

* added tests

* gazelle fix

* few change requests fixes

* revert changes in types.proto

* revert changes in types

* fix tests

* fix lint

* fmt imports

* fix gazelle

* fix var naming

* pb update

* var naming

* terence change request fixes

* fix test
2019-06-09 16:46:11 +08:00
terence tsao
d21dbca1af Implement Crosslink Delta Rewards for 0.6 (#2517)
* update process crosslink and update existing tests

* added a test case to cover no crosslink changes

* more test

* preston's feedback

* spellings

* ValidatorStatus Estimating Activation RPC Server (#2469)

* fix spacing

* working on position in queue

* fmt

* spacing

* feedback

* tests

* rename

* Only Perform Initial Sync With a Single Peer (#2471)

* fix spacing

* use send instead of broadcast in initial sync

* Fix Estimation of Deposit Inclusion Slot in ValidatorActivationStatus (#2472)

* fix spacing

* fix time estimates

* correct slot estimation

* naming

* Update beacon-chain/rpc/validator_server.go

Co-Authored-By: rauljordan <raul@prysmaticlabs.com>

* SSZ web api for decoding input data (#2473)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* standardize slot numbers (#2475)

* Add CORS for ssz api (#2476)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* needed CORS

* Allow Client to Retrieve Multiple Validator Statuses (#2474)

* multiple validator statuses

* gazelle

* context

* fixing bugs

* remove old way of checking

* fix logging

* make activation queue more accurate

* fix rpc test

* add test

* fix remaining tests

* lint

* comment

* review comments

* Update Prysm README (#2477)

* README updated

* readme updates

* no err throw (#2479)

* Fix Status Nil Pointer Error (#2480)

* no err throw

* nil errors

* 3.175 (#2482)

* Better Error Message if Failing to Exit Initial Sync (#2483)

* no err throw

* nil errors

* better error on init sync

* Only Log Active Balances (#2485)

* only log active balance

* dont need ()

* change logging (#2487)

* fix chainstart waiting on rpc server (#2488)

* shift ticker to after activation (#2489)

* Add drain script (#2418)

* Add drain script

* Fix script to drain contracts from newest to oldest

* Add README

* remove comments

* Only after block 400k, look up by deposit event

* issue warn log on disconnecting peer instead of error (#2491)

* Display Only Active Validator Data (#2490)

* Fix Validator Status Field in RPC Server (#2492)

* fix status of key

* status test fix

* fmt

* Estimate the Time Till Follow Distance Is Completed (#2486)

* use estimation instead

* fix test

* fixing another test

* fix tests and preston's comments

* remove unused var

* fix condition

* Revert "fix condition"

This reverts commit dee0e3112c.

* dont return error

* add production config for testnet release (#2493)

* Lookup Validator Index in State in Status Check (#2494)

* state lookup

* refactor duplicate code

* refactor with mapping

* fix broken tests

* finish refactor

* merged master

* Starting, I need get_epoch_start_shard

* updated EpochCommitteeCount and fixed tests

* implemented ShardDelta

* test for ShardDelta

* implemented EpochStartShard

* added epoch out of bound test

* test for accurate start shard

* lint

* need to use changes from latest crosslinks

* added BaseReward and totalActiveBalance

* added test for base reward

* merged master

* all tests passing

* start testing

* done
2019-06-09 16:45:31 +08:00
Raul Jordan
67878ec06a Process Block Deposits v0.6 (#2647)
* imports fixes

* deposits tests pass

* wrapped up gazelle

* spacing
2019-06-09 16:45:24 +08:00
Andrei Ivasko
ddf7a96aad Get attestation data slot v0.6 (#2593)
* attestation.go is ready, tests not

* ready for review

* fixing linter issues

* modified Crosslink and AttestationData proto fields to spec 2.0, marked deprecated fields

* gazelle

* fixed tests

* fixed error

* error msg
2019-06-09 16:44:46 +08:00
Raul Jordan
7beeec460c Process Block Eth1 Data v0.6 (#2645) 2019-06-09 16:44:16 +08:00
terence tsao
744affa75b Implement Process Crosslink From 0.6 (#2460) 2019-06-09 16:43:15 +08:00
Raul Jordan
86ff17337e Process Beacon Chain Transfers v0.6 (#2642)
* add transfers

* beacon transfer operations complete

* full cov

* transfer testing

* finished tests

* Update beacon-chain/core/blocks/block_operations_test.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Update beacon-chain/core/blocks/block_operations.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>
2019-06-09 16:42:19 +08:00
terence tsao
b09ecec75e Finalize helper functions for 0.6 (#2632) 2019-06-09 16:42:04 +08:00
terence tsao
b26fe09351 Clean up Helper Functions Part 1 (#2612) 2019-06-09 16:40:43 +08:00
Andrei Ivasko
e181666982 Update processEth1Data for v0.6 (#2516) 2019-06-09 16:39:36 +08:00
Raul Jordan
3e63695e43 Update Block Processing Voluntary Exits (#2609) 2019-06-09 16:39:36 +08:00
shayzluf
233adc9bcd update to new spec (#2614) 2019-06-09 16:39:32 +08:00
terence tsao
07b5e52b52 Update Reward Helper v0.6 (#2470)
* added BaseReward

* added rewards helper

* added test for BaseReward

* extra space

* move exported function above
2019-06-09 16:38:46 +08:00
terence tsao
dc3e73ee5a Implement Final Updates 0.6 (#2562)
* ValidatorStatus Estimating Activation RPC Server (#2469)

* fix spacing

* working on position in queue

* fmt

* spacing

* feedback

* tests

* rename

* Only Perform Initial Sync With a Single Peer (#2471)

* fix spacing

* use send instead of broadcast in initial sync

* Fix Estimation of Deposit Inclusion Slot in ValidatorActivationStatus (#2472)

* fix spacing

* fix time estimates

* correct slot estimation

* naming

* Update beacon-chain/rpc/validator_server.go

Co-Authored-By: rauljordan <raul@prysmaticlabs.com>

* SSZ web api for decoding input data (#2473)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* standardize slot numbers (#2475)

* Add CORS for ssz api (#2476)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* needed CORS

* Allow Client to Retrieve Multiple Validator Statuses (#2474)

* multiple validator statuses

* gazelle

* context

* fixing bugs

* remove old way of checking

* fix logging

* make activation queue more accurate

* fix rpc test

* add test

* fix remaining tests

* lint

* comment

* review comments

* Update Prysm README (#2477)

* README updated

* readme updates

* no err throw (#2479)

* Fix Status Nil Pointer Error (#2480)

* no err throw

* nil errors

* 3.175 (#2482)

* Better Error Message if Failing to Exit Initial Sync (#2483)

* no err throw

* nil errors

* better error on init sync

* Only Log Active Balances (#2485)

* only log active balance

* dont need ()

* change logging (#2487)

* fix chainstart waiting on rpc server (#2488)

* shift ticker to after activation (#2489)

* Add drain script (#2418)

* Add drain script

* Fix script to drain contracts from newest to oldest

* Add README

* remove comments

* Only after block 400k, look up by deposit event

* issue warn log on disconnecting peer instead of error (#2491)

* Display Only Active Validator Data (#2490)

* Fix Validator Status Field in RPC Server (#2492)

* fix status of key

* status test fix

* fmt

* Estimate the Time Till Follow Distance Is Completed (#2486)

* use estimation instead

* fix test

* fixing another test

* fix tests and preston's comments

* remove unused var

* fix condition

* Revert "fix condition"

This reverts commit dee0e3112c.

* dont return error

* add production config for testnet release (#2493)

* Lookup Validator Index in State in Status Check (#2494)

* state lookup

* refactor duplicate code

* refactor with mapping

* fix broken tests

* finish refactor

* merged master

* updated EpochCommitteeCount and fixed tests

* implemented ShardDelta

* test for ShardDelta

* implemented EpochStartShard

* added epoch out of bound test

* test for accurate start shard

* lint

* implemented process_final_updates

* move to the top of the file

* add comment back

* gaz

* Test for process final updates

* fixed tests

* fixed all the tests
2019-06-09 16:38:46 +08:00
Raul Jordan
c774917749 Update Proposer/Attester Slashings and Slashing Helpers (#2603) 2019-06-09 16:38:46 +08:00
terence tsao
57bc1070f6 Implement Process Slashings for 0.6 (#2523) 2019-06-09 16:38:46 +08:00
terence tsao
67902e1977 Update Committee Helper Part 2 (#2592) 2019-06-09 16:38:39 +08:00
Ivan Martinez
cb63eca4f2 Update Committee Helpers to v0.6.0 (#2398) 2019-06-09 16:35:22 +08:00
shayzluf
6a26fb2465 update registry updates func (#2521)
* update registry updates func

* added tests and moved to epoch processing

* fixed naming issues
2019-06-09 16:32:16 +08:00
Nishant Das
1c0aa9438c changes name to signature (#2535) 2019-06-09 16:32:04 +08:00
shayzluf
129762a65f Add convert to indexed (#2519)
* sort participants slice

* add bitfield functions and tests

* added BitfieldBit test

* add AttestationParticipantsNew

* revert AttestationParticipants to its previous change

* add tests

* remove verifybitfieldnew

* fix tests and remove multiple tests

* remove duplicate test

* start work

* convert attestation to indexed attestations

* fix test for convert index

* remove calling getter

* add more tests

* remove underscore
2019-06-09 16:28:24 +08:00
terence tsao
d77c3e2dd6 Implement Justification and finalization Processing (#2448) 2019-06-09 16:28:24 +08:00
shayzluf
6fcf25c811 Update attesting indices v0.6 (#2449)
* sort participants slice

* add bitfield functions and tests

* added BitfieldBit test

* add AttestationParticipantsNew

* revert AttestationParticipants to its previous change

* add tests

* remove verifybitfieldnew

* fix tests and remove multiple tests

* remove duplicate test

* change magic number into ceildiv8
2019-06-09 16:28:24 +08:00
terence tsao
508c36a430 Update RPC end point for Proposer (#2767)
* add RequestBlock

* run mockgen

* implemented RequestBlock

* updated proto definitions

* updated tests

* updated validator attest tests

* done

* comment

* todo issue

* removed unused proto
2019-06-09 09:47:03 +08:00
terence tsao
7d1b304e8c Clean up Old RPC Endpoints (#2763) 2019-06-07 10:25:03 -04:00
terence tsao
5f1ec70e2b Tidying up Godoc for Core Package (#2762) 2019-06-06 17:16:15 -04:00
terence tsao
71eea358a1 Benchmark Process Block with Attestations (#2758) 2019-06-06 07:32:32 -04:00
Ivan Martinez
ebc0d67d1a Add graffiti and update generate seed (#2759) 2019-06-06 07:22:02 -04:00
Ivan Martinez
ed2fadc738 Cleanup and Docs update (#2756) 2019-06-05 04:40:24 -07:00
terence tsao
60aba6cebf Optimize Process Eth1 Data Vote (#2754) 2019-06-05 04:28:50 -07:00
terence tsao
f4ebbff4af removed deprecated configs (#2755) 2019-06-05 04:00:17 -07:00
terence tsao
cd1033c7f9 Optimize Base Reward Calculation (#2753)
* benchmark process epoch

* revert prof.out

* Add some optimizations

* beware where we use ActiveValidatorIndices...

* revert extra file

* gaz

* quick commit to get feedback

* revert extra file

* started fixing tests

* fixed broken TestProcessCrosslink_NoUpdate

* gaz

* cache randao seed

* fixed all the tests

* fmt and lint

* spacing

* Added todo

* lint

* revert binary file

* started regression test

* basic tests done

* using a fifo for active indices cache

* using a fifo for active count cache

* using a fifo for total balance cache

* using a fifo for active balance cache

* using a fifo for start shard cache

* using a fifo for seed cache

* gaz

* clean up

* fixing tests

* fixed all the core tests

* fixed all the tests!!!

* lint

* comment

* rm'ed commented code

* cache size to 1000 should be good enough

* optimized base reward

* revert binary file

* Added comments to calculate adjusted quotient outside
2019-06-04 20:05:07 +08:00
terence tsao
5a04d0d6bb Cache Active Validator Indices, Count, and Balances (#2737) 2019-06-03 21:18:07 -07:00
Ivan Martinez
901966c031 Fix most of skipped tests (#2735) 2019-06-01 22:23:53 -07:00
Ivan Martinez
2fb1f3d817 Remove Deprecated Validator Protobuf (#2727)
* Remove deprecated validator protos

* Fix to comments
2019-06-01 01:14:10 +08:00
Nishant Das
085962a848 change hash function (#2732) 2019-06-01 00:45:26 +08:00
Nishant Das
988b1fbfc5 Refactor Deposit Contract Test Setup (#2731)
* add new package

* fix all tests

* lint
2019-06-01 00:37:07 +08:00
terence tsao
5b4651748a Optimize Shuffled Indices Cache (#2728) 2019-05-30 23:46:52 -07:00
terence tsao
d36019d5ea Fixed Skipped Attestation Tests (#2730) 2019-05-30 23:21:43 -07:00
terence tsao
9d1989d473 Benchmark Compute Committee (#2698) 2019-05-30 23:10:38 -07:00
terence tsao
a685b461e3 Remove Committee Cache (#2729) 2019-05-30 22:56:57 -07:00
Nishant Das
7d5e7172d6 Fix Deposit Trie (#2686)
* remove deposit data

* remove deposit input

* fix references

* remove deposit helpers

* fix all refs

* gaz

* rgene proto

* fix all tests

* remove deposit data deprecated field

* fix remaining references

* fix all tests

* fix lint

* new tests with contract

* gaz

* more tests

* fixed bugs

* new test

* finally fixed it

* gaz

* fix test
2019-05-31 11:58:13 +08:00
Nishant Das
e099432551 Remove Latest Block (#2721)
* lint

* remove latest block

* lint

* add proto

* fix build
2019-05-30 13:47:32 +08:00
shayzluf
a562ece1c1 YAML shuffle tests for v0.6 (#2667)
* new shuffle tests

* added comment for exported function

* fix format and print

* added config files handling

* gazelle fix

* shuffle test debugging

* added shuffle list and benchmark

* hash function addition from nishant code

* gazelle fix

* remove unused function

* few minor changes

* add test to test protos optimization

* test a bigger list

* remove commented code

* small changes

* fix spec test and test indices to pass

* remove empty line

* abstraction of repeated code and comment arrangement

* terence change requests

* fix new test

* add small comment for better readability

* change from unshuflle to shuffle

* comment

* better comment

* fix all tests
2019-05-30 13:22:16 +08:00
terence tsao
18332ad929 Cache Shuffled Validator Indices (#2682) 2019-05-29 18:03:28 -07:00
terence tsao
32a0b8a5da Remove Deprecated Attestation Proto Fields (#2723) 2019-05-29 15:47:31 -07:00
Nishant Das
d3d6223b68 Sync With Master (#2718) 2019-05-29 07:39:48 -07:00
terence tsao
e7db3b2aa8 Remove Deprecated Beacon Block Proto Fields (#2717) 2019-05-28 07:09:27 -07:00
terence tsao
6bef43cda6 Remove Deprecated Protobuf Crosslink/Slashing/Block Fields (#2714) 2019-05-28 06:43:29 -07:00
terence tsao
6c0b9264e4 Remove Deprecated Protobuf State Fields (#2713) 2019-05-28 06:18:50 -07:00
Nishant Das
371c38916c Remove Obsolete Deposit Proto Objects (#2673)
* remove deposit data

* remove deposit input

* fix references

* remove deposit helpers

* fix all refs

* gaz

* rgene proto

* fix all tests

* remove deposit data deprecated field

* fix remaining references

* fix all tests

* fix lint

* regen proto

* fix test
2019-05-28 11:18:45 +08:00
terence tsao
65fe416b19 Fixed Skipped Tests for RPC Server (#2712) 2019-05-27 19:53:40 -07:00
Raul Jordan
6c1e74b3de Process Block Attestations v0.6 (#2650)
* attestation.go is ready, tests not

* ready for review

* fixing linter issues

* modified Crosslink and AttestationData proto fields to spec 2.0,marked deprecated fields

* gazelle

* fixed tests

* fixed error

* add att processing:

* process atts

* error msg

* finish process attestations logic

* spacing

* ssz move

* inclusion delay failure passing

* more attestation tests

* more att tests passing

* more tests

* ffg data mismatching test

* ffg tests complete

* gofmt

* fix testing to match attestation updates

* ssz

* lint
2019-05-27 00:11:36 -04:00
terence tsao
bc25747275 cleaned up skipped tests for core processing (#2697) 2019-05-25 22:08:38 -07:00
terence tsao
b9eab2a2cc Update CommitteeAssignment (#2693) 2019-05-25 21:44:04 -07:00
terence tsao
c34f2911ff Reorder Epoch Processing Functions (#2676) 2019-05-24 07:06:49 -07:00
terence tsao
6d86d0b3b9 Implement Process Epoch for 0.6 (#2675)
* can't find process j f functons

* implemented process_epoch

* tests done

* lint

* nishant's feedback

* stupid goland replace

* goimports
2019-05-24 15:14:03 +08:00
Nishant Das
855bf7cec9 Update Deposit Contract (#2648)
* lint

* add new contract

* change version

* remove log

* generating abi and binary files

* fix tests

* update to current version

* new changes

* add new hash function

* save hashed nodes

* add more things

* new method

* add update to trie

* new stuff

* gaz

* more stuff

* finally fixed build

* remove deposit data

* Revert "remove deposit data"

This reverts commit 9085409e91.

* more changes

* lint and gaz

* lint
2019-05-24 13:30:40 +08:00
Raul Jordan
c0c2671c5f Update Slot Processing and State Transition v0.6 (#2664)
* edit state transition to add slot processing logic, reorder logic

* fix build

* lint

* tests passing

* spacing

* tests pass

* imports

* passing tests
2019-05-21 19:56:40 -04:00
terence tsao
126e8d4487 Implement process_rewards_and_penalties for 0.6 (#2665)
* update process crosslink and update existing tests

* added a test case to cover no crosslink changes

* more test

* preston's feedback

* spellings

* ValidatorStatus Estimating Activation RPC Server (#2469)

* fix spacing

* working on position in queue

* fmt

* spacing

* feedback

* tests

* rename

* Only Perform Initial Sync With a Single Peer (#2471)

* fix spacing

* use send instead of broadcast in initial sync

* Fix Estimation of Deposit Inclusion Slot in ValidatorActivationStatus (#2472)

* fix spacing

* fix time estimates

* correct slot estimation

* naming

* Update beacon-chain/rpc/validator_server.go

Co-Authored-By: rauljordan <raul@prysmaticlabs.com>

* SSZ web api for decoding input data (#2473)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* standardize slot numbers (#2475)

* Add CORS for ssz api (#2476)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* needed CORS

* Allow Client to Retrieve Multiple Validator Statuses (#2474)

* multiple validator statuses

* gazelle

* context

* fixing bugs

* remove old way of checking

* fix logging

* make activation queue more accurate

* fix rpc test

* add test

* fix remaining tests

* lint

* comment

* review comments

* Update Prysm README (#2477)

* README updated

* readme updates

* no err throw (#2479)

* Fix Status Nil Pointer Error (#2480)

* no err throw

* nil errors

* 3.175 (#2482)

* Better Error Message if Failing to Exit Initial Sync (#2483)

* no err throw

* nil errors

* better error on init sync

* Only Log Active Balances (#2485)

* only log active balance

* dont need ()

* change logging (#2487)

* fix chainstart waiting on rpc server (#2488)

* shift ticker to after activation (#2489)

* Add drain script (#2418)

* Add drain script

* Fix script to drain contracts from newest to oldest

* Add README

* remove comments

* Only after block 400k, look up by deposit event

* issue warn log on disconnecting peer instead of error (#2491)

* Display Only Active Validator Data (#2490)

* Fix Validator Status Field in RPC Server (#2492)

* fix status of key

* status test fix

* fmt

* Estimate the Time Till Follow Distance Is Completed (#2486)

* use estimation instead

* fix test

* fixing another test

* fix tests and preston's comments

* remove unused var

* fix condition

* Revert "fix condition"

This reverts commit dee0e3112c.

* dont return error

* add production config for testnet release (#2493)

* Lookup Validator Index in State in Status Check (#2494)

* state lookup

* refactor duplicate code

* refactor with mapping

* fix broken tests

* finish refactor

* merged master

* Starting, I need get_epoch_start_shard

* updated EpochCommitteeCount and fixed tests

* implemented ShardDelta

* test for ShardDelta

* implemented EpochStartShard

* added epoch out of bound test

* test for accurate start shard

* lint

* need to use changes from latest crosslinks

* added BaseReward and totalActiveBalance

* added test for base reward

* implemented process_attestation_delta

* comments

* comments

* merged master

* all tests passing

* start testing

* done

* merged master

* fixed tests

* tests, more to come

* tests done

* lint

* spaces over tabs

* addressed shay's feedback

* starting but need to merge a few things...

* tests

* fmt
2019-05-21 19:07:42 -04:00
terence tsao
1127fc656e Implement Attestation Delta for v0.6 (#2646)
* update process crosslink and update existing tests

* added a test case to cover no crosslink changes

* more test

* preston's feedback

* spellings

* ValidatorStatus Estimating Activation RPC Server (#2469)

* fix spacing

* working on position in queue

* fmt

* spacing

* feedback

* tests

* rename

* Only Perform Initial Sync With a Single Peer (#2471)

* fix spacing

* use send instead of broadcast in initial sync

* Fix Estimation of Deposit Inclusion Slot in ValidatorActivationStatus (#2472)

* fix spacing

* fix time estimates

* correct slot estimation

* naming

* Update beacon-chain/rpc/validator_server.go

Co-Authored-By: rauljordan <raul@prysmaticlabs.com>

* SSZ web api for decoding input data (#2473)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* standardize slot numbers (#2475)

* Add CORS for ssz api (#2476)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* needed CORS

* Allow Client to Retrieve Multiple Validator Statuses (#2474)

* multiple validator statuses

* gazelle

* context

* fixing bugs

* remove old way of checking

* fix logging

* make activation queue more accurate

* fix rpc test

* add test

* fix remaining tests

* lint

* comment

* review comments

* Update Prysm README (#2477)

* README updated

* readme updates

* no err throw (#2479)

* Fix Status Nil Pointer Error (#2480)

* no err throw

* nil errors

* 3.175 (#2482)

* Better Error Message if Failing to Exit Initial Sync (#2483)

* no err throw

* nil errors

* better error on init sync

* Only Log Active Balances (#2485)

* only log active balance

* dont need ()

* change logging (#2487)

* fix chainstart waiting on rpc server (#2488)

* shift ticker to after activation (#2489)

* Add drain script (#2418)

* Add drain script

* Fix script to drain contracts from newest to oldest

* Add README

* remove comments

* Only after block 400k, look up by deposit event

* issue warn log on disconnecting peer instead of error (#2491)

* Display Only Active Validator Data (#2490)

* Fix Validator Status Field in RPC Server (#2492)

* fix status of key

* status test fix

* fmt

* Estimate the Time Till Follow Distance Is Completed (#2486)

* use estimation instead

* fix test

* fixing another test

* fix tests and preston's comments

* remove unused var

* fix condition

* Revert "fix condition"

This reverts commit dee0e3112c.

* dont return error

* add production config for testnet release (#2493)

* Lookup Validator Index in State in Status Check (#2494)

* state lookup

* refactor duplicate code

* refactor with mapping

* fix broken tests

* finish refactor

* merged master

* Starting, I need get_epoch_start_shard

* updated EpochCommitteeCount and fixed tests

* implemented ShardDelta

* test for ShardDelta

* implemented EpochStartShard

* added epoch out of bound test

* test for accurate start shard

* lint

* need to use changes from latest crosslinks

* added BaseReward and totalActiveBalance

* added test for base reward

* implemented process_attestation_delta

* comments

* comments

* merged master

* all tests passing

* start testing

* done

* merged master

* fixed tests

* tests, more to come

* tests done

* lint

* spaces over tabs

* addressed shay's feedback

* merged master
2019-05-21 14:57:25 -04:00
terence tsao
d8aada8258 Putting Crosslink Delta Back (#2654)
* update process crosslink and update existing tests

* added a test case to cover no crosslink changes

* more test

* preston's feedback

* spellings

* ValidatorStatus Estimating Activation RPC Server (#2469)

* fix spacing

* working on position in queue

* fmt

* spacing

* feedback

* tests

* rename

* Only Perform Initial Sync With a Single Peer (#2471)

* fix spacing

* use send instead of broadcast in initial sync

* Fix Estimation of Deposit Inclusion Slot in ValidatorActivationStatus (#2472)

* fix spacing

* fix time estimates

* correct slot estimation

* naming

* Update beacon-chain/rpc/validator_server.go

Co-Authored-By: rauljordan <raul@prysmaticlabs.com>

* SSZ web api for decoding input data (#2473)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* standardize slot numbers (#2475)

* Add CORS for ssz api (#2476)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* needed CORS

* Allow Client to Retrieve Multiple Validator Statuses (#2474)

* multiple validator statuses

* gazelle

* context

* fixing bugs

* remove old way of checking

* fix logging

* make activation queue more accurate

* fix rpc test

* add test

* fix remaining tests

* lint

* comment

* review comments

* Update Prysm README (#2477)

* README updated

* readme updates

* no err throw (#2479)

* Fix Status Nil Pointer Error (#2480)

* no err throw

* nil errors

* 3.175 (#2482)

* Better Error Message if Failing to Exit Initial Sync (#2483)

* no err throw

* nil errors

* better error on init sync

* Only Log Active Balances (#2485)

* only log active balance

* dont need ()

* change logging (#2487)

* fix chainstart waiting on rpc server (#2488)

* shift ticker to after activation (#2489)

* Add drain script (#2418)

* Add drain script

* Fix script to drain contracts from newest to oldest

* Add README

* remove comments

* Only after block 400k, look up by deposit event

* issue warn log on disconnecting peer instead of error (#2491)

* Display Only Active Validator Data (#2490)

* Fix Validator Status Field in RPC Server (#2492)

* fix status of key

* status test fix

* fmt

* Estimate the Time Till Follow Distance Is Completed (#2486)

* use estimation instead

* fix test

* fixing another test

* fix tests and preston's comments

* remove unused var

* fix condition

* Revert "fix condition"

This reverts commit dee0e3112c.

* dont return error

* add production config for testnet release (#2493)

* Lookup Validator Index in State in Status Check (#2494)

* state lookup

* refactor duplicate code

* refactor with mapping

* fix broken tests

* finish refactor

* merged master

* Starting, I need get_epoch_start_shard

* updated EpochCommitteeCount and fixed tests

* implemented ShardDelta

* test for ShardDelta

* implemented EpochStartShard

* added epoch out of bound test

* test for accurate start shard

* lint

* need to use changes from latest crosslinks

* added BaseReward and totalActiveBalance

* added test for base reward

* merged master

* all tests passing

* start testing

* done

* fixed tests

* addressed shay's feedback
2019-05-21 14:45:59 -04:00
Raul Jordan
967f3efc50 Follow up on process block header v0.6 (#2666) 2019-05-21 12:12:35 -05:00
terence tsao
0fa4197a9b Add Process Registry for Epoch Processing (#2668)
* update update-process-registry

* added back the old tests

* fmt

* gaz
2019-05-21 09:23:06 -04:00
shayzluf
95bc2ca224 add ProcessBlockHeader v0.6 (#2534)
* add ProcessBlockHeader

* function has all its dependancies in place

* arange the basic ok test

* gazzele and skip test update

* skip wrong sig test

* fmt imports and change requests

* goimports fmt

* map for struct fields to be location independent

* reorder protobuf fields

* added tests

* gazzle fix

* few change requests fixes

* revert changes in types.proto

* revert changes in types

* fix tests

* fix lint

* fmt imports

* fix gazelle

* fix var naming

* pb update

* var naming

* tarance change request fixes

* fix test
2019-05-21 04:17:25 +05:30
Preston Van Loon
ab35814d39 Genesis == zero (#2653) 2019-05-19 19:25:06 -04:00
terence tsao
3c9b099ae5 Implement Crosslink Delta Rewards for 0.6 (#2517)
* update process crosslink and update existing tests

* added a test case to cover no crosslink changes

* more test

* preston's feedback

* spellings

* ValidatorStatus Estimating Activation RPC Server (#2469)

* fix spacing

* working on position in queue

* fmt

* spacing

* feedback

* tests

* rename

* Only Perform Initial Sync With a Single Peer (#2471)

* fix spacing

* use send instead of broadcast in initial sync

* Fix Estimation of Deposit Inclusion Slot in ValidatorActivationStatus (#2472)

* fix spacing

* fix time estimates

* correct slot estimation

* naming

* Update beacon-chain/rpc/validator_server.go

Co-Authored-By: rauljordan <raul@prysmaticlabs.com>

* SSZ web api for decoding input data (#2473)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* standardize slot numbers (#2475)

* Add CORS for ssz api (#2476)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* needed CORS

* Allow Client to Retrieve Multiple Validator Statuses (#2474)

* multiple validator statuses

* gazelle

* context

* fixing bugs

* remove old way of checking

* fix logging

* make activation queue more accurate

* fix rpc test

* add test

* fix remaining tests

* lint

* comment

* review comments

* Update Prysm README (#2477)

* README updated

* readme updates

* no err throw (#2479)

* Fix Status Nil Pointer Error (#2480)

* no err throw

* nil errors

* 3.175 (#2482)

* Better Error Message if Failing to Exit Initial Sync (#2483)

* no err throw

* nil errors

* better error on init sync

* Only Log Active Balances (#2485)

* only log active balance

* dont need ()

* change logging (#2487)

* fix chainstart waiting on rpc server (#2488)

* shift ticker to after activation (#2489)

* Add drain script (#2418)

* Add drain script

* Fix script to drain contracts from newest to oldest

* Add README

* remove comments

* Only after block 400k, look up by deposit event

* issue warn log on disconnecting peer instead of error (#2491)

* Display Only Active Validator Data (#2490)

* Fix Validator Status Field in RPC Server (#2492)

* fix status of key

* status test fix

* fmt

* Estimate the Time Till Follow Distance Is Completed (#2486)

* use estimation instead

* fix test

* fixing another test

* fix tests and preston's comments

* remove unused var

* fix condition

* Revert "fix condition"

This reverts commit dee0e3112c.

* dont return error

* add production config for testnet release (#2493)

* Lookup Validator Index in State in Status Check (#2494)

* state lookup

* refactor duplicate code

* refactor with mapping

* fix broken tests

* finish refactor

* merged master

* Starting, I need get_epoch_start_shard

* updated EpochCommitteeCount and fixed tests

* implemented ShardDelta

* test for ShardDelta

* implemented EpochStartShard

* added epoch out of bound test

* test for accurate start shard

* lint

* need to use changes from latest crosslinks

* added BaseReward and totalActiveBalance

* added test for base reward

* merged master

* all tests passing

* start testing

* done
2019-05-19 12:09:53 -04:00
Raul Jordan
a62acd965d Process Block Deposits v0.6 (#2647)
* imports fixes

* deposits tests pass

* wrapped up gazelle

* spacing
2019-05-19 11:16:25 -04:00
Andrei Ivasko
aec8f398d9 Get attestation data slot v0.6 (#2593)
* attestation.go is ready, tests not

* ready for review

* fixing linter issues

* modified Crosslink and AttestationData proto fields to spec 2.0,marked deprecated fields

* gazelle

* fixed tests

* fixed error

* error msg
2019-05-19 10:48:18 -04:00
Raul Jordan
f0e1235910 Process Block Eth1 Data v0.6 (#2645) 2019-05-18 21:09:51 -04:00
terence tsao
4f3ab898bc Implement Process Crosslink From 0.6 (#2460) 2019-05-18 16:47:52 -04:00
Raul Jordan
fc325255fb Process Beacon Chain Transfers v0.6 (#2642)
* add transfers

* beacon transfer operations complete

* full cov

* transfer testing

* finished tests

* Update beacon-chain/core/blocks/block_operations_test.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>

* Update beacon-chain/core/blocks/block_operations.go

Co-Authored-By: terence tsao <terence@prysmaticlabs.com>
2019-05-18 16:24:25 -04:00
terence tsao
097b147921 Removed Old Epoch Processing Functions (#2644)
* removed old epoch processing stuff

* gaz
2019-05-18 14:50:16 -04:00
terence tsao
98cc1e98c6 Finalize helper functions for 0.6 (#2632) 2019-05-17 19:01:07 -04:00
terence tsao
216469c8a0 Clean up Helper Functions Part 1 (#2612) 2019-05-17 11:43:30 -04:00
Andrei Ivasko
9bf2874f0f Update processEth1Data for v0.6 (#2516) 2019-05-16 12:38:19 -04:00
Raul Jordan
20a543507b Update Block Processing Voluntary Exits (#2609) 2019-05-16 11:19:45 -04:00
shayzluf
70ad16f58e update to new spec (#2614) 2019-05-16 05:31:48 -07:00
terence tsao
5ae4ced152 Update Reward Helper v0.6 (#2470)
* added BaseReward

* added rewards helper

* added test for BaseReward

* extra space

* move exported function above
2019-05-15 09:00:18 -05:00
terence tsao
169ffcd66f Implement Final Updates 0.6 (#2562)
* ValidatorStatus Estimating Activation RPC Server (#2469)

* fix spacing

* working on position in queue

* fmt

* spacing

* feedback

* tests

* rename

* Only Perform Initial Sync With a Single Peer (#2471)

* fix spacing

* use send instead of broadcast in initial sync

* Fix Estimation of Deposit Inclusion Slot in ValidatorActivationStatus (#2472)

* fix spacing

* fix time estimates

* correct slot estimation

* naming

* Update beacon-chain/rpc/validator_server.go

Co-Authored-By: rauljordan <raul@prysmaticlabs.com>

* SSZ web api for decoding input data (#2473)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* standardize slot numbers (#2475)

* Add CORS for ssz api (#2476)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* needed CORS

* Allow Client to Retrieve Multiple Validator Statuses (#2474)

* multiple validator statuses

* gazelle

* context

* fixing bugs

* remove old way of checking

* fix logging

* make activation queue more accurate

* fix rpc test

* add test

* fix remaining tests

* lint

* comment

* review comments

* Update Prysm README (#2477)

* README updated

* readme updates

* no err throw (#2479)

* Fix Status Nil Pointer Error (#2480)

* no err throw

* nil errors

* 3.175 (#2482)

* Better Error Message if Failing to Exit Initial Sync (#2483)

* no err throw

* nil errors

* better error on init sync

* Only Log Active Balances (#2485)

* only log active balance

* dont need ()

* change logging (#2487)

* fix chainstart waiting on rpc server (#2488)

* shift ticker to after activation (#2489)

* Add drain script (#2418)

* Add drain script

* Fix script to drain contracts from newest to oldest

* Add README

* remove comments

* Only after block 400k, look up by deposit event

* issue warn log on disconnecting peer instead of error (#2491)

* Display Only Active Validator Data (#2490)

* Fix Validator Status Field in RPC Server (#2492)

* fix status of key

* status test fix

* fmt

* Estimate the Time Till Follow Distance Is Completed (#2486)

* use estimation instead

* fix test

* fixing another test

* fix tests and preston's comments

* remove unused var

* fix condition

* Revert "fix condition"

This reverts commit dee0e3112c.

* dont return error

* add production config for testnet release (#2493)

* Lookup Validator Index in State in Status Check (#2494)

* state lookup

* refactor duplicate code

* refactor with mapping

* fix broken tests

* finish refactor

* merged master

* updated EpochCommitteeCount and fixed tests

* implemented ShardDelta

* test for ShardDelta

* implemented EpochStartShard

* added epoch out of bound test

* test for accurate start shard

* lint

* implemented process_final_updates

* move to the top of the file

* add comment back

* gaz

* Test for process final updates

* fixed tests

* fixed all the tests
2019-05-15 08:47:22 -05:00
Raul Jordan
ab72752422 Update Proposer/Attester Slashings and Slashing Helpers (#2603) 2019-05-15 06:26:24 -07:00
terence tsao
f8118e4599 Implement Process Slashings for 0.6 (#2523) 2019-05-15 06:15:07 -07:00
terence tsao
23de5ca6c4 Update Committee Helper Part 2 (#2592) 2019-05-14 21:02:34 -07:00
terence tsao
83b13a1602 Update Shard Helpers for 0.6 (#2497)
* ValidatorStatus Estimating Activation RPC Server (#2469)

* fix spacing

* working on position in queue

* fmt

* spacing

* feedback

* tests

* rename

* Only Perform Initial Sync With a Single Peer (#2471)

* fix spacing

* use send instead of broadcast in initial sync

* Fix Estimation of Deposit Inclusion Slot in ValidatorActivationStatus (#2472)

* fix spacing

* fix time estimates

* correct slot estimation

* naming

* Update beacon-chain/rpc/validator_server.go

Co-Authored-By: rauljordan <raul@prysmaticlabs.com>

* SSZ web api for decoding input data (#2473)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* standardize slot numbers (#2475)

* Add CORS for ssz api (#2476)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* needed CORS

* Allow Client to Retrieve Multiple Validator Statuses (#2474)

* multiple validator statuses

* gazelle

* context

* fixing bugs

* remove old way of checking

* fix logging

* make activation queue more accurate

* fix rpc test

* add test

* fix remaining tests

* lint

* comment

* review comments

* Update Prysm README (#2477)

* README updated

* readme updates

* no err throw (#2479)

* Fix Status Nil Pointer Error (#2480)

* no err throw

* nil errors

* 3.175 (#2482)

* Better Error Message if Failing to Exit Initial Sync (#2483)

* no err throw

* nil errors

* better error on init sync

* Only Log Active Balances (#2485)

* only log active balance

* dont need ()

* change logging (#2487)

* fix chainstart waiting on rpc server (#2488)

* shift ticker to after activation (#2489)

* Add drain script (#2418)

* Add drain script

* Fix script to drain contracts from newest to oldest

* Add README

* remove comments

* Only after block 400k, look up by deposit event

* issue warn log on disconnecting peer instead of error (#2491)

* Display Only Active Validator Data (#2490)

* Fix Validator Status Field in RPC Server (#2492)

* fix status of key

* status test fix

* fmt

* Estimate the Time Till Follow Distance Is Completed (#2486)

* use estimation instead

* fix test

* fixing another test

* fix tests and preston's comments

* remove unused var

* fix condition

* Revert "fix condition"

This reverts commit dee0e3112c.

* dont return error

* add production config for testnet release (#2493)

* Lookup Validator Index in State in Status Check (#2494)

* state lookup

* refactor duplicate code

* refactor with mapping

* fix broken tests

* finish refactor

* merged master

* updated EpochCommitteeCount and fixed tests

* implemented ShardDelta

* test for ShardDelta

* implemented EpochStartShard

* added epoch out of bound test

* test for accurate start shard

* lint
2019-05-14 15:44:12 -05:00
Ivan Martinez
c4c76c747c Update Committee Helpers to v0.6.0 (#2398) 2019-05-13 08:24:39 -07:00
terence tsao
d54bd42230 Rebase 0.6 with Master (#2563)
* ValidatorStatus Estimating Activation RPC Server (#2469)

* fix spacing

* working on position in queue

* fmt

* spacing

* feedback

* tests

* rename

* Only Perform Initial Sync With a Single Peer (#2471)

* fix spacing

* use send instead of broadcast in initial sync

* Fix Estimation of Deposit Inclusion Slot in ValidatorActivationStatus (#2472)

* fix spacing

* fix time estimates

* correct slot estimation

* naming

* Update beacon-chain/rpc/validator_server.go

Co-Authored-By: rauljordan <raul@prysmaticlabs.com>

* SSZ web api for decoding input data (#2473)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* standardize slot numbers (#2475)

* Add CORS for ssz api (#2476)

* first pass ssz server for decoding deposit input data

* fix decoding

* revert viz change on helper

* add image target

* use /api prefix, add deployment for cluster

* fix lint

* needed CORS

* Allow Client to Retrieve Multiple Validator Statuses (#2474)

* multiple validator statuses

* gazelle

* context

* fixing bugs

* remove old way of checking

* fix logging

* make activation queue more accurate

* fix rpc test

* add test

* fix remaining tests

* lint

* comment

* review comments

* Update Prysm README (#2477)

* README updated

* readme updates

* no err throw (#2479)

* Fix Status Nil Pointer Error (#2480)

* no err throw

* nil errors

* 3.175 (#2482)

* Better Error Message if Failing to Exit Initial Sync (#2483)

* no err throw

* nil errors

* better error on init sync

* Only Log Active Balances (#2485)

* only log active balance

* dont need ()

* change logging (#2487)

* fix chainstart waiting on rpc server (#2488)

* shift ticker to after activation (#2489)

* Add drain script (#2418)

* Add drain script

* Fix script to drain contracts from newest to oldest

* Add README

* remove comments

* Only after block 400k, look up by deposit event

* issue warn log on disconnecting peer instead of error (#2491)

* Display Only Active Validator Data (#2490)

* Fix Validator Status Field in RPC Server (#2492)

* fix status of key

* status test fix

* fmt

* Estimate the Time Till Follow Distance Is Completed (#2486)

* use estimation instead

* fix test

* fixing another test

* fix tests and preston's comments

* remove unused var

* fix condition

* Revert "fix condition"

This reverts commit dee0e3112c.

* dont return error

* add production config for testnet release (#2493)

* Lookup Validator Index in State in Status Check (#2494)

* state lookup

* refactor duplicate code

* refactor with mapping

* fix broken tests

* finish refactor

* Fix Status Update Progression in RPC Server (#2495)

* fix status updates

* standardize logs in validator

* tests

* fix conditional

* Renovate Updates in Batch (#2505)

* Update com_github_atlassian_bazel_tools commit hash to 20cbdb1

* Update io_bazel_rules_k8s commit hash to 94e92d1

* Update prysm_testnet_site commit hash to b6c4983

* Update dependency com_github_jbenet_goprocess to v0.1.0

* Update dependency com_github_pkg_errors to v0.8.1

* Update dependency com_google_cloud_go to v0.38.0

* Update libp2p

* fixed

* add path for prylabs.net/ssz (#2508)

* Add GCP test configuration and p2p-host-ip flag (#2510)

* Add GCP startup script

* add flag for external IP

* specify that it must be for linux

* /deploy/create

* gofmt

* Canonical Blocks for Batch Block Request (#2511)

* only reply canonical block for reg sync

* CanonicalBlock test

* lint

* Use Single Code Path for Receiving Blocks and Fork Choice (#2514)

* insert canonical

* one path

* single entry

* travis

* lint

* Do Not Broadcast Attestations in Operations Service (#2509)

* no att broadcast

* broadcast in rpc but not operations

* fix space

* tests

* Revert "Renovate Updates in Batch (#2505)" (#2515)

This reverts commit 0e8ef07587.

* fix validator flags (#2518)

* Update Attestation Target for AttestHead (#2525)

* update attestation target for AttestHead

* fixed test

* fixed atts verification (#2527)

* delete failed pending atts (#2528)

* Do Not Subscribe to Blocks in Initial Sync (#2524)

* only sub to block batches

* batch sub remove

* tests

* fix lint

* gazelle

* delete old im mem blocks code

* Sort list before processing batched blocks (#2531)

* Revert "Canonical Blocks for Batch Block Request (#2511)" (#2532)

This reverts commit a818564b8d.

* Do Not Run Fork Choice on Block Proposals (#2526)

* removed unused doesParentExist (#2538)

* Sync Responds With Canonical Block Lists (#2539)

* first attempt at canonical blk list

* lint

* condition 1

* ctx w/ time out

* added canonical block list tests

* revert

* add to BeaconChainFlags

* dont use map, use proto

* attempt to use proto, take 1

* add run

* like canonical better than head

* removed unused

* Update proto/beacon/p2p/v1/messages.proto

Co-Authored-By: rauljordan <raul@prysmaticlabs.com>

* protos

* Check context has not expired before expensive operations (#2541)

* use ctx.Err for potentially expensive RPC methods, use batch for saving attestations

* more

* in sync too

* Update BUILD.bazel

* fix spacing

* enhance forkchoice log (#2537)

* add attestation data req cache (#2542)

* add attestation data req cache

* add tests

* godocs

* fix cache size gauge

* lint

* fix tests

* gazelle

* add more comments

* exclusive of finalized block

* Refactor DB Package to Enable Multiple Blocks/States at Slots (#2540)

* prefixed blocks blocked

* db refactor

* new historical state saving

* builds but tests fail

* more tests pass

* fix tests

* fix tests

* delete buf

* Update beacon-chain/db/block.go

Co-Authored-By: rauljordan <raul@prysmaticlabs.com>

* Update beacon-chain/db/block.go

Co-Authored-By: rauljordan <raul@prysmaticlabs.com>

* rem unused

* exclusive of finalized block (#2547)

* PreChainStart Activation Fix (#2544)

* fix activation

* remove logs

* remove logs

* revert change

* fix test

* Prevent Reorgs if Chain Head Does Not Change (#2548)

* revent reorgs if head does not change

* lint

* spacing

* Fetch Block Tree from Justified Block to Highest Observed Slot via RPC (#2549)

* test block tree req

* tree improvement

* use the right data

* block tree blocked by children func

* rem file

* imports

* add ctx

* imports

* mock

* check expired context

* added block root

* gazelle

* sace

* "Super sync" and naive p2p reputation (#2550)

* checkpoint on super sync with reputation

* ensure handling only expected peers msg

* exclusive of finalized block

* skip block saved already

* clean up struct

* remove 2 more fields

* _

* everything builds, but doesnt test yet

* lint

* fix p2p tests

* space

* space

* space

* fmt

* fmt

* Filter Canonical Attester for RPC (#2551)

* exclusive of finalized block

* add filter to only include canonical attestation

* comments

* grammer

* gaz

* typo

* fixed existing tests

* added test for IsAttCanonical

* add nil blocks test

* Can't save attestation target when head is nil (#2530)

* take care nil block

* warn to info

* preston's feedback

* Only marshal broadcast debug message when actually logging debug (#2553)

* fix broadcast debug message

* feedback

* Fix lint issues (#2554)

* fix broadcast debug message

* feedback

* imports

* lint

* Fix Logging in Validator Client (#2555)

* Use a prysm specific DHT protocol (#2558)

* use a prysm specific DHT

* gazelle

* space

* Fix BlockTree RPC Server Response (#2556)
2019-05-12 08:55:17 +08:00
Nishant Das
46fde662c8 Implement Self Signed Root in SSZ (#2520)
* add function

* add tests

* finally fixed it

* more changes

* assert slice type

* new changes

* add tests

* more test cases

* remove useless vars

* gazelle

* remove requirement

* shay's comments

* lint
2019-05-09 18:12:58 +08:00
shayzluf
59ae4260bf Update get domain and Fork to v0.6 (#2545)
* get_domain DomainVersion refactoring

* fix test to comply with new call

* remove forkutil

* remove forkdata

* fix nishant change requests
2019-05-09 17:23:56 +08:00
shayzluf
54184eb108 update registry updates func (#2521)
* update registry updates func

* added tests and moved to epoch processing

* fixed naming issues
2019-05-09 05:51:39 +05:30
Nishant Das
417a7ddf7a changes name to signature (#2535) 2019-05-08 06:10:41 -07:00
shayzluf
6a2591ff34 Add convert to indexed (#2519)
* sort participants slice

* add bitfield functions and tests

* added BitfieldBit test

* add AttestationParticipantsNew

* revert AttestationParticipants to its previous change

* add tests

* remove verifybitfieldnew

* fix tests and remove multiple tests

* remove duplicate test

* start work

* convert attestation to indexed attestations

* fix test for convert index

* remove calling getter

* add more tests

* remove underscore
2019-05-08 05:56:56 +05:30
terence tsao
fde0a7819b Implement Justification and finalization Processing (#2448) 2019-05-06 21:09:22 -07:00
shayzluf
4cec1b1e74 Update attesting indices v0.6 (#2449)
* sort participants slice

* add bitfield functions and tests

* added BitfieldBit test

* add AttestationParticipantsNew

* revert AttestationParticipants to its previous change

* add tests

* remove verifybitfieldnew

* fix tests and remove multiple tests

* remove duplicate test

* change magic number into ceildiv8
2019-05-07 08:39:53 +05:30
Nishant Das
2452266033 Update Genesis State Function to v0.6 (#2465)
* add pseudocode

* make changes

* fix all tests

* fix tests
2019-05-06 09:05:13 +08:00
terence tsao
49c70769cb Rebase Spec 0.6 with Master (#2496) 2019-05-05 16:56:22 -07:00
Preston Van Loon
f0c45f5011 merge master 2019-05-02 11:53:40 -04:00
Nishant Das
c87f71b303 Changes Names of Variables that aren't Actually Deprecated (#2466) 2019-05-02 05:54:42 -07:00
shayzluf
191cc9d1c5 Update bitfield helpers v0.6 (#2446)
* add bitfield functions and tests

* added BitfieldBit test

* remove verifybitfieldnew

* fix checkbit and tests

* change request fixes from nishant and terence

* nicer code by nishant
2019-05-01 13:04:00 +08:00
Nishant Das
c1fdfcd63c Update from Master (#2447) 2019-04-29 21:45:50 -07:00
shayzluf
6a245babba Update generateseed to 0.6 spec (#2371)
* update generateseed to 0.6 spec

* added fixes to active index to fix  tests

* fix comment and rewrote the assignment

* fix terence change requests

* remove extra space
2019-04-30 03:25:57 +05:30
Ivan Martinez
1af61109fb Add VerifyIndexedAttestation for v0.6 (#2385) 2019-04-29 14:13:27 -07:00
terence tsao
bb703e4ab7 Implement Crosslink Helpers (#2403) 2019-04-29 12:34:09 -07:00
shayzluf
7a44bd2873 Update active index root helpers (#2369) 2019-04-28 21:54:08 -07:00
terence tsao
dd09c5ed06 Implement Epoch Processing Helpers - Attesting Indices, Balances, Inclusion Slot... (#2393) 2019-04-27 10:29:39 -07:00
shayzluf
5d4780b746 PermutedIndex helper function (#2350)
* PermutedIndex helper function with Benchmark

* removed duplicate functions

* fix change requests by preston

* fix gazelle

* t.skip with todo as new yaml tests depands on compute commitee

* added new test yaml files for shuffling

* added comment for exported struct

* review comments

* review comments

* review comments

* change naming
2019-04-26 22:33:13 +08:00
terence tsao
f9dbbb8496 Implement Epoch Processing Helpers - Attestations Matching (#2379) 2019-04-26 07:11:07 -07:00
Nishant Das
4318d21872 Merge From Master (#2392) 2019-04-26 06:36:53 -07:00
shayzluf
6cdf27c356 Add split offset function v0.6 (#2365)
* add SplitOffset function and test that fails"

* use splitoffset in function

* Update beacon-chain/utils/shuffle_test.go

error format change

Co-Authored-By: shayzluf <thezluf@gmail.com>

* add test
2019-04-26 18:00:49 +08:00
Ivan Martinez
cffe93e7b0 Update missing constants and some proto to v0.6 (#2388)
* Update missing constants and some proto to v0.6
2019-04-26 00:18:32 -04:00
terence tsao
4a71a37d34 Implement new initiate validator exit function (#2366) 2019-04-25 18:10:22 -07:00
shayzluf
0035a5f783 Update balance helpers v0.6 (#2348) 2019-04-24 18:31:05 -07:00
Ivan Martinez
f319ec7eb2 Update Randao Mix Helpers to v0.6 (#2357) 2019-04-24 14:40:42 -07:00
terence tsao
50708c961a Update Block & State Root Helpers (#2355) 2019-04-24 14:18:44 -07:00
terence tsao
792b52a03c Implement Validator Helpers (#2364) 2019-04-24 13:56:48 -07:00
Ivan Martinez
14698aee6b Update Validator Helpers to v0.6 (#2352) 2019-04-23 19:40:49 -07:00
terence tsao
fbdeae1792 Update Balances Related Helpers (#2356) 2019-04-23 11:17:54 -07:00
Ivan Martinez
4fc37fcf41 Add WithdrawableEpoch to Validiator (#2351)
* Add WithdrawableEpoch to Validiator

* Build proto
2019-04-23 06:52:48 -04:00
terence tsao
89cf5f5645 Update Beacon State Protobuf Field (#2341) 2019-04-22 21:39:28 -07:00
terence tsao
4172198ff4 Update DepositData and Validator Proto Fields (#2318) 2019-04-22 21:21:14 -07:00
terence tsao
bf97c6e1a2 Update Beacon Block Proto Fields (#2340) 2019-04-22 21:07:15 -07:00
terence tsao
e2ecbc3575 Update Historical Batch, Blk Header and Deposit Proto Fields (#2327) 2019-04-22 21:07:00 -07:00
Ivan Martinez
17147c7449 Update constants v0.6 (#2343)
* Update config to v0.6.0

* finish docs

* Add comment for deprecated fields

* Move deprecated lines
2019-04-22 23:59:20 -04:00
terence tsao
bb28b50c81 Update Block Operations Proto Fields (#2338)
* update attestation related protos

* use root for hash32

* fixed a few typos

* review comments

* update historical batch, deposit and blk header fields

* Add nogo to introduce built time linting (#2317)

* Add nogo and fix lint issues

* Run gazelle

* better gazelle

* ignore external struct tags

* Exclude all third party code from unsafeptr (#2321)

* Fix Assingments Bug (#2320)

* fix

* fix tests

* Add feature flag to toggle gossip sub in p2p (#2322)

* add feature flag to enable gossip sub in p2p

* invert the enable/disable logic

* add the flag in k8s and fix tests

* gazellle

* return empty config if nil

* Prevent Canceling Goroutines in Validator Client (#2324)

* do not cancel assignments goroutines

* exclude rule

* disable lostcancel for now

* Fix Pending Attestations RPC Call (#2304)

* pending atts

* use proposal slot

* attestation inclusion fix

* lint

* advance state transitions

* gazelle

* lint tests pass

* Do Not Update Validator Registry on Nil Block (#2326)

* no registry update if block is nil

* regression test

* lint

* Ensure Block Processing Failures Return an Error (#2325)

* ensure block failed processing returns an error

* fixed test

* test assertion corrected

* comments

* fix tests

* imports

* rebase

* add spec badge. Thanks to https://github.com/ChainSafe/lodestar/pull/166 for the idea :)

* Update Crosslink Protobuf Fields (#2313)

* rebase

* rebase

* rebase

* Prevent Canceling Goroutines in Validator Client (#2324)

* do not cancel assignments goroutines

* exclude rule

* disable lostcancel for now

* starting proposer slashing

* starting attester slashing

* revert gen file

* Update types.pb.go
2019-04-22 23:45:44 -04:00
terence tsao
a85612d155 Update Attestation Proto Fields (#2316)
* update attestation related protos

* use root for hash32

* fixed a few typos

* review comments

* revert genf ile
2019-04-22 23:40:19 -04:00
terence tsao
01f61664d8 Update Eth1data Protobuf Fields (#2315)
* update-eth1data

* use block root

* ops

* messed up crosslink

* review comments
2019-04-22 21:50:53 -04:00
Preston Van Loon
bae78a7f2c Merge branch 'master' of github.com:prysmaticlabs/prysm into spec-v0.6 2019-04-22 18:58:18 -04:00
Ivan Martinez
e913c73813 Update slot helpers v0.6 (#2345)
* Update config to v0.6.0

* finish docs

* Update slot helper docs to v0.6
2019-04-22 16:47:33 -05:00
Preston Van Loon
2cabf15525 Merge branch 'master' of github.com:prysmaticlabs/prysm into spec-v0.6 2019-04-22 10:21:56 -04:00
terence tsao
2b887bc01a Update Crosslink Protobuf Fields (#2313) 2019-04-20 17:00:05 -07:00
Preston Van Loon
92c6e854d5 add spec badge. Thanks to https://github.com/ChainSafe/lodestar/pull/166 for the idea :) 2019-04-20 12:19:25 -04:00
3999 changed files with 90301 additions and 601530 deletions


@@ -1,56 +1,5 @@
# Import bazelrc presets
import %workspace%/build/bazelrc/convenience.bazelrc
import %workspace%/build/bazelrc/correctness.bazelrc
import %workspace%/build/bazelrc/cross.bazelrc
import %workspace%/build/bazelrc/debug.bazelrc
import %workspace%/build/bazelrc/hermetic-cc.bazelrc
import %workspace%/build/bazelrc/performance.bazelrc
# Print warnings for tests with inappropriate test size or timeout.
test --test_verbose_timeout_warnings
# hermetic_cc_toolchain v3.0.1 required changes.
common --enable_platform_specific_config
build:linux --sandbox_add_mount_pair=/tmp
build:macos --sandbox_add_mount_pair=/var/tmp
build:windows --sandbox_add_mount_pair=C:\Temp
# E2E run with debug gotag
test:e2e --define gotags=debug
# Clearly indicate that coverage is enabled to disable certain nogo checks.
coverage --define=coverage_enabled=1
# Stamp binaries with git information
build --workspace_status_command=./hack/workspace_status.sh
build --define blst_disabled=false
run --define blst_disabled=false
build:blst_disabled --define blst_disabled=true
build:blst_disabled --define gotags=blst_disabled
build:minimal --//proto:network=minimal
build:minimal --@io_bazel_rules_go//go/config:tags=minimal
# Release flags
build:release --compilation_mode=opt
build:release --stamp
build:release --define pgo_enabled=1
# Build binary with cgo symbolizer for debugging / profiling.
build:cgo_symbolizer --copt=-g
build:cgo_symbolizer --define=USE_CGO_SYMBOLIZER=true
build:cgo_symbolizer -c dbg
build:cgo_symbolizer --define=gotags=cgosymbolizer_enabled
# toolchain build debug configs
#------------------------------
build:debug --sandbox_debug
build:debug --toolchain_resolution_debug=".*"
build:debug --verbose_failures
build:debug -s
# Set bazel gotag
build --define gotags=bazel
# Abseil requires c++14 or greater.
build --cxxopt=-std=c++20
build --host_cxxopt=-std=c++20
# Fix for rules_docker. See: https://github.com/bazelbuild/rules_docker/issues/842
build --host_force_python=PY2
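The config groups named above (for example build:release, test:e2e, build:minimal, build:debug) are opt-in: their lines only apply when the matching --config flag is passed on the command line, while the bare build --... lines apply to every build. A minimal, hypothetical invocation (the target label is a placeholder, not taken from this diff) would be:
bazel build --config=release //path/to:target
bazel test --config=e2e //path/to:e2e_test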


@@ -1 +0,0 @@
7.1.0


@@ -2,57 +2,45 @@
# across machines, developers, and workspaces.
#
# This config is loaded from https://github.com/bazelbuild/bazel-toolchains/blob/master/bazelrc/latest.bazelrc
#build:remote-cache --remote_timeout=3600
#build:remote-cache --spawn_strategy=standalone
#build:remote-cache --strategy=Javac=standalone
#build:remote-cache --strategy=Closure=standalone
#build:remote-cache --strategy=Genrule=standalone
build:remote-cache --remote_cache=grpcs://remotebuildexecution.googleapis.com
build:remote-cache --remote_timeout=3600
build:remote-cache --auth_enabled=true
build:remote-cache --spawn_strategy=standalone
build:remote-cache --strategy=Javac=standalone
build:remote-cache --strategy=Closure=standalone
build:remote-cache --strategy=Genrule=standalone
# Build results backend.
build:remote-cache --bes_results_url="https://source.cloud.google.com/results/invocations/"
build:remote-cache --bes_backend=buildeventservice.googleapis.com
build:remote-cache --bes_timeout=60s
build:remote-cache --project_id=prysmaticlabs
build:remote-cache --bes_upload_mode=fully_async
# Prysm specific remote-cache properties.
build:remote-cache --remote_download_minimal
build:remote-cache --remote_build_event_upload=minimal
build:remote-cache --remote_cache=grpc://bazel-remote-cache:9092
build:remote-cache --experimental_remote_downloader=grpc://bazel-remote-cache:9092
build:remote-cache --remote_local_fallback
build:remote-cache --experimental_remote_cache_async
build:remote-cache --experimental_remote_merkle_tree_cache
build:remote-cache --experimental_action_cache_store_output_metadata
build:remote-cache --experimental_remote_cache_compression
# Enforce stricter environment rules, which eliminates some non-hermetic
# behavior and therefore improves both the remote cache hit rate and the
# correctness and repeatability of the build.
build:remote-cache --incompatible_strict_action_env=true
build:remote-cache --disk_cache=
build:remote-cache --host_platform_remote_properties_override='properties:{name:\"cache-silo-key\" value:\"prysm\"}'
build:remote-cache --remote_instance_name=projects/prysmaticlabs/instances/default_instance
build --experimental_use_hermetic_linux_sandbox
build:remote-cache --experimental_remote_download_outputs=minimal
build:remote-cache --experimental_inmemory_jdeps_files
build:remote-cache --experimental_inmemory_dotd_files
# Import workspace options.
import %workspace%/.bazelrc
# Enable blake3 once it is supported in remote cache. See: https://github.com/buchgr/bazel-remote/issues/710
# startup --digest_function=blake3
startup --host_jvm_args=-Xmx8g --host_jvm_args=-Xms4g
build --experimental_strict_action_env
build --disk_cache=/tmp/bazelbuilds
build --experimental_multi_threaded_digest
build --sandbox_tmpfs_path=/tmp
build --verbose_failures
build --announce_rc
build --show_progress_rate_limit=5
build --curses=no --color=no
build --curses=yes --color=yes
build --keep_going
build --test_output=errors
build --flaky_test_attempts=5
build --build_runfile_links=false # Only build runfile symlink forest when required by local action, test, or run command.
build --jobs=50
test --local_test_jobs=2
# Disabled race detection due to unstable test results under constrained environment build kite
# build --features=race
# Better caching
build:nostamp --nostamp
# Build metadata
build --build_metadata=ROLE=CI
build --build_metadata=REPO_URL=https://github.com/prysmaticlabs/prysm.git
build --workspace_status_command=./hack/workspace_status_ci.sh
# Buildbuddy
build --bes_results_url=https://app.buildbuddy.io/invocation/
build --bes_backend=grpcs://remote.buildbuddy.io
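This CI-oriented file layers on top of the workspace options through the import %workspace%/.bazelrc line, and its remote-cache group is likewise only activated when a job passes the corresponding flag. A sketch of such an invocation (the target label is a placeholder):
bazel test --config=remote-cache //path/to:target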


@@ -30,4 +30,4 @@ comment:
ignore:
- "**/*.pb.go"
- "**/testing/**/*"
- "**/*_mock.go"


@@ -1,26 +0,0 @@
version = 1
exclude_patterns = [
"tools/analyzers/**",
"validator/keymanager/remote/keymanager_test.go",
"validator/web/site_data.go"
]
[[analyzers]]
name = "go"
enabled = true
[analyzers.meta]
import_paths = ["github.com/prysmaticlabs/prysm/v5"]
[[analyzers]]
name = "test-coverage"
enabled = true
[[analyzers]]
name = "shell"
enabled = true
[[analyzers]]
name = "secrets"
enabled = true


@@ -1,2 +0,0 @@
bazel-*
.git

.github/CODEOWNERS

@@ -1,8 +0,0 @@
# Automatically require code review from core-team.
* @prysmaticlabs/core-team
# Starlark code owners
*.bzl @prestonvanloon
# Anyone on prylabs team can approve dependency updates.
deps.bzl @prysmaticlabs/core-team


@@ -1,66 +0,0 @@
name: 🐞 Bug report
description: Report a bug or problem with running Prysm
labels: ["Bug"]
body:
- type: markdown
attributes:
value: |
To help us tend to your issue faster, please search our currently open issues before submitting a new one.
Existing issues often contain information about workarounds, resolution, or progress updates.
- type: textarea
id: what-happened
attributes:
label: Describe the bug
description: |
A clear and concise description of the problem...
validations:
required: true
- type: textarea
id: previous-version
attributes:
label: Has this worked before in a previous version?
description: Did this behavior use to work in the previous version?
render: text
- type: textarea
id: reproduction-steps
attributes:
label: 🔬 Minimal Reproduction
description: |
Please let us know how we can reproduce this issue.
Include the exact method you used to run Prysm along with any flags used in your beacon chain and/or validator.
Make sure you don't upload any confidential files or private keys.
placeholder: |
Steps to reproduce:
1. Start '...'
2. Then '...'
3. Check '...'
4. See error
- type: textarea
id: errors
attributes:
label: Error
description: |
If the issue is accompanied by an error, please share the error logs with us below.
If you have a lot of logs, please make a pastebin with your logs and share the link with us here:
render: text
- type: dropdown
id: platform
attributes:
label: Platform(s)
description: What platform(s) did this occur on?
multiple: true
options:
- Linux (x86)
- Linux (ARM)
- Mac (Intel)
- Mac (Apple Silicon)
- Windows (x86)
- Windows (ARM)
- type: input
attributes:
label: What version of Prysm are you running? (Which release)
description: You can check your Prysm version by running your beacon node or validator with the `--version` flag.
- type: textarea
attributes:
label: Anything else relevant (validator index / public key)?


@@ -1,21 +0,0 @@
---
name: "\U0001F48EGeneral issue / discussion"
about: Any other type of general issue or discussion
---
<!--💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎
Hellooo! 😄
To help us tend to your issue faster, please search our currently open issues before submitting a new one.
Existing issues often contain information about workarounds, resolution, or progress updates.
💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎-->
# 💎 Issue
### Background
<!-- ✍️--> Context and background information on the discussion...
### Description


@@ -1,38 +0,0 @@
---
name: "\U0001F984Feature Flag Tracking"
about: Track a new feature, in development, in Prysm. This issue template should only be used by developers or contributors!
labels: Tracking
---
<!--💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎
Hellooo! 😄
Thanks for taking the time to file a tracking issue for your new feature. These issues really help
us track progress of features as they work their way through development. Be sure to review
shared/featureconfig/README.md for the latest documentation around feature flags.
💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎-->
# 🦄 Feature Tracking
**Current status** (opt-in or opt-out):
**Opt-in feature flag name**:
**Opt-out feature flag name**:
### Feature Description
<!-- ✍️--> A clear and concise description of the new feature. Provide links to your technical
design document or other supporting information.
### Tasks
- [ ] Feature flag created as opt-in
- [ ] Feature implemented / code complete
- [ ] Feature tested in production for adequate amount of time
- [ ] Feature flag is inverted to be opt-out and the opt-in flag is deprecated
- [ ] Feature has made it to a tagged production release as an opt-out flag
- [ ] Opt-out feature flag is deprecated and old code paths are cleaned up
This issue should be closed after all of the above tasks are complete.


@@ -1,27 +0,0 @@
---
name: "\U0001F680Feature request"
about: Suggest a feature for Prysm
---
<!--💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎
Hellooo! 😄
To help us tend to your issue faster, please search our currently open issues before submitting a new one.
Existing issues often contain information about workarounds, resolution, or progress updates.
💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎💎-->
# 🚀 Feature Request
### Description
<!-- ✍️ A clear and concise description of the problem or missing capability... -->
### Describe the solution you'd like
<!-- ✍️ If you have a solution in mind, please describe it. -->
### Describe alternatives you've considered
<!-- ✍️ Have you considered any alternative solutions or workarounds? -->


@@ -1,37 +0,0 @@
<!-- Thanks for sending a PR! Before submitting:
1. If this is your first PR, check out our contribution guide here https://docs.prylabs.network/docs/contribute/contribution-guidelines
You will then need to sign our Contributor License Agreement (CLA), which will show up as a comment from a bot in this pull request after you open it. We cannot review code without a signed CLA.
2. Please file an associated tracking issue if this pull request is non-trivial and requires context for our team to understand. All features and most bug fixes should have
an associated issue with a design discussed and decided upon. Small bug
fixes and documentation improvements don't need issues.
3. New features and bug fixes must have tests. Documentation may need to
be updated. If you're unsure what to update, send the PR, and we'll discuss
in review.
4. Note that PRs updating dependencies and new Go versions are not accepted.
Please file an issue instead.
5. A changelog entry is required for user facing issues.
-->
**What type of PR is this?**
> Uncomment one line below and remove others.
>
> Bug fix
> Feature
> Documentation
> Other
**What does this PR do? Why is it needed?**
**Which issue(s) does this PR fix?**
Fixes #
**Other notes for review**
**Acknowledgements**
- [ ] I have read [CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/develop/CONTRIBUTING.md).
- [ ] I have made an appropriate entry to [CHANGELOG.md](https://github.com/prysmaticlabs/prysm/blob/develop/CHANGELOG.md).
- [ ] I have added a description to this PR with sufficient context for reviewers to understand this PR.


@@ -1,5 +0,0 @@
FROM golang:1.22-alpine
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]


@@ -1,5 +0,0 @@
name: 'Go mod tidy checker'
description: 'Checks that `go mod tidy` has been applied.'
runs:
using: 'docker'
image: 'Dockerfile'


@@ -1,34 +0,0 @@
#!/bin/sh -l
set -e
export PATH="$PATH:/usr/local/go/bin"
cd "$GITHUB_WORKSPACE"
cp go.mod go.mod.orig
cp go.sum go.sum.orig
go mod tidy -compat=1.17
echo "Checking go.mod and go.sum:"
checks=0
if [ "$(diff -s go.mod.orig go.mod | grep -c 'Files go.mod.orig and go.mod are identical')" = 1 ]; then
echo "- go.mod is up to date."
checks=$((checks + 1))
else
echo "- go.mod is NOT up to date."
fi
if [ "$(diff -s go.sum.orig go.sum | grep -c 'Files go.sum.orig and go.sum are identical')" = 1 ]; then
echo "- go.sum is up to date."
checks=$((checks + 1))
else
echo "- go.sum is NOT up to date."
fi
if [ $checks -eq 2 ]; then
exit 0
fi
# Notify of any issues.
echo "Run 'go mod tidy' to update."
exit 1
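For reference, a minimal local approximation of the same check, assuming a clean working tree (this snippet is illustrative and not part of the action above):

```bash
# Illustrative local check: fails when `go mod tidy` would modify go.mod or go.sum.
go mod tidy -compat=1.17
git diff --exit-code go.mod go.sum || { echo "Run 'go mod tidy' and commit the result."; exit 1; }
```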


@@ -1,13 +0,0 @@
# Configuration for probot-no-response - https://github.com/probot/no-response
# Number of days of inactivity before an Issue is closed for lack of response
daysUntilClose: 14
# Label requiring a response
responseRequiredLabel: Need-Info
# Comment to post when closing an Issue for lack of response. Set to `false` to disable
closeComment: >
This issue has been automatically closed because there has been no response
to our request for more information from the original author. With only the
information that is currently in the issue, we don't have enough information
to take action. Please reach out if you have or find the answers we need so
that we can investigate further.


@@ -1,33 +0,0 @@
name: CI
on:
pull_request:
branches:
- develop
jobs:
changed_files:
runs-on: ubuntu-latest
name: Check CHANGELOG.md
steps:
- uses: actions/checkout@v4
- name: changelog modified
id: changelog-modified
uses: tj-actions/changed-files@v45
with:
files: CHANGELOG.md
- name: List all changed files
env:
ALL_CHANGED_FILES: ${{ steps.changelog-modified.outputs.all_changed_files }}
run: |
if [[ ${ALL_CHANGED_FILES[*]} =~ (^|[[:space:]])"CHANGELOG.md"($|[[:space:]]) ]];
then
echo "CHANGELOG.md was modified.";
exit 0;
else
echo "CHANGELOG.md was not modified.";
echo "Please see CHANGELOG.md and follow the instructions to add your changes to that file."
echo "In some rare scenarios, a changelog entry is not required and this CI check can be ignored."
exit 1;
fi


@@ -1,45 +0,0 @@
name: "fuzz"
on:
workflow_dispatch:
schedule:
- cron: "0 12 * * *"
permissions:
contents: write
pull-requests: write
jobs:
list:
runs-on: ubuntu-latest
timeout-minutes: 180
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version: '1.22.3'
- id: list
uses: shogo82148/actions-go-fuzz/list@v0
with:
tags: fuzz,develop
outputs:
fuzz-tests: ${{steps.list.outputs.fuzz-tests}}
fuzz:
runs-on: ubuntu-latest
timeout-minutes: 360
needs: list
strategy:
fail-fast: false
matrix:
include: ${{fromJson(needs.list.outputs.fuzz-tests)}}
steps:
- uses: actions/checkout@v3
- uses: actions/setup-go@v4
with:
go-version: '1.22.3'
- uses: shogo82148/actions-go-fuzz/run@v0
with:
packages: ${{ matrix.package }}
fuzz-regexp: ${{ matrix.func }}
fuzz-time: "20m"
tags: fuzz,develop


@@ -1,91 +0,0 @@
name: Go
on:
push:
branches: [ master ]
pull_request:
branches: [ '*' ]
merge_group:
types: [checks_requested]
jobs:
formatting:
name: Formatting
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Go mod tidy checker
id: gomodtidy
uses: ./.github/actions/gomodtidy
gosec:
name: Gosec scan
runs-on: ubuntu-latest
env:
GO111MODULE: on
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up Go 1.22
uses: actions/setup-go@v4
with:
go-version: '1.22.6'
- name: Run Gosec Security Scanner
run: | # https://github.com/securego/gosec/issues/469
export PATH=$PATH:$(go env GOPATH)/bin
go install github.com/securego/gosec/v2/cmd/gosec@v2.19.0
gosec -exclude-generated -exclude=G307 -exclude-dir=crypto/bls/herumi ./...
lint:
name: Lint
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up Go 1.22
uses: actions/setup-go@v4
with:
go-version: '1.22.6'
id: go
- name: Golangci-lint
uses: golangci/golangci-lint-action@v5
with:
version: v1.55.2
args: --config=.golangci.yml --out-${NO_FUTURE}format colored-line-number
build:
name: Build
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.x
uses: actions/setup-go@v4
with:
go-version: '1.22.6'
id: go
- name: Check out code into the Go module directory
uses: actions/checkout@v4
- name: Get dependencies
run: |
go get -v -t -d ./...
- name: Build
# Use blst tag to allow go and bazel builds for blst.
run: go build -v ./...
env:
CGO_CFLAGS: "-O2 -D__BLST_PORTABLE__"
# Fuzz builds leverage go tag based stubs at compile time.
# Building and testing with these tags should be checked and enforced at pre-submit.
- name: Test for fuzzing
run: go test -tags=fuzz,develop ./... -test.run=^Fuzz
env:
CGO_CFLAGS: "-O2 -D__BLST_PORTABLE__"
# Tests run via Bazel for now...
# - name: Test
# run: go test -v ./...


@@ -1,22 +0,0 @@
name: Horusec Security Scan
on:
schedule:
# Runs weekly at 00:00 UTC on Sunday
- cron: '0 0 * * SUN'
jobs:
Horusec_Scan:
name: horusec-Scan
runs-on: ubuntu-latest
if: github.ref == 'refs/heads/develop'
steps:
- name: Check out code
uses: actions/checkout@v2
with: # Required when commit authors is enabled
fetch-depth: 0
- name: Running Security Scan
run: |
curl -fsSL https://raw.githubusercontent.com/ZupIT/horusec/main/deployments/scripts/install.sh | bash -s latest
horusec start -t="10000" -p="./" -e="true" -i="**/crypto/bls/herumi/**, **/**/*_test.go, **/third_party/afl/**, **/crypto/keystore/key.go"

.gitignore

@@ -4,9 +4,6 @@ bazel-*
.DS_Store
.swp
# Ignore VI/Vim swapfiles
.*.sw?
# IntelliJ
.idea
.ijwb
@@ -17,7 +14,6 @@ bazel-*
# Coverage outputs
coverage.txt
profile.out
profile.grind
# Nodejs
node_modules
@@ -25,22 +21,4 @@ yarn-error.log
.vscode/
# Ignore password file
password.txt
# Dist files
dist
# deepsource cli
bin
# p2p metaData
metaData
# execution API authentication
jwt.hex
# manual testing
tmp
# spectest coverage reports
report.txt
password.txt


@@ -1,93 +1,69 @@
run:
skip-files:
- validator/web/site_data.go
- .*_test.go
skip-dirs:
- proto
- tools/analyzers
timeout: 10m
go: '1.22.6'
linters-settings:
govet:
check-shadowing: true
settings:
printf:
funcs:
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Infof
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Warnf
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Errorf
- (github.com/golangci/golangci-lint/pkg/logutils.Log).Fatalf
golint:
min-confidence: 0
gocyclo:
min-complexity: 10
maligned:
suggest-new: true
dupl:
threshold: 100
goconst:
min-len: 2
min-occurrences: 2
depguard:
list-type: blacklist
packages:
# logging is allowed only by logutils.Log, logrus
# is allowed to use only in logutils package
- github.com/sirupsen/logrus
misspell:
locale: US
lll:
line-length: 140
goimports:
local-prefixes: github.com/golangci/golangci-lint
gocritic:
enabled-tags:
- performance
- style
- experimental
disabled-checks:
- wrapperFunc
linters:
enable-all: true
disable:
# Deprecated linters:
enable:
- deadcode
- exhaustivestruct
- golint
- govet
- ifshort
- interfacer
- maligned
- nosnakecase
- scopelint
- structcheck
- varcheck
# Disabled for now:
- asasalint
- bodyclose
- containedctx
- contextcheck
- cyclop
- depguard
- dogsled
- dupl
- durationcheck
- exhaustive
- exhaustruct
- forbidigo
- forcetypeassert
- funlen
- gci
- gochecknoglobals
- gochecknoinits
- goconst
- gocritic
- gocyclo
- godot
- godox
- goerr113
- gofumpt
- gomnd
- gomoddirectives
- goimports
- golint
- gosec
- inamedparam
- interfacebloat
- ireturn
- lll
- maintidx
- makezero
- musttag
- nakedret
- nestif
- nilnil
- nlreturn
- noctx
- nolintlint
- nonamedreturns
- nosprintfhostport
- perfsprint
- prealloc
- predeclared
- promlinter
- protogetter
- revive
- staticcheck
- stylecheck
- tagalign
- tagliatelle
- thelper
- misspell
- structcheck
- typecheck
- unparam
- varnamelen
- wrapcheck
- wsl
- varcheck
- gofmt
- unused
disable-all: true
linters-settings:
gocognit:
# TODO: We should target < 50
min-complexity: 65
run:
skip-dirs:
- proto/
- ^contracts/
deadline: 10m
output:
print-issued-lines: true
sort-results: true
# golangci.com configuration
# https://github.com/golangci/golangci/wiki/Configuration
service:
golangci-lint-version: 1.15.0 # use the fixed version to not introduce new linters unexpectedly
prepare:
- echo "here I can run custom commands, but no preparation needed for this repo"


@@ -1,89 +0,0 @@
policy:
approval:
- or:
- only test files are changed
- only proto files are changed
- touches consensus critical code
- touches sync/blockchain/p2p code
- large line count
- is critical priority
approval_rules:
- name: only test files are changed
if:
only_changed_files:
paths:
- "*_test.go"
- "*.bazel"
options:
ignore_commits_by:
users: ["bulldozer[bot]"]
requires:
count: 1
teams:
- "prysmaticlabs/core-team"
- name: only proto files are changed
if:
only_changed_files:
paths:
- "*pb.go"
- "*.bazel"
options:
ignore_commits_by:
users: ["bulldozer[bot]"]
requires:
count: 1
teams:
- "prysmaticlabs/core-team"
- name: touches consensus critical code
if:
only_changed_files:
paths:
- "beacon-chain/core/*"
- "beacon-chain/state/*"
options:
ignore_commits_by:
users: ["bulldozer[bot]"]
requires:
count: 2
teams:
- "prysmaticlabs/core-team"
- name: touches sync/blockchain/p2p code
if:
only_changed_files:
paths:
- "beacon-chain/blockchain/*"
- "beacon-chain/sync/*"
- "beacon-chain/p2p/*"
options:
ignore_commits_by:
users: ["bulldozer[bot]"]
requires:
count: 2
teams:
- "prysmaticlabs/core-team"
- name: large line count
if:
modified_lines:
total: "> 1000"
changed_files:
ignore:
- "*pb.go"
- "*.bazel"
options:
ignore_commits_by:
users: ["bulldozer[bot]"]
requires:
count: 2
teams:
- "prysmaticlabs/core-team"
- name: is critical priority
if:
has_labels:
- "Priority: Critical"
options:
ignore_commits_by:
users: ["bulldozer[bot]"]
requires:
count: 3
teams:
- "prysmaticlabs/core-team"


@@ -12,7 +12,7 @@ matrix:
- go get ${gobuild_args} -t ./...
- go get ${gobuild_args} github.com/golangci/golangci-lint/cmd/golangci-lint
script:
- golangci-lint run --skip-dirs ./proto
- golangci-lint run
email: false
after_success:
- wget https://raw.githubusercontent.com/k3rn31p4nic/travis-ci-discord-webhook/master/send.sh


@@ -1,218 +0,0 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBFpZUO0BEAC/tqN6QctJKSOF+A7jrBzvNvs6rVEi9Hn55u/scKNIPkSRJRTy
8kVsDMha+EKXka8oQiVM+sHoHMIMCa3tdicm1/k5f67X8dqadQmFP4DhYRYIKCKT
TkntNG568ynf/yUs/YY9Ce8H17JvgytkLK50mxMxycUlYREMaRPMR8Wt1Arrd9QT
U0I2cZemeqUORKuiVuZjj/7BVDHvSXHvzpi5zKI86sJQTnzcuGqyNsrUU4n4sYrb
I+0TfEHzmSdxoAXSaMYjomPLmbaSdBiK/CeNKF8xuFCKJRd8wOUe5FcIN3DCdk0e
yoFye5RMC+W09Ro+tK/jTQ/sTUuNLJm0VUlddQQeoGlhQdLLwiU4PJqcyeL4KaN1
l8cVml7xr1CdemhGV4wmEqAgxnNFN5mKnS2KcDaHxRz7MoGNdgVVQuxxaE0+OsdJ
zCKtA12Q/OcR24qYVFg5+O+bUNhy23mqIxx0HiQ0DsK+IDvcLLWqta0aP9wd9wxG
eObh9WkCxELTLg8xlbe0d7R3juaRjBLdD5d3UyjqGh+7zUflMsFhpUPraNXdKzvm
AqT25cveadM7q/CNzFXeCwmKjZab8Jic8VB80KcikmX6y9eGOHwjFixBogxowlBq
3KpeNTqJMNHYBzEb0V18P8huKVO0SMfg11Z1/nU4NA4lcjiVImb5xnJS0wARAQAB
tCxQcmVzdG9uIFZhbiBMb29uIDxwcmVzdG9uQHByeXNtYXRpY2xhYnMuY29tPokC
NwQTAQgAIQUCWuZ7uwIbAwULCQgHAgYVCAkKCwIEFgIDAQIeAQIXgAAKCRBy4z5N
8aUDbtchD/4hQDTj7qsccQKpaL8furw+OWuoZtX38smO9LrcSoLaZlk5NRxFNlhQ
FaL6iyDLoDkWUXEPg94x1dMFBUfMdIuh6iEF2pcvCN6WG3Pd2j2kG6OxaIx+rI7a
eaD2Wds312etYATu+7AI6pxlQVqPeZi6kZVvXo8r9qzEnl1nifo7d3L4ut6FerDz
/ptvIuhW++BEgksILkwMA44U7owTllkZ5wSbXRJer0bI/jLPLTaRETVMgWmF5L2n
PWKhBnhBXf/P00zTHVoba0peJ/RGdzlb1JZH+uCQk0wGBl0rBMUmiihIB8DZBZDX
5prwMdoHG9qAT9SuJ8ZOjPKaBHVVVw4JOU6rX2+p9E49CVATsdoPfWVbNyVRGi5f
Oo0bJPZU3uO10Q09CIeyqT6JPDCZYS7po2bpfFjQTDkoIv0uPWOsV5l3cvFxVlcD
Pvir4669xhujmoVBOrp18mn/W/rMft2TJ84inp0seKSKdwBUeEyIZwwu1YTorFe4
bgJghDu/Y+K4y0wq7rpPimoiXSoJOaaIPrOzsKAVu20zJB4eoulXlPwHexix8wwf
xeVH4bGl3wtTFWKja0+EQVdxpt+uUlABvrheQbNlNGz5ROOAtjvwASI3YW/jVVaG
wSbOUuMEHfC1rSnj5Y9fN6FbilxJAZ2UbvZE6iXqqemfRdFuB5yedbQmUHJlc3Rv
biBWYW4gTG9vbiA8cHJlc3RvbjkwQGdtYWlsLmNvbT6JAjcEEwEIACEFAlqrvbMC
GwMFCwkIBwIGFQgJCgsCBBYCAwECHgECF4AACgkQcuM+TfGlA269zg/9FynobfLE
Vh+Aid7l5C6pprHflHpdNVhdHzjvosLKfPMCcBJLyCLAIueKC0vvKbt5kyNV6Adl
6fibdO/vRrNsEdmKXTdfDnN03yVX+VO54rqyuweR09CbLHakECh5awWpk0x9E/Gc
3MlU8YDCVNPcWAw8JPZZ0y6s50/MFFXya+/opw45neH6WXrbqbIapzZ4V1bhkloF
TTz7LYp1dk6qjus/f1jHgujclJ6FazG+OqCpzJ22lnuwNYs8sv1DclW+lEUfIin5
PvzcSRCCd6NilFRSGzfBhM7wxZrAp0JzWXpM1jmd2WHtgErusTaTXRTATjPf9Chg
SE9UT3EJvJ5fPxuxm+qOAowpJwe8Irv+YuMzL8C6C2BhZIV8XID3/cErNeQbPocj
QFksmBEwpe9EG3Yhd5SmSXRhCtBFnyFKyOqPpAE2s+U0b7qsnWc50UFbIytPEm4C
YmdNL6IVilp/+LYFcRWWK/ppOtvtgbj8r7+Foidi/tCauJGt2BzhWH7DkLEdXGxk
PfR/rTgyuAQZowl03DaNwAPKegY2SIuROO9kpQACwTWZNg4l2yrZv09qrKBhohyn
b2i4pwHPZ5/bWLikdzENcJuvbH5qhf6tV0FIaNGbTkWX00uI++bvWAxxE25jZv56
e7VTaWuMt8Ly4DBcwl5JyWuvZ74+yv7QGp20WlByZXN0b24gVmFuIExvb24gKDB4
ZjcxRTlDNzY2Q2RmMTY5ZURGYkUyNzQ5NDkwOTQzQzFEQzZiOEE1NSkgPHByZXN0
b25AbWFjaGluZXBvd2VyZWQuY29tPokCNwQTAQgAIQUCWllQ7QIbAwULCQgHAgYV
CAkKCwIEFgIDAQIeAQIXgAAKCRBy4z5N8aUDbmvdD/4jkT1IXJyj65sgHus0HHYC
MBp6mzvtpDcPljg/5Nl1WXydv4zYGnEnIwWIqDYwwvokOKllxAjgevcplHqZa0cb
7XKCFk5h+56FLPHv9e0WK1fOFeo2ad3imNRstSHgaumGAWHJMg9vWbDisGv1zjQ0
usFkrMKoX34YzMt8KBD7IuQyeBkYNupT8EfByeA9Ta+wkvS+X6BojsFB1UWDAFCY
z8RgTcWIFjWtGZwIkWUKCzebqXbElpJF8ckZU9q4qVhyZ2DT+8BS/Lm0Sf4z6VTC
4EN10ZuN+F+J3DMR2Zuudp4X5OUGkDG4KuU/kvj6EJRWpbTS1D9ReK7hoApIbV72
Um6Mf7kC9EKvxgL1gOfl4aj3Io9M0R8FT/0WSMduQzf/jI3XqdNH0nYo2kTDdZm8
S972cyvt6frKYy6RsOlPz1+S61iCmupsRY+HyKDTa0vnpXg2c4dE1neF5eMI6SVw
viJvCG2cflctJ2PLiINZJYAg87gV5Rm1i/Lt2YG5zdxAXXJt2R0uuU9fv4DcHZLU
yx69Yuh+5UiEaXYU7xoRCdZJYyHbvaC2hPcDGnEa3K1QbgnI5hGAtG3cwdbYC5e3
bcaJb/LdwzkdRnHLgpxa/UTAIfejEI1U2kvVvNoe/HvEXq/OPrhFDvE4rW8AzbX+
ISTWWRY0lLSr8/SD0TDJMbkBDQRam2geAQgA0kESOibOO3CcXrHtqP9lPGmj6rVe
G18fRmPDJiWtx863QqJeByuuUKwrGkPW/sqtIa5Ano+iXVHpk7m955nRjBmz4gd8
xqSGZd9XpNObYPA5iirAO8ztpgQvuvsHH9y9Ge50NnR7hQSMUbGVeCUU/vljwT60
/U+UPnsTAmkhvkEI72x50l5Ih9ihcBcr5jJpi08XktE3sFOoannac0kZQJ6WXFrY
4ILbv8fVqcRf44xMKOlFB9qHhleGW0H9ZUjTKs9quRt7r5D1MOiwrZDjNN3LqMco
oWj37hb+3KkmIIsAYB2DjnWYkMPu2B0O4uSOHYAWfEJtRvA8qW7sWj+q1wARAQAB
iQIlBBgBCAAPBQJam2geAhsgBQkB4TOAAAoJEHLjPk3xpQNulz0P/2m9veAAGGCO
yPsqiNVVa8lrhmLGh/W+BaoszQ/r+pfin4xTOV5K5h3MC5CVJM0JxP/cxNol+Nmr
cYGbZgq4QhLlH6PdQ7cReL5ED6Wi+eHb4ocvXJqUuZ2Gl8Z7gzQzp+WFbLrn8vXj
LcyGGEETV84jy+X5cKu24eAjTFK+unfqwfxXFZP5vhIEVe7+uG6A/pMB5eLDqEUh
FQUcWqCF2Imt08Kcn7HL31GoIY0ABHdD+ICXZE5lTm6iJGzpFBKdTMm/e5igCJ3v
/tgtWZbnvyseLR/T/gU1JyheS9kNXP5sICwicBbY/vnGdEP6OKpDOSfQam5x4BBj
cvsBnsNuonY7OJn4iLY6LviQ0MM91PbaJUUJrp9Uyi4hj9iO/MZBaG0Giu0FKjG6
Vk+RlCYPjYIvHflQLZHe9BWLPN1AvaLKszt3IYaxS5seXCu0ZqHDGCBVqVCNDDJk
MJbHlrOVZ9T6mZhA+aQ1EJvTWk3MNj1AOyCJdiXtOOdg+4Fm5dN7nLpumLIg2yP2
afI7YfrPGA7sm+T0IMnOWxkK+KODC7OV9h/QjyBJDcUYGXLapuK9eP80cr8hKvH7
S3G4Top/rXDpnBNQ2azCqmUwaJWNRhLucC80Vd00d4pWIOwAPWpiV70Fq8OhQFhT
PNXwFXVLwtNfPvPxN1s+Vk+BBXp+M19AuQENBFqbaC8BCADSq89Z9xWRS2fvWI/N
+yEWliIU8XiqC9Ch+/6mS2LEyzB1pPLIIQcRvM6rq2dxXIRveVGpb63ck9vUtuJG
//es+DnDkO7re+ZmWHX+LAqMYNdaobSYxHkkR4CcY2HbPSEUbb//Zwk4BDyp3g3W
bKK9noVbslZuSwWNrxjX/Hieh/dIGYkNFeWOlmNfUYsevzqNDjsziOagyXKxLc++
hUM3GKgzXRQJBvBpgzQc4bRY+YGHXtZurk9AiZ4ZBhWv2Qrb5OYYislE9sdv3KWV
Iv1EBpbkAJ9MM1HgS8bkIOIpNs4XxHY6fTWWdrXi+NgZXQwQRYaWTQsXL3mktarS
fFfTABEBAAGJAiUEGAEIAA8FAlqbaC8CGwwFCQHhM4AACgkQcuM+TfGlA24vqg/8
CsVBHO4mh8SSaxdWNEU+mG4pU230BRCwrfn42wuSKb7WNLTTcWMDNI0KY/JNcWSq
G2qa6lngiAyOS72Jd635ptZ6Wvjd0WtBt90NN2jtn+aRqwQ8uItlYMQscofFzskj
QnBF+NWER+44nxboghuQ041m6aI2XmYUErSOBZi6onbC3svH6coMldkkiueWFbdB
qac3UXue4LUcaGR5T9pCQsLgTl3D8V5icEM+HpTVVGQZkVrOuKMKe6T9N5FS/WFu
T6CuFf/OyU15a6RE4WW9QqKYsaHl8B6+0P7uqPoVIxs8hfJcwaUu9XwIiZYBZg7N
yYCQ7JBapC5KZlIcTCfFSxby8lCJoZbIM3Pk/pgCuKxGaS9rHHUGuIvz8TfnM9FO
2zlxl4E6k3NFgGKF3p6oiKayC74o6EOw50p6DMjrisG7kkWVsTRCMINF7CcfeVme
MI9ljGAMB1hPtHPhl27hMRfq+/iIbR9gb7+Xw2yKQL2QRjMDIGGxP3q4NjD4tV8P
VyTbAAwNARG8oMpNM628v3tsW+WYNot1SPUQuZbIx1pCwMeTqljWsdQxWRVW5UxM
dnXTCqZhQwH0ICG/jbepbP6ciVB/CSa7TVohEK6jiTMorhqynRMMJ6p48Z6HIlyY
0Ss8q9K29eXaqmLB8Oy3HmupqxH95TqMntqivzf6i6e5AQ0EWptoPQEIAL1OdDIl
7E3VKAEWH5GnMzdnl9bju/ovoMjLqGevHS9Gyz4OPyZvCQ2pS8k0sgqwsn4F8vWM
7L3xKTyQTSYOPby3do58+kxUrdTNlqGKEEcZDG+MSzxKyft7m+37fzbg6tcU+O3L
/7m8nYWQSRKJeist7Q8HrQJzehuzcgAnNmpeBqXHnAwRBvtqORvz5cQAIQ4EsEvP
f/unTjw95JtL1LtBOaOaynF9ap/TZ34OvDdARmZSdqPpRtEvjfgIboIYYt1dNmDH
kiSaaKaqBLCJTD2z5KT8ccDeF8lzYHAYzNy8v2lTc9vagCZH+lf3d2d6umNcr4y1
NGEN4ZEhrmt/lP0AEQEAAYkDRAQYAQgADwUCWptoPQIbAgUJAeEzgAEpCRBy4z5N
8aUDbsBdIAQZAQgABgUCWptoPQAKCRD619WmVzqkJDTvB/49MQNQk8YJTwAnbdSH
7stU2uQFBbkljE8Juz5WJK53lL6xUVUp/3gKrQio1F+z9uRVqRh0cQnqX6mPMe5a
2dlHEIDHTJjSlR5GCCBRDbssV6SN72xSVgbxVGZ9L32qUYtiwGnxwXQC9D9KsonD
YfGfUhD1CLAldr6HwhJkOq4QKz5GF4ty8sMKEcpM5UaR2sa2y6Ebn9Vzma10YaVp
X7RlZM/iTiDhTmWuxLh7e21DI95k/eFqHpKA912N0n1BdzZPbwk83nVRxXg793NU
RFpegzlLawtaCW9oVJOYqErMGOYN5nbXAHXsr5X7Or70v1Yo1khgzNOfnirO57T9
VjT0J1QP/j1xobgMuNda7Fpwc0xo2oIKEf+SWY9GQrkUK7wCLDbpgbGVxTmREkyE
6oyDzW1QpQXRjLS9Wvtun0J3Wn6r6Aar0VaGKa7uiiq8UORWtFkWQAzzyBj87Rjx
eWZWV1dzLK7eMJdyN0gsOzcejrsOqf1sydzvhm4K66byjDZ78piv0DdyPIb4OsiQ
QU2GH+QwGRYIkYlU9f9g+hSasAfzvrATHlJZNrOjOCgXbut/yP9ug3DHKj76wmoU
n/Y4rpmskAzIQrZIpimkpNBmZVTGr4bkWcokVzrRFor3NCpl1qA1K9Cd43wARH8t
Zwk4evI4abbqUId0vVNCKSSDyCCjgNwRsmU8RXdn7r0vs4ObKuWfY9Yl2y8tq3qO
YPHr0r50YXWtKqUNy5JUc3Ow9DFR1p4O4yfmiSyTLUyOYbglfvtnO32OJkrrZ8Kq
iSDnyiq9u2nYJAEHk7AchF5TJrTCnd8yWNIjohild+wc+rMKktspoEcxmT6zaK4T
AymmEe3jzQhlxptpBm68t2E5kaYFQh+NnizXlRcmFjBgEhH2CxCoerUpK82Y+MuE
S/ODmF9kULre14aXAAspj4vldQ/LJm0SYEfTkE1sDipJQUzv6oS5AFloQDXZPWTB
15P+Xsj8sJV9hWCfJZVmppWuPg/pCYXwdjUHUYouTz3ZL6qnUbm2uQINBFpZUO0B
EADZkfsbXPpDcMALUgVmB3cJU90tFqc8Q5guK9oSs3ibx4SmyhBVmeF2TF6PQNoK
YvpUR50hAcx2AbwtY51u0XxrAr8kFiL6R5ce/0pVMfXGchcC58CJ3uf+O+OPt080
ftyVSTsY9xEnBohMoJescn0L/IPaM6KkkIFIMAI4Ct3QCHox7WHgPLrqB7Efx2Dj
Qkgjbioqj8zVKDwxTs0S3sknch675gOhsCkaI7KMcRGACDinKRF3wQha4brNa8En
vPTV01Cv0ttCo6NCcbQzQi8QQYpMQcWVkquEJWweZQvL/OvYWdT13JgnIp66pkmo
+JvGTOqQpSnIx6OQnV9yqwqsg4E0dkCE+9rDYAHpwvmRkaI2sItjN4KAEQdTWES8
RgGVgfCtvNH87PqpaPIgarMDY/j5KqTDp/7Nc5oGLCmhwZiYzQa7Bm5uVNnYIyOO
d1IjfclgVdVAzMOmrFytvXFCBcga1khL15taC7s6Vuld89TgMZdSot2RVz8W8Xc+
39ZrBvzrCeYsPq5/U0Z85cnOSw4skwh06wsxTvL1D70SilI2c0YdR1sVgbhq04HN
7FyE7SDQ1GqxyTJAU+OPH3Pk97Bl25vWD43RCCIjSUHhKzQRPno2ItObFepE+QJH
lSO1YMXmZDAfsRts3dca3VSDOdAQely6G7HQu5kXWXGtRQARAQABiQIfBBgBCAAJ
BQJaWVDtAhsMAAoJEHLjPk3xpQNuRlsQAKnkyjjXRvfMB3eKZ1PrWf7DBx5WL8Hk
r2QAnv0diDeRTzotQyzKivf3/W3gQc9kQi/ilaPuuXeW+yKiNQaIbai7EC0DdyHa
qG9s8F7UDoojFOGCc3OVpQvuucVbv4a6EpqXBI6ayrQW0jMXakiuIF/dL5uCGRLn
sPMB87xJfMrkqEziymJMooQTRckgj2S9J2QYDZFxmKChPl1kXMlmx6Y4FhzrsYwo
I47hW9hHG1+8129FOgwYTwELEuX6FKShcKpyy77b4Tar3UvdzNkfaysFPvX3O7Oh
Bes2VgfslEZSgK2CScigvLIz9Ehe9CUw6WEC6LZW3bbC+Cbz624gOsm/GYI9Llsc
5/NMMwQTCoTRm+b0UAYQvzHDS7km9yceNSfFlkpE/lWnIt9E0H+8bOwEbYF8y2qy
yLXIm7MYEm4UnUZ0j8kihP+wSHsZP2xPhjEUcQw1ZWhWLACvzlEC0a3zsovoJ6U8
zrqqfEBtuQgfwwJRWArMLQn/rlEJSrTrFfehwhvH3dPVSU36N5nBA8ls5YlSHBQv
38dChpBXu3wzFhetkSuHxdlfopeZMtDmljnIkBNTEFg01oqJNPbyiX2JQcfKizCt
maCIiJY+hOYIKkJdkt1FyOhADBciebxCQrpIIzWupeyQNuVj3I4g6YaQX00+kgiB
H1XKrAg/dpMNmQENBFrHiEoBCADASg9zHYzaSU0/1q1hcmTHU6H4syCQXHHd3zF7
n/RcsGnt4RBuUi/BUvNp3zYR6uFOyyk6LPV1hq2Ca8e/oSFrDYxqLladESQd9GNN
stDeK3HinsWJCVwSbkzNJbUtyr6SclmWt66vNqBZngMallJRQe8QDqpg0ZSZj/0d
VGxPBR16zc/2ddGnXJFe/V5XAWAap9SEo44pyGK4xf87Bgq8jT33LuQtd8exOk3E
atkK5jLEn9xmiheoSePEhOoQSrJMHfMjFka0PYZlCeaaHw7r7yXb/VoHFOAPxb6k
a1cunbp39b4z7Jy9kLBy0tbDnAs/sLp4LUN3Vx1JLoXBSIsHABEBAAG0JFJhdWwg
Sm9yZGFuIDxyYXVsQHByeXNtYXRpY2xhYnMuY29tPokBNwQTAQoAIQUCWseISgIb
AwULCQgHAwUVCgkICwUWAgMBAAIeAQIXgAAKCRCVRSpwGBD+26lmB/wJxChWwva0
StjMRSk3V7JTBPi5RNQm3VY8UydUNjsXzHVvtjUczOj+zHfo8tYUYa/ypWujXEX9
bVYVMr9JGNjm+rysI1gmW1gcx9pZr9gde4CR4Owfpwjh8oFnSrBBrcaelb0Krrv4
Okms43fEw1bWpllayal5SqknOIxpw1ZiNpG3jVe/C9n1/3Nw4XF5R0RHDxXTu6Hu
H3t7zRQwAJcOod6ccRdzNR9CsGdnOoNPG3pqs9w6zWUp0QETMAMcEpSLCJumaVfQ
24PK+MF92F5JRwNVt80xSLA9KMyNnv/ZvxZXXLjkIhsAmwYkHUs3WSm5HJ4qqOCX
V8VwFbwnea3eiQIzBBABCgAdFiEECuAFHWR7o8GpF69AcuM+TfGlA24FAl9zs2AA
CgkQcuM+TfGlA24dMA/+N31SPAXVgwiNasBknb+YSq2/OQPogxQUc8tupvDcQ62H
u0kvA7pUPUFjITJ1xsm8CRXfb7Hge9rNUw9Y2MNKf4sIDQs3bFeKOiAGVbO8Z055
5VCTWWPhXzjWoR84YWrx0Go8WT/V3Lahy4frGA3Vsza6wzi2P6c7LF4jqX2iBD1l
OpAYNGyt0sX/RLp3s2jOTWJwVyRR4UKSZOgpi9OTGLXrq5oU2dpwEIzBmhaWIs+u
oD5/4TaAt4lEFu3Sxk5w8fJlUXbM4IG8A1l+dnRPF5rtjtfvuX0GeQcDtJ5e1qHj
7PvVj8HjGPwxqsAjYHpg6QjyQCdtHYHdboEIU+OXMYRPRh2Iv0GUoeuMqoSsvFgn
c9PGN8Ai5u+oNzKeLJhpxpLV7s9zAbsripdnvDBn7RwuNx2ziqZayxoYvFRCAQl5
wzr0/eo/wq36D9EI7uJ2I3yt1g/VkwWQsAeE2skuGGbwed263cWTkl6g1w+vlDY3
/jc3a8HcS0zIUX194ChrbbNezGb74mQFoLk6PksLfhXseAKCs8bvPaTwIm0JGTMT
YzkRufrv1+I9KPFy3fTpvnMZTbud4nPCLsK179whk+Jrdv866i+E2WZ1JyzIZ9bB
P1x+ABi82PMdzhTw+pTkR1CVHmrmOiSi2MHRYGMedM9ECGnPIdab5d75r1qkuvO5
AQ0EWseISgEIAKHrgTVeZ0CWZMETk7NFjDTvBpoTMQDT7DDe2vFQHc8D2AvR3Aod
+CUaOeY0Yp0GOemd7BEcUbpYbCQk1B9G8vHNcQqTMLjBgujXwF6w2c+lqEu2/tyI
2/uGCOBDs4wuQZo+akCCMekqYjdZBfZV6nQzf/l7eQHIq+sNtnSiZmHdH46PLi7/
17wR9GxzKvuv1R73DCyekCcz1Puo0b7lfGD28kESJK1Mg9SAOqVjtaS58Oxo+Y1M
ZWRqh0tkAkgOBpdyddGy6TX/9c3a0U3eBQweRpNDh0eCEh5UsYDluL4NtXj7rVYd
4mHONJzI3h1LnKWvpVYmk703MPmtgeJwNzEAEQEAAYkBHwQYAQoACQUCWseISgIb
DAAKCRCVRSpwGBD+25PxCACNBj/2HBSpdYAxEGhNHSaw62y7Jwuf7NbsKIAqygzi
m2+dxae7PbAm64jEXYJA5GaCOs4xO52S/2+1N87e5J86VRNg0vlA9/ivFzERAxEg
rgIUyGXYfS8oA/4r5+PfKt/NvXO2wH3MPakrqZqXhOv+1IPvOt03wWoZKmYyVT6g
YnzutOsvH6cbhUh/D0WpuU8ZgkrFu5Xe7ynZoLm1qQOA23pkxxQbeNs0uoKUja9v
bx4eBtaliLc6rsXt+1WzdXA5ZRNqkdn87Tz5IU0FuQYVblYvPz9QoUlvx5siWVaP
Mmc7dIctWaWyaJmgjkLKdy36ydS5axhqn158yIjOZV3emQINBF8h6lMBEACovC4z
oieJ9iMZpWfylsLQKkeEmfSXnjcjW4RLUjs2CRWj+W6H7eCRy/MBOdx+6zAb8TE/
n/TSlP5l5Xx40BGDAMUZVFrJVEkMPK5kUmNzybg7PiuMd3qZE+pNyHEXXlLU77Dn
VO26TD9RvpKXdjm+ATsnQ6rvDNszkYI2Bj1tPxwZ7bRi96U/upL3WfYOsTLHirM3
pEkI3zfMZj8ufDX9XlmGrQ384E/hTgjpLXmDm/jMRlX3mRzDkV1HO+gEicfePm5+
K4eWlMCNXN5bcGwoEFY2LwAojRcbRaH/UH2S3btkG2eSK2fOFEwQ0G4vIyoLF60R
cEk1s6cYIgk3kVsmNSzA5iJ81nD5bbfTceUKsjTZiw5RmN0Vh5g75pw+3uoXB+55
egabEIt/GTZZlP6zhd/CKjTQR0R4+ZaEbZFOCtABukC5xfWGCRdNmXAWMBe0MYpW
ub2TRLISLfNNc7GWM4Y17d9aRhaql9gY6QLIRNGYuvGPiRMaJADZx46LtbWo8YiG
xLId7H8D+/0MSzMOg7RhELqYScYDigiVEXkTHA3QSprf/ohjm8woRhQY5CjCEZp5
8MvhD40VAo4Gxgx1V8lwB3CD3kDgFEaUZh2H+N/fmRegACN2lzEm1krOiS7sAB48
/Av99/1VXalLoBbzFTnuAYwnmLk8vdparT5LlwARAQABtChUZXJlbmNlIFRzYW8g
PHRlcmVuY2VAcHJ5c21hdGljbGFicy5jb20+iQJOBBMBCAA4FiEEMX1ukQWPjzwj
A7p3VjE+RFgSl6YFAl8h6lMCGwMFCwkIBwIGFQoJCAsCBBYCAwECHgECF4AACgkQ
VjE+RFgSl6YJgBAAqIAMzMFf/+qE896kuXYSlmaYtzhrMXgQVD09UgSq4JfVYudE
e8EfamWz8RFV7znxHO/5Fp4yoWeDGc2b40DhivvRb1p4XSJi+F64moeHt+qn4Hay
CDs5wGv40KJk0+2r7iILc3Gw4NP6r4c2HU2EqSW8fn/bYM3yxLsYEXt5zhgipKpR
CB4WmqojW+CxXkj8dv4CyY+HlKJ6nZWstK42h736njM44rjMDSDt3lv0ZYykopTw
09THnHkWOXY4PSrKOzE+OqsPMPOD3Gq3bYPqQ0ZgdS4FeuDesqqHjJabRX+7DgJK
lLEnylfDlvDKf6I/tqrAfqCCdQvEjpdQXG84GSdK//wT1biunHvGeyHM/PZx9V5O
X8YoecswcYjozU1F7x8buk8GxV7ZuJ2pbdXr7jTou1dA7e4Nas9Qu1RcK39MoTYc
Ub467ONFCX+Y3MzELTnEyx4hSwTxlWg7VjCQ63zrFOd8VG0WUydlqiHgIE8WCJVE
pQlrr2v/A6ieM1HY1Ftjt+mk/qncqibCdyIofyV4o5FrVo8arx74jR+Lhvkp1T9N
uRj8V6M3g9vxJdAsy1O5/ZrdwKJ6Yy1n62+9b/uxhklYONK7DpDlWU0ZLWHlKEIV
zzfILCiLf+nxMQdVwuthlcysg8JwYEE8xDXc851ciuj0ygv3Wm0BY44V6/m5Ag0E
XyHqUwEQALuKzOAlo75p2vSYD7TecE/E0hBOS+cjs8kRH+oIzm5fx7sf00SC1jU2
q5QLYLixNeT+l0bD70R7b8r2uFu1aZL7pQqbGIGisLHlxu8611+PCpE5AsQi3Wui
IZ6Y8K7grJ28vviBiZUBY3iCCRH0LuvyZN3R0zgyMGbzouQk5wuGJUkRHJKtV5by
FVEl3CYudRtAp5LPFw6j7UzT5yQqBmY6tXp5WMmmAvOtnu8ohpRhzY21dJMlSykX
Ne9rcARy8mVWNdcXJUIc85z0BmyrdiA4YY0XiZTHD9mslj+af4QHsaS+p3aorTLD
5BKkp5Ek79a1BUxBjrao4W2fljYf129/SHbwds/Dup26zB2vi/fhbfVPvevXLPpi
Vm4uz8fE4D5lPYZAdu5GtO2V9kWbhDtU1R1SJSdFDI9Sev3B+NLrstclGfdbFQKF
shbUxydjSX56OJvh5hee50PcCz+Ab+usoyUPkcOfET/L55AdqJo3cVYtnAKpOJG/
mKP5Ih3LZVm9wCdWTCzboEtDfLlcjCc5kLiE9Pq7sKMnXm/NcXNFWBkRuaHg+Mt5
Yk659/Q6oUG3yUrS2d7cSeuNRWNlaRlnk3hZtB13xpmoHGYmrMOe7wiEE5xEnOju
1ctRRRX6lNUPlSvID83Y9JSYL9hYMbidRmFY28AIddT/PjVTgfi1ABEBAAGJAjYE
GAEIACAWIQQxfW6RBY+PPCMDundWMT5EWBKXpgUCXyHqUwIbDAAKCRBWMT5EWBKX
poMOD/4/sRLnK94pY2qtSSfNZEumOdQVvges08u34saG4mT1iLD9TcRgsFvgwcTV
Pec9Git4iJmoJpFylvOzoBWmtDzJ7TB3JxvtwUfFH2uZ22k12eChyAFHI8TxASqI
TV2YwkIgB2PTmva1GrdNKM6y5bfS1BdS0u+Etgp4zN9nyQECY2sM2accUi2S7JBV
7o6lEJWCRgFUwIfY6fdfxhZo/w0g2u5wftUQYlbXWSQLImFFZBc/13k8EMxsfbMi
LQ/wxKEfe1ZwFoXqIS6cq8CHTdj70QO7K9ah6krGPmL23LVuxlkQIR7mT5yFJtv2
VRT4IYgOUl6rAd08B6MOy6MOMa14FpNrAybPK8Yd/eSdUNciVf17q33Yc15x+trs
En1hHbDS82xaLJ/Ld4mANeJVt3FodgVmzZGpKheFEOFXXeuX1ZZ4c9TgX+0wBKoh
aBb8EGBVo4aV4GbZ/gEa8wls5NNTwd56H4G94hiq+qFM39x4YjhAvkM0hLRYzR05
tSCdlEbkh2K6RbOhBPsNHCbypWRt2fN8/q4uLPJJt7NRQ2zi6H/x04HGblhXdX6H
mmYjJv1LTbem9VptcpvPauNsibeIvIWA2nYM2ncDWt6gJpOH9Zh4fHuaG4aCdNmJ
hPNeJNnmLcpDQvR9wU5w7e5tC/ZSeTZ5ul1zOKa1qZ4lJ50BDQ==
=a30p
-----END PGP PUBLIC KEY BLOCK-----


@@ -3,24 +3,24 @@ Hash: SHA512
Contact: mailto:security@prysmaticlabs.com
Encryption: openpgp4fpr:0AE0051D647BA3C1A917AF4072E33E4DF1A5036E
Encryption: openpgp4fpr:CD08DE68C60B82D3EE2A3F7D95452A701810FEDB
Encryption: openpgp4fpr:317D6E91058F8F3C2303BA7756313E44581297A6
Encryption: openpgp4fpr:79C59A585E3FD3AFFA00F5C22940A6479DA7C9EC
Encryption: openpgp4fpr:341396BAFACC28C5082327F889725027FC8EC0D4
Encryption: openpgp4fpr:8B7814F1B221A8E8AA465FC7BDBF744ADE1A0033
Preferred-Languages: en
Canonical: https://github.com/prysmaticlabs/prysm/tree/master/.well-known/security.txt
-----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEECuAFHWR7o8GpF69AcuM+TfGlA24FAmGOfiYACgkQcuM+TfGl
A24YwRAAiQk3w6yzqSEggrOlNoNn04iu/rWZdn5ihkQgzACXy8XH2D1gdKLChE/X
7e5bUtgE2aCuHryQjwoKxqZakviBJFstVmHgF64rXv2zKhpqA30Mj4fI+T3zn8I+
+FpFV0TTsxNLDx+AcR1eQ1nSayO7ImUDIfOQNDDnSZZy42Bc+F+QIGKB3aH/8bpG
kT+bDTZrXvX+TE1gZTbAtZG8sH8g/zadoWEHIhfXUuYb0kTz+DRzAxoqU4j4Z4ee
1zSfFAgfJwxJP4kWD7s4xkE1sBbCgGBeD6cW/C2lbcfIei+XSizLpHW3jD9dNqh4
fLkmEspSa/LV/iXFq8nFzu/GLww4q+sQZDzzDKZyws54CrATinRitZMhzoIL0bTn
yFZVOGHosFAMEVZ36dl1Aw2+B2W6tr2CVr9c5zfV+kup5/KZH1EmT5nYY/zFwfg2
jYCFB5wmYeiyWZvuprgJXRArgVZLZaJxwWazlPVk4i/4vPvRgvfHqOwHCBe8DXy0
VHPhpewwb/ECYek1KoaNQflgR8iH2GMHkC5RjhGDAt1S0AQDtite5m4ZYt1kvO9E
k/znkv89dduhL9CKDvZvnI+DICwsTrf//4KJ8PM/qaPAJa4GvtiUU/eS/jKBivtv
OP5dZQtX6KPc9ewqqZgn622uHSezoBidgeTkdZsJ6tw2eIu0lsY=
=V7L0
iQIzBAEBCgAdFiEECuAFHWR7o8GpF69AcuM+TfGlA24FAlzi0WgACgkQcuM+TfGl
A241pw/+Ks3Hxx8eGbjRIeuncuK811FkCiofNJS+MY2p4W2/tIrk48DtLRx8/k5L
Dh1QyypZsqUgofrK7PbGVdEin6oEb2jYbTWUarAVTbhlsUdM4YcxwpgmGVslW7+C
Hm8wMasQZhCkFfakzhfKX5hIQoFaFI/OvtVKIQsodP8dAieCDaGmtfq1Bs1LgFqi
KrpeEdC2XbBQs33ADheC5SdGT1mnatP3VX8cOhLsfoPksYgTSpwK0clkoWs1eZOQ
l1ImfW/FJCpSndBWgBR503ZgaU3Ic+5qxmAIuUP4chl0DFRMlPFEM5OWC6JkkCOd
5kKrXGRmrhgtQg+pA3zqJnFItRj7gxPBA/ypxCkKPrLEkRvbdpdZEl5vAlYkeBL6
iKSLHnMswGKldiYxy7ofam5bM3myhYYNFb25boV5pRptrnoUmWOACHioBGQHwWNt
B0XktD0j7+pCCiJyyYxmOnElsk/Y/u4Tv5pYWvfFuxTF2XOg+P/EH64AIFLWgB1U
VnITxhakxqejCBxZkuVCFNSzt+TXG0NS9EIj/UOYBY+wxrBZ62ITjdA16RS/3n3z
DuIDtxOOwUumbOO32+a5zIb+ARmnocYJviI7FuENb01/U6qb+nm9hQI6oIpSCNsv
Pb4O/ZlOx70U/7mt4Xn/dTKH9bnKOOVhOw00KJWFfAce73AVnLA=
=Uhqg
-----END PGP SIGNATURE-----


@@ -3,21 +3,12 @@ load("@com_github_atlassian_bazel_tools//gometalinter:def.bzl", "gometalinter")
load("@com_github_atlassian_bazel_tools//goimports:def.bzl", "goimports")
load("@io_kubernetes_build//defs:run_in_workspace.bzl", "workspace_binary")
load("@io_bazel_rules_go//go:def.bzl", "nogo")
load("@bazel_skylib//rules:common_settings.bzl", "string_setting")
load("@prysm//tools/nogo_config:def.bzl", "nogo_config_exclude")
prefix = "github.com/prysmaticlabs/prysm"
exports_files([
"LICENSE.md",
])
exports_files(["genesis.json"])
# gazelle:prefix github.com/prysmaticlabs/prysm/v5
# gazelle:map_kind go_library go_library @prysm//tools/go:def.bzl
# gazelle:map_kind go_test go_test @prysm//tools/go:def.bzl
# gazelle:map_kind go_repository go_repository @prysm//tools/go:def.bzl
# gazelle:build_tags bazel
# gazelle:exclude tools/analyzers/**/testdata/**
# gazelle:prefix github.com/prysmaticlabs/prysm
gazelle(
name = "gazelle",
prefix = prefix,
@@ -26,16 +17,7 @@ gazelle(
# Protobuf compiler (non-gRPC)
alias(
name = "proto_compiler",
actual = "@io_bazel_rules_go//proto:go_proto",
visibility = [
"//proto:__subpackages__",
],
)
# Cast protobuf compiler (non-gRPC)
alias(
name = "cast_proto_compiler",
actual = "@com_github_prysmaticlabs_protoc_gen_go_cast//:go_cast",
actual = "@io_bazel_rules_go//proto:gogofast_proto",
visibility = [
"//proto:__subpackages__",
],
@@ -44,15 +26,28 @@ alias(
# Protobuf compiler (gRPC)
alias(
name = "grpc_proto_compiler",
actual = "@io_bazel_rules_go//proto:go_grpc",
visibility = ["//visibility:public"],
actual = "@io_bazel_rules_go//proto:gogofast_grpc",
visibility = [
"//proto:__subpackages__",
],
)
# Cast protobuf compiler (gRPC)
# Protobuf gRPC compiler without gogoproto. Required for gRPC gateway.
alias(
name = "cast_grpc_proto_compiler",
actual = "@com_github_prysmaticlabs_protoc_gen_go_cast//:go_cast_grpc",
visibility = ["//visibility:public"],
name = "grpc_nogogo_proto_compiler",
actual = "@io_bazel_rules_go//proto:go_grpc",
visibility = [
"//proto:__subpackages__",
],
)
# Protobuf gRPC gateway compiler
alias(
name = "grpc_gateway_proto_compiler",
actual = "@grpc_ecosystem_grpc_gateway//protoc-gen-grpc-gateway:go_gen_grpc_gateway",
visibility = [
"//proto:__subpackages__",
],
)
gometalinter(
@@ -76,207 +71,40 @@ workspace_binary(
cmd = "@com_github_golang_lint//golint",
)
STATICCHECK_ANALYZERS = [
# Enabled static checks. See https://staticcheck.dev/docs/checks/
# Please keep this list sorted. Don't be a bad person by inserting stuff randomly.
"sa1000",
"sa1001",
"sa1002",
"sa1003",
"sa1004",
"sa1005",
"sa1006",
"sa1007",
"sa1008",
"sa1010",
"sa1011",
"sa1012",
"sa1013",
"sa1014",
"sa1015",
"sa1016",
"sa1017",
"sa1018",
# "sa1019", # TODO: Fix all uses of deprecated things.
"sa1020",
"sa1021",
"sa1023",
"sa1024",
"sa1025",
"sa1026",
"sa1027",
"sa1028",
"sa1029",
"sa1030",
"sa2000",
"sa2001",
"sa2002",
"sa2003",
"sa3000",
"sa3001",
"sa4000",
"sa4001",
"sa4003",
"sa4004",
"sa4005",
"sa4006",
"sa4008",
"sa4009",
"sa4010",
"sa4011",
"sa4012",
"sa4013",
"sa4014",
"sa4015",
"sa4016",
"sa4017",
"sa4018",
"sa4019",
"sa4020",
"sa4021",
"sa4022",
"sa4023",
"sa4024",
"sa4025",
"sa4026",
"sa4027",
"sa4028",
"sa4029",
"sa4030",
"sa4031",
"sa4032",
"sa5000",
"sa5001",
"sa5002",
"sa5003",
"sa5004",
"sa5005",
"sa5007",
"sa5008",
"sa5009",
"sa5010",
"sa5011",
"sa5012",
"sa6000",
"sa6001",
"sa6002",
"sa6003",
"sa6005",
"sa6006",
"sa9001",
"sa9002",
#"sa9003", # Doesn't build. See https://github.com/dominikh/go-tools/pull/1483
"sa9004",
"sa9005",
"sa9006",
"sa9007",
"sa9008",
]
nogo_config_exclude(
name = "nogo_config_with_excludes",
checks = [sa.upper() for sa in STATICCHECK_ANALYZERS],
exclude_files = [
"external/.*",
],
input = "nogo_config.json",
)
nogo(
name = "nogo",
config = ":nogo_config_with_excludes",
config = "nogo_config.json",
visibility = ["//visibility:public"],
deps = [
"//tools/analyzers/comparesame:go_default_library",
"//tools/analyzers/cryptorand:go_default_library",
"//tools/analyzers/errcheck:go_default_library",
"//tools/analyzers/featureconfig:go_default_library",
"//tools/analyzers/gocognit:go_default_library",
"//tools/analyzers/ineffassign:go_default_library",
"//tools/analyzers/interfacechecker:go_default_library",
"//tools/analyzers/logruswitherror:go_default_library",
"//tools/analyzers/maligned:go_default_library",
"//tools/analyzers/nop:go_default_library",
"//tools/analyzers/properpermissions:go_default_library",
"//tools/analyzers/recursivelock:go_default_library",
"//tools/analyzers/shadowpredecl:go_default_library",
"//tools/analyzers/slicedirect:go_default_library",
"//tools/analyzers/uintcast:go_default_library",
"@org_golang_x_tools//go/analysis/passes/appends:go_default_library",
"@org_golang_x_tools//go/analysis/passes/asmdecl:go_default_library",
"@org_golang_x_tools//go/analysis/passes/assign:go_default_library",
"@org_golang_x_tools//go/analysis/passes/atomic:go_default_library",
"@org_golang_x_tools//go/analysis/passes/atomicalign:go_default_library",
"@org_golang_x_tools//go/analysis/passes/bools:go_default_library",
"@org_golang_x_tools//go/analysis/passes/buildssa:go_default_library",
"@org_golang_x_tools//go/analysis/passes/buildtag:go_default_library",
# cgocall disabled
#"@org_golang_x_tools//go/analysis/passes/cgocall:go_default_library",
"@org_golang_x_tools//go/analysis/passes/copylock:go_default_library",
"@org_golang_x_tools//go/analysis/passes/ctrlflow:go_default_library",
"@org_golang_x_tools//go/analysis/passes/deepequalerrors:go_default_library",
"@org_golang_x_tools//go/analysis/passes/defers:go_default_library",
"@org_golang_x_tools//go/analysis/passes/directive:go_default_library",
"@org_golang_x_tools//go/analysis/passes/errorsas:go_default_library",
# fieldalignment disabled
#"@org_golang_x_tools//go/analysis/passes/fieldalignment:go_default_library",
"@org_golang_x_tools//go/analysis/passes/findcall:go_default_library",
"@org_golang_x_tools//go/analysis/passes/framepointer:go_default_library",
"@org_golang_x_tools//go/analysis/passes/httpmux:go_default_library",
"@org_golang_x_tools//go/analysis/passes/httpresponse:go_default_library",
"@org_golang_x_tools//go/analysis/passes/ifaceassert:go_default_library",
"@org_golang_x_tools//go/analysis/passes/inspect:go_default_library",
"@org_golang_x_tools//go/analysis/passes/loopclosure:go_default_library",
"@org_golang_x_tools//go/analysis/passes/nilfunc:go_default_library",
"@org_golang_x_tools//go/analysis/passes/nilness:go_default_library",
"@org_golang_x_tools//go/analysis/passes/pkgfact:go_default_library",
"@org_golang_x_tools//go/analysis/passes/printf:go_default_library",
"@org_golang_x_tools//go/analysis/passes/reflectvaluecompare:go_default_library",
# shadow disabled
#"@org_golang_x_tools//go/analysis/passes/shadow:go_default_library",
"@org_golang_x_tools//go/analysis/passes/shift:go_default_library",
"@org_golang_x_tools//go/analysis/passes/sigchanyzer:go_default_library",
"@org_golang_x_tools//go/analysis/passes/slog:go_default_library",
"@org_golang_x_tools//go/analysis/passes/sortslice:go_default_library",
"@org_golang_x_tools//go/analysis/passes/stdmethods:go_default_library",
"@org_golang_x_tools//go/analysis/passes/stringintconv:go_default_library",
"@org_golang_x_tools//go/analysis/passes/structtag:go_default_library",
"@org_golang_x_tools//go/analysis/passes/testinggoroutine:go_default_library",
"@org_golang_x_tools//go/analysis/passes/tests:go_default_library",
"@org_golang_x_tools//go/analysis/passes/timeformat:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unmarshal:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unreachable:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unsafeptr:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unusedresult:go_default_library",
"@org_golang_x_tools//go/analysis/passes/unusedwrite:go_default_library",
"@org_golang_x_tools//go/analysis/passes/usesgenerics:go_default_library",
] + select({
# nogo checks that fail with coverage enabled.
":coverage_enabled": [],
"//conditions:default": [
"@org_golang_x_tools//go/analysis/passes/composite:go_default_library",
"@org_golang_x_tools//go/analysis/passes/lostcancel:go_default_library",
],
}) + ["@co_honnef_go_tools//staticcheck/%s:go_default_library" % c for c in STATICCHECK_ANALYZERS],
)
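Assuming the `:nogo` target above is registered as the nogo tool in the workspace's Go toolchain setup (the usual rules_go arrangement, which is not shown in this diff), these analyzers run automatically whenever Go targets are compiled, so no separate lint invocation is required:

```bash
# Hypothetical example: nogo analyzers run as part of ordinary compilation of Go targets.
bazel build //beacon-chain/...
```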
config_setting(
name = "coverage_enabled",
values = {"define": "coverage_enabled=1"},
)
config_setting(
name = "pgo_enabled",
values = {"define": "pgo_enabled=1"},
)
common_files = {
"//:LICENSE.md": "LICENSE.md",
"//:README.md": "README.md",
}
sh_binary(
name = "prysm_sh",
srcs = ["prysm.sh"],
visibility = ["//visibility:public"],
"@org_golang_x_tools//go/analysis/passes/unsafeptr:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/unreachable:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/unmarshal:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/tests:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/structtag:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/stdmethods:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/shift:go_tool_library",
# "@org_golang_x_tools//go/analysis/passes/shadow:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/printf:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/pkgfact:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/nilness:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/nilfunc:go_tool_library",
# lost cancel ignore doesn't seem to work when running with coverage
#"@org_golang_x_tools//go/analysis/passes/lostcancel:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/loopclosure:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/httpresponse:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/findcall:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/deepequalerrors:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/ctrlflow:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/copylock:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/composite:go_tool_library",
# "@org_golang_x_tools//go/analysis/passes/cgocall:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/buildtag:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/buildssa:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/bools:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/atomicalign:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/atomic:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/assign:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/inspect:go_tool_library",
"@org_golang_x_tools//go/analysis/passes/asmdecl:go_tool_library",
],
)

File diff suppressed because it is too large.


@@ -1,21 +1,16 @@
# Contribution Guidelines
Note: The latest and most up-to-date documentation can be found on our [docs portal](https://docs.prylabs.network/docs/contribute/contribution-guidelines).
Excited by our work and want to get involved in building out our sharding releases? Or maybe you haven't learned as much about the Ethereum protocol but are a savvy developer? Our [READINGS.md](https://github.com/prysmaticlabs/prysm/blob/master/docs/READINGS.md) doc includes comprehensive information on Ethereum and sharding for both part-time and core contributors to the project.
Excited by our work and want to get involved in building out our sharding releases? Or maybe you haven't learned as much about the Ethereum protocol but are a savvy developer?
You can explore our [Open Issues](https://github.com/prysmaticlabs/prysm/issues) in-the works for our different releases. Feel free to fork our repo and start creating PRs after assigning yourself to an issue of interest. We are always chatting on [Discord](https://discord.gg/CTYGPUJ) drop us a line there if you want to get more involved or have any questions on our implementation!
> [!IMPORTANT]
> Please, **do not send pull requests for trivial changes**, such as typos; these will be rejected. These types of pull requests incur a cost to reviewers and do not provide much value to the project. If you are unsure, please open an issue first to discuss the change.
You can explore our [Open Issues](https://github.com/prysmaticlabs/prysm/issues) in-the works for our different releases. Feel free to fork our repo and start creating PRs after assigning yourself to an issue of interest. We are always chatting on [Discord](https://discord.gg/che9auJ) or [Gitter](https://gitter.im/prysmaticlabs/geth-sharding) drop us a line there if you want to get more involved or have any questions on our implementation!
## Contribution Steps
**1. Set up Prysm following the instructions in README.md.**
**2. Fork the Prysm repo.**
**2. Fork the prysm repo.**
Sign in to your GitHub account or create a new account if you do not have one already. Then navigate your browser to https://github.com/prysmaticlabs/prysm/. In the upper right hand corner of the page, click “fork”. This will create a copy of the Prysm repo in your account.
Sign in to your Github account or create a new account if you do not have one already. Then navigate your browser to https://github.com/prysmaticlabs/prysm/. In the upper right hand corner of the page, click “fork”. This will create a copy of the Prysm repo in your account.
**3. Create a local clone of Prysm.**
@@ -26,7 +21,7 @@ $ git clone https://github.com/prysmaticlabs/prysm.git
$ cd $GOPATH/src/github.com/prysmaticlabs/prysm
```
**4. Link your local clone to the fork on your GitHub repo.**
**4. Link your local clone to the fork on your Github repo.**
```
$ git remote add myprysmrepo https://github.com/<your_github_user_name>/prysm.git
@@ -65,13 +60,19 @@ Changes that only affect a single file can be tested with
$ go test <file_you_are_working_on>
```
Changes that affect multiple files can be tested with ...
```
$ golangci-lint run && bazel test //...
```
**10. Stage the file or files that you want to commit.**
```
$ git add --all
```
This command stages all the files that you have changed. You can add individual files by specifying the file name or names and eliminating the “-- all”.
This command stages all of the files that you have changed. You can add individual files by specifying the file name or names and eliminating the “-- all”.
**11. Commit the file or files.**
@@ -99,7 +100,8 @@ If there are conflicts between your edits and those made by others since you sta
$ git status
```
Open those files one at a time, and you will see lines inserted by Git that identify the conflicts:
Open those files one at a time and you
will see lines inserted by Git that identify the conflicts:
```
<<<<<< HEAD
@@ -121,21 +123,17 @@ $ git push myrepo feature-in-progress-branch
**15. Check to be sure your fork of the Prysm repo contains your feature branch with the latest edits.**
Navigate to your fork of the repo on GitHub. On the upper left where the current branch is listed, change the branch to your feature-in-progress-branch. Open the files that you have worked on and check to make sure they include your changes.
Navigate to your fork of the repo on Github. On the upper left where the current branch is listed, change the branch to your feature-in-progress-branch. Open the files that you have worked on and check to make sure they include your changes.
**16. Add an entry to CHANGELOG.md.**
**16. Create a pull request.**
If your change is user facing, you must include a CHANGELOG.md entry. See the [Maintaining CHANGELOG.md](#maintaining-changelogmd) section for more information.
Navigate your browser to https://github.com/prysmaticlabs/prysm and click on the new pull request button. In the “base” box on the left, leave the default selection “base master”, the branch that you want your changes to be applied to. In the “compare” box on the right, select feature-in-progress-branch, the branch containing the changes you want to apply. You will then be asked to answer a few questions about your pull request. After you complete the questionnaire, the pull request will appear in the list of pull requests at https://github.com/prysmaticlabs/prysm/pulls.
**17. Create a pull request.**
Navigate your browser to https://github.com/prysmaticlabs/prysm and click on the new pull request button. In the “base” box on the left, leave the default selection “base master”, the branch that you want your changes to be applied to. In the “compare” box on the right, select feature-in-progress-branch, the branch containing the changes you want to apply. You will then be asked to answer a few questions about your pull request. After you complete the questionnaire, the pull request will appear in the list of pull requests at https://github.com/prysmaticlabs/prysm/pulls. Ensure that you have added an entry to CHANGELOG.md if your PR is a user-facing change. See the [Maintaining CHANGELOG.md](#maintaining-changelogmd) section for more information.
**18. Respond to comments by Core Contributors.**
**17. Respond to comments by Core Contributors.**
Core Contributors may ask questions and request that you make edits. If you set notifications at the top of the page to “not watching,” you will still be notified by email whenever someone comments on the page of a pull request you have created. If you are asked to modify your pull request, repeat steps 8 through 15, then leave a comment to notify the Core Contributors that the pull request is ready for further review.
**19. If the number of commits becomes excessive, you may be asked to squash your commits.**
**18. If the number of commits becomes excessive, you may be asked to squash your commits.**
You can do this with an interactive rebase. Start by running the following command to determine the commit that is the base of your branch...
@@ -143,7 +141,7 @@ Core Contributors may ask questions and request that you make edits. If you set
$ git merge-base feature-in-progress-branch prysm/master
```
**20. The previous command will return a commit-hash that you should use in the following command.**
**19. The previous command will return a commit-hash that you should use in the following command.**
```
$ git rebase -i commit-hash
@@ -157,7 +155,7 @@ pick hash fix a bug
pick hash add a feature
```
Replace the word pick with the word “squash” for every line but the first, so you end with ….
Replace the word pick with the word “squash” for every line but the first so you end with ….
```
pick hash do some work
@@ -167,30 +165,13 @@ squash hash add a feature
Save and close the file, then a commit command will appear in the terminal that squashes the smaller commits into one. Check to be sure the commit message accurately reflects your changes and then hit enter to execute it.
**21. Update your pull request with the following command.**
**20. Update your pull request with the following command.**
```
$ git push myrepo feature-in-progress-branch -f
```
**22. Finally, again leave a comment to the Core Contributors on the pull request to let them know that the pull request has been updated.**
## Maintaining CHANGELOG.md
This project follows the changelog guidelines from [keepachangelog.com](https://keepachangelog.com/en/1.1.0/).
All PRs with user facing changes should have an entry in the CHANGELOG.md file and the change should be categorized in the appropriate category within the "Unreleased" section. The categories are:
- `Added` for new features.
- `Changed` for changes in existing functionality.
- `Deprecated` for soon-to-be removed features.
- `Removed` for now removed features.
- `Fixed` for any bug fixes.
- `Security` in case of vulnerabilities. Please see the [Security Policy](SECURITY.md) for responsible disclosure before adding a change with this category.
### Releasing
When a new release is made, the "Unreleased" section should be moved to a new section with the release version and the current date. Then a new "Unreleased" section is made at the top of the file with the categories listed above.
**21. Finally, again leave a comment to the Core Contributors on the pull request to let them know that the pull request has been updated.**
## Contributor Responsibilities
@@ -198,10 +179,10 @@ We consider two types of contributions to our repo and categorize them as follow
### Part-Time Contributors
Anyone can become a part-time contributor and help out on implementing Ethereum consensus. The responsibilities of a part-time contributor include:
Anyone can become a part-time contributor and help out on implementing sharding. The responsibilities of a part-time contributor include:
- Engaging in Gitter conversations, asking the questions on how to begin contributing to the project
- Opening up GitHub issues to express interest in code to implement
- Opening up github issues to express interest in code to implement
- Opening up PRs referencing any open issue in the repo. PRs should include:
- Detailed context of what would be required for merge
- Tests that are consistent with how other tests are written in our implementation
@@ -209,14 +190,16 @@ Anyone can become a part-time contributor and help out on implementing Ethereum
- Follow up on open PRs
- Have an estimated timeframe to completion and let the core contributors know if a PR will take longer than expected
We do not expect all part-time contributors to be experts on all the latest sharding documentation, but all contributors should at least be familiarized with our sharding [README.md](https://github.com/prysmaticlabs/prysm/blob/master/validator/README.md) and have gone through the required Ethereum readings as posted on our [READINGS.md](https://github.com/prysmaticlabs/prysm/blob/master/docs/READINGS.md) document.
### Core Contributors
Core contributors are remote contractors of Prysmatic Labs, LLC. and are considered critical team members of our organization. Core devs have all the responsibilities of part-time contributors plus the majority of the following:
Core contributors are remote contractors of Prysmatic Labs, LLC. and are considered critical team members of our organization. Core devs have all of the responsibilities of part-time contributors plus the majority of the following:
- Stay up to date on the latest beacon chain specification
- Monitor GitHub issues and PRs to make sure owner, labels, descriptions are correct
- Stay up to date on the latest beacon chain sepcification
- Monitor github issues and PRs to make sure owner, labels, descriptions are correct
- Formulate independent ideas, suggest new work to do, point out improvements to existing approaches
- Participate in code review, ensure code quality is excellent, and ensure high code coverage
- Participate in code review, ensure code quality is excellent, and have ensure high code coverage
- Help with social media presence, write bi-weekly development update
- Represent Prysmatic Labs at events to help spread the word on scalability research and solutions


@@ -1,80 +0,0 @@
# Dependency Management in Prysm
Prysm is a Go project with many complicated dependencies, including some c++ based libraries. There
are two parts to Prysm's dependency management: Go modules and Bazel-managed dependencies. Be sure
to read [Why Bazel?](https://github.com/prysmaticlabs/documentation/issues/138) to fully
understand the reasoning behind an additional layer of build tooling via Bazel rather than a pure
"go build" project.
## Go Module support
The Prysm project officially supports Go modules with some caveats.
### Caveat 1: Some c++ libraries are precompiled archives
Some of Prysm's C++ dependencies have very complicated project structures that make building them
difficult or impossible with "go build" alone. Additionally, building C++ dependencies with certain
compilers, like clang / LLVM, offers a significant performance improvement. To get around this
issue, C++ dependencies have been precompiled as linkable archives. While there isn't necessarily
anything bad about precompiled archives, these files are a "black box": a third-party author could
have compiled anything into such an archive, and detecting undesired behavior would be nearly
impossible. If your risk tolerance is low, always compile everything from source yourself,
including the complicated C++ dependencies.
*Recommendation: Use go build only for local development and use bazel build for production.*
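For readers in that low-risk-tolerance camp, a minimal sketch of building the main binaries entirely from source with Bazel (reusing the clone and build targets shown in the README later in this diff):
```bash
# Build the beacon node and validator from source with Bazel rather than
# relying on `go build` plus precompiled C++ archives.
git clone https://github.com/prysmaticlabs/prysm && cd prysm
bazel build //beacon-chain:beacon-chain
bazel build //validator:validator
```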
### Caveat 2: Generated gRPC protobuf libraries
One key advantage of Bazel over vanilla `go build` is that Bazel automatically (re)builds generated
pb.go files at build time when file changes are present in any protobuf definition file or after
any updates to the protobuf compiler or other relevant dependencies. Vanilla Go users should run
the following scripts often to ensure their generated files are up to date. Furthermore, Prysm
generates SSZ marshaling code based on its defined data structures. These generated files must
also be regenerated and checked in just as often.
```bash
./hack/update-go-pbs.sh
./hack/update-go-ssz.sh
```
*Recommendation: Use go build only for local development and use bazel build for production.*
### Caveat 3: Compile-time optimizations
When Prysmatic Labs builds production binaries, they use the "release" configuration of bazel to
compile with several compiler optimizations and recommended production build configurations.
Additionally, the release build properly stamps the built binaries to include helpful metadata
about how and when the binary was built.
*Recommendation: Use go build only for local development and use bazel build for production.*
```bash
bazel build //beacon-chain --config=release
```
## Adding / updating dependencies
1. Add your dependency as you would with Go modules, i.e. `go get ...`
1. Run `bazel run //:gazelle -- update-repos -from_file=go.mod` to update the bazel managed dependencies.
Example:
```bash
go get github.com/prysmaticlabs/example@v1.2.3
bazel run //:gazelle -- update-repos -from_file=go.mod -to_macro=deps.bzl%prysm_deps -prune=true
```
The deps.bzl file should have been updated with the dependency and any transitive dependencies.
Do NOT add new `go_repository` to the WORKSPACE file. All dependencies should live in deps.bzl.
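As a quick sanity check (a sketch using generic git and Bazel commands, not a documented Prysm workflow), confirm the new entry landed in deps.bzl rather than WORKSPACE and that the build still resolves:
```bash
# New go_repository entries should appear in deps.bzl only;
# WORKSPACE should remain untouched.
git diff --stat deps.bzl WORKSPACE
# Confirm the updated dependency graph still builds.
bazel build //...
```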
## Running tests
To enable conditional compilation and custom configuration for tests (where compiled code has more
debug info, while not being completely optimized), we rely on Go's build tags/constraints mechanism
(see official docs on [build constraints](https://golang.org/pkg/go/build/#hdr-Build_Constraints)).
Therefore, whenever using `go test`, do not forget to pass in the extra build tag, e.g.:
```bash
go test ./beacon-chain/sync/initial-sync -tags develop
```

View File

@@ -1,88 +0,0 @@
# Prysm Client Interoperability Guide
This README details how to set up Prysm for interop testing with other Ethereum consensus clients.
## Installation & Setup
1. Install [Bazel](https://docs.bazel.build/versions/master/install.html) **(Recommended)**
2. `git clone https://github.com/prysmaticlabs/prysm && cd prysm`
3. `bazel build //...`
## Starting from Genesis
Prysm supports a few ways to quickly launch a beacon node from basic configurations:
- `NumValidators + GenesisTime`: Launches a beacon node by deterministically generating a state from a num-validators flag along with a genesis time **(Recommended)**
- `SSZ Genesis`: Launches a beacon node from a .ssz file containing a SSZ-encoded, genesis beacon state
## Generating a Genesis State
To setup the necessary files for these quick starts, Prysm provides a tool to generate a `genesis.ssz` from
a deterministically generated set of validator private keys following the official interop YAML format
[here](https://github.com/ethereum/eth2.0-pm/blob/master/interop/mocked_start).
You can use `bazel run //tools/genesis-state-gen` to create a deterministic genesis state for interop.
### Usage
- **--genesis-time** uint: Unix timestamp used as the genesis time in the generated genesis state (defaults to now)
- **--num-validators** int: Number of validators to deterministically include in the generated genesis state
- **--output-ssz** string: Output filename of the SSZ marshaling of the generated genesis state
- **--config-name=interop** string: Name of the beacon chain config to use when generating the state, e.g. mainnet|minimal|interop
The example below creates 64 validator keys, instantiates a genesis state with those 64 validators and with genesis Unix timestamp 1567542540,
and finally writes an SSZ-encoded output to ~/Desktop/genesis.ssz. This file can be used to kickstart the beacon chain in the next section. When using the `--interop-*` flags, the beacon node will assume the `interop` config should be used, unless a different config is specified on the command line.
```
bazel run //tools/genesis-state-gen -- --config-name interop --output-ssz ~/Desktop/genesis.ssz --num-validators 64 --genesis-time 1567542540
```
## Launching a Beacon Node + Validator Client
### Launching from Pure CLI Flags
Open up two terminal windows, run:
```
bazel run //beacon-chain -- \
--bootstrap-node= \
--deposit-contract 0x8A04d14125D0FDCDc742F4A05C051De07232EDa4 \
--datadir=/tmp/beacon-chain-interop \
--force-clear-db \
--min-sync-peers=0 \
--interop-num-validators 64 \
--interop-eth1data-votes
```
This will deterministically generate a beacon genesis state and start
the system with 64 validators and the genesis time set to the current unix timestamp.
Wait a bit until your beacon chain starts, and in the other window:
```
bazel run //validator -- --keymanager=interop --keymanageropts='{"keys":64}'
```
This will launch and kickstart the system with your 64 validators performing their duties accordingly.
### Launching from `genesis.ssz`
Assuming you generated a `genesis.ssz` file with 64 validators, open up two terminal windows, run:
```
bazel run //beacon-chain -- \
--bootstrap-node= \
--deposit-contract 0x8A04d14125D0FDCDc742F4A05C051De07232EDa4 \
--datadir=/tmp/beacon-chain-interop \
--force-clear-db \
--min-sync-peers=0 \
--interop-genesis-state /path/to/genesis.ssz \
--interop-eth1data-votes
```
Wait a bit until your beacon chain starts, and in the other window:
```
bazel run //validator -- --keymanager=interop --keymanageropts='{"keys":64}'
```
This will launch and kickstart the system with your 64 validators performing their duties accordingly.

View File

@@ -1,6 +0,0 @@
###############################################################################
# Bazel now uses Bzlmod by default to manage external dependencies.
# Please consider migrating your external dependencies from WORKSPACE to MODULE.bazel.
#
# For more details, please check https://github.com/bazelbuild/bazel/issues/18958
###############################################################################

1653
MODULE.bazel.lock generated

File diff suppressed because it is too large

13
PULL_REQUEST_TEMPLATE.md Normal file
View File

@@ -0,0 +1,13 @@
TODO(you): choose "part of" or "resolves" and the associated github issue #.
[Part of|Resolves] #531
---
# Description
**Write why you are making the changes in this pull request**
**Write a summary of the changes you are making**
**Link anything that would be helpful or relevant to the reviewers**

156
README.md
View File

@@ -1,37 +1,145 @@
# Prysm: An Ethereum Consensus Implementation Written in Go
# Prysm: Ethereum 'Serenity' 2.0 Go Implementation
[![Build status](https://badge.buildkite.com/b555891daf3614bae4284dcf365b2340cefc0089839526f096.svg?branch=master)](https://buildkite.com/prysmatic-labs/prysm)
[![Go Report Card](https://goreportcard.com/badge/github.com/prysmaticlabs/prysm)](https://goreportcard.com/report/github.com/prysmaticlabs/prysm)
[![Consensus_Spec_Version 1.4.0](https://img.shields.io/badge/Consensus%20Spec%20Version-v1.4.0-blue.svg)](https://github.com/ethereum/consensus-specs/tree/v1.4.0)
[![Execution_API_Version 1.0.0-beta.2](https://img.shields.io/badge/Execution%20API%20Version-v1.0.0.beta.2-blue.svg)](https://github.com/ethereum/execution-apis/tree/v1.0.0-beta.2/src/engine)
[![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/prysmaticlabs)
[![GitPOAP Badge](https://public-api.gitpoap.io/v1/repo/prysmaticlabs/prysm/badge)](https://www.gitpoap.io/gh/prysmaticlabs/prysm)
[![ETH2.0_Spec_Version 0.8.0](https://img.shields.io/badge/ETH2.0%20Spec%20Version-v0.8.0-blue.svg)](https://github.com/ethereum/eth2.0-specs/commit/8d324b7497bcb558e0183a30002d78d18704e3fa)
[![Discord](https://user-images.githubusercontent.com/7288322/34471967-1df7808a-efbb-11e7-9088-ed0b04151291.png)](https://discord.gg/KSA7rPr)
[![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/prysmaticlabs/geth-sharding?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)
This is the core repository for Prysm, a [Golang](https://golang.org/) implementation of the [Ethereum Consensus](https://ethereum.org/en/developers/docs/consensus-mechanisms/#proof-of-stake) [specification](https://github.com/ethereum/consensus-specs), developed by [Offchain Labs](https://www.offchainlabs.com). See the [Changelog](https://github.com/prysmaticlabs/prysm/releases) for details of the latest releases and upcoming breaking changes.
This is the Core repository for Prysm, [Prysmatic Labs](https://prysmaticlabs.com)' [Go](https://golang.org/) implementation of the Ethereum protocol 2.0 (Serenity).
### Getting Started
### Need assistance?
A more detailed set of installation and usage instructions as well as explanations of each component are available on our [official documentation portal](https://prysmaticlabs.gitbook.io/prysm/). If you still have questions, feel free to stop by either our [Discord](https://discord.gg/KSA7rPr) or [Gitter](https://gitter.im/prysmaticlabs/geth-sharding?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) and a member of the team or our community will be happy to assist you.
A detailed set of installation and usage instructions as well as breakdowns of each individual component are available in the [official documentation portal](https://docs.prylabs.network). If you still have questions, feel free to stop by our [Discord](https://discord.gg/prysmaticlabs).
**Interested in what's next?** Be sure to read our [Roadmap Reference Implementation](https://github.com/prysmaticlabs/prysm/blob/master/docs/ROADMAP.md) document. This page outlines the basics of sharding as well as the various short-term milestones that we hope to achieve over the coming year.
### Staking on Mainnet
### Come join the testnet!
Participation is now open to the public in our testnet release for Ethereum 2.0 phase 0. Visit [prylabs.net](https://prylabs.net) for more information on the project itself or to sign up as a validator on the network.
To participate in staking, you can join the [official eth2 launchpad](https://launchpad.ethereum.org). The launchpad is the only recommended way to become a validator on mainnet. You can explore validator rewards/penalties via Bitfly's block explorer: [beaconcha.in](https://beaconcha.in), and follow the latest blocks added to the chain on [beaconscan](https://beaconscan.com).
# Table of Contents
- [Dependencies](#dependencies)
- [Installation](#installation)
- [Build Via Docker](#build-via-docker)
- [Build Via Bazel](#build-via-bazel)
- [Running an Ethereum 2.0 Beacon Node](#running-an-ethereum-20-beacon-node)
- [Staking ETH: Running a Validator Client](#staking-eth-running-a-validator-client)
- [Testing Prysm](#testing-prysm)
- [Contributing](#contributing)
- [License](#license)
## Dependencies
Prysm can be installed either with Docker **(recommended method)** or using our build tool, Bazel. The below instructions include sections for performing both.
**For Docker installations:**
- The latest release of [Docker](https://docs.docker.com/install/)
**For Bazel installations:**
- The latest release of [Bazel](https://docs.bazel.build/versions/master/install.html)
- A modern GNU/Linux operating system
## Installation
### Build via Docker
1. Ensure you are running the most recent version of Docker by issuing the command:
```
docker -v
```
2. To pull the Prysm images from the server, issue the following commands:
```
docker pull gcr.io/prysmaticlabs/prysm/validator:latest
docker pull gcr.io/prysmaticlabs/prysm/beacon-chain:latest
```
This process will also install any related dependencies.
### Build via Bazel
1. Open a terminal window. Ensure you are running the most recent version of Bazel by issuing the command:
```
bazel version
```
2. Clone this repository and enter the directory:
```
git clone https://github.com/prysmaticlabs/prysm
cd prysm
```
3. Build both the beacon chain node implementation and the validator client:
```
bazel build //beacon-chain:beacon-chain
bazel build //validator:validator
```
Bazel will automatically pull and install any dependencies as well, including Go and necessary compilers.
## Running an Ethereum 2.0 Beacon Node
To understand the role that both the beacon node and validator play in Prysm, see [this section of our documentation](https://prysmaticlabs.gitbook.io/prysm/how-prysm-works/basic-architecture-overview).
### Running via Docker
**Docker on Linux/Mac:**
1. To start your beacon node, issue the following command:
```
docker run -v /tmp/prysm-data:/data -p 4000:4000 \
gcr.io/prysmaticlabs/prysm/beacon-chain:latest \
--datadir=/data \
--clear-db
```
**Docker on Windows:**
1) You will need to share the local drive you wish to mount to the container (e.g. C:).
1. Enter Docker settings (right click the tray icon)
2. Click 'Shared Drives'
3. Select a drive to share
4. Click 'Apply'
2) You will next need to create a directory named ```/tmp/prysm-data/``` within your selected shared Drive. This folder will be used as a local data directory for Beacon Node chain data as well as account and keystore information required by the validator. Docker will **not** create this directory if it does not exist already. For the purposes of these instructions, it is assumed that ```C:``` is your prior-selected shared Drive.
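For example, from a Git Bash prompt the directory could be created like this (a hypothetical one-liner; adjust the path if you shared a different drive):
```bash
# Create the data directory yourself; Docker will not create it for you.
mkdir -p /c/tmp/prysm-data
```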
3) To run the beacon node, issue the following command:
```
docker run -it -v c:/tmp/prysm-data:/data -p 4000:4000 gcr.io/prysmaticlabs/prysm/beacon-chain:latest --datadir=/data --clear-db
```
### Running via Bazel
1) To start your Beacon Node with Bazel, issue the following command:
```
bazel run //beacon-chain -- --clear-db --datadir=/tmp/prysm-data
```
This will sync up the Beacon Node with the latest head block in the network.
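To confirm the node is up, one option (an assumption, not part of this README) is to query the standard beacon-node API, provided the HTTP gateway is enabled; port 3500 matches the client examples elsewhere in this diff:
```bash
# Ask the node for its sync status via the standard beacon API.
curl http://localhost:3500/eth/v1/node/syncing
```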
## Staking ETH: Running a Validator Client
Once your beacon node is up, the chain will be waiting for you to deposit 3.2 Goerli ETH into the Validator Deposit Contract to activate your validator (discussed in the section below). First though, you will need to create a *validator client* to connect to this node in order to stake and participate. Each validator represents 3.2 Goerli ETH being staked in the system, and it is possible to spin up as many as you desire in order to have more stake in the network.
### Activating Your Validator: Depositing 3.2 Goerli ETH
Using your validator deposit data from the previous step, follow the instructions found on https://alpha.prylabs.net/participate to make a deposit.
It will take a while for the nodes in the network to process your deposit, but once your node is active, the validator will begin performing its responsibilities. In your validator client, you will be able to watch your validator balance increase over time. Note that, should your node ever go offline for a long period, you'll start gradually losing your deposit until you are removed from the system.
### Starting the validator with Bazel
1. Open another terminal window. Enter your Prysm directory and run the validator by issuing the following command:
```
cd prysm
bazel run //validator
```
**Congratulations, you are now running Ethereum 2.0 Phase 0!**
## Testing Prysm
**To run the unit tests of our system**, issue the command:
```
bazel test //...
```
**To run our linter**, make sure you have [golangci-lint](https://github.com/golangci/golangci-lint) installed and then issue the command:
```
golangci-lint run
```
## Contributing
### Branches
Prysm maintains two permanent branches:
* [master](https://github.com/prysmaticlabs/prysm/tree/master): This points to the latest stable release. It is ideal for most users.
* [develop](https://github.com/prysmaticlabs/prysm/tree/develop): This is used for development; it contains the latest PRs. Developers should base their PRs on this branch.
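For example, a typical flow for new work might look like the following (a sketch using standard git commands; the branch name is a placeholder):
```bash
# Base new work on the develop branch, as described above.
git clone https://github.com/prysmaticlabs/prysm && cd prysm
git checkout develop
git checkout -b my-feature
# ...commit your changes, then open a PR targeting develop
```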
### Guide
Want to get involved? Check out our [Contribution Guide](https://docs.prylabs.network/docs/contribute/contribution-guidelines/) to learn more!
We have put all of our contribution guidelines into [CONTRIBUTING.md](https://github.com/prysmaticlabs/prysm/blob/master/CONTRIBUTING.md)! Check it out to get started.
## License
[GNU General Public License v3.0](https://www.gnu.org/licenses/gpl-3.0.en.html)
## Legal Disclaimer
[Terms of Use](/TERMS_OF_SERVICE.md)

View File

@@ -1,11 +0,0 @@
# Security Policy
## Supported Versions
The [Releases](https://github.com/prysmaticlabs/prysm/releases/) page contains all available releases. We recommend using the [most recently released version](https://github.com/prysmaticlabs/prysm/releases/latest).
## Reporting a Vulnerability
Please see our signed [security.txt](https://github.com/prysmaticlabs/prysm/blob/develop/.well-known/security.txt) for preferred encryption and reporting destination.
**Please do not file a public ticket** mentioning the vulnerability, as doing so could increase the likelihood of the vulnerability being used before a fix has been created, released and installed on the network.

View File

@@ -1,53 +0,0 @@
# Terms of Use
Effective as of November 2, 2023
By downloading, accessing or using the Prysm implementation (“Prysm”), you (referenced herein as “you” or the “user”) certify that you have read and agreed to the terms and conditions below (the “Terms”) which form a binding contract between you and Offchain Labs, Inc. (as successor in interest to Prysmatic Labs LLC) (referenced herein as “Offchain Labs”, “we” or “us”). If you do not agree to the Terms, do not download or use Prysm. Additionally, the Terms of Use available at https://arbitrum.io/tos (or any successor site, the “OCL Terms of Use”) are hereby incorporated by reference into these Terms. In the event of any conflict between provisions set forth herein and those set forth in the OCL Terms of Use, the provisions set forth herein shall control.
## About Prysm
Prysm is a client implementation for the Ethereum blockchain's consensus protocol. To participate in the network, a user must send ETH from the Ethereum mainnet blockchain to a validator deposit smart contract on Ethereum mainnet. Validators participate in proposing and voting on blocks in the protocol, and the network applies rewards/penalties based on their behavior. A detailed set of installation and usage instructions as well as breakdowns of each individual component are available in the official documentation portal; however, we do not warrant the accuracy, completeness or usefulness of this documentation. Any reliance you place on such information is strictly at your own risk.
## Licensing Terms
Prysm is an open-source software program licensed pursuant to the GNU General Public License v3.0.
The Offchain Labs name, the term “Prysm” and all related names, logos, product and service names, designs and slogans are trademarks of Offchain Labs or its affiliates and/or licensors. You must not use such marks without our prior written permission.
PLEASE READ THESE TERMS CAREFULLY, AS THE OCL TERMS OF USE INCORPORATED BY REFERENCE HEREIN CONTAIN AN AGREEMENT TO ARBITRATE AND OTHER IMPORTANT INFORMATION REGARDING YOUR LEGAL RIGHTS, REMEDIES, AND OBLIGATIONS. THE AGREEMENT TO ARBITRATE REQUIRES (WITH LIMITED EXCEPTION) THAT YOU SUBMIT CLAIMS YOU HAVE AGAINST US TO BINDING AND FINAL ARBITRATION, AND FURTHER (1) YOU WILL ONLY BE PERMITTED TO PURSUE CLAIMS AGAINST OFFCHAIN LABS ON AN INDIVIDUAL BASIS, NOT AS A PLAINTIFF OR CLASS MEMBER IN ANY CLASS OR REPRESENTATIVE ACTION OR PROCEEDING, (2) YOU WILL ONLY BE PERMITTED TO SEEK RELIEF (INCLUDING MONETARY, INJUNCTIVE, AND DECLARATORY RELIEF) ON AN INDIVIDUAL BASIS, AND (3) YOU MAY NOT BE ABLE TO HAVE ANY CLAIMS YOU HAVE AGAINST US RESOLVED BY A JURY OR IN A COURT OF LAW.
## Risks of Operating Prysm
The use of Prysm and acting as a validator on the Ethereum network can lead to loss of money, tokens and value. Ethereum is still an experimental system and ETH remains a risky investment. You alone are responsible for your actions on Prysm, including the security of your ETH and meeting any applicable minimum system requirements.
Use of Prysm and the ability to receive rewards or penalties may be affected at any time by mistakes made by the user or other users, software problems such as bugs, errors, incorrectly constructed transactions, unsafe cryptographic libraries or malware affecting the network, technical failures in the hardware of a user, security problems experienced by a user and/or actions or inactions of third parties and/or events experienced by third parties, among other risks. We cannot and do not guarantee that any user of Prysm will make money, that the Prysm network will operate in accordance with the documentation or that transactions will be effective or secure.
YOU ACKNOWLEDGE THAT WE ARE NOT RESPONSIBLE FOR ANY RISKS ASSOCIATED WITH YOUR USE OF PRYSM, AND CANNOT BE HELD LIABLE FOR ANY RESULTING LOSSES THAT YOU EXPERIENCE WHILE ACCESSING OR USING PRYSM.
BY ACCESSING AND USING PRYSM, YOU REPRESENT AND WARRANT THAT YOU UNDERSTAND THE INHERENT RISKS ASSOCIATED WITH USING CRYPTOGRAPHIC AND BLOCKCHAIN-BASED SYSTEMS, AND THAT YOU HAVE A WORKING KNOWLEDGE OF THE USAGE AND INTRICACIES OF DIGITAL ASSETS, SUCH AS THOSE FOLLOWING THE ETHEREUM TOKEN STANDARD (ERC-20). YOU FURTHER UNDERSTAND THAT THE MARKETS FOR DIGITAL ASSETS ARE HIGHLY VOLATILE DUE TO VARIOUS FACTORS, INCLUDING ADOPTION, SPECULATION, TECHNOLOGY, SECURITY, AND REGULATION. YOU ACKNOWLEDGE AND ACCEPT THAT THE COST AND SPEED OF TRANSACTING WITH CRYPTOGRAPHIC AND BLOCKCHAIN-BASED SYSTEMS SUCH AS ETHEREUM ARE VARIABLE AND MAY INCREASE DRAMATICALLY AT ANY TIME. YOU UNDERSTAND THAT ANYONE CAN CREATE A TOKEN, INCLUDING FAKE VERSIONS OF EXISTING TOKENS AND TOKENS THAT FALSELY CLAIM TO REPRESENT PROJECTS, AND ACKNOWLEDGE AND ACCEPT THE RISK THAT YOU MAY MISTAKENLY INTERACT WITH THOSE OR OTHER TOKENS. YOU FURTHER ACKNOWLEDGE THAT WE ARE NOT RESPONSIBLE FOR ANY OF THE VARIABLES OR RISKS DESCRIBED IN THESE TERMS. YOU UNDERSTAND AND AGREE TO ASSUME FULL RESPONSIBILITY FOR ALL OF THE RISKS OF ACCESSING AND USING PRYSM. YOU ARE SOLELY RESPONSIBLE FOR YOUR WALLETS, FOR SAFEGUARDING THE ASSOCIATED PRIVATE KEY AND FOR ANY ACTIVITY THAT OCCURS USING YOUR WALLET. WITHOUT LIMITING THE FOREGOING, YOU ALSO UNDERSTAND THAT THERE MAY BE TAX AND REGULATORY RISKS RELATED TO USING PRYSM. IT IS YOUR SOLE RESPONSIBILITY TO DETERMINE WHETHER, AND TO WHAT EXTENT, ANY TAXES APPLY TO ANY TRANSACTIONS YOU CONDUCT IN CONNECTION WITH YOUR USE OF PRYSM, AND TO WITHHOLD, COLLECT, REPORT AND REMIT THE CORRECT AMOUNTS OF TAXES TO THE APPROPRIATE TAX AUTHORITIES. DIGITAL ASSETS, BLOCKCHAIN TECHNOLOGY, AND ANY RELATED SOFTWARE AND SERVICES ARE ALSO SUBJECT TO LEGAL AND REGULATORY UNCERTAINTY IN THE UNITED STATES AND OTHER JURISDICTIONS. YOU UNDERSTAND THAT LEGISLATIVE AND REGULATORY CHANGES OR ACTIONS MAY ADVERSELY AFFECT THE USAGE, TRANSFERABILITY, TRANSACTABILITY AND ACCESSIBILITY RELATED TO PRYSM.
We make no claims that Prysm is appropriate or permitted for use in any specific jurisdiction. Access to Prysm may not be legal by certain persons or in certain jurisdictions or countries. If you access Prysm, you do so on your own initiative and are responsible for compliance with all Applicable Law (as defined below), including, without limitation, for the avoidance of doubt, local laws.
Some Internet plans will charge additional amounts for bandwidth or any excess upload bandwidth used that isn't included in the plan and may terminate your connection without warning because of overuse. We advise that you check whether your Internet connection is subject to any such limitations and monitor your bandwidth use and upload volumes.
## Warranty Disclaimer
PRYSM IS PROVIDED ON AN “AS-IS” BASIS AND MAY INCLUDE ERRORS, OMISSIONS, OR OTHER INACCURACIES. WITHOUT LIMITING ANYTHING SET FORTH ELSEWHERE IN THESE TERMS, OFFCHAIN LABS AND ITS CONTRIBUTORS MAKE NO REPRESENTATIONS OR WARRANTIES ABOUT PRYSM FOR ANY PURPOSE, AND HEREBY EXPRESSLY DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT OR ANY OTHER IMPLIED WARRANTY UNDER THE UNIFORM COMPUTER INFORMATION TRANSACTIONS ACT AS ENACTED BY ANY STATE OR OTHER GOVERNMENTAL AUTHORITY. WE ALSO MAKE NO REPRESENTATIONS OR WARRANTIES THAT PRYSM WILL OPERATE ERROR-FREE, UNINTERRUPTED, OR IN A MANNER THAT WILL MEET YOUR REQUIREMENTS AND/OR NEEDS. THEREFORE, YOU ASSUME THE ENTIRE RISK REGARDING THE QUALITY AND/OR PERFORMANCE OF PRYSM AND ANY TRANSACTIONS ENTERED INTO THEREON.
## Limitation of Liability
IN NO EVENT WILL OFFCHAIN LABS OR ANY OF ITS AFFILIATES OR ITS OR ANY SUCH AFFILIATES DIRECTORS, OFFICERS, EMPLOYEES, AGENTS, OR REPRESENTATIVES OR ANY CONTRIBUTORS (COLLECTIVELY, THE “OCL PARTIES”) BE LIABLE, WHETHER IN CONTRACT, WARRANTY, TORT (INCLUDING NEGLIGENCE, WHETHER ACTIVE, PASSIVE OR IMPUTED), PRODUCT LIABILITY, STRICT LIABILITY OR OTHER THEORY, BREACH OF STATUTORY DUTY OR OTHERWISE ARISING OUT OF, OR IN CONNECTION WITH, YOUR USE OF PRYSM, FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL OR CONSEQUENTIAL DAMAGES (INCLUDING ANY LOSS OF PROFITS OR DATA, BUSINESS INTERRUPTION OR OTHER PECUNIARY LOSS, OR DAMAGE, LOSS OR OTHER COMPROMISE OF DATA, IN EACH CASE WHETHER DIRECT, INDIRECT, INCIDENTAL, SPECIAL OR CONSEQUENTIAL) ARISING OUT OF USE PRYSM, EVEN IF WE OR OTHER USERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. The foregoing limitations and disclaimers shall apply to the maximum extent permitted by Applicable Law, even if any remedy fails of its essential purpose. You acknowledge and agree that the limitations of liability afforded us hereunder constitute a material and actual inducement and condition to entering into these Terms, and are reasonable, fair and equitable in scope to protect our legitimate interests in light of the fact that we are not receiving consideration from you for providing Prysm.
## Indemnification
To the maximum extent permitted by Applicable Law, you will defend, indemnify and hold each OCL Party harmless from and against any and all claims, actions, suits, investigations, or proceedings by any third party (including any party or purported party to or beneficiary or purported beneficiary of any transaction or other activity on Prysm), as well as any and all losses, liabilities, damages, costs, and expenses (including reasonable attorneys fees and costs) arising out of, accruing from, or in any way related to (i) your breach of the terms of this Agreement, (ii) any transaction, or the failure to occur of any transaction on Prysm, and (iii) your negligence, fraud, or willful misconduct.
## Compliance with Laws
Your use of Prysm is subject to all applicable laws of any governmental authority, including, without limitation, federal, state and foreign securities laws, tax laws, tariff and trade laws, ordinances, judgments, decrees, injunctions, writs and orders or like actions of any governmental authority and rules, regulations, orders, interpretations, licenses, and permits of any federal, regional, state, county, municipal or other governmental authority (collectively, “Applicable Law”) and you agree to comply with all such Applicable Law in your use of Prysm. The users of Prysm are solely responsible for determining what, if any, taxes apply to their ETH transactions. The owners of, or contributors to, Prysm are not responsible for determining the taxes that apply to ETH transactions.
## Miscellaneous
These Terms will be governed by the laws of the State of Delaware without regard to its conflict of law provisions. With respect to any disputes or claims not subject to arbitration, as set forth in the OCL Terms of Use, you and Offchain Labs submit to the personal and exclusive jurisdiction of the state and federal courts located within New York, New York and waive any objection to such jurisdiction and venue. The failure of Offchain Labs to exercise or enforce any right or provision of these Terms will not constitute a waiver of such right or provision.
We reserve the right to revise these Terms, and your rights and obligations are at all times subject to the then-current Terms provided on Prysm. Your use of Prysm following any such revision to these Terms constitutes acceptance of such revised Terms.
These Terms constitute the entire agreement between you and Offchain Labs regarding use of Prysm and will supersede all prior agreements, whether written or oral. No usage of trade or other regular practice or method of dealing between the parties will be used to modify, interpret, supplement, or alter the terms of these Terms.
If any portion of these Terms is held invalid or unenforceable, such invalidity or unenforceability will not affect the other provisions of these Terms, which will remain in full force and effect, and the invalid or unenforceable portion will be given effect to the greatest extent possible. The failure of a party to require performance of any provision will not affect that party's right to require performance at any time thereafter, nor will a waiver of any breach or default of these Terms or any provision of these Terms constitute a waiver of any subsequent breach or default or a waiver of the provision itself.

1442
WORKSPACE

File diff suppressed because it is too large

View File

@@ -1,24 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"constants.go",
"headers.go",
"jwt.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/api",
visibility = ["//visibility:public"],
deps = [
"//crypto/rand:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["jwt_test.go"],
embed = [":go_default_library"],
deps = ["//testing/require:go_default_library"],
)

View File

@@ -1,20 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"client.go",
"errors.go",
"options.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/api/client",
visibility = ["//visibility:public"],
deps = ["@com_github_pkg_errors//:go_default_library"],
)
go_test(
name = "go_default_test",
srcs = ["client_test.go"],
embed = [":go_default_library"],
deps = ["//testing/require:go_default_library"],
)

View File

@@ -1,63 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"checkpoint.go",
"client.go",
"doc.go",
"health.go",
"log.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/api/client/beacon",
visibility = ["//visibility:public"],
deps = [
"//api/client:go_default_library",
"//api/client/beacon/iface:go_default_library",
"//api/server:go_default_library",
"//api/server/structs:go_default_library",
"//beacon-chain/core/helpers:go_default_library",
"//beacon-chain/state:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//encoding/ssz/detect:go_default_library",
"//io/file:go_default_library",
"//network/forks:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//time/slots:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@org_golang_x_mod//semver:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"checkpoint_test.go",
"client_test.go",
"health_test.go",
],
embed = [":go_default_library"],
deps = [
"//api/client:go_default_library",
"//api/client/beacon/testing:go_default_library",
"//beacon-chain/state:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/blocks/testing:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/ssz/detect:go_default_library",
"//network/forks:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//testing/require:go_default_library",
"//testing/util:go_default_library",
"//time/slots:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@org_uber_go_mock//gomock:go_default_library",
],
)

View File

@@ -1,276 +0,0 @@
package beacon
import (
"context"
"fmt"
"path"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
base "github.com/prysmaticlabs/prysm/v5/api/client"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/core/helpers"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/encoding/ssz/detect"
"github.com/prysmaticlabs/prysm/v5/io/file"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/time/slots"
"github.com/sirupsen/logrus"
"golang.org/x/mod/semver"
)
var errCheckpointBlockMismatch = errors.New("mismatch between checkpoint sync state and block")
// OriginData represents the BeaconState and ReadOnlySignedBeaconBlock necessary to start an empty Beacon Node
// using Checkpoint Sync.
type OriginData struct {
sb []byte
bb []byte
st state.BeaconState
b interfaces.ReadOnlySignedBeaconBlock
vu *detect.VersionedUnmarshaler
br [32]byte
sr [32]byte
}
// SaveBlock saves the downloaded block to a unique file in the given path.
// For readability and collision avoidance, the file name includes: type, config name, slot and root
func (o *OriginData) SaveBlock(dir string) (string, error) {
blockPath := path.Join(dir, fname("block", o.vu, o.b.Block().Slot(), o.br))
return blockPath, file.WriteFile(blockPath, o.BlockBytes())
}
// SaveState saves the downloaded state to a unique file in the given path.
// For readability and collision avoidance, the file name includes: type, config name, slot and root
func (o *OriginData) SaveState(dir string) (string, error) {
statePath := path.Join(dir, fname("state", o.vu, o.st.Slot(), o.sr))
return statePath, file.WriteFile(statePath, o.StateBytes())
}
// StateBytes returns the ssz-encoded bytes of the downloaded BeaconState value.
func (o *OriginData) StateBytes() []byte {
return o.sb
}
// BlockBytes returns the ssz-encoded bytes of the downloaded ReadOnlySignedBeaconBlock value.
func (o *OriginData) BlockBytes() []byte {
return o.bb
}
func fname(prefix string, vu *detect.VersionedUnmarshaler, slot primitives.Slot, root [32]byte) string {
return fmt.Sprintf("%s_%s_%s_%d-%#x.ssz", prefix, vu.Config.ConfigName, version.String(vu.Fork), slot, root)
}
// DownloadFinalizedData downloads the most recently finalized state, and the block most recently applied to that state.
// This pair can be used to initialize a new beacon node via checkpoint sync.
func DownloadFinalizedData(ctx context.Context, client *Client) (*OriginData, error) {
sb, err := client.GetState(ctx, IdFinalized)
if err != nil {
return nil, err
}
vu, err := detect.FromState(sb)
if err != nil {
return nil, errors.Wrap(err, "error detecting chain config for finalized state")
}
log.WithFields(logrus.Fields{
"name": vu.Config.ConfigName,
"fork": version.String(vu.Fork),
}).Info("Detected supported config in remote finalized state")
s, err := vu.UnmarshalBeaconState(sb)
if err != nil {
return nil, errors.Wrap(err, "error unmarshaling finalized state to correct version")
}
slot := s.LatestBlockHeader().Slot
bb, err := client.GetBlock(ctx, IdFromSlot(slot))
if err != nil {
return nil, errors.Wrapf(err, "error requesting block by slot = %d", slot)
}
b, err := vu.UnmarshalBeaconBlock(bb)
if err != nil {
return nil, errors.Wrap(err, "unable to unmarshal block to a supported type using the detected fork schedule")
}
br, err := b.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "error computing hash_tree_root of retrieved block")
}
bodyRoot, err := b.Block().Body().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "error computing hash_tree_root of retrieved block body")
}
sbr := bytesutil.ToBytes32(s.LatestBlockHeader().BodyRoot)
if sbr != bodyRoot {
return nil, errors.Wrapf(errCheckpointBlockMismatch, "state body root = %#x, block body root = %#x", sbr, bodyRoot)
}
sr, err := s.HashTreeRoot(ctx)
if err != nil {
return nil, errors.Wrapf(err, "failed to compute htr for finalized state at slot=%d", s.Slot())
}
log.
WithField("blockSlot", b.Block().Slot()).
WithField("stateSlot", s.Slot()).
WithField("stateRoot", hexutil.Encode(sr[:])).
WithField("blockRoot", hexutil.Encode(br[:])).
Info("Downloaded checkpoint sync state and block.")
return &OriginData{
st: s,
b: b,
sb: sb,
bb: bb,
vu: vu,
br: br,
sr: sr,
}, nil
}
// WeakSubjectivityData represents the state root, block root and epoch of the BeaconState + ReadOnlySignedBeaconBlock
// that falls at the beginning of the current weak subjectivity period. These values can be used to construct
// a weak subjectivity checkpoint beacon node flag to be used for validation.
type WeakSubjectivityData struct {
BlockRoot [32]byte
StateRoot [32]byte
Epoch primitives.Epoch
}
// CheckpointString returns the standard string representation of a Checkpoint.
// The format is a hex-encoded block root, followed by the epoch of the block, separated by a colon. For example:
// "0x1c35540cac127315fabb6bf29181f2ae0de1a3fc909d2e76ba771e61312cc49a:74888"
func (wsd *WeakSubjectivityData) CheckpointString() string {
return fmt.Sprintf("%#x:%d", wsd.BlockRoot, wsd.Epoch)
}
// ComputeWeakSubjectivityCheckpoint attempts to use the prysm weak_subjectivity api
// to obtain the current weak_subjectivity checkpoint.
// For non-prysm nodes, the same computation will be performed with extra steps,
// using the head state downloaded from the beacon node api.
func ComputeWeakSubjectivityCheckpoint(ctx context.Context, client *Client) (*WeakSubjectivityData, error) {
ws, err := client.GetWeakSubjectivity(ctx)
if err != nil {
// a 404/405 is expected if querying an endpoint that doesn't support the weak subjectivity checkpoint api
if !errors.Is(err, base.ErrNotOK) {
return nil, errors.Wrap(err, "unexpected API response for prysm-only weak subjectivity checkpoint API")
}
// fall back to vanilla Beacon Node API method
return computeBackwardsCompatible(ctx, client)
}
log.Printf("server weak subjectivity checkpoint response - epoch=%d, block_root=%#x, state_root=%#x", ws.Epoch, ws.BlockRoot, ws.StateRoot)
return ws, nil
}
const (
prysmMinimumVersion = "v2.0.7"
prysmImplementationName = "Prysm"
)
// errUnsupportedPrysmCheckpointVersion indicates remote beacon node can't be used for checkpoint retrieval.
var errUnsupportedPrysmCheckpointVersion = errors.New("node does not meet minimum version requirements for checkpoint retrieval")
// for older endpoints or clients that do not support the weak_subjectivity api method
// we gather the necessary data for a checkpoint sync by:
// - inspecting the remote server's head state and computing the weak subjectivity epoch locally
// - requesting the state at the first slot of the epoch
// - using hash_tree_root(state.latest_block_header) to compute the block the state integrates
// - requesting that block by its root
func computeBackwardsCompatible(ctx context.Context, client *Client) (*WeakSubjectivityData, error) {
log.Print("falling back to generic checkpoint derivation, weak_subjectivity API not supported by server")
nv, err := client.GetNodeVersion(ctx)
if err != nil {
return nil, errors.Wrap(err, "unable to proceed with fallback method without confirming node version")
}
if nv.implementation == prysmImplementationName && semver.Compare(nv.semver, prysmMinimumVersion) < 0 {
return nil, errors.Wrapf(errUnsupportedPrysmCheckpointVersion, "%s < minimum (%s)", nv.semver, prysmMinimumVersion)
}
epoch, err := getWeakSubjectivityEpochFromHead(ctx, client)
if err != nil {
return nil, errors.Wrap(err, "error computing weak subjectivity epoch via head state inspection")
}
// use first slot of the epoch for the state slot
slot, err := slots.EpochStart(epoch)
if err != nil {
return nil, errors.Wrapf(err, "error computing first slot of epoch=%d", epoch)
}
log.Printf("requesting checkpoint state at slot %d", slot)
// get the state at the first slot of the epoch
sb, err := client.GetState(ctx, IdFromSlot(slot))
if err != nil {
return nil, errors.Wrapf(err, "failed to request state by slot from api, slot=%d", slot)
}
// ConfigFork is used to unmarshal the BeaconState so we can read the block root in latest_block_header
vu, err := detect.FromState(sb)
if err != nil {
return nil, errors.Wrap(err, "error detecting chain config for beacon state")
}
log.Printf("detected supported config in checkpoint state, name=%s, fork=%s", vu.Config.ConfigName, version.String(vu.Fork))
s, err := vu.UnmarshalBeaconState(sb)
if err != nil {
return nil, errors.Wrap(err, "error using detected config fork to unmarshal state bytes")
}
// compute state and block roots
sr, err := s.HashTreeRoot(ctx)
if err != nil {
return nil, errors.Wrap(err, "error computing hash_tree_root of state")
}
h := s.LatestBlockHeader()
h.StateRoot = sr[:]
br, err := h.HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "error while computing block root using state data")
}
bb, err := client.GetBlock(ctx, IdFromRoot(br))
if err != nil {
return nil, errors.Wrapf(err, "error requesting block by root = %d", br)
}
b, err := vu.UnmarshalBeaconBlock(bb)
if err != nil {
return nil, errors.Wrap(err, "unable to unmarshal block to a supported type using the detected fork schedule")
}
br, err = b.Block().HashTreeRoot()
if err != nil {
return nil, errors.Wrap(err, "error computing hash_tree_root for block obtained via root")
}
return &WeakSubjectivityData{
Epoch: epoch,
BlockRoot: br,
StateRoot: sr,
}, nil
}
// this method downloads the head state, which can be used to find the correct chain config
// and use prysm's helper methods to compute the latest weak subjectivity epoch.
func getWeakSubjectivityEpochFromHead(ctx context.Context, client *Client) (primitives.Epoch, error) {
headBytes, err := client.GetState(ctx, IdHead)
if err != nil {
return 0, err
}
vu, err := detect.FromState(headBytes)
if err != nil {
return 0, errors.Wrap(err, "error detecting chain config for beacon state")
}
log.Printf("detected supported config in remote head state, name=%s, fork=%s", vu.Config.ConfigName, version.String(vu.Fork))
headState, err := vu.UnmarshalBeaconState(headBytes)
if err != nil {
return 0, errors.Wrap(err, "error unmarshaling state to correct version")
}
epoch, err := helpers.LatestWeakSubjectivityEpoch(ctx, headState, vu.Config)
if err != nil {
return 0, errors.Wrap(err, "error computing the weak subjectivity epoch from head state")
}
log.Printf("(computed client-side) weak subjectivity epoch = %d", epoch)
return epoch, nil
}

View File

@@ -1,481 +0,0 @@
package beacon
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"testing"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/client"
"github.com/prysmaticlabs/prysm/v5/beacon-chain/state"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
blocktest "github.com/prysmaticlabs/prysm/v5/consensus-types/blocks/testing"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/ssz/detect"
"github.com/prysmaticlabs/prysm/v5/network/forks"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
"github.com/prysmaticlabs/prysm/v5/testing/require"
"github.com/prysmaticlabs/prysm/v5/testing/util"
"github.com/prysmaticlabs/prysm/v5/time/slots"
)
type testRT struct {
rt func(*http.Request) (*http.Response, error)
}
func (rt *testRT) RoundTrip(req *http.Request) (*http.Response, error) {
if rt.rt != nil {
return rt.rt(req)
}
return nil, errors.New("RoundTripper not implemented")
}
var _ http.RoundTripper = &testRT{}
func marshalToEnvelope(val interface{}) ([]byte, error) {
raw, err := json.Marshal(val)
if err != nil {
return nil, errors.Wrap(err, "error marshaling value to place in data envelope")
}
env := struct {
Data json.RawMessage `json:"data"`
}{
Data: raw,
}
return json.Marshal(env)
}
func TestMarshalToEnvelope(t *testing.T) {
d := struct {
Version string `json:"version"`
}{
Version: "Prysm/v2.0.5 (linux amd64)",
}
encoded, err := marshalToEnvelope(d)
require.NoError(t, err)
expected := `{"data":{"version":"Prysm/v2.0.5 (linux amd64)"}}`
require.Equal(t, expected, string(encoded))
}
func TestFallbackVersionCheck(t *testing.T) {
trans := &testRT{rt: func(req *http.Request) (*http.Response, error) {
res := &http.Response{Request: req}
switch req.URL.Path {
case getNodeVersionPath:
res.StatusCode = http.StatusOK
b := bytes.NewBuffer(nil)
d := struct {
Version string `json:"version"`
}{
Version: "Prysm/v2.0.5 (linux amd64)",
}
encoded, err := marshalToEnvelope(d)
require.NoError(t, err)
b.Write(encoded)
res.Body = io.NopCloser(b)
case getWeakSubjectivityPath:
res.StatusCode = http.StatusNotFound
}
return res, nil
}}
c, err := NewClient("http://localhost:3500", client.WithRoundTripper(trans))
require.NoError(t, err)
ctx := context.Background()
_, err = ComputeWeakSubjectivityCheckpoint(ctx, c)
require.ErrorIs(t, err, errUnsupportedPrysmCheckpointVersion)
}
func TestFname(t *testing.T) {
vu := &detect.VersionedUnmarshaler{
Config: params.MainnetConfig(),
Fork: version.Phase0,
}
slot := primitives.Slot(23)
prefix := "block"
var root [32]byte
copy(root[:], []byte{0x23, 0x23, 0x23})
expected := "block_mainnet_phase0_23-0x2323230000000000000000000000000000000000000000000000000000000000.ssz"
actual := fname(prefix, vu, slot, root)
require.Equal(t, expected, actual)
vu.Config = params.MinimalSpecConfig()
vu.Fork = version.Altair
slot = 17
prefix = "state"
copy(root[29:], []byte{0x17, 0x17, 0x17})
expected = "state_minimal_altair_17-0x2323230000000000000000000000000000000000000000000000000000171717.ssz"
actual = fname(prefix, vu, slot, root)
require.Equal(t, expected, actual)
}
func TestDownloadWeakSubjectivityCheckpoint(t *testing.T) {
ctx := context.Background()
cfg := params.MainnetConfig().Copy()
epoch := cfg.AltairForkEpoch - 1
// set up checkpoint state, using the epoch that will be computed as the ws checkpoint state based on the head state
wSlot, err := slots.EpochStart(epoch)
require.NoError(t, err)
wst, err := util.NewBeaconState()
require.NoError(t, err)
fork, err := forkForEpoch(cfg, epoch)
require.NoError(t, err)
require.NoError(t, wst.SetFork(fork))
// set up checkpoint block
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
b, err = blocktest.SetBlockParentRoot(b, cfg.ZeroHash)
require.NoError(t, err)
b, err = blocktest.SetBlockSlot(b, wSlot)
require.NoError(t, err)
b, err = blocktest.SetProposerIndex(b, 0)
require.NoError(t, err)
// set up state header pointing at checkpoint block - this is how the block is downloaded by root
header, err := b.Header()
require.NoError(t, err)
require.NoError(t, wst.SetLatestBlockHeader(header.Header))
// order of operations can be confusing here:
// - when computing the state root, make sure block header is complete, EXCEPT the state root should be zero-value
// - before computing the block root (to match the request route), the block should include the state root
// *computed from the state with a header that does not have a state root set yet*
wRoot, err := wst.HashTreeRoot(ctx)
require.NoError(t, err)
b, err = blocktest.SetBlockStateRoot(b, wRoot)
require.NoError(t, err)
serBlock, err := b.MarshalSSZ()
require.NoError(t, err)
bRoot, err := b.Block().HashTreeRoot()
require.NoError(t, err)
wsSerialized, err := wst.MarshalSSZ()
require.NoError(t, err)
expectedWSD := WeakSubjectivityData{
BlockRoot: bRoot,
StateRoot: wRoot,
Epoch: epoch,
}
trans := &testRT{rt: func(req *http.Request) (*http.Response, error) {
res := &http.Response{Request: req}
switch req.URL.Path {
case getWeakSubjectivityPath:
res.StatusCode = http.StatusOK
cp := struct {
Epoch string `json:"epoch"`
Root string `json:"root"`
}{
Epoch: fmt.Sprintf("%d", slots.ToEpoch(b.Block().Slot())),
Root: fmt.Sprintf("%#x", bRoot),
}
wsr := struct {
Checkpoint interface{} `json:"ws_checkpoint"`
StateRoot string `json:"state_root"`
}{
Checkpoint: cp,
StateRoot: fmt.Sprintf("%#x", wRoot),
}
rb, err := marshalToEnvelope(wsr)
require.NoError(t, err)
res.Body = io.NopCloser(bytes.NewBuffer(rb))
case renderGetStatePath(IdFromSlot(wSlot)):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(wsSerialized))
case renderGetBlockPath(IdFromRoot(bRoot)):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(serBlock))
}
return res, nil
}}
c, err := NewClient("http://localhost:3500", client.WithRoundTripper(trans))
require.NoError(t, err)
wsd, err := ComputeWeakSubjectivityCheckpoint(ctx, c)
require.NoError(t, err)
require.Equal(t, expectedWSD.Epoch, wsd.Epoch)
require.Equal(t, expectedWSD.StateRoot, wsd.StateRoot)
require.Equal(t, expectedWSD.BlockRoot, wsd.BlockRoot)
}
// runs computeBackwardsCompatible directly
// and via ComputeWeakSubjectivityCheckpoint with a round tripper that triggers the backwards compatible code path
func TestDownloadBackwardsCompatibleCombined(t *testing.T) {
ctx := context.Background()
cfg := params.MainnetConfig()
st, expectedEpoch := defaultTestHeadState(t, cfg)
serialized, err := st.MarshalSSZ()
require.NoError(t, err)
// set up checkpoint state, using the epoch that will be computed as the ws checkpoint state based on the head state
wSlot, err := slots.EpochStart(expectedEpoch)
require.NoError(t, err)
wst, err := util.NewBeaconState()
require.NoError(t, err)
fork, err := forkForEpoch(cfg, cfg.GenesisEpoch)
require.NoError(t, err)
require.NoError(t, wst.SetFork(fork))
// set up checkpoint block
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
b, err = blocktest.SetBlockParentRoot(b, cfg.ZeroHash)
require.NoError(t, err)
b, err = blocktest.SetBlockSlot(b, wSlot)
require.NoError(t, err)
b, err = blocktest.SetProposerIndex(b, 0)
require.NoError(t, err)
// set up state header pointing at checkpoint block - this is how the block is downloaded by root
header, err := b.Header()
require.NoError(t, err)
require.NoError(t, wst.SetLatestBlockHeader(header.Header))
// order of operations can be confusing here:
// - when computing the state root, make sure block header is complete, EXCEPT the state root should be zero-value
// - before computing the block root (to match the request route), the block should include the state root
// *computed from the state with a header that does not have a state root set yet*
wRoot, err := wst.HashTreeRoot(ctx)
require.NoError(t, err)
b, err = blocktest.SetBlockStateRoot(b, wRoot)
require.NoError(t, err)
serBlock, err := b.MarshalSSZ()
require.NoError(t, err)
bRoot, err := b.Block().HashTreeRoot()
require.NoError(t, err)
wsSerialized, err := wst.MarshalSSZ()
require.NoError(t, err)
trans := &testRT{rt: func(req *http.Request) (*http.Response, error) {
res := &http.Response{Request: req}
switch req.URL.Path {
case getNodeVersionPath:
res.StatusCode = http.StatusOK
b := bytes.NewBuffer(nil)
d := struct {
Version string `json:"version"`
}{
Version: "Lighthouse/v0.1.5 (Linux x86_64)",
}
encoded, err := marshalToEnvelope(d)
require.NoError(t, err)
b.Write(encoded)
res.Body = io.NopCloser(b)
case getWeakSubjectivityPath:
res.StatusCode = http.StatusNotFound
case renderGetStatePath(IdHead):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(serialized))
case renderGetStatePath(IdFromSlot(wSlot)):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(wsSerialized))
case renderGetBlockPath(IdFromRoot(bRoot)):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(serBlock))
}
return res, nil
}}
c, err := NewClient("http://localhost:3500", client.WithRoundTripper(trans))
require.NoError(t, err)
wsPub, err := ComputeWeakSubjectivityCheckpoint(ctx, c)
require.NoError(t, err)
wsPriv, err := computeBackwardsCompatible(ctx, c)
require.NoError(t, err)
require.DeepEqual(t, wsPriv, wsPub)
}
func TestGetWeakSubjectivityEpochFromHead(t *testing.T) {
st, expectedEpoch := defaultTestHeadState(t, params.MainnetConfig())
serialized, err := st.MarshalSSZ()
require.NoError(t, err)
trans := &testRT{rt: func(req *http.Request) (*http.Response, error) {
res := &http.Response{Request: req}
if req.URL.Path == renderGetStatePath(IdHead) {
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(serialized))
}
return res, nil
}}
c, err := NewClient("http://localhost:3500", client.WithRoundTripper(trans))
require.NoError(t, err)
actualEpoch, err := getWeakSubjectivityEpochFromHead(context.Background(), c)
require.NoError(t, err)
require.Equal(t, expectedEpoch, actualEpoch)
}
func forkForEpoch(cfg *params.BeaconChainConfig, epoch primitives.Epoch) (*ethpb.Fork, error) {
os := forks.NewOrderedSchedule(cfg)
currentVersion, err := os.VersionForEpoch(epoch)
if err != nil {
return nil, err
}
prevVersion, err := os.Previous(currentVersion)
if err != nil {
if !errors.Is(err, forks.ErrNoPreviousVersion) {
return nil, err
}
// use same version for both in the case of genesis
prevVersion = currentVersion
}
forkEpoch := cfg.ForkVersionSchedule[currentVersion]
return &ethpb.Fork{
PreviousVersion: prevVersion[:],
CurrentVersion: currentVersion[:],
Epoch: forkEpoch,
}, nil
}
func defaultTestHeadState(t *testing.T, cfg *params.BeaconChainConfig) (state.BeaconState, primitives.Epoch) {
st, err := util.NewBeaconStateAltair()
require.NoError(t, err)
fork, err := forkForEpoch(cfg, cfg.AltairForkEpoch)
require.NoError(t, err)
require.NoError(t, st.SetFork(fork))
slot, err := slots.EpochStart(cfg.AltairForkEpoch)
require.NoError(t, err)
require.NoError(t, st.SetSlot(slot))
var validatorCount, avgBalance uint64 = 100, 35
require.NoError(t, populateValidators(cfg, st, validatorCount, avgBalance))
require.NoError(t, st.SetFinalizedCheckpoint(&ethpb.Checkpoint{
Epoch: fork.Epoch - 10,
Root: make([]byte, 32),
}))
// to see the math for this, look at helpers.LatestWeakSubjectivityEpoch
// and for the values use mainnet config values, the validatorCount and avgBalance above, and altair fork epoch
expectedEpoch := slots.ToEpoch(st.Slot()) - 224
return st, expectedEpoch
}
// TODO(10429): refactor beacon state options in testing/util to take a state.BeaconState so this can become an option
func populateValidators(cfg *params.BeaconChainConfig, st state.BeaconState, valCount, avgBalance uint64) error {
validators := make([]*ethpb.Validator, valCount)
balances := make([]uint64, len(validators))
for i := uint64(0); i < valCount; i++ {
validators[i] = &ethpb.Validator{
PublicKey: make([]byte, cfg.BLSPubkeyLength),
WithdrawalCredentials: make([]byte, 32),
EffectiveBalance: avgBalance * 1e9,
ExitEpoch: cfg.FarFutureEpoch,
}
balances[i] = validators[i].EffectiveBalance
}
if err := st.SetValidators(validators); err != nil {
return err
}
return st.SetBalances(balances)
}
func TestDownloadFinalizedData(t *testing.T) {
ctx := context.Background()
cfg := params.MainnetConfig().Copy()
// avoid the altair zone because genesis tests are easier to set up
epoch := cfg.AltairForkEpoch - 1
// set up checkpoint state, using the epoch that will be computed as the ws checkpoint state based on the head state
slot, err := slots.EpochStart(epoch)
require.NoError(t, err)
st, err := util.NewBeaconState()
require.NoError(t, err)
fork, err := forkForEpoch(cfg, epoch)
require.NoError(t, err)
require.NoError(t, st.SetFork(fork))
require.NoError(t, st.SetSlot(slot))
// set up checkpoint block
b, err := blocks.NewSignedBeaconBlock(util.NewBeaconBlock())
require.NoError(t, err)
b, err = blocktest.SetBlockParentRoot(b, cfg.ZeroHash)
require.NoError(t, err)
b, err = blocktest.SetBlockSlot(b, slot)
require.NoError(t, err)
b, err = blocktest.SetProposerIndex(b, 0)
require.NoError(t, err)
// set up state header pointing at checkpoint block - this is how the block is downloaded by root
header, err := b.Header()
require.NoError(t, err)
require.NoError(t, st.SetLatestBlockHeader(header.Header))
// order of operations can be confusing here:
// - when computing the state root, make sure block header is complete, EXCEPT the state root should be zero-value
// - before computing the block root (to match the request route), the block should include the state root
// *computed from the state with a header that does not have a state root set yet*
sr, err := st.HashTreeRoot(ctx)
require.NoError(t, err)
b, err = blocktest.SetBlockStateRoot(b, sr)
require.NoError(t, err)
mb, err := b.MarshalSSZ()
require.NoError(t, err)
br, err := b.Block().HashTreeRoot()
require.NoError(t, err)
ms, err := st.MarshalSSZ()
require.NoError(t, err)
trans := &testRT{rt: func(req *http.Request) (*http.Response, error) {
res := &http.Response{Request: req}
switch req.URL.Path {
case renderGetStatePath(IdFinalized):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(ms))
case renderGetBlockPath(IdFromSlot(b.Block().Slot())):
res.StatusCode = http.StatusOK
res.Body = io.NopCloser(bytes.NewBuffer(mb))
default:
res.StatusCode = http.StatusInternalServerError
res.Body = io.NopCloser(bytes.NewBufferString(""))
}
return res, nil
}}
c, err := NewClient("http://localhost:3500", client.WithRoundTripper(trans))
require.NoError(t, err)
// sanity check before we go through the checkpoint download path
// make sure we can download the state and unmarshal it with the VersionedUnmarshaler
sb, err := c.GetState(ctx, IdFinalized)
require.NoError(t, err)
require.Equal(t, true, bytes.Equal(sb, ms))
vu, err := detect.FromState(sb)
require.NoError(t, err)
us, err := vu.UnmarshalBeaconState(sb)
require.NoError(t, err)
ushtr, err := us.HashTreeRoot(ctx)
require.NoError(t, err)
require.Equal(t, sr, ushtr)
expected := &OriginData{
sb: ms,
bb: mb,
br: br,
sr: sr,
}
od, err := DownloadFinalizedData(ctx, c)
require.NoError(t, err)
require.Equal(t, true, bytes.Equal(expected.sb, od.sb))
require.Equal(t, true, bytes.Equal(expected.bb, od.bb))
require.Equal(t, expected.br, od.br)
require.Equal(t, expected.sr, od.sr)
}


@@ -1,363 +0,0 @@
package beacon
import (
"bytes"
"context"
"encoding/json"
"fmt"
"net/http"
"net/url"
"path"
"regexp"
"sort"
"strconv"
"text/template"
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/client"
"github.com/prysmaticlabs/prysm/v5/api/server"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
"github.com/prysmaticlabs/prysm/v5/network/forks"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/sirupsen/logrus"
)
const (
getSignedBlockPath = "/eth/v2/beacon/blocks"
getBlockRootPath = "/eth/v1/beacon/blocks/{{.Id}}/root"
getForkForStatePath = "/eth/v1/beacon/states/{{.Id}}/fork"
getWeakSubjectivityPath = "/prysm/v1/beacon/weak_subjectivity"
getForkSchedulePath = "/eth/v1/config/fork_schedule"
getConfigSpecPath = "/eth/v1/config/spec"
getStatePath = "/eth/v2/debug/beacon/states"
getNodeVersionPath = "/eth/v1/node/version"
changeBLStoExecutionPath = "/eth/v1/beacon/pool/bls_to_execution_changes"
)
// StateOrBlockId represents the block_id / state_id parameters that several of the Eth Beacon API methods accept.
// StateOrBlockId constants are defined for named identifiers, and helper methods are provided
// for slot and root identifiers. Example text from the Eth Beacon Node API documentation:
//
// "Block identifier can be one of: "head" (canonical head in node's view), "genesis", "finalized",
// <slot>, <hex encoded blockRoot with 0x prefix>."
type StateOrBlockId string
const (
IdGenesis StateOrBlockId = "genesis"
IdHead StateOrBlockId = "head"
IdFinalized StateOrBlockId = "finalized"
)
// IdFromRoot encodes a block root in the format expected by the API in places where a root can be used to identify
// a BeaconState or SignedBeaconBlock.
func IdFromRoot(r [32]byte) StateOrBlockId {
return StateOrBlockId(fmt.Sprintf("%#x", r))
}
// IdFromSlot encodes a Slot in the format expected by the API in places where a slot can be used to identify
// a BeaconState or SignedBeaconBlock.
func IdFromSlot(s primitives.Slot) StateOrBlockId {
return StateOrBlockId(strconv.FormatUint(uint64(s), 10))
}
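// Hypothetical usage sketch (not part of the original file): the two helpers above
// produce the string forms the API expects - a base-10 slot number and a
// 0x-prefixed hex-encoded root, respectively.
func exampleIds() {
    fmt.Println(IdFromSlot(primitives.Slot(23))) // "23"
    fmt.Println(IdFromRoot([32]byte{0xff}))      // "0xff00...00" (32 bytes hex-encoded with a 0x prefix)
}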
// idTemplate is used to create template functions that can interpolate StateOrBlockId values.
func idTemplate(ts string) func(StateOrBlockId) string {
t := template.Must(template.New("").Parse(ts))
f := func(id StateOrBlockId) string {
b := bytes.NewBuffer(nil)
err := t.Execute(b, struct{ Id string }{Id: string(id)})
if err != nil {
panic(fmt.Sprintf("invalid idTemplate: %s", ts))
}
return b.String()
}
// run the template to ensure that it is valid
// this should happen at load time (using package-scoped vars) so that runtime errors aren't possible
_ = f(IdGenesis)
return f
}
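// Hypothetical sketch (not part of the original file): idTemplate turns a path
// template such as getBlockRootPath into a renderer that interpolates a
// StateOrBlockId into the {{.Id}} placeholder.
func exampleIdTemplate() {
    renderRoot := idTemplate(getBlockRootPath)
    fmt.Println(renderRoot(IdHead)) // "/eth/v1/beacon/blocks/head/root"
}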
func renderGetBlockPath(id StateOrBlockId) string {
return path.Join(getSignedBlockPath, string(id))
}
// Client provides a collection of helper methods for calling the Eth Beacon Node API endpoints.
type Client struct {
*client.Client
}
// NewClient returns a new Client that includes functions for rest calls to Beacon API.
func NewClient(host string, opts ...client.ClientOpt) (*Client, error) {
c, err := client.NewClient(host, opts...)
if err != nil {
return nil, err
}
return &Client{c}, nil
}
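// Hypothetical usage sketch (not part of the original file): constructing a client
// and downloading the finalized block as ssz-encoded bytes. The host value is an
// assumption; point it at a real beacon node API endpoint.
func exampleNewClient(ctx context.Context) ([]byte, error) {
    c, err := NewClient("http://localhost:3500")
    if err != nil {
        return nil, err
    }
    return c.GetBlock(ctx, IdFinalized)
}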
// GetBlock retrieves the SignedBeaconBlock for the given block id.
// Block identifier can be one of: "head" (canonical head in node's view), "genesis", "finalized",
// <slot>, <hex encoded blockRoot with 0x prefix>. Variables of type StateOrBlockId are exported by this package
// for the named identifiers.
// The return value contains the ssz-encoded bytes.
func (c *Client) GetBlock(ctx context.Context, blockId StateOrBlockId) ([]byte, error) {
blockPath := renderGetBlockPath(blockId)
b, err := c.Get(ctx, blockPath, client.WithSSZEncoding())
if err != nil {
return nil, errors.Wrapf(err, "error requesting state by id = %s", blockId)
}
return b, nil
}
var getBlockRootTpl = idTemplate(getBlockRootPath)
// GetBlockRoot retrieves the hash_tree_root of the BeaconBlock for the given block id.
// Block identifier can be one of: "head" (canonical head in node's view), "genesis", "finalized",
// <slot>, <hex encoded blockRoot with 0x prefix>. Variables of type StateOrBlockId are exported by this package
// for the named identifiers.
func (c *Client) GetBlockRoot(ctx context.Context, blockId StateOrBlockId) ([32]byte, error) {
rootPath := getBlockRootTpl(blockId)
b, err := c.Get(ctx, rootPath)
if err != nil {
return [32]byte{}, errors.Wrapf(err, "error requesting block root by id = %s", blockId)
}
jsonr := &struct{ Data struct{ Root string } }{}
err = json.Unmarshal(b, jsonr)
if err != nil {
return [32]byte{}, errors.Wrap(err, "error decoding json data from get block root response")
}
rs, err := hexutil.Decode(jsonr.Data.Root)
if err != nil {
return [32]byte{}, errors.Wrap(err, fmt.Sprintf("error decoding hex-encoded value %s", jsonr.Data.Root))
}
return bytesutil.ToBytes32(rs), nil
}
var getForkTpl = idTemplate(getForkForStatePath)
// GetFork queries the Beacon Node API for the Fork from the state identified by stateId.
// Block identifier can be one of: "head" (canonical head in node's view), "genesis", "finalized",
// <slot>, <hex encoded blockRoot with 0x prefix>. Variables of type StateOrBlockId are exported by this package
// for the named identifiers.
func (c *Client) GetFork(ctx context.Context, stateId StateOrBlockId) (*ethpb.Fork, error) {
body, err := c.Get(ctx, getForkTpl(stateId))
if err != nil {
return nil, errors.Wrapf(err, "error requesting fork by state id = %s", stateId)
}
fr := &structs.Fork{}
dataWrapper := &struct{ Data *structs.Fork }{Data: fr}
err = json.Unmarshal(body, dataWrapper)
if err != nil {
return nil, errors.Wrap(err, "error decoding json response in GetFork")
}
return fr.ToConsensus()
}
// GetForkSchedule retrieves all forks, past, present, and future, of which this node is aware.
func (c *Client) GetForkSchedule(ctx context.Context) (forks.OrderedSchedule, error) {
body, err := c.Get(ctx, getForkSchedulePath)
if err != nil {
return nil, errors.Wrap(err, "error requesting fork schedule")
}
fsr := &forkScheduleResponse{}
err = json.Unmarshal(body, fsr)
if err != nil {
return nil, err
}
ofs, err := fsr.OrderedForkSchedule()
if err != nil {
return nil, errors.Wrap(err, fmt.Sprintf("problem unmarshaling %s response", getForkSchedulePath))
}
return ofs, nil
}
// GetConfigSpec retrieves the current config of the network used by the beacon node.
func (c *Client) GetConfigSpec(ctx context.Context) (*structs.GetSpecResponse, error) {
body, err := c.Get(ctx, getConfigSpecPath)
if err != nil {
return nil, errors.Wrap(err, "error requesting configSpecPath")
}
fsr := &structs.GetSpecResponse{}
err = json.Unmarshal(body, fsr)
if err != nil {
return nil, err
}
return fsr, nil
}
type NodeVersion struct {
implementation string
semver string
systemInfo string
}
var versionRE = regexp.MustCompile(`^(\w+)/(v\d+\.\d+\.\d+[-a-zA-Z0-9]*)\s*/?(.*)$`)
func parseNodeVersion(v string) (*NodeVersion, error) {
groups := versionRE.FindStringSubmatch(v)
if len(groups) != 4 {
return nil, errors.Wrapf(client.ErrInvalidNodeVersion, "could not be parsed: %s", v)
}
return &NodeVersion{
implementation: groups[1],
semver: groups[2],
systemInfo: groups[3],
}, nil
}
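// Hypothetical sketch (not part of the original file): parsing a version string of
// the form returned by the /eth/v1/node/version endpoint.
func exampleParseNodeVersion() {
    nv, err := parseNodeVersion("Prysm/v2.0.6 (linux amd64)")
    if err == nil {
        fmt.Println(nv.implementation, nv.semver, nv.systemInfo) // Prysm v2.0.6 (linux amd64)
    }
}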
// GetNodeVersion requests that the beacon node identify its implementation in a format
// similar to an HTTP User-Agent field, e.g. Lighthouse/v0.1.5 (Linux x86_64).
func (c *Client) GetNodeVersion(ctx context.Context) (*NodeVersion, error) {
b, err := c.Get(ctx, getNodeVersionPath)
if err != nil {
return nil, errors.Wrap(err, "error requesting node version")
}
d := struct {
Data struct {
Version string `json:"version"`
} `json:"data"`
}{}
err = json.Unmarshal(b, &d)
if err != nil {
return nil, errors.Wrapf(err, "error unmarshaling response body: %s", string(b))
}
return parseNodeVersion(d.Data.Version)
}
func renderGetStatePath(id StateOrBlockId) string {
return path.Join(getStatePath, string(id))
}
// GetState retrieves the BeaconState for the given state id.
// State identifier can be one of: "head" (canonical head in node's view), "genesis", "finalized",
// <slot>, <hex encoded stateRoot with 0x prefix>. Variables of type StateOrBlockId are exported by this package
// for the named identifiers.
// The return value contains the ssz-encoded bytes.
func (c *Client) GetState(ctx context.Context, stateId StateOrBlockId) ([]byte, error) {
statePath := path.Join(getStatePath, string(stateId))
b, err := c.Get(ctx, statePath, client.WithSSZEncoding())
if err != nil {
return nil, errors.Wrapf(err, "error requesting state by id = %s", stateId)
}
return b, nil
}
// GetWeakSubjectivity calls a proposed API endpoint that is unique to Prysm.
// This API method does the following:
// - computes the weak subjectivity epoch
// - finds the highest non-skipped block preceding that epoch
// - returns the hash tree root of that block, along with the state_root recorded in the block
func (c *Client) GetWeakSubjectivity(ctx context.Context) (*WeakSubjectivityData, error) {
body, err := c.Get(ctx, getWeakSubjectivityPath)
if err != nil {
return nil, err
}
v := &structs.GetWeakSubjectivityResponse{}
err = json.Unmarshal(body, v)
if err != nil {
return nil, err
}
epoch, err := strconv.ParseUint(v.Data.WsCheckpoint.Epoch, 10, 64)
if err != nil {
return nil, err
}
blockRoot, err := hexutil.Decode(v.Data.WsCheckpoint.Root)
if err != nil {
return nil, err
}
stateRoot, err := hexutil.Decode(v.Data.StateRoot)
if err != nil {
return nil, err
}
return &WeakSubjectivityData{
Epoch: primitives.Epoch(epoch),
BlockRoot: bytesutil.ToBytes32(blockRoot),
StateRoot: bytesutil.ToBytes32(stateRoot),
}, nil
}
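// Hypothetical usage sketch (not part of the original file): fetching the weak
// subjectivity data and logging the checkpoint fields.
func exampleWeakSubjectivity(ctx context.Context, c *Client) error {
    ws, err := c.GetWeakSubjectivity(ctx)
    if err != nil {
        return err
    }
    fmt.Printf("ws checkpoint: epoch=%d block_root=%#x state_root=%#x\n", ws.Epoch, ws.BlockRoot, ws.StateRoot)
    return nil
}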
// SubmitChangeBLStoExecution calls a beacon API endpoint to set the withdrawal addresses based on the given signed messages.
// If the API responds with something other than OK there will be failure messages associated to the corresponding request message.
func (c *Client) SubmitChangeBLStoExecution(ctx context.Context, request []*structs.SignedBLSToExecutionChange) error {
u := c.BaseURL().ResolveReference(&url.URL{Path: changeBLStoExecutionPath})
body, err := json.Marshal(request)
if err != nil {
return errors.Wrap(err, "failed to marshal JSON")
}
req, err := http.NewRequestWithContext(ctx, http.MethodPost, u.String(), bytes.NewBuffer(body))
if err != nil {
return errors.Wrap(err, "invalid format, failed to create new POST request object")
}
req.Header.Set("Content-Type", "application/json")
resp, err := c.Do(req)
if err != nil {
return err
}
defer func() {
err = resp.Body.Close()
}()
if resp.StatusCode != http.StatusOK {
decoder := json.NewDecoder(resp.Body)
decoder.DisallowUnknownFields()
errorJson := &server.IndexedVerificationFailureError{}
if err := decoder.Decode(errorJson); err != nil {
return errors.Wrapf(err, "failed to decode error JSON for %s", resp.Request.URL)
}
for _, failure := range errorJson.Failures {
w := request[failure.Index].Message
log.WithFields(logrus.Fields{
"validatorIndex": w.ValidatorIndex,
"withdrawalAddress": w.ToExecutionAddress,
}).Error(failure.Message)
}
return errors.Errorf("POST error %d: %s", errorJson.Code, errorJson.Message)
}
return nil
}
// GetBLStoExecutionChanges gets all the set withdrawal messages in the node's operation pool.
// Returns a struct representation of json response.
func (c *Client) GetBLStoExecutionChanges(ctx context.Context) (*structs.BLSToExecutionChangesPoolResponse, error) {
body, err := c.Get(ctx, changeBLStoExecutionPath)
if err != nil {
return nil, err
}
poolResponse := &structs.BLSToExecutionChangesPoolResponse{}
err = json.Unmarshal(body, poolResponse)
if err != nil {
return nil, err
}
return poolResponse, nil
}
type forkScheduleResponse struct {
Data []structs.Fork
}
func (fsr *forkScheduleResponse) OrderedForkSchedule() (forks.OrderedSchedule, error) {
ofs := make(forks.OrderedSchedule, 0)
for _, d := range fsr.Data {
epoch, err := strconv.ParseUint(d.Epoch, 10, 64)
if err != nil {
return nil, errors.Wrapf(err, "error parsing epoch %s", d.Epoch)
}
vSlice, err := hexutil.Decode(d.CurrentVersion)
if err != nil {
return nil, err
}
if len(vSlice) != 4 {
return nil, fmt.Errorf("got %d byte version, expected 4 bytes. version hex=%s", len(vSlice), d.CurrentVersion)
}
version := bytesutil.ToBytes4(vSlice)
ofs = append(ofs, forks.ForkScheduleEntry{
Version: version,
Epoch: primitives.Epoch(epoch),
})
}
sort.Sort(ofs)
return ofs, nil
}
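// Hypothetical sketch (not part of the original file): each fork_schedule entry
// carries the epoch as a base-10 string and the version as 4 bytes of 0x-prefixed
// hex; conversion decodes both and sorts the entries into an ordered schedule.
func exampleForkSchedule() (forks.OrderedSchedule, error) {
    fsr := &forkScheduleResponse{Data: []structs.Fork{
        {Epoch: "0", CurrentVersion: "0x00000000"},
    }}
    return fsr.OrderedForkSchedule()
}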


@@ -1,139 +0,0 @@
package beacon
import (
"net/url"
"testing"
"github.com/prysmaticlabs/prysm/v5/api/client"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestParseNodeVersion(t *testing.T) {
cases := []struct {
name string
v string
err error
nv *NodeVersion
}{
{
name: "empty string",
v: "",
err: client.ErrInvalidNodeVersion,
},
{
name: "Prysm as the version string",
v: "Prysm",
err: client.ErrInvalidNodeVersion,
},
{
name: "semver only",
v: "v2.0.6",
err: client.ErrInvalidNodeVersion,
},
{
name: "complete version",
v: "Prysm/v2.0.6 (linux amd64)",
nv: &NodeVersion{
implementation: "Prysm",
semver: "v2.0.6",
systemInfo: "(linux amd64)",
},
},
{
name: "nimbus version",
v: "Nimbus/v22.4.0-039bec-stateofus",
nv: &NodeVersion{
implementation: "Nimbus",
semver: "v22.4.0-039bec-stateofus",
systemInfo: "",
},
},
{
name: "teku version",
v: "teku/v22.3.2/linux-x86_64/oracle-java-11",
nv: &NodeVersion{
implementation: "teku",
semver: "v22.3.2",
systemInfo: "linux-x86_64/oracle-java-11",
},
},
{
name: "lighthouse version",
v: "Lighthouse/v2.1.1-5f628a7/x86_64-linux",
nv: &NodeVersion{
implementation: "Lighthouse",
semver: "v2.1.1-5f628a7",
systemInfo: "x86_64-linux",
},
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
nv, err := parseNodeVersion(c.v)
if c.err != nil {
require.ErrorIs(t, err, c.err)
} else {
require.NoError(t, err)
require.DeepEqual(t, c.nv, nv)
}
})
}
}
func TestValidHostname(t *testing.T) {
cases := []struct {
name string
hostArg string
path string
joined string
err error
}{
{
name: "hostname without port",
hostArg: "mydomain.org",
err: client.ErrMalformedHostname,
},
{
name: "hostname with port",
hostArg: "mydomain.org:3500",
path: getNodeVersionPath,
joined: "http://mydomain.org:3500/eth/v1/node/version",
},
{
name: "https scheme, hostname with port",
hostArg: "https://mydomain.org:3500",
path: getNodeVersionPath,
joined: "https://mydomain.org:3500/eth/v1/node/version",
},
{
name: "http scheme, hostname without port",
hostArg: "http://mydomain.org",
path: getNodeVersionPath,
joined: "http://mydomain.org/eth/v1/node/version",
},
{
name: "http scheme, trailing slash, hostname without port",
hostArg: "http://mydomain.org/",
path: getNodeVersionPath,
joined: "http://mydomain.org/eth/v1/node/version",
},
{
name: "http scheme, hostname with basic auth creds and no port",
hostArg: "http://username:pass@mydomain.org/",
path: getNodeVersionPath,
joined: "http://username:pass@mydomain.org/eth/v1/node/version",
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
cl, err := NewClient(c.hostArg)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return
}
require.NoError(t, err)
require.Equal(t, c.joined, cl.BaseURL().ResolveReference(&url.URL{Path: c.path}).String())
})
}
}


@@ -1,5 +0,0 @@
/*
Package beacon provides a client for interacting with the standard Eth Beacon Node API.
Interactive swagger documentation for the API is available here: https://ethereum.github.io/beacon-APIs/
*/
package beacon


@@ -1,60 +0,0 @@
package beacon
import (
"context"
"sync"
"github.com/prysmaticlabs/prysm/v5/api/client/beacon/iface"
)
type NodeHealthTracker struct {
isHealthy *bool
healthChan chan bool
node iface.HealthNode
sync.RWMutex
}
func NewNodeHealthTracker(node iface.HealthNode) *NodeHealthTracker {
return &NodeHealthTracker{
node: node,
healthChan: make(chan bool, 1),
}
}
// HealthUpdates provides a read-only channel for health updates.
func (n *NodeHealthTracker) HealthUpdates() <-chan bool {
return n.healthChan
}
func (n *NodeHealthTracker) IsHealthy() bool {
n.RLock()
defer n.RUnlock()
if n.isHealthy == nil {
return false
}
return *n.isHealthy
}
func (n *NodeHealthTracker) CheckHealth(ctx context.Context) bool {
n.Lock()
defer n.Unlock()
newStatus := n.node.IsHealthy(ctx)
if n.isHealthy == nil {
n.isHealthy = &newStatus
}
isStatusChanged := newStatus != *n.isHealthy
if isStatusChanged {
// Update the health status
n.isHealthy = &newStatus
// Send the new status to the health channel, potentially overwriting the existing value
select {
case <-n.healthChan:
n.healthChan <- newStatus
default:
n.healthChan <- newStatus
}
}
return newStatus
}
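// Hypothetical usage sketch (not part of the original file): poll the node and
// react to transitions reported on the updates channel.
func exampleTrackHealth(ctx context.Context, node iface.HealthNode) {
    tracker := NewNodeHealthTracker(node)
    _ = tracker.CheckHealth(ctx) // records the current status
    select {
    case healthy := <-tracker.HealthUpdates():
        _ = healthy // a transition was observed (never fires on the very first check)
    default:
        // no status change since the last check
    }
}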


@@ -1,112 +0,0 @@
package beacon
import (
"context"
"sync"
"testing"
healthTesting "github.com/prysmaticlabs/prysm/v5/api/client/beacon/testing"
"go.uber.org/mock/gomock"
)
func TestNodeHealth_IsHealthy(t *testing.T) {
tests := []struct {
name string
isHealthy bool
want bool
}{
{"initially healthy", true, true},
{"initially unhealthy", false, false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
n := &NodeHealthTracker{
isHealthy: &tt.isHealthy,
healthChan: make(chan bool, 1),
}
if got := n.IsHealthy(); got != tt.want {
t.Errorf("IsHealthy() = %v, want %v", got, tt.want)
}
})
}
}
func TestNodeHealth_UpdateNodeHealth(t *testing.T) {
tests := []struct {
name string
initial bool // Initial health status
newStatus bool // Status to update to
shouldSend bool // Should a message be sent through the channel
}{
{"healthy to unhealthy", true, false, true},
{"unhealthy to healthy", false, true, true},
{"remain healthy", true, true, false},
{"remain unhealthy", false, false, false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
client := healthTesting.NewMockHealthClient(ctrl)
client.EXPECT().IsHealthy(gomock.Any()).Return(tt.newStatus)
n := &NodeHealthTracker{
isHealthy: &tt.initial,
node: client,
healthChan: make(chan bool, 1),
}
s := n.CheckHealth(context.Background())
// Check if health status was updated
if s != tt.newStatus {
t.Errorf("UpdateNodeHealth() failed to update isHealthy from %v to %v", tt.initial, tt.newStatus)
}
select {
case status := <-n.HealthUpdates():
if !tt.shouldSend {
t.Errorf("UpdateNodeHealth() unexpectedly sent status %v to HealthCh", status)
} else if status != tt.newStatus {
t.Errorf("UpdateNodeHealth() sent wrong status %v, want %v", status, tt.newStatus)
}
default:
if tt.shouldSend {
t.Error("UpdateNodeHealth() did not send any status to HealthCh when expected")
}
}
})
}
}
func TestNodeHealth_Concurrency(t *testing.T) {
ctrl := gomock.NewController(t)
defer ctrl.Finish()
client := healthTesting.NewMockHealthClient(ctrl)
n := NewNodeHealthTracker(client)
var wg sync.WaitGroup
// Number of goroutines to spawn for both reading and writing
numGoroutines := 6
wg.Add(numGoroutines * 2) // for readers and writers
// Concurrently update health status
for i := 0; i < numGoroutines; i++ {
go func() {
defer wg.Done()
client.EXPECT().IsHealthy(gomock.Any()).Return(false).Times(1)
n.CheckHealth(context.Background())
client.EXPECT().IsHealthy(gomock.Any()).Return(true).Times(1)
n.CheckHealth(context.Background())
}()
}
// Concurrently read health status
for i := 0; i < numGoroutines; i++ {
go func() {
defer wg.Done()
_ = n.IsHealthy() // Just read the value
}()
}
wg.Wait() // Wait for all goroutines to finish
}


@@ -1,8 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["health.go"],
importpath = "github.com/prysmaticlabs/prysm/v5/api/client/beacon/iface",
visibility = ["//visibility:public"],
)


@@ -1,13 +0,0 @@
package iface
import "context"
type HealthTracker interface {
HealthUpdates() <-chan bool
IsHealthy() bool
CheckHealth(ctx context.Context) bool
}
type HealthNode interface {
IsHealthy(ctx context.Context) bool
}


@@ -1,5 +0,0 @@
package beacon
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "beacon")


@@ -1,12 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["mock.go"],
importpath = "github.com/prysmaticlabs/prysm/v5/api/client/beacon/testing",
visibility = ["//visibility:public"],
deps = [
"//api/client/beacon/iface:go_default_library",
"@org_uber_go_mock//gomock:go_default_library",
],
)


@@ -1,59 +0,0 @@
package testing
import (
"context"
"reflect"
"sync"
"github.com/prysmaticlabs/prysm/v5/api/client/beacon/iface"
"go.uber.org/mock/gomock"
)
var (
_ = iface.HealthNode(&MockHealthClient{})
)
// MockHealthClient is a mock of HealthClient interface.
type MockHealthClient struct {
ctrl *gomock.Controller
recorder *MockHealthClientMockRecorder
sync.Mutex
}
// MockHealthClientMockRecorder is the mock recorder for MockHealthClient.
type MockHealthClientMockRecorder struct {
mock *MockHealthClient
}
// IsHealthy mocks base method.
func (m *MockHealthClient) IsHealthy(arg0 context.Context) bool {
m.Lock()
defer m.Unlock()
m.ctrl.T.Helper()
ret := m.ctrl.Call(m, "IsHealthy", arg0)
ret0, ok := ret[0].(bool)
if !ok {
return false
}
return ret0
}
// EXPECT returns an object that allows the caller to indicate expected use.
func (m *MockHealthClient) EXPECT() *MockHealthClientMockRecorder {
return m.recorder
}
// IsHealthy indicates an expected call of IsHealthy.
func (mr *MockHealthClientMockRecorder) IsHealthy(arg0 any) *gomock.Call {
mr.mock.Lock()
defer mr.mock.Unlock()
mr.mock.ctrl.T.Helper()
return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "IsHealthy", reflect.TypeOf((*MockHealthClient)(nil).IsHealthy), arg0)
}
// NewMockHealthClient creates a new mock instance.
func NewMockHealthClient(ctrl *gomock.Controller) *MockHealthClient {
mock := &MockHealthClient{ctrl: ctrl}
mock.recorder = &MockHealthClientMockRecorder{mock}
return mock
}


@@ -1,62 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"bid.go",
"client.go",
"errors.go",
"types.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/api/client/builder",
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"//api/server/structs:go_default_library",
"//config/fieldparams:go_default_library",
"//consensus-types:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//monitoring/tracing:go_default_library",
"//monitoring/tracing/trace:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_fastssz//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
"@org_golang_google_protobuf//proto:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"client_test.go",
"types_test.go",
],
data = glob(["testdata/**"]),
embed = [":go_default_library"],
deps = [
"//api/server/structs:go_default_library",
"//config/fieldparams:go_default_library",
"//config/params:go_default_library",
"//consensus-types/blocks:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//math:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
"//runtime/version:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_ethereum_go_ethereum//common/hexutil:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_prysmaticlabs_go_bitfield//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)


@@ -1,292 +0,0 @@
package builder
import (
"github.com/pkg/errors"
ssz "github.com/prysmaticlabs/fastssz"
consensus_types "github.com/prysmaticlabs/prysm/v5/consensus-types"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
)
// SignedBid is an interface describing the method set of a signed builder bid.
type SignedBid interface {
Message() (Bid, error)
Signature() []byte
Version() int
IsNil() bool
}
// Bid is an interface describing the method set of a builder bid.
type Bid interface {
Header() (interfaces.ExecutionData, error)
BlobKzgCommitments() ([][]byte, error)
Value() primitives.Wei
Pubkey() []byte
Version() int
IsNil() bool
HashTreeRoot() ([32]byte, error)
HashTreeRootWith(hh *ssz.Hasher) error
}
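// Hypothetical sketch (not part of the original file): a caller typically unwraps
// the signed bid to reach the execution payload header, checking errors at each step.
func exampleInspectBid(sb SignedBid) (interfaces.ExecutionData, error) {
    bid, err := sb.Message()
    if err != nil {
        return nil, err
    }
    _ = bid.Value() // bid value in Wei, decoded from little-endian bytes by the wrapper
    return bid.Header()
}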
type signedBuilderBid struct {
p *ethpb.SignedBuilderBid
}
// WrappedSignedBuilderBid is a constructor which wraps a protobuf signed bid into an interface.
func WrappedSignedBuilderBid(p *ethpb.SignedBuilderBid) (SignedBid, error) {
w := signedBuilderBid{p: p}
if w.IsNil() {
return nil, consensus_types.ErrNilObjectWrapped
}
return w, nil
}
// Message --
func (b signedBuilderBid) Message() (Bid, error) {
return WrappedBuilderBid(b.p.Message)
}
// Signature --
func (b signedBuilderBid) Signature() []byte {
return b.p.Signature
}
// Version --
func (b signedBuilderBid) Version() int {
return version.Bellatrix
}
// IsNil --
func (b signedBuilderBid) IsNil() bool {
return b.p == nil
}
type signedBuilderBidCapella struct {
p *ethpb.SignedBuilderBidCapella
}
// WrappedSignedBuilderBidCapella is a constructor which wraps a protobuf signed bid into an interface.
func WrappedSignedBuilderBidCapella(p *ethpb.SignedBuilderBidCapella) (SignedBid, error) {
w := signedBuilderBidCapella{p: p}
if w.IsNil() {
return nil, consensus_types.ErrNilObjectWrapped
}
return w, nil
}
// Message --
func (b signedBuilderBidCapella) Message() (Bid, error) {
return WrappedBuilderBidCapella(b.p.Message)
}
// Signature --
func (b signedBuilderBidCapella) Signature() []byte {
return b.p.Signature
}
// Version --
func (b signedBuilderBidCapella) Version() int {
return version.Capella
}
// IsNil --
func (b signedBuilderBidCapella) IsNil() bool {
return b.p == nil
}
type builderBid struct {
p *ethpb.BuilderBid
}
// WrappedBuilderBid is a constructor which wraps a protobuf bid into an interface.
func WrappedBuilderBid(p *ethpb.BuilderBid) (Bid, error) {
w := builderBid{p: p}
if w.IsNil() {
return nil, consensus_types.ErrNilObjectWrapped
}
return w, nil
}
// Header --
func (b builderBid) Header() (interfaces.ExecutionData, error) {
return blocks.WrappedExecutionPayloadHeader(b.p.Header)
}
// BlobKzgCommitments --
func (b builderBid) BlobKzgCommitments() ([][]byte, error) {
return [][]byte{}, errors.New("blob kzg commitments not available before Deneb")
}
// Version --
func (b builderBid) Version() int {
return version.Bellatrix
}
// Value --
func (b builderBid) Value() primitives.Wei {
return primitives.LittleEndianBytesToWei(b.p.Value)
}
// Pubkey --
func (b builderBid) Pubkey() []byte {
return b.p.Pubkey
}
// IsNil --
func (b builderBid) IsNil() bool {
return b.p == nil
}
// HashTreeRoot --
func (b builderBid) HashTreeRoot() ([32]byte, error) {
return b.p.HashTreeRoot()
}
// HashTreeRootWith --
func (b builderBid) HashTreeRootWith(hh *ssz.Hasher) error {
return b.p.HashTreeRootWith(hh)
}
type builderBidCapella struct {
p *ethpb.BuilderBidCapella
}
// WrappedBuilderBidCapella is a constructor which wraps a protobuf bid into an interface.
func WrappedBuilderBidCapella(p *ethpb.BuilderBidCapella) (Bid, error) {
w := builderBidCapella{p: p}
if w.IsNil() {
return nil, consensus_types.ErrNilObjectWrapped
}
return w, nil
}
// Header returns the execution data interface.
func (b builderBidCapella) Header() (interfaces.ExecutionData, error) {
// We have to convert big endian to little endian because the value is coming from the execution layer.
return blocks.WrappedExecutionPayloadHeaderCapella(b.p.Header)
}
// BlobKzgCommitments --
func (b builderBidCapella) BlobKzgCommitments() ([][]byte, error) {
return [][]byte{}, errors.New("blob kzg commitments not available before Deneb")
}
// Version --
func (b builderBidCapella) Version() int {
return version.Capella
}
// Value --
func (b builderBidCapella) Value() primitives.Wei {
return primitives.LittleEndianBytesToWei(b.p.Value)
}
// Pubkey --
func (b builderBidCapella) Pubkey() []byte {
return b.p.Pubkey
}
// IsNil --
func (b builderBidCapella) IsNil() bool {
return b.p == nil
}
// HashTreeRoot --
func (b builderBidCapella) HashTreeRoot() ([32]byte, error) {
return b.p.HashTreeRoot()
}
// HashTreeRootWith --
func (b builderBidCapella) HashTreeRootWith(hh *ssz.Hasher) error {
return b.p.HashTreeRootWith(hh)
}
type builderBidDeneb struct {
p *ethpb.BuilderBidDeneb
}
// WrappedBuilderBidDeneb is a constructor which wraps a protobuf bid into an interface.
func WrappedBuilderBidDeneb(p *ethpb.BuilderBidDeneb) (Bid, error) {
w := builderBidDeneb{p: p}
if w.IsNil() {
return nil, consensus_types.ErrNilObjectWrapped
}
return w, nil
}
// Version --
func (b builderBidDeneb) Version() int {
return version.Deneb
}
// Value --
func (b builderBidDeneb) Value() primitives.Wei {
return primitives.LittleEndianBytesToWei(b.p.Value)
}
// Pubkey --
func (b builderBidDeneb) Pubkey() []byte {
return b.p.Pubkey
}
// IsNil --
func (b builderBidDeneb) IsNil() bool {
return b.p == nil
}
// HashTreeRoot --
func (b builderBidDeneb) HashTreeRoot() ([32]byte, error) {
return b.p.HashTreeRoot()
}
// HashTreeRootWith --
func (b builderBidDeneb) HashTreeRootWith(hh *ssz.Hasher) error {
return b.p.HashTreeRootWith(hh)
}
// Header --
func (b builderBidDeneb) Header() (interfaces.ExecutionData, error) {
// We have to convert big endian to little endian because the value is coming from the execution layer.
return blocks.WrappedExecutionPayloadHeaderDeneb(b.p.Header)
}
// BlobKzgCommitments --
func (b builderBidDeneb) BlobKzgCommitments() ([][]byte, error) {
return b.p.BlobKzgCommitments, nil
}
type signedBuilderBidDeneb struct {
p *ethpb.SignedBuilderBidDeneb
}
// WrappedSignedBuilderBidDeneb is a constructor which wraps a protobuf signed bid into an interface.
func WrappedSignedBuilderBidDeneb(p *ethpb.SignedBuilderBidDeneb) (SignedBid, error) {
w := signedBuilderBidDeneb{p: p}
if w.IsNil() {
return nil, consensus_types.ErrNilObjectWrapped
}
return w, nil
}
// Message --
func (b signedBuilderBidDeneb) Message() (Bid, error) {
return WrappedBuilderBidDeneb(b.p.Message)
}
// Signature --
func (b signedBuilderBidDeneb) Signature() []byte {
return b.p.Signature
}
// Version --
func (b signedBuilderBidDeneb) Version() int {
return version.Deneb
}
// IsNil --
func (b signedBuilderBidDeneb) IsNil() bool {
return b.p == nil
}


@@ -1,396 +0,0 @@
package builder
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net"
"net/http"
"net/url"
"strings"
"text/template"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing"
"github.com/prysmaticlabs/prysm/v5/monitoring/tracing/trace"
v1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/runtime/version"
log "github.com/sirupsen/logrus"
)
const (
getExecHeaderPath = "/eth/v1/builder/header/{{.Slot}}/{{.ParentHash}}/{{.Pubkey}}"
getStatus = "/eth/v1/builder/status"
postBlindedBeaconBlockPath = "/eth/v1/builder/blinded_blocks"
postRegisterValidatorPath = "/eth/v1/builder/validators"
)
var errMalformedHostname = errors.New("hostname must include port, separated by one colon, like example.com:3500")
var errMalformedRequest = errors.New("required request data are missing")
var errNotBlinded = errors.New("submitted block is not blinded")
// ClientOpt is a functional option for the Client type (http.Client wrapper)
type ClientOpt func(*Client)
type observer interface {
observe(r *http.Request) error
}
func WithObserver(m observer) ClientOpt {
return func(c *Client) {
c.obvs = append(c.obvs, m)
}
}
type requestLogger struct{}
func (*requestLogger) observe(r *http.Request) (e error) {
b := bytes.NewBuffer(nil)
if r.Body == nil {
log.WithFields(log.Fields{
"bodyBase64": "(nil value)",
"url": r.URL.String(),
}).Info("builder http request")
return nil
}
t := io.TeeReader(r.Body, b)
defer func() {
if r.Body != nil {
e = r.Body.Close()
}
}()
body, err := io.ReadAll(t)
if err != nil {
return err
}
r.Body = io.NopCloser(b)
log.WithFields(log.Fields{
"bodyBase64": string(body),
"url": r.URL.String(),
}).Info("builder http request")
return nil
}
var _ observer = &requestLogger{}
// BuilderClient provides a collection of helper methods for calling Builder API endpoints.
type BuilderClient interface {
NodeURL() string
GetHeader(ctx context.Context, slot primitives.Slot, parentHash [32]byte, pubkey [48]byte) (SignedBid, error)
RegisterValidator(ctx context.Context, svr []*ethpb.SignedValidatorRegistrationV1) error
SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, *v1.BlobsBundle, error)
Status(ctx context.Context) error
}
// Client provides a collection of helper methods for calling Builder API endpoints.
type Client struct {
hc *http.Client
baseURL *url.URL
obvs []observer
}
// NewClient constructs a new client with the provided options (ex WithTimeout).
// `host` is the base host + port used to construct request urls. This value can be
// a URL string, or NewClient will assume an http endpoint if just `host:port` is used.
func NewClient(host string, opts ...ClientOpt) (*Client, error) {
u, err := urlForHost(host)
if err != nil {
return nil, err
}
c := &Client{
hc: &http.Client{},
baseURL: u,
}
for _, o := range opts {
o(c)
}
return c, nil
}
func urlForHost(h string) (*url.URL, error) {
// try to parse as url (being permissive)
if u, err := url.Parse(h); err == nil && u.Host != "" {
return u, nil
}
// try to parse as host:port
host, port, err := net.SplitHostPort(h)
if err != nil {
return nil, errMalformedHostname
}
return &url.URL{Host: net.JoinHostPort(host, port), Scheme: "http"}, nil
}
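// Hypothetical sketch (not part of the original file): urlForHost accepts either a
// full URL or a bare host:port, defaulting the scheme to http in the latter case.
// The hosts shown are example values only.
func exampleURLForHost() {
    u1, _ := urlForHost("localhost:18550")             // -> http://localhost:18550
    u2, _ := urlForHost("https://builder.example.org") // scheme and host preserved
    fmt.Println(u1.String(), u2.String())
}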
// NodeURL returns a human-readable string representation of the builder node base URL.
func (c *Client) NodeURL() string {
return c.baseURL.String()
}
type reqOption func(*http.Request)
// do is a generic, opinionated request function that reduces boilerplate among the methods in this package (api/client/builder).
func (c *Client) do(ctx context.Context, method string, path string, body io.Reader, opts ...reqOption) (res []byte, err error) {
ctx, span := trace.StartSpan(ctx, "builder.client.do")
defer func() {
tracing.AnnotateError(span, err)
span.End()
}()
u := c.baseURL.ResolveReference(&url.URL{Path: path})
span.SetAttributes(trace.StringAttribute("url", u.String()),
trace.StringAttribute("method", method))
req, err := http.NewRequestWithContext(ctx, method, u.String(), body)
if err != nil {
return
}
req.Header.Add("User-Agent", version.BuildData())
for _, o := range opts {
o(req)
}
for _, o := range c.obvs {
if err = o.observe(req); err != nil {
return
}
}
r, err := c.hc.Do(req)
if err != nil {
return
}
defer func() {
closeErr := r.Body.Close()
if closeErr != nil {
log.WithError(closeErr).Error("Failed to close response body")
}
}()
if r.StatusCode != http.StatusOK {
err = non200Err(r)
return
}
res, err = io.ReadAll(r.Body)
if err != nil {
err = errors.Wrap(err, "error reading http response body from builder server")
return
}
return
}
var execHeaderTemplate = template.Must(template.New("").Parse(getExecHeaderPath))
func execHeaderPath(slot primitives.Slot, parentHash [32]byte, pubkey [48]byte) (string, error) {
v := struct {
Slot primitives.Slot
ParentHash string
Pubkey string
}{
Slot: slot,
ParentHash: fmt.Sprintf("%#x", parentHash),
Pubkey: fmt.Sprintf("%#x", pubkey),
}
b := bytes.NewBuffer(nil)
err := execHeaderTemplate.Execute(b, v)
if err != nil {
return "", errors.Wrapf(err, "error rendering exec header template with slot=%d, parentHash=%#x, pubkey=%#x", slot, parentHash, pubkey)
}
return b.String(), nil
}
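// Hypothetical sketch (not part of the original file): the rendered header path
// embeds the slot, the 32-byte parent hash, and the 48-byte proposer pubkey, e.g.
// /eth/v1/builder/header/23/0x<parent hash hex>/0x<pubkey hex>.
func exampleExecHeaderPath() (string, error) {
    var parentHash [32]byte
    var pubkey [48]byte
    return execHeaderPath(primitives.Slot(23), parentHash, pubkey)
}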
// GetHeader is used by a proposing validator to request an execution payload header from the Builder node.
func (c *Client) GetHeader(ctx context.Context, slot primitives.Slot, parentHash [32]byte, pubkey [48]byte) (SignedBid, error) {
path, err := execHeaderPath(slot, parentHash, pubkey)
if err != nil {
return nil, err
}
hb, err := c.do(ctx, http.MethodGet, path, nil)
if err != nil {
return nil, err
}
v := &VersionResponse{}
if err := json.Unmarshal(hb, v); err != nil {
return nil, errors.Wrapf(err, "error unmarshaling the builder GetHeader response, using slot=%d, parentHash=%#x, pubkey=%#x", slot, parentHash, pubkey)
}
switch strings.ToLower(v.Version) {
case strings.ToLower(version.String(version.Deneb)):
hr := &ExecHeaderResponseDeneb{}
if err := json.Unmarshal(hb, hr); err != nil {
return nil, errors.Wrapf(err, "error unmarshaling the builder GetHeader response, using slot=%d, parentHash=%#x, pubkey=%#x", slot, parentHash, pubkey)
}
p, err := hr.ToProto()
if err != nil {
return nil, errors.Wrapf(err, "could not extract proto message from header")
}
return WrappedSignedBuilderBidDeneb(p)
case strings.ToLower(version.String(version.Capella)):
hr := &ExecHeaderResponseCapella{}
if err := json.Unmarshal(hb, hr); err != nil {
return nil, errors.Wrapf(err, "error unmarshaling the builder GetHeader response, using slot=%d, parentHash=%#x, pubkey=%#x", slot, parentHash, pubkey)
}
p, err := hr.ToProto()
if err != nil {
return nil, errors.Wrapf(err, "could not extract proto message from header")
}
return WrappedSignedBuilderBidCapella(p)
case strings.ToLower(version.String(version.Bellatrix)):
hr := &ExecHeaderResponse{}
if err := json.Unmarshal(hb, hr); err != nil {
return nil, errors.Wrapf(err, "error unmarshaling the builder GetHeader response, using slot=%d, parentHash=%#x, pubkey=%#x", slot, parentHash, pubkey)
}
p, err := hr.ToProto()
if err != nil {
return nil, errors.Wrap(err, "could not extract proto message from header")
}
return WrappedSignedBuilderBid(p)
default:
return nil, fmt.Errorf("unsupported header version %s", strings.ToLower(v.Version))
}
}
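// Hypothetical usage sketch (not part of the original file): a proposer requests a
// header for its slot, the parent block hash, and its own pubkey, then unwraps the
// bid before deciding whether to use the builder payload.
func exampleGetHeader(ctx context.Context, c *Client, slot primitives.Slot, parentHash [32]byte, pubkey [48]byte) error {
    signedBid, err := c.GetHeader(ctx, slot, parentHash, pubkey)
    if err != nil {
        return err
    }
    bid, err := signedBid.Message()
    if err != nil {
        return err
    }
    _, err = bid.Header()
    return err
}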
// RegisterValidator encodes the SignedValidatorRegistrationV1 message to json (including hex-encoding the byte
// fields with 0x prefixes) and posts to the builder validator registration endpoint.
func (c *Client) RegisterValidator(ctx context.Context, svr []*ethpb.SignedValidatorRegistrationV1) error {
ctx, span := trace.StartSpan(ctx, "builder.client.RegisterValidator")
defer span.End()
span.SetAttributes(trace.Int64Attribute("num_reqs", int64(len(svr))))
if len(svr) == 0 {
err := errors.Wrap(errMalformedRequest, "empty validator registration list")
tracing.AnnotateError(span, err)
return err
}
vs := make([]*structs.SignedValidatorRegistration, len(svr))
for i := 0; i < len(svr); i++ {
vs[i] = structs.SignedValidatorRegistrationFromConsensus(svr[i])
}
body, err := json.Marshal(vs)
if err != nil {
err := errors.Wrap(err, "error encoding the SignedValidatorRegistration value body in RegisterValidator")
tracing.AnnotateError(span, err)
return err
}
_, err = c.do(ctx, http.MethodPost, postRegisterValidatorPath, bytes.NewBuffer(body))
if err != nil {
return err
}
log.WithField("num_registrations", len(svr)).Info("successfully registered validator(s) on builder")
return nil
}
var errResponseVersionMismatch = errors.New("builder API response uses a different version than requested in " + api.VersionHeader + " header")
// SubmitBlindedBlock calls the builder API endpoint that binds the validator to the builder and submits the block.
// The response is the full execution payload used to create the blinded block.
func (c *Client) SubmitBlindedBlock(ctx context.Context, sb interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, *v1.BlobsBundle, error) {
if !sb.IsBlinded() {
return nil, nil, errNotBlinded
}
// massage the proto struct type data into the api response type.
mj, err := structs.SignedBeaconBlockMessageJsoner(sb)
if err != nil {
return nil, nil, errors.Wrap(err, "error generating blinded beacon block post request")
}
body, err := json.Marshal(mj)
if err != nil {
return nil, nil, errors.Wrap(err, "error marshaling blinded block post request to json")
}
postOpts := func(r *http.Request) {
r.Header.Add("Eth-Consensus-Version", version.String(sb.Version()))
r.Header.Set("Content-Type", api.JsonMediaType)
r.Header.Set("Accept", api.JsonMediaType)
}
// post the blinded block - the execution payload response should contain the unblinded payload, along with the
// blobs bundle if the block is post-Deneb.
rb, err := c.do(ctx, http.MethodPost, postBlindedBeaconBlockPath, bytes.NewBuffer(body), postOpts)
if err != nil {
return nil, nil, errors.Wrap(err, "error posting the blinded block to the builder api")
}
// ExecutionPayloadResponse parses just the outer container and the Version field, enabling it to use .Version
// to determine which underlying data type to use to finish the unmarshaling.
ep := &ExecutionPayloadResponse{}
if err := json.Unmarshal(rb, ep); err != nil {
return nil, nil, errors.Wrap(err, "error unmarshaling the builder ExecutionPayloadResponse")
}
if strings.ToLower(ep.Version) != version.String(sb.Version()) {
return nil, nil, errors.Wrapf(errResponseVersionMismatch, "req=%s, recv=%s", version.String(sb.Version()), strings.ToLower(ep.Version))
}
// This parses the rest of the response and returns the inner data field.
pp, err := ep.ParsePayload()
if err != nil {
return nil, nil, errors.Wrapf(err, "failed to parse execution payload from builder with version=%s", ep.Version)
}
// Get the payload as a proto.Message so it can be wrapped as an execution payload interface.
pb, err := pp.PayloadProto()
if err != nil {
return nil, nil, err
}
ed, err := blocks.NewWrappedExecutionData(pb)
if err != nil {
return nil, nil, err
}
bb, ok := pp.(BlobBundler)
if ok {
bbpb, err := bb.BundleProto()
if err != nil {
return nil, nil, errors.Wrapf(err, "failed to extract blobs bundle from builder response with version=%s", ep.Version)
}
return ed, bbpb, nil
}
return ed, nil, nil
}
// Status asks the remote builder server for a health check. A response of 200 with an empty body is the success/healthy
// response, and an error response may have an error message. This method will return a nil value for error in the
// happy path, and an error with information about the server response body for a non-200 response.
func (c *Client) Status(ctx context.Context) error {
_, err := c.do(ctx, http.MethodGet, getStatus, nil)
return err
}
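// Hypothetical usage sketch (not part of the original file): construct a builder
// client and run a health check. The relay address is an example value only.
func exampleBuilderStatus(ctx context.Context) error {
    c, err := NewClient("localhost:18550")
    if err != nil {
        return err
    }
    return c.Status(ctx)
}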
func non200Err(response *http.Response) error {
bodyBytes, err := io.ReadAll(response.Body)
var errMessage ErrorMessage
var body string
if err != nil {
body = "(Unable to read response body.)"
} else {
body = "response body:\n" + string(bodyBytes)
}
msg := fmt.Sprintf("code=%d, url=%s, body=%s", response.StatusCode, response.Request.URL, body)
switch response.StatusCode {
case http.StatusNoContent:
log.WithError(ErrNoContent).Debug(msg)
return ErrNoContent
case http.StatusBadRequest:
log.WithError(ErrBadRequest).Debug(msg)
if jsonErr := json.Unmarshal(bodyBytes, &errMessage); jsonErr != nil {
return errors.Wrap(jsonErr, "unable to read response body")
}
return errors.Wrap(ErrBadRequest, errMessage.Message)
case http.StatusNotFound:
log.WithError(ErrNotFound).Debug(msg)
if jsonErr := json.Unmarshal(bodyBytes, &errMessage); jsonErr != nil {
return errors.Wrap(jsonErr, "unable to read response body")
}
return errors.Wrap(ErrNotFound, errMessage.Message)
case http.StatusInternalServerError:
log.WithError(ErrNotOK).Debug(msg)
if jsonErr := json.Unmarshal(bodyBytes, &errMessage); jsonErr != nil {
return errors.Wrap(jsonErr, "unable to read response body")
}
return errors.Wrap(ErrNotOK, errMessage.Message)
default:
log.WithError(ErrNotOK).Debug(msg)
return errors.Wrap(ErrNotOK, fmt.Sprintf("unsupported error code: %d", response.StatusCode))
}
}


@@ -1,898 +0,0 @@
package builder
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/url"
"strconv"
"testing"
"github.com/prysmaticlabs/go-bitfield"
"github.com/prysmaticlabs/prysm/v5/api/server/structs"
"github.com/prysmaticlabs/prysm/v5/config/params"
"github.com/prysmaticlabs/prysm/v5/consensus-types/blocks"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
types "github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
v1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
eth "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
log "github.com/sirupsen/logrus"
)
type roundtrip func(*http.Request) (*http.Response, error)
func (fn roundtrip) RoundTrip(r *http.Request) (*http.Response, error) {
return fn(r)
}
func TestClient_Status(t *testing.T) {
ctx := context.Background()
statusPath := "/eth/v1/builder/status"
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
defer func() {
if r.Body == nil {
return
}
require.NoError(t, r.Body.Close())
}()
require.Equal(t, statusPath, r.URL.Path)
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBuffer(nil)),
Request: r.Clone(ctx),
}, nil
}),
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
require.NoError(t, c.Status(ctx))
hc = &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
defer func() {
if r.Body == nil {
return
}
require.NoError(t, r.Body.Close())
}()
require.Equal(t, statusPath, r.URL.Path)
message := ErrorMessage{
Code: 500,
Message: "Internal server error",
}
resp, err := json.Marshal(message)
require.NoError(t, err)
return &http.Response{
StatusCode: http.StatusInternalServerError,
Body: io.NopCloser(bytes.NewBuffer(resp)),
Request: r.Clone(ctx),
}, nil
}),
}
c = &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
require.ErrorIs(t, c.Status(ctx), ErrNotOK)
}
func TestClient_RegisterValidator(t *testing.T) {
ctx := context.Background()
expectedBody := `[{"message":{"fee_recipient":"0x0000000000000000000000000000000000000000","gas_limit":"23","timestamp":"42","pubkey":"0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"},"signature":"0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"}]`
expectedPath := "/eth/v1/builder/validators"
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
body, err := io.ReadAll(r.Body)
defer func() {
require.NoError(t, r.Body.Close())
}()
require.NoError(t, err)
require.Equal(t, expectedBody, string(body))
require.Equal(t, expectedPath, r.URL.Path)
require.Equal(t, http.MethodPost, r.Method)
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBuffer(nil)),
Request: r.Clone(ctx),
}, nil
}),
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
reg := &eth.SignedValidatorRegistrationV1{
Message: &eth.ValidatorRegistrationV1{
FeeRecipient: ezDecode(t, params.BeaconConfig().EthBurnAddressHex),
GasLimit: 23,
Timestamp: 42,
Pubkey: ezDecode(t, "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"),
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
}
require.NoError(t, c.RegisterValidator(ctx, []*eth.SignedValidatorRegistrationV1{reg}))
}
func TestClient_GetHeader(t *testing.T) {
ctx := context.Background()
expectedPath := "/eth/v1/builder/header/23/0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2/0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"
var slot types.Slot = 23
parentHash := ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
pubkey := ezDecode(t, "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a")
t.Run("server error", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, expectedPath, r.URL.Path)
message := ErrorMessage{
Code: 500,
Message: "Internal server error",
}
resp, err := json.Marshal(message)
require.NoError(t, err)
return &http.Response{
StatusCode: http.StatusInternalServerError,
Body: io.NopCloser(bytes.NewBuffer(resp)),
Request: r.Clone(ctx),
}, nil
}),
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
_, err := c.GetHeader(ctx, slot, bytesutil.ToBytes32(parentHash), bytesutil.ToBytes48(pubkey))
require.ErrorIs(t, err, ErrNotOK)
})
t.Run("header not available", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, expectedPath, r.URL.Path)
return &http.Response{
StatusCode: http.StatusNoContent,
Body: io.NopCloser(bytes.NewBuffer([]byte("No header is available."))),
Request: r.Clone(ctx),
}, nil
}),
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
_, err := c.GetHeader(ctx, slot, bytesutil.ToBytes32(parentHash), bytesutil.ToBytes48(pubkey))
require.ErrorIs(t, err, ErrNoContent)
})
t.Run("bellatrix", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, expectedPath, r.URL.Path)
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleHeaderResponse)),
Request: r.Clone(ctx),
}, nil
}),
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
h, err := c.GetHeader(ctx, slot, bytesutil.ToBytes32(parentHash), bytesutil.ToBytes48(pubkey))
require.NoError(t, err)
expectedSig := ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505")
require.Equal(t, true, bytes.Equal(expectedSig, h.Signature()))
expectedTxRoot := ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
bid, err := h.Message()
require.NoError(t, err)
bidHeader, err := bid.Header()
require.NoError(t, err)
txRoot, err := bidHeader.TransactionsRoot()
require.NoError(t, err)
require.Equal(t, true, bytes.Equal(expectedTxRoot, txRoot))
require.Equal(t, uint64(1), bidHeader.GasUsed())
// this matches the value in the testExampleHeaderResponse
bidStr := "652312848583266388373324160190187140051835877600158453279131187530910662656"
value, err := stringToUint256(bidStr)
require.NoError(t, err)
require.Equal(t, 0, value.Int.Cmp(primitives.WeiToBigInt(bid.Value())))
require.Equal(t, bidStr, primitives.WeiToBigInt(bid.Value()).String())
})
t.Run("capella", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, expectedPath, r.URL.Path)
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleHeaderResponseCapella)),
Request: r.Clone(ctx),
}, nil
}),
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
h, err := c.GetHeader(ctx, slot, bytesutil.ToBytes32(parentHash), bytesutil.ToBytes48(pubkey))
require.NoError(t, err)
expectedWithdrawalsRoot := ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
bid, err := h.Message()
require.NoError(t, err)
bidHeader, err := bid.Header()
require.NoError(t, err)
withdrawalsRoot, err := bidHeader.WithdrawalsRoot()
require.NoError(t, err)
require.Equal(t, true, bytes.Equal(expectedWithdrawalsRoot, withdrawalsRoot))
bidStr := "652312848583266388373324160190187140051835877600158453279131187530910662656"
value, err := stringToUint256(bidStr)
require.NoError(t, err)
require.Equal(t, 0, value.Int.Cmp(primitives.WeiToBigInt(bid.Value())))
require.Equal(t, bidStr, primitives.WeiToBigInt(bid.Value()).String())
})
t.Run("deneb", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, expectedPath, r.URL.Path)
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleHeaderResponseDeneb)),
Request: r.Clone(ctx),
}, nil
}),
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
h, err := c.GetHeader(ctx, slot, bytesutil.ToBytes32(parentHash), bytesutil.ToBytes48(pubkey))
require.NoError(t, err)
expectedWithdrawalsRoot := ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")
bid, err := h.Message()
require.NoError(t, err)
bidHeader, err := bid.Header()
require.NoError(t, err)
withdrawalsRoot, err := bidHeader.WithdrawalsRoot()
require.NoError(t, err)
require.Equal(t, true, bytes.Equal(expectedWithdrawalsRoot, withdrawalsRoot))
bidStr := "652312848583266388373324160190187140051835877600158453279131187530910662656"
value, err := stringToUint256(bidStr)
require.NoError(t, err)
require.Equal(t, 0, value.Int.Cmp(primitives.WeiToBigInt(bid.Value())))
require.Equal(t, bidStr, primitives.WeiToBigInt(bid.Value()).String())
kzgCommitments, err := bid.BlobKzgCommitments()
require.NoError(t, err)
require.Equal(t, true, len(kzgCommitments) > 0)
for i := range kzgCommitments {
require.Equal(t, 48, len(kzgCommitments[i]))
}
})
t.Run("deneb, too many kzg commitments", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, expectedPath, r.URL.Path)
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleHeaderResponseDenebTooManyBlobs)),
Request: r.Clone(ctx),
}, nil
}),
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
_, err := c.GetHeader(ctx, slot, bytesutil.ToBytes32(parentHash), bytesutil.ToBytes48(pubkey))
require.ErrorContains(t, "could not extract proto message from header: too many blob commitments: 7", err)
})
t.Run("unsupported version", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, expectedPath, r.URL.Path)
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleHeaderResponseUnknownVersion)),
Request: r.Clone(ctx),
}, nil
}),
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
_, err := c.GetHeader(ctx, slot, bytesutil.ToBytes32(parentHash), bytesutil.ToBytes48(pubkey))
require.ErrorContains(t, "unsupported header version", err)
})
}
func TestSubmitBlindedBlock(t *testing.T) {
ctx := context.Background()
t.Run("bellatrix", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "bellatrix", r.Header.Get("Eth-Consensus-Version"))
require.Equal(t, "application/json", r.Header.Get("Content-Type"))
require.Equal(t, "application/json", r.Header.Get("Accept"))
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleExecutionPayload)),
Request: r.Clone(ctx),
}, nil
}),
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
sbbb, err := blocks.NewSignedBeaconBlock(testSignedBlindedBeaconBlockBellatrix(t))
require.NoError(t, err)
ep, _, err := c.SubmitBlindedBlock(ctx, sbbb)
require.NoError(t, err)
require.Equal(t, true, bytes.Equal(ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"), ep.ParentHash()))
bfpg, err := stringToUint256("452312848583266388373324160190187140051835877600158453279131187530910662656")
require.NoError(t, err)
require.Equal(t, fmt.Sprintf("%#x", bfpg.SSZBytes()), fmt.Sprintf("%#x", ep.BaseFeePerGas()))
require.Equal(t, uint64(1), ep.GasLimit())
})
t.Run("capella", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "capella", r.Header.Get("Eth-Consensus-Version"))
require.Equal(t, "application/json", r.Header.Get("Content-Type"))
require.Equal(t, "application/json", r.Header.Get("Accept"))
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleExecutionPayloadCapella)),
Request: r.Clone(ctx),
}, nil
}),
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
sbb, err := blocks.NewSignedBeaconBlock(testSignedBlindedBeaconBlockCapella(t))
require.NoError(t, err)
ep, _, err := c.SubmitBlindedBlock(ctx, sbb)
require.NoError(t, err)
withdrawals, err := ep.Withdrawals()
require.NoError(t, err)
require.Equal(t, 1, len(withdrawals))
assert.Equal(t, uint64(1), withdrawals[0].Index)
assert.Equal(t, types.ValidatorIndex(1), withdrawals[0].ValidatorIndex)
assert.DeepEqual(t, ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943"), withdrawals[0].Address)
assert.Equal(t, uint64(1), withdrawals[0].Amount)
})
t.Run("deneb", func(t *testing.T) {
test := testSignedBlindedBeaconBlockDeneb(t)
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
require.Equal(t, "deneb", r.Header.Get("Eth-Consensus-Version"))
require.Equal(t, "application/json", r.Header.Get("Content-Type"))
require.Equal(t, "application/json", r.Header.Get("Accept"))
var req structs.SignedBlindedBeaconBlockDeneb
err := json.NewDecoder(r.Body).Decode(&req)
require.NoError(t, err)
block, err := req.ToConsensus()
require.NoError(t, err)
require.DeepEqual(t, block, test)
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleExecutionPayloadDeneb)),
Request: r.Clone(ctx),
}, nil
}),
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
sbb, err := blocks.NewSignedBeaconBlock(test)
require.NoError(t, err)
ep, blobBundle, err := c.SubmitBlindedBlock(ctx, sbb)
require.NoError(t, err)
withdrawals, err := ep.Withdrawals()
require.NoError(t, err)
require.Equal(t, 1, len(withdrawals))
assert.Equal(t, uint64(1), withdrawals[0].Index)
assert.Equal(t, types.ValidatorIndex(1), withdrawals[0].ValidatorIndex)
assert.DeepEqual(t, ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943"), withdrawals[0].Address)
assert.Equal(t, uint64(1), withdrawals[0].Amount)
require.NotNil(t, blobBundle)
})
t.Run("mismatched versions, expected bellatrix got capella", func(t *testing.T) {
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, postBlindedBeaconBlockPath, r.URL.Path)
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleExecutionPayloadCapella)), // send a Capella payload
Request: r.Clone(ctx),
}, nil
}),
}
c := &Client{
hc: hc,
baseURL: &url.URL{Host: "localhost:3500", Scheme: "http"},
}
sbbb, err := blocks.NewSignedBeaconBlock(testSignedBlindedBeaconBlockBellatrix(t))
require.NoError(t, err)
_, _, err = c.SubmitBlindedBlock(ctx, sbbb)
require.ErrorIs(t, err, errResponseVersionMismatch)
})
t.Run("not blinded", func(t *testing.T) {
sbb, err := blocks.NewSignedBeaconBlock(&eth.SignedBeaconBlockBellatrix{Block: &eth.BeaconBlockBellatrix{Body: &eth.BeaconBlockBodyBellatrix{ExecutionPayload: &v1.ExecutionPayload{}}}})
require.NoError(t, err)
_, _, err = (&Client{}).SubmitBlindedBlock(ctx, sbb)
require.ErrorIs(t, err, errNotBlinded)
})
}
func testSignedBlindedBeaconBlockBellatrix(t *testing.T) *eth.SignedBlindedBeaconBlockBellatrix {
return &eth.SignedBlindedBeaconBlockBellatrix{
Block: &eth.BlindedBeaconBlockBellatrix{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Body: &eth.BlindedBeaconBlockBodyBellatrix{
RandaoReveal: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
Eth1Data: &eth.Eth1Data{
DepositRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
DepositCount: 1,
BlockHash: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Graffiti: ezDecode(t, "0xdeadbeefc0ffee"),
ProposerSlashings: []*eth.ProposerSlashing{
{
Header_1: &eth.SignedBeaconBlockHeader{
Header: &eth.BeaconBlockHeader{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BodyRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
Header_2: &eth.SignedBeaconBlockHeader{
Header: &eth.BeaconBlockHeader{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BodyRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
},
AttesterSlashings: []*eth.AttesterSlashing{
{
Attestation_1: &eth.IndexedAttestation{
AttestingIndices: []uint64{1},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Source: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
Attestation_2: &eth.IndexedAttestation{
AttestingIndices: []uint64{1},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Source: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
},
Attestations: []*eth.Attestation{
{
AggregationBits: bitfield.Bitlist{0x01},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Source: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
Deposits: []*eth.Deposit{
{
Proof: [][]byte{ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")},
Data: &eth.Deposit_Data{
PublicKey: ezDecode(t, "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"),
WithdrawalCredentials: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Amount: 1,
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
},
VoluntaryExits: []*eth.SignedVoluntaryExit{
{
Exit: &eth.VoluntaryExit{
Epoch: 1,
ValidatorIndex: 1,
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
SyncAggregate: &eth.SyncAggregate{
SyncCommitteeSignature: make([]byte, 48),
SyncCommitteeBits: bitfield.Bitvector512{0x01},
},
ExecutionPayloadHeader: &v1.ExecutionPayloadHeader{
ParentHash: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
FeeRecipient: ezDecode(t, "0xabcf8e0d4e9587369b2301d0790347320302cc09"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
ReceiptsRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
LogsBloom: ezDecode(t, "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"),
PrevRandao: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BlockNumber: 1,
GasLimit: 1,
GasUsed: 1,
Timestamp: 1,
ExtraData: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BaseFeePerGas: []byte(strconv.FormatUint(1, 10)),
BlockHash: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
TransactionsRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
},
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
}
}
func testSignedBlindedBeaconBlockCapella(t *testing.T) *eth.SignedBlindedBeaconBlockCapella {
return &eth.SignedBlindedBeaconBlockCapella{
Block: &eth.BlindedBeaconBlockCapella{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Body: &eth.BlindedBeaconBlockBodyCapella{
RandaoReveal: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
Eth1Data: &eth.Eth1Data{
DepositRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
DepositCount: 1,
BlockHash: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Graffiti: ezDecode(t, "0xdeadbeefc0ffee"),
ProposerSlashings: []*eth.ProposerSlashing{
{
Header_1: &eth.SignedBeaconBlockHeader{
Header: &eth.BeaconBlockHeader{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BodyRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
Header_2: &eth.SignedBeaconBlockHeader{
Header: &eth.BeaconBlockHeader{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BodyRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
},
AttesterSlashings: []*eth.AttesterSlashing{
{
Attestation_1: &eth.IndexedAttestation{
AttestingIndices: []uint64{1},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Source: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
Attestation_2: &eth.IndexedAttestation{
AttestingIndices: []uint64{1},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Source: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
},
Attestations: []*eth.Attestation{
{
AggregationBits: bitfield.Bitlist{0x01},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Source: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
Deposits: []*eth.Deposit{
{
Proof: [][]byte{ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")},
Data: &eth.Deposit_Data{
PublicKey: ezDecode(t, "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"),
WithdrawalCredentials: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Amount: 1,
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
},
VoluntaryExits: []*eth.SignedVoluntaryExit{
{
Exit: &eth.VoluntaryExit{
Epoch: 1,
ValidatorIndex: 1,
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
SyncAggregate: &eth.SyncAggregate{
SyncCommitteeSignature: make([]byte, 48),
SyncCommitteeBits: bitfield.Bitvector512{0x01},
},
ExecutionPayloadHeader: &v1.ExecutionPayloadHeaderCapella{
ParentHash: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
FeeRecipient: ezDecode(t, "0xabcf8e0d4e9587369b2301d0790347320302cc09"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
ReceiptsRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
LogsBloom: ezDecode(t, "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"),
PrevRandao: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BlockNumber: 1,
GasLimit: 1,
GasUsed: 1,
Timestamp: 1,
ExtraData: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BaseFeePerGas: []byte(strconv.FormatUint(1, 10)),
BlockHash: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
TransactionsRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
WithdrawalsRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
},
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
}
}
func testSignedBlindedBeaconBlockDeneb(t *testing.T) *eth.SignedBlindedBeaconBlockDeneb {
basebytes, err := bytesutil.Uint256ToSSZBytes("14074904626401341155369551180448584754667373453244490859944217516317499064576")
if err != nil {
log.Error(err)
}
return &eth.SignedBlindedBeaconBlockDeneb{
Message: &eth.BlindedBeaconBlockDeneb{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Body: &eth.BlindedBeaconBlockBodyDeneb{
RandaoReveal: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
Eth1Data: &eth.Eth1Data{
DepositRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
DepositCount: 1,
BlockHash: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Graffiti: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
ProposerSlashings: []*eth.ProposerSlashing{
{
Header_1: &eth.SignedBeaconBlockHeader{
Header: &eth.BeaconBlockHeader{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BodyRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
Header_2: &eth.SignedBeaconBlockHeader{
Header: &eth.BeaconBlockHeader{
Slot: 1,
ProposerIndex: 1,
ParentRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BodyRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
},
AttesterSlashings: []*eth.AttesterSlashing{
{
Attestation_1: &eth.IndexedAttestation{
AttestingIndices: []uint64{1},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Source: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
Attestation_2: &eth.IndexedAttestation{
AttestingIndices: []uint64{1},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Source: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
},
Attestations: []*eth.Attestation{
{
AggregationBits: bitfield.Bitlist{0x01},
Data: &eth.AttestationData{
Slot: 1,
CommitteeIndex: 1,
BeaconBlockRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Source: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
Target: &eth.Checkpoint{
Epoch: 1,
Root: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
},
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
Deposits: []*eth.Deposit{
{
Proof: [][]byte{ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2")},
Data: &eth.Deposit_Data{
PublicKey: ezDecode(t, "0x93247f2209abcacf57b75a51dafae777f9dd38bc7053d1af526f220a7489a6d3a2753e5f3e8b1cfe39b56f43611df74a"),
WithdrawalCredentials: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
Amount: 1,
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
},
VoluntaryExits: []*eth.SignedVoluntaryExit{
{
Exit: &eth.VoluntaryExit{
Epoch: 1,
ValidatorIndex: 1,
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
},
},
SyncAggregate: &eth.SyncAggregate{
SyncCommitteeSignature: make([]byte, 96),
SyncCommitteeBits: ezDecode(t, "0x6451e9f951ebf05edc01de67e593484b672877054f055903ff0df1a1a945cf30ca26bb4d4b154f94a1bc776bcf5d0efb3603e1f9b8ee2499ccdcfe2a18cef458"),
},
ExecutionPayloadHeader: &v1.ExecutionPayloadHeaderDeneb{
ParentHash: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
FeeRecipient: ezDecode(t, "0xabcf8e0d4e9587369b2301d0790347320302cc09"),
StateRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
ReceiptsRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
LogsBloom: ezDecode(t, "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"),
PrevRandao: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BlockNumber: 1,
GasLimit: 1,
GasUsed: 1,
Timestamp: 1,
ExtraData: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BaseFeePerGas: basebytes,
BlockHash: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
TransactionsRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
WithdrawalsRoot: ezDecode(t, "0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"),
BlobGasUsed: 1,
ExcessBlobGas: 2,
},
},
},
Signature: ezDecode(t, "0x1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505cc411d61252fb6cb3fa0017b679f8bb2305b26a285fa2737f175668d0dff91cc1b66ac1fb663c9bc59509846d6ec05345bd908eda73e670af888da41af171505"),
}
}
func TestRequestLogger(t *testing.T) {
wo := WithObserver(&requestLogger{})
c, err := NewClient("localhost:3500", wo)
require.NoError(t, err)
ctx := context.Background()
hc := &http.Client{
Transport: roundtrip(func(r *http.Request) (*http.Response, error) {
require.Equal(t, getStatus, r.URL.Path)
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(bytes.NewBufferString(testExampleExecutionPayload)),
Request: r.Clone(ctx),
}, nil
}),
}
c.hc = hc
err = c.Status(ctx)
require.NoError(t, err)
}
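
The tests above stub network calls by assigning a function to http.Client.Transport through the roundtrip helper. Its exact definition is not shown here; a minimal sketch of such an adapter, under the assumption that it is a plain function type implementing http.RoundTripper, would be:

package builder

import "net/http"

// roundTripFunc is an illustrative stand-in for the roundtrip helper used in
// these tests: a function type that satisfies http.RoundTripper so a closure
// can fabricate HTTP responses without a real server.
type roundTripFunc func(*http.Request) (*http.Response, error)

// RoundTrip calls the wrapped function, making roundTripFunc usable as an
// http.Client transport.
func (f roundTripFunc) RoundTrip(r *http.Request) (*http.Response, error) {
    return f(r)
}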


@@ -1,17 +0,0 @@
package builder
import "github.com/pkg/errors"
// ErrNotOK is used to indicate when an HTTP request to the Beacon Node API failed with any non-2xx response code.
// More specific errors may be returned, but an error in reaction to a non-2xx response will always wrap ErrNotOK.
var ErrNotOK = errors.New("did not receive 200 response from API")
// ErrNotFound specifically means that a '404 - NOT FOUND' response was received from the API.
var ErrNotFound = errors.Wrap(ErrNotOK, "recv 404 NotFound response from API")
// ErrBadRequest specifically means that a '400 - BAD REQUEST' response was received from the API.
var ErrBadRequest = errors.Wrap(ErrNotOK, "recv 400 BadRequest response from API")
// ErrNoContent specifically means that a '204 - No Content' response was received from the API.
// Typically, a 204 is a success but in this case for the Header API means No header is available
var ErrNoContent = errors.New("recv 204 no content response from API, No header is available")
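
Because ErrNotFound and ErrBadRequest wrap ErrNotOK while ErrNoContent stands alone, callers can branch on the specific condition with errors.Is. A minimal sketch (the helper name is illustrative, not part of the package):

package builder

import (
    "errors"
    "fmt"
)

// describeHeaderErr shows how the sentinels above compose: a 204 is reported
// separately from the non-2xx family, which all unwrap to ErrNotOK.
func describeHeaderErr(err error) string {
    switch {
    case err == nil:
        return "ok"
    case errors.Is(err, ErrNoContent):
        return "204: no header available for this slot"
    case errors.Is(err, ErrNotOK):
        return fmt.Sprintf("non-2xx response from builder: %v", err)
    default:
        return fmt.Sprintf("request to builder failed: %v", err)
    }
}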

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -1 +0,0 @@
{"parent_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","fee_recipient":"0xabcf8e0d4e9587369b2301d0790347320302cc09","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","receipts_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","logs_bloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","prev_randao":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","block_number":"1","gas_limit":"1","gas_used":"1","timestamp":"1","extra_data":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","base_fee_per_gas":"14074904626401341155369551180448584754667373453244490859944217516317499064576","block_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","transactions_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","withdrawals_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}


@@ -1 +0,0 @@
{"parent_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","fee_recipient":"0xabcf8e0d4e9587369b2301d0790347320302cc09","state_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","receipts_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","logs_bloom":"0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000","prev_randao":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","block_number":"1","gas_limit":"1","gas_used":"1","timestamp":"1","extra_data":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","base_fee_per_gas":"14074904626401341155369551180448584754667373453244490859944217516317499064576","block_hash":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2","transactions_root":"0xcf8e0d4e9587369b2301d0790347320302cc0943d5a1884560367e8208d920f2"}


@@ -1,16 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["mock.go"],
importpath = "github.com/prysmaticlabs/prysm/v5/api/client/builder/testing",
visibility = ["//visibility:public"],
deps = [
"//api/client/builder:go_default_library",
"//consensus-types/interfaces:go_default_library",
"//consensus-types/primitives:go_default_library",
"//encoding/bytesutil:go_default_library",
"//proto/engine/v1:go_default_library",
"//proto/prysm/v1alpha1:go_default_library",
],
)


@@ -1,51 +0,0 @@
package testing
import (
"context"
"github.com/prysmaticlabs/prysm/v5/api/client/builder"
"github.com/prysmaticlabs/prysm/v5/consensus-types/interfaces"
"github.com/prysmaticlabs/prysm/v5/consensus-types/primitives"
"github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
v1 "github.com/prysmaticlabs/prysm/v5/proto/engine/v1"
ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
)
// MockClient is a mock implementation of BuilderClient.
type MockClient struct {
RegisteredVals map[[48]byte]bool
}
// NewClient creates a new, correctly initialized mock.
func NewClient() MockClient {
return MockClient{RegisteredVals: map[[48]byte]bool{}}
}
// NodeURL --
func (MockClient) NodeURL() string {
return ""
}
// GetHeader --
func (MockClient) GetHeader(_ context.Context, _ primitives.Slot, _ [32]byte, _ [48]byte) (builder.SignedBid, error) {
return nil, nil
}
// RegisterValidator --
func (m MockClient) RegisterValidator(_ context.Context, svr []*ethpb.SignedValidatorRegistrationV1) error {
for _, r := range svr {
b := bytesutil.ToBytes48(r.Message.Pubkey)
m.RegisteredVals[b] = true
}
return nil
}
// SubmitBlindedBlock --
func (MockClient) SubmitBlindedBlock(_ context.Context, _ interfaces.ReadOnlySignedBeaconBlock) (interfaces.ExecutionData, *v1.BlobsBundle, error) {
return nil, nil, nil
}
// Status --
func (MockClient) Status(_ context.Context) error {
return nil
}
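
A sketch of how the mock might be exercised in a test; the test name is illustrative, and the helper packages (bytesutil, require) are assumed to be available as elsewhere in this repo:

package testing

import (
    "context"
    "testing"

    "github.com/prysmaticlabs/prysm/v5/encoding/bytesutil"
    ethpb "github.com/prysmaticlabs/prysm/v5/proto/prysm/v1alpha1"
    "github.com/prysmaticlabs/prysm/v5/testing/require"
)

func TestMockClientRecordsRegistrations(t *testing.T) {
    m := NewClient()
    pubkey := bytesutil.PadTo([]byte{0x01}, 48)
    regs := []*ethpb.SignedValidatorRegistrationV1{
        {Message: &ethpb.ValidatorRegistrationV1{Pubkey: pubkey}},
    }
    require.NoError(t, m.RegisterValidator(context.Background(), regs))
    // RegisterValidator stores every key it sees, so tests can assert on it.
    require.Equal(t, true, m.RegisteredVals[bytesutil.ToBytes48(pubkey)])
}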

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,97 +0,0 @@
package client
import (
"context"
"io"
"net"
"net/http"
"net/url"
"github.com/pkg/errors"
)
// Client is a wrapper object around the HTTP client.
type Client struct {
hc *http.Client
baseURL *url.URL
token string
}
// NewClient constructs a new client with the provided options (ex WithTimeout).
// `host` is the base host + port used to construct request urls. This value can be
// a URL string, or NewClient will assume an http endpoint if just `host:port` is used.
func NewClient(host string, opts ...ClientOpt) (*Client, error) {
u, err := urlForHost(host)
if err != nil {
return nil, err
}
c := &Client{
hc: &http.Client{},
baseURL: u,
}
for _, o := range opts {
o(c)
}
return c, nil
}
// Token returns the bearer token used for jwt authentication
func (c *Client) Token() string {
return c.token
}
// BaseURL returns the base url of the client
func (c *Client) BaseURL() *url.URL {
return c.baseURL
}
// Do executes the request against the wrapped http client.
func (c *Client) Do(req *http.Request) (*http.Response, error) {
return c.hc.Do(req)
}
func urlForHost(h string) (*url.URL, error) {
// try to parse as url (being permissive)
u, err := url.Parse(h)
if err == nil && u.Host != "" {
return u, nil
}
// try to parse as host:port
host, port, err := net.SplitHostPort(h)
if err != nil {
return nil, ErrMalformedHostname
}
return &url.URL{Host: net.JoinHostPort(host, port), Scheme: "http"}, nil
}
// NodeURL returns a human-readable string representation of the beacon node base url.
func (c *Client) NodeURL() string {
return c.baseURL.String()
}
// Get is a generic, opinionated GET function to reduce boilerplate amongst the getters in this package.
func (c *Client) Get(ctx context.Context, path string, opts ...ReqOption) ([]byte, error) {
u := c.baseURL.ResolveReference(&url.URL{Path: path})
req, err := http.NewRequestWithContext(ctx, http.MethodGet, u.String(), nil)
if err != nil {
return nil, err
}
for _, o := range opts {
o(req)
}
r, err := c.hc.Do(req)
if err != nil {
return nil, err
}
defer func() {
err = r.Body.Close()
}()
if r.StatusCode != http.StatusOK {
return nil, Non200Err(r)
}
b, err := io.ReadAll(r.Body)
if err != nil {
return nil, errors.Wrap(err, "error reading http response body")
}
return b, nil
}
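
A brief usage sketch of the client above; the host and path are placeholders, and per urlForHost a bare host:port is treated as a plain http endpoint:

package client

import (
    "context"
    "fmt"
)

// exampleGet is an illustrative helper: build a client from a host:port
// string, then issue a GET against a beacon API path and print the body.
func exampleGet(ctx context.Context) error {
    c, err := NewClient("localhost:3500")
    if err != nil {
        return err
    }
    body, err := c.Get(ctx, "/eth/v1/node/version")
    if err != nil {
        return err
    }
    fmt.Printf("node version response: %s\n", body)
    return nil
}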


@@ -1,48 +0,0 @@
package client
import (
"net/url"
"testing"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestValidHostname(t *testing.T) {
cases := []struct {
name string
hostArg string
path string
joined string
err error
}{
{
name: "hostname without port",
hostArg: "mydomain.org",
err: ErrMalformedHostname,
},
}
for _, c := range cases {
t.Run(c.name, func(t *testing.T) {
cl, err := NewClient(c.hostArg)
if c.err != nil {
require.ErrorIs(t, err, c.err)
return
}
require.NoError(t, err)
require.Equal(t, c.joined, cl.BaseURL().ResolveReference(&url.URL{Path: c.path}).String())
})
}
}
func TestWithAuthenticationToken(t *testing.T) {
cl, err := NewClient("https://www.offchainlabs.com:3500", WithAuthenticationToken("my token"))
require.NoError(t, err)
require.Equal(t, cl.Token(), "my token")
}
func TestBaseURL(t *testing.T) {
cl, err := NewClient("https://www.offchainlabs.com:3500")
require.NoError(t, err)
require.Equal(t, "www.offchainlabs.com", cl.BaseURL().Hostname())
require.Equal(t, "3500", cl.BaseURL().Port())
}


@@ -1,43 +0,0 @@
package client
import (
"fmt"
"io"
"net/http"
"github.com/pkg/errors"
)
// ErrMalformedHostname is used to indicate if a host name's format is incorrect.
var ErrMalformedHostname = errors.New("hostname must include port, separated by one colon, like example.com:3500")
// ErrNotOK is used to indicate when an HTTP request to the API failed with any non-2xx response code.
// More specific errors may be returned, but an error in reaction to a non-2xx response will always wrap ErrNotOK.
var ErrNotOK = errors.New("did not receive 2xx response from API")
// ErrNotFound specifically means that a '404 - NOT FOUND' response was received from the API.
var ErrNotFound = errors.Wrap(ErrNotOK, "recv 404 NotFound response from API")
// ErrInvalidNodeVersion indicates that the /eth/v1/node/version API response format was not recognized.
var ErrInvalidNodeVersion = errors.New("invalid node version response")
// ErrConnectionIssue represents a connection problem.
var ErrConnectionIssue = errors.New("could not connect")
// Non200Err is a function that parses an HTTP response to handle responses that are not 200 with a formatted error.
func Non200Err(response *http.Response) error {
bodyBytes, err := io.ReadAll(response.Body)
var body string
if err != nil {
body = "(Unable to read response body.)"
} else {
body = "response body:\n" + string(bodyBytes)
}
msg := fmt.Sprintf("code=%d, url=%s, body=%s", response.StatusCode, response.Request.URL, body)
switch response.StatusCode {
case http.StatusNotFound:
return errors.Wrap(ErrNotFound, msg)
default:
return errors.Wrap(ErrNotOK, msg)
}
}
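
Since Non200Err wraps ErrNotFound around ErrNotOK for 404s, a caller can treat a missing resource as an empty result rather than a failure. A hedged sketch (the helper name is illustrative):

package client

import (
    "context"
    "errors"
)

// getOptional fetches a path but maps a 404 to (nil, false, nil) instead of an
// error, relying on the sentinel wrapping performed by Non200Err.
func getOptional(ctx context.Context, c *Client, path string) ([]byte, bool, error) {
    b, err := c.Get(ctx, path)
    switch {
    case err == nil:
        return b, true, nil
    case errors.Is(err, ErrNotFound):
        return nil, false, nil
    default:
        return nil, false, err
    }
}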


@@ -1,30 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"event_stream.go",
"utils.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/api/client/event",
visibility = ["//visibility:public"],
deps = [
"//api:go_default_library",
"//api/client:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = [
"event_stream_test.go",
"utils_test.go",
],
embed = [":go_default_library"],
deps = [
"//testing/require:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)


@@ -1,151 +0,0 @@
package event
import (
"bufio"
"context"
"net/http"
"net/url"
"strings"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/api/client"
log "github.com/sirupsen/logrus"
)
const (
EventHead = "head"
EventBlock = "block"
EventAttestation = "attestation"
EventVoluntaryExit = "voluntary_exit"
EventBlsToExecutionChange = "bls_to_execution_change"
EventProposerSlashing = "proposer_slashing"
EventAttesterSlashing = "attester_slashing"
EventFinalizedCheckpoint = "finalized_checkpoint"
EventChainReorg = "chain_reorg"
EventContributionAndProof = "contribution_and_proof"
EventLightClientFinalityUpdate = "light_client_finality_update"
EventLightClientOptimisticUpdate = "light_client_optimistic_update"
EventPayloadAttributes = "payload_attributes"
EventBlobSidecar = "blob_sidecar"
EventError = "error"
EventConnectionError = "connection_error"
)
var (
_ = EventStreamClient(&EventStream{})
)
var DefaultEventTopics = []string{EventHead}
type EventStreamClient interface {
Subscribe(eventsChannel chan<- *Event)
}
type Event struct {
EventType string
Data []byte
}
// EventStream is responsible for subscribing to the Beacon API events endpoint
// and dispatching received events to subscribers.
type EventStream struct {
ctx context.Context
httpClient *http.Client
host string
topics []string
}
func NewEventStream(ctx context.Context, httpClient *http.Client, host string, topics []string) (*EventStream, error) {
// Check if the host is a valid URL
_, err := url.ParseRequestURI(host)
if err != nil {
return nil, err
}
if len(topics) == 0 {
return nil, errors.New("no topics provided")
}
return &EventStream{
ctx: ctx,
httpClient: httpClient,
host: host,
topics: topics,
}, nil
}
func (h *EventStream) Subscribe(eventsChannel chan<- *Event) {
allTopics := strings.Join(h.topics, ",")
log.WithField("topics", allTopics).Info("Listening to Beacon API events")
fullUrl := h.host + "/eth/v1/events?topics=" + allTopics
req, err := http.NewRequestWithContext(h.ctx, http.MethodGet, fullUrl, nil)
if err != nil {
eventsChannel <- &Event{
EventType: EventConnectionError,
Data: []byte(errors.Wrap(err, "failed to create HTTP request").Error()),
}
return
}
req.Header.Set("Accept", api.EventStreamMediaType)
req.Header.Set("Connection", api.KeepAlive)
resp, err := h.httpClient.Do(req)
if err != nil {
eventsChannel <- &Event{
EventType: EventConnectionError,
Data: []byte(errors.Wrap(err, client.ErrConnectionIssue.Error()).Error()),
}
return
}
defer func() {
if closeErr := resp.Body.Close(); closeErr != nil {
log.WithError(closeErr).Error("Failed to close events response body")
}
}()
// Create a new scanner to read lines from the response body
scanner := bufio.NewScanner(resp.Body)
// Set the split function for the scanning operation
scanner.Split(scanLinesWithCarriage)
var eventType, data string // Variables to store event type and data
// Iterate over lines of the event stream
for scanner.Scan() {
select {
case <-h.ctx.Done():
log.Info("Context canceled, stopping event stream")
close(eventsChannel)
return
default:
line := scanner.Text()
// Handle the event based on your specific format
if line == "" {
// Empty line indicates the end of an event
if eventType != "" && data != "" {
// Process the event when both eventType and data are set
eventsChannel <- &Event{EventType: eventType, Data: []byte(data)}
}
// Reset eventType and data for the next event
eventType, data = "", ""
continue
}
et, ok := strings.CutPrefix(line, "event: ")
if ok {
// Extract event type from the "event" field
eventType = et
}
d, ok := strings.CutPrefix(line, "data: ")
if ok {
// Extract data from the "data" field
data = d
}
}
}
if err := scanner.Err(); err != nil {
eventsChannel <- &Event{
EventType: EventConnectionError,
Data: []byte(errors.Wrap(err, errors.Wrap(client.ErrConnectionIssue, "scanner failed").Error()).Error()),
}
}
}
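
A sketch of a consumer for this stream: create it with the topics of interest, run Subscribe in a goroutine, and drain the channel until the context ends or a connection error is reported (host and topic are placeholders):

package event

import (
    "context"
    "errors"
    "net/http"

    log "github.com/sirupsen/logrus"
)

// consumeHeadEvents is an illustrative consumer of EventStream: it logs head
// events until the context is canceled or the stream reports a failure.
func consumeHeadEvents(ctx context.Context) error {
    stream, err := NewEventStream(ctx, http.DefaultClient, "http://localhost:3500", []string{EventHead})
    if err != nil {
        return err
    }
    ch := make(chan *Event, 1)
    go stream.Subscribe(ch)
    for {
        select {
        case <-ctx.Done():
            return ctx.Err()
        case ev := <-ch:
            if ev == nil { // Subscribe closes the channel when the context is done.
                return nil
            }
            if ev.EventType == EventConnectionError {
                return errors.New("event stream connection error: " + string(ev.Data))
            }
            log.WithField("type", ev.EventType).Infof("received event: %s", ev.Data)
        }
    }
}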


@@ -1,101 +0,0 @@
package event
import (
"context"
"fmt"
"net/http"
"net/http/httptest"
"testing"
"time"
"github.com/prysmaticlabs/prysm/v5/testing/require"
log "github.com/sirupsen/logrus"
)
func TestNewEventStream(t *testing.T) {
validURL := "http://localhost:8080"
invalidURL := "://invalid"
topics := []string{"topic1", "topic2"}
tests := []struct {
name string
host string
topics []string
wantErr bool
}{
{"Valid input", validURL, topics, false},
{"Invalid URL", invalidURL, topics, true},
{"No topics", validURL, []string{}, true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
_, err := NewEventStream(context.Background(), &http.Client{}, tt.host, tt.topics)
if (err != nil) != tt.wantErr {
t.Errorf("NewEventStream() error = %v, wantErr %v", err, tt.wantErr)
}
})
}
}
func TestEventStream(t *testing.T) {
mux := http.NewServeMux()
mux.HandleFunc("/eth/v1/events", func(w http.ResponseWriter, _ *http.Request) {
flusher, ok := w.(http.Flusher)
require.Equal(t, true, ok)
for i := 1; i <= 3; i++ {
events := [3]string{"event: head\ndata: data%d\n\n", "event: head\rdata: data%d\r\r", "event: head\r\ndata: data%d\r\n\r\n"}
_, err := fmt.Fprintf(w, events[i-1], i)
require.NoError(t, err)
flusher.Flush() // Trigger flush to simulate streaming data
time.Sleep(100 * time.Millisecond) // Simulate delay between events
}
})
server := httptest.NewServer(mux)
defer server.Close()
topics := []string{"head"}
eventsChannel := make(chan *Event, 1)
stream, err := NewEventStream(context.Background(), http.DefaultClient, server.URL, topics)
require.NoError(t, err)
go stream.Subscribe(eventsChannel)
// Collect events
var events []*Event
for len(events) != 3 {
event := <-eventsChannel
log.Info(event)
events = append(events, event)
}
// Assertions to verify the events content
expectedData := []string{"data1", "data2", "data3"}
for i, event := range events {
if string(event.Data) != expectedData[i] {
t.Errorf("Expected event data %q, got %q", expectedData[i], string(event.Data))
}
}
}
func TestEventStreamRequestError(t *testing.T) {
topics := []string{"head"}
eventsChannel := make(chan *Event, 1)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
// use valid url that will result in failed request with nil body
stream, err := NewEventStream(ctx, http.DefaultClient, "http://badhost:1234", topics)
require.NoError(t, err)
// error will happen when request is made, should be received over events channel
go stream.Subscribe(eventsChannel)
event := <-eventsChannel
if event.EventType != EventConnectionError {
t.Errorf("Expected event type %q, got %q", EventConnectionError, event.EventType)
}
}


@@ -1,36 +0,0 @@
package event
import (
"bytes"
)
// scanLinesWithCarriage is adapted from bufio.ScanLines to also treat carriage return characters as line separators.
func scanLinesWithCarriage(data []byte, atEOF bool) (advance int, token []byte, err error) {
if atEOF && len(data) == 0 {
return 0, nil, nil
}
if i, j := bytes.IndexByte(data, '\n'), bytes.IndexByte(data, '\r'); i >= 0 || j >= 0 {
in := i
// Select the first index of \n or \r or the second index of \r if it is followed by \n
if i < 0 || (i > j && i != j+1 && j >= 0) {
in = j
}
// We have a full newline-terminated line.
return in + 1, dropCR(data[0:in]), nil
}
// If we're at EOF, we have a final, non-terminated line. Return it.
if atEOF {
return len(data), dropCR(data), nil
}
// Request more data.
return 0, nil, nil
}
// dropCR drops a terminal \r from the data.
func dropCR(data []byte) []byte {
if len(data) > 0 && data[len(data)-1] == '\r' {
return data[0 : len(data)-1]
}
return data
}


@@ -1,97 +0,0 @@
package event
import (
"bufio"
"bytes"
"testing"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestScanLinesWithCarriage(t *testing.T) {
testCases := []struct {
name string
input string
expected []string
}{
{
name: "LF line endings",
input: "line1\nline2\nline3",
expected: []string{"line1", "line2", "line3"},
},
{
name: "CR line endings",
input: "line1\rline2\rline3",
expected: []string{"line1", "line2", "line3"},
},
{
name: "CRLF line endings",
input: "line1\r\nline2\r\nline3",
expected: []string{"line1", "line2", "line3"},
},
{
name: "Mixed line endings",
input: "line1\nline2\rline3\r\nline4",
expected: []string{"line1", "line2", "line3", "line4"},
},
{
name: "Empty lines",
input: "line1\n\nline2\r\rline3",
expected: []string{"line1", "", "line2", "", "line3"},
},
{
name: "Empty lines 2",
input: "line1\n\rline2\n\rline3",
expected: []string{"line1", "", "line2", "", "line3"},
},
{
name: "No line endings",
input: "single line without ending",
expected: []string{"single line without ending"},
},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
scanner := bufio.NewScanner(bytes.NewReader([]byte(tc.input)))
scanner.Split(scanLinesWithCarriage)
var lines []string
for scanner.Scan() {
lines = append(lines, scanner.Text())
}
require.NoError(t, scanner.Err())
require.Equal(t, len(tc.expected), len(lines), "Number of lines does not match")
for i, line := range lines {
require.Equal(t, tc.expected[i], line, "Line %d does not match", i)
}
})
}
}
// TestScanLinesWithCarriageEdgeCases tests edge cases and potential error scenarios
func TestScanLinesWithCarriageEdgeCases(t *testing.T) {
t.Run("Empty input", func(t *testing.T) {
scanner := bufio.NewScanner(bytes.NewReader([]byte("")))
scanner.Split(scanLinesWithCarriage)
require.Equal(t, scanner.Scan(), false)
require.NoError(t, scanner.Err())
})
t.Run("Very long line", func(t *testing.T) {
longLine := bytes.Repeat([]byte("a"), bufio.MaxScanTokenSize+1)
scanner := bufio.NewScanner(bytes.NewReader(longLine))
scanner.Split(scanLinesWithCarriage)
require.Equal(t, scanner.Scan(), false)
require.NotNil(t, scanner.Err())
})
t.Run("Line ending at max token size", func(t *testing.T) {
input := append(bytes.Repeat([]byte("a"), bufio.MaxScanTokenSize-1), '\n')
scanner := bufio.NewScanner(bytes.NewReader(input))
scanner.Split(scanLinesWithCarriage)
require.Equal(t, scanner.Scan(), true)
require.Equal(t, string(bytes.Repeat([]byte("a"), bufio.MaxScanTokenSize-1)), scanner.Text())
})
}


@@ -1,48 +0,0 @@
package client
import (
"fmt"
"net/http"
"time"
)
// ReqOption is a request functional option.
type ReqOption func(*http.Request)
// WithSSZEncoding is a request functional option that adds SSZ encoding header.
func WithSSZEncoding() ReqOption {
return func(req *http.Request) {
req.Header.Set("Accept", "application/octet-stream")
}
}
// WithAuthorizationToken is a request functional option that adds header for authorization token.
func WithAuthorizationToken(token string) ReqOption {
return func(req *http.Request) {
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", token))
}
}
// ClientOpt is a functional option for the Client type (http.Client wrapper)
type ClientOpt func(*Client)
// WithTimeout sets the .Timeout attribute of the wrapped http.Client.
func WithTimeout(timeout time.Duration) ClientOpt {
return func(c *Client) {
c.hc.Timeout = timeout
}
}
// WithRoundTripper replaces the underlying HTTP's transport with a custom one.
func WithRoundTripper(t http.RoundTripper) ClientOpt {
return func(c *Client) {
c.hc.Transport = t
}
}
// WithAuthenticationToken sets an oauth token to be used.
func WithAuthenticationToken(token string) ClientOpt {
return func(c *Client) {
c.token = token
}
}
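
The two option kinds compose independently: ClientOpt configures the Client at construction time, while ReqOption decorates a single request. An illustrative combination (host, path, and token are placeholders):

package client

import (
    "context"
    "time"
)

// exampleOptions builds a client with a timeout and a bearer token, then
// issues a GET that requests SSZ encoding and attaches the stored token.
func exampleOptions(ctx context.Context) ([]byte, error) {
    c, err := NewClient("localhost:3500",
        WithTimeout(10*time.Second),
        WithAuthenticationToken("my-token"),
    )
    if err != nil {
        return nil, err
    }
    return c.Get(ctx, "/eth/v1/beacon/genesis",
        WithSSZEncoding(),
        WithAuthorizationToken(c.Token()),
    )
}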


@@ -1,13 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library")
go_library(
name = "go_default_library",
srcs = ["client.go"],
importpath = "github.com/prysmaticlabs/prysm/v5/api/client/validator",
visibility = ["//visibility:public"],
deps = [
"//api/client:go_default_library",
"//validator/rpc:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)


@@ -1,121 +0,0 @@
package validator
import (
"context"
"encoding/json"
"fmt"
"strings"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/client"
"github.com/prysmaticlabs/prysm/v5/validator/rpc"
)
const (
localKeysPath = "/eth/v1/keystores"
remoteKeysPath = "/eth/v1/remotekeys"
feeRecipientPath = "/eth/v1/validator/{pubkey}/feerecipient"
)
// Client provides a collection of helper methods for calling the Keymanager API endpoints.
type Client struct {
*client.Client
}
// NewClient returns a new Client that includes functions for REST calls to keymanager APIs.
func NewClient(host string, opts ...client.ClientOpt) (*Client, error) {
c, err := client.NewClient(host, opts...)
if err != nil {
return nil, err
}
return &Client{c}, nil
}
// GetValidatorPubKeys gets the current list of web3signer or the local validator public keys in hex format.
func (c *Client) GetValidatorPubKeys(ctx context.Context) ([]string, error) {
jsonlocal, err := c.GetLocalValidatorKeys(ctx)
if err != nil {
return nil, err
}
jsonremote, err := c.GetRemoteValidatorKeys(ctx)
if err != nil {
return nil, err
}
if len(jsonlocal.Data) == 0 && len(jsonremote.Data) == 0 {
return nil, errors.New("there are no local keys or remote keys on the validator")
}
hexKeys := make(map[string]bool)
for index := range jsonlocal.Data {
hexKeys[jsonlocal.Data[index].ValidatingPubkey] = true
}
for index := range jsonremote.Data {
hexKeys[jsonremote.Data[index].Pubkey] = true
}
keys := make([]string, 0)
for k := range hexKeys {
keys = append(keys, k)
}
return keys, nil
}
// GetLocalValidatorKeys calls the keymanager APIs for local validator keys
func (c *Client) GetLocalValidatorKeys(ctx context.Context) (*rpc.ListKeystoresResponse, error) {
localBytes, err := c.Get(ctx, localKeysPath, client.WithAuthorizationToken(c.Token()))
if err != nil {
return nil, err
}
jsonlocal := &rpc.ListKeystoresResponse{}
if err := json.Unmarshal(localBytes, jsonlocal); err != nil {
return nil, errors.Wrap(err, "failed to parse local keystore list")
}
return jsonlocal, nil
}
// GetRemoteValidatorKeys calls the keymanager APIs for web3signer validator keys
func (c *Client) GetRemoteValidatorKeys(ctx context.Context) (*rpc.ListRemoteKeysResponse, error) {
remoteBytes, err := c.Get(ctx, remoteKeysPath, client.WithAuthorizationToken(c.Token()))
if err != nil {
if !strings.Contains(err.Error(), "Prysm Wallet is not of type Web3Signer") {
return nil, err
}
}
jsonremote := &rpc.ListRemoteKeysResponse{}
if len(remoteBytes) != 0 {
if err := json.Unmarshal(remoteBytes, jsonremote); err != nil {
return nil, errors.Wrap(err, "failed to parse remote keystore list")
}
}
return jsonremote, nil
}
// GetFeeRecipientAddresses takes a list of validators in hex format and returns an equal length list of fee recipients in hex format.
func (c *Client) GetFeeRecipientAddresses(ctx context.Context, validators []string) ([]string, error) {
feeRecipients := make([]string, len(validators))
for index, validator := range validators {
feejson, err := c.GetFeeRecipientAddress(ctx, validator)
if err != nil {
return nil, errors.Wrap(err, fmt.Sprintf("keymanager API failed to retrieve fee recipient for validator %s", validators[index]))
}
if feejson.Data == nil {
continue
}
feeRecipients[index] = feejson.Data.Ethaddress
}
return feeRecipients, nil
}
// GetFeeRecipientAddress takes a public key and calls the keymanager API to return its fee recipient.
func (c *Client) GetFeeRecipientAddress(ctx context.Context, pubkey string) (*rpc.GetFeeRecipientByPubkeyResponse, error) {
path := strings.Replace(feeRecipientPath, "{pubkey}", pubkey, 1)
b, err := c.Get(ctx, path, client.WithAuthorizationToken(c.Token()))
if err != nil {
return nil, err
}
feejson := &rpc.GetFeeRecipientByPubkeyResponse{}
if err := json.Unmarshal(b, feejson); err != nil {
return nil, errors.Wrap(err, "failed to parse fee recipient")
}
return feejson, nil
}
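
A sketch of the intended flow for this keymanager client: list every local and remote key, then resolve the fee recipient configured for each (host, port, and token are placeholders):

package validator

import (
    "context"
    "fmt"

    "github.com/prysmaticlabs/prysm/v5/api/client"
)

// exampleFeeRecipients is an illustrative helper that prints the fee recipient
// configured for every validator key known to the keymanager API.
func exampleFeeRecipients(ctx context.Context) error {
    c, err := NewClient("localhost:7500", client.WithAuthenticationToken("auth-token"))
    if err != nil {
        return err
    }
    keys, err := c.GetValidatorPubKeys(ctx)
    if err != nil {
        return err
    }
    recipients, err := c.GetFeeRecipientAddresses(ctx, keys)
    if err != nil {
        return err
    }
    for i, k := range keys {
        fmt.Printf("%s -> %s\n", k, recipients[i])
    }
    return nil
}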


@@ -1,9 +0,0 @@
package api
const (
WebUrlPrefix = "/v2/validator/"
WebApiUrlPrefix = "/api/v2/validator/"
KeymanagerApiPrefix = "/eth/v1"
SystemLogsPrefix = "health/logs"
AuthTokenFileName = "auth-token"
)


@@ -1,28 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"grpcutils.go",
"parameters.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/api/grpc",
visibility = ["//visibility:public"],
deps = [
"@com_github_sirupsen_logrus//:go_default_library",
"@org_golang_google_grpc//:go_default_library",
"@org_golang_google_grpc//metadata:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["grpcutils_test.go"],
embed = [":go_default_library"],
deps = [
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@org_golang_google_grpc//metadata:go_default_library",
],
)

View File

@@ -1,81 +0,0 @@
package grpc
import (
"context"
"strings"
"time"
"github.com/sirupsen/logrus"
"google.golang.org/grpc"
"google.golang.org/grpc/metadata"
)
// LogRequests logs the gRPC backend as well as the request duration when the log level is set to
// debug or more verbose.
func LogRequests(
ctx context.Context,
method string, req,
reply interface{},
cc *grpc.ClientConn,
invoker grpc.UnaryInvoker,
opts ...grpc.CallOption,
) error {
// Shortcut when debug logging is not enabled.
if logrus.GetLevel() < logrus.DebugLevel {
return invoker(ctx, method, req, reply, cc, opts...)
}
var header metadata.MD
opts = append(
opts,
grpc.Header(&header),
)
start := time.Now()
err := invoker(ctx, method, req, reply, cc, opts...)
logrus.WithField("backend", header["x-backend"]).
WithField("method", method).WithField("duration", time.Since(start)).
Debug("gRPC request finished.")
return err
}
// LogStream prints the method at DEBUG level at the start of the stream.
func LogStream(
ctx context.Context,
sd *grpc.StreamDesc,
conn *grpc.ClientConn,
method string,
streamer grpc.Streamer,
opts ...grpc.CallOption,
) (grpc.ClientStream, error) {
// Shortcut when debug logging is not enabled.
if logrus.GetLevel() < logrus.DebugLevel {
return streamer(ctx, sd, conn, method, opts...)
}
var header metadata.MD
opts = append(
opts,
grpc.Header(&header),
)
strm, err := streamer(ctx, sd, conn, method, opts...)
logrus.WithField("backend", header["x-backend"]).
WithField("method", method).
Debug("gRPC stream started.")
return strm, err
}
// AppendHeaders parses the provided gRPC headers in "key=value" format
// and attaches them to the provided context.
func AppendHeaders(parent context.Context, headers []string) context.Context {
for _, h := range headers {
if h != "" {
keyValue := strings.Split(h, "=")
if len(keyValue) < 2 {
logrus.Warnf("Incorrect gRPC header flag format. Skipping %v", keyValue[0])
continue
}
parent = metadata.AppendToOutgoingContext(parent, keyValue[0], strings.Join(keyValue[1:], "="))
}
}
return parent
}
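
As a hedged usage sketch (not part of this diff), the interceptors and AppendHeaders above are typically wired into a gRPC client connection roughly as follows; the endpoint address and the x-client header are placeholders.

package main

import (
	"context"
	"log"

	prysmgrpc "github.com/prysmaticlabs/prysm/v5/api/grpc"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Attach "key=value" headers so they are sent as outgoing gRPC metadata.
	ctx := prysmgrpc.AppendHeaders(context.Background(), []string{"x-client=example"})

	// Register the debug-logging interceptors on the client connection.
	conn, err := grpc.DialContext(ctx, "localhost:4000",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithUnaryInterceptor(prysmgrpc.LogRequests),
		grpc.WithStreamInterceptor(prysmgrpc.LogStream),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer func() { _ = conn.Close() }()
	// ctx (with its outgoing metadata) would then be passed to every RPC made on conn.
}
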

View File

@@ -1,60 +0,0 @@
package grpc
import (
"context"
"testing"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
"google.golang.org/grpc/metadata"
)
type customErrorData struct {
Message string `json:"message"`
}
func TestAppendHeaders(t *testing.T) {
t.Run("one_header", func(t *testing.T) {
ctx := AppendHeaders(context.Background(), []string{"first=value1"})
md, ok := metadata.FromOutgoingContext(ctx)
require.Equal(t, true, ok, "Failed to read context metadata")
require.Equal(t, 1, md.Len(), "MetadataV0 contains wrong number of values")
assert.Equal(t, "value1", md.Get("first")[0])
})
t.Run("multiple_headers", func(t *testing.T) {
ctx := AppendHeaders(context.Background(), []string{"first=value1", "second=value2"})
md, ok := metadata.FromOutgoingContext(ctx)
require.Equal(t, true, ok, "Failed to read context metadata")
require.Equal(t, 2, md.Len(), "MetadataV0 contains wrong number of values")
assert.Equal(t, "value1", md.Get("first")[0])
assert.Equal(t, "value2", md.Get("second")[0])
})
t.Run("one_empty_header", func(t *testing.T) {
ctx := AppendHeaders(context.Background(), []string{"first=value1", ""})
md, ok := metadata.FromOutgoingContext(ctx)
require.Equal(t, true, ok, "Failed to read context metadata")
require.Equal(t, 1, md.Len(), "MetadataV0 contains wrong number of values")
assert.Equal(t, "value1", md.Get("first")[0])
})
t.Run("incorrect_header", func(t *testing.T) {
logHook := logTest.NewGlobal()
ctx := AppendHeaders(context.Background(), []string{"first=value1", "second"})
md, ok := metadata.FromOutgoingContext(ctx)
require.Equal(t, true, ok, "Failed to read context metadata")
require.Equal(t, 1, md.Len(), "MetadataV0 contains wrong number of values")
assert.Equal(t, "value1", md.Get("first")[0])
assert.LogsContain(t, logHook, "Skipping second")
})
t.Run("header_value_with_equal_sign", func(t *testing.T) {
ctx := AppendHeaders(context.Background(), []string{"first=value=1"})
md, ok := metadata.FromOutgoingContext(ctx)
require.Equal(t, true, ok, "Failed to read context metadata")
require.Equal(t, 1, md.Len(), "MetadataV0 contains wrong number of values")
assert.Equal(t, "value=1", md.Get("first")[0])
})
}

View File

@@ -1,16 +0,0 @@
package grpc
// CustomErrorMetadataKey is the name of the metadata key storing additional error information.
// Metadata value is expected to be a byte-encoded JSON object.
const CustomErrorMetadataKey = "Custom-Error"
// HttpCodeMetadataKey is the key to use when setting custom HTTP status codes in gRPC metadata.
const HttpCodeMetadataKey = "X-Http-Code"
// MetadataPrefix is the prefix for grpc headers on metadata
const MetadataPrefix = "Grpc-Metadata"
// WithPrefix creates a new string with the gRPC metadata prefix
func WithPrefix(value string) string {
return MetadataPrefix + "-" + value
}

View File

@@ -1,20 +0,0 @@
package api
import "net/http"
const (
VersionHeader = "Eth-Consensus-Version"
ExecutionPayloadBlindedHeader = "Eth-Execution-Payload-Blinded"
ExecutionPayloadValueHeader = "Eth-Execution-Payload-Value"
ConsensusBlockValueHeader = "Eth-Consensus-Block-Value"
JsonMediaType = "application/json"
OctetStreamMediaType = "application/octet-stream"
EventStreamMediaType = "text/event-stream"
KeepAlive = "keep-alive"
)
// SetSSEHeaders sets the headers needed for a server-sent event response.
func SetSSEHeaders(w http.ResponseWriter) {
w.Header().Set("Content-Type", EventStreamMediaType)
w.Header().Set("Connection", KeepAlive)
}
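
A hedged sketch (not part of this diff) of a server-sent-events endpoint built on SetSSEHeaders; the route, listen address, and event payload are placeholders.

package main

import (
	"fmt"
	"net/http"

	"github.com/prysmaticlabs/prysm/v5/api"
)

func handleEvents(w http.ResponseWriter, _ *http.Request) {
	api.SetSSEHeaders(w)
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}
	// Placeholder event; a real endpoint would stream events as they occur.
	fmt.Fprint(w, "event: head\ndata: {}\n\n")
	flusher.Flush()
}

func main() {
	http.HandleFunc("/eth/v1/events", handleEvents)
	_ = http.ListenAndServe("127.0.0.1:3500", nil)
}
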

View File

@@ -1,32 +0,0 @@
package api
import (
"github.com/ethereum/go-ethereum/common/hexutil"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/crypto/rand"
)
// GenerateRandomHexString generates a random hex string that follows the standards for a JWT token,
// used for beacon node -> execution client and web client -> validator client authentication.
func GenerateRandomHexString() (string, error) {
secret := make([]byte, 32)
randGen := rand.NewGenerator()
n, err := randGen.Read(secret)
if err != nil {
return "", err
} else if n != 32 {
return "", errors.New("rand: unexpected length")
}
return hexutil.Encode(secret), nil
}
// ValidateAuthToken validates an auth token for the web UI.
func ValidateAuthToken(token string) error {
b, err := hexutil.Decode(token)
// token should be hex-encoded and at least 256 bits
if err != nil || len(b) < 32 {
return errors.New("invalid auth token: token should be hex-encoded and at least 256 bits")
}
return nil
}

View File

@@ -1,13 +0,0 @@
package api
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestGenerateRandomHexString(t *testing.T) {
token, err := GenerateRandomHexString()
require.NoError(t, err)
require.NoError(t, ValidateAuthToken(token))
}

View File

@@ -1,22 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["pagination.go"],
importpath = "github.com/prysmaticlabs/prysm/v5/api/pagination",
visibility = ["//visibility:public"],
deps = [
"//config/params:go_default_library",
"@com_github_pkg_errors//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["pagination_test.go"],
deps = [
":go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
],
)

View File

@@ -1,49 +0,0 @@
// Package pagination contains useful pagination-related helpers.
package pagination
import (
"fmt"
"strconv"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/config/params"
)
// StartAndEndPage takes in the requested page token, the wanted page size, and the total size.
// It returns the start index, the end index, and the next page token.
func StartAndEndPage(pageToken string, pageSize, totalSize int) (int, int, string, error) {
if pageToken == "" {
pageToken = "0"
}
if pageSize < 0 || totalSize < 0 {
return 0, 0, "", errors.Errorf("invalid page and total sizes provided: page size %d , total size %d", pageSize, totalSize)
}
if pageSize == 0 {
pageSize = params.BeaconConfig().DefaultPageSize
}
token, err := strconv.Atoi(pageToken)
if err != nil {
return 0, 0, "", errors.Wrap(err, "could not convert page token")
}
if token < 0 {
return 0, 0, "", errors.Errorf("invalid token value provided: %d", token)
}
// The start index cannot reach or exceed the total size.
start := token * pageSize
if start >= totalSize {
return 0, 0, "", fmt.Errorf("page start %d >= list %d", start, totalSize)
}
// The end index cannot go out of bounds.
end := start + pageSize
nextPageToken := strconv.Itoa(token + 1)
if end >= totalSize {
end = totalSize
nextPageToken = "" // Return an empty next page token for the last page of a set
}
return start, end, nextPageToken, nil
}
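
A small, hedged example (not part of this diff) of slicing a result set with StartAndEndPage; the item slice and page size are illustrative only.

package main

import (
	"fmt"

	"github.com/prysmaticlabs/prysm/v5/api/pagination"
)

func main() {
	items := make([]int, 25)
	for i := range items {
		items[i] = i
	}
	// Request the second page (token "1") with a page size of 10.
	start, end, next, err := pagination.StartAndEndPage("1", 10, len(items))
	if err != nil {
		panic(err)
	}
	fmt.Println(items[start:end], "next page token:", next) // items 10..19, next page token "2"
}
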

View File

@@ -1,103 +0,0 @@
package pagination_test
import (
"testing"
"github.com/prysmaticlabs/prysm/v5/api/pagination"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestStartAndEndPage(t *testing.T) {
tests := []struct {
token string
pageSize int
totalSize int
nextToken string
start int
end int
}{
{
token: "0",
pageSize: 9,
totalSize: 100,
nextToken: "1",
start: 0,
end: 9,
},
{
token: "10",
pageSize: 4,
totalSize: 100,
nextToken: "11",
start: 40,
end: 44,
},
{
token: "100",
pageSize: 5,
totalSize: 1000,
nextToken: "101",
start: 500,
end: 505,
},
{
token: "3",
pageSize: 33,
totalSize: 100,
nextToken: "",
start: 99,
end: 100,
},
{
token: "34",
pageSize: 500,
totalSize: 17500,
nextToken: "",
start: 17000,
end: 17500,
},
}
for _, test := range tests {
start, end, next, err := pagination.StartAndEndPage(test.token, test.pageSize, test.totalSize)
require.NoError(t, err)
if test.start != start {
t.Errorf("expected start and computed start are not equal %d, %d", test.start, start)
}
if test.end != end {
t.Errorf("expected end and computed end are not equal %d, %d", test.end, end)
}
if test.nextToken != next {
t.Errorf("expected next token and computed next token are not equal %v, %v", test.nextToken, next)
}
}
}
func TestStartAndEndPage_CannotConvertPage(t *testing.T) {
wanted := "could not convert page token: strconv.Atoi: parsing"
_, _, _, err := pagination.StartAndEndPage("bad", 0, 0)
assert.ErrorContains(t, wanted, err)
}
func TestStartAndEndPage_ExceedsMaxPage(t *testing.T) {
wanted := "page start 0 >= list 0"
_, _, _, err := pagination.StartAndEndPage("", 0, 0)
assert.ErrorContains(t, wanted, err)
}
func TestStartAndEndPage_InvalidPageValues(t *testing.T) {
_, _, _, err := pagination.StartAndEndPage("10", -1, 10)
assert.ErrorContains(t, "invalid page and total sizes provided", err)
_, _, _, err = pagination.StartAndEndPage("12", 10, -10)
assert.ErrorContains(t, "invalid page and total sizes provided", err)
_, _, _, err = pagination.StartAndEndPage("12", -50, -60)
assert.ErrorContains(t, "invalid page and total sizes provided", err)
}
func TestStartAndEndPage_InvalidTokenValue(t *testing.T) {
_, _, _, err := pagination.StartAndEndPage("-12", 50, 60)
assert.ErrorContains(t, "invalid token value provided", err)
}

View File

@@ -1,15 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = ["error.go"],
importpath = "github.com/prysmaticlabs/prysm/v5/api/server",
visibility = ["//visibility:public"],
)
go_test(
name = "go_default_test",
srcs = ["error_test.go"],
embed = [":go_default_library"],
deps = ["//testing/assert:go_default_library"],
)

View File

@@ -1,47 +0,0 @@
package server
import (
"errors"
"fmt"
"strings"
)
// DecodeError represents an error resulting from trying to decode an HTTP request.
// It tracks the full field name for which decoding failed.
type DecodeError struct {
path []string
err error
}
// NewDecodeError wraps an error (either the initial decoding error or another DecodeError).
// The current field that failed decoding must be passed in.
func NewDecodeError(err error, field string) *DecodeError {
var de *DecodeError
ok := errors.As(err, &de)
if ok {
return &DecodeError{path: append([]string{field}, de.path...), err: de.err}
}
return &DecodeError{path: []string{field}, err: err}
}
// Error returns the formatted error message which contains the full field name and the actual decoding error.
func (e *DecodeError) Error() string {
return fmt.Sprintf("could not decode %s: %s", strings.Join(e.path, "."), e.err.Error())
}
// IndexedVerificationFailureError wraps a collection of verification failures.
type IndexedVerificationFailureError struct {
Message string `json:"message"`
Code int `json:"code"`
Failures []*IndexedVerificationFailure `json:"failures"`
}
// StatusCode returns the HTTP status code associated with the failure collection.
func (e *IndexedVerificationFailureError) StatusCode() int {
return e.Code
}
// IndexedVerificationFailure represents an issue when verifying a single indexed object e.g. an item in an array.
type IndexedVerificationFailure struct {
Index int `json:"index"`
Message string `json:"message"`
}
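
A hedged, package-internal sketch (not part of this diff) of how DecodeError is typically layered while decoding a nested request field; the Block and Slot field names are illustrative and the strconv import is assumed.

func decodeSlot(raw string) (uint64, error) {
	slot, err := strconv.ParseUint(raw, 10, 64)
	if err != nil {
		return 0, NewDecodeError(err, "Slot")
	}
	return slot, nil
}

func decodeBlock(rawSlot string) error {
	// Re-wrapping extends the path, so a failure surfaces as
	// "could not decode Block.Slot: <parse error>".
	if _, err := decodeSlot(rawSlot); err != nil {
		return NewDecodeError(err, "Block")
	}
	return nil
}
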

View File

@@ -1,16 +0,0 @@
package server
import (
"errors"
"testing"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
)
func TestDecodeError(t *testing.T) {
e := errors.New("not a number")
de := NewDecodeError(e, "Z")
de = NewDecodeError(de, "Y")
de = NewDecodeError(de, "X")
assert.Equal(t, "could not decode X.Y.Z: not a number", de.Error())
}

View File

@@ -1,31 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"log.go",
"options.go",
"server.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/api/server/httprest",
visibility = ["//visibility:public"],
deps = [
"//api/server/middleware:go_default_library",
"//runtime:go_default_library",
"@com_github_pkg_errors//:go_default_library",
"@com_github_sirupsen_logrus//:go_default_library",
],
)
go_test(
name = "go_default_test",
srcs = ["server_test.go"],
embed = [":go_default_library"],
deps = [
"//cmd/beacon-chain/flags:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
"@com_github_sirupsen_logrus//hooks/test:go_default_library",
"@com_github_urfave_cli_v2//:go_default_library",
],
)

View File

@@ -1,5 +0,0 @@
package httprest
import "github.com/sirupsen/logrus"
var log = logrus.WithField("prefix", "httprest")

View File

@@ -1,44 +0,0 @@
package httprest
import (
"time"
"net/http"
"github.com/prysmaticlabs/prysm/v5/api/server/middleware"
)
// Option is an HTTP REST server functional option type.
type Option func(g *Server) error
// WithMiddlewares sets the list of middlewares to be applied on routes.
func WithMiddlewares(mw []middleware.Middleware) Option {
return func(g *Server) error {
g.cfg.middlewares = mw
return nil
}
}
// WithHTTPAddr sets the full address (host and port) of the server.
func WithHTTPAddr(addr string) Option {
return func(g *Server) error {
g.cfg.httpAddr = addr
return nil
}
}
// WithRouter sets the internal router of the server; this option is required.
func WithRouter(r *http.ServeMux) Option {
return func(g *Server) error {
g.cfg.router = r
return nil
}
}
// WithTimeout allows changing the timeout value for API calls.
func WithTimeout(duration time.Duration) Option {
return func(g *Server) error {
g.cfg.timeout = duration
return nil
}
}

View File

@@ -1,101 +0,0 @@
package httprest
import (
"context"
"net/http"
"time"
"github.com/pkg/errors"
"github.com/prysmaticlabs/prysm/v5/api/server/middleware"
"github.com/prysmaticlabs/prysm/v5/runtime"
)
var _ runtime.Service = (*Server)(nil)
// config contains the parameters for setting up the HTTP REST service.
type config struct {
httpAddr string
middlewares []middleware.Middleware
router http.Handler
timeout time.Duration
}
// Server serves HTTP traffic.
type Server struct {
cfg *config
server *http.Server
cancel context.CancelFunc
ctx context.Context
startFailure error
}
// New returns a new instance of the Server.
func New(ctx context.Context, opts ...Option) (*Server, error) {
g := &Server{
ctx: ctx,
cfg: &config{},
}
for _, opt := range opts {
if err := opt(g); err != nil {
return nil, err
}
}
if g.cfg.router == nil {
return nil, errors.New("router option not configured")
}
var handler http.Handler
defaultReadHeaderTimeout := time.Second
handler = middleware.MiddlewareChain(g.cfg.router, g.cfg.middlewares)
if g.cfg.timeout > 0*time.Second {
defaultReadHeaderTimeout = g.cfg.timeout
handler = http.TimeoutHandler(handler, g.cfg.timeout, "request timed out")
}
g.server = &http.Server{
Addr: g.cfg.httpAddr,
Handler: handler,
ReadHeaderTimeout: defaultReadHeaderTimeout,
}
return g, nil
}
// Start the http rest service.
func (g *Server) Start() {
g.ctx, g.cancel = context.WithCancel(g.ctx)
go func() {
log.WithField("address", g.cfg.httpAddr).Info("Starting HTTP server")
if err := g.server.ListenAndServe(); !errors.Is(err, http.ErrServerClosed) {
log.WithError(err).Error("Failed to start HTTP server")
g.startFailure = err
return
}
}()
}
// Status of the HTTP server. Returns an error if this service is unhealthy.
func (g *Server) Status() error {
if g.startFailure != nil {
return g.startFailure
}
return nil
}
// Stop the HTTP server with a graceful shutdown.
func (g *Server) Stop() error {
if g.server != nil {
shutdownCtx, shutdownCancel := context.WithTimeout(g.ctx, 2*time.Second)
defer shutdownCancel()
if err := g.server.Shutdown(shutdownCtx); err != nil {
if errors.Is(err, context.DeadlineExceeded) {
log.Warn("Existing connections terminated")
} else {
log.WithError(err).Error("Failed to gracefully shut down server")
}
}
}
if g.cancel != nil {
g.cancel()
}
return nil
}
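
A hedged sketch (not part of this diff) of wiring the options together to run the server; the listen address and the /healthz route are placeholders.

package main

import (
	"context"
	"net/http"
	"time"

	"github.com/prysmaticlabs/prysm/v5/api"
	"github.com/prysmaticlabs/prysm/v5/api/server/httprest"
	"github.com/prysmaticlabs/prysm/v5/api/server/middleware"
)

func main() {
	router := http.NewServeMux()
	router.HandleFunc("/healthz", func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	srv, err := httprest.New(context.Background(),
		httprest.WithRouter(router), // required
		httprest.WithHTTPAddr("127.0.0.1:3500"),
		httprest.WithTimeout(10*time.Second),
		httprest.WithMiddlewares([]middleware.Middleware{
			middleware.NormalizeQueryValuesHandler,
			middleware.ContentTypeHandler([]string{api.JsonMediaType}),
		}),
	)
	if err != nil {
		panic(err)
	}
	srv.Start()
	// ... run until shutdown is requested, then:
	if err := srv.Stop(); err != nil {
		panic(err)
	}
}
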

View File

@@ -1,71 +0,0 @@
package httprest
import (
"context"
"flag"
"fmt"
"net"
"net/http"
"net/http/httptest"
"net/url"
"testing"
"github.com/prysmaticlabs/prysm/v5/cmd/beacon-chain/flags"
"github.com/prysmaticlabs/prysm/v5/testing/assert"
"github.com/prysmaticlabs/prysm/v5/testing/require"
logTest "github.com/sirupsen/logrus/hooks/test"
"github.com/urfave/cli/v2"
)
func TestServer_StartStop(t *testing.T) {
hook := logTest.NewGlobal()
app := cli.App{}
set := flag.NewFlagSet("test", 0)
ctx := cli.NewContext(&app, set, nil)
port := ctx.Int(flags.HTTPServerPort.Name)
portStr := fmt.Sprintf("%d", port) // Convert port to string
host := ctx.String(flags.HTTPServerHost.Name)
address := net.JoinHostPort(host, portStr)
handler := http.NewServeMux()
opts := []Option{
WithHTTPAddr(address),
WithRouter(handler),
}
g, err := New(context.Background(), opts...)
require.NoError(t, err)
g.Start()
go func() {
require.LogsContain(t, hook, "Starting HTTP server")
require.LogsDoNotContain(t, hook, "Starting API middleware")
}()
err = g.Stop()
require.NoError(t, err)
}
func TestServer_NilHandler_NotFoundHandlerRegistered(t *testing.T) {
app := cli.App{}
set := flag.NewFlagSet("test", 0)
ctx := cli.NewContext(&app, set, nil)
handler := http.NewServeMux()
port := ctx.Int(flags.HTTPServerPort.Name)
portStr := fmt.Sprintf("%d", port) // Convert port to string
host := ctx.String(flags.HTTPServerHost.Name)
address := net.JoinHostPort(host, portStr)
opts := []Option{
WithHTTPAddr(address),
WithRouter(handler),
}
g, err := New(context.Background(), opts...)
require.NoError(t, err)
writer := httptest.NewRecorder()
g.cfg.router.ServeHTTP(writer, &http.Request{Method: "GET", Host: "localhost", URL: &url.URL{Path: "/foo"}})
assert.Equal(t, http.StatusNotFound, writer.Code)
}

View File

@@ -1,26 +0,0 @@
load("@prysm//tools/go:def.bzl", "go_library", "go_test")
go_library(
name = "go_default_library",
srcs = [
"middleware.go",
"util.go",
],
importpath = "github.com/prysmaticlabs/prysm/v5/api/server/middleware",
visibility = ["//visibility:public"],
deps = ["@com_github_rs_cors//:go_default_library"],
)
go_test(
name = "go_default_test",
srcs = [
"middleware_test.go",
"util_test.go",
],
embed = [":go_default_library"],
deps = [
"//api:go_default_library",
"//testing/assert:go_default_library",
"//testing/require:go_default_library",
],
)

View File

@@ -1,125 +0,0 @@
package middleware
import (
"fmt"
"net/http"
"strings"
"github.com/rs/cors"
)
// Middleware is a function that wraps an http.Handler with additional behavior.
type Middleware func(http.Handler) http.Handler
// NormalizeQueryValuesHandler normalizes an input query of "key=value1,value2,value3" to "key=value1&key=value2&key=value3"
func NormalizeQueryValuesHandler(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
query := r.URL.Query()
NormalizeQueryValues(query)
r.URL.RawQuery = query.Encode()
next.ServeHTTP(w, r)
})
}
// CorsHandler sets the CORS settings on API endpoints.
func CorsHandler(allowOrigins []string) Middleware {
c := cors.New(cors.Options{
AllowedOrigins: allowOrigins,
AllowedMethods: []string{http.MethodPost, http.MethodGet, http.MethodDelete, http.MethodOptions},
AllowCredentials: true,
MaxAge: 600,
AllowedHeaders: []string{"*"},
})
return c.Handler
}
// ContentTypeHandler checks the request for an accepted media type, otherwise returning an http.StatusUnsupportedMediaType error.
func ContentTypeHandler(acceptedMediaTypes []string) Middleware {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Skip the check for GET requests.
if r.Method == http.MethodGet {
next.ServeHTTP(w, r)
return
}
contentType := r.Header.Get("Content-Type")
if contentType == "" {
http.Error(w, "Content-Type header is missing", http.StatusUnsupportedMediaType)
return
}
accepted := false
for _, acceptedType := range acceptedMediaTypes {
if strings.Contains(strings.TrimSpace(contentType), strings.TrimSpace(acceptedType)) {
accepted = true
break
}
}
if !accepted {
http.Error(w, fmt.Sprintf("Unsupported media type: %s", contentType), http.StatusUnsupportedMediaType)
return
}
next.ServeHTTP(w, r)
})
}
}
// AcceptHeaderHandler checks whether the client's preferred response media type is supported, otherwise returning an http.StatusNotAcceptable error.
func AcceptHeaderHandler(serverAcceptedTypes []string) Middleware {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
acceptHeader := r.Header.Get("Accept")
// The Accept header is optional; skip the check if it is not provided.
if acceptHeader == "" {
next.ServeHTTP(w, r)
return
}
accepted := false
acceptTypes := strings.Split(acceptHeader, ",")
// follows rules defined in https://datatracker.ietf.org/doc/html/rfc2616#section-14.1
for _, acceptType := range acceptTypes {
acceptType = strings.TrimSpace(acceptType)
if acceptType == "*/*" {
accepted = true
break
}
for _, serverAcceptedType := range serverAcceptedTypes {
if strings.HasPrefix(acceptType, serverAcceptedType) {
accepted = true
break
}
if acceptType != "/*" && strings.HasSuffix(acceptType, "/*") && strings.HasPrefix(serverAcceptedType, acceptType[:len(acceptType)-2]) {
accepted = true
break
}
}
if accepted {
break
}
}
if !accepted {
http.Error(w, fmt.Sprintf("Not Acceptable: %s", acceptHeader), http.StatusNotAcceptable)
return
}
next.ServeHTTP(w, r)
})
}
}
// MiddlewareChain wraps the handler with the given middlewares in reverse order,
// so the first middleware in the slice becomes the outermost wrapper.
func MiddlewareChain(h http.Handler, mw []Middleware) http.Handler {
if len(mw) < 1 {
return h
}
wrapped := h
for i := len(mw) - 1; i >= 0; i-- {
wrapped = mw[i](wrapped)
}
return wrapped
}
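
A hedged, package-internal illustration (not part of this diff) of the wrapping order: because the slice is applied in reverse, the first middleware in the slice is the outermost wrapper and runs first on each request; the listen address is a placeholder.

func exampleChain() {
	final := http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	// The resulting handler is CorsHandler(ContentTypeHandler(final)),
	// so CORS headers are applied before the content-type check runs.
	chained := MiddlewareChain(final, []Middleware{
		CorsHandler([]string{"*"}),
		ContentTypeHandler([]string{"application/json"}),
	})
	_ = http.ListenAndServe("127.0.0.1:8080", chained)
}
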

View File

@@ -1,204 +0,0 @@
package middleware
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/prysmaticlabs/prysm/v5/api"
"github.com/prysmaticlabs/prysm/v5/testing/require"
)
func TestNormalizeQueryValuesHandler(t *testing.T) {
nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, err := w.Write([]byte("next handler"))
require.NoError(t, err)
})
handler := NormalizeQueryValuesHandler(nextHandler)
tests := []struct {
name string
inputQuery string
expectedQuery string
}{
{
name: "3 values",
inputQuery: "key=value1,value2,value3",
expectedQuery: "key=value1&key=value2&key=value3", // replace with expected normalized value
},
}
for _, test := range tests {
t.Run(test.name, func(t *testing.T) {
req, err := http.NewRequest("GET", "/test?"+test.inputQuery, http.NoBody)
if err != nil {
t.Fatal(err)
}
rr := httptest.NewRecorder()
handler.ServeHTTP(rr, req)
if rr.Code != http.StatusOK {
t.Errorf("handler returned wrong status code: got %v want %v", rr.Code, http.StatusOK)
}
if req.URL.RawQuery != test.expectedQuery {
t.Errorf("query not normalized: got %v want %v", req.URL.RawQuery, test.expectedQuery)
}
if rr.Body.String() != "next handler" {
t.Errorf("next handler was not executed")
}
})
}
}
func TestContentTypeHandler(t *testing.T) {
acceptedMediaTypes := []string{api.JsonMediaType, api.OctetStreamMediaType}
nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, err := w.Write([]byte("next handler"))
require.NoError(t, err)
})
handler := ContentTypeHandler(acceptedMediaTypes)(nextHandler)
tests := []struct {
name string
contentType string
expectedStatusCode int
isGet bool
}{
{
name: "Accepted Content-Type - application/json",
contentType: api.JsonMediaType,
expectedStatusCode: http.StatusOK,
},
{
name: "Accepted Content-Type - ssz format",
contentType: api.OctetStreamMediaType,
expectedStatusCode: http.StatusOK,
},
{
name: "Unsupported Content-Type - text/plain",
contentType: "text/plain",
expectedStatusCode: http.StatusUnsupportedMediaType,
},
{
name: "Missing Content-Type",
contentType: "",
expectedStatusCode: http.StatusUnsupportedMediaType,
},
{
name: "GET request skips content type check",
contentType: "",
expectedStatusCode: http.StatusOK,
isGet: true,
},
{
name: "Content type contains charset is ok",
contentType: "application/json; charset=utf-8",
expectedStatusCode: http.StatusOK,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
httpMethod := http.MethodPost
if tt.isGet {
httpMethod = http.MethodGet
}
req := httptest.NewRequest(httpMethod, "/", nil)
if tt.contentType != "" {
req.Header.Set("Content-Type", tt.contentType)
}
rr := httptest.NewRecorder()
handler.ServeHTTP(rr, req)
if status := rr.Code; status != tt.expectedStatusCode {
t.Errorf("handler returned wrong status code: got %v want %v", status, tt.expectedStatusCode)
}
})
}
}
func TestAcceptHeaderHandler(t *testing.T) {
acceptedTypes := []string{"application/json", "application/octet-stream"}
nextHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_, err := w.Write([]byte("next handler"))
require.NoError(t, err)
})
handler := AcceptHeaderHandler(acceptedTypes)(nextHandler)
tests := []struct {
name string
acceptHeader string
expectedStatusCode int
}{
{
name: "Accepted Accept-Type - application/json",
acceptHeader: "application/json",
expectedStatusCode: http.StatusOK,
},
{
name: "Accepted Accept-Type - application/octet-stream",
acceptHeader: "application/octet-stream",
expectedStatusCode: http.StatusOK,
},
{
name: "Accepted Accept-Type with parameters",
acceptHeader: "application/json;q=0.9, application/octet-stream;q=0.8",
expectedStatusCode: http.StatusOK,
},
{
name: "Unsupported Accept-Type - text/plain",
acceptHeader: "text/plain",
expectedStatusCode: http.StatusNotAcceptable,
},
{
name: "Missing Accept header",
acceptHeader: "",
expectedStatusCode: http.StatusOK,
},
{
name: "*/* is accepted",
acceptHeader: "*/*",
expectedStatusCode: http.StatusOK,
},
{
name: "application/* is accepted",
acceptHeader: "application/*",
expectedStatusCode: http.StatusOK,
},
{
name: "/* is unsupported",
acceptHeader: "/*",
expectedStatusCode: http.StatusNotAcceptable,
},
{
name: "application/ is unsupported",
acceptHeader: "application/",
expectedStatusCode: http.StatusNotAcceptable,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
req := httptest.NewRequest("GET", "/", nil)
if tt.acceptHeader != "" {
req.Header.Set("Accept", tt.acceptHeader)
}
rr := httptest.NewRecorder()
handler.ServeHTTP(rr, req)
if status := rr.Code; status != tt.expectedStatusCode {
t.Errorf("handler returned wrong status code: got %v want %v", status, tt.expectedStatusCode)
}
})
}
}

View File

@@ -1,17 +0,0 @@
package middleware
import (
"net/url"
"strings"
)
// NormalizeQueryValues replaces comma-separated values with individual values
func NormalizeQueryValues(queryParams url.Values) {
for key, vals := range queryParams {
splitVals := make([]string, 0)
for _, v := range vals {
splitVals = append(splitVals, strings.Split(v, ",")...)
}
queryParams[key] = splitVals
}
}
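
A small, hedged example (not part of this diff) of calling NormalizeQueryValues directly on a parsed query string.

package main

import (
	"fmt"
	"net/url"

	"github.com/prysmaticlabs/prysm/v5/api/server/middleware"
)

func main() {
	q, err := url.ParseQuery("key=value1,value2&other=x")
	if err != nil {
		panic(err)
	}
	middleware.NormalizeQueryValues(q)
	fmt.Println(q.Encode()) // key=value1&key=value2&other=x
}
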

Some files were not shown because too many files have changed in this diff.