Compare commits

...

103 Commits

Author SHA1 Message Date
Richard Ramos
2c0d4b873e chore(version): update libp2p.nimble to 1.11.0 2025-06-18 14:42:04 -04:00
Radosław Kamiński
d803352bd6 test(gossipsub): split unit and integration tests (#1465) 2025-06-16 15:18:18 +00:00
Radosław Kamiński
2eafac47e8 test(gossipsub): GossipThreshold and PublishThreshold tests (#1464) 2025-06-16 14:46:25 +00:00
vladopajic
848fdde0a8 feat(perf): add stats (#1452) 2025-06-13 10:16:45 +00:00
Gabriel Cruz
31e7dc68e2 chore(peeridauth): add mocked client (#1458) 2025-06-12 21:11:36 +00:00
Ivan FB
08299a2059 chore: Add some more context when an exception is caught (#1432)
Co-authored-by: richΛrd <info@richardramos.me>
2025-06-12 14:38:25 +00:00
Gabriel Cruz
2f3156eafb fix(daily): fix typo in testintegration (#1463) 2025-06-12 09:26:46 -03:00
Radosław Kamiński
72e85101b0 test(gossipsub): refactor and unify scoring tests (#1461) 2025-06-12 08:18:01 +00:00
Gabriel Cruz
d205260a3e chore(acme): add MockACMEApi for testing (#1457) 2025-06-11 18:59:29 +00:00
Radosław Kamiński
97e576d146 test: increase timeout (#1460) 2025-06-11 14:19:33 +00:00
richΛrd
888cb78331 feat(kad-dht): protobuffers (#1453) 2025-06-11 12:56:02 +00:00
richΛrd
1d4c261d2a feat: withWsTransport (#1398) 2025-06-10 22:32:55 +00:00
Gabriel Cruz
83de0c0abd feat(peeridauth): add peeridauth (#1445) 2025-06-10 10:25:34 -03:00
AkshayaMani
c501adc9ab feat(gossipsub): Add support for custom connection handling (Mix protocol integration) (#1420)
Co-authored-by: Ben-PH <benphawke@gmail.com>
2025-06-09 13:36:06 -04:00
Radosław Kamiński
f9fc24cc08 test(gossipsub): flaky tests (#1451) 2025-06-09 17:20:49 +01:00
richΛrd
cd26244ccc chore(quic): add libp2p_network_bytes metric (#1439)
Co-authored-by: vladopajic <vladopajic@users.noreply.github.com>
2025-06-09 09:42:52 -03:00
vladopajic
cabab6aafe chore(gossipsub): add consts (#1447)
Co-authored-by: Radoslaw Kaminski <radoslaw@status.im>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-06-06 14:33:38 +00:00
Radosław Kamiński
fb42a9b4aa test(gossipsub): parameters (#1442)
Co-authored-by: vladopajic <vladopajic@users.noreply.github.com>
2025-06-06 14:09:55 +00:00
Radosław Kamiński
141f4d9116 fix(GossipSub): save sent iHave in first element (#1405) 2025-06-06 10:27:59 +00:00
Gabriel Cruz
cb31152b53 feat(autotls): add acme client (#1436) 2025-06-05 17:47:02 +00:00
Radosław Kamiński
3a7745f920 test(gossipsub): message cache (#1431) 2025-06-03 15:18:29 +01:00
Radosław Kamiński
a89916fb1a test: checkUntilTimeout refactor (#1437) 2025-06-03 13:31:34 +01:00
vladopajic
c6cf46c904 fix(ci-daily): delete cache action will continue on error (#1435) 2025-06-02 17:08:31 +02:00
Gabriel Cruz
b28a71ab13 chore(readme): improve README's development section (#1427) 2025-05-29 17:51:29 +00:00
vladopajic
95b9859bcd chore(interop): move interop code to separate folder (#1413) 2025-05-29 16:14:12 +00:00
vladopajic
9e599753af ci(daily): add pinned dependencies variant (#1418) 2025-05-29 15:27:06 +00:00
richΛrd
2e924906bb chore: bump quic (#1428) 2025-05-29 14:25:02 +00:00
Radosław Kamiński
e811c1ad32 fix(gossipsub): save iDontWants messages in the first element of history (#1393) 2025-05-29 13:33:51 +01:00
Radosław Kamiński
86695b55bb test(gossipsub): include missing test files and handle flaky tests (#1416)
Co-authored-by: vladopajic <vladopajic@users.noreply.github.com>
2025-05-29 12:44:21 +01:00
vladopajic
8c3a4d882a ci(dependencies): fix access to tokens (#1421) 2025-05-29 00:27:36 +00:00
richΛrd
4bad343ddc fix: limit chronicles version to < 0.11.0 (#1423) 2025-05-28 21:00:41 -03:00
vladopajic
47b8a05c32 ci(daily): improvements (#1404) 2025-05-27 14:41:53 +00:00
Radosław Kamiński
4e6f4af601 test(gossipsub): heartbeat tests (#1391) 2025-05-27 10:28:12 +01:00
Miran
7275f6f9c3 chore: unused imports are now errors (#1399) 2025-05-26 21:36:08 +02:00
richΛrd
c3dae6a7d4 fix(quic): reset and mm for interop tests (#1397) 2025-05-26 12:16:17 -04:00
vladopajic
bb404eda4a fix(ci-daily): remove --solver flag (#1400) 2025-05-26 16:48:51 +02:00
richΛrd
584710bd80 chore: move -d:libp2p_quic_support flag to .nimble (#1392) 2025-05-26 08:57:26 -04:00
Radosław Kamiński
ad5eae9adf test(gossipsub): move and refactor control messages tests (#1380) 2025-05-22 15:10:37 +00:00
richΛrd
26fae7cd2d chore: bump quic (#1387) 2025-05-21 22:30:35 +00:00
Miran
87d6655368 chore: update more dependencies (#1374) 2025-05-21 21:46:09 +00:00
richΛrd
cd60b254a0 chore(version): update libp2p.nimble to 1.10.1 (#1390) 2025-05-21 07:40:11 -04:00
richΛrd
b88cdcdd4b chore: make quic optional (#1389) 2025-05-20 21:04:30 -04:00
vladopajic
4a5e06cb45 revert: disable transport interop with zig-v0.0.1 (#1372) (#1383) 2025-05-20 14:20:42 +02:00
vladopajic
fff3a7ad1f chore(hp): add timeout on dial (#1378) 2025-05-20 11:10:01 +00:00
Miran
05c894d487 fix(ci): test Nim 2.2 (#1385) 2025-05-19 15:51:56 -03:00
vladopajic
8850e9ccd9 ci(test): reduce timeout (#1376) 2025-05-19 15:34:16 +00:00
Ivan FB
2746531851 chore(dialer): capture possible exception (#1381) 2025-05-19 10:57:04 -04:00
vladopajic
2856db5490 ci(interop): disable transport interop with zig-v0.0.1 (#1372) 2025-05-15 20:04:41 +00:00
AYAHASSAN287
b29e78ccae test(gossipsub): block5 protobuf test cases (#1204)
Co-authored-by: Radoslaw Kaminski <radoslaw@status.im>
2025-05-15 16:32:03 +01:00
Gabriel Cruz
c9761c3588 chore: improve README.md text (#1373) 2025-05-15 12:35:01 +00:00
richΛrd
e4ef21e07c chore: bump quic (#1371)
Co-authored-by: Gabriel Cruz <8129788+gmelodie@users.noreply.github.com>
2025-05-14 21:06:38 +00:00
Miran
61429aa0d6 chore: fix import warnings (#1370) 2025-05-14 19:08:46 +00:00
Radosław Kamiński
c1ef011556 test(gossipsub): refactor testgossipinternal (#1366) 2025-05-14 17:15:31 +01:00
vladopajic
cd1424c09f chore(interop): use the same redis dependency (#1364) 2025-05-14 15:49:51 +00:00
Miran
878d627f93 chore: update dependencies (#1368) 2025-05-14 10:51:08 -03:00
richΛrd
1d6385ddc5 chore: bump quic (#1361)
Co-authored-by: Gabriel Cruz <8129788+gmelodie@users.noreply.github.com>
2025-05-14 11:40:13 +00:00
Gabriel Cruz
873f730b4e chore: change nim-stew dep tagging (#1362) 2025-05-13 21:46:07 -04:00
Radosław Kamiński
1c1547b137 test(gossipsub): Topic Membership Tests - updated (#1363) 2025-05-13 16:22:49 +01:00
Álex
9997f3e3d3 test(gossipsub): control message (#1191)
Co-authored-by: Radoslaw Kaminski <radoslaw@status.im>
2025-05-13 10:54:07 -04:00
richΛrd
4d0b4ecc22 feat: interop (#1303) 2025-05-06 19:27:33 -04:00
Gabriel Cruz
ccb24b5f1f feat(cert): add certificate signing request (CSR) generation (#1355) 2025-05-06 18:56:51 +00:00
Marko Burčul
5cb493439d fix(ci): secrets token typo (#1357) 2025-05-05 09:49:42 -03:00
Ivan FB
24b284240a chore: add gcsafe pragma to removeValidator (#1356) 2025-05-02 18:39:00 +02:00
richΛrd
b0f77d24f9 chore(version): update libp2p.nimble to 1.10.0 (#1351) 2025-05-01 05:39:58 -04:00
richΛrd
e32ac492d3 chore: set @vacp2p/p2p team as codeowners of repo (#1352) 2025-05-01 05:03:54 -03:00
Gabriel Cruz
470a7f8cc5 chore: add libp2p CID codec (#1348) 2025-04-27 09:45:40 +00:00
Radosław Kamiński
b269fce289 test(gossipsub): reorganize tests by feature category (#1350) 2025-04-25 16:48:50 +01:00
vladopajic
bc4febe92c fix: git ignore for tests (#1349) 2025-04-24 15:36:46 +02:00
Radosław Kamiński
b5f9bfe0f4 test(gossipsub): optimise heartbeat interval and sleepAsync (#1342) 2025-04-23 18:10:16 +01:00
Gabriel Cruz
4ce1e8119b chore(readme): add gabe as a maintainer (#1346) 2025-04-23 15:57:32 +02:00
Miran
65136b38e2 chore: fix warnings (#1341)
Co-authored-by: vladopajic <vladopajic@users.noreply.github.com>
2025-04-22 19:45:53 +00:00
Gabriel Cruz
ffc114e8d9 chore: fix broken old status-im links (#1332) 2025-04-22 09:14:26 +00:00
Radosław Kamiński
f2be2d6ed5 test: include missing tests in testall (#1338) 2025-04-22 09:45:38 +01:00
Radosław Kamiński
ab690a06a6 test: combine tests (#1335) 2025-04-21 17:39:42 +01:00
Radosław Kamiński
10cdaf14c5 chore(ci): decouple examples from unit tests (#1334) 2025-04-21 16:31:50 +01:00
Radosław Kamiński
ebbfb63c17 chore(test): remove unused flags and simplify testpubsub (#1328) 2025-04-17 13:38:27 +02:00
Álex
ac25da6cea test(gossipsub): message propagation (#1184)
Co-authored-by: Radoslaw Kaminski <radoslaw@status.im>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2025-04-14 15:49:13 +01:00
Gabriel Cruz
fb41972ba3 chore: rendezvous improvements (#1319) 2025-04-11 13:31:24 +00:00
Richard Ramos
504d1618af fix: bump nim-quic 2025-04-10 17:54:20 -04:00
richΛrd
0f91b23f12 fix: do not use while loop for quic transport errors (#1317) 2025-04-10 21:47:42 +00:00
vladopajic
5ddd62a8b9 chore(git): ignore auto generated test binaries (#1320) 2025-04-10 13:39:04 +00:00
vladopajic
e7f13a7e73 refactor: utilize singe bridgedConnections (#1309) 2025-04-10 08:37:26 -04:00
vladopajic
89e825fb0d fix(quic): continue accept when client certificate is incorrect (#1312) 2025-04-08 21:03:47 +02:00
richΛrd
1b706e84fa chore: bump nim-quic (#1314) 2025-04-08 10:04:31 -04:00
richΛrd
5cafcb70dc chore: remove range checks from rendezvous (#1306) 2025-04-07 12:16:18 +00:00
vladopajic
8c71266058 chore(readme): add quic and memory transports (#1311) 2025-04-04 15:07:31 +00:00
vladopajic
9c986c5c13 feat(transport): add memory transport (#1304) 2025-04-04 15:43:34 +02:00
vladopajic
3d0451d7f2 chore(protocols): remove deprecated utilities (#1305) 2025-04-04 08:44:36 +00:00
richΛrd
b1f65c97ae fix: unsafe string usage (#1308) 2025-04-03 15:33:08 -04:00
vladopajic
5584809fca chore(certificate): update test vectors (#1294) 2025-04-01 17:15:26 +02:00
richΛrd
7586f17b15 fix: set peerId on incoming Quic connection (#1302) 2025-03-31 09:38:30 -04:00
richΛrd
0e16d873c8 feat: withQuicTransport (#1301) 2025-03-30 04:44:49 +00:00
richΛrd
b11acd2118 chore: update quic and expect exception in test (#1300)
Co-authored-by: vladopajic <vladopajic@users.noreply.github.com>
2025-03-27 12:19:49 -04:00
vladopajic
1376f5b077 chore(quic): add tests with invalid certs (#1297) 2025-03-27 15:19:14 +01:00
richΛrd
340ea05ae5 feat: quic (#1265)
Co-authored-by: vladopajic <vladopajic@users.noreply.github.com>
2025-03-26 10:17:15 -04:00
vladopajic
024ec51f66 feat(certificate): add date verification (#1299) 2025-03-25 11:50:25 +01:00
richΛrd
efe453df87 refactor: use openssl instead of mbedtls (#1298) 2025-03-24 10:22:52 -04:00
vladopajic
c0f4d903ba feat(certificate): set distinguishable issuer name with peer id (#1296) 2025-03-21 12:38:02 +00:00
vladopajic
28f2b268ae chore(certificate): cosmetics (#1293) 2025-03-19 17:02:14 +00:00
vladopajic
5abb6916b6 feat: X.509 certificate validation (#1292) 2025-03-19 15:40:14 +00:00
richΛrd
e6aec94c0c chore: use token per repo in autobump task (#1288) 2025-03-18 17:12:52 +00:00
vladopajic
9eddc7c662 chore: specify exceptions (#1284) 2025-03-17 13:09:18 +00:00
richΛrd
028c730a4f chore: remove python dependency (#1287) 2025-03-17 08:04:30 -04:00
169 changed files with 10231 additions and 4111 deletions
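The same range can be inspected locally; a minimal sketch, assuming a clone of vacp2p/nim-libp2p, using the oldest (028c730a4f) and newest (2c0d4b873e) commits listed above:

```bash
# Reproduce the commit listing locally (assumes the repository is cloned).
git clone https://github.com/vacp2p/nim-libp2p
cd nim-libp2p
git log --oneline 028c730a4f^..2c0d4b873e
```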

1
.github/CODEOWNERS vendored Normal file

@@ -0,0 +1 @@
* @vacp2p/p2p


@@ -14,7 +14,7 @@ concurrency:
jobs:
test:
timeout-minutes: 90
timeout-minutes: 40
strategy:
fail-fast: false
matrix:
@@ -36,6 +36,8 @@ jobs:
memory_management: refc
- ref: version-2-0
memory_management: refc
- ref: version-2-2
memory_management: refc
include:
- platform:
os: linux
@@ -96,15 +98,9 @@ jobs:
# The change happened on Nimble v0.14.0. Also forcing the deps to be reinstalled on each os and cpu.
key: nimbledeps-${{ matrix.nim.ref }}-${{ matrix.builder }}-${{ matrix.platform.cpu }}-${{ hashFiles('.pinned') }} # hashFiles returns a different value on windows
- name: Setup python
run: |
mkdir .venv
python -m venv .venv
- name: Install deps
if: ${{ steps.deps-cache.outputs.cache-hit != 'true' }}
run: |
source .venv/bin/activate
nimble install_pinned
- name: Use gcc 14
@@ -118,11 +114,9 @@ jobs:
- name: Run tests
run: |
source .venv/bin/activate
nim --version
nimble --version
gcc --version
NIMFLAGS="${NIMFLAGS} --mm:${{ matrix.nim.memory_management }}"
export NIMFLAGS="${NIMFLAGS} --mm:${{ matrix.nim.memory_management }}"
nimble test

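The reworked test step above reduces to two shell commands; a minimal local equivalent, assuming Nim and the project dependencies are installed (the `--mm` value is just one entry from the CI matrix):

```bash
# Local equivalent of the CI "Run tests" step; refc is an example matrix value.
export NIMFLAGS="${NIMFLAGS} --mm:refc"
nimble test
```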

@@ -6,9 +6,26 @@ on:
workflow_dispatch:
jobs:
test_amd64:
name: Daily amd64
test_amd64_latest:
name: Daily amd64 (latest dependencies)
uses: ./.github/workflows/daily_common.yml
with:
nim: "[{'ref': 'version-1-6', 'memory_management': 'refc'}, {'ref': 'version-2-0', 'memory_management': 'refc'}]"
nim: "[
{'ref': 'version-1-6', 'memory_management': 'refc'},
{'ref': 'version-2-0', 'memory_management': 'refc'},
{'ref': 'version-2-2', 'memory_management': 'refc'},
{'ref': 'devel', 'memory_management': 'refc'},
]"
cpu: "['amd64']"
test_amd64_pinned:
name: Daily amd64 (pinned dependencies)
uses: ./.github/workflows/daily_common.yml
with:
pinned_deps: true
nim: "[
{'ref': 'version-1-6', 'memory_management': 'refc'},
{'ref': 'version-2-0', 'memory_management': 'refc'},
{'ref': 'version-2-2', 'memory_management': 'refc'},
{'ref': 'devel', 'memory_management': 'refc'},
]"
cpu: "['amd64']"


@@ -4,6 +4,11 @@ name: Daily Common
on:
workflow_call:
inputs:
pinned_deps:
description: 'Should dependencies be installed from pinned file or use latest versions'
required: false
type: boolean
default: false
nim:
description: 'Nim Configuration'
required: true
@@ -17,26 +22,18 @@ on:
required: false
type: string
default: "[]"
use_sat_solver:
description: 'Install dependencies with SAT Solver'
required: false
type: boolean
default: false
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
delete_cache:
name: Delete github action's branch cache
runs-on: ubuntu-latest
continue-on-error: true
steps:
- uses: snnaplab/delete-branch-cache-action@v1
test:
needs: delete_cache
timeout-minutes: 90
timeout-minutes: 40
strategy:
fail-fast: false
matrix:
@@ -81,8 +78,14 @@ jobs:
- name: Install p2pd
run: |
V=1 bash scripts/build_p2pd.sh p2pdCache 124530a3
- name: Install dependencies
- name: Install dependencies (pinned)
if: ${{ inputs.pinned_deps }}
run: |
nimble install_pinned
- name: Install dependencies (latest)
if: ${{ inputs.pinned_deps != 'true' }}
run: |
nimble install -y --depsOnly
@@ -91,11 +94,6 @@ jobs:
nim --version
nimble --version
if [[ "${{ inputs.use_sat_solver }}" == "true" ]]; then
dependency_solver="sat"
else
dependency_solver="legacy"
fi
NIMFLAGS="${NIMFLAGS} --mm:${{ matrix.nim.memory_management }} --solver:${dependency_solver}"
export NIMFLAGS="${NIMFLAGS} --mm:${{ matrix.nim.memory_management }}"
nimble test
nimble testintegration


@@ -1,14 +0,0 @@
name: Daily Nim Devel
on:
schedule:
- cron: "30 6 * * *"
workflow_dispatch:
jobs:
test_nim_devel:
name: Daily Nim Devel
uses: ./.github/workflows/daily_common.yml
with:
nim: "[{'ref': 'devel', 'memory_management': 'orc'}]"
cpu: "['amd64']"


@@ -10,6 +10,14 @@ jobs:
name: Daily i386 (Linux)
uses: ./.github/workflows/daily_common.yml
with:
nim: "[{'ref': 'version-1-6', 'memory_management': 'refc'}, {'ref': 'version-2-0', 'memory_management': 'refc'}, {'ref': 'devel', 'memory_management': 'orc'}]"
nim: "[
{'ref': 'version-1-6', 'memory_management': 'refc'},
{'ref': 'version-2-0', 'memory_management': 'refc'},
{'ref': 'version-2-2', 'memory_management': 'refc'},
{'ref': 'devel', 'memory_management': 'refc'},
]"
cpu: "['i386']"
exclude: "[{'platform': {'os':'macos'}}, {'platform': {'os':'windows'}}]"
exclude: "[
{'platform': {'os':'macos'}},
{'platform': {'os':'windows'}},
]"


@@ -1,15 +0,0 @@
name: Daily SAT
on:
schedule:
- cron: "30 6 * * *"
workflow_dispatch:
jobs:
test_amd64:
name: Daily SAT
uses: ./.github/workflows/daily_common.yml
with:
nim: "[{'ref': 'version-2-0', 'memory_management': 'refc'}]"
cpu: "['amd64']"
use_sat_solver: true


@@ -17,10 +17,13 @@ jobs:
target:
- repository: status-im/nimbus-eth2
ref: unstable
secret: ACTIONS_GITHUB_TOKEN_NIMBUS_ETH2
- repository: waku-org/nwaku
ref: master
secret: ACTIONS_GITHUB_TOKEN_NWAKU
- repository: codex-storage/nim-codex
ref: master
secret: ACTIONS_GITHUB_TOKEN_NIM_CODEX
steps:
- name: Clone target repository
uses: actions/checkout@v4
@@ -29,7 +32,7 @@ jobs:
ref: ${{ matrix.target.ref}}
path: nbc
fetch-depth: 0
token: ${{ secrets.ACTIONS_GITHUB_TOKEN }}
token: ${{ secrets[matrix.target.secret] }}
- name: Checkout this ref in target repository
run: |

60
.github/workflows/examples.yml vendored Normal file

@@ -0,0 +1,60 @@
name: Examples
on:
push:
branches:
- master
pull_request:
merge_group:
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
examples:
timeout-minutes: 30
strategy:
fail-fast: false
defaults:
run:
shell: bash
name: "Build Examples"
runs-on: ubuntu-22.04
steps:
- name: Checkout
uses: actions/checkout@v4
with:
submodules: true
- name: Setup Nim
uses: "./.github/actions/install_nim"
with:
shell: bash
os: linux
cpu: amd64
nim_ref: version-1-6
- name: Restore deps from cache
id: deps-cache
uses: actions/cache@v3
with:
path: nimbledeps
key: nimbledeps-${{ hashFiles('.pinned') }}
- name: Install deps
if: ${{ steps.deps-cache.outputs.cache-hit != 'true' }}
run: |
nimble install_pinned
- name: Build and run examples
run: |
nim --version
nimble --version
gcc --version
NIMFLAGS="${NIMFLAGS} --mm:${{ matrix.nim.memory_management }}"
nimble examples


@@ -27,7 +27,7 @@ jobs:
- uses: actions/checkout@v4
- uses: docker/setup-buildx-action@v3
- name: Build image
run: docker buildx build --load -t nim-libp2p-head -f tests/transport-interop/Dockerfile .
run: docker buildx build --load -t nim-libp2p-head -f interop/transport/Dockerfile .
- name: Run tests
uses: libp2p/test-plans/.github/actions/run-transport-interop-test@master
with:
@@ -35,7 +35,7 @@ jobs:
# without suffix action fails because "hole-punching-interop" artifacts have
# the same name as "transport-interop" artifacts
test-results-suffix: transport-interop
extra-versions: ${{ github.workspace }}/tests/transport-interop/version.json
extra-versions: ${{ github.workspace }}/interop/transport/version.json
s3-cache-bucket: ${{ vars.S3_LIBP2P_BUILD_CACHE_BUCKET_NAME }}
s3-access-key-id: ${{ vars.S3_LIBP2P_BUILD_CACHE_AWS_ACCESS_KEY_ID }}
s3-secret-access-key: ${{ secrets.S3_LIBP2P_BUILD_CACHE_AWS_SECRET_ACCESS_KEY }}
@@ -48,12 +48,12 @@ jobs:
- uses: actions/checkout@v4
- uses: docker/setup-buildx-action@v3
- name: Build image
run: docker buildx build --load -t nim-libp2p-head -f tests/hole-punching-interop/Dockerfile .
run: docker buildx build --load -t nim-libp2p-head -f interop/hole-punching/Dockerfile .
- name: Run tests
uses: libp2p/test-plans/.github/actions/run-interop-hole-punch-test@master
with:
test-filter: nim-libp2p-head
extra-versions: ${{ github.workspace }}/tests/hole-punching-interop/version.json
extra-versions: ${{ github.workspace }}/interop/hole-punching/version.json
s3-cache-bucket: ${{ vars.S3_LIBP2P_BUILD_CACHE_BUCKET_NAME }}
s3-access-key-id: ${{ vars.S3_LIBP2P_BUILD_CACHE_AWS_ACCESS_KEY_ID }}
s3-secret-access-key: ${{ secrets.S3_LIBP2P_BUILD_CACHE_AWS_SECRET_ACCESS_KEY }}


@@ -22,6 +22,6 @@ jobs:
uses: arnetheduck/nph-action@v1
with:
version: 0.6.1
options: "examples libp2p tests tools *.nim*"
options: "examples libp2p tests interop tools *.nim*"
fail: true
suggest: true

8
.gitignore vendored

@@ -17,3 +17,11 @@ examples/*.md
nimble.develop
nimble.paths
go-libp2p-daemon/
# Ignore all test build files in tests folder (auto generated when running tests).
# First rule (`tests/**/test*[^.]*`) will ignore all binaries: has prefix test + does not have dot in name.
# Second and third rules are here to un-ignores all files with extension and Docker file,
# because it appears that vs code is skipping text search is some tests files without these rules.
tests/**/test*[^.]*
!tests/**/*.*
!tests/**/Dockerfile

38
.pinned

@@ -1,20 +1,22 @@
bearssl;https://github.com/status-im/nim-bearssl@#667b40440a53a58e9f922e29e20818720c62d9ac
chronicles;https://github.com/status-im/nim-chronicles@#32ac8679680ea699f7dbc046e8e0131cac97d41a
chronos;https://github.com/status-im/nim-chronos@#c04576d829b8a0a1b12baaa8bc92037501b3a4a0
bearssl;https://github.com/status-im/nim-bearssl@#34d712933a4e0f91f5e66bc848594a581504a215
chronicles;https://github.com/status-im/nim-chronicles@#81a4a7a360c78be9c80c8f735c76b6d4a1517304
chronos;https://github.com/status-im/nim-chronos@#b55e2816eb45f698ddaca8d8473e401502562db2
dnsclient;https://github.com/ba0f3/dnsclient.nim@#23214235d4784d24aceed99bbfe153379ea557c8
faststreams;https://github.com/status-im/nim-faststreams@#720fc5e5c8e428d9d0af618e1e27c44b42350309
httputils;https://github.com/status-im/nim-http-utils@#3b491a40c60aad9e8d3407443f46f62511e63b18
json_serialization;https://github.com/status-im/nim-json-serialization@#85b7ea093cb85ee4f433a617b97571bd709d30df
mbedtls;https://github.com/status-im/nim-mbedtls.git@#740fb2f469511adc1772c5cb32395f4076b9e0c5
faststreams;https://github.com/status-im/nim-faststreams@#c51315d0ae5eb2594d0bf41181d0e1aca1b3c01d
httputils;https://github.com/status-im/nim-http-utils@#79cbab1460f4c0cdde2084589d017c43a3d7b4f1
json_serialization;https://github.com/status-im/nim-json-serialization@#2b1c5eb11df3647a2cee107cd4cce3593cbb8bcf
metrics;https://github.com/status-im/nim-metrics@#6142e433fc8ea9b73379770a788017ac528d46ff
ngtcp2;https://github.com/status-im/nim-ngtcp2@#6834f4756b6af58356ac9c4fef3d71db3c3ae5fe
nimcrypto;https://github.com/cheatfate/nimcrypto@#1c8d6e3caf3abc572136ae9a1da81730c4eb4288
quic;https://github.com/status-im/nim-quic.git@#ddcb31ffb74b5460ab37fd13547eca90594248bc
results;https://github.com/arnetheduck/nim-results@#f3c666a272c69d70cb41e7245e7f6844797303ad
secp256k1;https://github.com/status-im/nim-secp256k1@#7246d91c667f4cc3759fdd50339caa45a2ecd8be
serialization;https://github.com/status-im/nim-serialization@#4bdbc29e54fe54049950e352bb969aab97173b35
stew;https://github.com/status-im/nim-stew@#3159137d9a3110edb4024145ce0ba778975de40e
testutils;https://github.com/status-im/nim-testutils@#dfc4c1b39f9ded9baf6365014de2b4bfb4dafc34
unittest2;https://github.com/status-im/nim-unittest2@#2300fa9924a76e6c96bc4ea79d043e3a0f27120c
websock;https://github.com/status-im/nim-websock@#f8ed9b40a5ff27ad02a3c237c4905b0924e3f982
zlib;https://github.com/status-im/nim-zlib@#38b72eda9d70067df4a953f56b5ed59630f2a17b
ngtcp2;https://github.com/status-im/nim-ngtcp2@#9456daa178c655bccd4a3c78ad3b8cce1f0add73
nimcrypto;https://github.com/cheatfate/nimcrypto@#19c41d6be4c00b4a2c8000583bd30cf8ceb5f4b1
quic;https://github.com/status-im/nim-quic.git@#ca3eda53bee9cef7379be195738ca1490877432f
results;https://github.com/arnetheduck/nim-results@#df8113dda4c2d74d460a8fa98252b0b771bf1f27
secp256k1;https://github.com/status-im/nim-secp256k1@#f808ed5e7a7bfc42204ec7830f14b7a42b63c284
serialization;https://github.com/status-im/nim-serialization@#548d0adc9797a10b2db7f788b804330306293088
stew;https://github.com/status-im/nim-stew@#0db179256cf98eb9ce9ee7b9bc939f219e621f77
testutils;https://github.com/status-im/nim-testutils@#9e842bd58420d23044bc55e16088e8abbe93ce51
unittest2;https://github.com/status-im/nim-unittest2@#8b51e99b4a57fcfb31689230e75595f024543024
websock;https://github.com/status-im/nim-websock@#d5cd89062cd2d168ef35193c7d29d2102921d97e
zlib;https://github.com/status-im/nim-zlib@#daa8723fd32299d4ca621c837430c29a5a11e19a
jwt;https://github.com/vacp2p/nim-jwt@#18f8378de52b241f321c1f9ea905456e89b95c6f
bearssl_pkey_decoder;https://github.com/vacp2p/bearssl_pkey_decoder@#21dd3710df9345ed2ad8bf8f882761e07863b8e0
bio;https://github.com/xzeshen/bio@#0f5ed58b31c678920b6b4f7c1783984e6660be97

172
README.md

@@ -20,39 +20,120 @@
- [Background](#background)
- [Install](#install)
- [Getting Started](#getting-started)
- [Go-libp2p-daemon](#go-libp2p-daemon)
- [Modules](#modules)
- [Users](#users)
- [Stability](#stability)
- [Development](#development)
- [Contribute](#contribute)
- [Contributors](#contributors)
- [Core Maintainers](#core-maintainers)
- [Modules](#modules)
- [Users](#users)
- [Stability](#stability)
- [License](#license)
## Background
libp2p is a [Peer-to-Peer](https://en.wikipedia.org/wiki/Peer-to-peer) networking stack, with [implementations](https://github.com/libp2p/libp2p#implementations) in multiple languages derived from the same [specifications.](https://github.com/libp2p/specs)
Building large scale peer-to-peer systems has been complex and difficult in the last 15 years and libp2p is a way to fix that. It's striving to be a modular stack, with sane and secure defaults, useful protocols, while remain open and extensible.
This implementation in native Nim, relying on [chronos](https://github.com/status-im/nim-chronos) for async. It's used in production by a few [projects](#users)
Building large scale peer-to-peer systems has been complex and difficult in the last 15 years and libp2p is a way to fix that. It strives to be a modular stack with secure defaults and useful protocols, while remaining open and extensible.
This is a native Nim implementation, using [chronos](https://github.com/status-im/nim-chronos) for asynchronous execution. It's used in production by a few [projects](#users)
Learn more about libp2p at [**libp2p.io**](https://libp2p.io) and follow libp2p's documentation [**docs.libp2p.io**](https://docs.libp2p.io).
## Install
**Prerequisite**
- [Nim](https://nim-lang.org/install.html)
> The currently supported Nim version is 1.6.18.
> The currently supported Nim versions are 1.6, 2.0 and 2.2.
```
nimble install libp2p
```
You'll find the nim-libp2p documentation [here](https://vacp2p.github.io/nim-libp2p/docs/). See [examples](./examples) for simple usage patterns.
## Getting Started
You'll find the nim-libp2p documentation [here](https://vacp2p.github.io/nim-libp2p/docs/).
Try out the chat example. For this you'll need to have [`go-libp2p-daemon`](examples/go-daemon/daemonapi.md) running. Full code can be found [here](https://github.com/status-im/nim-libp2p/blob/master/examples/chat.nim):
```bash
nim c -r --threads:on examples/directchat.nim
```
This will output a peer ID such as `QmbmHfVvouKammmQDJck4hz33WvVktNEe7pasxz2HgseRu` which you can use in another instance to connect to it.
```bash
./examples/directchat
/connect QmbmHfVvouKammmQDJck4hz33WvVktNEe7pasxz2HgseRu # change this hash by the hash you were given
```
You can now chat between the instances!
![Chat example](https://imgur.com/caYRu8K.gif)
## Development
Clone the repository and install the dependencies:
```sh
git clone https://github.com/vacp2p/nim-libp2p
cd nim-libp2p
nimble install -dy
```
### Testing
Remember you'll need to build the `go-libp2p-daemon` binary to run the `nim-libp2p` tests.
To do so, please follow the installation instructions in [daemonapi.md](examples/go-daemon/daemonapi.md).
Run unit tests:
```sh
# run all the unit tests
nimble test
```
**Obs:** Running all tests requires the [`go-libp2p-daemon` to be installed and running](examples/go-daemon/daemonapi.md).
If you only want to run tests that don't require `go-libp2p-daemon`, use:
```
nimble testnative
```
For a list of all available test suites, use:
```
nimble tasks
```
### Contribute
The libp2p implementation in Nim is a work in progress. We welcome contributors to help out! Specifically, you can:
- Go through the modules and **check out existing issues**. This would be especially useful for modules in active development. Some knowledge of IPFS/libp2p may be required, as well as the infrastructure behind it.
- **Perform code reviews**. Feel free to let us know if you found anything that can a) speed up the project development b) ensure better quality and c) reduce possible future bugs.
- **Add tests**. Help nim-libp2p to be more robust by adding more tests to the [tests folder](tests/).
- **Small PRs**. Try to keep PRs atomic and digestible. This makes the review process and pinpointing bugs easier.
- **Code format**. Code should be formatted with [nph](https://github.com/arnetheduck/nph) and follow the [Status Nim Style Guide](https://status-im.github.io/nim-style-guide/).
### Contributors
<a href="https://github.com/vacp2p/nim-libp2p/graphs/contributors"><img src="https://contrib.rocks/image?repo=vacp2p/nim-libp2p" alt="nim-libp2p contributors"></a>
### Core Maintainers
<table>
<tbody>
<tr>
<td align="center"><a href="https://github.com/richard-ramos"><img src="https://avatars.githubusercontent.com/u/1106587?v=4?s=100" width="100px;" alt="Richard"/><br /><sub><b>Richard</b></sub></a></td>
<td align="center"><a href="https://github.com/vladopajic"><img src="https://avatars.githubusercontent.com/u/4353513?v=4?s=100" width="100px;" alt="Vlado"/><br /><sub><b>Vlado</b></sub></a></td>
<td align="center"><a href="https://github.com/gmelodie"><img src="https://avatars.githubusercontent.com/u/8129788?v=4?s=100" width="100px;" alt="Gabe"/><br /><sub><b>Gabe</b></sub></a></td>
</tr>
</tbody>
</table>
### Compile time flags
Enable quic transport support
```bash
nim c -d:libp2p_quic_support some_file.nim
```
Enable expensive metrics (ie, metrics with per-peer cardinality):
```bash
nim c -d:libp2p_expensive_metrics some_file.nim
```
Set list of known libp2p agents for metrics:
```bash
nim c -d:libp2p_agents_metrics -d:KnownLibP2PAgents=nimbus,lighthouse,lodestar,prysm,teku some_file.nim
```
Specify gossipsub specific topics to measure in the metrics:
```bash
nim c -d:KnownLibP2PTopics=topic1,topic2,topic3 some_file.nim
```
## Modules
List of packages modules implemented in nim-libp2p:
@@ -70,6 +151,8 @@ List of packages modules implemented in nim-libp2p:
| [libp2p-tcp](libp2p/transports/tcptransport.nim) | TCP transport |
| [libp2p-ws](libp2p/transports/wstransport.nim) | WebSocket & WebSocket Secure transport |
| [libp2p-tor](libp2p/transports/tortransport.nim) | Tor Transport |
| [libp2p-quic](libp2p/transports/quictransport.nim) | Quic Transport |
| [libp2p-memory](libp2p/transports/memorytransport.nim) | Memory Transport |
| **Secure Channels** | |
| [libp2p-noise](libp2p/protocols/secure/noise.nim) | [Noise](https://docs.libp2p.io/concepts/secure-comm/noise/) secure channel |
| [libp2p-plaintext](libp2p/protocols/secure/plaintext.nim) | Plain Text for development purposes |
@@ -78,10 +161,10 @@ List of packages modules implemented in nim-libp2p:
| [libp2p-yamux](libp2p/muxers/yamux/yamux.nim) | [Yamux](https://docs.libp2p.io/concepts/multiplex/yamux/) multiplexer |
| **Data Types** | |
| [peer-id](libp2p/peerid.nim) | [Cryptographic identifiers](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-id) |
| [peer-store](libp2p/peerstore.nim) | ["Address book" of known peers](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-store) |
| [peer-store](libp2p/peerstore.nim) | [Address book of known peers](https://docs.libp2p.io/concepts/fundamentals/peers/#peer-store) |
| [multiaddress](libp2p/multiaddress.nim) | [Composable network addresses](https://github.com/multiformats/multiaddr) |
| [signed envelope](libp2p/signed_envelope.nim) | [Signed generic data container](https://github.com/libp2p/specs/blob/master/RFC/0002-signed-envelopes.md) |
| [routing record](libp2p/routing_record.nim) | [Signed peer dialing informations](https://github.com/libp2p/specs/blob/master/RFC/0003-routing-records.md) |
| [signed-envelope](libp2p/signed_envelope.nim) | [Signed generic data container](https://github.com/libp2p/specs/blob/master/RFC/0002-signed-envelopes.md) |
| [routing-record](libp2p/routing_record.nim) | [Signed peer dialing informations](https://github.com/libp2p/specs/blob/master/RFC/0003-routing-records.md) |
| [discovery manager](libp2p/discovery/discoverymngr.nim) | Discovery Manager |
| **Utilities** | |
| [libp2p-crypto](libp2p/crypto) | Cryptographic backend |
@@ -109,65 +192,6 @@ The versioning follows [semver](https://semver.org/), with some additions:
We aim to be compatible at all time with at least 2 Nim `MINOR` versions, currently `1.6 & 2.0`
## Development
Clone and Install dependencies:
```sh
git clone https://github.com/vacp2p/nim-libp2p
cd nim-libp2p
# to use dependencies computed by nimble
nimble install -dy
# OR to install the dependencies versions used in CI
nimble install_pinned
```
Run unit tests:
```sh
# run all the unit tests
nimble test
```
This requires the go daemon to be available. To only run native tests, use `nimble testnative`.
Or use `nimble tasks` to show all available tasks.
### Contribute
The libp2p implementation in Nim is a work in progress. We welcome contributors to help out! Specifically, you can:
- Go through the modules and **check out existing issues**. This would be especially useful for modules in active development. Some knowledge of IPFS/libp2p may be required, as well as the infrastructure behind it.
- **Perform code reviews**. Feel free to let us know if you found anything that can a) speed up the project development b) ensure better quality and c) reduce possible future bugs.
- **Add tests**. Help nim-libp2p to be more robust by adding more tests to the [tests folder](tests/).
- **Small PRs**. Try to keep PRs atomic and digestible. This makes the review process and pinpointing bugs easier.
- **Code format**. Please format code using [nph](https://github.com/arnetheduck/nph) v0.5.1. This will ensure a consistent codebase and make PRs easier to review. A CI rule has been added to ensure that future commits are all formatted using the same nph version.
The code follows the [Status Nim Style Guide](https://status-im.github.io/nim-style-guide/).
### Contributors
<a href="https://github.com/vacp2p/nim-libp2p/graphs/contributors"><img src="https://contrib.rocks/image?repo=vacp2p/nim-libp2p" alt="nim-libp2p contributors"></a>
### Core Maintainers
<table>
<tbody>
<tr>
<td align="center"><a href="https://github.com/richard-ramos"><img src="https://avatars.githubusercontent.com/u/1106587?v=4?s=100" width="100px;" alt="Richard"/><br /><sub><b>Richard</b></sub></a></td>
<td align="center"><a href="https://github.com/vladopajic"><img src="https://avatars.githubusercontent.com/u/4353513?v=4?s=100" width="100px;" alt="Vlado"/><br /><sub><b>Vlado</b></sub></a></td>
</tr>
</tbody>
</table>
### Compile time flags
Enable expensive metrics (ie, metrics with per-peer cardinality):
```bash
nim c -d:libp2p_expensive_metrics some_file.nim
```
Set list of known libp2p agents for metrics:
```bash
nim c -d:libp2p_agents_metrics -d:KnownLibP2PAgents=nimbus,lighthouse,lodestar,prysm,teku some_file.nim
```
Specify gossipsub specific topics to measure in the metrics:
```bash
nim c -d:KnownLibP2PTopics=topic1,topic2,topic3 some_file.nim
```
## License
Licensed and distributed under either of


@@ -4,6 +4,7 @@ if dirExists("nimbledeps/pkgs"):
if dirExists("nimbledeps/pkgs2"):
switch("NimblePath", "nimbledeps/pkgs2")
switch("warningAsError", "UnusedImport:on")
switch("warning", "CaseTransition:off")
switch("warning", "ObservableStores:off")
switch("warning", "LockLevel:off")


@@ -0,0 +1,3 @@
{.used.}
import directchat, tutorial_6_game


@@ -0,0 +1,5 @@
{.used.}
import
helloworld, circuitrelay, tutorial_1_connect, tutorial_2_customproto,
tutorial_3_protobuf, tutorial_4_gossipsub, tutorial_5_discovery


@@ -93,8 +93,8 @@ proc serveThread(udata: CustomData) {.async.} =
pending.add(item.write(msg))
if len(pending) > 0:
var results = await all(pending)
except:
echo getCurrentException().msg
except CatchableError as err:
echo err.msg
proc main() {.async.} =
var data = new CustomData


@@ -3,9 +3,7 @@
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Script](#script)
- [Usage](#usage)
- [Example](#example)
- [Getting Started](#getting-started)
- [Examples](#examples)
# Introduction
This is a libp2p-backed daemon wrapping the functionalities of go-libp2p for use in Nim. <br>
@@ -13,20 +11,25 @@ For more information about the go daemon, check out [this repository](https://gi
> **Required only** for running the tests.
# Prerequisites
Go with version `1.16.0`.
Go with version `1.16.0`
> You will *likely* be able to build `go-libp2p-daemon` with different Go versions, but **they haven't been tested**.
# Installation
Follow one of the methods below:
## Script
Run the build script while having the `go` command pointing to the correct Go version.
We recommend using `1.16.0`, as previously stated.
```sh
./scripts/build_p2pd.sh
```
If everything goes correctly, the binary (`p2pd`) should be built and placed in the correct directory.
If you find any issues, please head into our discord and ask for our assistance.
`build_p2pd.sh` will not rebuild unless needed. If you already have the newest binary and you want to force the rebuild, use:
```sh
./scripts/build_p2pd.sh -f
```
Or:
```sh
./scripts/build_p2pd.sh --force
```
If everything goes correctly, the binary (`p2pd`) should be built and placed in the `$GOPATH/bin` directory.
If you're having issues, head into [our discord](https://discord.com/channels/864066763682218004/1115526869769535629) and ask for assistance.
After successfully building the binary, remember to add it to your path so it can be found. You can do that by running:
```sh
@@ -34,28 +37,7 @@ export PATH="$PATH:$HOME/go/bin"
```
> **Tip:** To make this change permanent, add the command above to your `.bashrc` file.
# Usage
## Example
# Examples
Examples can be found in the [examples folder](https://github.com/status-im/nim-libp2p/tree/readme/examples/go-daemon)
## Getting Started
Try out the chat example. Full code can be found [here](https://github.com/status-im/nim-libp2p/blob/master/examples/chat.nim):
```bash
nim c -r --threads:on examples/directchat.nim
```
This will output a peer ID such as `QmbmHfVvouKammmQDJck4hz33WvVktNEe7pasxz2HgseRu` which you can use in another instance to connect to it.
```bash
./examples/directchat
/connect QmbmHfVvouKammmQDJck4hz33WvVktNEe7pasxz2HgseRu
```
You can now chat between the instances!
![Chat example](https://imgur.com/caYRu8K.gif)


@@ -158,8 +158,8 @@ waitFor(main())
## This is John receiving & logging everyone's metrics.
##
## ## Going further
## Building efficient & safe GossipSub networks is a tricky subject. By tweaking the [gossip params](https://status-im.github.io/nim-libp2p/master/libp2p/protocols/pubsub/gossipsub/types.html#GossipSubParams)
## and [topic params](https://status-im.github.io/nim-libp2p/master/libp2p/protocols/pubsub/gossipsub/types.html#TopicParams),
## Building efficient & safe GossipSub networks is a tricky subject. By tweaking the [gossip params](https://vacp2p.github.io/nim-libp2p/master/libp2p/protocols/pubsub/gossipsub/types.html#GossipSubParams)
## and [topic params](https://vacp2p.github.io/nim-libp2p/master/libp2p/protocols/pubsub/gossipsub/types.html#TopicParams),
## you can achieve very different properties.
##
## Also see reports for [GossipSub v1.1](https://gateway.ipfs.io/ipfs/QmRAFP5DBnvNjdYSbWhEhVRJJDFCLpPyvew5GwCCB4VxM4)


@@ -0,0 +1,19 @@
# syntax=docker/dockerfile:1.5-labs
FROM nimlang/nim:1.6.16 as builder
WORKDIR /workspace
COPY .pinned libp2p.nimble nim-libp2p/
RUN --mount=type=cache,target=/var/cache/apt apt-get update && apt-get install -y libssl-dev
RUN cd nim-libp2p && nimble install_pinned && nimble install "redis@#b341fe240dbf11c544011dd0e033d3c3acca56af" -y
COPY . nim-libp2p/
RUN cd nim-libp2p && nim c --skipParentCfg --NimblePath:./nimbledeps/pkgs --mm:refc -d:chronicles_log_level=DEBUG -d:chronicles_default_output_device=stderr -d:release --threads:off --skipProjCfg -o:hole-punching-tests ./interop/hole-punching/hole_punching.nim
FROM --platform=linux/amd64 debian:bullseye-slim
RUN --mount=type=cache,target=/var/cache/apt apt-get update && apt-get install -y dnsutils jq curl tcpdump iproute2 libssl-dev
COPY --from=builder /workspace/nim-libp2p/hole-punching-tests /usr/bin/hole-punch-client
ENV RUST_BACKTRACE=1


@@ -0,0 +1,138 @@
import std/[os, options, strformat, sequtils]
import redis
import chronos, chronicles
import
../../libp2p/[
builders,
switch,
multicodec,
observedaddrmanager,
services/hpservice,
services/autorelayservice,
protocols/connectivity/autonat/client as aclient,
protocols/connectivity/relay/client as rclient,
protocols/connectivity/relay/relay,
protocols/connectivity/autonat/service,
protocols/ping,
]
import ../../tests/[stubs/autonatclientstub, errorhelpers]
logScope:
topics = "hp interop node"
proc createSwitch(r: Relay = nil, hpService: Service = nil): Switch =
let rng = newRng()
var builder = SwitchBuilder
.new()
.withRng(rng)
.withAddresses(@[MultiAddress.init("/ip4/0.0.0.0/tcp/0").tryGet()])
.withObservedAddrManager(ObservedAddrManager.new(maxSize = 1, minCount = 1))
.withTcpTransport({ServerFlags.TcpNoDelay})
.withYamux()
.withAutonat()
.withNoise()
if hpService != nil:
builder = builder.withServices(@[hpService])
if r != nil:
builder = builder.withCircuitRelay(r)
let s = builder.build()
s.mount(Ping.new(rng = rng))
return s
proc main() {.async.} =
let relayClient = RelayClient.new()
let autoRelayService = AutoRelayService.new(1, relayClient, nil, newRng())
let autonatClientStub = AutonatClientStub.new(expectedDials = 1)
autonatClientStub.answer = NotReachable
let autonatService = AutonatService.new(autonatClientStub, newRng(), maxQueueSize = 1)
let hpservice = HPService.new(autonatService, autoRelayService)
let
isListener = getEnv("MODE") == "listen"
switch = createSwitch(relayClient, hpservice)
auxSwitch = createSwitch()
redisClient = open("redis", 6379.Port)
debug "Connected to redis"
await switch.start()
await auxSwitch.start()
let relayAddr =
try:
redisClient.bLPop(@["RELAY_TCP_ADDRESS"], 0)
except Exception as e:
raise newException(CatchableError, e.msg)
debug "All relay addresses", relayAddr
# This is necessary to make the autonat service work. It will ask this peer for our reachability which the autonat
# client stub will answer NotReachable.
await switch.connect(auxSwitch.peerInfo.peerId, auxSwitch.peerInfo.addrs)
# Wait for autonat to be NotReachable
while autonatService.networkReachability != NetworkReachability.NotReachable:
await sleepAsync(100.milliseconds)
# This will trigger the autonat relay service to make a reservation.
let relayMA = MultiAddress.init(relayAddr[1]).tryGet()
try:
debug "Dialing relay...", relayMA
let relayId = await switch.connect(relayMA).wait(30.seconds)
debug "Connected to relay", relayId
except AsyncTimeoutError as e:
raise newException(CatchableError, "Connection to relay timed out: " & e.msg, e)
# Wait for our relay address to be published
while not switch.peerInfo.addrs.anyIt(it.contains(multiCodec("p2p-circuit")).tryGet()):
await sleepAsync(100.milliseconds)
if isListener:
let listenerPeerId = switch.peerInfo.peerId
discard redisClient.rPush("LISTEN_CLIENT_PEER_ID", $listenerPeerId)
debug "Pushed listener client peer id to redis", listenerPeerId
# Nothing to do anymore, wait to be killed
await sleepAsync(2.minutes)
else:
let listenerId =
try:
PeerId.init(redisClient.bLPop(@["LISTEN_CLIENT_PEER_ID"], 0)[1]).tryGet()
except Exception as e:
raise newException(CatchableError, "Exception init peer: " & e.msg, e)
debug "Got listener peer id", listenerId
let listenerRelayAddr = MultiAddress.init($relayMA & "/p2p-circuit").tryGet()
debug "Dialing listener relay address", listenerRelayAddr
await switch.connect(listenerId, @[listenerRelayAddr])
# wait for hole-punching to complete in the background
await sleepAsync(5000.milliseconds)
let conn = switch.connManager.selectMuxer(listenerId).connection
let channel = await switch.dial(listenerId, @[listenerRelayAddr], PingCodec)
let delay = await Ping.new().ping(channel)
await allFuturesThrowing(
channel.close(), conn.close(), switch.stop(), auxSwitch.stop()
)
echo &"""{{"rtt_to_holepunched_peer_millis":{delay.millis}}}"""
try:
proc mainAsync(): Future[string] {.async.} =
# mainAsync wraps main and returns some value, as otherwise
# 'waitFor(fut)' has no type (or is ambiguous)
await main()
return "done"
discard waitFor(mainAsync().wait(4.minutes))
except AsyncTimeoutError as e:
error "Program execution timed out", description = e.msg
quit(-1)
except CatchableError as e:
error "Unexpected error", description = e.msg
quit(-1)


@@ -3,11 +3,9 @@ FROM nimlang/nim:1.6.16 as builder
WORKDIR /app
COPY .pinned libp2p.nimble nim-libp2p/
COPY .pinned libp2p.nimble nim-libp2p/
RUN --mount=type=cache,target=/var/cache/apt apt-get update && apt-get install -y python python3 python3-pip python3-venv curl
RUN mkdir .venv && python3 -m venv .venv && . .venv/bin/activate
RUN --mount=type=cache,target=/var/cache/apt apt-get update && apt-get install -y libssl-dev
RUN cd nim-libp2p && nimble install_pinned && nimble install "redis@#b341fe240dbf11c544011dd0e033d3c3acca56af" -y
@@ -15,6 +13,6 @@ COPY . nim-libp2p/
RUN \
cd nim-libp2p && \
nim c --skipProjCfg --skipParentCfg --NimblePath:./nimbledeps/pkgs -p:nim-libp2p -d:chronicles_log_level=WARN -d:chronicles_default_output_device=stderr --threads:off ./tests/transport-interop/main.nim
nim c --skipProjCfg --skipParentCfg --NimblePath:./nimbledeps/pkgs -p:nim-libp2p --mm:refc -d:libp2p_quic_support -d:chronicles_log_level=WARN -d:chronicles_default_output_device=stderr --threads:off ./interop/transport/main.nim
ENTRYPOINT ["/app/nim-libp2p/tests/transport-interop/main"]
ENTRYPOINT ["/app/nim-libp2p/interop/transport/main"]


@@ -42,29 +42,26 @@ proc main() {.async.} =
discard switchBuilder.withTcpTransport().withAddress(
MultiAddress.init("/ip4/" & ip & "/tcp/0").tryGet()
)
of "ws":
discard switchBuilder
.withTransport(
proc(upgr: Upgrade): Transport =
WsTransport.new(upgr)
of "quic-v1":
discard switchBuilder.withQuicTransport().withAddress(
MultiAddress.init("/ip4/" & ip & "/udp/0/quic-v1").tryGet()
)
of "ws":
discard switchBuilder.withWsTransport().withAddress(
MultiAddress.init("/ip4/" & ip & "/tcp/0/ws").tryGet()
)
.withAddress(MultiAddress.init("/ip4/" & ip & "/tcp/0/ws").tryGet())
else:
doAssert false
case secureChannel
of "noise":
discard switchBuilder.withNoise()
else:
doAssert false
case muxer
of "yamux":
discard switchBuilder.withYamux()
of "mplex":
discard switchBuilder.withMplex()
else:
doAssert false
let
rng = newRng()
@@ -83,7 +80,7 @@ proc main() {.async.} =
try:
redisClient.bLPop(@["listenerAddr"], testTimeout.seconds.int)[1]
except Exception as e:
raise newException(CatchableError, e.msg)
raise newException(CatchableError, "Exception calling bLPop: " & e.msg, e)
let
remoteAddr = MultiAddress.init(listenerAddr).tryGet()
dialingStart = Moment.now()
@@ -99,7 +96,18 @@ proc main() {.async.} =
pingRTTMilllis: float(pingDelay.milliseconds),
)
)
quit(0)
discard waitFor(main().withTimeout(testTimeout))
quit(1)
try:
proc mainAsync(): Future[string] {.async.} =
# mainAsync wraps main and returns some value, as otherwise
# 'waitFor(fut)' has no type (or is ambiguous)
await main()
return "done"
discard waitFor(mainAsync().wait(testTimeout))
except AsyncTimeoutError as e:
error "Program execution timed out", description = e.msg
quit(-1)
except CatchableError as e:
error "Unexpected error", description = e.msg
quit(-1)


@@ -3,7 +3,8 @@
"containerImageID": "nim-libp2p-head",
"transports": [
"tcp",
"ws"
"ws",
"quic-v1"
],
"secureChannels": [
"noise"


@@ -17,7 +17,7 @@ when defined(nimdoc):
## stay backward compatible during the Major version, whereas private ones can
## change at each new Minor version.
##
## If you're new to nim-libp2p, you can find a tutorial `here<https://status-im.github.io/nim-libp2p/docs/tutorial_1_connect/>`_
## If you're new to nim-libp2p, you can find a tutorial `here<https://vacp2p.github.io/nim-libp2p/docs/tutorial_1_connect/>`_
## that can help you get started.
# Import stuff for doc
@@ -52,7 +52,6 @@ else:
stream/connection,
transports/transport,
transports/tcptransport,
transports/quictransport,
protocols/secure/noise,
cid,
multihash,
@@ -71,3 +70,7 @@ else:
minprotobuf, switch, peerid, peerinfo, connection, multiaddress, crypto, lpstream,
bufferstream, muxer, mplex, transport, tcptransport, noise, errors, cid, multihash,
multicodec, builders, pubsub
when defined(libp2p_quic_support):
import libp2p/transports/quictransport
export quictransport


@@ -1,7 +1,7 @@
mode = ScriptMode.Verbose
packageName = "libp2p"
version = "1.9.0"
version = "1.11.0"
author = "Status Research & Development GmbH"
description = "LibP2P implementation"
license = "MIT"
@@ -9,10 +9,9 @@ skipDirs = @["tests", "examples", "Nim", "tools", "scripts", "docs"]
requires "nim >= 1.6.0",
"nimcrypto >= 0.6.0 & < 0.7.0", "dnsclient >= 0.3.0 & < 0.4.0", "bearssl >= 0.2.5",
"chronicles >= 0.10.2", "chronos >= 4.0.3", "metrics", "secp256k1", "stew#head",
"websock", "unittest2",
"https://github.com/status-im/nim-quic.git#ddcb31ffb74b5460ab37fd13547eca90594248bc",
"https://github.com/status-im/nim-mbedtls.git"
"chronicles >= 0.10.3 & < 0.11.0", "chronos >= 4.0.4", "metrics", "secp256k1",
"stew >= 0.4.0", "websock >= 0.2.0", "unittest2", "results", "quic >= 0.2.7", "bio",
"https://github.com/vacp2p/nim-jwt.git#18f8378de52b241f321c1f9ea905456e89b95c6f"
let nimc = getEnv("NIMC", "nim") # Which nim compiler to use
let lang = getEnv("NIMLANG", "c") # Which backend (c/cpp/js)
@@ -26,16 +25,12 @@ let cfg =
import hashes, strutils
proc runTest(
filename: string, verify: bool = true, sign: bool = true, moreoptions: string = ""
) =
proc runTest(filename: string, moreoptions: string = "") =
var excstr = nimc & " " & lang & " -d:debug " & cfg & " " & flags
excstr.add(" -d:libp2p_pubsub_sign=" & $sign)
excstr.add(" -d:libp2p_pubsub_verify=" & $verify)
excstr.add(" " & moreoptions & " ")
if getEnv("CICOV").len > 0:
excstr &= " --nimcache:nimcache/" & filename & "-" & $excstr.hash
exec excstr & " -r " & " tests/" & filename
exec excstr & " -r -d:libp2p_quic_support tests/" & filename
rmFile "tests/" & filename.toExe
proc buildSample(filename: string, run = false, extraFlags = "") =
@@ -61,51 +56,18 @@ task testinterop, "Runs interop tests":
runTest("testinterop")
task testpubsub, "Runs pubsub tests":
runTest(
"pubsub/testgossipinternal",
sign = false,
verify = false,
moreoptions = "-d:pubsub_internal_testing",
)
runTest("pubsub/testpubsub")
runTest("pubsub/testpubsub", sign = false, verify = false)
runTest(
"pubsub/testpubsub",
sign = false,
verify = false,
moreoptions = "-d:libp2p_pubsub_anonymize=true",
)
task testpubsub_slim, "Runs pubsub tests":
runTest(
"pubsub/testgossipinternal",
sign = false,
verify = false,
moreoptions = "-d:pubsub_internal_testing",
)
runTest("pubsub/testpubsub")
task testfilter, "Run PKI filter test":
runTest("testpkifilter", moreoptions = "-d:libp2p_pki_schemes=\"secp256k1\"")
runTest("testpkifilter", moreoptions = "-d:libp2p_pki_schemes=\"secp256k1;ed25519\"")
runTest(
"testpkifilter", moreoptions = "-d:libp2p_pki_schemes=\"secp256k1;ed25519;ecnist\""
)
runTest("testpkifilter")
runTest("testpkifilter", moreoptions = "-d:libp2p_pki_schemes=")
task test, "Runs the test suite":
exec "nimble testnative"
exec "nimble testpubsub"
exec "nimble testdaemon"
exec "nimble testinterop"
exec "nimble testfilter"
exec "nimble examples_build"
task testintegration, "Runs integraion tests":
runTest("testintegration")
task test_slim, "Runs the (slimmed down) test suite":
exec "nimble testnative"
exec "nimble testpubsub_slim"
task test, "Runs the test suite":
runTest("testall")
exec "nimble testfilter"
exec "nimble examples_build"
task website, "Build the website":
tutorialToMd("examples/tutorial_1_connect.nim")
@@ -117,18 +79,12 @@ task website, "Build the website":
tutorialToMd("examples/circuitrelay.nim")
exec "mkdocs build"
task examples_build, "Build the samples":
buildSample("directchat")
buildSample("helloworld", true)
buildSample("circuitrelay", true)
buildSample("tutorial_1_connect", true)
buildSample("tutorial_2_customproto", true)
buildSample("tutorial_3_protobuf", true)
buildSample("tutorial_4_gossipsub", true)
buildSample("tutorial_5_discovery", true)
task examples, "Build and run examples":
exec "nimble install -y nimpng"
exec "nimble install -y nico --passNim=--skipParentCfg"
buildSample("tutorial_6_game", false, "--styleCheck:off")
buildSample("examples_build", false, "--styleCheck:off") # build only
buildSample("examples_run", true)
# pin system
# while nimble lockfile

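The slimmed-down nimble tasks above can be invoked directly; a minimal sketch, assuming dependencies are already installed (e.g. `nimble install -dy` or `nimble install_pinned`):

```bash
# Invoke the tasks defined in the updated libp2p.nimble.
nimble test             # runs tests/testall, then the PKI filter tests and example builds
nimble testintegration  # runs tests/testintegration
nimble examples         # installs nimpng/nico, then builds and runs the examples
```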
478
libp2p/autotls/acme/api.nim Normal file

@@ -0,0 +1,478 @@
import options, base64, sequtils, strutils, json
from times import DateTime, parse
import chronos/apps/http/httpclient, jwt, results, bearssl/pem
import ./utils
import ../../crypto/crypto
import ../../crypto/rsa
export ACMEError
const
LetsEncryptURL* = "https://acme-v02.api.letsencrypt.org"
LetsEncryptURLStaging* = "https://acme-staging-v02.api.letsencrypt.org"
Alg = "RS256"
DefaultChalCompletedRetries = 10
DefaultChalCompletedRetryTime = 1.seconds
DefaultFinalizeRetries = 10
DefaultFinalizeRetryTime = 1.seconds
DefaultRandStringSize = 256
ACMEHttpHeaders = [("Content-Type", "application/jose+json")]
type Nonce* = string
type Kid* = string
type ACMEDirectory* = object
newNonce*: string
newOrder*: string
newAccount*: string
type ACMEApi* = ref object of RootObj
directory: ACMEDirectory
session: HttpSessionRef
acmeServerURL*: string
type HTTPResponse* = object
body*: JsonNode
headers*: HttpTable
type JWK = object
kty: string
n: string
e: string
# whether the request uses Kid or not
type ACMERequestType = enum
ACMEJwkRequest
ACMEKidRequest
type ACMERequestHeader = object
alg: string
typ: string
nonce: string
url: string
case kind: ACMERequestType
of ACMEJwkRequest:
jwk: JWK
of ACMEKidRequest:
kid: Kid
type ACMERegisterRequest* = object
termsOfServiceAgreed: bool
contact: seq[string]
type ACMEAccountStatus = enum
valid
deactivated
revoked
type ACMERegisterResponseBody = object
status*: ACMEAccountStatus
type ACMERegisterResponse* = object
kid*: Kid
status*: ACMEAccountStatus
type ACMEChallengeStatus* {.pure.} = enum
pending = "pending"
processing = "processing"
valid = "valid"
invalid = "invalid"
type ACMEChallenge = object
url*: string
`type`*: string
status*: ACMEChallengeStatus
token*: string
type ACMEChallengeIdentifier = object
`type`: string
value: string
type ACMEChallengeRequest = object
identifiers: seq[ACMEChallengeIdentifier]
type ACMEChallengeResponseBody = object
status: ACMEChallengeStatus
authorizations: seq[string]
finalize: string
type ACMEChallengeResponse* = object
status*: ACMEChallengeStatus
authorizations*: seq[string]
finalize*: string
orderURL*: string
type ACMEChallengeResponseWrapper* = object
finalizeURL*: string
orderURL*: string
dns01*: ACMEChallenge
type ACMEAuthorizationsResponse* = object
challenges*: seq[ACMEChallenge]
type ACMECompletedResponse* = object
checkURL: string
type ACMEOrderStatus* {.pure.} = enum
pending = "pending"
ready = "ready"
processing = "processing"
valid = "valid"
invalid = "invalid"
type ACMECheckKind* = enum
ACMEOrderCheck
ACMEChallengeCheck
type ACMECheckResponse* = object
case kind: ACMECheckKind
of ACMEOrderCheck:
orderStatus: ACMEOrderStatus
of ACMEChallengeCheck:
chalStatus: ACMEChallengeStatus
retryAfter: Duration
type ACMEFinalizeResponse* = object
status: ACMEOrderStatus
type ACMEOrderResponse* = object
certificate: string
expires: string
type ACMECertificateResponse* = object
rawCertificate: string
certificateExpiry: DateTime
template handleError*(msg: string, body: untyped): untyped =
try:
body
except ACMEError as exc:
raise exc
except CancelledError as exc:
raise exc
except JsonKindError as exc:
raise newException(ACMEError, msg & ": Failed to decode JSON", exc)
except ValueError as exc:
raise newException(ACMEError, msg & ": Failed to decode JSON", exc)
except HttpError as exc:
raise newException(ACMEError, msg & ": Failed to connect to ACME server", exc)
except CatchableError as exc:
raise newException(ACMEError, msg & ": Unexpected error", exc)
method post*(
self: ACMEApi, url: string, payload: string
): Future[HTTPResponse] {.
async: (raises: [ACMEError, HttpError, CancelledError]), base
.}
method get*(
self: ACMEApi, url: string
): Future[HTTPResponse] {.
async: (raises: [ACMEError, HttpError, CancelledError]), base
.}
proc new*(
T: typedesc[ACMEApi], acmeServerURL: string = LetsEncryptURL
): Future[ACMEApi] {.async: (raises: [ACMEError, CancelledError]).} =
let session = HttpSessionRef.new()
let directory = handleError("new API"):
let rawResponse =
await HttpClientRequestRef.get(session, acmeServerURL & "/directory").get().send()
let body = await rawResponse.getResponseBody()
body.to(ACMEDirectory)
ACMEApi(session: session, directory: directory, acmeServerURL: acmeServerURL)
method requestNonce*(
self: ACMEApi
): Future[Nonce] {.async: (raises: [ACMEError, CancelledError]), base.} =
handleError("requestNonce"):
let acmeResponse = await self.get(self.directory.newNonce)
Nonce(acmeResponse.headers.keyOrError("Replay-Nonce"))
# TODO: save n and e in account so we don't have to recalculate every time
proc acmeHeader(
self: ACMEApi, url: string, key: KeyPair, needsJwk: bool, kid: Opt[Kid]
): Future[ACMERequestHeader] {.async: (raises: [ACMEError, CancelledError]).} =
if not needsJwk and kid.isNone:
raise newException(ACMEError, "kid not set")
if key.pubkey.scheme != PKScheme.RSA or key.seckey.scheme != PKScheme.RSA:
raise newException(ACMEError, "Unsupported signing key type")
let newNonce = await self.requestNonce()
if needsJwk:
let pubkey = key.pubkey.rsakey
let nArray = @(getArray(pubkey.buffer, pubkey.key.n, pubkey.key.nlen))
let eArray = @(getArray(pubkey.buffer, pubkey.key.e, pubkey.key.elen))
ACMERequestHeader(
kind: ACMEJwkRequest,
alg: Alg,
typ: "JWT",
nonce: newNonce,
url: url,
jwk: JWK(kty: "RSA", n: base64UrlEncode(nArray), e: base64UrlEncode(eArray)),
)
else:
ACMERequestHeader(
kind: ACMEKidRequest,
alg: Alg,
typ: "JWT",
nonce: newNonce,
url: url,
kid: kid.get(),
)
method post*(
self: ACMEApi, url: string, payload: string
): Future[HTTPResponse] {.
async: (raises: [ACMEError, HttpError, CancelledError]), base
.} =
let rawResponse = await HttpClientRequestRef
.post(self.session, url, body = payload, headers = ACMEHttpHeaders)
.get()
.send()
let body = await rawResponse.getResponseBody()
HTTPResponse(body: body, headers: rawResponse.headers)
method get*(
self: ACMEApi, url: string
): Future[HTTPResponse] {.
async: (raises: [ACMEError, HttpError, CancelledError]), base
.} =
let rawResponse = await HttpClientRequestRef.get(self.session, url).get().send()
let body = await rawResponse.getResponseBody()
HTTPResponse(body: body, headers: rawResponse.headers)
proc createSignedAcmeRequest(
self: ACMEApi,
url: string,
payload: auto,
key: KeyPair,
needsJwk: bool = false,
kid: Opt[Kid] = Opt.none(Kid),
): Future[string] {.async: (raises: [ACMEError, CancelledError]).} =
if key.pubkey.scheme != PKScheme.RSA or key.seckey.scheme != PKScheme.RSA:
raise newException(ACMEError, "Unsupported signing key type")
let acmeHeader = await self.acmeHeader(url, key, needsJwk, kid)
handleError("createSignedAcmeRequest"):
var token = toJWT(%*{"header": acmeHeader, "claims": payload})
let derPrivKey = key.seckey.rsakey.getBytes.get
let pemPrivKey: string = pemEncode(derPrivKey, "PRIVATE KEY")
token.sign(pemPrivKey)
$token.toFlattenedJson()
proc requestRegister*(
self: ACMEApi, key: KeyPair
): Future[ACMERegisterResponse] {.async: (raises: [ACMEError, CancelledError]).} =
let registerRequest = ACMERegisterRequest(termsOfServiceAgreed: true)
handleError("acmeRegister"):
let payload = await self.createSignedAcmeRequest(
self.directory.newAccount, registerRequest, key, needsJwk = true
)
let acmeResponse = await self.post(self.directory.newAccount, payload)
let acmeResponseBody = acmeResponse.body.to(ACMERegisterResponseBody)
ACMERegisterResponse(
status: acmeResponseBody.status, kid: acmeResponse.headers.keyOrError("location")
)
proc requestNewOrder*(
self: ACMEApi, domains: seq[string], key: KeyPair, kid: Kid
): Future[ACMEChallengeResponse] {.async: (raises: [ACMEError, CancelledError]).} =
# request a new certificate order (and its challenges) from the ACME server
let orderRequest = ACMEChallengeRequest(
identifiers: domains.mapIt(ACMEChallengeIdentifier(`type`: "dns", value: it))
)
handleError("requestNewOrder"):
let payload = await self.createSignedAcmeRequest(
self.directory.newOrder, orderRequest, key, kid = Opt.some(kid)
)
let acmeResponse = await self.post(self.directory.newOrder, payload)
let challengeResponseBody = acmeResponse.body.to(ACMEChallengeResponseBody)
if challengeResponseBody.authorizations.len() == 0:
raise newException(ACMEError, "Authorizations field is empty")
ACMEChallengeResponse(
status: challengeResponseBody.status,
authorizations: challengeResponseBody.authorizations,
finalize: challengeResponseBody.finalize,
orderURL: acmeResponse.headers.keyOrError("location"),
)
proc requestAuthorizations*(
self: ACMEApi, authorizations: seq[string], key: KeyPair, kid: Kid
): Future[ACMEAuthorizationsResponse] {.async: (raises: [ACMEError, CancelledError]).} =
handleError("requestAuthorizations"):
doAssert authorizations.len > 0
let acmeResponse = await self.get(authorizations[0])
acmeResponse.body.to(ACMEAuthorizationsResponse)
proc requestChallenge*(
self: ACMEApi, domains: seq[string], key: KeyPair, kid: Kid
): Future[ACMEChallengeResponseWrapper] {.async: (raises: [ACMEError, CancelledError]).} =
let challengeResponse = await self.requestNewOrder(domains, key, kid)
let authorizationsResponse =
await self.requestAuthorizations(challengeResponse.authorizations, key, kid)
return ACMEChallengeResponseWrapper(
finalizeURL: challengeResponse.finalize,
orderURL: challengeResponse.orderURL,
dns01: authorizationsResponse.challenges.filterIt(it.`type` == "dns-01")[0],
)
proc requestCheck*(
self: ACMEApi, checkURL: string, checkKind: ACMECheckKind, key: KeyPair, kid: Kid
): Future[ACMECheckResponse] {.async: (raises: [ACMEError, CancelledError]).} =
handleError("requestCheck"):
let acmeResponse = await self.get(checkURL)
let retryAfter =
try:
parseInt(acmeResponse.headers.keyOrError("Retry-After")).seconds
except ValueError:
DefaultChalCompletedRetryTime
case checkKind
of ACMEOrderCheck:
try:
ACMECheckResponse(
kind: checkKind,
orderStatus: parseEnum[ACMEOrderStatus](acmeResponse.body["status"].getStr),
retryAfter: retryAfter,
)
except ValueError:
raise newException(
ACMEError, "Invalid order status: " & acmeResponse.body["status"].getStr
)
of ACMEChallengeCheck:
try:
ACMECheckResponse(
kind: checkKind,
chalStatus: parseEnum[ACMEChallengeStatus](acmeResponse.body["status"].getStr),
retryAfter: retryAfter,
)
except ValueError:
raise newException(
ACMEError, "Invalid order status: " & acmeResponse.body["status"].getStr
)
proc requestCompleted*(
self: ACMEApi, chalURL: string, key: KeyPair, kid: Kid
): Future[ACMECompletedResponse] {.async: (raises: [ACMEError, CancelledError]).} =
handleError("requestCompleted (send notify)"):
let payload =
await self.createSignedAcmeRequest(chalURL, %*{}, key, kid = Opt.some(kid))
let acmeResponse = await self.post(chalURL, payload)
acmeResponse.body.to(ACMECompletedResponse)
proc checkChallengeCompleted*(
self: ACMEApi,
checkURL: string,
key: KeyPair,
kid: Kid,
retries: int = DefaultChalCompletedRetries,
): Future[bool] {.async: (raises: [ACMEError, CancelledError]).} =
for i in 0 .. retries:
let checkResponse = await self.requestCheck(checkURL, ACMEChallengeCheck, key, kid)
case checkResponse.chalStatus
of ACMEChallengeStatus.pending:
await sleepAsync(checkResponse.retryAfter) # try again after some delay
of ACMEChallengeStatus.valid:
return true
else:
raise newException(
ACMEError,
"Failed challenge completion: expected 'valid', got '" &
$checkResponse.chalStatus & "'",
)
return false
proc completeChallenge*(
self: ACMEApi,
chalURL: string,
key: KeyPair,
kid: Kid,
retries: int = DefaultChalCompletedRetries,
): Future[bool] {.async: (raises: [ACMEError, CancelledError]).} =
let completedResponse = await self.requestCompleted(chalURL, key, kid)
# poll the ACME server until it has finished validating the challenge
return await self.checkChallengeCompleted(chalURL, key, kid, retries = retries)
proc requestFinalize*(
self: ACMEApi, domain: string, finalizeURL: string, key: KeyPair, kid: Kid
): Future[ACMEFinalizeResponse] {.async: (raises: [ACMEError, CancelledError]).} =
let derCSR = createCSR(domain)
let b64CSR = base64.encode(derCSR.toSeq, safe = true)
handleError("requestFinalize"):
let payload = await self.createSignedAcmeRequest(
finalizeURL, %*{"csr": b64CSR}, key, kid = Opt.some(kid)
)
let acmeResponse = await self.post(finalizeURL, payload)
# server responds with updated order response
acmeResponse.body.to(ACMEFinalizeResponse)
proc checkCertFinalized*(
self: ACMEApi,
orderURL: string,
key: KeyPair,
kid: Kid,
retries: int = DefaultChalCompletedRetries,
): Future[bool] {.async: (raises: [ACMEError, CancelledError]).} =
for i in 0 .. retries:
let checkResponse = await self.requestCheck(orderURL, ACMEOrderCheck, key, kid)
case checkResponse.orderStatus
of ACMEOrderStatus.valid:
return true
of ACMEOrderStatus.processing:
await sleepAsync(checkResponse.retryAfter) # try again after some delay
else:
raise newException(
ACMEError,
"Failed certificate finalization: expected 'valid', got '" &
$checkResponse.orderStatus & "'",
)
return false
proc certificateFinalized*(
self: ACMEApi,
domain: string,
finalizeURL: string,
orderURL: string,
key: KeyPair,
kid: Kid,
retries: int = DefaultFinalizeRetries,
): Future[bool] {.async: (raises: [ACMEError, CancelledError]).} =
let finalizeResponse = await self.requestFinalize(domain, finalizeURL, key, kid)
# keep checking order until cert is valid (done)
return await self.checkCertFinalized(orderURL, key, kid, retries = retries)
proc requestGetOrder*(
self: ACMEApi, orderURL: string
): Future[ACMEOrderResponse] {.async: (raises: [ACMEError, CancelledError]).} =
handleError("requestGetOrder"):
let acmeResponse = await self.get(orderURL)
acmeResponse.body.to(ACMEOrderResponse)
proc downloadCertificate*(
self: ACMEApi, orderURL: string
): Future[ACMECertificateResponse] {.async: (raises: [ACMEError, CancelledError]).} =
let orderResponse = await self.requestGetOrder(orderURL)
handleError("downloadCertificate"):
let rawResponse = await HttpClientRequestRef
.get(self.session, orderResponse.certificate)
.get()
.send()
ACMECertificateResponse(
rawCertificate: bytesToString(await rawResponse.getBodyBytes()),
certificateExpiry: parse(orderResponse.expires, "yyyy-MM-dd'T'HH:mm:ss'Z'"),
)
proc close*(self: ACMEApi): Future[void] {.async: (raises: [CancelledError]).} =
await self.session.closeWait()
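# A minimal sketch of how the procs above might compose into a full issuance flow.
# The domain, the RSA key pair and the error handling are placeholders supplied by
# the caller, not part of this module; treat this as illustrative only.
proc issueCertificateSketch(key: KeyPair) {.async.} =
  let api = await ACMEApi.new()
  let account = await api.requestRegister(key)
  let challenge = await api.requestChallenge(@["example.org"], key, account.kid)
  # ... publish the dns-01 token (challenge.dns01.token) out of band ...
  if await api.completeChallenge(challenge.dns01.url, key, account.kid):
    if await api.certificateFinalized(
      "example.org", challenge.finalizeURL, challenge.orderURL, key, account.kid
    ):
      let cert = await api.downloadCertificate(challenge.orderURL)
      # cert holds the raw certificate bytes and the parsed expiry time
  await api.close()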

View File

@@ -0,0 +1,37 @@
import chronos, chronos/apps/http/httpclient, json
import ./api, ./utils
export api
type MockACMEApi* = ref object of ACMEApi
parent*: ACMEApi
mockedHeaders*: HttpTable
mockedBody*: JsonNode
proc new*(
T: typedesc[MockACMEApi]
): Future[MockACMEApi] {.async: (raises: [ACMEError, CancelledError]).} =
let directory = ACMEDirectory(
newNonce: LetsEncryptURL & "/new-nonce",
newOrder: LetsEncryptURL & "/new-order",
newAccount: LetsEncryptURL & "/new-account",
)
MockACMEApi(
session: HttpSessionRef.new(), directory: directory, acmeServerURL: LetsEncryptURL
)
method requestNonce*(
self: MockACMEApi
): Future[Nonce] {.async: (raises: [ACMEError, CancelledError]).} =
return self.acmeServerURL & "/acme/1234"
method post*(
self: MockACMEApi, url: string, payload: string
): Future[HTTPResponse] {.async: (raises: [ACMEError, HttpError, CancelledError]).} =
HTTPResponse(body: self.mockedBody, headers: self.mockedHeaders)
method get*(
self: MockACMEApi, url: string
): Future[HTTPResponse] {.async: (raises: [ACMEError, HttpError, CancelledError]).} =
HTTPResponse(body: self.mockedBody, headers: self.mockedHeaders)
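# Sketch of how the mock might drive a unit test: the header and body values below
# are made up, and the RSA key pair is assumed to be created by the test.
proc mockedRegisterSketch(key: KeyPair) {.async.} =
  let api = await MockACMEApi.new()
  api.mockedBody = %*{"status": "valid"}
  var headers = HttpTable.init()
  headers.add("location", "https://acme.example/acct/123")
  api.mockedHeaders = headers
  let response = await api.requestRegister(key)
  doAssert $response.status == "valid"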

View File

@@ -0,0 +1,48 @@
import base64, strutils, chronos/apps/http/httpclient, json
import ../../errors
import ../../transports/tls/certificate_ffi
type ACMEError* = object of LPError
proc keyOrError*(table: HttpTable, key: string): string {.raises: [ValueError].} =
if not table.contains(key):
raise newException(ValueError, "key " & key & " not present in headers")
table.getString(key)
proc base64UrlEncode*(data: seq[byte]): string =
## Encodes data using base64url (RFC 4648 §5) — no padding, URL-safe
var encoded = base64.encode(data, safe = true)
encoded.removeSuffix("=")
encoded.removeSuffix("=")
return encoded
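# For illustration: three input bytes encode without padding, so the two removeSuffix
# calls above are no-ops here; 1- or 2-byte tails would have their "=" / "==" padding
# stripped.
doAssert base64UrlEncode(@[1'u8, 2, 3]) == "AQID"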
proc getResponseBody*(
response: HttpClientResponseRef
): Future[JsonNode] {.async: (raises: [ACMEError, CancelledError]).} =
try:
let responseBody = bytesToString(await response.getBodyBytes()).parseJson()
return responseBody
except CancelledError as exc:
raise exc
except CatchableError as exc:
raise
newException(ACMEError, "Unexpected error occurred while getting body bytes", exc)
except Exception as exc: # this is required for nim 1.6
raise
newException(ACMEError, "Unexpected error occurred while getting body bytes", exc)
proc createCSR*(domain: string): string {.raises: [ACMEError].} =
var certKey: cert_key_t
var certCtx: cert_context_t
var derCSR: ptr cert_buffer = nil
let personalizationStr = "libp2p_autotls"
if cert_init_drbg(
personalizationStr.cstring, personalizationStr.len.csize_t, certCtx.addr
) != CERT_SUCCESS:
raise newException(ACMEError, "Failed to initialize certCtx")
if cert_generate_key(certCtx, certKey.addr) != CERT_SUCCESS:
raise newException(ACMEError, "Failed to generate cert key")
if cert_signing_req(domain.cstring, certKey, derCSR.addr) != CERT_SUCCESS:
raise newException(ACMEError, "Failed to create CSR")

View File

@@ -23,7 +23,7 @@ import
stream/connection,
multiaddress,
crypto/crypto,
transports/[transport, tcptransport],
transports/[transport, tcptransport, wstransport, memorytransport],
muxers/[muxer, mplex/mplex, yamux/yamux],
protocols/[identify, secure/secure, secure/noise, rendezvous],
protocols/connectivity/[autonat/server, relay/relay, relay/client, relay/rtransport],
@@ -35,10 +35,15 @@ import
utility
import services/wildcardresolverservice
export switch, peerid, peerinfo, connection, multiaddress, crypto, errors
export
switch, peerid, peerinfo, connection, multiaddress, crypto, errors, TLSPrivateKey,
TLSCertificate, TLSFlags, ServerFlags
const MemoryAutoAddress* = memorytransport.MemoryAutoAddress
type
TransportProvider* {.public.} = proc(upgr: Upgrade): Transport {.gcsafe, raises: [].}
TransportProvider* {.public.} =
proc(upgr: Upgrade, privateKey: PrivateKey): Transport {.gcsafe, raises: [].}
SecureProtocol* {.pure.} = enum
Noise
@@ -151,7 +156,7 @@ proc withTransport*(
let switch = SwitchBuilder
.new()
.withTransport(
proc(upgr: Upgrade): Transport =
proc(upgr: Upgrade, privateKey: PrivateKey): Transport =
TcpTransport.new(flags, upgr)
)
.build()
@@ -162,10 +167,37 @@ proc withTcpTransport*(
b: SwitchBuilder, flags: set[ServerFlags] = {}
): SwitchBuilder {.public.} =
b.withTransport(
proc(upgr: Upgrade): Transport =
proc(upgr: Upgrade, privateKey: PrivateKey): Transport =
TcpTransport.new(flags, upgr)
)
proc withWsTransport*(
b: SwitchBuilder,
tlsPrivateKey: TLSPrivateKey = nil,
tlsCertificate: TLSCertificate = nil,
tlsFlags: set[TLSFlags] = {},
flags: set[ServerFlags] = {},
): SwitchBuilder =
b.withTransport(
proc(upgr: Upgrade, privateKey: PrivateKey): Transport =
WsTransport.new(upgr, tlsPrivateKey, tlsCertificate, tlsFlags, flags)
)
when defined(libp2p_quic_support):
import transports/quictransport
proc withQuicTransport*(b: SwitchBuilder): SwitchBuilder {.public.} =
b.withTransport(
proc(upgr: Upgrade, privateKey: PrivateKey): Transport =
QuicTransport.new(upgr, privateKey)
)
proc withMemoryTransport*(b: SwitchBuilder): SwitchBuilder {.public.} =
b.withTransport(
proc(upgr: Upgrade, privateKey: PrivateKey): Transport =
MemoryTransport.new(upgr)
)
proc withRng*(b: SwitchBuilder, rng: ref HmacDrbgContext): SwitchBuilder {.public.} =
b.rng = rng
b
@@ -247,6 +279,10 @@ proc build*(b: SwitchBuilder): Switch {.raises: [LPError], public.} =
let pkRes = PrivateKey.random(b.rng[])
let seckey = b.privKey.get(otherwise = pkRes.expect("Expected default Private Key"))
if b.secureManagers.len == 0:
debug "no secure managers defined. Adding noise by default"
b.secureManagers.add(SecureProtocol.Noise)
var secureManagerInstances: seq[Secure]
if SecureProtocol.Noise in b.secureManagers:
secureManagerInstances.add(Noise.new(b.rng, seckey).Secure)
@@ -270,7 +306,7 @@ proc build*(b: SwitchBuilder): Switch {.raises: [LPError], public.} =
let transports = block:
var transports: seq[Transport]
for tProvider in b.transports:
transports.add(tProvider(muxedUpgrade))
transports.add(tProvider(muxedUpgrade, seckey))
transports
if b.secureManagers.len == 0:

View File

@@ -10,10 +10,11 @@
## This module implements CID (Content IDentifier).
{.push raises: [].}
{.used.}
import tables, hashes
import multibase, multicodec, multihash, vbuffer, varint
import stew/[base58, results]
import multibase, multicodec, multihash, vbuffer, varint, results
import stew/base58
export results
@@ -41,6 +42,7 @@ const ContentIdsList = [
multiCodec("dag-pb"),
multiCodec("dag-cbor"),
multiCodec("dag-json"),
multiCodec("libp2p-key"),
multiCodec("git-raw"),
multiCodec("eth-block"),
multiCodec("eth-block-list"),

View File

@@ -140,7 +140,7 @@ proc triggerConnEvent*(
except CancelledError as exc:
raise exc
except CatchableError as exc:
warn "Exception in triggerConnEvents",
warn "Exception in triggerConnEvent",
description = exc.msg, peer = peerId, event = $event
proc addPeerEventHandler*(
@@ -186,7 +186,7 @@ proc expectConnection*(
if key in c.expectedConnectionsOverLimit:
raise newException(
AlreadyExpectingConnectionError,
"Already expecting an incoming connection from that peer",
"Already expecting an incoming connection from that peer: " & shortLog(p),
)
let future = Future[Muxer].Raising([CancelledError]).init()

View File

@@ -76,7 +76,7 @@ import nimcrypto/[rijndael, twofish, sha2, hash, hmac]
# We use `ncrutils` for constant-time hexadecimal encoding/decoding procedures.
import nimcrypto/utils as ncrutils
import ../utility
import stew/results
import results
export results, utility
# This is workaround for Nim's `import` bug

View File

@@ -18,7 +18,7 @@
{.push raises: [].}
import bearssl/[ec, rand]
import stew/results
import results
from stew/assign2 import assign
export results

View File

@@ -21,7 +21,8 @@ import bearssl/[ec, rand, hash]
import nimcrypto/utils as ncrutils
import minasn1
export minasn1.Asn1Error
import stew/[results, ctops]
import stew/ctops
import results
import ../utility

View File

@@ -18,7 +18,8 @@ import constants
import nimcrypto/[hash, sha2]
# We use `ncrutils` for constant-time hexadecimal encoding/decoding procedures.
import nimcrypto/utils as ncrutils
import stew/[results, ctops]
import results
import stew/ctops
import ../../utility

View File

@@ -11,7 +11,8 @@
{.push raises: [].}
import stew/[endians2, results, ctops]
import stew/[endians2, ctops]
import results
export results
# We use `ncrutils` for constant-time hexadecimal encoding/decoding procedures.
import nimcrypto/utils as ncrutils
@@ -291,28 +292,6 @@ proc asn1EncodeBitString*(
dest[2 + lenlen + bytelen - 1] = lastbyte and mask
res
proc asn1EncodeTag[T: SomeUnsignedInt](dest: var openArray[byte], value: T): int =
var v = value
if value <= cast[T](0x7F):
if len(dest) >= 1:
dest[0] = cast[byte](value)
1
else:
var s = 0
var res = 0
while v != 0:
v = v shr 7
s += 7
inc(res)
if len(dest) >= res:
var k = 0
while s != 0:
s -= 7
dest[k] = cast[byte](((value shr s) and cast[T](0x7F)) or cast[T](0x80))
inc(k)
dest[k - 1] = dest[k - 1] and 0x7F'u8
res
proc asn1EncodeOid*(dest: var openArray[byte], value: openArray[byte]): int =
## Encode array of bytes ``value`` as ASN.1 DER `OBJECT IDENTIFIER` and return
## number of bytes (octets) used.
@@ -665,9 +644,6 @@ proc read*(ab: var Asn1Buffer): Asn1Result[Asn1Field] =
return ok(field)
else:
return err(Asn1Error.NoSupport)
inclass = false
ttag = 0
else:
return err(Asn1Error.NoSupport)

View File

@@ -17,7 +17,8 @@
import bearssl/[rsa, rand, hash]
import minasn1
import stew/[results, ctops]
import results
import stew/ctops
# We use `ncrutils` for constant-time hexadecimal encoding/decoding procedures.
import nimcrypto/utils as ncrutils

View File

@@ -10,7 +10,7 @@
{.push raises: [].}
import bearssl/rand
import secp256k1, stew/[byteutils, results], nimcrypto/[hash, sha2]
import secp256k1, results, stew/byteutils, nimcrypto/[hash, sha2]
export sha2, results, rand
@@ -85,8 +85,9 @@ proc init*(sig: var SkSignature, data: string): SkResult[void] =
var buffer: seq[byte]
try:
buffer = hexToSeqByte(data)
except ValueError:
return err("secp: Hex to bytes failed")
except ValueError as e:
let errMsg = "secp: Hex to bytes failed: " & e.msg
return err(errMsg.cstring)
init(sig, buffer)
proc init*(t: typedesc[SkPrivateKey], data: openArray[byte]): SkResult[SkPrivateKey] =

View File

@@ -595,13 +595,13 @@ template exceptionToAssert(body: untyped): untyped =
try:
res = body
except OSError as exc:
raise exc
raise newException(OSError, "failure in exceptionToAssert: " & exc.msg, exc)
except IOError as exc:
raise exc
raise newException(IOError, "failure in exceptionToAssert: " & exc.msg, exc)
except Defect as exc:
raise exc
raise newException(Defect, "failure in exceptionToAssert: " & exc.msg, exc)
except Exception as exc:
raiseAssert exc.msg
raiseAssert "Exception captured in exceptionToAssert: " & exc.msg
when defined(nimHasWarnBareExcept):
{.pop.}
res
@@ -967,9 +967,9 @@ proc openStream*(
stream.flags.incl(Outbound)
stream.transp = transp
result = stream
except ResultError[ProtoError]:
except ResultError[ProtoError] as e:
await api.closeConnection(transp)
raise newException(DaemonLocalError, "Wrong message type!")
raise newException(DaemonLocalError, "Wrong message type: " & e.msg, e)
proc streamHandler(server: StreamServer, transp: StreamTransport) {.async.} =
# must not specify raised exceptions as this is StreamCallback from chronos
@@ -1023,10 +1023,10 @@ proc addHandler*(
api.servers.add(P2PServer(server: server, address: maddress))
except DaemonLocalError as e:
await removeHandler()
raise e
raise newException(DaemonLocalError, "Could not add stream handler: " & e.msg, e)
except TransportError as e:
await removeHandler()
raise e
raise newException(TransportError, "Could not add stream handler: " & e.msg, e)
except CancelledError as e:
await removeHandler()
raise e
@@ -1503,10 +1503,14 @@ proc pubsubSubscribe*(
result = ticket
except DaemonLocalError as exc:
await api.closeConnection(transp)
raise exc
raise newException(
DaemonLocalError, "Could not subscribe to topic '" & topic & "': " & exc.msg, exc
)
except TransportError as exc:
await api.closeConnection(transp)
raise exc
raise newException(
TransportError, "Could not subscribe to topic '" & topic & "': " & exc.msg, exc
)
except CancelledError as exc:
await api.closeConnection(transp)
raise exc

View File

@@ -10,7 +10,7 @@
{.push raises: [].}
import chronos
import stew/results
import results
import peerid, stream/connection, transports/transport
export results
@@ -31,14 +31,14 @@ method connect*(
## a protocol
##
doAssert(false, "Not implemented!")
doAssert(false, "[Dial.connect] abstract method not implemented!")
method connect*(
self: Dial, address: MultiAddress, allowUnknownPeerId = false
): Future[PeerId] {.base, async: (raises: [DialFailedError, CancelledError]).} =
## Connects to a peer and retrieve its PeerId
doAssert(false, "Not implemented!")
doAssert(false, "[Dial.connect] abstract method not implemented!")
method dial*(
self: Dial, peerId: PeerId, protos: seq[string]
@@ -47,7 +47,7 @@ method dial*(
## existing connection
##
doAssert(false, "Not implemented!")
doAssert(false, "[Dial.dial] abstract method not implemented!")
method dial*(
self: Dial,
@@ -60,14 +60,14 @@ method dial*(
## a connection if one doesn't exist already
##
doAssert(false, "Not implemented!")
doAssert(false, "[Dial.dial] abstract method not implemented!")
method addTransport*(self: Dial, transport: Transport) {.base.} =
doAssert(false, "Not implemented!")
doAssert(false, "[Dial.addTransport] abstract method not implemented!")
method tryDial*(
self: Dial, peerId: PeerId, addrs: seq[MultiAddress]
): Future[Opt[MultiAddress]] {.
base, async: (raises: [DialFailedError, CancelledError])
.} =
doAssert(false, "Not implemented!")
doAssert(false, "[Dial.tryDial] abstract method not implemented!")

View File

@@ -9,8 +9,7 @@
import std/tables
import stew/results
import pkg/[chronos, chronicles, metrics]
import pkg/[chronos, chronicles, metrics, results]
import
dial,
@@ -125,9 +124,13 @@ proc expandDnsAddr(
for resolvedAddress in resolved:
let lastPart = resolvedAddress[^1].tryGet()
if lastPart.protoCode == Result[MultiCodec, string].ok(multiCodec("p2p")):
let
var peerIdBytes: seq[byte]
try:
peerIdBytes = lastPart.protoArgument().tryGet()
addrPeerId = PeerId.init(peerIdBytes).tryGet()
except ResultError[string] as e:
raiseAssert "expandDnsAddr failed in expandDnsAddr protoArgument: " & e.msg
let addrPeerId = PeerId.init(peerIdBytes).tryGet()
result.add((resolvedAddress[0 ..^ 2].tryGet(), Opt.some(addrPeerId)))
else:
result.add((resolvedAddress, peerId))
@@ -175,7 +178,7 @@ proc internalConnect(
dir = Direction.Out,
): Future[Muxer] {.async: (raises: [DialFailedError, CancelledError]).} =
if Opt.some(self.localPeerId) == peerId:
raise newException(DialFailedError, "can't dial self!")
raise newException(DialFailedError, "internalConnect can't dial self!")
# Ensure there's only one in-flight attempt per peer
let lock = self.dialLock.mgetOrPut(peerId.get(default(PeerId)), newAsyncLock())
@@ -183,8 +186,8 @@ proc internalConnect(
defer:
try:
lock.release()
except AsyncLockError:
raiseAssert "lock must have been acquired in line above"
except AsyncLockError as e:
raiseAssert "lock must have been acquired in line above: " & e.msg
if reuseConnection:
peerId.withValue(peerId):
@@ -195,7 +198,9 @@ proc internalConnect(
try:
self.connManager.getOutgoingSlot(forceDial)
except TooManyConnectionsError as exc:
raise newException(DialFailedError, exc.msg)
raise newException(
DialFailedError, "failed getOutgoingSlot in internalConnect: " & exc.msg, exc
)
let muxed =
try:
@@ -205,11 +210,15 @@ proc internalConnect(
raise exc
except CatchableError as exc:
slot.release()
raise newException(DialFailedError, exc.msg)
raise newException(
DialFailedError, "failed dialAndUpgrade in internalConnect: " & exc.msg, exc
)
slot.trackMuxer(muxed)
if isNil(muxed): # None of the addresses connected
raise newException(DialFailedError, "Unable to establish outgoing link")
raise newException(
DialFailedError, "Unable to establish outgoing link in internalConnect"
)
try:
self.connManager.storeMuxer(muxed)
@@ -225,7 +234,11 @@ proc internalConnect(
except CatchableError as exc:
trace "Failed to finish outgoing upgrade", description = exc.msg
await muxed.close()
raise newException(DialFailedError, "Failed to finish outgoing upgrade")
raise newException(
DialFailedError,
"Failed to finish outgoing upgrade in internalConnect: " & exc.msg,
exc,
)
method connect*(
self: Dialer,
@@ -257,7 +270,7 @@ method connect*(
if allowUnknownPeerId == false:
raise newException(
DialFailedError, "Address without PeerID and unknown peer id disabled!"
DialFailedError, "Address without PeerID and unknown peer id disabled in connect"
)
return
@@ -270,7 +283,7 @@ proc negotiateStream(
let selected = await MultistreamSelect.select(conn, protos)
if not protos.contains(selected):
await conn.closeWithEOF()
raise newException(DialFailedError, "Unable to select sub-protocol " & $protos)
raise newException(DialFailedError, "Unable to select sub-protocol: " & $protos)
return conn
@@ -286,13 +299,13 @@ method tryDial*(
try:
let mux = await self.dialAndUpgrade(Opt.some(peerId), addrs)
if mux.isNil():
raise newException(DialFailedError, "No valid multiaddress")
raise newException(DialFailedError, "No valid multiaddress in tryDial")
await mux.close()
return mux.connection.observedAddr
except CancelledError as exc:
raise exc
except CatchableError as exc:
raise newException(DialFailedError, exc.msg)
raise newException(DialFailedError, "tryDial failed: " & exc.msg, exc)
method dial*(
self: Dialer, peerId: PeerId, protos: seq[string]
@@ -306,14 +319,17 @@ method dial*(
try:
let stream = await self.connManager.getStream(peerId)
if stream.isNil:
raise newException(DialFailedError, "Couldn't get muxed stream")
raise newException(
DialFailedError,
"Couldn't get muxed stream in dial for peer_id: " & shortLog(peerId),
)
return await self.negotiateStream(stream, protos)
except CancelledError as exc:
trace "Dial canceled"
trace "Dial canceled", description = exc.msg
raise exc
except CatchableError as exc:
trace "Error dialing", description = exc.msg
raise newException(DialFailedError, exc.msg)
raise newException(DialFailedError, "failed dial existing: " & exc.msg)
method dial*(
self: Dialer,
@@ -344,17 +360,20 @@ method dial*(
stream = await self.connManager.getStream(conn)
if isNil(stream):
raise newException(DialFailedError, "Couldn't get muxed stream")
raise newException(
DialFailedError,
"Couldn't get muxed stream in new dial for remote_peer_id: " & shortLog(peerId),
)
return await self.negotiateStream(stream, protos)
except CancelledError as exc:
trace "Dial canceled", conn
trace "Dial canceled", conn, description = exc.msg
await cleanup()
raise exc
except CatchableError as exc:
debug "Error dialing", conn, description = exc.msg
await cleanup()
raise newException(DialFailedError, exc.msg)
raise newException(DialFailedError, "failed new dial: " & exc.msg, exc)
method addTransport*(self: Dialer, t: Transport) =
self.transports &= t

View File

@@ -10,7 +10,7 @@
{.push raises: [].}
import std/sequtils
import chronos, chronicles, stew/results
import chronos, chronicles, results
import ../errors
type
@@ -59,7 +59,7 @@ proc `{}`*[T](pa: PeerAttributes, t: typedesc[T]): Opt[T] =
proc `[]`*[T](pa: PeerAttributes, t: typedesc[T]): T {.raises: [KeyError].} =
pa{T}.valueOr:
raise newException(KeyError, "Attritute not found")
raise newException(KeyError, "Attribute not found")
proc match*(pa, candidate: PeerAttributes): bool =
for f in pa.attributes:
@@ -86,12 +86,12 @@ type
method request*(
self: DiscoveryInterface, pa: PeerAttributes
) {.base, async: (raises: [DiscoveryError, CancelledError]).} =
doAssert(false, "Not implemented!")
doAssert(false, "[DiscoveryInterface.request] abstract method not implemented!")
method advertise*(
self: DiscoveryInterface
) {.base, async: (raises: [CancelledError, AdvertiseError]).} =
doAssert(false, "Not implemented!")
doAssert(false, "[DiscoveryInterface.advertise] abstract method not implemented!")
type
DiscoveryQuery* = ref object
@@ -113,7 +113,7 @@ proc add*(dm: DiscoveryManager, di: DiscoveryInterface) =
try:
query.peers.putNoWait(pa)
except AsyncQueueFullError as exc:
debug "Cannot push discovered peer to queue"
debug "Cannot push discovered peer to queue", description = exc.msg
proc request*(dm: DiscoveryManager, pa: PeerAttributes): DiscoveryQuery =
var query = DiscoveryQuery(attr: pa, peers: newAsyncQueue[PeerAttributes]())

View File

@@ -26,14 +26,14 @@ proc `==`*(a, b: RdvNamespace): bool {.borrow.}
method request*(
self: RendezVousInterface, pa: PeerAttributes
) {.async: (raises: [DiscoveryError, CancelledError]).} =
var namespace = ""
var namespace = Opt.none(string)
for attr in pa:
if attr.ofType(RdvNamespace):
namespace = string attr.to(RdvNamespace)
namespace = Opt.some(string attr.to(RdvNamespace))
elif attr.ofType(DiscoveryService):
namespace = string attr.to(DiscoveryService)
namespace = Opt.some(string attr.to(DiscoveryService))
elif attr.ofType(PeerId):
namespace = $attr.to(PeerId)
namespace = Opt.some($attr.to(PeerId))
else:
# unhandled type
return
@@ -44,8 +44,8 @@ method request*(
for address in pr.addresses:
peer.add(address.address)
peer.add(DiscoveryService(namespace))
peer.add(RdvNamespace(namespace))
peer.add(DiscoveryService(namespace.get()))
peer.add(RdvNamespace(namespace.get()))
self.onPeerFound(peer)
await sleepAsync(self.timeToRequest)

View File

@@ -171,6 +171,18 @@ proc ip6zoneVB(vb: var VBuffer): bool =
## IPv6 validateBuffer() implementation.
pathValidateBufferNoSlash(vb)
proc memoryStB(s: string, vb: var VBuffer): bool =
## Memory stringToBuffer() implementation.
pathStringToBuffer(s, vb)
proc memoryBtS(vb: var VBuffer, s: var string): bool =
## Memory bufferToString() implementation.
pathBufferToString(vb, s)
proc memoryVB(vb: var VBuffer): bool =
## Memory validateBuffer() implementation.
pathValidateBuffer(vb)
proc portStB(s: string, vb: var VBuffer): bool =
## Port number stringToBuffer() implementation.
var port: array[2, byte]
@@ -355,6 +367,10 @@ const
)
TranscoderDNS* =
Transcoder(stringToBuffer: dnsStB, bufferToString: dnsBtS, validateBuffer: dnsVB)
TranscoderMemory* = Transcoder(
stringToBuffer: memoryStB, bufferToString: memoryBtS, validateBuffer: memoryVB
)
ProtocolsList = [
MAProtocol(mcodec: multiCodec("ip4"), kind: Fixed, size: 4, coder: TranscoderIP4),
MAProtocol(mcodec: multiCodec("tcp"), kind: Fixed, size: 2, coder: TranscoderPort),
@@ -393,6 +409,9 @@ const
MAProtocol(mcodec: multiCodec("p2p-websocket-star"), kind: Marker, size: 0),
MAProtocol(mcodec: multiCodec("p2p-webrtc-star"), kind: Marker, size: 0),
MAProtocol(mcodec: multiCodec("p2p-webrtc-direct"), kind: Marker, size: 0),
MAProtocol(
mcodec: multiCodec("memory"), kind: Path, size: 0, coder: TranscoderMemory
),
]
DNSANY* = mapEq("dns")
@@ -453,6 +472,8 @@ const
CircuitRelay* = mapEq("p2p-circuit")
Memory* = mapEq("memory")
proc initMultiAddressCodeTable(): Table[MultiCodec, MAProtocol] {.compileTime.} =
for item in ProtocolsList:
result[item.mcodec] = item
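# Hypothetical check of the new "memory" protocol: with the Path-style transcoder
# registered above, string addresses such as "/memory/addr-1" should round-trip
# through MultiAddress.init and `$`.
let memMa = MultiAddress.init("/memory/addr-1").tryGet()
doAssert $memMa == "/memory/addr-1"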

View File

@@ -16,7 +16,8 @@
{.push raises: [].}
import tables
import stew/[base32, base58, base64, results]
import results
import stew/[base32, base58, base64]
type
MultiBaseStatus* {.pure.} = enum

View File

@@ -10,10 +10,11 @@
## This module implements MultiCodec.
{.push raises: [].}
{.used.}
import tables, hashes
import vbuffer
import stew/results
import results
export results
## The list of officially supported codecs can be found here
@@ -396,6 +397,7 @@ const MultiCodecList = [
("onion3", 0x01BD),
("p2p-circuit", 0x0122),
("libp2p-peer-record", 0x0301),
("memory", 0x0309),
("dns", 0x35),
("dns4", 0x36),
("dns6", 0x37),
@@ -403,6 +405,7 @@ const MultiCodecList = [
# IPLD formats
("dag-pb", 0x70),
("dag-cbor", 0x71),
("libp2p-key", 0x72),
("dag-json", 0x129),
("git-raw", 0x78),
("eth-block", 0x90),

View File

@@ -22,12 +22,13 @@
## 2. MURMUR
{.push raises: [].}
{.used.}
import tables
import nimcrypto/[sha, sha2, keccak, blake2, hash, utils]
import varint, vbuffer, multicodec, multibase
import stew/base58
import stew/results
import results
export results
# This is workaround for Nim `import` bug.
export sha, sha2, keccak, blake2, hash, utils
@@ -566,7 +567,7 @@ proc init*(mhtype: typedesc[MultiHash], data: string): MhResult[MultiHash] {.inl
proc init58*(mhtype: typedesc[MultiHash], data: string): MultiHash {.inline.} =
## Create MultiHash from BASE58 encoded string representation ``data``.
if MultiHash.decode(Base58.decode(data), result) == -1:
raise newException(MultihashError, "Incorrect MultiHash binary format")
raise newException(MultihashError, "Incorrect MultiHash binary format in init58")
proc cmp(a: openArray[byte], b: openArray[byte]): bool {.inline.} =
if len(a) != len(b):

View File

@@ -87,7 +87,7 @@ proc open*(s: LPChannel) {.async: (raises: [CancelledError, LPStreamError]).} =
raise exc
except LPStreamError as exc:
await s.conn.close()
raise exc
raise newException(LPStreamError, "Opening LPChannel failed: " & exc.msg, exc)
method closed*(s: LPChannel): bool =
s.closedLocal

View File

@@ -52,7 +52,7 @@ method newStream*(
): Future[Connection] {.
base, async: (raises: [CancelledError, LPStreamError, MuxerError], raw: true)
.} =
raiseAssert("Not implemented!")
raiseAssert("[Muxer.newStream] abstract method not implemented!")
method close*(m: Muxer) {.base, async: (raises: []).} =
if m.connection != nil:
@@ -68,4 +68,4 @@ proc new*(
muxerProvider
method getStreams*(m: Muxer): seq[Connection] {.base, gcsafe.} =
raiseAssert("Not implemented!")
raiseAssert("[Muxer.getStreams] abstract method not implemented!")

View File

@@ -587,10 +587,12 @@ method handle*(m: Yamux) {.async: (raises: []).} =
let channel =
try:
m.channels[header.streamId]
except KeyError:
except KeyError as e:
raise newException(
YamuxError,
"Stream was cleaned up before handling data: " & $header.streamId,
"Stream was cleaned up before handling data: " & $header.streamId & " : " &
e.msg,
e,
)
if header.msgType == WindowUpdate:

View File

@@ -78,23 +78,23 @@ proc getDnsResponse(
try:
await receivedDataFuture.wait(5.seconds) #unix default
except AsyncTimeoutError:
raise newException(IOError, "DNS server timeout")
except AsyncTimeoutError as e:
raise newException(IOError, "DNS server timeout: " & e.msg, e)
let rawResponse = sock.getMessage()
try:
parseResponse(string.fromBytes(rawResponse))
except IOError as exc:
raise exc
raise newException(IOError, "Failed to parse DNS response: " & exc.msg, exc)
except OSError as exc:
raise exc
raise newException(OSError, "Failed to parse DNS response: " & exc.msg, exc)
except ValueError as exc:
raise exc
raise newException(ValueError, "Failed to parse DNS response: " & exc.msg, exc)
except Exception as exc:
# Nim 1.6: parseResponse can have a raises: [Exception, ..] annotation because of
# https://github.com/nim-lang/Nim/commit/035134de429b5d99c5607c5fae912762bebb6008
# it can't actually raise though
raiseAssert exc.msg
raiseAssert "Exception parsing DN response: " & exc.msg
finally:
await sock.closeWait()

View File

@@ -22,7 +22,7 @@ method resolveTxt*(
self: NameResolver, address: string
): Future[seq[string]] {.async: (raises: [CancelledError]), base.} =
## Get TXT record
raiseAssert "Not implemented!"
raiseAssert "[NameResolver.resolveTxt] abstract method not implemented!"
method resolveIp*(
self: NameResolver, address: string, port: Port, domain: Domain = Domain.AF_UNSPEC
@@ -30,7 +30,7 @@ method resolveIp*(
async: (raises: [CancelledError, TransportAddressError]), base
.} =
## Resolve the specified address
raiseAssert "Not implemented!"
raiseAssert "[NameResolver.resolveIp] abstract method not implemented!"
proc getHostname*(ma: MultiAddress): string =
let

View File

@@ -11,10 +11,12 @@
{.push raises: [].}
{.push public.}
{.used.}
import
std/[hashes, strutils],
stew/[base58, results],
stew/base58,
results,
chronicles,
nimcrypto/utils,
utility,

View File

@@ -0,0 +1,335 @@
# Nim-Libp2p
# Copyright (c) 2025 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
# * MIT license ([LICENSE-MIT](LICENSE-MIT))
# at your option.
# This file may not be copied, modified, or distributed except according to
# those terms.
{.push raises: [].}
import base64, json, strutils, uri, times
import chronos, chronos/apps/http/httpclient, results, chronicles, bio
import ../peerinfo, ../crypto/crypto, ../varint.nim
logScope:
topics = "libp2p peeridauth"
const
NimLibp2pUserAgent = "nim-libp2p"
PeerIDAuthPrefix* = "libp2p-PeerID"
ChallengeCharset = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"
ChallengeDefaultLen = 48
type PeerIDAuthClient* = ref object of RootObj
session: HttpSessionRef
rng: ref HmacDrbgContext
type PeerIDAuthError* = object of LPError
type PeerIDAuthResponse* = object
status*: int
headers*: HttpTable
body*: seq[byte]
type BearerToken* = object
token*: string
expires*: Opt[DateTime]
type PeerIDAuthOpaque* = string
type PeerIDAuthSignature* = string
type PeerIDAuthChallenge* = string
type PeerIDAuthAuthenticationResponse* = object
challengeClient*: PeerIDAuthChallenge
opaque*: PeerIDAuthOpaque
serverPubkey*: PublicKey
type PeerIDAuthAuthorizationResponse* = object
sig*: PeerIDAuthSignature
bearer*: BearerToken
response*: PeerIDAuthResponse
type SigParam = object
k: string
v: seq[byte]
proc new*(T: typedesc[PeerIDAuthClient], rng: ref HmacDrbgContext): PeerIDAuthClient =
PeerIDAuthClient(session: HttpSessionRef.new(), rng: rng)
proc sampleChar(
ctx: var HmacDrbgContext, choices: string
): char {.raises: [ValueError].} =
## Samples a random character from the input string using the DRBG context
if choices.len == 0:
raise newException(ValueError, "Cannot sample from an empty string")
var idx: uint32
ctx.generate(idx)
return choices[uint32(idx mod uint32(choices.len))]
proc randomChallenge(
rng: ref HmacDrbgContext, challengeLen: int = ChallengeDefaultLen
): PeerIDAuthChallenge {.raises: [PeerIDAuthError].} =
var rng = rng[]
var challenge = ""
try:
for _ in 0 ..< challengeLen:
challenge.add(rng.sampleChar(ChallengeCharset))
except ValueError as exc:
raise newException(PeerIDAuthError, "Failed to generate challenge", exc)
PeerIDAuthChallenge(challenge)
proc extractField(data, key: string): string {.raises: [PeerIDAuthError].} =
# Helper to extract the quoted value for `key` from a comma-separated header string
for segment in data.split(","):
if key in segment:
return segment.split("=", 1)[1].strip(chars = {' ', '"'})
raise newException(PeerIDAuthError, "Failed to find " & key & " in " & data)
proc genDataToSign(
parts: seq[SigParam], prefix: string = PeerIDAuthPrefix
): seq[byte] {.raises: [PeerIDAuthError].} =
var buf: seq[byte] = prefix.toByteSeq()
for p in parts:
let varintLen = PB.encodeVarint(hint(p.k.len + p.v.len + 1)).valueOr:
raise newException(PeerIDAuthError, "could not encode fields length to varint")
buf.add varintLen
buf.add (p.k & "=").toByteSeq()
buf.add p.v
return buf
proc getSigParams(
clientSender: bool, hostname: string, challenge: string, publicKey: PublicKey
): seq[SigParam] =
if clientSender:
@[
SigParam(k: "challenge-client", v: challenge.toByteSeq()),
SigParam(k: "hostname", v: hostname.toByteSeq()),
SigParam(k: "server-public-key", v: publicKey.getBytes().get()),
]
else:
@[
SigParam(k: "challenge-server", v: challenge.toByteSeq()),
SigParam(k: "client-public-key", v: publicKey.getBytes().get()),
SigParam(k: "hostname", v: hostname.toByteSeq()),
]
proc sign(
privateKey: PrivateKey,
challenge: PeerIDAuthChallenge,
publicKey: PublicKey,
hostname: string,
clientSender: bool = true,
): PeerIDAuthSignature {.raises: [PeerIDAuthError].} =
let bytesToSign =
getSigParams(clientSender, hostname, challenge, publicKey).genDataToSign()
PeerIDAuthSignature(
base64.encode(privateKey.sign(bytesToSign).get().getBytes(), safe = true)
)
proc checkSignature*(
serverSig: PeerIDAuthSignature,
serverPublicKey: PublicKey,
challengeServer: PeerIDAuthChallenge,
clientPublicKey: PublicKey,
hostname: string,
): bool {.raises: [PeerIDAuthError].} =
let bytesToSign =
getSigParams(false, hostname, challengeServer, clientPublicKey).genDataToSign()
var serverSignature: Signature
try:
if not serverSignature.init(base64.decode(serverSig).toByteSeq()):
raise newException(
PeerIDAuthError, "Failed to initialize Signature from base64 encoded sig"
)
except ValueError as exc:
raise newException(PeerIDAuthError, "Failed to decode server's signature", exc)
serverSignature.verify(
bytesToSign.toOpenArray(0, bytesToSign.len - 1), serverPublicKey
)
method post*(
self: PeerIDAuthClient, uri: string, payload: string, authHeader: string
): Future[PeerIDAuthResponse] {.async: (raises: [HttpError, CancelledError]), base.} =
let rawResponse = await HttpClientRequestRef
.post(
self.session,
uri,
body = payload,
headers = [
("Content-Type", "application/json"),
("User-Agent", NimLibp2pUserAgent),
("Authorization", authHeader),
],
)
.get()
.send()
PeerIDAuthResponse(
status: rawResponse.status,
headers: rawResponse.headers,
body: await rawResponse.getBodyBytes(),
)
method get*(
self: PeerIDAuthClient, uri: string
): Future[PeerIDAuthResponse] {.async: (raises: [HttpError, CancelledError]), base.} =
let rawResponse = await HttpClientRequestRef.get(self.session, $uri).get().send()
PeerIDAuthResponse(
status: rawResponse.status,
headers: rawResponse.headers,
body: await rawResponse.getBodyBytes(),
)
proc requestAuthentication*(
self: PeerIDAuthClient, uri: Uri
): Future[PeerIDAuthAuthenticationResponse] {.
async: (raises: [PeerIDAuthError, CancelledError])
.} =
let response =
try:
await self.get($uri)
except HttpError as exc:
raise newException(PeerIDAuthError, "Failed to start PeerID Auth", exc)
let wwwAuthenticate = response.headers.getString("WWW-Authenticate")
if wwwAuthenticate == "":
raise newException(PeerIDAuthError, "WWW-authenticate not present in response")
let serverPubkey: PublicKey =
try:
PublicKey.init(decode(extractField(wwwAuthenticate, "public-key")).toByteSeq()).valueOr:
raise newException(PeerIDAuthError, "Failed to initialize server public-key")
except ValueError as exc:
raise newException(PeerIDAuthError, "Failed to decode server public-key", exc)
PeerIDAuthAuthenticationResponse(
challengeClient: extractField(wwwAuthenticate, "challenge-client"),
opaque: extractField(wwwAuthenticate, "opaque"),
serverPubkey: serverPubkey,
)
proc pubkeyBytes*(pubkey: PublicKey): seq[byte] {.raises: [PeerIDAuthError].} =
try:
pubkey.getBytes().valueOr:
raise
newException(PeerIDAuthError, "Failed to get bytes from PeerInfo's publicKey")
except ValueError as exc:
raise newException(
PeerIDAuthError, "Failed to get bytes from PeerInfo's publicKey", exc
)
proc parse3339DateTime(
timeStr: string
): DateTime {.raises: [ValueError, TimeParseError].} =
let parts = timeStr.split('.')
let base = parse(parts[0], "yyyy-MM-dd'T'HH:mm:ss")
let millis = parseInt(parts[1].strip(chars = {'Z'}))
result = base + initDuration(milliseconds = millis)
proc requestAuthorization*(
self: PeerIDAuthClient,
peerInfo: PeerInfo,
uri: Uri,
challengeClient: PeerIDAuthChallenge,
challengeServer: PeerIDAuthChallenge,
serverPubkey: PublicKey,
opaque: PeerIDAuthOpaque,
payload: auto,
): Future[PeerIDAuthAuthorizationResponse] {.
async: (raises: [PeerIDAuthError, CancelledError])
.} =
let clientPubkeyB64 = peerInfo.publicKey.pubkeyBytes().encode(safe = true)
let sig = peerInfo.privateKey.sign(challengeClient, serverPubkey, uri.hostname)
let authHeader =
PeerIDAuthPrefix & " public-key=\"" & clientPubkeyB64 & "\"" & ", opaque=\"" & opaque &
"\"" & ", challenge-server=\"" & challengeServer & "\"" & ", sig=\"" & sig & "\""
let response =
try:
await self.post($uri, $payload, authHeader)
except HttpError as exc:
raise newException(
PeerIDAuthError, "Failed to send Authorization for PeerID Auth", exc
)
let authenticationInfo = response.headers.getString("authentication-info")
let bearerExpires =
try:
Opt.some(parse3339DateTime(extractField(authenticationInfo, "expires")))
except ValueError, PeerIDAuthError, TimeParseError:
Opt.none(DateTime)
PeerIDAuthAuthorizationResponse(
sig: PeerIDAuthSignature(extractField(authenticationInfo, "sig")),
bearer: BearerToken(
token: extractField(authenticationInfo, "bearer"), expires: bearerExpires
),
response: response,
)
proc sendWithoutBearer(
self: PeerIDAuthClient, uri: Uri, peerInfo: PeerInfo, payload: auto
): Future[(BearerToken, PeerIDAuthResponse)] {.
async: (raises: [PeerIDAuthError, CancelledError])
.} =
# Authenticate in three steps as per the PeerID Auth spec
# https://github.com/libp2p/specs/blob/master/http/peer-id-auth.md
let authenticationResponse = await self.requestAuthentication(uri)
let challengeServer = self.rng.randomChallenge()
let authorizationResponse = await self.requestAuthorization(
peerInfo, uri, authenticationResponse.challengeClient, challengeServer,
authenticationResponse.serverPubkey, authenticationResponse.opaque, payload,
)
if not checkSignature(
authorizationResponse.sig, authenticationResponse.serverPubkey, challengeServer,
peerInfo.publicKey, uri.hostname,
):
raise newException(PeerIDAuthError, "Failed to validate server's signature")
return (authorizationResponse.bearer, authorizationResponse.response)
proc sendWithBearer(
self: PeerIDAuthClient,
uri: Uri,
peerInfo: PeerInfo,
payload: auto,
bearer: BearerToken,
): Future[(BearerToken, PeerIDAuthResponse)] {.
async: (raises: [PeerIDAuthError, CancelledError])
.} =
if bearer.expires.isSome and DateTime(bearer.expires.get) <= now():
raise newException(PeerIDAuthError, "Bearer expired")
let authHeader = PeerIDAuthPrefix & " bearer=\"" & bearer.token & "\""
let response =
try:
await self.post($uri, $payload, authHeader)
except HttpError as exc:
raise newException(
PeerIDAuthError, "Failed to send request with bearer token for PeerID Auth", exc
)
return (bearer, response)
proc send*(
self: PeerIDAuthClient,
uri: Uri,
peerInfo: PeerInfo,
payload: auto,
bearer: BearerToken = BearerToken(),
): Future[(BearerToken, PeerIDAuthResponse)] {.
async: (raises: [PeerIDAuthError, CancelledError])
.} =
if bearer.token == "":
await self.sendWithoutBearer(uri, peerInfo, payload)
else:
await self.sendWithBearer(uri, peerInfo, payload, bearer)
proc close*(
self: PeerIDAuthClient
): Future[void] {.async: (raises: [CancelledError]).} =
await self.session.closeWait()
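# A minimal usage sketch, assuming a hypothetical server URL and payload and a
# PeerInfo prepared by the caller; not part of this module.
proc peerIdAuthSketch(peerInfo: PeerInfo) {.async.} =
  let client = PeerIDAuthClient.new(newRng())
  let uri = parseUri("https://auth.example.com/protected")
  # the first call performs the full handshake and returns a bearer token
  let (bearer, _) = await client.send(uri, peerInfo, %*{"hello": "world"})
  # later calls can reuse the bearer token until it expires
  discard await client.send(uri, peerInfo, %*{"hello": "again"}, bearer)
  await client.close()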

View File

@@ -0,0 +1,41 @@
# Nim-Libp2p
# Copyright (c) 2025 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
# * MIT license ([LICENSE-MIT](LICENSE-MIT))
# at your option.
# This file may not be copied, modified, or distributed except according to
# those terms.
{.push raises: [].}
import chronos, chronos/apps/http/httpclient
import ../crypto/crypto
import ./client
export client
type MockPeerIDAuthClient* = ref object of PeerIDAuthClient
mockedStatus*: int
mockedHeaders*: HttpTable
mockedBody*: seq[byte]
proc new*(
T: typedesc[MockPeerIDAuthClient], rng: ref HmacDrbgContext
): MockPeerIDAuthClient {.raises: [PeerIDAuthError].} =
MockPeerIDAuthClient(session: HttpSessionRef.new(), rng: rng)
method post*(
self: MockPeerIDAuthClient, uri: string, payload: string, authHeader: string
): Future[PeerIDAuthResponse] {.async: (raises: [HttpError, CancelledError]).} =
PeerIDAuthResponse(
status: self.mockedStatus, headers: self.mockedHeaders, body: self.mockedBody
)
method get*(
self: MockPeerIDAuthClient, uri: string
): Future[PeerIDAuthResponse] {.async: (raises: [HttpError, CancelledError]).} =
PeerIDAuthResponse(
status: self.mockedStatus, headers: self.mockedHeaders, body: self.mockedBody
)
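# Sketch of exercising the mock in a test; the status, header and body values are
# made up and carry no protocol meaning.
proc mockPeerIdAuthSketch() {.async.} =
  let mock = MockPeerIDAuthClient.new(newRng())
  mock.mockedStatus = 200
  mock.mockedHeaders = HttpTable.init()
  mock.mockedBody = newSeq[byte]()
  let response = await mock.get("https://auth.example.com")
  doAssert response.status == 200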

View File

@@ -11,7 +11,7 @@
{.push public.}
import std/sequtils
import pkg/[chronos, chronicles, stew/results]
import pkg/[chronos, chronicles, results]
import peerid, multiaddress, multicodec, crypto/crypto, routing_record, errors, utility
export peerid, multiaddress, crypto, routing_record, errors, results
@@ -101,8 +101,10 @@ proc new*(
let pubkey =
try:
key.getPublicKey().tryGet()
except CatchableError:
raise newException(PeerInfoError, "invalid private key")
except CatchableError as e:
raise newException(
PeerInfoError, "invalid private key creating PeerInfo: " & e.msg, e
)
let peerId = PeerId.init(key).tryGet()

View File

@@ -160,10 +160,10 @@ proc updatePeerInfo*(
peerStore[KeyBook][info.peerId] = pubkey
info.agentVersion.withValue(agentVersion):
peerStore[AgentBook][info.peerId] = agentVersion.string
peerStore[AgentBook][info.peerId] = agentVersion
info.protoVersion.withValue(protoVersion):
peerStore[ProtoVersionBook][info.peerId] = protoVersion.string
peerStore[ProtoVersionBook][info.peerId] = protoVersion
if info.protos.len > 0:
peerStore[ProtoBook][info.peerId] = info.protos

View File

@@ -11,7 +11,7 @@
{.push raises: [].}
import ../varint, ../utility, stew/[endians2, results]
import ../varint, ../utility, stew/endians2, results
export results, utility
{.push public.}

View File

@@ -9,7 +9,7 @@
{.push raises: [].}
import stew/results
import results
import chronos, chronicles
import ../../../switch, ../../../multiaddress, ../../../peerid
import core
@@ -87,7 +87,7 @@ method dialMe*(
except CancelledError as e:
raise e
except CatchableError as e:
raise newException(AutonatError, "read Dial response failed", e)
raise newException(AutonatError, "read Dial response failed: " & e.msg, e)
let response = getResponseOrRaise(AutonatMsg.decode(respBytes))
@@ -96,7 +96,7 @@ method dialMe*(
of ResponseStatus.Ok:
try:
response.ma.tryGet()
except:
except ResultError[void]:
raiseAssert("checked with if")
of ResponseStatus.DialError:
raise newException(

View File

@@ -9,8 +9,8 @@
{.push raises: [].}
import stew/[results, objects]
import chronos, chronicles
import stew/objects
import results, chronos, chronicles
import ../../../multiaddress, ../../../peerid, ../../../errors
import ../../../protobuf/minprotobuf

View File

@@ -10,7 +10,7 @@
{.push raises: [].}
import std/[sets, sequtils]
import stew/results
import results
import chronos, chronicles
import
../../protocol,

View File

@@ -11,7 +11,7 @@
import std/sequtils
import stew/results
import results
import chronos, chronicles
import core
@@ -107,7 +107,9 @@ proc startSync*(
description = err.msg
raise newException(
DcutrError,
"Unexpected error when Dcutr initiator tried to connect to the remote peer", err,
"Unexpected error when Dcutr initiator tried to connect to the remote peer: " &
err.msg,
err,
)
finally:
if stream != nil:

View File

@@ -10,8 +10,8 @@
{.push raises: [].}
import std/[sets, sequtils]
import stew/[results, objects]
import chronos, chronicles
import stew/objects
import results, chronos, chronicles
import core
import

View File

@@ -148,7 +148,7 @@ proc dialPeerV1*(
raise exc
except LPStreamError as exc:
trace "error writing hop request", description = exc.msg
raise newException(RelayV1DialError, "error writing hop request", exc)
raise newException(RelayV1DialError, "error writing hop request: " & exc.msg, exc)
let msgRcvFromRelayOpt =
try:
@@ -158,7 +158,8 @@ proc dialPeerV1*(
except LPStreamError as exc:
trace "error reading stop response", description = exc.msg
await sendStatus(conn, StatusV1.HopCantOpenDstStream)
raise newException(RelayV1DialError, "error reading stop response", exc)
raise
newException(RelayV1DialError, "error reading stop response: " & exc.msg, exc)
try:
let msgRcvFromRelay = msgRcvFromRelayOpt.valueOr:
@@ -173,10 +174,16 @@ proc dialPeerV1*(
)
except RelayV1DialError as exc:
await sendStatus(conn, StatusV1.HopCantOpenDstStream)
raise exc
raise newException(
RelayV1DialError,
"Hop can't open destination stream after sendStatus: " & exc.msg,
exc,
)
except ValueError as exc:
await sendStatus(conn, StatusV1.HopCantOpenDstStream)
raise newException(RelayV1DialError, exc.msg)
raise newException(
RelayV1DialError, "Exception reading msg in dialPeerV1: " & exc.msg, exc
)
result = conn
proc dialPeerV2*(
@@ -199,7 +206,8 @@ proc dialPeerV2*(
raise exc
except CatchableError as exc:
trace "error reading stop response", description = exc.msg
raise newException(RelayV2DialError, exc.msg)
raise
newException(RelayV2DialError, "Exception decoding HopMessage: " & exc.msg, exc)
if msgRcvFromRelay.msgType != HopMessageType.Status:
raise newException(RelayV2DialError, "Unexpected stop response")

View File

@@ -10,7 +10,8 @@
{.push raises: [].}
import macros
import stew/[objects, results]
import stew/objects
import results
import ../../../peerinfo, ../../../signed_envelope
import ../../../protobuf/minprotobuf

View File

@@ -76,7 +76,7 @@ proc dial*(
if not dstPeerId.init(($(sma[^1].tryGet())).split('/')[2]):
raise newException(RelayDialError, "Destination doesn't exist")
except RelayDialError as e:
raise e
raise newException(RelayDialError, "dial address not valid: " & e.msg, e)
except CatchableError:
raise newException(RelayDialError, "dial address not valid")
@@ -100,13 +100,13 @@ proc dial*(
raise e
except DialFailedError as e:
safeClose(rc)
raise newException(RelayDialError, "dial relay peer failed", e)
raise newException(RelayDialError, "dial relay peer failed: " & e.msg, e)
except RelayV1DialError as e:
safeClose(rc)
raise e
raise newException(RelayV1DialError, "dial relay v1 failed: " & e.msg, e)
except RelayV2DialError as e:
safeClose(rc)
raise e
raise newException(RelayV2DialError, "dial relay v2 failed: " & e.msg, e)
method dial*(
self: RelayTransport,
@@ -121,7 +121,8 @@ method dial*(
except CancelledError as e:
raise e
except CatchableError as e:
raise newException(transport.TransportDialError, e.msg, e)
raise
newException(transport.TransportDialError, "Caught error in dial: " & e.msg, e)
method handles*(self: RelayTransport, ma: MultiAddress): bool {.gcsafe.} =
try:

View File

@@ -69,8 +69,8 @@ proc bridge*(
while not connSrc.closed() and not connDst.closed():
try: # https://github.com/status-im/nim-chronos/issues/516
discard await race(futSrc, futDst)
except ValueError:
raiseAssert("Futures list is not empty")
except ValueError as e:
raiseAssert("Futures list is not empty: " & e.msg)
if futSrc.finished():
bufRead = await futSrc
if bufRead > 0:

View File

@@ -13,8 +13,7 @@
{.push raises: [].}
import std/[sequtils, options, strutils, sugar]
import stew/results
import chronos, chronicles
import results, chronos, chronicles
import
../protobuf/minprotobuf,
../peerinfo,

View File

@@ -0,0 +1,159 @@
import ../../protobuf/minprotobuf
import ../../varint
import ../../utility
import results
import ../../multiaddress
import stew/objects
import stew/assign2
import options
type
Record* {.public.} = object
key*: Option[seq[byte]]
value*: Option[seq[byte]]
timeReceived*: Option[string]
MessageType* = enum
putValue = 0
getValue = 1
addProvider = 2
getProviders = 3
findNode = 4
ping = 5 # Deprecated
ConnectionType* = enum
notConnected = 0
connected = 1
canConnect = 2 # Unused
cannotConnect = 3 # Unused
Peer* {.public.} = object
id*: seq[byte]
addrs*: seq[MultiAddress]
connection*: ConnectionType
Message* {.public.} = object
msgType*: MessageType
key*: Option[seq[byte]]
record*: Option[Record]
closerPeers*: seq[Peer]
providerPeers*: seq[Peer]
proc write*(pb: var ProtoBuffer, field: int, value: Record) {.raises: [].}
proc writeOpt*[T](pb: var ProtoBuffer, field: int, opt: Option[T]) {.raises: [].}
proc encode*(record: Record): ProtoBuffer {.raises: [].} =
var pb = initProtoBuffer()
pb.writeOpt(1, record.key)
pb.writeOpt(2, record.value)
pb.writeOpt(5, record.timeReceived)
pb.finish()
return pb
proc encode*(peer: Peer): ProtoBuffer {.raises: [].} =
var pb = initProtoBuffer()
pb.write(1, peer.id)
for address in peer.addrs:
pb.write(2, address.data.buffer)
pb.write(3, uint32(ord(peer.connection)))
pb.finish()
return pb
proc encode*(msg: Message): ProtoBuffer {.raises: [].} =
var pb = initProtoBuffer()
pb.write(1, uint32(ord(msg.msgType)))
pb.writeOpt(2, msg.key)
msg.record.withValue(record):
pb.writeOpt(3, msg.record)
for peer in msg.closerPeers:
pb.write(8, peer.encode())
for peer in msg.providerPeers:
pb.write(9, peer.encode())
pb.finish()
return pb
proc writeOpt*[T](pb: var ProtoBuffer, field: int, opt: Option[T]) {.raises: [].} =
opt.withValue(v):
pb.write(field, v)
proc write*(pb: var ProtoBuffer, field: int, value: Record) {.raises: [].} =
pb.write(field, value.encode())
proc getOptionField[T: ProtoScalar | string | seq[byte]](
pb: ProtoBuffer, field: int, output: var Option[T]
): ProtoResult[void] =
var f: T
if ?pb.getField(field, f):
assign(output, some(f))
ok()
proc decode*(T: type Record, pb: ProtoBuffer): ProtoResult[Option[T]] =
var r: Record
?pb.getOptionField(1, r.key)
?pb.getOptionField(2, r.value)
?pb.getOptionField(5, r.timeReceived)
return ok(some(r))
proc decode*(T: type Peer, pb: ProtoBuffer): ProtoResult[Option[T]] =
var
p: Peer
id: seq[byte]
?pb.getRequiredField(1, p.id)
discard ?pb.getRepeatedField(2, p.addrs)
var connVal: uint32
if ?pb.getField(3, connVal):
var connType: ConnectionType
if not checkedEnumAssign(connType, connVal):
return err(ProtoError.BadWireType)
p.connection = connType
return ok(some(p))
proc decode*(T: type Message, buf: seq[byte]): ProtoResult[Option[T]] =
var
m: Message
key: seq[byte]
recPb: seq[byte]
closerPbs: seq[seq[byte]]
providerPbs: seq[seq[byte]]
var pb = initProtoBuffer(buf)
var msgTypeVal: uint32
?pb.getRequiredField(1, msgTypeVal)
var msgType: MessageType
if not checkedEnumAssign(msgType, msgTypeVal):
return err(ProtoError.BadWireType)
m.msgType = msgType
?pb.getOptionField(2, m.key)
if ?pb.getField(3, recPb):
assign(m.record, ?Record.decode(initProtoBuffer(recPb)))
discard ?pb.getRepeatedField(8, closerPbs)
for ppb in closerPbs:
let peerOpt = ?Peer.decode(initProtoBuffer(ppb))
peerOpt.withValue(peer):
m.closerPeers.add(peer)
discard ?pb.getRepeatedField(9, providerPbs)
for ppb in providerPbs:
let peer = ?Peer.decode(initProtoBuffer(ppb))
peer.withValue(peer):
m.providerPeers.add(peer)
return ok(some(m))
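The new kad-dht wire types above expose a simple encode/decode pair. A minimal round-trip sketch, assuming the module can be imported from its place in the tree (the import path below is a guess) and that the snippet runs standalone:

import options
# import libp2p/protocols/kademlia/protobuf  # hypothetical path to the module above

let query = Message(
  msgType: MessageType.getValue,
  key: some(@[byte 1, 2, 3]),
  record: some(Record(value: some(@[byte 4, 5]))),
)
let wire = query.encode().buffer      # serialize to protobuf bytes
let parsed = Message.decode(wire)     # ProtoResult[Option[Message]]
doAssert parsed.isOk() and parsed.get().isSome()
doAssert parsed.get().get().msgType == MessageType.getValue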

View File

@@ -16,35 +16,68 @@ import ./core, ../../stream/connection
logScope:
topics = "libp2p perf"
type PerfClient* = ref object of RootObj
type Stats* = object
isFinal*: bool
uploadBytes*: uint
downloadBytes*: uint
duration*: Duration
type PerfClient* = ref object
stats: Stats
proc new*(T: typedesc[PerfClient]): T =
return T()
proc currentStats*(p: PerfClient): Stats =
return p.stats
proc perf*(
_: typedesc[PerfClient],
conn: Connection,
sizeToWrite: uint64 = 0,
sizeToRead: uint64 = 0,
p: PerfClient, conn: Connection, sizeToWrite: uint64 = 0, sizeToRead: uint64 = 0
): Future[Duration] {.public, async: (raises: [CancelledError, LPStreamError]).} =
var
size = sizeToWrite
buf: array[PerfSize, byte]
let start = Moment.now()
trace "starting performance benchmark", conn, sizeToWrite, sizeToRead
await conn.write(toSeq(toBytesBE(sizeToRead)))
while size > 0:
let toWrite = min(size, PerfSize)
await conn.write(buf[0 ..< toWrite])
size -= toWrite
p.stats = Stats()
await conn.close()
try:
var
size = sizeToWrite
buf: array[PerfSize, byte]
size = sizeToRead
let start = Moment.now()
while size > 0:
let toRead = min(size, PerfSize)
await conn.readExactly(addr buf[0], toRead.int)
size = size - toRead
await conn.write(toSeq(toBytesBE(sizeToRead)))
while size > 0:
let toWrite = min(size, PerfSize)
await conn.write(buf[0 ..< toWrite])
size -= toWrite.uint
let duration = Moment.now() - start
trace "finishing performance benchmark", duration
return duration
# update stats through a local copy to avoid a race condition
var statsCopy = p.stats
statsCopy.duration = Moment.now() - start
statsCopy.uploadBytes += toWrite.uint
p.stats = statsCopy
await conn.close()
size = sizeToRead
while size > 0:
let toRead = min(size, PerfSize)
await conn.readExactly(addr buf[0], toRead.int)
size = size - toRead.uint
# update stats through a local copy to avoid a race condition
var statsCopy = p.stats
statsCopy.duration = Moment.now() - start
statsCopy.downloadBytes += toRead.uint
p.stats = statsCopy
except CancelledError as e:
raise e
except LPStreamError as e:
raise e
finally:
p.stats.isFinal = true
trace "finishing performance benchmark", duration = p.stats.duration
return p.stats.duration
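With PerfClient now keeping running Stats, progress can be sampled while a benchmark is in flight. A hedged usage sketch, assuming `conn` is an already-negotiated perf stream and that this runs inside an async proc:

let client = PerfClient.new()
let run = client.perf(conn, sizeToWrite = 1_000_000'u64, sizeToRead = 1_000_000'u64)
while not run.finished():
  let s = client.currentStats()
  echo "up=", s.uploadBytes, " down=", s.downloadBytes, " elapsed=", s.duration
  await sleepAsync(200.milliseconds)
echo "total duration: ", await run   # the final Duration, with stats.isFinal set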

View File

@@ -9,7 +9,7 @@
{.push raises: [].}
import chronos, stew/results
import chronos, results
import ../stream/connection
export results
@@ -66,21 +66,6 @@ template `handler`*(p: LPProtocol, conn: Connection, proto: string): Future[void
func `handler=`*(p: LPProtocol, handler: LPProtoHandler) =
p.handlerImpl = handler
# Callbacks that are annotated with `{.async: (raises).}` explicitly
# document the types of errors that they may raise, but are not compatible
# with `LPProtoHandler` and need to use a custom `proc` type.
# They are internally wrapped into a `LPProtoHandler`, but still allow the
# compiler to check that their `{.async: (raises).}` annotation is correct.
# https://github.com/nim-lang/Nim/issues/23432
func `handler=`*[E](
p: LPProtocol,
handler: proc(conn: Connection, proto: string): InternalRaisesFuture[void, E],
) {.deprecated: "Use `LPProtoHandler` that explicitly specifies raised exceptions.".} =
proc wrap(conn: Connection, proto: string): Future[void] {.async.} =
await handler(conn, proto)
p.handlerImpl = wrap
proc new*(
T: type LPProtocol,
codecs: seq[string],
@@ -96,17 +81,3 @@ proc new*(
else:
maxIncomingStreams,
)
proc new*[E](
T: type LPProtocol,
codecs: seq[string],
handler: proc(conn: Connection, proto: string): InternalRaisesFuture[void, E],
maxIncomingStreams: Opt[int] | int = Opt.none(int),
): T {.
deprecated:
"Use `new` with `LPProtoHandler` that explicitly specifies raised exceptions."
.} =
proc wrap(conn: Connection, proto: string): Future[void] {.async.} =
await handler(conn, proto)
T.new(codec, wrap, maxIncomingStreams)

View File

@@ -185,14 +185,14 @@ method init*(f: FloodSub) =
try:
await f.handleConn(conn, proto)
except CancelledError as exc:
trace "Unexpected cancellation in floodsub handler", conn
trace "Unexpected cancellation in floodsub handler", conn, description = exc.msg
raise exc
f.handler = handler
f.codec = FloodSubCodec
method publish*(
f: FloodSub, topic: string, data: seq[byte]
f: FloodSub, topic: string, data: seq[byte], useCustomConn: bool = false
): Future[int] {.async: (raises: []).} =
# the base implementation always returns 0
discard await procCall PubSub(f).publish(topic, data)

View File

@@ -29,7 +29,7 @@ import
../../utility,
../../switch
import stew/results
import results
export results
import ./gossipsub/[types, scoring, behavior], ../../utils/heartbeat
@@ -218,7 +218,7 @@ method init*(g: GossipSub) =
try:
await g.handleConn(conn, proto)
except CancelledError as exc:
trace "Unexpected cancellation in gossipsub handler", conn
trace "Unexpected cancellation in gossipsub handler", conn, description = exc.msg
raise exc
g.handler = handler
@@ -702,24 +702,27 @@ method onTopicSubscription*(g: GossipSub, topic: string, subscribed: bool) =
# Send unsubscribe (in reverse order to sub/graft)
procCall PubSub(g).onTopicSubscription(topic, subscribed)
method publish*(
proc makePeersForPublishUsingCustomConn(
g: GossipSub, topic: string
): HashSet[PubSubPeer] =
assert g.customConnCallbacks.isSome,
"GossipSub misconfiguration: useCustomConn was true, but no customConnCallbacks provided"
trace "Selecting peers via custom connection callback"
return g.customConnCallbacks.get().customPeerSelectionCB(
g.gossipsub.getOrDefault(topic),
g.subscribedDirectPeers.getOrDefault(topic),
g.mesh.getOrDefault(topic),
g.fanout.getOrDefault(topic),
)
proc makePeersForPublishDefault(
g: GossipSub, topic: string, data: seq[byte]
): Future[int] {.async: (raises: []).} =
logScope:
topic
if topic.len <= 0: # data could be 0/empty
debug "Empty topic, skipping publish"
return 0
# the base implementation always returns 0
discard await procCall PubSub(g).publish(topic, data)
trace "Publishing message on topic", data = data.shortLog
): HashSet[PubSubPeer] =
var peers: HashSet[PubSubPeer]
# add always direct peers
# Always include direct peers
peers.incl(g.subscribedDirectPeers.getOrDefault(topic))
if topic in g.topics: # if we're subscribed use the mesh
@@ -769,6 +772,29 @@ method publish*(
# ultimately is not sent)
g.lastFanoutPubSub[topic] = Moment.fromNow(g.parameters.fanoutTTL)
return peers
method publish*(
g: GossipSub, topic: string, data: seq[byte], useCustomConn: bool = false
): Future[int] {.async: (raises: []).} =
logScope:
topic
if topic.len <= 0: # data could be 0/empty
debug "Empty topic, skipping publish"
return 0
# the base implementation always returns 0
discard await procCall PubSub(g).publish(topic, data)
trace "Publishing message on topic", data = data.shortLog
let peers =
if useCustomConn:
g.makePeersForPublishUsingCustomConn(topic)
else:
g.makePeersForPublishDefault(topic, data)
if peers.len == 0:
let topicPeers = g.gossipsub.getOrDefault(topic).toSeq()
debug "No peers for topic, skipping publish",
@@ -807,7 +833,12 @@ method publish*(
if g.parameters.sendIDontWantOnPublish and isLargeMessage(msg, msgId):
g.sendIDontWant(msg, msgId, peers)
g.broadcast(peers, RPCMsg(messages: @[msg]), isHighPriority = true)
g.broadcast(
peers,
RPCMsg(messages: @[msg]),
isHighPriority = true,
useCustomConn = useCustomConn,
)
if g.knownTopics.contains(topic):
libp2p_pubsub_messages_published.inc(peers.len.int64, labelValues = [topic])

View File

@@ -305,9 +305,9 @@ proc handleIHave*(
proc handleIDontWant*(g: GossipSub, peer: PubSubPeer, iDontWants: seq[ControlIWant]) =
for dontWant in iDontWants:
for messageId in dontWant.messageIDs:
if peer.iDontWants[^1].len > 1000:
if peer.iDontWants[0].len >= IDontWantMaxCount:
break
peer.iDontWants[^1].incl(g.salt(messageId))
peer.iDontWants[0].incl(g.salt(messageId))
proc handleIWant*(
g: GossipSub, peer: PubSubPeer, iwants: seq[ControlIWant]
@@ -457,8 +457,8 @@ proc rebalanceMesh*(g: GossipSub, topic: string, metrics: ptr MeshMetrics = nil)
prunes = toSeq(
try:
g.mesh[topic]
except KeyError:
raiseAssert "have peers"
except KeyError as e:
raiseAssert "have peers: " & e.msg
)
# avoid pruning peers we are currently grafting in this heartbeat
prunes.keepIf do(x: PubSubPeer) -> bool:
@@ -513,8 +513,8 @@ proc rebalanceMesh*(g: GossipSub, topic: string, metrics: ptr MeshMetrics = nil)
var peers = toSeq(
try:
g.mesh[topic]
except KeyError:
raiseAssert "have peers"
except KeyError as e:
raiseAssert "have peers: " & e.msg
)
# grafting so high score has priority
peers.sort(byScore, SortOrder.Descending)
@@ -538,8 +538,8 @@ proc rebalanceMesh*(g: GossipSub, topic: string, metrics: ptr MeshMetrics = nil)
it.peerId notin backingOff:
avail.add(it)
# by spec, grab only 2
if avail.len > 1:
# by spec, grab only up to MaxOpportunisticGraftPeers
if avail.len >= MaxOpportunisticGraftPeers:
break
for peer in avail:
@@ -690,7 +690,7 @@ proc getGossipPeers*(g: GossipSub): Table[PubSubPeer, ControlMessage] =
for peer in allPeers:
control.mgetOrPut(peer, ControlMessage()).ihave.add(ihave)
for msgId in ihave.messageIDs:
peer.sentIHaves[^1].incl(msgId)
peer.sentIHaves[0].incl(msgId)
libp2p_gossipsub_cache_window_size.set(cacheWindowSize.int64)
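The switch from `[^1]` to `[0]` above matters because the per-peer history deques are advanced with `addFirst`, so the freshest window sits at index 0. A minimal illustration of that deque semantics using only the standard library (not the actual PubSubPeer fields):

import std/[deques, sets]

var history = initDeque[HashSet[int]]()
history.addFirst(initHashSet[int]())   # heartbeat rolls a fresh window in at the front
history.addFirst(initHashSet[int]())
history[0].incl(42)                    # record into the current window
doAssert 42 in history[0]
doAssert history[^1].len == 0          # the oldest window is left untouched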

View File

@@ -50,6 +50,9 @@ const
# rust sigp: https://github.com/sigp/rust-libp2p/blob/f53d02bc873fef2bf52cd31e3d5ce366a41d8a8c/protocols/gossipsub/src/config.rs#L572
# go: https://github.com/libp2p/go-libp2p-pubsub/blob/08c17398fb11b2ab06ca141dddc8ec97272eb772/gossipsub.go#L155
IHaveMaxLength* = 5000
IDontWantMaxCount* = 1000
# maximum number of IDontWant messages in one slot of the history
MaxOpportunisticGraftPeers* = 2
type
TopicInfo* = object # gossip 1.1 related

View File

@@ -31,7 +31,7 @@ import
../../errors,
../../utility
import stew/results
import results
export results
export tables, sets
@@ -176,6 +176,7 @@ type
rng*: ref HmacDrbgContext
knownTopics*: HashSet[string]
customConnCallbacks*: Option[CustomConnectionCallbacks]
method unsubscribePeer*(p: PubSub, peerId: PeerId) {.base, gcsafe.} =
## handle peer disconnects
@@ -187,7 +188,11 @@ method unsubscribePeer*(p: PubSub, peerId: PeerId) {.base, gcsafe.} =
libp2p_pubsub_peers.set(p.peers.len.int64)
proc send*(
p: PubSub, peer: PubSubPeer, msg: RPCMsg, isHighPriority: bool
p: PubSub,
peer: PubSubPeer,
msg: RPCMsg,
isHighPriority: bool,
useCustomConn: bool = false,
) {.raises: [].} =
## This procedure attempts to send a `msg` (of type `RPCMsg`) to the specified remote peer in the PubSub network.
##
@@ -200,13 +205,14 @@ proc send*(
## priority messages have been sent.
trace "sending pubsub message to peer", peer, payload = shortLog(msg)
peer.send(msg, p.anonymize, isHighPriority)
peer.send(msg, p.anonymize, isHighPriority, useCustomConn)
proc broadcast*(
p: PubSub,
sendPeers: auto, # Iterable[PubSubPeer]
msg: RPCMsg,
isHighPriority: bool,
useCustomConn: bool = false,
) {.raises: [].} =
## This procedure attempts to send a `msg` (of type `RPCMsg`) to a specified group of peers in the PubSub network.
##
@@ -261,12 +267,12 @@ proc broadcast*(
if anyIt(sendPeers, it.hasObservers):
for peer in sendPeers:
p.send(peer, msg, isHighPriority)
p.send(peer, msg, isHighPriority, useCustomConn)
else:
# Fast path that only encodes message once
let encoded = encodeRpcMsg(msg, p.anonymize)
for peer in sendPeers:
asyncSpawn peer.sendEncoded(encoded, isHighPriority)
asyncSpawn peer.sendEncoded(encoded, isHighPriority, useCustomConn)
proc sendSubs*(
p: PubSub, peer: PubSubPeer, topics: openArray[string], subscribe: bool
@@ -373,8 +379,14 @@ method getOrCreatePeer*(
p.onPubSubPeerEvent(peer, event)
# create new pubsub peer
let pubSubPeer =
PubSubPeer.new(peerId, getConn, onEvent, protoNegotiated, p.maxMessageSize)
let pubSubPeer = PubSubPeer.new(
peerId,
getConn,
onEvent,
protoNegotiated,
p.maxMessageSize,
customConnCallbacks = p.customConnCallbacks,
)
debug "created new pubsub peer", peerId
p.peers[peerId] = pubSubPeer
@@ -558,7 +570,7 @@ proc subscribe*(p: PubSub, topic: string, handler: TopicHandler) {.public.} =
p.updateTopicMetrics(topic)
method publish*(
p: PubSub, topic: string, data: seq[byte]
p: PubSub, topic: string, data: seq[byte], useCustomConn: bool = false
): Future[int] {.base, async: (raises: []), public.} =
## publish to a ``topic``
##
@@ -589,7 +601,7 @@ method addValidator*(
method removeValidator*(
p: PubSub, topic: varargs[string], hook: ValidatorHandler
) {.base, public.} =
) {.base, public, gcsafe.} =
for t in topic:
p.validators.withValue(t, validators):
validators[].excl(hook)
@@ -648,6 +660,8 @@ proc init*[PubParams: object | bool](
maxMessageSize: int = 1024 * 1024,
rng: ref HmacDrbgContext = newRng(),
parameters: PubParams = false,
customConnCallbacks: Option[CustomConnectionCallbacks] =
none(CustomConnectionCallbacks),
): P {.raises: [InitializationError], public.} =
let pubsub =
when PubParams is bool:
@@ -663,6 +677,7 @@ proc init*[PubParams: object | bool](
maxMessageSize: maxMessageSize,
rng: rng,
topicsHigh: int.high,
customConnCallbacks: customConnCallbacks,
)
else:
P(
@@ -678,6 +693,7 @@ proc init*[PubParams: object | bool](
maxMessageSize: maxMessageSize,
rng: rng,
topicsHigh: int.high,
customConnCallbacks: customConnCallbacks,
)
proc peerEventHandler(

View File

@@ -9,8 +9,8 @@
{.push raises: [].}
import std/[sequtils, strutils, tables, hashes, options, sets, deques]
import stew/results
import std/[sequtils, tables, hashes, options, sets, deques]
import results
import chronos, chronicles, nimcrypto/sha2, metrics
import chronos/ratelimit
import
@@ -95,6 +95,21 @@ type
# Task for processing non-priority message queue.
sendNonPriorityTask: Future[void]
CustomConnCreationProc* = proc(
destAddr: Option[MultiAddress], destPeerId: PeerId, codec: string
): Connection {.gcsafe, raises: [].}
CustomPeerSelectionProc* = proc(
allPeers: HashSet[PubSubPeer],
directPeers: HashSet[PubSubPeer],
meshPeers: HashSet[PubSubPeer],
fanoutPeers: HashSet[PubSubPeer],
): HashSet[PubSubPeer] {.gcsafe, raises: [].}
CustomConnectionCallbacks* = object
customConnCreationCB*: CustomConnCreationProc
customPeerSelectionCB*: CustomPeerSelectionProc
PubSubPeer* = ref object of RootObj
getConn*: GetConn # callback to establish a new send connection
onEvent*: OnEvent # Connectivity updates for peer
@@ -123,6 +138,7 @@ type
maxNumElementsInNonPriorityQueue*: int
# The max number of elements allowed in the non-priority queue.
disconnected: bool
customConnCallbacks*: Option[CustomConnectionCallbacks]
RPCHandler* =
proc(peer: PubSubPeer, data: seq[byte]): Future[void] {.async: (raises: []).}
@@ -214,10 +230,10 @@ proc handle*(p: PubSubPeer, conn: Connection) {.async: (raises: []).} =
conn, peer = p, closed = conn.closed, description = exc.msg
finally:
await conn.close()
except CancelledError:
except CancelledError as e:
# This is a top-level procedure that runs as a separate task, so it
# does not need to propagate CancelledError.
trace "Unexpected cancellation in PubSubPeer.handle"
trace "Unexpected cancellation in PubSubPeer.handle", description = e.msg
finally:
debug "exiting pubsub read loop", conn, peer = p, closed = conn.closed
@@ -250,7 +266,7 @@ proc connectOnce(
await p.getConn().wait(5.seconds)
except AsyncTimeoutError as error:
trace "getConn timed out", description = error.msg
raise (ref LPError)(msg: "Cannot establish send connection")
raise (ref LPError)(msg: "Cannot establish send connection: " & error.msg)
# When the send channel goes up, subscriptions need to be sent to the
# remote peer - if we had multiple channels up and one goes down, all
@@ -356,21 +372,43 @@ proc sendMsgSlow(p: PubSubPeer, msg: seq[byte]) {.async: (raises: [CancelledErro
trace "sending encoded msg to peer", conn, encoded = shortLog(msg)
await sendMsgContinue(conn, conn.writeLp(msg))
proc sendMsg(p: PubSubPeer, msg: seq[byte]): Future[void] {.async: (raises: []).} =
if p.sendConn != nil and not p.sendConn.closed():
# Fast path that avoids copying msg (which happens for {.async.})
let conn = p.sendConn
proc sendMsg(
p: PubSubPeer, msg: seq[byte], useCustomConn: bool = false
): Future[void] {.async: (raises: []).} =
type ConnectionType = enum
ctCustom
ctSend
ctSlow
trace "sending encoded msg to peer", conn, encoded = shortLog(msg)
var slowPath = false
let (conn, connType) =
if useCustomConn and p.customConnCallbacks.isSome:
let address = p.address
(
p.customConnCallbacks.get().customConnCreationCB(address, p.peerId, p.codec),
ctCustom,
)
elif p.sendConn != nil and not p.sendConn.closed():
(p.sendConn, ctSend)
else:
slowPath = true
(nil, ctSlow)
if not slowPath:
trace "sending encoded msg to peer",
conntype = $connType, conn = conn, encoded = shortLog(msg)
let f = conn.writeLp(msg)
if not f.completed():
sendMsgContinue(conn, f)
else:
f
else:
trace "sending encoded msg to peer via slow path"
sendMsgSlow(p, msg)
proc sendEncoded*(p: PubSubPeer, msg: seq[byte], isHighPriority: bool): Future[void] =
proc sendEncoded*(
p: PubSubPeer, msg: seq[byte], isHighPriority: bool, useCustomConn: bool = false
): Future[void] =
## Asynchronously sends an encoded message to a specified `PubSubPeer`.
##
## Parameters:
@@ -399,7 +437,7 @@ proc sendEncoded*(p: PubSubPeer, msg: seq[byte], isHighPriority: bool): Future[v
maxSize = p.maxMessageSize, msgSize = msg.len
Future[void].completed()
elif isHighPriority or emptyQueues:
let f = p.sendMsg(msg)
let f = p.sendMsg(msg, useCustomConn)
if not f.finished:
p.rpcmessagequeue.sendPriorityQueue.addLast(f)
when defined(pubsubpeer_queue_metrics):
@@ -458,7 +496,11 @@ iterator splitRPCMsg(
trace "message too big to sent", peer, rpcMsg = shortLog(currentRPCMsg)
proc send*(
p: PubSubPeer, msg: RPCMsg, anonymize: bool, isHighPriority: bool
p: PubSubPeer,
msg: RPCMsg,
anonymize: bool,
isHighPriority: bool,
useCustomConn: bool = false,
) {.raises: [].} =
## Asynchronously sends an `RPCMsg` to a specified `PubSubPeer` with an option for anonymization.
##
@@ -489,11 +531,11 @@ proc send*(
if encoded.len > p.maxMessageSize and msg.messages.len > 1:
for encodedSplitMsg in splitRPCMsg(p, msg, p.maxMessageSize, anonymize):
asyncSpawn p.sendEncoded(encodedSplitMsg, isHighPriority)
asyncSpawn p.sendEncoded(encodedSplitMsg, isHighPriority, useCustomConn)
else:
# If the message size is within limits, send it as is
trace "sending msg to peer", peer = p, rpcMsg = shortLog(msg)
asyncSpawn p.sendEncoded(encoded, isHighPriority)
asyncSpawn p.sendEncoded(encoded, isHighPriority, useCustomConn)
proc canAskIWant*(p: PubSubPeer, msgId: MessageId): bool =
for sentIHave in p.sentIHaves.mitems():
@@ -552,6 +594,8 @@ proc new*(
maxMessageSize: int,
maxNumElementsInNonPriorityQueue: int = DefaultMaxNumElementsInNonPriorityQueue,
overheadRateLimitOpt: Opt[TokenBucket] = Opt.none(TokenBucket),
customConnCallbacks: Option[CustomConnectionCallbacks] =
none(CustomConnectionCallbacks),
): T =
result = T(
getConn: getConn,
@@ -563,6 +607,7 @@ proc new*(
overheadRateLimitOpt: overheadRateLimitOpt,
rpcmessagequeue: RpcMessageQueue.new(),
maxNumElementsInNonPriorityQueue: maxNumElementsInNonPriorityQueue,
customConnCallbacks: customConnCallbacks,
)
result.sentIHaves.addFirst(default(HashSet[MessageId]))
result.iDontWants.addFirst(default(HashSet[SaltedId]))
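A hedged sketch of how the custom-connection hooks defined above could be wired up; the callback and parameter names come from this diff, while `mixnetDial` and the selection policy are hypothetical placeholders:

proc customDial(
    destAddr: Option[MultiAddress], destPeerId: PeerId, codec: string
): Connection {.gcsafe, raises: [].} =
  mixnetDial(destAddr, destPeerId, codec) # hypothetical: route through a mix network

proc customSelect(
    allPeers, directPeers, meshPeers, fanoutPeers: HashSet[PubSubPeer]
): HashSet[PubSubPeer] {.gcsafe, raises: [].} =
  meshPeers + directPeers # e.g. only publish to mesh and direct peers

let callbacks = CustomConnectionCallbacks(
  customConnCreationCB: customDial, customPeerSelectionCB: customSelect
)
# Passed to GossipSub.init(..., customConnCallbacks = some(callbacks)) and
# activated per message with publish(topic, data, useCustomConn = true).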

View File

@@ -10,7 +10,7 @@
{.push raises: [].}
import std/[hashes, sets]
import chronos/timer, stew/results
import chronos/timer, results
import ../../utility

View File

@@ -11,7 +11,7 @@
import tables, sequtils, sugar, sets
import metrics except collect
import chronos, chronicles, bearssl/rand, stew/[byteutils, objects, results]
import chronos, chronicles, bearssl/rand, stew/[byteutils, objects]
import
./protocol,
../protobuf/minprotobuf,
@@ -37,6 +37,9 @@ const
RendezVousCodec* = "/rendezvous/1.0.0"
MinimumDuration* = 2.hours
MaximumDuration = 72.hours
MaximumMessageLen = 1 shl 22 # 4MB
MinimumNamespaceLen = 1
MaximumNamespaceLen = 255
RegistrationLimitPerPeer = 1000
DiscoverLimit = 1000'u64
SemaphoreDefaultSize = 5
@@ -61,7 +64,7 @@ type
Cookie = object
offset: uint64
ns: string
ns: Opt[string]
Register = object
ns: string
@@ -77,7 +80,7 @@ type
ns: string
Discover = object
ns: string
ns: Opt[string]
limit: Opt[uint64]
cookie: Opt[seq[byte]]
@@ -98,7 +101,8 @@ type
proc encode(c: Cookie): ProtoBuffer =
result = initProtoBuffer()
result.write(1, c.offset)
result.write(2, c.ns)
if c.ns.isSome():
result.write(2, c.ns.get())
result.finish()
proc encode(r: Register): ProtoBuffer =
@@ -125,7 +129,8 @@ proc encode(u: Unregister): ProtoBuffer =
proc encode(d: Discover): ProtoBuffer =
result = initProtoBuffer()
result.write(1, d.ns)
if d.ns.isSome():
result.write(1, d.ns.get())
d.limit.withValue(limit):
result.write(2, limit)
d.cookie.withValue(cookie):
@@ -159,13 +164,17 @@ proc encode(msg: Message): ProtoBuffer =
result.finish()
proc decode(_: typedesc[Cookie], buf: seq[byte]): Opt[Cookie] =
var c: Cookie
var
c: Cookie
ns: string
let
pb = initProtoBuffer(buf)
r1 = pb.getRequiredField(1, c.offset)
r2 = pb.getRequiredField(2, c.ns)
r2 = pb.getField(2, ns)
if r1.isErr() or r2.isErr():
return Opt.none(Cookie)
if r2.get(false):
c.ns = Opt.some(ns)
Opt.some(c)
proc decode(_: typedesc[Register], buf: seq[byte]): Opt[Register] =
@@ -217,13 +226,16 @@ proc decode(_: typedesc[Discover], buf: seq[byte]): Opt[Discover] =
d: Discover
limit: uint64
cookie: seq[byte]
ns: string
let
pb = initProtoBuffer(buf)
r1 = pb.getRequiredField(1, d.ns)
r1 = pb.getField(1, ns)
r2 = pb.getField(2, limit)
r3 = pb.getField(3, cookie)
if r1.isErr() or r2.isErr() or r3.isErr:
return Opt.none(Discover)
if r1.get(false):
d.ns = Opt.some(ns)
if r2.get(false):
d.limit = Opt.some(limit)
if r3.get(false):
@@ -407,16 +419,16 @@ proc save(
)
rdv.namespaces[nsSalted].add(rdv.registered.high)
# rdv.registerEvent.fire()
except KeyError:
doAssert false, "Should have key"
except KeyError as e:
doAssert false, "Should have key: " & e.msg
proc register(rdv: RendezVous, conn: Connection, r: Register): Future[void] =
trace "Received Register", peerId = conn.peerId, ns = r.ns
libp2p_rendezvous_register.inc()
if r.ns.len notin 1 .. 255:
if r.ns.len < MinimumNamespaceLen or r.ns.len > MaximumNamespaceLen:
return conn.sendRegisterResponseError(InvalidNamespace)
let ttl = r.ttl.get(rdv.minTTL)
if ttl notin rdv.minTTL .. rdv.maxTTL:
if ttl < rdv.minTTL or ttl > rdv.maxTTL:
return conn.sendRegisterResponseError(InvalidTTL)
let pr = checkPeerRecord(r.signedPeerRecord, conn.peerId)
if pr.isErr():
@@ -444,7 +456,7 @@ proc discover(
) {.async: (raises: [CancelledError, LPStreamError]).} =
trace "Received Discover", peerId = conn.peerId, ns = d.ns
libp2p_rendezvous_discover.inc()
if d.ns.len notin 0 .. 255:
if d.ns.isSome() and d.ns.get().len > MaximumNamespaceLen:
await conn.sendDiscoverResponseError(InvalidNamespace)
return
var limit = min(DiscoverLimit, d.limit.get(DiscoverLimit))
@@ -457,20 +469,19 @@ proc discover(
return
else:
Cookie(offset: rdv.registered.low().uint64 - 1)
if cookie.ns != d.ns or
cookie.offset notin rdv.registered.low().uint64 .. rdv.registered.high().uint64:
if d.ns.isSome() and cookie.ns.isSome() and cookie.ns.get() != d.ns.get() or
cookie.offset < rdv.registered.low().uint64 or
cookie.offset > rdv.registered.high().uint64:
cookie = Cookie(offset: rdv.registered.low().uint64 - 1)
let
nsSalted = d.ns & rdv.salt
namespaces =
if d.ns != "":
try:
rdv.namespaces[nsSalted]
except KeyError:
await conn.sendDiscoverResponseError(InvalidNamespace)
return
else:
toSeq(cookie.offset.int .. rdv.registered.high())
let namespaces =
if d.ns.isSome():
try:
rdv.namespaces[d.ns.get() & rdv.salt]
except KeyError:
await conn.sendDiscoverResponseError(InvalidNamespace)
return
else:
toSeq(max(cookie.offset.int, rdv.registered.offset) .. rdv.registered.high())
if namespaces.len() == 0:
await conn.sendDiscoverResponse(@[], Cookie())
return
@@ -514,15 +525,15 @@ proc advertisePeer(
rdv.sema.release()
await rdv.sema.acquire()
discard await advertiseWrap().withTimeout(5.seconds)
await advertiseWrap()
proc advertise*(
rdv: RendezVous, ns: string, ttl: Duration, peers: seq[PeerId]
) {.async: (raises: [CancelledError, AdvertiseError]).} =
if ns.len notin 1 .. 255:
if ns.len < MinimumNamespaceLen or ns.len > MaximumNamespaceLen:
raise newException(AdvertiseError, "Invalid namespace")
if ttl notin rdv.minDuration .. rdv.maxDuration:
if ttl < rdv.minDuration or ttl > rdv.maxDuration:
raise newException(AdvertiseError, "Invalid time to live: " & $ttl)
let sprBuff = rdv.switch.peerInfo.signedPeerRecord.encode().valueOr:
@@ -537,7 +548,7 @@ proc advertise*(
let futs = collect(newSeq()):
for peer in peers:
trace "Send Advertise", peerId = peer, ns
rdv.advertisePeer(peer, msg.buffer)
rdv.advertisePeer(peer, msg.buffer).withTimeout(5.seconds)
await allFutures(futs)
@@ -561,7 +572,7 @@ proc requestLocally*(rdv: RendezVous, ns: string): seq[PeerRecord] =
@[]
proc request*(
rdv: RendezVous, ns: string, l: int = DiscoverLimit.int, peers: seq[PeerId]
rdv: RendezVous, ns: Opt[string], l: int = DiscoverLimit.int, peers: seq[PeerId]
): Future[seq[PeerRecord]] {.async: (raises: [DiscoveryError, CancelledError]).} =
var
s: Table[PeerId, (PeerRecord, Register)]
@@ -570,7 +581,7 @@ proc request*(
if l <= 0 or l > DiscoverLimit.int:
raise newException(AdvertiseError, "Invalid limit")
if ns.len notin 0 .. 255:
if ns.isSome() and ns.get().len > MaximumNamespaceLen:
raise newException(AdvertiseError, "Invalid namespace")
limit = l.uint64
@@ -582,15 +593,18 @@ proc request*(
await conn.close()
d.limit = Opt.some(limit)
d.cookie =
try:
Opt.some(rdv.cookiesSaved[peer][ns])
except KeyError as exc:
if ns.isSome():
try:
Opt.some(rdv.cookiesSaved[peer][ns.get()])
except KeyError, CatchableError:
Opt.none(seq[byte])
else:
Opt.none(seq[byte])
await conn.writeLp(
encode(Message(msgType: MessageType.Discover, discover: Opt.some(d))).buffer
)
let
buf = await conn.readLp(65536)
buf = await conn.readLp(MaximumMessageLen)
msgRcv = Message.decode(buf).valueOr:
debug "Message undecodable"
return
@@ -604,12 +618,14 @@ proc request*(
trace "Cannot discover", ns, status = resp.status, text = resp.text
return
resp.cookie.withValue(cookie):
if cookie.len() < 1000 and
rdv.cookiesSaved.hasKeyOrPut(peer, {ns: cookie}.toTable()):
try:
rdv.cookiesSaved[peer][ns] = cookie
except KeyError:
raiseAssert "checked with hasKeyOrPut"
if ns.isSome:
let namespace = ns.get()
if cookie.len() < 1000 and
rdv.cookiesSaved.hasKeyOrPut(peer, {namespace: cookie}.toTable()):
try:
rdv.cookiesSaved[peer][namespace] = cookie
except KeyError:
raiseAssert "checked with hasKeyOrPut"
for r in resp.registrations:
if limit == 0:
return
@@ -632,8 +648,9 @@ proc request*(
else:
s[pr.peerId] = (pr, r)
limit.dec()
for (_, r) in s.values():
rdv.save(ns, peer, r, false)
if ns.isSome():
for (_, r) in s.values():
rdv.save(ns.get(), peer, r, false)
for peer in peers:
if limit == 0:
@@ -652,10 +669,15 @@ proc request*(
return toSeq(s.values()).mapIt(it[0])
proc request*(
rdv: RendezVous, ns: string, l: int = DiscoverLimit.int
rdv: RendezVous, ns: Opt[string], l: int = DiscoverLimit.int
): Future[seq[PeerRecord]] {.async: (raises: [DiscoveryError, CancelledError]).} =
await rdv.request(ns, l, rdv.peers)
proc request*(
rdv: RendezVous, l: int = DiscoverLimit.int
): Future[seq[PeerRecord]] {.async: (raises: [DiscoveryError, CancelledError]).} =
await rdv.request(Opt.none(string), l, rdv.peers)
proc unsubscribeLocally*(rdv: RendezVous, ns: string) =
let nsSalted = ns & rdv.salt
try:
@@ -668,7 +690,7 @@ proc unsubscribeLocally*(rdv: RendezVous, ns: string) =
proc unsubscribe*(
rdv: RendezVous, ns: string, peerIds: seq[PeerId]
) {.async: (raises: [RendezVousError, CancelledError]).} =
if ns.len notin 1 .. 255:
if ns.len < MinimumNamespaceLen or ns.len > MaximumNamespaceLen:
raise newException(RendezVousError, "Invalid namespace")
let msg = encode(
@@ -688,7 +710,7 @@ proc unsubscribe*(
for peer in peerIds:
unsubscribePeer(peer)
discard await allFutures(futs).withTimeout(5.seconds)
await allFutures(futs)
proc unsubscribe*(
rdv: RendezVous, ns: string
@@ -784,8 +806,10 @@ proc new*(
rdv.setup(switch)
return rdv
proc deletesRegister(rdv: RendezVous) {.async: (raises: [CancelledError]).} =
heartbeat "Register timeout", 1.minutes:
proc deletesRegister(
rdv: RendezVous, interval = 1.minutes
) {.async: (raises: [CancelledError]).} =
heartbeat "Register timeout", interval:
let n = Moment.now()
var total = 0
rdv.registered.flushIfIt(it.expiration < n)
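With the namespace now optional, discovery can be scoped to one namespace or left global. A hedged sketch of the Opt[string]-based API, assuming `rdv` is a RendezVous instance already mounted on a switch and that this runs inside an async proc:

let scoped = await rdv.request(Opt.some("my-app"))   # discover within one namespace
let global = await rdv.request()                     # discover across all namespaces
echo "scoped: ", scoped.len, " global: ", global.len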

View File

@@ -20,7 +20,6 @@ import ../../peerid
import ../../peerinfo
import ../../protobuf/minprotobuf
import ../../utility
import ../../errors
import secure, ../../crypto/[crypto, chacha20poly1305, curve25519, hkdf]

View File

@@ -11,15 +11,14 @@
{.push raises: [].}
import std/[strformat]
import stew/results
import results
import chronos, chronicles
import
../protocol,
../../stream/streamseq,
../../stream/connection,
../../multiaddress,
../../peerinfo,
../../errors
../../peerinfo
export protocol, results
@@ -82,7 +81,7 @@ method readMessage*(
): Future[seq[byte]] {.
async: (raises: [CancelledError, LPStreamError], raw: true), base
.} =
raiseAssert("Not implemented!")
raiseAssert("[SecureConn.readMessage] abstract method not implemented!")
method getWrapped*(s: SecureConn): Connection =
s.stream
@@ -92,7 +91,7 @@ method handshake*(
): Future[SecureConn] {.
async: (raises: [CancelledError, LPStreamError], raw: true), base
.} =
raiseAssert("Not implemented!")
raiseAssert("[Secure.handshake] abstract method not implemented!")
proc handleConn(
s: Secure, conn: Connection, initiator: bool, peerId: Opt[PeerId]
@@ -111,8 +110,8 @@ proc handleConn(
fut2 = sconn.join()
try: # https://github.com/status-im/nim-chronos/issues/516
discard await race(fut1, fut2)
except ValueError:
raiseAssert("Futures list is not empty")
except ValueError as e:
raiseAssert("Futures list is not empty: " & e.msg)
# at least one join() completed, cancel pending one, if any
if not fut1.finished:
await fut1.cancelAndWait()
@@ -183,14 +182,14 @@ method readOnce*(
except LPStreamEOFError as err:
s.isEof = true
await s.close()
raise err
raise newException(LPStreamEOFError, "Secure connection EOF: " & err.msg, err)
except CancelledError as exc:
raise exc
except LPStreamError as err:
debug "Error while reading message from secure connection, closing.",
error = err.name, message = err.msg, connection = s
await s.close()
raise err
raise newException(LPStreamError, "Secure connection read error: " & err.msg, err)
var p = cast[ptr UncheckedArray[byte]](pbytes)
return s.buf.consumeTo(toOpenArray(p, 0, nbytes - 1))

View File

@@ -12,7 +12,7 @@
{.push raises: [].}
import std/[sequtils, times]
import pkg/stew/results
import pkg/results
import multiaddress, multicodec, peerid, protobuf/minprotobuf, signed_envelope
export peerid, multiaddress, signed_envelope

View File

@@ -55,7 +55,7 @@ proc tryStartingDirectConn(
if not isRelayed.get(false) and address.isPublicMA():
return await tryConnect(address)
except CatchableError as err:
debug "Failed to create direct connection.", err = err.msg
debug "Failed to create direct connection.", description = err.msg
continue
return false
@@ -91,7 +91,7 @@ proc newConnectedPeerHandler(
except CancelledError as err:
raise err
except CatchableError as err:
debug "Hole punching failed during dcutr", err = err.msg
debug "Hole punching failed during dcutr", description = err.msg
method setup*(
self: HPService, switch: Switch
@@ -104,7 +104,7 @@ method setup*(
let dcutrProto = Dcutr.new(switch)
switch.mount(dcutrProto)
except LPError as err:
error "Failed to mount Dcutr", err = err.msg
error "Failed to mount Dcutr", description = err.msg
self.newConnectedPeerHandler = proc(
peerId: PeerId, event: PeerEvent

View File

@@ -10,8 +10,8 @@
{.push raises: [].}
import std/sequtils
import stew/[byteutils, results, endians2]
import chronos, chronos/transports/[osnet, ipnet], chronicles
import stew/endians2
import chronos, chronos/transports/[osnet, ipnet], chronicles, results
import ../[multiaddress, multicodec]
import ../switch
@@ -73,7 +73,6 @@ proc new*(
return T(networkInterfaceProvider: networkInterfaceProvider)
proc getProtocolArgument*(ma: MultiAddress, codec: MultiCodec): MaResult[seq[byte]] =
var buffer: seq[byte]
for item in ma:
let
ritem = ?item

View File

@@ -12,7 +12,7 @@
{.push raises: [].}
import std/sugar
import pkg/stew/[results, byteutils]
import pkg/stew/byteutils, pkg/results
import multicodec, crypto/crypto, protobuf/minprotobuf, vbuffer
export crypto

View File

@@ -0,0 +1,63 @@
# Nim-LibP2P
# Copyright (c) 2025 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
# * MIT license ([LICENSE-MIT](LICENSE-MIT))
# at your option.
# This file may not be copied, modified, or distributed except according to
# those terms.
import pkg/chronos
import connection, bufferstream
export connection
type
WriteHandler = proc(data: seq[byte]): Future[void] {.
async: (raises: [CancelledError, LPStreamError])
.}
BridgeStream* = ref object of BufferStream
writeHandler: WriteHandler
closeHandler: proc(): Future[void] {.async: (raises: []).}
method write*(
s: BridgeStream, msg: seq[byte]
): Future[void] {.public, async: (raises: [CancelledError, LPStreamError], raw: true).} =
s.writeHandler(msg)
method closeImpl*(s: BridgeStream): Future[void] {.async: (raises: [], raw: true).} =
if not isNil(s.closeHandler):
discard s.closeHandler()
procCall BufferStream(s).closeImpl()
method getWrapped*(s: BridgeStream): Connection =
nil
proc bridgedConnections*(
closeTogether: bool = true, dirA = Direction.In, dirB = Direction.In
): (BridgeStream, BridgeStream) =
let connA = BridgeStream()
let connB = BridgeStream()
connA.dir = dirA
connB.dir = dirB
connA.initStream()
connB.initStream()
connA.writeHandler = proc(
data: seq[byte]
) {.async: (raises: [CancelledError, LPStreamError], raw: true).} =
connB.pushData(data)
connB.writeHandler = proc(
data: seq[byte]
) {.async: (raises: [CancelledError, LPStreamError], raw: true).} =
connA.pushData(data)
if closeTogether:
connA.closeHandler = proc(): Future[void] {.async: (raises: []).} =
await noCancel connB.close()
connB.closeHandler = proc(): Future[void] {.async: (raises: []).} =
await noCancel connA.close()
return (connA, connB)
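bridgedConnections joins two in-process endpoints so that bytes written on one side are pushed into the other. A hedged sketch, assuming it runs inside an async proc:

let (left, right) = bridgedConnections()
await left.write(@[byte 1, 2, 3])
var buf: array[3, byte]
await right.readExactly(addr buf[0], buf.len)
doAssert @buf == @[byte 1, 2, 3]
await left.close()   # closeTogether defaults to true, so `right` closes as well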

View File

@@ -199,8 +199,10 @@ method closeImpl*(s: BufferStream): Future[void] {.async: (raises: [], raw: true
elif s.pushing:
if not s.readQueue.empty():
discard s.readQueue.popFirstNoWait()
except AsyncQueueFullError, AsyncQueueEmptyError:
raiseAssert(getCurrentExceptionMsg())
except AsyncQueueFullError as e:
raiseAssert("closeImpl failed queue full: " & e.msg)
except AsyncQueueEmptyError as e:
raiseAssert("closeImpl failed queue empty: " & e.msg)
trace "Closed BufferStream", s

View File

@@ -10,7 +10,7 @@
{.push raises: [].}
import std/[strformat]
import stew/results
import results
import chronos, chronicles, metrics
import connection
import ../utility
@@ -34,8 +34,6 @@ when defined(libp2p_agents_metrics):
declareCounter libp2p_peers_traffic_read, "incoming traffic", labels = ["agent"]
declareCounter libp2p_peers_traffic_write, "outgoing traffic", labels = ["agent"]
declareCounter libp2p_network_bytes, "total traffic", labels = ["direction"]
func shortLog*(conn: ChronosStream): auto =
try:
if conn == nil:

View File

@@ -10,7 +10,7 @@
{.push raises: [].}
import std/[hashes, oids, strformat]
import stew/results
import results
import chronicles, chronos, metrics
import lpstream, ../multiaddress, ../peerinfo, ../errors
@@ -52,6 +52,8 @@ func shortLog*(conn: Connection): string =
chronicles.formatIt(Connection):
shortLog(it)
declarePublicCounter libp2p_network_bytes, "total traffic", labels = ["direction"]
method initStream*(s: Connection) =
if s.objName.len == 0:
s.objName = ConnectionTrackerName
@@ -124,7 +126,7 @@ proc timeoutMonitor(s: Connection) {.async: (raises: []).} =
return
method getWrapped*(s: Connection): Connection {.base.} =
raiseAssert("Not implemented!")
raiseAssert("[Connection.getWrapped] abstract method not implemented!")
when defined(libp2p_agents_metrics):
proc setShortAgent*(s: Connection, shortAgent: string) =

View File

@@ -113,9 +113,9 @@ method initStream*(s: LPStream) {.base.} =
trackCounter(s.objName)
trace "Stream created", s, objName = s.objName, dir = $s.dir
proc join*(
method join*(
s: LPStream
): Future[void] {.async: (raises: [CancelledError], raw: true), public.} =
): Future[void] {.base, async: (raises: [CancelledError], raw: true), public.} =
## Wait for the stream to be closed
s.closeEvent.wait()
@@ -133,11 +133,11 @@ method readOnce*(
## Reads whatever is available in the stream,
## up to `nbytes`. Will block if nothing is
## available
raiseAssert("Not implemented!")
raiseAssert("[LPStream.readOnce] abstract method not implemented!")
proc readExactly*(
method readExactly*(
s: LPStream, pbytes: pointer, nbytes: int
): Future[void] {.async: (raises: [CancelledError, LPStreamError]), public.} =
): Future[void] {.base, async: (raises: [CancelledError, LPStreamError]), public.} =
## Waits for `nbytes` to be available, then read
## them and return them
if s.atEof:
@@ -171,9 +171,9 @@ proc readExactly*(
trace "couldn't read all bytes, incomplete data", s, nbytes, read
raise newLPStreamIncompleteError()
proc readLine*(
method readLine*(
s: LPStream, limit = 0, sep = "\r\n"
): Future[string] {.async: (raises: [CancelledError, LPStreamError]), public.} =
): Future[string] {.base, async: (raises: [CancelledError, LPStreamError]), public.} =
## Reads until a `sep` is found or `limit` bytes have been read
# TODO replace with something that exploits buffering better
var lim = if limit <= 0: -1 else: limit
@@ -199,9 +199,9 @@ proc readLine*(
if len(result) == lim:
break
proc readVarint*(
method readVarint*(
conn: LPStream
): Future[uint64] {.async: (raises: [CancelledError, LPStreamError]), public.} =
): Future[uint64] {.base, async: (raises: [CancelledError, LPStreamError]), public.} =
var buffer: array[10, byte]
for i in 0 ..< len(buffer):
@@ -218,9 +218,9 @@ proc readVarint*(
if true: # can't end with a raise apparently
raise (ref InvalidVarintError)(msg: "Cannot parse varint")
proc readLp*(
method readLp*(
s: LPStream, maxSize: int
): Future[seq[byte]] {.async: (raises: [CancelledError, LPStreamError]), public.} =
): Future[seq[byte]] {.base, async: (raises: [CancelledError, LPStreamError]), public.} =
## read length prefixed msg, with the length encoded as a varint
let
length = await s.readVarint()
@@ -242,11 +242,13 @@ method write*(
async: (raises: [CancelledError, LPStreamError], raw: true), base, public
.} =
# Write `msg` to stream, waiting for the write to be finished
raiseAssert("Not implemented!")
raiseAssert("[LPStream.write] abstract method not implemented!")
proc writeLp*(
method writeLp*(
s: LPStream, msg: openArray[byte]
): Future[void] {.async: (raises: [CancelledError, LPStreamError], raw: true), public.} =
): Future[void] {.
base, async: (raises: [CancelledError, LPStreamError], raw: true), public
.} =
## Write `msg` with a varint-encoded length prefix
let vbytes = PB.toBytes(msg.len().uint64)
var buf = newSeqUninitialized[byte](msg.len() + vbytes.len)
@@ -254,9 +256,11 @@ proc writeLp*(
buf[vbytes.len ..< buf.len] = msg
s.write(buf)
proc writeLp*(
method writeLp*(
s: LPStream, msg: string
): Future[void] {.async: (raises: [CancelledError, LPStreamError], raw: true), public.} =
): Future[void] {.
base, async: (raises: [CancelledError, LPStreamError], raw: true), public
.} =
writeLp(s, msg.toOpenArrayByte(0, msg.high))
proc write*(
@@ -324,7 +328,7 @@ proc closeWithEOF*(s: LPStream): Future[void] {.async: (raises: []), public.} =
debug "Unexpected bytes while waiting for EOF", s
except CancelledError:
discard
except LPStreamEOFError:
trace "Expected EOF came", s
except LPStreamEOFError as e:
trace "Expected EOF came", s, description = e.msg
except LPStreamError as exc:
debug "Unexpected error while waiting for EOF", s, description = exc.msg

View File

@@ -77,7 +77,7 @@ method setup*(
return true
method run*(self: Service, switch: Switch) {.base, async: (raises: [CancelledError]).} =
doAssert(false, "Not implemented!")
doAssert(false, "[Service.run] abstract method not implemented!")
method stop*(
self: Service, switch: Switch
@@ -233,7 +233,7 @@ proc upgrader(
except CancelledError as e:
raise e
except CatchableError as e:
raise newException(UpgradeError, e.msg, e)
raise newException(UpgradeError, "catchable error upgrader: " & e.msg, e)
proc upgradeMonitor(
switch: Switch, trans: Transport, conn: Connection, upgrades: AsyncSemaphore
@@ -275,7 +275,8 @@ proc accept(s: Switch, transport: Transport) {.async: (raises: []).} =
await transport.accept()
except CatchableError as exc:
slot.release()
raise exc
raise
newException(CatchableError, "failed to accept connection: " & exc.msg, exc)
slot.trackConnection(conn)
if isNil(conn):
# A nil connection means that we might have hit a

View File

@@ -0,0 +1,122 @@
# Nim-LibP2P
# Copyright (c) 2025 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
# * MIT license ([LICENSE-MIT](LICENSE-MIT))
# at your option.
# This file may not be copied, modified, or distributed except according to
# those terms.
import locks
import tables
import pkg/chronos
import pkg/chronicles
import ./transport
import ../multiaddress
import ../stream/connection
import ../stream/bridgestream
type
MemoryTransportError* = object of transport.TransportError
MemoryTransportAcceptStopped* = object of MemoryTransportError
type MemoryListener* = object
address: string
accept: Future[Connection]
onListenerEnd: proc(address: string) {.closure, gcsafe, raises: [].}
proc init(
_: type[MemoryListener],
address: string,
onListenerEnd: proc(address: string) {.closure, gcsafe, raises: [].},
): MemoryListener =
return MemoryListener(
accept: newFuture[Connection]("MemoryListener.accept"),
address: address,
onListenerEnd: onListenerEnd,
)
proc close*(self: MemoryListener) =
if not (self.accept.finished):
self.accept.fail(newException(MemoryTransportAcceptStopped, "Listener closed"))
self.onListenerEnd(self.address)
proc accept*(
self: MemoryListener
): Future[Connection] {.gcsafe, raises: [CatchableError].} =
return self.accept
proc dial*(
self: MemoryListener
): Future[Connection] {.gcsafe, raises: [CatchableError].} =
let (connA, connB) = bridgedConnections()
self.onListenerEnd(self.address)
self.accept.complete(connA)
let dFut = newFuture[Connection]("MemoryListener.dial")
dFut.complete(connB)
return dFut
type memoryConnManager = ref object
listeners: Table[string, MemoryListener]
connections: Table[string, Connection]
lock: Lock
proc init(_: type[memoryConnManager]): memoryConnManager =
var m = memoryConnManager()
initLock(m.lock)
return m
proc onListenerEnd(
self: memoryConnManager
): proc(address: string) {.closure, gcsafe, raises: [].} =
proc cb(address: string) {.closure, gcsafe, raises: [].} =
acquire(self.lock)
defer:
release(self.lock)
try:
if address in self.listeners:
self.listeners.del(address)
except KeyError:
raiseAssert "checked with if"
return cb
proc accept*(
self: memoryConnManager, address: string
): MemoryListener {.raises: [MemoryTransportError].} =
acquire(self.lock)
defer:
release(self.lock)
if address in self.listeners:
raise newException(MemoryTransportError, "Memory address already in use")
let listener = MemoryListener.init(address, self.onListenerEnd())
self.listeners[address] = listener
return listener
proc dial*(
self: memoryConnManager, address: string
): MemoryListener {.raises: [MemoryTransportError].} =
acquire(self.lock)
defer:
release(self.lock)
if address notin self.listeners:
raise newException(MemoryTransportError, "No memory listener found")
try:
return self.listeners[address]
except KeyError:
raiseAssert "checked with if"
let instance: memoryConnManager = memoryConnManager.init()
proc getInstance*(): memoryConnManager {.gcsafe.} =
{.gcsafe.}:
instance

View File

@@ -0,0 +1,127 @@
# Nim-LibP2P
# Copyright (c) 2025 Status Research & Development GmbH
# Licensed under either of
# * Apache License, version 2.0, ([LICENSE-APACHE](LICENSE-APACHE))
# * MIT license ([LICENSE-MIT](LICENSE-MIT))
# at your option.
# This file may not be copied, modified, or distributed except according to
# those terms.
## Memory transport implementation
import std/sequtils
import pkg/chronos
import pkg/chronicles
import ./transport
import ../multiaddress
import ../stream/connection
import ../crypto/crypto
import ../upgrademngrs/upgrade
import ./memorymanager
export connection
export MemoryTransportError, MemoryTransportAcceptStopped
const MemoryAutoAddress* = "/memory/*"
logScope:
topics = "libp2p memorytransport"
type MemoryTransport* = ref object of Transport
rng: ref HmacDrbgContext
connections: seq[Connection]
listener: Opt[MemoryListener]
proc new*(
T: typedesc[MemoryTransport],
upgrade: Upgrade = Upgrade(),
rng: ref HmacDrbgContext = newRng(),
): T =
T(upgrader: upgrade, rng: rng)
proc listenAddress(self: MemoryTransport, ma: MultiAddress): MultiAddress =
if $ma != MemoryAutoAddress:
return ma
# when the special address `/memory/*` is used, pick any free address;
# we assume that any randomly generated address will be free.
var randomBuf: array[10, byte]
hmacDrbgGenerate(self.rng[], randomBuf)
return MultiAddress.init("/memory/" & toHex(randomBuf)).get()
method start*(
self: MemoryTransport, addrs: seq[MultiAddress]
) {.async: (raises: [LPError, transport.TransportError]).} =
if self.running:
return
trace "starting memory transport on addrs", address = $addrs
self.addrs = addrs.mapIt(self.listenAddress(it))
self.running = true
method stop*(self: MemoryTransport) {.async: (raises: []).} =
if not self.running:
return
trace "stopping memory transport", address = $self.addrs
self.running = false
# closing the listener raises an interruption error in the caller of accept()
let listener = self.listener
if listener.isSome:
listener.get().close()
# end all connections
await noCancel allFutures(self.connections.mapIt(it.close()))
method accept*(
self: MemoryTransport
): Future[Connection] {.async: (raises: [transport.TransportError, CancelledError]).} =
if not self.running:
raise newException(MemoryTransportError, "Transport closed, no more connections!")
var listener: MemoryListener
try:
listener = getInstance().accept($self.addrs[0])
self.listener = Opt.some(listener)
let conn = await listener.accept()
self.connections.add(conn)
self.listener = Opt.none(MemoryListener)
return conn
except CancelledError as e:
listener.close()
raise e
except MemoryTransportError as e:
raise e
except CatchableError:
raiseAssert "should never happen"
method dial*(
self: MemoryTransport,
hostname: string,
ma: MultiAddress,
peerId: Opt[PeerId] = Opt.none(PeerId),
): Future[Connection] {.async: (raises: [transport.TransportError, CancelledError]).} =
try:
let listener = getInstance().dial($ma)
let conn = await listener.dial()
self.connections.add(conn)
return conn
except CancelledError as e:
raise e
except MemoryTransportError as e:
raise e
except CatchableError:
raiseAssert "should never happen"
proc dial*(
self: MemoryTransport, ma: MultiAddress, peerId: Opt[PeerId] = Opt.none(PeerId)
): Future[Connection] {.gcsafe.} =
self.dial("", ma)
method handles*(self: MemoryTransport, ma: MultiAddress): bool {.gcsafe, raises: [].} =
if procCall Transport(self).handles(ma):
if ma.protocols.isOk:
return Memory.match(ma)
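A hedged sketch of an in-process listen/dial pair over the memory transport. The concrete "/memory/<id>" address used here is an assumption (the transport itself generates hex identifiers), and the snippet must run inside an async proc:

let ma = MultiAddress.init("/memory/0a1b2c3d4e").tryGet()  # assumed-valid memory address
let server = MemoryTransport.new()
await server.start(@[ma])
let acceptFut = server.accept()        # registers the listener, then waits

let client = MemoryTransport.new()
let outbound = await client.dial(ma)   # completes the listener's accept future
let inbound = await acceptFut          # the two connections are bridged
await server.stop()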

View File

@@ -1,7 +1,9 @@
import std/sequtils
import pkg/chronos
import pkg/chronicles
import pkg/quic
import chronos
import chronicles
import metrics
import quic
import results
import ../multiaddress
import ../multicodec
import ../stream/connection
@@ -9,6 +11,7 @@ import ../wire
import ../muxers/muxer
import ../upgrademngrs/upgrade
import ./transport
import tls/certificate
export multiaddress
export multicodec
@@ -23,6 +26,9 @@ type
QuicConnection = quic.Connection
QuicTransportError* = object of transport.TransportError
QuicTransportDialError* = object of transport.TransportDialError
QuicTransportAcceptStopped* = object of QuicTransportError
const alpn = "libp2p"
# Stream
type QuicStream* = ref object of P2PConnection
@@ -53,6 +59,7 @@ method readOnce*(
result = min(nbytes, stream.cached.len)
copyMem(pbytes, addr stream.cached[0], result)
stream.cached = stream.cached[result ..^ 1]
libp2p_network_bytes.inc(result.int64, labelValues = ["in"])
except CatchableError as exc:
raise newLPStreamEOFError()
@@ -61,6 +68,7 @@ method write*(
stream: QuicStream, bytes: seq[byte]
) {.async: (raises: [CancelledError, LPStreamError]).} =
mapExceptions(await stream.stream.write(bytes))
libp2p_network_bytes.inc(bytes.len.int64, labelValues = ["out"])
{.pop.}
@@ -81,15 +89,19 @@ method close*(session: QuicSession) {.async: (raises: []).} =
proc getStream*(
session: QuicSession, direction = Direction.In
): Future[QuicStream] {.async: (raises: [CatchableError]).} =
var stream: Stream
case direction
of Direction.In:
stream = await session.connection.incomingStream()
of Direction.Out:
stream = await session.connection.openStream()
await stream.write(@[]) # QUIC streams do not exist until data is sent
return QuicStream.new(stream, session.observedAddr, session.peerId)
): Future[QuicStream] {.async: (raises: [QuicTransportError]).} =
try:
var stream: Stream
case direction
of Direction.In:
stream = await session.connection.incomingStream()
of Direction.Out:
stream = await session.connection.openStream()
await stream.write(@[]) # QUIC streams do not exist until data is sent
return QuicStream.new(stream, session.observedAddr, session.peerId)
except CatchableError as exc:
# TODO: incomingStream is using {.async.} with no raises
raise (ref QuicTransportError)(msg: "error in getStream: " & exc.msg, parent: exc)
method getWrapped*(self: QuicSession): P2PConnection =
nil
@@ -107,7 +119,7 @@ method newStream*(
try:
return await m.quicSession.getStream(Direction.Out)
except CatchableError as exc:
raise newException(MuxerError, exc.msg, exc)
raise newException(MuxerError, "error in newStream: " & exc.msg, exc)
proc handleStream(m: QuicMuxer, chann: QuicStream) {.async: (raises: []).} =
## call the muxer stream handler for this channel
@@ -131,19 +143,65 @@ method handle*(m: QuicMuxer): Future[void] {.async: (raises: []).} =
method close*(m: QuicMuxer) {.async: (raises: []).} =
try:
await m.quicSession.close()
m.handleFut.cancel()
m.handleFut.cancelSoon()
except CatchableError as exc:
discard
# Transport
type QuicUpgrade = ref object of Upgrade
type CertGenerator =
proc(kp: KeyPair): CertificateX509 {.gcsafe, raises: [TLSCertificateError].}
type QuicTransport* = ref object of Transport
listener: Listener
client: QuicClient
privateKey: PrivateKey
connections: seq[P2PConnection]
rng: ref HmacDrbgContext
certGenerator: CertGenerator
func new*(_: type QuicTransport, u: Upgrade): QuicTransport =
QuicTransport(upgrader: QuicUpgrade(ms: u.ms))
proc makeCertificateVerifier(): CertificateVerifier =
proc certificateVerifier(serverName: string, certificatesDer: seq[seq[byte]]): bool =
if certificatesDer.len != 1:
trace "CertificateVerifier: expected one certificate in the chain",
cert_count = certificatesDer.len
return false
let cert =
try:
parse(certificatesDer[0])
except CertificateParsingError as e:
trace "CertificateVerifier: failed to parse certificate", msg = e.msg
return false
return cert.verify()
return CustomCertificateVerifier.init(certificateVerifier)
proc defaultCertGenerator(
kp: KeyPair
): CertificateX509 {.gcsafe, raises: [TLSCertificateError].} =
return generateX509(kp, encodingFormat = EncodingFormat.PEM)
proc new*(_: type QuicTransport, u: Upgrade, privateKey: PrivateKey): QuicTransport =
return QuicTransport(
upgrader: QuicUpgrade(ms: u.ms),
privateKey: privateKey,
certGenerator: defaultCertGenerator,
)
proc new*(
_: type QuicTransport,
u: Upgrade,
privateKey: PrivateKey,
certGenerator: CertGenerator,
): QuicTransport =
return QuicTransport(
upgrader: QuicUpgrade(ms: u.ms),
privateKey: privateKey,
certGenerator: certGenerator,
)
method handles*(transport: QuicTransport, address: MultiAddress): bool {.raises: [].} =
if not procCall Transport(transport).handles(address):
@@ -155,14 +213,39 @@ method start*(
) {.async: (raises: [LPError, transport.TransportError]).} =
doAssert self.listener.isNil, "start() already called"
#TODO handle multiple addr
let pubkey = self.privateKey.getPublicKey().valueOr:
doAssert false, "could not obtain public key"
return
try:
self.listener = listen(initTAddress(addrs[0]).tryGet)
if self.rng.isNil:
self.rng = newRng()
let cert = self.certGenerator(KeyPair(seckey: self.privateKey, pubkey: pubkey))
let tlsConfig = TLSConfig.init(
cert.certificate, cert.privateKey, @[alpn], Opt.some(makeCertificateVerifier())
)
self.client = QuicClient.init(tlsConfig, rng = self.rng)
self.listener =
QuicServer.init(tlsConfig, rng = self.rng).listen(initTAddress(addrs[0]).tryGet)
await procCall Transport(self).start(addrs)
self.addrs[0] =
MultiAddress.init(self.listener.localAddress(), IPPROTO_UDP).tryGet() &
MultiAddress.init("/quic-v1").get()
except QuicConfigError as exc:
doAssert false, "invalid quic setup: " & $exc.msg
except TLSCertificateError as exc:
raise (ref QuicTransportError)(
msg: "tlscert error in quic start: " & exc.msg, parent: exc
)
except QuicError as exc:
raise
(ref QuicTransportError)(msg: "quicerror in quic start: " & exc.msg, parent: exc)
except TransportOsError as exc:
raise (ref QuicTransportError)(msg: exc.msg, parent: exc)
raise (ref QuicTransportError)(
msg: "transport error in quic start: " & exc.msg, parent: exc
)
self.running = true
method stop*(transport: QuicTransport) {.async: (raises: []).} =
@@ -174,61 +257,85 @@ method stop*(transport: QuicTransport) {.async: (raises: []).} =
await transport.listener.stop()
except CatchableError as exc:
trace "Error shutting down Quic transport", description = exc.msg
transport.listener.destroy()
transport.running = false
transport.listener = nil
proc wrapConnection(
transport: QuicTransport, connection: QuicConnection
): P2PConnection {.raises: [Defect, TransportOsError, LPError].} =
): QuicSession {.raises: [TransportOsError, MaError].} =
let
remoteAddr = connection.remoteAddress()
observedAddr =
MultiAddress.init(remoteAddr, IPPROTO_UDP).get() &
MultiAddress.init("/quic-v1").get()
conres = QuicSession(connection: connection, observedAddr: Opt.some(observedAddr))
conres.initStream()
session = QuicSession(connection: connection, observedAddr: Opt.some(observedAddr))
session.initStream()
transport.connections.add(session)
transport.connections.add(conres)
proc onClose() {.async: (raises: []).} =
-    await noCancel conres.join()
-    transport.connections.keepItIf(it != conres)
+    await noCancel session.join()
+    transport.connections.keepItIf(it != session)
trace "Cleaned up client"
asyncSpawn onClose()
-  return conres
+  return session
method accept*(
self: QuicTransport
-): Future[P2PConnection] {.async: (raises: [transport.TransportError, CancelledError]).} =
+): Future[connection.Connection] {.
+    async: (raises: [transport.TransportError, CancelledError])
+.} =
doAssert not self.listener.isNil, "call start() before calling accept()"
if not self.running:
# stop accept only when transport is stopped (not when error occurs)
raise newException(QuicTransportAcceptStopped, "Quic transport stopped")
try:
let connection = await self.listener.accept()
return self.wrapConnection(connection)
-  except CancelledError as e:
-    raise e
-  except CatchableError as e:
-    raise (ref QuicTransportError)(msg: e.msg, parent: e)
+  except CancelledError as exc:
+    raise exc
+  except QuicError as exc:
+    debug "Quic Error", description = exc.msg
+  except MaError as exc:
+    debug "Multiaddr Error", description = exc.msg
+  except CatchableError as exc: # TODO: removing this requires async/raises in nim-quic
+    info "Unexpected error accepting quic connection", description = exc.msg
+  except TransportOsError as exc:
+    debug "OS Error", description = exc.msg
method dial*(
self: QuicTransport,
hostname: string,
address: MultiAddress,
peerId: Opt[PeerId] = Opt.none(PeerId),
-): Future[P2PConnection] {.async: (raises: [transport.TransportError, CancelledError]).} =
+): Future[connection.Connection] {.
+    async: (raises: [transport.TransportError, CancelledError])
+.} =
try:
-    let connection = await dial(initTAddress(address).tryGet)
-    return self.wrapConnection(connection)
+    let quicConnection = await self.client.dial(initTAddress(address).tryGet)
+    return self.wrapConnection(quicConnection)
except CancelledError as e:
raise e
except CatchableError as e:
-    raise newException(QuicTransportDialError, e.msg, e)
+    raise newException(QuicTransportDialError, "error in quic dial:" & e.msg, e)
method upgrade*(
self: QuicTransport, conn: P2PConnection, peerId: Opt[PeerId]
): Future[Muxer] {.async: (raises: [CancelledError, LPError]).} =
let qs = QuicSession(conn)
-  if peerId.isSome:
-    qs.peerId = peerId.get()
+  qs.peerId =
+    if peerId.isSome:
+      peerId.get()
+    else:
+      let certificates = qs.connection.certificates()
+      let cert = parse(certificates[0])
+      cert.peerId()
let muxer = QuicMuxer(quicSession: qs, connection: conn)
muxer.streamHandler = proc(conn: P2PConnection) {.async: (raises: []).} =

View File

@@ -133,7 +133,9 @@ method start*(
try:
createStreamServer(ta, flags = self.flags)
except common.TransportError as exc:
-    raise (ref TcpTransportError)(msg: exc.msg, parent: exc)
+    raise (ref TcpTransportError)(
+      msg: "transport error in TcpTransport start:" & exc.msg, parent: exc
+    )
self.servers &= server
@@ -250,9 +252,13 @@ method accept*(
except TransportUseClosedError as exc:
raise newTransportClosedError(exc)
except TransportOsError as exc:
-    raise (ref TcpTransportError)(msg: exc.msg, parent: exc)
+    raise (ref TcpTransportError)(
+      msg: "TransportOs error in accept:" & exc.msg, parent: exc
+    )
except common.TransportError as exc: # Needed for chronos 4.0.0 support
-    raise (ref TcpTransportError)(msg: exc.msg, parent: exc)
+    raise (ref TcpTransportError)(
+      msg: "TransportError in accept: " & exc.msg, parent: exc
+    )
except CancelledError as exc:
cancelAcceptFuts()
raise exc
@@ -302,7 +308,8 @@ method dial*(
except CancelledError as exc:
raise exc
except CatchableError as exc:
-    raise (ref TcpTransportError)(msg: exc.msg, parent: exc)
+    raise
+      (ref TcpTransportError)(msg: "TcpTransport dial error: " & exc.msg, parent: exc)
# If `stop` is called after `connect` but before `await` returns, we might
# end up with a race condition where `stop` returns but not all connections
@@ -318,7 +325,7 @@ method dial*(
MultiAddress.init(transp.remoteAddress).expect("remote address is valid")
except TransportOsError as exc:
safeCloseWait(transp)
-    raise (ref TcpTransportError)(msg: exc.msg)
+    raise (ref TcpTransportError)(msg: "MultiAddress.init error in dial: " & exc.msg)
self.connHandler(transp, Opt.some(observedAddr), Direction.Out)

File diff suppressed because it is too large

View File

@@ -0,0 +1,206 @@
#ifndef LIBP2P_CERT_H
#define LIBP2P_CERT_H
#include <stddef.h>
#include <stdint.h>
typedef struct cert_context_s *cert_context_t;
typedef struct cert_key_s *cert_key_t;
typedef int32_t cert_error_t;
#define CERT_SUCCESS 0
#define CERT_ERROR_NULL_PARAM -1
#define CERT_ERROR_MEMORY -2
#define CERT_ERROR_DRBG_INIT -3
#define CERT_ERROR_DRBG_CONFIG -4
#define CERT_ERROR_DRBG_SEED -5
#define CERT_ERROR_KEY_GEN -6
#define CERT_ERROR_CERT_GEN -7
#define CERT_ERROR_EXTENSION_GEN -8
#define CERT_ERROR_EXTENSION_ADD -9
#define CERT_ERROR_EXTENSION_DATA -10
#define CERT_ERROR_BIO_GEN -11
#define CERT_ERROR_SIGN -12
#define CERT_ERROR_ENCODING -13
#define CERT_ERROR_PARSE -14
#define CERT_ERROR_RAND -15
#define CERT_ERROR_ECKEY_GEN -16
#define CERT_ERROR_BIGNUM_CONV -17
#define CERT_ERROR_SET_KEY -18
#define CERT_ERROR_VALIDITY_PERIOD -19
#define CERT_ERROR_BIO_WRITE -20
#define CERT_ERROR_SERIAL_WRITE -21
#define CERT_ERROR_EVP_PKEY_EC_KEY -22
#define CERT_ERROR_X509_VER -23
#define CERT_ERROR_BIGNUM_GEN -24
#define CERT_ERROR_X509_NAME -25
#define CERT_ERROR_X509_CN -26
#define CERT_ERROR_X509_SUBJECT -27
#define CERT_ERROR_X509_ISSUER -28
#define CERT_ERROR_AS1_TIME_GEN -29
#define CERT_ERROR_PUBKEY_SET -30
#define CERT_ERROR_AS1_OCTET -31
#define CERT_ERROR_X509_READ -32
#define CERT_ERROR_PUBKEY_GET -33
#define CERT_ERROR_EXTENSION_NOT_FOUND -34
#define CERT_ERROR_EXTENSION_GET -35
#define CERT_ERROR_DECODE_SEQUENCE -36
#define CERT_ERROR_NOT_ENOUGH_SEQ_ELEMS -37
#define CERT_ERROR_NOT_OCTET_STR -38
#define CERT_ERROR_NID -39
#define CERT_ERROR_PUBKEY_DER_LEN -40
#define CERT_ERROR_PUBKEY_DER_CONV -41
#define CERT_ERROR_INIT_KEYGEN -42
#define CERT_ERROR_SET_CURVE -43
#define CERT_ERROR_X509_REQ_GEN -44
#define CERT_ERROR_X509_REQ_DER -45
#define CERT_ERROR_NO_PUBKEY -46
#define CERT_ERROR_X509_SAN -47
#define CERT_ERROR_CN_TOO_LONG -48
#define CERT_ERROR_CN_LABEL_TOO_LONG -49
#define CERT_ERROR_CN_EMPTY_LABEL -50
#define CERT_ERROR_CN_EMPTY -51
typedef enum { CERT_FORMAT_DER = 0, CERT_FORMAT_PEM = 1 } cert_format_t;
/* Buffer structure for raw key data */
typedef struct {
unsigned char *data; /* data buffer */
size_t len; /* Length of data */
} cert_buffer;
/* Struct to hold the parsed certificate data */
typedef struct {
cert_buffer *signature;
cert_buffer *ident_pubk;
cert_buffer *cert_pubkey;
char *valid_from;
char *valid_to;
} cert_parsed;
/**
* Initialize the CTR-DRBG for cryptographic operations
* This function creates and initializes a CTR-DRBG context using
* the provided seed for entropy. The DRBG is configured to use
* AES-256-CTR as the underlying cipher.
*
* @param seed A null-terminated string used to seed the DRBG. Must not be NULL.
* @param ctx Pointer to a context pointer that will be allocated and
* initialized. The caller is responsible for eventually freeing this context
* with the appropriate cleanup function.
*
* @return CERT_SUCCESS on successful initialization, an error code otherwise
*/
cert_error_t cert_init_drbg(const char *seed, size_t seed_len,
cert_context_t *ctx);
/**
* Generate an EC key pair for use with certificates
*
* @param ctx Context pointer obtained from `cert_init_drbg`
* @param out Pointer to store the generated key
*
* @return CERT_SUCCESS on successful execution, an error code otherwise
*/
cert_error_t cert_generate_key(cert_context_t ctx, cert_key_t *out);
/**
* Serialize a key's private key to a format
*
* @param key The key to export
* @param out Pointer to a buffer structure that will be populated with the key
* @param format output format
*
* @return CERT_SUCCESS on successful execution, an error code otherwise
*/
cert_error_t cert_serialize_privk(cert_key_t key, cert_buffer **out,
cert_format_t format);
/**
* Serialize a key's public key to a format
*
* @param key The key to export
* @param out Pointer to a buffer structure that will be populated with the key
* @param format output format
*
* @return CERT_SUCCESS on successful execution, an error code otherwise
*/
cert_error_t cert_serialize_pubk(cert_key_t key, cert_buffer **out,
cert_format_t format);
/**
* Generate a self-signed X.509 certificate with libp2p extension
*
* @param ctx Context pointer obtained from `cert_init_drbg`
* @param key Key to use
* @param out Pointer to a buffer that will be populated with a certificate
* @param signature buffer that contains a signature
* @param ident_pubk buffer that contains the bytes of an identity pubk
 * @param cn Common name to use for the certificate subject/issuer
* @param validFrom Date from which certificate is issued
* @param validTo Date to which certificate is issued
* @param format Certificate format
*
* @return CERT_SUCCESS on successful execution, an error code otherwise
*/
cert_error_t cert_generate(cert_context_t ctx, cert_key_t key,
cert_buffer **out, cert_buffer *signature,
cert_buffer *ident_pubk, const char *cn,
const char *validFrom, const char *validTo,
cert_format_t format);
/**
* Parse a certificate to extract the custom extension and public key
*
* @param cert Buffer containing the certificate data
* @param format Certificate format
* @param cert_parsed Pointer to a structure containing the parsed
* certificate data.
*
* @return CERT_SUCCESS on successful execution, an error code otherwise
*/
cert_error_t cert_parse(cert_buffer *cert, cert_format_t format,
cert_parsed **out);
/**
* Free all resources associated with a CTR-DRBG context
*
* @param ctx The context to free
*/
void cert_free_ctr_drbg(cert_context_t ctx);
/**
* Free memory allocated for a parsed certificate
*
* @param cert Pointer to the parsed certificate structure
*/
void cert_free_parsed(cert_parsed *cert);
/**
* Free all resources associated with a key
*
* @param key The key to free
*/
void cert_free_key(cert_key_t key);
/**
* Free memory allocated for a buffer
*
* @param buffer Pointer to the buffer structure
*/
void cert_free_buffer(cert_buffer *buffer);
/**
* Create a X.509 certificate request
*
* @param cn Domain for which we're requesting the certificate
* @param key Public key of the requesting client
* @param csr_buffer Pointer to the buffer that will be set to the CSR in DER format
*
* @return CERT_SUCCESS on successful execution, an error code otherwise
*/
cert_error_t cert_signing_req(const char *cn, cert_key_t key, cert_buffer **csr_buffer);
#endif /* LIBP2P_CERT_H */
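Taken together, the declarations above describe one lifecycle: seed a CTR-DRBG, generate an EC key, emit a self-signed certificate carrying the libp2p extension, parse it back, and optionally build a CSR. The sketch below strings these calls together as an illustration only; the include path "libp2p_cert.h", the seed string, the placeholder signature/ident_pubk buffers, and the ASN.1-style validity strings are assumptions not fixed by the header, and error handling is reduced to early exits.

#include <stdio.h>
#include <string.h>
#include "libp2p_cert.h" /* assumed header filename */

int main(void) {
  cert_context_t ctx = NULL;
  cert_key_t key = NULL;
  cert_buffer *cert = NULL;
  cert_buffer *csr = NULL;
  cert_parsed *parsed = NULL;
  const char *seed = "illustrative-entropy-seed"; /* assumption: caller-chosen seed */
  int rc = 1;

  /* Placeholder signature / identity-pubkey buffers; a real caller fills these
     from its libp2p identity key (the header does not mandate their sizes). */
  unsigned char sig_bytes[64] = {0};
  unsigned char id_bytes[32] = {0};
  cert_buffer signature = {sig_bytes, sizeof(sig_bytes)};
  cert_buffer ident_pubk = {id_bytes, sizeof(id_bytes)};

  /* 1. Seed the CTR-DRBG used by every later operation. */
  if (cert_init_drbg(seed, strlen(seed), &ctx) != CERT_SUCCESS)
    return 1;

  /* 2. Generate the EC key the certificate (and CSR) will carry. */
  if (cert_generate_key(ctx, &key) != CERT_SUCCESS)
    goto done;

  /* 3. Emit a self-signed certificate with the libp2p extension.
        The validity strings use an assumed ASN.1-style time format. */
  if (cert_generate(ctx, key, &cert, &signature, &ident_pubk, "example.libp2p",
                    "20250101000000Z", "20260101000000Z",
                    CERT_FORMAT_DER) != CERT_SUCCESS)
    goto done;

  /* 4. Round-trip: parse the DER certificate and inspect a few fields. */
  if (cert_parse(cert, CERT_FORMAT_DER, &parsed) != CERT_SUCCESS)
    goto done;
  printf("valid from %s to %s, cert pubkey %zu bytes\n", parsed->valid_from,
         parsed->valid_to, parsed->cert_pubkey->len);

  /* 5. Optionally, build a DER-encoded CSR for the same key. */
  if (cert_signing_req("node.example.org", key, &csr) == CERT_SUCCESS)
    printf("CSR is %zu bytes\n", csr->len);

  rc = 0;

done:
  if (parsed) cert_free_parsed(parsed);
  if (csr) cert_free_buffer(csr);
  if (cert) cert_free_buffer(cert);
  if (key) cert_free_key(key);
  cert_free_ctr_drbg(ctx);
  return rc;
}

The ordering mirrors the ownership rules stated in the comments: every buffer, key, parsed structure, and DRBG context returned by the API is released with its matching cert_free_* call.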

Some files were not shown because too many files have changed in this diff