## Summary

- Introduce a standalone Python roadmap validator with a CLI entry point, modular validation pipeline, and GitHub Actions wiring so roadmap content can be linted locally and in CI.
- Provide reusable validation primitives for path resolution, front-matter parsing, identity checks, task parsing, catalog enforcement, and template adherence.
- Document usage, configuration, and workflow behaviour to make the validator approachable for contributors.

## Validator Details

- **Core tooling**
  - Added the `tools/roadmap_validator/` package with `validate.py` (CLI), `validator.py` (orchestration), and helper modules (`tasks.py`, `identity.py`, `paths.py`, `constants.py`, `issues.py`).
  - The CLI supports directory/file targets, skips default filenames, emits GitHub annotations, and integrates optional substring filtering.
  - README explains features, environment variables, and development guidance.
- **Catalog and template enforcement**
  - `catalog.py` verifies each allowed content unit has `index.md` and `preview.md`, confirms roadmap entries appear under the proper quarter/area, and flags stale or missing links.
  - `templates.py` enforces template basics: front matter completeness, `## Description` ordering/content, template placeholder cleanup, and task section detection.
- **Task validation**
  - `tasks.py` checks required metadata (`owner`, `status`, `start-date`, `end-date`), date formats, populated descriptions/deliverables, TODO markers, tangible deliverable heuristics, and `fully-qualified-name` prefixes (an illustrative sketch of this style of check follows at the end of this description).
- **Workflow integration**
  - `.github/workflows/roadmap-validator.yml` runs the validator on pushes and manual dispatch, installs dependencies, scopes validation to changed Markdown, and surfaces findings via GitHub annotations.

## Existing Roadmap Updates

- Normalised 2025q4 commitments across Web, DST, QA, SC, and other units by filling in missing descriptions, deliverables, schedule notes, recurring task statuses, and maintenance tasks.
- Added tasks where absent, removed remaining template placeholders, aligned fully qualified names, and ensured roadmap files conform to the new validator checks.

## Testing

```bash
python tools/roadmap_validator/validate.py *2025q4*
```

CI: `Roadmap Validator` workflow runs automatically on pushes/dispatch.

---------

Co-authored-by: kaiserd <1684595+kaiserd@users.noreply.github.com>
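For context, here is a minimal sketch of the kind of task-metadata check described under **Task validation**. The function, class, and field names are illustrative assumptions, not the actual `tasks.py` API:

```python
# Illustrative sketch only; not the actual tools/roadmap_validator implementation.
from __future__ import annotations

import re
from dataclasses import dataclass

REQUIRED_FIELDS = ("owner", "status", "start-date", "end-date")
DATE_RE = re.compile(r"^\d{4}/\d{2}/\d{2}$")  # roadmap dates use the YYYY/MM/DD format


@dataclass
class TaskIssue:
    task: str
    message: str


def check_task_metadata(task_name: str, metadata: dict[str, str],
                        expected_fqn_prefix: str) -> list[TaskIssue]:
    """Flag missing metadata, malformed dates, and fully-qualified-name mismatches."""
    issues: list[TaskIssue] = []

    # Required fields must be present and non-empty.
    for key in REQUIRED_FIELDS:
        if not metadata.get(key, "").strip():
            issues.append(TaskIssue(task_name, f"missing required field '{key}'"))

    # Dates must follow the YYYY/MM/DD convention used in roadmap files.
    for key in ("start-date", "end-date"):
        value = metadata.get(key, "")
        if value and not DATE_RE.fullmatch(value):
            issues.append(TaskIssue(task_name, f"'{key}' is not YYYY/MM/DD: {value!r}"))

    # The task's fully qualified name must extend the unit's prefix.
    fqn = metadata.get("fully qualified name", "")
    if fqn and not fqn.startswith(expected_fqn_prefix):
        issues.append(TaskIssue(
            task_name, f"fully qualified name does not start with '{expected_fqn_prefix}'"))

    return issues


# Example: validate the metadata of one task parsed from a roadmap file.
issues = check_task_metadata(
    "Send one-to-one outage (private chats)",
    {
        "fully qualified name": "vac:dst:status:2025q4-status-evaluation:one-to-one-outage",
        "owner": "Alberto",
        "status": "not started",
        "start-date": "2025/10/01",
        "end-date": "2025/12/31",
    },
    expected_fqn_prefix="vac:dst:status:2025q4-status-evaluation",
)
```

Findings of this shape are the kind of output the workflow then surfaces as GitHub annotations.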
---
title: Status chat protocol benchmarks
tags:
draft: false
description: Realize chat protocol benchmarks of Status.
---
vac:dst:status:2025q4-status-evaluation
## Description

Realizing chat protocol benchmarks on status-go will allow the Status and Waku teams to perform non-regression performance tests for chat protocols. It will also make it possible to quantify improvements tackled by the Waku Chat team. The benchmarks will be done for communities, with emphasis on subscription and store performance, and for private chats, with emphasis on contact requests, 1-1 messages and group messages.
## Background

The following scenarios will help review what kinds of improvements and changes can be made to the chat protocols, and show whether more baseline benchmarks need to be measured in FURPS.
## Narratives

We will support the Conduit of Expertise narrative directly by analysing and evaluating Status features, both with regard to the features they offer today and with regard to how that compares to past behaviour.

Additionally, these efforts will contribute to the Premier Research destination narrative by strengthening our relationship with the Status-go team, thus increasing the reach and influence of the IFT and improving the chances that we successfully grow our ecosystem's products and collaborations, especially with the external parties we want to work with.
## Task list

### Send one-to-one outage (private chats)
- fully qualified name: vac:dst:status:2025q4-status-evaluation:one-to-one-outage
- owner: Alberto
- status: not started
- start-date: 2025/10/01
- end-date: 2025/12/31
#### Description

Notion Link

Testing app behaviour when something goes wrong. Best to start with what is easiest within the test framework: cut nodes from the network, stop and start nodes, suspend processes, etc.
Setup:
- 50 sending nodes
- 50 receiving nodes
- 50 idle nodes
- Mix of relay and light nodes; results should be reported per node type.
- Each sending node has sent a contact request to a receiving node, which accepted it
Test:
- Sending nodes send one text message every 10 seconds
- Test runs for 2 hours or so.
- Sending and receiving nodes lose access to the network, or get killed and restarted
- Duration of the “outage” (whatever its chosen form) should range from 1 second to 1 hour; probably a global setting, so that we can look at bandwidth usage in relation to outage duration and see whether specific durations lead to an explosion of messages, or to very poor UX (e.g. a 30 min outage after which messages take 60 min to eventually arrive due to a poor backoff strategy)
- Metrics confirming that all messages are eventually received (100%) are important
- and metrics as specified in this page (an illustrative parameterisation of this scenario is sketched after this list).
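Purely illustrative: one way the setup and test parameters above could be captured in a test-framework configuration. The class and field names are hypothetical and not part of any existing tooling.

```python
# Hypothetical sketch of the one-to-one outage scenario parameters described above.
from __future__ import annotations

from dataclasses import dataclass


@dataclass(frozen=True)
class OneToOneOutageScenario:
    senders: int = 50                    # sending nodes
    receivers: int = 50                  # receiving nodes (each accepted a contact request)
    idle: int = 50                       # idle nodes
    node_types: tuple[str, ...] = ("relay", "light")  # results reported per node type
    message_interval_s: int = 10         # one text message every 10 seconds
    run_duration_s: int = 2 * 60 * 60    # test runs for roughly two hours
    outage_duration_s: int = 60          # varied globally from 1 second up to 1 hour
    require_full_delivery: bool = True   # 100% of messages must eventually be received


# Example sweep over outage durations, from 1 second to 1 hour.
OUTAGE_SWEEP_S = [1, 10, 60, 10 * 60, 30 * 60, 60 * 60]
scenarios = [OneToOneOutageScenario(outage_duration_s=d) for d in OUTAGE_SWEEP_S]
```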
Schedule note: Dates reflect quarter bounds; update when actual timing is known.
#### Deliverables
- PRs/Issues/Docs/Reports
### Status-backend private chats - send group outage
- fully qualified name: vac:dst:status:2025q4-status-evaluation:group-outage
- owner: Alberto
- status: not started
- start-date: 2025/10/01
- end-date: 2025/12/31
#### Description

Notion Link

Same as @Send one-to-one message - Network outage, but for private group chats.
Schedule note: Dates reflect quarter bounds; update when actual timing is known.
#### Deliverables
- PRs/Issues/Docs/Reports
### Chat protocol benchmarks followup
- fully qualified name: vac:dst:status:2025q4-status-evaluation:chat-protocol-benchmarks-followup
- owner: Alberto
- status: 60%
- start-date: 2025/10/06
- end-date: 2025/10/24
#### Description

Using the fix to the discovery process for relay nodes, repeat the benchmarks with light nodes added to the same chat protocol benchmark scenarios.