# How to monitor with Prometheus

## Prerequisites

- [Prometheus](https://prometheus.io/) installed
- [Grafana](https://grafana.com/) installed (for building dashboards)

## Start scraping services

To start scraping with Prometheus, create or edit the Prometheus config file and add all the services you want to scrape, for example:

```yaml
global:
  scrape_interval:     15s # By default, scrape targets every 15 seconds.

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# A scrape configuration listing the endpoints to scrape:
# here, Prometheus itself and the beacon-chain service.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s

    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'beacon-chain'
    static_configs:
      - targets: ['localhost:8080']
```

After creating or updating the Prometheus config file, run Prometheus with it:

```sh
$ prometheus --config.file=your-prometheus-file.yml
```

Now you can add the Prometheus server as a data source in Grafana and start building your dashboards.

## How to add additional metrics

The prometheus service exports metrics from the `DefaultRegisterer`, so you only need to register your metrics with the `prometheus` or `promauto` libraries. To learn more, see the Prometheus [Go application guide](https://prometheus.io/docs/guides/go-application/).
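
As a minimal sketch (the metric name and call site below are made up for illustration), registering a counter through `promauto` is all that is needed for it to be exposed by the service:

```go
package main

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// exampleRequestsTotal is a hypothetical counter used only for illustration.
// promauto registers it with prometheus.DefaultRegisterer, which is the
// registry the prometheus service exports, so no extra wiring is required.
var exampleRequestsTotal = promauto.NewCounter(prometheus.CounterOpts{
	Name: "example_requests_total",
	Help: "Total number of example requests handled.",
})

func main() {
	// Increment the counter wherever the event you want to measure happens.
	exampleRequestsTotal.Inc()
}
```

Once registered, the new series appears on the service's metrics endpoint and can be scraped and queried like any other metric.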