* Initial commit of L2 provider/storage contract integration
* l2 storage admin
* storage store
* update mockhub
* viem
* adjust import
* adjust import
* weave in config
* flesh out l2 test
* storage registry test
* strawman the storage tests
* null check
* switch type
* further updates, updated abi
* temporarily disabling test until anvil issue is sorted out
* more tests
* weird slowdown in node18 test on ci
* ok
* confirm iterator ordering
* rework timestamp into event message
* more coverage
* feat: Initial fname registry provider class
* flesh out fname registry provider functionality
* Update to match FIP
* Use new query params and gracefully handle errors
* feat: add support for verifying username proofs
* Validate server signatures before submitting username proofs (see the sketch after this list)
* Add changeset and default fname server url
* rolling up changes for links FIP
* typeToSetPostfix
* pr feedback
* consistency to avoid js quirks
* add versioning logic and update tests
* include version check in mergeMessages
* update protobuf comment to reflect nit
* added changeset
* code coverage
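To illustrate the server-signature validation noted above, here is a minimal sketch using viem's `verifyTypedData`. The EIP-712 domain, types, and signer address are illustrative assumptions, not the canonical values the fname server uses.

```ts
import { verifyTypedData } from "viem";

// Hypothetical EIP-712 payload shape; the real fname server domain/types may differ.
const domain = { name: "Farcaster name verification", version: "1", chainId: 1 } as const;
const types = {
  UserNameProof: [
    { name: "name", type: "string" },
    { name: "timestamp", type: "uint256" },
    { name: "owner", type: "address" },
  ],
} as const;

// Reject a proof before submission if the server's signature doesn't verify.
async function isValidServerSignature(
  proof: { name: string; timestamp: bigint; owner: `0x${string}` },
  signature: `0x${string}`,
  serverSigner: `0x${string}`, // assumed known fname server signer address
): Promise<boolean> {
  return verifyTypedData({
    address: serverSigner,
    domain,
    types,
    primaryType: "UserNameProof",
    message: proof,
    signature,
  });
}
```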
This dependency is only required for dev, not production. It's used by
Hubble when generating fake data, however, so move it to a production
dependency for Hubble.
Fetch data in batches and execute jobs concurrently. This reduces the
time estimate from ~4 hours to under 1 hour on my machine.
We could probably make this even faster by implementing a
`getAllMessages` endpoint on the hubs.
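The batching and concurrency described above might look roughly like this sketch; `fetchBatch`, `processJob`, and the batch/concurrency constants are hypothetical stand-ins for the real backfill code.

```ts
type Job = { fid: number };

// Hypothetical stand-ins for the real backfill functions.
declare function fetchBatch(fids: number[]): Promise<Job[]>;
declare function processJob(job: Job): Promise<void>;

const BATCH_SIZE = 100; // fids fetched per request
const CONCURRENCY = 10; // jobs processed in parallel

async function backfill(allFids: number[]) {
  for (let i = 0; i < allFids.length; i += BATCH_SIZE) {
    const jobs = await fetchBatch(allFids.slice(i, i + BATCH_SIZE));
    // Drain each batch with a bounded pool of concurrent workers.
    const queue = [...jobs];
    const workers = Array.from({ length: CONCURRENCY }, async () => {
      for (let job = queue.shift(); job !== undefined; job = queue.shift()) {
        await processJob(job);
      }
    });
    await Promise.all(workers);
  }
}
```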
Instead of requiring the user to run `yarn install` followed by `yarn
start`, we simply have them run `docker compose up` to start both the
app and Postgres together. This is much easier and means the only thing
they need to install is Docker. It also makes cleaning up
the example easier.
* feat: Support sync status rpc call
* Add sync status hubble command
* Fix generated file
* Changeset
* Fix isSyncing check
* Rename to status and report db stats as well (sketched after this list)
* Fix error
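A call to the new status endpoint might look like this sketch with `@farcaster/hub-nodejs`; the client constructor, request flag, and response field names are assumptions worth checking against the generated types.

```ts
import { getInsecureHubRpcClient } from "@farcaster/hub-nodejs";

const client = getInsecureHubRpcClient("127.0.0.1:2283"); // assumed local hub address

const info = await client.getInfo({ dbStats: true });
info.match(
  (response) => {
    console.log(`isSyncing: ${response.isSyncing}`);
    console.log(`dbStats: ${JSON.stringify(response.dbStats)}`);
  },
  (err) => console.error(err),
);
```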
We had a report of a user on a modern MacBook with 16GB of memory who
was running into heap allocation issues. This was likely due to the
excessive number of promises we were creating (as well as buffering all
data for each call).
Change the logic to fetch in batches of 1K records at most. This slows
down the initial sync but should reduce the likelihood that someone will
hit a memory limit.
We also specify a custom limit in the `yarn start` command so that when
we test locally we are using the same limit as everyone else.
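Bounded batching might look like this sketch: page through a fid's messages 1K at a time instead of buffering everything up front. The pagination shape (`pageSize`/`pageToken`) follows `@farcaster/hub-nodejs`, but treat the details as assumptions.

```ts
import { getInsecureHubRpcClient } from "@farcaster/hub-nodejs";

const MAX_PAGE_SIZE = 1_000; // never hold more than 1K records in memory per call

async function* castsByFid(fid: number) {
  const client = getInsecureHubRpcClient("127.0.0.1:2283"); // assumed local hub
  let pageToken: Uint8Array | undefined;
  do {
    const result = await client.getCastsByFid({ fid, pageSize: MAX_PAGE_SIZE, pageToken });
    if (result.isErr()) throw result.error;
    yield* result.value.messages;
    const next = result.value.nextPageToken;
    pageToken = next && next.length > 0 ? next : undefined;
  } while (pageToken);
}
```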
Provide a working end-to-end example of syncing data from hubs to a
Postgres database.
This should work with no additional dependencies besides what you
install with `yarn install` and Docker.
* feat: don't merge messages that would immediately be pruned (sketched after this list)
* Fix tests and minor cleanup/review comments
* Support all other stores
* Add changeset
* use prune iterator with keys available and expand cast store tests
* re-add prune iterator args
* Additional tests
* Add tests for other stores
---------
Co-authored-by: Sanjay Raveendran <sanjayprabhu@gmail.com>
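Conceptually, the early-prune check works like this sketch: if a store is already at its size limit and an incoming message would sort before everything it currently holds in prune order, merging it would only trigger an immediate prune, so reject it up front. The store shape and error message here are illustrative, not the actual Hubble internals.

```ts
// Illustrative store shape; not the actual Hubble store internals.
interface Store {
  sizeLimit: number;
  size(fid: number): Promise<number>;
  // Earliest message key in prune order (oldest first) for this fid.
  earliestKey(fid: number): Promise<Uint8Array | undefined>;
}

// Reject a merge that would be pruned immediately afterwards.
async function assertNotImmediatelyPrunable(store: Store, fid: number, messageKey: Uint8Array) {
  if ((await store.size(fid)) < store.sizeLimit) return;
  const earliest = await store.earliestKey(fid);
  if (earliest && Buffer.compare(Buffer.from(messageKey), Buffer.from(earliest)) <= 0) {
    throw new Error("message would be immediately pruned");
  }
}
```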
* refactor: Rename sync events flag for clarity
* feat: Add sync status to HubInfo RPC call
* feat: Add sync stats to getInfo rpc call
* re-patch hub-web to use default export as before
* changeset