* fix: resolves conflicts for links_fid_target_fid_type_unique
* delete the duplicate rows created by following and unfollowing back and forth.
* keep the soft deletes but ensure we don't get multiple rows for a follow relation (sketched below).
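A minimal sketch of the idea, assuming the table is `links` with an
`id` primary key, columns matching the constraint (`fid`, `target_fid`,
`type`), and a `deleted_at` soft-delete column; none of these names are
confirmed beyond the constraint itself:

```ts
import { Client } from "pg";

// Hypothetical one-time cleanup: keep only the newest row for each
// (fid, target_fid, type) follow relation.
async function deleteDuplicateLinks(client: Client): Promise<void> {
  await client.query(`
    DELETE FROM links a
    USING links b
    WHERE a.fid = b.fid
      AND a.target_fid = b.target_fid
      AND a.type = b.type
      AND a.id < b.id
  `);
}

// Going forward, upsert on the unique constraint so follow/unfollow
// churn updates the soft-delete timestamp instead of adding rows.
async function upsertLink(
  client: Client,
  link: { fid: number; targetFid: number; type: string; deletedAt: Date | null },
): Promise<void> {
  await client.query(
    `INSERT INTO links (fid, target_fid, type, deleted_at)
     VALUES ($1, $2, $3, $4)
     ON CONFLICT (fid, target_fid, type)
     DO UPDATE SET deleted_at = EXCLUDED.deleted_at`,
    [link.fid, link.targetFid, link.type, link.deletedAt],
  );
}
```

An unfollow sets `deleted_at` rather than removing the row, so history
is preserved while the constraint guarantees one row per relation.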
Fetch data in batches and execute jobs concurrently. This reduces the
time estimate from ~4 hours to under 1 hour on my machine.
We could probably make this even faster by implementing a
`getAllMessages` endpoint on the hubs.
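Roughly the pattern, as a sketch: `fetchPage` and `processBatch` are
hypothetical stand-ins for the hub RPC call and the Postgres writes,
and the concurrency cap is illustrative.

```ts
// Sketch of batched fetching with bounded concurrency. fetchPage and
// processBatch are hypothetical stand-ins, not names from the codebase.
type Page = { messages: unknown[]; nextPageToken?: Uint8Array };

declare function fetchPage(pageToken?: Uint8Array): Promise<Page>;
declare function processBatch(messages: unknown[]): Promise<void>;

async function syncAll(concurrency = 4): Promise<void> {
  let pageToken: Uint8Array | undefined;
  const inFlight = new Set<Promise<void>>();

  do {
    const page = await fetchPage(pageToken);
    pageToken = page.nextPageToken;

    // Start processing this batch without waiting for it to finish...
    const job: Promise<void> = processBatch(page.messages).finally(() =>
      inFlight.delete(job),
    );
    inFlight.add(job);

    // ...but cap concurrent jobs so fetching overlaps with writing
    // instead of racing ahead unboundedly.
    if (inFlight.size >= concurrency) {
      await Promise.race(inFlight);
    }
  } while (pageToken && pageToken.length > 0);

  await Promise.all(inFlight);
}
```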
Instead of requiring the user to run `yarn install` followed by `yarn
start`, we have them run `docker compose up` to start both the app and
Postgres together. This is simpler and means they don't need to install
anything besides Docker. It also makes cleaning up the example easier.
We had a report of one user on a modern MacBook with 16GB of memory who
was running into heap allocation issues. This was likely due to the
excessive number of promises we were creating (as well as buffering all
data for each call).
Change the logic to fetch in batches of 1K records at most. This slows
down the initial sync but should reduce the likelihood that someone will
hit a memory limit.
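The shape of the change, sketched with a hypothetical `MAX_PAGE_SIZE`
setting and an async generator so at most one page of records is
buffered at a time (names are illustrative, not from the actual code):

```ts
// Illustrative only: MAX_PAGE_SIZE and fetchPage are assumptions, not
// names from the actual code.
const MAX_PAGE_SIZE = Number(process.env.MAX_PAGE_SIZE ?? 1000);

type Page = { messages: unknown[]; nextPageToken?: Uint8Array };

async function* pages(
  fetchPage: (token: Uint8Array | undefined, size: number) => Promise<Page>,
) {
  let token: Uint8Array | undefined;
  do {
    const page = await fetchPage(token, MAX_PAGE_SIZE);
    token = page.nextPageToken;
    // Yield one bounded batch at a time instead of accumulating
    // everything before processing.
    yield page.messages;
  } while (token && token.length > 0);
}

// Usage: process each batch before the next one is fetched.
// for await (const batch of pages(fetchLinksPage)) await writeBatch(batch);
```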
We also specify a custom limit in the `yarn start` command so that when
we test locally we are using the same limit as everyone else.
Provide a working end-to-end example of syncing data from hubs to a
Postgres database.
It should work with no dependencies beyond what `yarn install`
provides, plus Docker.