- Replace the `get_counts` method with `get_tag_counts_with_filter`, which returns the counts for a list of tags, filtered by a list of selected tags
- Update the `get_many` logic to apply tag filtering with AND logic, matching the new `get_tag_counts_with_filter` method (see the sketch after this list)
- Update workflow library router
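
A minimal sketch of the AND-logic tag filtering, assuming a hypothetical SQLite `workflow_library` table with a JSON `tags` column and a JSON `workflow` column; the table, columns, and function signature are illustrative, not the actual schema:

```python
import json
import sqlite3
from typing import Any


def get_many(conn: sqlite3.Connection, selected_tags: list[str]) -> list[dict[str, Any]]:
    """Fetch workflows that carry every one of the selected tags (AND logic)."""
    conditions = ["1=1"]
    params: list[str] = []
    for tag in selected_tags:
        # Each selected tag adds another AND condition, so a workflow must
        # match all of them to be returned.
        conditions.append("tags LIKE ?")
        params.append(f'%"{tag}"%')
    rows = conn.execute(
        f"SELECT workflow FROM workflow_library WHERE {' AND '.join(conditions)};",
        params,
    ).fetchall()
    return [json.loads(row[0]) for row in rows]
```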
This follows the same pattern as IP Adapter with its CLIP Vision model. The SigLIP model is unlikely to ever change and we don't want to force the user to select it anywhere. Hardcoding it is safe and makes the UX much nicer.
The alternative is a model dropdown that will likely only ever have one valid choice in it.
- We don't need to copy the init file. Just crawl the custom nodes dir for modules and import them all (sketched after this list). Dunno why I didn't do this initially.
- Pass the logger in as an arg. There was a race condition: if we got the logger directly in the `load_custom_nodes` function, the config would not have been fully loaded yet and we'd end up with the wrong custom nodes path!
- Remove permissions-setting logic, I do not believe it is relevant for custom nodes
- Minor cleanup of the utility
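
A minimal sketch of the crawl-and-import approach, with the logger passed in as an argument; the directory layout handling here is an assumption and the real implementation may differ:

```python
import importlib.util
import sys
from logging import Logger
from pathlib import Path


def load_custom_nodes(custom_nodes_path: Path, logger: Logger) -> None:
    """Import every module found in the custom nodes directory.

    The logger is passed in explicitly so it is never constructed before the
    app config (and thus the custom nodes path) has been fully resolved.
    """
    loaded = 0
    for candidate in custom_nodes_path.iterdir():
        # Accept either a single .py file or a package directory with __init__.py.
        if candidate.is_file() and candidate.suffix == ".py":
            module_name, module_path = candidate.stem, candidate
        elif candidate.is_dir() and (candidate / "__init__.py").exists():
            module_name, module_path = candidate.name, candidate / "__init__.py"
        else:
            continue

        spec = importlib.util.spec_from_file_location(module_name, module_path)
        if spec is None or spec.loader is None:
            logger.warning(f"Could not load custom node module: {candidate}")
            continue
        module = importlib.util.module_from_spec(spec)
        sys.modules[module_name] = module
        spec.loader.exec_module(module)
        loaded += 1

    logger.info(f"Loaded {loaded} custom node module(s) from {custom_nodes_path}")
```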
There's a pydantic quirk that causes the graphs to erroneously fail validation. Details are in the comments; it's not a high priority to fix, but we should figure it out someday.
This method simply sets the `opened_at` attribute to the current time.
Previously, `opened_at` was set when calling `get`, but that is not correct: we `get` workflows often, even when not opening them, so this needs to be a separate operation.
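
A minimal sketch of the new method, assuming a SQLite-backed store; the `workflow_library` table and column names here are hypothetical:

```python
import sqlite3
from datetime import datetime, timezone


def update_opened_at(conn: sqlite3.Connection, workflow_id: str) -> None:
    """Record that a workflow was opened, without touching any other fields."""
    conn.execute(
        "UPDATE workflow_library SET opened_at = ? WHERE workflow_id = ?;",
        (datetime.now(timezone.utc).isoformat(), workflow_id),
    )
    conn.commit()
```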
Get the counts of workflows for the given tags and/or categories. This is a separate method because `get_many` deserializes all matching workflows, which is unnecessary for this use case.
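
A sketch of the counts-only path under the same hypothetical schema: only `COUNT(*)` is selected, so no workflow JSON is fetched or deserialized. Names and filtering details are illustrative:

```python
import sqlite3


def get_tag_counts_with_filter(
    conn: sqlite3.Connection, tags_to_count: list[str], selected_tags: list[str]
) -> dict[str, int]:
    """Count workflows per tag, restricted to those carrying all selected tags."""
    counts: dict[str, int] = {}
    for tag in tags_to_count:
        # AND logic: the counted tag plus every selected tag must be present.
        required = [tag, *selected_tags]
        where = " AND ".join("tags LIKE ?" for _ in required)
        params = [f'%"{t}"%' for t in required]
        row = conn.execute(
            f"SELECT COUNT(*) FROM workflow_library WHERE {where};", params
        ).fetchone()
        counts[tag] = row[0]
    return counts
```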
It was only by misunderstanding the pydantic API that this field was typed as optional. Workflows must _always_ have a category, and indeed they do.
Fixing this makes the generated types in the frontend easier to work with.
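
A minimal sketch of the field change; the model and enum names are hypothetical stand-ins for the real workflow record models:

```python
from enum import Enum
from pydantic import BaseModel


class WorkflowCategory(str, Enum):
    user = "user"
    default = "default"


class WorkflowRecordDTO(BaseModel):
    workflow_id: str
    name: str
    # Before: `category: Optional[WorkflowCategory] = None`
    # After: the field is required, so the generated frontend type is
    # `WorkflowCategory` rather than `WorkflowCategory | null`.
    category: WorkflowCategory
```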
There was a bit of wonk with default workflows. On every app startup, we wiped them all out and recreated them with new IDs. This was a quick-and-dirty way to ensure default workflows were always in sync.
Unfortunately, it also meant default workflows were newly created entities on every app load. Any thumbnails associated with them were lost (because they got new IDs), and `updated_at` didn't work.
This change makes default workflows stable entities.
The workflows we bundle in the Python package (in JSON format) are still the source of truth for default workflows, but the startup logic that syncs them to the user DB is a bit smarter (see the sketch at the end of this section):
- All bundled workflows have an ID. It is prefixed with "default_" for clarity.
- Any default workflows in the user's DB that are not in the bundled default workflows are deleted from the DB.
- Any bundled default workflows that are not in the user's DB are added to the DB.
- If a default workflow in the user's DB does not match the content of its corresponding bundled workflow, it is updated in the DB.
The end result is that default workflows are still kept in sync for the user, but they don't change their identity.
We can now add thumbnails to default workflows, and sorting by `updated_at` is meaningful.
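
A sketch of the sync-on-startup logic, assuming a hypothetical store API (`get_default_workflow_ids`, `get`, `create`, `update`, `delete`) and bundled workflows loaded as dicts with stable `default_`-prefixed IDs:

```python
from typing import Any, Protocol


class WorkflowStore(Protocol):
    def get_default_workflow_ids(self) -> set[str]: ...
    def get(self, workflow_id: str) -> dict[str, Any]: ...
    def create(self, workflow: dict[str, Any]) -> None: ...
    def update(self, workflow: dict[str, Any]) -> None: ...
    def delete(self, workflow_id: str) -> None: ...


def sync_default_workflows(store: WorkflowStore, bundled: list[dict[str, Any]]) -> None:
    """Keep the DB's default workflows in sync with the bundled ones, preserving IDs."""
    bundled_by_id = {w["id"]: w for w in bundled}
    db_ids = store.get_default_workflow_ids()

    # Default workflows in the DB that are no longer bundled are deleted.
    for stale_id in db_ids - bundled_by_id.keys():
        store.delete(stale_id)

    for workflow_id, workflow in bundled_by_id.items():
        if workflow_id not in db_ids:
            # Bundled workflows missing from the DB are added.
            store.create(workflow)
        elif store.get(workflow_id) != workflow:
            # Content drift: update in place, keeping the same identity.
            store.update(workflow)
```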