psychedelicious
c1cf01a038
tests: use dangerously_run_function_in_subprocess to fix configure_torch_cuda_allocator tests
2025-03-06 07:49:35 +11:00
psychedelicious
d193e4f02a
feat(app): log warning instead of raising if PYTORCH_CUDA_ALLOC_CONF is already set
2025-03-06 07:49:35 +11:00
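The three commits above describe a configure_torch_cuda_allocator helper that takes a required logger and warns (rather than raising) when PYTORCH_CUDA_ALLOC_CONF is already set. A minimal sketch of that behavior, assuming an env-var-based approach; the actual implementation is not shown here:

```python
import logging
import os
import sys


def configure_torch_cuda_allocator(pytorch_cuda_alloc_conf: str, logger: logging.Logger) -> None:
    # Sketch only: the allocator config is read once when torch is imported,
    # so this must run before any torch import.
    if "torch" in sys.modules:
        raise RuntimeError("configure_torch_cuda_allocator() must be called before torch is imported.")

    existing = os.environ.get("PYTORCH_CUDA_ALLOC_CONF")
    if existing is not None:
        # Warn instead of raising when the env var is already set (see the commit above).
        logger.warning(f"PYTORCH_CUDA_ALLOC_CONF is already set to '{existing}'; leaving it unchanged.")
        return

    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = pytorch_cuda_alloc_conf
    logger.info(f"PYTORCH_CUDA_ALLOC_CONF set to '{pytorch_cuda_alloc_conf}'.")
```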
psychedelicious
ec493e30ee
feat(app): make logger a required arg in configure_torch_cuda_allocator
2025-03-06 07:49:35 +11:00
Jonathan
081b931edf
Update util.py
...
Changed string to a literal
2025-03-05 14:39:17 +11:00
Jonathan
8cd7035494
Fixed validation of begin and end steps
...
Fixed logic to match the error message - begin should be <= end.
2025-03-05 14:39:17 +11:00
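For context, a sketch of the corrected begin/end validation described above, assuming a Pydantic model with illustrative begin_step_percent/end_step_percent fields (the real invocation's fields may differ):

```python
from pydantic import BaseModel, model_validator


class StepRange(BaseModel):
    # Illustrative field names, not the actual invocation definition.
    begin_step_percent: float = 0.0
    end_step_percent: float = 1.0

    @model_validator(mode="after")
    def validate_begin_end(self) -> "StepRange":
        # Match the error message: begin must be <= end.
        if self.begin_step_percent > self.end_step_percent:
            raise ValueError("Begin step percent must be less than or equal to end step percent")
        return self
```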
psychedelicious
c2a3c66e49
feat(app): avoid nested cursors in workflow_records service
2025-03-04 08:33:42 +11:00
psychedelicious
c0a0d20935
feat(app): avoid nested cursors in style_preset_records service
2025-03-04 08:33:42 +11:00
psychedelicious
028d8d8ead
feat(app): avoid nested cursors in model_records service
2025-03-04 08:33:42 +11:00
psychedelicious
657095d2e2
feat(app): avoid nested cursors in image_records service
2025-03-04 08:33:42 +11:00
psychedelicious
1c47dc997e
feat(app): avoid nested cursors in board_records service
2025-03-04 08:33:42 +11:00
psychedelicious
a3de6b6165
feat(app): avoid nested cursors in board_image_records service
2025-03-04 08:33:42 +11:00
psychedelicious
e57f0ff055
experiment(app): avoid nested cursors in session_queue service
...
SQLite cursors are meant to be lightweight and not reused. For whatever reason, we reuse one per service for the entire app lifecycle.
This can cause issues where a cursor is used twice at the same time in different transactions.
This experiment makes the session queue use a fresh cursor for each method, hopefully fixing the issue.
2025-03-04 08:33:42 +11:00
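A sketch of the fresh-cursor-per-method pattern described in the commit above, using the standard sqlite3 module (class and table names are illustrative, not the actual service code):

```python
import sqlite3


class SessionQueueRecords:
    """Illustrative sketch only - not the actual service code."""

    def __init__(self, conn: sqlite3.Connection) -> None:
        self._conn = conn

    def count(self) -> int:
        # Open a fresh, short-lived cursor for this method instead of reusing
        # a single cursor for the app's entire lifetime.
        cursor = self._conn.cursor()
        try:
            cursor.execute("SELECT COUNT(*) FROM session_queue;")
            return cursor.fetchone()[0]
        finally:
            cursor.close()
```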
psychedelicious
7399909029
feat(app): use simpler syntax for enqueue_batch threaded execution
2025-03-03 14:40:48 +11:00
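The "simpler syntax" for threaded execution of a blocking enqueue from async code could look like the following sketch (function names and the batch shape are assumptions, not the real API):

```python
import asyncio


def enqueue_batch_blocking(batch: dict) -> dict:
    # Stand-in for the blocking DB work performed when enqueuing a batch.
    return {"enqueued": len(batch.get("items", []))}


async def enqueue_batch(batch: dict) -> dict:
    # Run the blocking work in a worker thread so the event loop stays responsive.
    return await asyncio.to_thread(enqueue_batch_blocking, batch)


if __name__ == "__main__":
    print(asyncio.run(enqueue_batch({"items": [1, 2, 3]})))
```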
psychedelicious
c8aaf5e76b
tidy(app): remove extraneous class attr type annotations
2025-03-03 14:40:48 +11:00
psychedelicious
0cdf7a7048
Revert "experiment(app): simulate very long enqueue operations (15s)"
...
This reverts commit eb6a323d0b70004732de493d6530e08eb5ca8acf.
2025-03-03 14:40:48 +11:00
psychedelicious
41985487d3
Revert "experiment(app): make socketio server ping every 1s"
...
This reverts commit ddf00bf260167092a3bc2afdce1244c6b116ebfb.
2025-03-03 14:40:48 +11:00
psychedelicious
14f9d5b6bc
experiment(app): remove db locking logic
...
Rely on WAL mode and the busy timeout.
Also changed:
- Remove extraneous rollbacks where we were only doing a `SELECT`
- Remove try/except blocks that became unnecessary once those rollbacks were removed
2025-03-03 14:40:48 +11:00
psychedelicious
eec4bdb038
experiment(app): enable WAL mode and set busy_timeout
...
This allows for read and write concurrency without using a global mutex. Operations may still fail if they take longer than the busy timeout (5s).
If we get a database lock error after waiting 5s for an operation, we have a problem. So, I think it's actually better to use a busy timeout instead of a global mutex.
Alternatively, we could add a timeout to the global mutex.
2025-03-03 14:40:48 +11:00
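A sketch of how WAL mode and the 5s busy timeout described above can be applied to a SQLite connection (the helper name is illustrative):

```python
import sqlite3


def open_connection(db_path: str) -> sqlite3.Connection:
    conn = sqlite3.connect(db_path, check_same_thread=False)
    # WAL mode lets readers run concurrently with a single writer.
    conn.execute("PRAGMA journal_mode=WAL;")
    # On a locked database, retry for up to 5000 ms before raising.
    conn.execute("PRAGMA busy_timeout=5000;")
    return conn
```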
psychedelicious
f3dd44044a
experiment(app): run enqueue_batch async in a thread
2025-03-03 14:40:48 +11:00
psychedelicious
61a22eb8cb
experiment(app): make socketio server ping every 1s
2025-03-03 14:40:48 +11:00
psychedelicious
03ca83fe13
experiment(app): simulate very long enqueue operations (15s)
2025-03-03 14:40:48 +11:00
Ryan Dick
b9f9d1c152
Increase the VAE decode memory estimates to account for memory reserved by the memory allocator but not allocated, and to generally be more conservative.
2025-02-28 17:18:57 -05:00
Ryan Dick
0e632dbc5c
(minor) typo
2025-02-28 21:39:09 +00:00
Ryan Dick
a36a627f83
Switch from use_cuda_malloc flag to a general pytorch_cuda_alloc_conf config field that allows full customization of the CUDA allocator.
2025-02-28 21:39:09 +00:00
Ryan Dick
b31c71f302
Simplify is_torch_cuda_malloc_enabled() implementation and add unit tests.
2025-02-28 21:39:09 +00:00
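One plausible way to implement such a check, assuming torch's allocator-backend query is used (a sketch, not necessarily the actual implementation):

```python
import torch


def is_torch_cuda_malloc_enabled() -> bool:
    # torch reports the active CUDA allocator backend as "native" or "cudaMallocAsync".
    if not torch.cuda.is_available():
        return False
    return torch.cuda.get_allocator_backend() == "cudaMallocAsync"
```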
Ryan Dick
5302d4890f
Add use_cuda_malloc config option.
2025-02-28 21:39:09 +00:00
Ryan Dick
766b752572
Add utils for configuring the torch CUDA allocator.
2025-02-28 21:39:09 +00:00
Ryan Dick
1e2c7c51b5
Move load_custom_nodes() to run_app() entrypoint.
2025-02-28 20:54:26 +00:00
Ryan Dick
da2b6815ac
Make InvokeAILogger an inline import in startup_utils.py in response to review comment.
2025-02-28 20:10:24 +00:00
Ryan Dick
68d14de3ee
Split run_app.py and api_app.py so that api_app.py is more narrowly responsible for just initializing the FastAPI app. This also gives clearer control over the order of the initialization steps, which will be important as we add planned torch configurations that must be applied before torch is imported.
2025-02-28 20:10:24 +00:00
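A sketch of the kind of entrypoint ordering this split enables: torch-related configuration runs before anything that imports torch, then the FastAPI app is built and served. All names and values below are illustrative, not the real run_app.py:

```python
import logging
import os


def run_app() -> None:
    """Illustrative entrypoint ordering only."""
    logger = logging.getLogger("InvokeAI")

    # 1. Torch-level configuration must happen before anything imports torch.
    os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "backend:native")
    logger.info("CUDA allocator configured")

    # 2. Only now import and build the FastAPI app (these imports may pull in torch).
    import uvicorn
    from fastapi import FastAPI

    app = FastAPI()

    # 3. Remaining startup steps (custom nodes, mime types, monkeypatches) would run here,
    #    then the server is started.
    uvicorn.run(app, host="127.0.0.1", port=9090)


if __name__ == "__main__":
    run_app()
```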
Ryan Dick
38991ffc35
Add register_mime_types() startup util.
2025-02-28 20:10:24 +00:00
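On some platforms the OS registry reports incorrect MIME types for static assets, so a startup util along these lines is a common pattern (a sketch; the actual types registered are an assumption):

```python
import mimetypes


def register_mime_types() -> None:
    # Ensure JS and JSON assets are served with correct MIME types regardless of
    # what the OS registry claims.
    mimetypes.add_type("application/javascript", ".js")
    mimetypes.add_type("application/json", ".json")
```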
Ryan Dick
f345c0fabc
Create an apply_monkeypatches() startup util.
2025-02-28 20:10:24 +00:00
Ryan Dick
ca23b5337e
Simplify port selection logic to avoid the need for a global port variable.
2025-02-28 20:10:19 +00:00
Ryan Dick
35910d3952
Move check_cudnn() and jurigged setup to startup_utils.py.
2025-02-28 20:08:53 +00:00
Ryan Dick
6f1dcf385b
Move find_port() util to its own file.
2025-02-28 20:08:53 +00:00
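A typical shape for a find_port() helper, probing for a free port with the standard socket module (the real util's signature and default port are assumptions):

```python
import socket


def find_port(preferred_port: int = 9090) -> int:
    """Return preferred_port if it is free, otherwise the next free port above it."""
    port = preferred_port
    while True:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            if s.connect_ex(("127.0.0.1", port)) != 0:
                # Nothing answered on this port, so treat it as available.
                return port
        port += 1
```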
skunkworxdark
36a3fba8cb
Update metadata_linked.py
...
Fix input type of default_value on MetadataToFloatInvocation
2025-02-27 04:55:29 -05:00
psychedelicious
4e8ce4abab
feat(app): more detailed messages when loading custom nodes
2025-02-27 12:39:37 +11:00
psychedelicious
d40f2fa37c
feat(app): improved custom node loading ordering
...
Previously, custom node loading occurred _during module imports_. A consequence of this was that when a custom node import failed (e.g. its type clobbered an existing node's type), the app failed to start up.
In fact, any time we import basically anything from the app, we trigger custom node imports! Not good.
This logic is now in its own function, called as the API app starts up.
If a custom node load fails for any reason, it no longer prevents the app from starting up.
One other bonus we get from this is that we can now ensure custom nodes are loaded _after_ core nodes.
Any clobbering that may occur while loading custom nodes is now guaranteed to be a custom node clobbering a core node's type - and not the other way round.
2025-02-27 12:39:37 +11:00
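A sketch of the loading pattern described above: custom nodes are imported by an explicit function at startup, after core nodes, and a failed import is logged instead of crashing the app. Paths and names are illustrative:

```python
import importlib.util
import logging
from pathlib import Path

logger = logging.getLogger("InvokeAI")


def load_custom_nodes(custom_nodes_dir: Path) -> None:
    """Illustrative sketch - called explicitly at API startup, after core nodes are registered."""
    for init_file in custom_nodes_dir.glob("*/__init__.py"):
        module_name = f"custom_nodes.{init_file.parent.name}"
        try:
            spec = importlib.util.spec_from_file_location(module_name, init_file)
            assert spec is not None and spec.loader is not None
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)
            logger.info(f"Loaded custom node pack: {init_file.parent.name}")
        except Exception:
            # A broken custom node (e.g. one that clobbers a core node's type)
            # no longer prevents the app from starting.
            logger.exception(f"Failed to load custom node pack: {init_file.parent.name}")
```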
psychedelicious
933f4f6857
feat(app): improve error messages when registering invocations whose types clobber existing ones
2025-02-27 12:39:37 +11:00
psychedelicious
f499b2db7b
feat(app): add get_invocation_for_type method to BaseInvocation
2025-02-27 12:39:37 +11:00
psychedelicious
706aaf7460
tidy(app): remove unused variable
2025-02-27 12:39:37 +11:00
psychedelicious
4a706d00bb
feat(app): use generic for append_list util
2025-02-27 12:28:00 +11:00
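What a generic version of such an append helper might look like; the signature is a guess, shown only to illustrate the use of a TypeVar:

```python
from typing import TypeVar

T = TypeVar("T")


def append_list(key: str, value: T, mapping: dict[str, list[T]]) -> None:
    # Append value to the list stored at key, creating the list if needed.
    # The TypeVar preserves the element type instead of widening to Any.
    mapping.setdefault(key, []).append(value)
```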
psychedelicious
3f0e3192f6
chore(app): mark metadata_field_extractor as deprecated
2025-02-27 12:28:00 +11:00
psychedelicious
c65147e2ff
feat(app): adopt @skunkworxdark's popular metadata nodes
...
Thank you!
2025-02-27 12:28:00 +11:00
psychedelicious
1c14e257a3
feat(app): do not pull PIL image from disk in image primitive
2025-02-27 12:19:27 +11:00
psychedelicious
559654f0ca
revert(app): get_all_board_image_names_for_board requires board_id
2025-02-27 10:19:13 +11:00
Eugene Brodsky
5d33874d58
fix(backend): ValuesToInsertTuple.retried_from_item_id should be an int
2025-02-27 07:35:41 +11:00
Mary Hipp
0063315139
fix(api): add new args to all uses of get_all_board_image_names_for_board
2025-02-26 15:05:40 -05:00
psychedelicious
047c643295
tidy(app): document & clean up batch prep logic
2025-02-26 21:04:23 +11:00
psychedelicious
d1e03aa1c5
tidy(app): remove timing debug logs
2025-02-26 21:04:23 +11:00