Fix flaky FLUX LoRA unit test (#6899)
## Summary

This PR attempts to fix a flaky FLUX LoRA unit test.

Example test failure: https://github.com/invoke-ai/InvokeAI/actions/runs/10958325913/job/30428299328?pr=6898

The failure _seems_ to be caused by a numerical precision error, but I haven't been able to reproduce it locally. I have loosened the tolerance of the offending comparison, and am fairly confident that this will resolve the issue.

## QA Instructions

No QA necessary.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
```diff
@@ -192,4 +192,6 @@ def test_apply_lora_sidecar_patches_matches_apply_lora_patches(num_layers: int):
     with LoRAPatcher.apply_lora_sidecar_patches(model=model, patches=lora_models, prefix="", dtype=dtype):
         output_lora_sidecar_patches = model(input)
 
-    assert torch.allclose(output_lora_patches, output_lora_sidecar_patches)
+    # Note: We set atol=1e-5 because the test failed occasionally with the default atol=1e-8. Slight numerical
+    # differences are tolerable and expected due to the difference between sidecar vs. patching.
+    assert torch.allclose(output_lora_patches, output_lora_sidecar_patches, atol=1e-5)
```
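For context, here is a minimal standalone sketch (not taken from the InvokeAI test suite; the tensor shapes, seed, and scaling are made up for illustration) of why the two code paths can disagree by more than the default `atol=1e-8`: folding the low-rank LoRA delta into the base weight before the matmul and applying the delta as a separate sidecar matmul are algebraically equivalent, but they accumulate floating-point rounding error differently.

```python
# Illustrative sketch only, not code from the repository.
import torch

torch.manual_seed(0)

x = torch.randn(8, 64)          # made-up input activations
w = torch.randn(32, 64)         # made-up base layer weight
a = torch.randn(4, 64) * 0.01   # made-up LoRA down-projection
b = torch.randn(32, 4) * 0.01   # made-up LoRA up-projection

patched = x @ (w + b @ a).T          # "patched" path: delta merged into the weight
sidecar = x @ w.T + (x @ a.T) @ b.T  # "sidecar" path: delta applied as a separate term

# The element-wise difference is typically on the order of float32 rounding noise
# (around 1e-7 to 1e-6 here), i.e. well above 1e-8.
print((patched - sidecar).abs().max())

# torch.allclose checks |a - b| <= atol + rtol * |b|, so for outputs near zero the
# default atol=1e-8 leaves almost no slack and such noise can trip the check
# intermittently. Loosening atol, as the diff above does, absorbs it.
print(torch.allclose(patched, sidecar, atol=1e-5))
```

The loosened `atol=1e-5` in the diff simply gives the comparison enough absolute slack to tolerate this rounding noise without masking a real regression.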