mirror of https://github.com/invoke-ai/InvokeAI.git
fix(nodes): add thresholding to lineart & lineart anime nodes
The lineart model often outputs a lot of almost-black noise. SD1.5 ControlNets seem to be OK with this, but SDXL ControlNets are not - they need a cleaner map. 12 was experimentally determined to be a good threshold, eliminating all the noise while keeping the actual edges. Other approaches to thresholding may be better, for example stretching the contrast or removing noise. I tried:
- Simple thresholding (as implemented here) - works fine.
- Adaptive thresholding - doesn't work, because the thresholding is done in the context of small blocks, while we want thresholding in the context of the whole image.
- Gamma adjustment - alters the white values too much. Hard to tune.
- Contrast stretching, with and without pre-simple-thresholding - this allows us to threshold out the noise, then stretch everything above the threshold down to almost-zero, so you have a smoother gradient of lightness near zero. It works, but it also stretches the contrast near white down a bit, which is probably undesired.

In the end, simple thresholding works fine and is very simple.
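The trade-off described above can be sketched with numpy on a toy signal (the array values here are illustrative, not from the repository): simple thresholding zeroes everything below the cutoff and leaves brighter pixels untouched, while post-threshold contrast stretching also pulls values near white down.

```python
import numpy as np

# A toy 1-D "lineart" signal: near-black noise (3, 8) and real edges (40, 200, 255).
line = np.array([0, 3, 8, 40, 200, 255], dtype=np.uint8)

# Simple thresholding (the approach the commit adopts): zero out noise below 12.
simple = line.copy()
simple[simple < 12] = 0  # noise gone, edge values untouched

# Contrast stretching after thresholding (an alternative the commit rejects):
# remap [12, 255] -> [0, 255]. This smooths the gradient near zero but also
# shifts values near white, which is the downside noted in the message.
stretched = np.where(
    line < 12,
    0.0,
    (line.astype(np.float32) - 12) * 255.0 / (255 - 12),
).clip(0, 255).astype(np.uint8)
```

With these inputs, `simple` keeps 40, 200, and 255 exactly, while `stretched` moves 200 down to roughly 197 even though it was never noise.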
committed by Kent Keirsey
parent 783441a89d
commit 0fd430fc20
@@ -214,8 +214,14 @@ class LineartEdgeDetector:
         line = line.cpu().numpy()
         line = (line * 255.0).clip(0, 255).astype(np.uint8)
 
-        detected_map = line
-        detected_map = 255 - detected_map
+        detected_map = 255 - line
 
-        return np_to_pil(detected_map)
+        # The lineart model often outputs a lot of almost-black noise. SD1.5 ControlNets seem to be OK with this, but
+        # SDXL ControlNets are not - they need a cleaner map. 12 was experimentally determined to be a good threshold,
+        # eliminating all the noise while keeping the actual edges. Other approaches to thresholding may be better,
+        # for example stretching the contrast or removing noise.
+        detected_map[detected_map < 12] = 0
+
+        output = np_to_pil(detected_map)
+
+        return output
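The change above inverts the model output and then zeroes the near-black noise; a minimal numpy sketch of those two steps, with illustrative pixel values, is:

```python
import numpy as np

# Hypothetical model output already scaled to uint8, as in the diff above:
# mostly-white background (250, 248), a mid-gray edge (100), a dark edge (5).
line = np.array([[250, 248], [100, 5]], dtype=np.uint8)

# Invert so edges become bright: background turns into faint values (5, 7).
detected_map = 255 - line

# Threshold: the faint inverted background is below 12 and is zeroed,
# while the real edge values (155, 250) survive unchanged.
detected_map[detected_map < 12] = 0
```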
@@ -260,8 +260,14 @@ class LineartAnimeEdgeDetector:
         line = cv2.resize(line, (width, height), interpolation=cv2.INTER_CUBIC)
         line = line.clip(0, 255).astype(np.uint8)
 
-        detected_map = line
-        detected_map = 255 - detected_map
+        detected_map = 255 - line
 
+        # The lineart model often outputs a lot of almost-black noise. SD1.5 ControlNets seem to be OK with this, but
+        # SDXL ControlNets are not - they need a cleaner map. 12 was experimentally determined to be a good threshold,
+        # eliminating all the noise while keeping the actual edges. Other approaches to thresholding may be better,
+        # for example stretching the contrast or removing noise.
+        detected_map[detected_map < 12] = 0
+
         output = np_to_pil(detected_map)
 
         return output
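For comparison with one of the rejected alternatives: the commit message says gamma adjustment "alters the white values too much" and is hard to tune. A sketch with an illustrative gamma of 2.2 (not a value from the repository) shows why: the noise is suppressed, but mid-to-high values shift noticeably too.

```python
import numpy as np

# Same toy signal: near-black noise (3, 8) and real edges (40, 200, 255).
line = np.array([0, 3, 8, 40, 200, 255], dtype=np.uint8)

# Gamma > 1 darkens the low end, which kills the noise, but the curve is
# applied everywhere: the weak edge at 40 collapses to almost nothing and
# the bright edge at 200 drops to ~149, which is why it is hard to tune.
gamma = 2.2
adjusted = (255.0 * (line / 255.0) ** gamma).clip(0, 255).astype(np.uint8)
```

Simple thresholding with a cutoff of 12 would have removed the same noise while leaving 40, 200, and 255 untouched.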