Mirror of https://github.com/invoke-ai/InvokeAI.git, synced 2026-02-19 09:54:24 -05:00
InvokeAI / invokeai / backend / quantization (at 5f59a828f97b2f8b3e564fd295f6571f16674d38)
Latest commit: 1fa6bddc89 by Ryan Dick, "WIP on moving from diffusers to FLUX" (2024-08-26 20:17:50 -04:00)
File                                  Last commit message                                                        Date
bnb_llm_int8.py                       More improvements for LLM.int8() - not fully tested.                       2024-08-26 20:17:50 -04:00
bnb_nf4.py                            LLM.int8() quantization is working, but still some rough edges to solve.  2024-08-26 20:17:50 -04:00
fast_quantized_diffusion_model.py     Make quantized loading fast for both T5XXL and FLUX transformer.          2024-08-26 20:17:50 -04:00
fast_quantized_transformers_model.py  Make quantized loading fast for both T5XXL and FLUX transformer.          2024-08-26 20:17:50 -04:00
load_flux_model_bnb_llm_int8.py       LLM.int8() quantization is working, but still some rough edges to solve.  2024-08-26 20:17:50 -04:00
load_flux_model_bnb_nf4.py            WIP on moving from diffusers to FLUX                                      2024-08-26 20:17:50 -04:00