github/InvokeAI
Mirror of https://github.com/invoke-ai/InvokeAI.git (synced 2026-02-01 16:45:13 -05:00)
invokeai/backend/quantization at 1d8545a76cffcbaede95cbb6d37a145e833543ca

Latest commit: 1fa6bddc89 by Ryan Dick, "WIP on moving from diffusers to FLUX" (2024-08-26 20:17:50 -04:00)
File                                   Last commit message
bnb_llm_int8.py                        More improvements for LLM.int8() - not fully tested.
bnb_nf4.py                             LLM.int8() quantization is working, but still some rough edges to solve.
fast_quantized_diffusion_model.py      Make quantized loading fast for both T5XXL and FLUX transformer.
fast_quantized_transformers_model.py   Make quantized loading fast for both T5XXL and FLUX transformer.
load_flux_model_bnb_llm_int8.py        LLM.int8() quantization is working, but still some rough edges to solve.
load_flux_model_bnb_nf4.py             WIP on moving from diffusers to FLUX

All entries last changed 2024-08-26 20:17:50 -04:00.
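
The bnb_* modules wrap bitsandbytes quantization for the FLUX port. As a rough illustration of the LLM.int8() approach behind bnb_llm_int8.py (a minimal sketch, not this repository's actual code; quantize_linear_llm_int8 is a hypothetical helper), nn.Linear layers can be swapped for bitsandbytes Linear8bitLt, which stores weights in int8 and routes activation outlier columns through fp16:

```python
import torch.nn as nn
import bitsandbytes as bnb

def quantize_linear_llm_int8(module: nn.Module, threshold: float = 6.0) -> nn.Module:
    # Hypothetical helper: recursively swap nn.Linear for bnb.nn.Linear8bitLt.
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            qlinear = bnb.nn.Linear8bitLt(
                child.in_features,
                child.out_features,
                bias=child.bias is not None,
                has_fp16_weights=False,  # store weights as int8, not fp16
                threshold=threshold,     # outlier columns above this run in fp16
            )
            qlinear.weight = bnb.nn.Int8Params(
                child.weight.data, requires_grad=False, has_fp16_weights=False
            )
            if child.bias is not None:
                qlinear.bias = child.bias
            setattr(module, name, qlinear)
        else:
            quantize_linear_llm_int8(child, threshold)
    return module

# The int8 packing itself happens when the module is moved to the GPU:
# model = quantize_linear_llm_int8(model).cuda()
```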
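
bnb_nf4.py targets 4-bit NF4 quantization instead. A comparable sketch under the same assumptions (quantize_linear_nf4 is hypothetical) uses bitsandbytes Linear4bit with quant_type="nf4", which stores weights in 4-bit NormalFloat blocks and dequantizes to a compute dtype during the matmul:

```python
import torch
import torch.nn as nn
import bitsandbytes as bnb

def quantize_linear_nf4(module: nn.Module, compute_dtype=torch.bfloat16) -> nn.Module:
    # Hypothetical helper: recursively swap nn.Linear for 4-bit NF4 linears.
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            qlinear = bnb.nn.Linear4bit(
                child.in_features,
                child.out_features,
                bias=child.bias is not None,
                compute_dtype=compute_dtype,  # dtype of the dequantized matmul
                quant_type="nf4",             # 4-bit NormalFloat data type
            )
            qlinear.weight = bnb.nn.Params4bit(
                child.weight.data, requires_grad=False, quant_type="nf4"
            )
            if child.bias is not None:
                qlinear.bias = child.bias
            setattr(module, name, qlinear)
        else:
            quantize_linear_nf4(child, compute_dtype)
    return module

# As with int8, the actual 4-bit packing happens on .cuda()/.to("cuda").
```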
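
The fast_quantized_* modules, per their commit message, speed up loading of already-quantized T5XXL and FLUX checkpoints. One common way to get that effect (a sketch of the general technique, not necessarily what these files do; TinyModel and the state dict below are placeholders) is to build the model skeleton on the meta device so no full-precision weights are ever allocated, then assign the pre-quantized tensors directly:

```python
import torch
import torch.nn as nn
from accelerate import init_empty_weights

class TinyModel(nn.Module):  # placeholder stand-in for the T5XXL / FLUX transformer
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(4096, 4096)

# No memory is allocated inside this context (parameters live on the "meta"
# device), so construction is near-instant even for very large models.
with init_empty_weights():
    model = TinyModel()

# Pre-quantized (or otherwise pre-processed) tensors are then attached directly;
# assign=True (PyTorch >= 2.1) adopts the loaded tensors instead of copying them
# into the meta-device placeholders.
state_dict = {"proj.weight": torch.empty(4096, 4096), "proj.bias": torch.empty(4096)}
model.load_state_dict(state_dict, assign=True)
```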