Mirror of https://github.com/acon96/home-llm.git (synced 2026-01-08 05:14:02 -05:00)
Release v0.4.2
README.md
@@ -156,38 +156,39 @@ python3 train.py \
## Version History

-| Version | Description |
-|---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| v0.4.1 | Fix an issue with using Llama.cpp models downloaded from HuggingFace |
-| v0.4 | Rewrite integration to support tool calling models/agentic tool use loop, voice streaming, multiple config sub-entries per backend, and dynamic llama.cpp processor selection |
-| v0.3.11 | Bug-fixes and llama.cpp version update |
-| v0.3.10 | Add support for the OpenAI "Responses" API endpoint, Update llama.cpp version, Fix for breaking change in HA version 2025.7.0 |
-| v0.3.9 | Update llama.cpp version, fix installation bugs, fix conversation history not working |
-| v0.3.8 | Update llama.cpp, remove think blocks from "thinking" models, fix wheel detection for some Intel CPUs, Fixes for compatibility with latest Home Assistant version (2025.4), other small bug fixes |
-| v0.3.7 | Update llama.cpp version to support newer models, Update minimum Home Assistant version to 2024.12.3, Add German In-Context Learning examples, Fix multi-turn use, Fix an issue with webcolors |
-| v0.3.6 | Small llama.cpp backend fixes |
-| v0.3.5 | Fix for llama.cpp backend installation, Fix for Home LLM v1-3 API parameters, add Polish ICL examples |
-| v0.3.4 | Significantly improved language support including full Polish translation, Update bundled llama-cpp-python to support new models, various bug fixes |
-| v0.3.3 | Improvements to the Generic OpenAI Backend, improved area handling, fix issue using RGB colors, remove EOS token from responses, replace requests dependency with aiohttp included with Home Assistant |
-| v0.3.2 | Fix for exposed script entities causing errors, fix missing GBNF error, trim whitespace from model output |
-| v0.3.1 | Adds basic area support in prompting, Fix for broken requirements, fix for issue with formatted tools, fix custom API not registering on startup properly |
-| v0.3 | Adds support for Home Assistant LLM APIs, improved model prompting and tool formatting options, and automatic detection of GGUF quantization levels on HuggingFace |
-| v0.2.17 | Disable native llama.cpp wheel optimizations, add Command R prompt format |
-| v0.2.16 | Fix for missing huggingface_hub package preventing startup |
-| v0.2.15 | Fix startup error when using llama.cpp backend and add flash attention to llama.cpp backend |
-| v0.2.14 | Fix llama.cpp wheels + AVX detection |
-| v0.2.13 | Add support for Llama 3, build llama.cpp wheels that are compatible with non-AVX systems, fix an error with exposing script entities, fix multiple small Ollama backend issues, and add basic multi-language support |
-| v0.2.12 | Fix cover ICL examples, allow setting number of ICL examples, add min P and typical P sampler options, recommend models during setup, add JSON mode for Ollama backend, fix missing default options |
-| v0.2.11 | Add prompt caching, expose llama.cpp runtime settings, build llama-cpp-python wheels using GitHub actions, and install wheels directly from GitHub |
-| v0.2.10 | Allow configuring the model parameters during initial setup, attempt to auto-detect defaults for recommended models, Fix to allow lights to be set to max brightness |
-| v0.2.9 | Fix HuggingFace Download, Fix llama.cpp wheel installation, Fix light color changing, Add in-context-learning support |
-| v0.2.8 | Fix ollama model names with colons |
-| v0.2.7 | Publish model v3, Multiple Ollama backend improvements, Updates for HA 2024.02, support for voice assistant aliases |
-| v0.2.6 | Bug fixes, add options for limiting chat history, HTTPS endpoint support, added zephyr prompt format. |
-| v0.2.5 | Fix Ollama max tokens parameter, fix GGUF download from Hugging Face, update included llama-cpp-python to 0.2.32, and add parameters to function calling for dataset + component, & model update |
-| v0.2.4 | Fix API key auth on model load for text-generation-webui, and add support for Ollama API backend |
-| v0.2.3 | Fix API key auth, Support chat completion endpoint, and refactor to make it easier to add more remote backends |
-| v0.2.2 | Fix options window after upgrade, fix training script for new Phi model format, and release new models |
-| v0.2.1 | Properly expose generation parameters for each backend, handle config entry updates without reloading, support remote backends with an API key |
-| v0.2 | Bug fixes, support more backends, support for climate + switch devices, JSON style function calling with parameters, GBNF grammars |
-| v0.1 | Initial Release |
+| Version | Description |
+|---------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| v0.4.2 | Fix the following issues: not correctly setting default model settings during initial setup, non-integers being allowed in numeric config fields, being too strict with finish_reason requirements, and not letting the user clear the active LLM API |
+| v0.4.1 | Fix an issue with using Llama.cpp models downloaded from HuggingFace |
+| v0.4 | Rewrite integration to support tool calling models/agentic tool use loop, voice streaming, multiple config sub-entries per backend, and dynamic llama.cpp processor selection |
+| v0.3.11 | Bug-fixes and llama.cpp version update |
+| v0.3.10 | Add support for the OpenAI "Responses" API endpoint, Update llama.cpp version, Fix for breaking change in HA version 2025.7.0 |
+| v0.3.9 | Update llama.cpp version, fix installation bugs, fix conversation history not working |
+| v0.3.8 | Update llama.cpp, remove think blocks from "thinking" models, fix wheel detection for some Intel CPUs, Fixes for compatibility with latest Home Assistant version (2025.4), other small bug fixes |
+| v0.3.7 | Update llama.cpp version to support newer models, Update minimum Home Assistant version to 2024.12.3, Add German In-Context Learning examples, Fix multi-turn use, Fix an issue with webcolors |
+| v0.3.6 | Small llama.cpp backend fixes |
+| v0.3.5 | Fix for llama.cpp backend installation, Fix for Home LLM v1-3 API parameters, add Polish ICL examples |
+| v0.3.4 | Significantly improved language support including full Polish translation, Update bundled llama-cpp-python to support new models, various bug fixes |
+| v0.3.3 | Improvements to the Generic OpenAI Backend, improved area handling, fix issue using RGB colors, remove EOS token from responses, replace requests dependency with aiohttp included with Home Assistant |
+| v0.3.2 | Fix for exposed script entities causing errors, fix missing GBNF error, trim whitespace from model output |
+| v0.3.1 | Adds basic area support in prompting, Fix for broken requirements, fix for issue with formatted tools, fix custom API not registering on startup properly |
+| v0.3 | Adds support for Home Assistant LLM APIs, improved model prompting and tool formatting options, and automatic detection of GGUF quantization levels on HuggingFace |
+| v0.2.17 | Disable native llama.cpp wheel optimizations, add Command R prompt format |
+| v0.2.16 | Fix for missing huggingface_hub package preventing startup |
+| v0.2.15 | Fix startup error when using llama.cpp backend and add flash attention to llama.cpp backend |
+| v0.2.14 | Fix llama.cpp wheels + AVX detection |
+| v0.2.13 | Add support for Llama 3, build llama.cpp wheels that are compatible with non-AVX systems, fix an error with exposing script entities, fix multiple small Ollama backend issues, and add basic multi-language support |
+| v0.2.12 | Fix cover ICL examples, allow setting number of ICL examples, add min P and typical P sampler options, recommend models during setup, add JSON mode for Ollama backend, fix missing default options |
+| v0.2.11 | Add prompt caching, expose llama.cpp runtime settings, build llama-cpp-python wheels using GitHub actions, and install wheels directly from GitHub |
+| v0.2.10 | Allow configuring the model parameters during initial setup, attempt to auto-detect defaults for recommended models, Fix to allow lights to be set to max brightness |
+| v0.2.9 | Fix HuggingFace Download, Fix llama.cpp wheel installation, Fix light color changing, Add in-context-learning support |
+| v0.2.8 | Fix ollama model names with colons |
+| v0.2.7 | Publish model v3, Multiple Ollama backend improvements, Updates for HA 2024.02, support for voice assistant aliases |
+| v0.2.6 | Bug fixes, add options for limiting chat history, HTTPS endpoint support, added zephyr prompt format. |
+| v0.2.5 | Fix Ollama max tokens parameter, fix GGUF download from Hugging Face, update included llama-cpp-python to 0.2.32, and add parameters to function calling for dataset + component, & model update |
+| v0.2.4 | Fix API key auth on model load for text-generation-webui, and add support for Ollama API backend |
+| v0.2.3 | Fix API key auth, Support chat completion endpoint, and refactor to make it easier to add more remote backends |
+| v0.2.2 | Fix options window after upgrade, fix training script for new Phi model format, and release new models |
+| v0.2.1 | Properly expose generation parameters for each backend, handle config entry updates without reloading, support remote backends with an API key |
+| v0.2 | Bug fixes, support more backends, support for climate + switch devices, JSON style function calling with parameters, GBNF grammars |
+| v0.1 | Initial Release |

@@ -337,5 +337,5 @@ def option_overrides(backend_type: str) -> dict[str, Any]:
     },
 }

-INTEGRATION_VERSION = "0.4.1"
+INTEGRATION_VERSION = "0.4.2"
 EMBEDDED_LLAMA_CPP_PYTHON_VERSION = "0.3.16+b6153"
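
The hunk header above shows only the tail end of `option_overrides(backend_type: str) -> dict[str, Any]`, with the closing braces of a nested dict. As a rough, non-authoritative sketch of what a helper with that signature typically does, here is a minimal version; the backend names and option keys below are assumptions for illustration only, not the integration's actual values:

```python
from typing import Any

def option_overrides(backend_type: str) -> dict[str, Any]:
    """Return default option overrides for a given backend.

    Hypothetical sketch: the diff shows only the signature and the
    closing braces of a nested dict, so every key here is assumed.
    """
    overrides: dict[str, dict[str, Any]] = {
        "llama_cpp": {"max_tokens": 512},  # assumed option key
        "ollama": {"keep_alive": 30},      # assumed option key
    }
    # Backends without explicit overrides fall back to empty defaults.
    return overrides.get(backend_type, {})
```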
@@ -1,7 +1,7 @@
 {
     "domain": "llama_conversation",
     "name": "Local LLMs",
-    "version": "0.4.1",
+    "version": "0.4.2",
     "codeowners": ["@acon96"],
     "config_flow": true,
     "dependencies": ["conversation"],