mirror of https://github.com/acon96/home-llm.git (synced 2026-01-09 21:58:00 -05:00)
add docker compose stack for testing + backends are mostly working at this point
TODO.md | 15
@@ -1,8 +1,9 @@
 # TODO
-- [ ] proper tool calling support
+- [x] proper tool calling support
 - [ ] fix old GGUFs to support tool calling
 - [ ] home assistant component text streaming support
-- [ ] new models based on qwen3
+- [ ] new model based on qwen3 0.6b
+- [ ] new model based on gemma3 270m
 - [ ] support AI task API
 - [x] support new LLM APIs
   - rewrite how services are called
@@ -42,6 +43,16 @@
 - [x] use varied system prompts to add behaviors
 
 
+## v0.4 TODO for release:
+[ ] re-order the settings on the options config flow page. the order is very confusing
+[ ] split out entity functionality so we can support conversation + ai tasks
+[x] fix icl examples to match new tool calling syntax config
+[x] set up docker-compose for running all of the various backends
+[ ] fix and re-upload all compatible old models (+ upload all original safetensors)
+[ ] move llamacpp to a separate process because of all the crashing
+[ ] dedicated localai backend (tailored openai variant /w model loading)
+[ ] fix the openai responses backend
+
 ## more complicated ideas
 - [ ] "context requests"
   - basically just let the model decide what RAG/extra context it wants
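One TODO item above — moving llamacpp to a separate process because of crashing — follows a generic pattern worth sketching: run the crash-prone native library in a child process so a segfault or abort cannot take down the host process. The sketch below is illustrative only; the worker loop, `generate` helper, and pipe protocol are assumptions, not code from the home-llm component.

```python
# Minimal sketch: isolate a crash-prone backend in a child process so a
# native crash (segfault/abort) cannot kill the parent process.
# All names here are hypothetical, not from the home-llm codebase.
import multiprocessing as mp
import os

def _worker(conn):
    """Stand-in for a llama.cpp worker loop; answers prompts over a pipe."""
    while True:
        msg = conn.recv()
        if msg == "crash":
            os.abort()          # simulate a hard crash inside the library
        conn.send(f"echo: {msg}")

def generate(prompt, timeout=5.0):
    """Run one request in an isolated child; survive if the child dies."""
    parent, child = mp.Pipe()
    proc = mp.Process(target=_worker, args=(child,), daemon=True)
    proc.start()
    child.close()               # drop our copy so EOF is seen if the child dies
    try:
        parent.send(prompt)
        if parent.poll(timeout):
            return parent.recv()
        return None             # child hung; parent is unaffected
    except (EOFError, BrokenPipeError):
        return None             # child crashed; parent is unaffected
    finally:
        proc.terminate()
        proc.join()

if __name__ == "__main__":
    print(generate("hello"))    # normal round-trip
    print(generate("crash"))    # child aborts; we get None instead of crashing
```

The key design point is that each request pays a process-spawn cost in exchange for fault isolation; a real implementation would more likely keep one long-lived worker and respawn it only after a crash.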