Author | Commit | Message | Date
Alex O'Connell | 1f078d0a41 | working ai task entities | 2025-12-14 10:34:21 -05:00
Alex O'Connell | b89a0b44b6 | support multiple LLM APIs at once | 2025-12-13 19:03:58 -05:00
Alex O'Connell | 8d62b52f0f | Release v0.4.4 | 2025-11-22 09:40:39 -05:00
Alex O'Connell | 03989e37b5 | add initial implementation for ai task entities | 2025-10-26 21:47:23 -04:00
Alex O'Connell | 5cb910d71c | add missing downloaded file config attribute | 2025-10-12 10:28:26 -04:00
Alex O'Connell | 2df454985d | Build llama.cpp wheels in forked repo + support reinstallation | 2025-10-08 21:19:06 -04:00
Alex O'Connell | b02118705d | fix migration + config sub-entry setup | 2025-09-17 18:24:08 -04:00
Alex O'Connell | bfc6a5a753 | work through todos | 2025-09-15 22:10:49 -04:00
Alex O'Connell | 18a34a3d5b | more code cleanup and config_flow fixes | 2025-09-15 22:10:49 -04:00
Alex O'Connell | bcfa1dca0a | continue rewrite of config flow to support sub entries | 2025-09-15 22:10:49 -04:00
Alex O'Connell | 4245c665b5 | start working on config subentries | 2025-09-15 22:10:25 -04:00
Alex O'Connell | 18575c7eb3 | decouple entity options from llm backend classes to allow model re-use between entities | 2025-09-15 22:10:25 -04:00
Alex O'Connell | 82ebf988aa | random changes | 2025-09-15 22:10:25 -04:00
Alex O'Connell | 53052af641 | Split backends into separate files and start implementing streaming + tool support | 2025-09-15 22:10:09 -04:00
Simon Redman | 057cce485f | Rough pass implement GenericOpenAIResponsesAPIAgent | 2025-05-31 15:05:38 -04:00
Simon Redman | 58efdb9601 | Clean up whitespace | 2025-05-31 15:05:38 -04:00
Alex O'Connell | 811c1ea23b | fix config entry setup | 2025-04-13 18:00:26 -04:00
Alex O'Connell | b1d0191310 | filter out disallowed services from the prompt when the home-llm API is selected | 2025-02-08 08:40:02 -05:00
Alex O'Connell | 984954d317 | Add a platform so agents show up in the UI correctly + bump llama-cpp-python | 2024-08-11 18:11:50 -04:00
Alex O'Connell | 21ecc293ee | better error handling | 2024-07-29 21:38:25 -04:00
Alex O'Connell | a7ba4d784e | small fixes + remove cover ICL examples b/c the intents are deprecated | 2024-06-12 21:43:55 -04:00
Alex O'Connell | 0a6e558fda | Release v0.3.1 | 2024-06-08 12:14:24 -04:00
Alex O'Connell | 249298bb99 | better device prompting with area support + fix circular import | 2024-06-07 17:42:03 -04:00
Alex O'Connell | 5ddf0d09d5 | restrict services that can be called and add format url function to make behavior standard | 2024-06-07 08:48:50 -04:00
Alex O'Connell | 21640dc321 | release notes, fix service call args, and other release prep/cleanup | 2024-06-06 23:54:08 -04:00
Alex O'Connell | ab32942006 | more fixes for llm API | 2024-06-06 23:08:04 -04:00
Alex O'Connell | 36e29bedf0 | add an LLM API to support the existing models | 2024-06-06 22:40:59 -04:00
Alex O'Connell | 00d002d9c0 | make tool formats work + dynamic quantization detection from HF | 2024-06-02 12:25:26 -04:00
Alex O'Connell | 8a28dd61ad | update naming and start implementing new LLM API support | 2024-05-25 17:12:58 -04:00
Alex O'Connell | 27044a8fae | refactor agents to separate file | 2024-04-07 12:07:28 -04:00
Alex O'Connell | 3574887c33 | fix startup errors | 2024-04-01 23:38:35 -04:00
Alex O'Connell | 4058a42ee5 | clean up UI + hook up other llama.cpp settings | 2024-03-31 17:08:52 -04:00
Alex O'Connell | fdd9f1bc67 | wire up llama cpp runtime options | 2024-03-31 11:08:07 -04:00
Alex O'Connell | 0d559b63be | add cooldown logic | 2024-03-31 10:55:33 -04:00
Alex O'Connell | 47417aaee8 | fix on update | 2024-03-30 23:58:35 -04:00
Alex O'Connell | 3ee8135064 | better prompt caching | 2024-03-30 23:04:52 -04:00
Alex O'Connell | 9c2892532f | track cache refreshes | 2024-03-30 16:54:31 -04:00
Alex O'Connell | 0008f6bb5e | hook up prompt caching to config flow | 2024-03-30 16:50:24 -04:00
Alex O'Connell | 10b6862583 | working prompt caching prototype | 2024-03-30 12:56:28 -04:00
Alex O'Connell | 4fea8e6908 | try priming kv cache | 2024-03-25 18:29:17 -04:00
Alex O'Connell | ac7b71ca4f | Make initial configuration easier + rewrite quickstart guide | 2024-03-24 00:05:07 -04:00
Isabella Nightshade | d51c172c07 | Small fixes to brightness adjustment and EOS token removal (#95) | 2024-03-22 12:29:34 +00:00
Alex O'Connell | 0e618a50a2 | properly class variables | 2024-03-21 20:16:27 -04:00
Alex O'Connell | 7637c758a5 | more docs updates + code cleanup | 2024-03-20 23:22:07 -04:00
Alex O'Connell | fa31682c51 | working version of in context examples | 2024-03-20 23:03:31 -04:00
Alex O'Connell | 4978901412 | start working on icl examples | 2024-03-07 18:14:44 -05:00
Alex O'Connell | 4f6ed08be9 | split out service call argument allow list + properly parse rgb color arguments | 2024-03-06 17:58:14 -05:00
Alex O'Connell | 569f6e848a | properly handle colons in ollama model names | 2024-03-05 17:39:41 -05:00
Alex O'Connell | 1c5414b8af | fix ollama keep alive properly + check if model exists for ollama too | 2024-03-04 22:40:22 -05:00
Alex O'Connell | 7b0b021b59 | handle voice assistant aliases as duplicate devices | 2024-03-01 22:59:38 -05:00