Author | Commit | Message | Date
Alex O'Connell | 474901a367 | Merge pull request #82 from acon96/release/v0.2.7 (Release v0.2.7) [tag: v0.2.7] | 2024-03-04 22:57:41 -05:00
Alex O'Connell | 0f3d518826 | Bump version number + change notes | 2024-03-04 22:56:48 -05:00
Alex O'Connell | 1c5414b8af | fix ollama keep alive properly + check if model exists for ollama too | 2024-03-04 22:40:22 -05:00
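The commit above (1c5414b8af) fixes Ollama keep_alive handling and adds a check that the configured model is actually present on the Ollama server. A minimal sketch of how those two behaviours map onto the public Ollama HTTP API; the endpoint paths and the keep_alive field are Ollama's, while the helper names, URL, and model name below are placeholders rather than the integration's real code:

```python
# Sketch only: checks model availability via /api/tags and passes keep_alive on /api/chat.
# OLLAMA_URL and MODEL are placeholders, not the integration's config keys.
import requests

OLLAMA_URL = "http://localhost:11434"   # placeholder base URL
MODEL = "llama2:latest"                 # placeholder model name

def model_exists(base_url: str, model: str) -> bool:
    """Return True if the model is listed by the Ollama server."""
    resp = requests.get(f"{base_url}/api/tags", timeout=10)
    resp.raise_for_status()
    return any(m["name"] == model for m in resp.json().get("models", []))

def chat(base_url: str, model: str, prompt: str, keep_alive: str = "30m") -> str:
    """Send a chat request; keep_alive controls how long the model stays loaded."""
    resp = requests.post(
        f"{base_url}/api/chat",
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,
            "keep_alive": keep_alive,  # e.g. "30m"; -1 keeps the model loaded indefinitely
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

if __name__ == "__main__":
    if not model_exists(OLLAMA_URL, MODEL):
        raise RuntimeError(f"Model '{MODEL}' is not available on the Ollama server")
    print(chat(OLLAMA_URL, MODEL, "Turn off the kitchen lights."))
```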
Alex O'Connell | c13d706879 | bump text-generation-webui version | 2024-03-04 20:56:04 -05:00
Alex O'Connell | 6cc3f47096 | fix deprecated configflow behavior | 2024-03-02 22:20:55 -05:00
Alex O'Connell | 7b0b021b59 | handle voice assistant aliases as duplicate devices | 2024-03-01 22:59:38 -05:00
Alex O'Connell | b50919340b | Ollama keep_alive + better docs | 2024-03-01 22:07:03 -05:00
Alex O'Connell | b197632b3e | Update llama-cpp-python and text-generation-webui | 2024-02-25 17:43:57 -05:00
Alex O'Connell | 869b91b6ab | add model references in the setup guide | 2024-02-22 21:31:05 -05:00
Alex O'Connell | 32338242f2 | Merge pull request #68 from acon96/feature/dataset-customization (Make Dataset easier to customize and Support Training StableLM) | 2024-02-22 21:24:27 -05:00
Alex O'Connell | fc171ac04d | Merge branch 'develop' into feature/dataset-customization | 2024-02-22 21:20:59 -05:00
Alex O'Connell | d22ef571b8 | update readme | 2024-02-22 21:20:20 -05:00
Alex O'Connell | 5dec71eae2 | finalize new model version | 2024-02-22 21:12:38 -05:00
Alex O'Connell | 4910a34f11 | Create issue templates | 2024-02-18 11:49:22 -05:00
Alex O'Connell | c285e3c6a9 | instructions for adding personas | 2024-02-17 23:13:45 -05:00
Alex O'Connell | a78e57031c | cleanup + table of contents | 2024-02-17 23:06:21 -05:00
Alex O'Connell | b2836bc250 | tweak readme structure | 2024-02-17 23:05:53 -05:00
Zeno Jiricek | ccf2c2c293 | docs: Format README for better usability (#50) | 2024-02-18 04:04:46 +00:00
Alex O'Connell | 19ed596614 | resolve conflicts | 2024-02-17 23:04:19 -05:00
Alex O'Connell | 43510d1d7c | finish up dataset changes for new device types | 2024-02-17 21:16:56 -05:00
Alex O'Connell | 984cf2c0a3 | add vacuum, todo, and timer device names | 2024-02-17 19:55:19 -05:00
Alex O'Connell | 6cc3b13c2a | Merge branch 'develop' into feature/dataset-customization | 2024-02-17 19:55:07 -05:00
colino17 | e192a8aee6 | Add additional data for new entity types and services (#67) | 2024-02-18 00:49:58 +00:00
Alex O'Connell | 3f3fb21cd9 | Handle python 3.12 upgrade smoother + make utils file | 2024-02-16 23:44:48 -05:00
Alex O'Connell | bcd67aef37 | start working on new entities | 2024-02-16 23:21:22 -05:00
Alex O'Connell | 9c6d64f373 | add note to readme about wheels | 2024-02-16 18:30:17 -05:00
Alex O'Connell | 5edb66c7f2 | build python 3.12 wheels too | 2024-02-16 18:28:37 -05:00
Alex O'Connell | fdfea02e1d | typo | 2024-02-15 00:04:45 -05:00
Alex O'Connell | 5660666bcc | Bump included llama-cpp-python to 0.2.42 | 2024-02-14 08:44:19 -05:00
Alex O'Connell | 4e873a873c | Merge branch 'develop' into feature/dataset-customization | 2024-02-13 20:23:39 -05:00
Alex O'Connell | 411276408b | train new models based on stablelm + properly add new response types | 2024-02-13 20:21:51 -05:00
Alex O'Connell | 73c2934e20 | Update text-generation-webui addon version | 2024-02-11 16:42:33 -05:00
jds11111 | cc1b0740f3 | Fix Model Prompting.md (#60) | 2024-02-10 22:57:03 +00:00
Alex O'Connell | 1da51c9fb6 | remove broken training tweaks | 2024-02-10 10:04:58 -05:00
Alex O'Connell | e0fa7805e4 | Merge branch 'main' into develop | 2024-02-08 20:49:31 -05:00
Alex O'Connell | fd9dc2e23a | Merge pull request #58 from acon96/release/v0.2.6 (Release v0.2.6) [tag: v0.2.6] | 2024-02-08 20:48:47 -05:00
Alex O'Connell | 3f22bf3ace | Release v0.2.6 | 2024-02-08 20:47:05 -05:00
Alex O'Connell | ff13770f7e | add Zephyr prompt format | 2024-02-08 20:40:05 -05:00
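Commit ff13770f7e above adds a Zephyr prompt format option. For orientation, the layout below is the standard Zephyr chat template as published with the HuggingFace Zephyr models; it is shown only as an illustration, and the integration's own template string may differ in details:

```python
# Illustration of the standard Zephyr chat prompt layout (not the integration's exact template).
def format_zephyr_prompt(system: str, user: str) -> str:
    """Build a single-turn Zephyr-style prompt ending at the assistant turn."""
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        f"<|assistant|>\n"
    )

print(format_zephyr_prompt(
    "You are a helpful home assistant.",
    "Turn on the living room lamp.",
))
```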
Alex O'Connell | cc48465575 | Add support for HTTPS endpoints | 2024-02-08 19:26:58 -05:00
Alex O'Connell | 962c5872ff | fix evaluate script | 2024-02-08 00:04:36 -05:00
Isabella Nightshade | b1fb7cf184 | Config to remember conversation/limit number of messages (#53) | 2024-02-08 01:21:28 +00:00
Alex O'Connell | 66bd90a7be | support sharegpt format in evaluate.py | 2024-02-05 22:41:05 -05:00
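Commit 66bd90a7be above adds ShareGPT-format support to evaluate.py. As a rough sketch of what that format looks like: a ShareGPT record is a JSON object with a conversations list of from/value turns. The sample data, file layout, and loader below are illustrative assumptions, not taken from the project's dataset or script:

```python
# Illustrative ShareGPT-style record plus a minimal JSONL loader.
# Only the conversations/from/value structure is the ShareGPT convention;
# the file name, field contents, and helper are hypothetical.
import json

example_record = {
    "conversations": [
        {"from": "system", "value": "You are a home assistant that controls devices."},
        {"from": "human", "value": "Turn off the kitchen lights."},
        {"from": "gpt", "value": "Turning off the kitchen lights now."},
    ]
}

def load_sharegpt(path: str) -> list[dict]:
    """Read a JSON-lines file where each non-empty line is one ShareGPT record."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

print(json.dumps(example_record, indent=2))
```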
Alex O'Connell | 631daebd97 | try different way of pre-allocating the cuda buffers | 2024-02-05 21:59:10 -05:00
Alex O'Connell | 55dfee51e6 | Merge remote-tracking branch 'origin/feature/dataset-customization' into feature/training-tweaks | 2024-02-05 21:13:30 -05:00
Alex O'Connell | 1e0113218f | move system prompts to a pile | 2024-02-05 21:11:29 -05:00
Alex O'Connell | 3bf674ae29 | fix dataset generation | 2024-02-05 21:05:28 -05:00
Alex O'Connell | 11d4ede66d | update llama-cpp-python wheel and text-generation-webui snapshot | 2024-02-04 11:41:26 -05:00
Alex O'Connell | 7b01251f5d | fixes for training zephyr base | 2024-02-04 11:40:03 -05:00
Alex O'Connell | cc2c21cab5 | more work on making the piles easier to extend | 2024-02-04 11:34:22 -05:00
Alex O'Connell | 278f860e37 | re-organize responses | 2024-02-03 20:29:51 -05:00