7b01251f5d | 2024-02-04 11:40:03 -05:00 | Alex O'Connell | fixes for training zephyr base
cecf9bc53e | 2024-01-31 23:00:32 -05:00 | Alex O'Connell | move to jsonl, finish sharegpt dataset format, and add flag to add chatml prompt template
d901eaffdf | 2024-01-30 22:12:46 -05:00 | Alex O'Connell | start working on other base models
371ac513b2 | 2024-01-30 22:12:12 -05:00 | Alex O'Connell | typo in pile
d023cfee28 | 2024-01-30 22:12:12 -05:00 | Alex O'Connell | add another test prompt
8b1cc5a587 | 2024-01-30 22:12:12 -05:00 | Alex O'Connell | add random sampler with largest first to optimize pytorch memory usage
2bdfb79f89 | 2024-01-30 22:12:12 -05:00 | Alex O'Connell | clear cuda cache after evaluation to clear out the irregularly sized blocks from the allocation cache
da7b7b4d95 | 2024-01-30 22:12:12 -05:00 | Alex O'Connell | try to checkpoint saved lora modules
18f9b7cdc0 | 2024-01-28 19:21:24 -05:00 | Alex O'Connell | Merge branch 'main' into develop
92617f8484 | 2024-01-28 19:20:52 -05:00 | Alex O'Connell | Merge pull request #44 from acon96/release/v0.2.5: Release v0.2.5 (tag: v0.2.5)
235df081b8 | 2024-01-28 19:19:35 -05:00 | Alex O'Connell | fix release notes
7f94c6d4bb | 2024-01-28 18:50:41 -05:00 | Alex O'Connell | update todo
24f8ba10c3 | 2024-01-28 11:46:35 -05:00 | Alex O'Connell | update manually installed llama-cpp-python
95a90618f6 | 2024-01-28 10:28:34 -05:00 | Alex O'Connell | fix link
038d869ded | 2024-01-28 10:17:50 -05:00 | Alex O'Connell | random readme fixes + format notes
c2d4d95212 | 2024-01-27 16:33:48 -05:00 | Alex O'Connell | fix gguf download
c6cc99e59f | 2024-01-27 15:44:20 -05:00 | Alex O'Connell | Release v0.2.5
8e4c602685 | 2024-01-27 15:22:07 -05:00 | Alex O'Connell | Merge branch 'feature/proper-functioncalling-args' into develop
d5c8f7221d | 2024-01-27 15:20:03 -05:00 | Alex O'Connell | gitignore
3134c4f954 | 2024-01-27 15:19:10 -05:00 | Alex O'Connell | update info about model in readme
30361809ae | 2024-01-27 14:54:46 -05:00 | Alex O'Connell | count tokens faster
9723a98139 | 2024-01-27 14:54:14 -05:00 | Alex O'Connell | more dataset + model experiments using the evaluation script
be99ac2d12 | 2024-01-27 14:50:57 -05:00 | Alex O'Connell | fix missing setup config for text-generation-webui + pass arguments to service calls
946623713f | 2024-01-26 22:36:34 -05:00 | Alex O'Connell | add "extra exposed attributes" to dataset as function call arguments + fix pile template inconsistencies
5860028990 | 2024-01-26 21:18:12 -05:00 | Alex O'Connell | fix chatml prompt for multi-turn conversations
107f8cd740 | 2024-01-26 22:59:54 +00:00 | Fixt | fix: Max tokens setting for Ollama API (#40)
e6fae06133 | 2024-01-25 20:46:59 -05:00 | Alex O'Connell | wizardlm merge + fix eval
57634519ca | 2024-01-25 20:46:59 -05:00 | Alex O'Connell | move to eval script instead of during training
c3cb5c5354 | 2024-01-25 20:46:59 -05:00 | Alex O'Connell | try writing an accuracy metric
9cdedb021a | 2024-01-25 20:46:59 -05:00 | Alex O'Connell | expose arguments to function calls via HA integration
69e7302704 | 2024-01-25 20:28:21 -05:00 | Alex O'Connell | Merge branch 'develop' (tag: v0.2.4)
552e084b47 | 2024-01-25 20:28:10 -05:00 | Alex O'Connell | Update readme + version number
a9b042de1d | 2024-01-25 20:23:57 -05:00 | Alex O'Connell | Fix api key on load model for text-generation-webui
a43c6ed27e | 2024-01-23 02:10:50 +00:00 | Fixt | feat: Ollama API Support (#34)
3010aa6719 | 2024-01-23 02:07:58 +00:00 | Anto79-ops | Update README.md to LocalAI and Home-LLM model install (#31)
b7e850618e | 2024-01-21 22:03:44 -05:00 | Alex O'Connell | readme typo
2be8e857f1 | 2024-01-21 21:50:29 -05:00 | Alex O'Connell | Merge branch 'main' into develop
221cceb762 | 2024-01-21 21:46:42 -05:00 | Alex O'Connell | Merge branch 'release/v0.2.3' (tag: v0.2.3)
4f96c01367 | 2024-01-21 21:46:35 -05:00 | Alex O'Connell | update changelog
83dd9744ae | 2024-01-21 21:40:29 -05:00 | Alex O'Connell | Merge branch 'feature/chat-completions' into develop
1594844962 | 2024-01-21 21:38:23 -05:00 | Alex O'Connell | fix auth, add llama-cpp-python server backend, better in-app docs
7c30bb57cf | 2024-01-21 16:25:50 -05:00 | Alex O'Connell | split up even more + add llama-cpp-python server
cc3dd4884a | 2024-01-21 14:38:06 -05:00 | Alex O'Connell | rewrite using class inheritance to cut down on repeated code
18dfd70c86 | 2024-01-21 13:07:47 -05:00 | Alex O'Connell | Bump text-generation-webui snapshot date
9290d9b388 | 2024-01-21 13:07:19 -05:00 | Alex O'Connell | properly pass chat mode + docs
27963d06f9 | 2024-01-19 00:00:29 -05:00 | Alex O'Connell | completions endpoint works but it doesn't apply prompt formatting
cae4aa3908 | 2024-01-18 20:16:29 -05:00 | Alex O'Connell | format TODO.md
96d095f8c5 | 2024-01-18 20:14:56 -05:00 | Alex O'Connell | start working on chat completions API
bf792d443c | 2024-01-18 20:14:52 -05:00 | Alex O'Connell | Merge branch 'master' into develop
5a61728a38 | 2024-01-17 18:29:23 -05:00 | Alex O'Connell | update changelog (tag: v0.2.2)