Mirror of https://github.com/acon96/home-llm.git (synced 2026-01-09 13:48:05 -05:00)

Readme updates for new model

Commit 077ae64337 (parent e242ca0899), committed by Alex O'Connell

README.md: 45 lines changed
````diff
@@ -7,22 +7,20 @@ This project provides the required "glue" components to control your Home Assist
 Please see the [Setup Guide](./docs/Setup.md) for more information on installation.

 ## LLama Conversation Integration
-In order to integrate with Home Assistant, we provide a `custom_component` that exposes the locally running LLM as a "conversation agent".
+In order to integrate with Home Assistant, we provide a custom component that exposes the locally running LLM as a "conversation agent".

 This component can be interacted with in a few ways:
 - using a chat interface so you can chat with it.
 - integrating with Speech-to-Text and Text-to-Speech addons so you can just speak to it.

-The component can either run the model directly as part of the Home Assistant software using llama-cpp-python, or you can run the [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) project to provide access to the LLM via an API interface.
-
-When doing this, you can host the model yourself and point the add-on at machine where the model is hosted, or you can run the model using text-generation-webui using the provided [custom Home Assistant add-on](./addon).
+The component can either run the model directly as part of the Home Assistant software using llama-cpp-python, or you can run [Ollama](https://ollama.com/) (simple) or the [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) project (advanced) to provide access to the LLM via an API interface.

 ## Home LLM Model
-The "Home" models are a fine tuning of the Phi model series from Microsoft and the StableLM model series from StabilityAI. The model is able to control devices in the user's house as well as perform basic question and answering. The fine tuning dataset is a [custom synthetic dataset](./data) designed to teach the model function calling based on the device information in the context.
+The "Home" models are a fine tuning of various Large Languages Models that are under 5B parameters. The models are able to control devices in the user's house as well as perform basic question and answering. The fine tuning dataset is a [custom synthetic dataset](./data) designed to teach the model function calling based on the device information in the context.

 The latest models can be found on HuggingFace:
 3B v3 (Based on StableLM-Zephyr-3B): https://huggingface.co/acon96/Home-3B-v3-GGUF (Zephyr prompt format)
-1B v2 (Based on Phi-1.5): https://huggingface.co/acon96/Home-1B-v2-GGUF (ChatML prompt format)
+1B v3 (Based on TinyLlama-1.1B): https://huggingface.co/acon96/Home-1B-v3-GGUF (Zephyr prompt format)

 <details>
````
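The added paragraph above lists three ways to serve the model: in-process via llama-cpp-python, or over an API from Ollama or text-generation-webui. As a minimal sketch (not part of this commit or the integration's own code), a request against Ollama's `/api/chat` endpoint could look like the following; the model tag `home-3b-v3` is an assumption for whatever name you registered the GGUF under:

```python
# Illustrative sketch, not code from this repository: query a Home LLM GGUF
# that has been imported into Ollama, using Ollama's /api/chat endpoint.
# The model tag "home-3b-v3" is an assumption.
import json
import urllib.request

payload = {
    "model": "home-3b-v3",  # assumed tag
    "stream": False,
    "messages": [
        {"role": "system", "content": "You are 'Al', a helpful AI Assistant that controls the devices in a house."},
        {"role": "user", "content": "turn on the office light"},
    ],
}

request = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    reply = json.loads(response.read())

# Ollama returns the assistant turn under "message" when streaming is off.
print(reply["message"]["content"])
```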
````diff
@@ -30,6 +28,7 @@ The latest models can be found on HuggingFace:

 3B v2 (Based on Phi-2): https://huggingface.co/acon96/Home-3B-v2-GGUF (ChatML prompt format)
 3B v1 (Based on Phi-2): https://huggingface.co/acon96/Home-3B-v1-GGUF (ChatML prompt format)
+1B v2 (Based on Phi-1.5): https://huggingface.co/acon96/Home-1B-v2-GGUF (ChatML prompt format)
 1B v1 (Based on Phi-1.5): https://huggingface.co/acon96/Home-1B-v1-GGUF (ChatML prompt format)

 </details>
````
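For reference, any of the GGUF repositories listed above can also be pulled and run locally without a server. A minimal sketch with `huggingface_hub` and `llama-cpp-python` follows; the quantization filename is an assumption, so check the repository's file listing for the real names:

```python
# Illustrative sketch, not code from this repository: download one of the
# GGUF quantizations listed above and load it with llama-cpp-python.
# The filename below is an assumption.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="acon96/Home-1B-v3-GGUF",
    filename="Home-1B-v3.q4_k_m.gguf",  # assumed quantization filename
)

llm = Llama(model_path=model_path, n_ctx=2048)

# Recent llama-cpp-python builds read the chat template stored in the GGUF
# metadata, so the Zephyr/ChatML formatting noted above is applied for you.
result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are 'Al', a helpful AI Assistant that controls the devices in a house."},
        {"role": "user", "content": "is the office light on?"},
    ],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```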
````diff
@@ -41,6 +40,7 @@ The model can be used as an "instruct" type model using the [ChatML](https://git
 Example "system" prompt:
 ```
 You are 'Al', a helpful AI Assistant that controls the devices in a house. Complete the following task as instructed with the information provided only.
+The current time and date is 08:12 AM on Thursday March 14, 2024
 Services: light.turn_off(), light.turn_on(brightness,rgb_color), fan.turn_on(), fan.turn_off()
 Devices:
 light.office 'Office Light' = on;80%
````
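The hunk above adds the current date and time to the example system prompt. Purely to illustrate the prompt shape (the integration builds this from Home Assistant's live state; the helper and hard-coded devices below are hypothetical):

```python
# Hypothetical sketch of assembling a system prompt in the format shown above.
# The integration derives services and device states from Home Assistant;
# here they are hard-coded stand-ins.
from datetime import datetime

SERVICES = "light.turn_off(), light.turn_on(brightness,rgb_color), fan.turn_on(), fan.turn_off()"
DEVICES = {
    "light.office": ("Office Light", "on;80%"),
    "fan.office": ("Office Fan", "off"),
}

def build_system_prompt(now: datetime) -> str:
    lines = [
        "You are 'Al', a helpful AI Assistant that controls the devices in a house. "
        "Complete the following task as instructed with the information provided only.",
        f"The current time and date is {now.strftime('%I:%M %p on %A %B %d, %Y')}",
        f"Services: {SERVICES}",
        "Devices:",
    ]
    for entity_id, (friendly_name, state) in DEVICES.items():
        lines.append(f"{entity_id} '{friendly_name}' = {state}")
    return "\n".join(lines)

# Reproduces the example above, including the date line added by this commit.
print(build_system_prompt(datetime(2024, 3, 14, 8, 12)))
```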
````diff
@@ -80,30 +80,26 @@ The dataset is available on HuggingFace: https://huggingface.co/datasets/acon96/
 The source for the dataset is in the [data](/data) of this repository.

 ### Training
-The 3B model was trained as a LoRA on an RTX 3090 (24GB) using the following settings for the custom training script. The embedding weights were "saved" and trained normally along with the rank matricies in order to train the newly added tokens to the embeddings. The full model is merged together at the end. Training took approximately 10 hours.
+The 3B model was trained as a full fine-tuning on 2x RTX 4090 (48GB). Training time took approximately 28 hours. It was trained on the `--large` dataset variant.

 <details>
 <summary>Training Arguments</summary>

 ```console
-python3 train.py \
+accelerate launch --config_file fsdp_config.yaml train.py \
     --run_name home-3b \
-    --base_model microsoft/phi-2 \
-    --add_pad_token \
-    --add_chatml_tokens \
+    --base_model stabilityai/stablelm-zephyr-3b \
     --bf16 \
-    --train_dataset data/home_assistant_alpaca_merged_train.json \
-    --learning_rate 1e-5 \
-    --save_steps 1000 \
-    --micro_batch_size 2 --gradient_checkpointing \
+    --train_dataset data/home_assistant_train.jsonl \
+    --learning_rate 1e-5 --batch_size 64 --epochs 1 \
+    --micro_batch_size 2 --gradient_checkpointing --group_by_length \
     --ctx_size 2048 \
-    --group_by_length \
-    --use_lora --lora_rank 32 --lora_alpha 64 --lora_modules fc1,fc2,q_proj,v_proj,dense --lora_modules_to_save embed_tokens,lm_head --lora_merge
+    --save_steps 50 --save_total_limit 10 --eval_steps 100 --logging_steps 2
 ```

 </details>

-The 1B model was trained as a full fine-tuning on on an RTX 3090 (24GB). Training took approximately 2.5 hours.
+The 1B model was trained as a full fine-tuning on an RTX 3090 (24GB). Training took approximately 2.5 hours. It was trained on the `--medium` dataset variant.

 <details>
 <summary>Training Arguments</summary>
````
````diff
@@ -111,14 +107,13 @@ The 1B model was trained as a full fine-tuning on on an RTX 3090 (24GB). Trainin
 ```console
 python3 train.py \
     --run_name home-1b \
-    --base_model microsoft/phi-1_5 \
-    --add_pad_token \
-    --add_chatml_tokens \
+    --base_model TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
     --bf16 \
-    --train_dataset data/home_assistant_train.json \
-    --learning_rate 1e-5 \
-    --micro_batch_size 4 --gradient_checkpointing \
-    --ctx_size 2048
+    --train_dataset data/home_assistant_train.jsonl \
+    --test_dataset data/home_assistant_test.jsonl \
+    --learning_rate 2e-5 --batch_size 32 \
+    --micro_batch_size 8 --gradient_checkpointing --group_by_length \
+    --ctx_size 2048 --save_steps 100 --save_total_limit 10
 ```

 </details>
````
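A closing note on the two training commands in this diff: both pair `--batch_size` with `--micro_batch_size`. Assuming the custom `train.py` derives gradient-accumulation steps as `batch_size / (micro_batch_size * num_gpus)` (a common convention, not something the README states), the effective schedules work out as follows:

```python
# Back-of-the-envelope check of the training arguments shown above, under the
# ASSUMPTION that train.py computes accumulation as
#   batch_size / (micro_batch_size * num_gpus).
def accumulation_steps(batch_size: int, micro_batch_size: int, num_gpus: int) -> int:
    assert batch_size % (micro_batch_size * num_gpus) == 0
    return batch_size // (micro_batch_size * num_gpus)

# 3B run: 2x RTX 4090, --batch_size 64 --micro_batch_size 2
print(accumulation_steps(64, 2, 2))  # -> 16 accumulation steps per optimizer step

# 1B run: 1x RTX 3090, --batch_size 32 --micro_batch_size 8
print(accumulation_steps(32, 8, 1))  # -> 4 accumulation steps per optimizer step
```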