add llama 3 to list of supported models

Alex O'Connell
2024-04-21 23:47:55 -04:00
parent 573207f7ff
commit 435cbfed21
2 changed files with 4 additions and 2 deletions


@@ -20,7 +20,7 @@
"downloaded_model_quantization": "Downloaded model quantization",
"huggingface_model": "HuggingFace Model"
},
"description": "Please select a model to use.\n\n**Models supported out of the box:**\n1. [Home LLM](https://huggingface.co/collections/acon96/home-llm-6618762669211da33bb22c5a): Home 3B & Home 1B\n2. Mistral: [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) or [Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)",
"description": "Please select a model to use.\n\n**Models supported out of the box:**\n1. [Home LLM](https://huggingface.co/collections/acon96/home-llm-6618762669211da33bb22c5a): Home 3B & Home 1B\n2. Mistral: [Mistral 7B](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) or [Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)\nLlama 3: [8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) and [70B](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)",
"title": "Select Model"
},
"remote_model": {


@@ -59,7 +59,9 @@ Currently supported prompt formats are:
2. Vicuna
3. Alpaca
4. Mistral
-5. None (useful for foundation models)
+5. Zephyr
+6. Llama 3
+7. None (useful for foundation models)
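
To illustrate the newly added format, here is a minimal Python sketch of the Llama 3 instruct prompt layout. The special tokens follow Meta's published chat template; the helper name is illustrative and this is not the integration's actual prompt formatter.

```python
# Minimal sketch of the Llama 3 instruct prompt layout.
# Special tokens follow Meta's published chat template; this is not the
# integration's actual prompt-formatting code.
def format_llama3_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(format_llama3_prompt("You control a smart home.", "Turn off the kitchen light."))
```
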
## Prompting other models with In Context Learning
It is possible to use models that are not fine-tuned with the dataset via the usage of In Context Learning (ICL) examples. These examples condition the model to output the correct JSON schema without any fine-tuning of the model.
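
As a rough sketch of how such ICL examples might be prepended to a request, consider the snippet below. The helper name, example turns, and JSON fields (`to_say`, `service`, `target_device`) are illustrative assumptions, not the repository's actual dataset format or schema.

```python
# Rough sketch of conditioning an untuned model with in-context examples.
# The example turns and JSON fields are illustrative assumptions only.
icl_examples = [
    {
        "request": "Turn on the living room lamp.",
        "response": '{"to_say": "Turning on the lamp.", "service": "light.turn_on", "target_device": "light.living_room_lamp"}',
    },
]

def build_icl_prompt(user_request: str) -> str:
    # Show the model a few request/response pairs, then the real request,
    # so it imitates the JSON schema without any fine-tuning.
    turns = [f"Request: {ex['request']}\nResponse: {ex['response']}" for ex in icl_examples]
    turns.append(f"Request: {user_request}\nResponse:")
    return "\n\n".join(turns)

print(build_icl_prompt("Close the garage door."))
```
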