From 69b3c00a525efea4becc5a29292b3c0247b45cd9 Mon Sep 17 00:00:00 2001
From: di-sukharev
Date: Mon, 2 Sep 2024 11:13:02 +0300
Subject: [PATCH] docs(README): update OCO_AI_PROVIDER and OCO_MODEL
 instructions for clarity and add valid model name options to OCO_MODEL
 description

---
 README.md | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index a19e631..d21e7e7 100644
--- a/README.md
+++ b/README.md
@@ -76,7 +76,8 @@ oco config set OCO_AI_PROVIDER='ollama'
 If you want to use a model other than mistral (default), you can do so by setting the `OCO_AI_PROVIDER` environment variable as follows:
 
 ```sh
-oco config set OCO_AI_PROVIDER='ollama/llama3:8b'
+oco config set OCO_AI_PROVIDER='ollama'
+oco config set OCO_MODEL='llama3:8b'
 ```
 
 If you have ollama that is set up in docker/ on another machine with GPUs (not locally), you can change the default endpoint url.
@@ -127,12 +128,12 @@ OCO_TOKENS_MAX_OUTPUT=
 OCO_OPENAI_BASE_PATH=
 OCO_DESCRIPTION=
 OCO_EMOJI=
-OCO_MODEL=
+OCO_MODEL=
 OCO_LANGUAGE=
 OCO_MESSAGE_TEMPLATE_PLACEHOLDER=
 OCO_PROMPT_MODULE=
 OCO_ONE_LINE_COMMIT=
-OCO_AI_PROVIDER=
+OCO_AI_PROVIDER=
 ...
 ```
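
For quick reference, this is the configuration flow the updated README documents, run end to end. It is a sketch only: `llama3:8b` is the example model tag from the diff (any tag already pulled into your local ollama should work the same way), and the final `git add` + `oco` step assumes opencommit's standard staged-changes workflow.

```sh
# Point opencommit at a locally running ollama instance
oco config set OCO_AI_PROVIDER='ollama'

# Pick the ollama model tag to generate commit messages with
# (llama3:8b is the example from the patch; mistral is the default)
oco config set OCO_MODEL='llama3:8b'

# Stage changes and generate a commit message as usual
git add .
oco
```

Splitting the provider and the model into two settings, as this patch does, matches how the other providers are configured and avoids overloading `OCO_AI_PROVIDER` with a `provider/model` string.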