userInputCodeContext previously skipped the context block only when
context was exactly '' or ' '. Anything else (e.g. a string of
whitespace, null, undefined) would inject an empty or
whitespace-only <context>…</context> tag into the system prompt.
Trim the input and guard against null/undefined:
- accept string | undefined | null
- normalize via `(context ?? '').trim()`
- skip the injection whenever the trimmed value is empty
Also inline the INIT_MAIN_PROMPT IIFE into a normal function body
and introduce a `content` local, removing a layer of nesting that
obscured the prompt assembly. Behavior is unchanged.
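A minimal sketch of the resulting guard (the injected wording is
illustrative):

    const userInputCodeContext = (
      context: string | null | undefined
    ): string => {
      // Normalize: treat null/undefined as '', ignore whitespace-only input.
      const trimmed = (context ?? '').trim();
      if (trimmed === '') return '';
      return `<context>${trimmed}</context>`;
    };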
getCommitConvention gated the entire GitMoji branch on
config.OCO_EMOJI, so --fgm was silently ignored unless the user had
previously run `oco config set OCO_EMOJI true`. Since OCO_EMOJI
defaults to false, --fgm was a no-op for most users.
This violates the standard CLI convention that command-line flags
should override configuration. Restructure getCommitConvention so
that --fgm forces FULL_GITMOJI_SPEC regardless of OCO_EMOJI:
--fgm=true → FULL_GITMOJI_SPEC
--fgm=false + OCO_EMOJI=true → GITMOJI_HELP (unchanged)
--fgm=false + OCO_EMOJI=false → CONVENTIONAL_COMMIT_KEYWORDS (unchanged)
No other files need changes — the fgm flag was already threaded
correctly through cli.ts → commit.ts → generateCommitMessageByDiff
→ getMainCommitPrompt → getCommitConvention.
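A sketch of the restructured precedence (constants as named above;
the exact signature is assumed):

    // Stand-ins for the real prompt fragments.
    declare const FULL_GITMOJI_SPEC: string;
    declare const GITMOJI_HELP: string;
    declare const CONVENTIONAL_COMMIT_KEYWORDS: string;

    function getCommitConvention(
      fullGitMojiSpec: boolean, // --fgm
      emojiEnabled: boolean // config.OCO_EMOJI
    ): string {
      if (fullGitMojiSpec) return FULL_GITMOJI_SPEC; // flag wins
      if (emojiEnabled) return GITMOJI_HELP;
      return CONVENTIONAL_COMMIT_KEYWORDS;
    }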
When a staged diff exceeds MAX_REQUEST_TOKENS, generateCommitMessageByDiff
routes through getCommitMsgsPromisesFromFileDiffs →
getMessagesPromisesByChangesInFile → generateCommitMessageChatCompletionPrompt
to produce one sub-prompt per chunk. That entire chain was threading
`fullGitMojiSpec` but never `context`, so `-c/--context` was silently
dropped for any diff large enough to trigger chunking, even though
the simple (non-chunked) path forwarded it correctly.
Add a `context` parameter to each of the three helpers and thread it
through to generateCommitMessageChatCompletionPrompt so the user's
context is present in every sub-prompt.
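Illustrative signatures for the threading (the real parameter lists
and return types differ; only the added `context` parameter matters):

    const generateCommitMessageChatCompletionPrompt = (
      diff: string,
      fullGitMojiSpec: boolean,
      context: string
    ) => [{ role: 'system' as const, content: `...${context}...` }];

    const getMessagesPromisesByChangesInFile = (
      chunks: string[],
      fullGitMojiSpec: boolean,
      context: string // new: forwarded into every chunk's sub-prompt
    ) =>
      chunks.map((chunk) =>
        generateCommitMessageChatCompletionPrompt(chunk, fullGitMojiSpec, context)
      );

    const getCommitMsgsPromisesFromFileDiffs = (
      fileDiffs: string[][],
      fullGitMojiSpec: boolean,
      context: string
    ) =>
      fileDiffs.flatMap((chunks) =>
        getMessagesPromisesByChangesInFile(chunks, fullGitMojiSpec, context)
      );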
When the user answers "No" at the confirmation prompt and chooses to
regenerate, the recursive call to generateCommitMessageFromGitDiff
forwarded only `diff`, `extraArgs`, and `fullGitMojiSpec`. Both
`context` and `skipCommitConfirmation` were silently dropped, so:
- `-c/--context` was honored only on the first attempt and lost on
every regeneration;
- `-y/--yes` was honored only on the first attempt, forcing a manual
confirmation after regeneration.
Forward both fields through the recursive call so the user's flags
are respected for the full lifetime of the commit() invocation.
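A sketch of the fixed call site (the argument shape is assumed from
the description above):

    interface GenerateArgs {
      diff: string;
      extraArgs: string[];
      fullGitMojiSpec: boolean;
      context: string;
      skipCommitConfirmation: boolean;
    }

    declare function generateCommitMessageFromGitDiff(
      args: GenerateArgs
    ): Promise<void>;

    // Forward every field, not just the first three.
    const regenerate = (args: GenerateArgs) =>
      generateCommitMessageFromGitDiff({
        diff: args.diff,
        extraArgs: args.extraArgs,
        fullGitMojiSpec: args.fullGitMojiSpec,
        context: args.context, // previously dropped
        skipCommitConfirmation: args.skipCommitConfirmation // previously dropped
      });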
Same class of bug as the -c/--context fix: these flags could leak
into extraArgs and be forwarded to the internal `git commit` call,
causing unexpected behavior.
Extend the extraArgs sanitization to also strip -y, --yes, --fgm,
and their values.
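A sketch of the combined sanitizer covering this entry and the one
below (helper name hypothetical; a flag's separate value token is
consumed when it does not itself look like a flag):

    const OWN_FLAGS = new Set(['-c', '--context', '-y', '--yes', '--fgm']);

    function sanitizeExtraArgs(extraArgs: string[]): string[] {
      const result: string[] = [];
      for (let i = 0; i < extraArgs.length; i++) {
        const arg = extraArgs[i];
        const [flag] = arg.split('='); // normalizes --context=value
        if (OWN_FLAGS.has(flag)) {
          const next = extraArgs[i + 1];
          // Skip a separate value token, e.g. `--context "some text"`.
          if (!arg.includes('=') && next !== undefined && !next.startsWith('-')) {
            i++;
          }
          continue;
        }
        result.push(arg);
      }
      return result;
    }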
cleye's ignoreArgv passes unconsumed flags and arguments through to
the internal `git commit` execa call. Although -c/--context is
defined as a known cleye flag, a defensive guard is needed to strip
it from extraArgs in case it leaks through, which would conflict
with git's own handling.
Add a sanitization step at the entry of commit() that filters -c,
--context, and their values from extraArgs before they are forwarded
to the git commit invocation.
Both OllamaEngine and MLXEngine had two bugs in URL construction:
1. `axios.create({url: ...})` was used instead of `baseURL`, but `url`
in an axios config sets the default request URL, not a base prefix.
The configured URL was therefore ignored whenever `.post()` was
called with a path.
2. `this.client.getUri(this.config)` was used to resolve the POST URL,
but passing the engine config (which contains non-axios properties
like `apiKey`, `model`, etc.) produced malformed URLs. When
`apiKey` is null (the default for Ollama), the URL resolved to
`http://localhost:11434/null`, returning HTTP 405.
Fix: construct the full endpoint URL once in the constructor and pass
it directly to `axios.post()`, matching how FlowiseEngine already works.
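A sketch of the fixed wiring (config fields are assumed; Ollama's
default chat endpoint shown):

    import axios, { AxiosInstance } from 'axios';

    class OllamaEngine {
      private client: AxiosInstance;
      private endpoint: string;

      constructor(private config: { baseURL?: string; model: string }) {
        // Build the full endpoint URL exactly once.
        const base = this.config.baseURL ?? 'http://localhost:11434';
        this.endpoint = `${base}/api/chat`;
        this.client = axios.create({
          headers: { 'Content-Type': 'application/json' }
        });
      }

      async generateCommitMessage(
        messages: Array<{ role: string; content: string }>
      ) {
        // Pass the prebuilt URL straight to post(); no getUri(), and no
        // engine config leaking into axios URL resolution.
        const { data } = await this.client.post(this.endpoint, {
          model: this.config.model,
          messages,
          stream: false
        });
        return data;
      }
    }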
Co-Authored-By: Claude <noreply@anthropic.com>
Remove Record<string, unknown> type annotation to let TypeScript infer
the params object type, preserving type checking on all properties.
Cast to ChatCompletionCreateParamsNonStreaming at the create() call site
to accommodate the SDK's missing max_completion_tokens type. Add unit
test for reasoning model detection regex.
Signed-off-by: majiayu000 <1835304752@qq.com>
Newer OpenAI models (o1, o3, o4, gpt-5 series) reject the max_tokens
parameter and require max_completion_tokens instead. These reasoning
models also do not support temperature and top_p parameters.
Conditionally set the correct token parameter and omit unsupported
sampling parameters based on the model name.
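A sketch combining this entry and the refactor above; the detection
regex follows the model families listed, and the sampling defaults
are illustrative:

    import OpenAI from 'openai';

    const isReasoningModel = (model: string): boolean =>
      /^(o1|o3|o4|gpt-5)/.test(model);

    async function createCompletion(
      client: OpenAI,
      model: string,
      messages: OpenAI.Chat.ChatCompletionMessageParam[],
      maxTokens: number
    ) {
      // No type annotation on params: let TypeScript infer it so every
      // property stays checked.
      const params = {
        model,
        messages,
        ...(isReasoningModel(model)
          ? { max_completion_tokens: maxTokens } // no temperature/top_p
          : { max_tokens: maxTokens, temperature: 0, top_p: 0.1 })
      };
      // Cast at the call site because the SDK's types may not yet include
      // max_completion_tokens.
      return client.chat.completions.create(
        params as OpenAI.Chat.ChatCompletionCreateParamsNonStreaming
      );
    }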
Fixes #529
Signed-off-by: majiayu000 <1835304752@qq.com>
Integrate undici ProxyAgent for native fetch and HttpsProxyAgent for
the axios, openai, and anthropic clients. Upgrade
@google/generative-ai to fix #536. Add an OCO_PROXY config option.
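A sketch of the wiring under these assumptions (config plumbing and
client construction omitted):

    import axios from 'axios';
    import { ProxyAgent, setGlobalDispatcher } from 'undici';
    import { HttpsProxyAgent } from 'https-proxy-agent';

    function configureProxy(proxyUrl: string | undefined) {
      if (!proxyUrl) return undefined;

      // Route native fetch (used by some SDKs) through undici's dispatcher.
      setGlobalDispatcher(new ProxyAgent(proxyUrl));

      // axios-based engines get an explicit agent; axios' own proxy
      // handling is disabled so the agent is authoritative.
      const httpsAgent = new HttpsProxyAgent(proxyUrl);
      const http = axios.create({ httpsAgent, proxy: false });
      return { httpsAgent, http };
    }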
Co-authored-by: uni <uni@hanwei.ink>
Move isHookCalled() check before runMigrations() and
checkIsLatestVersion() so that during git rebase, each pick commit
exits immediately without expensive I/O and network calls.
Also add a missing await on prepareCommitMessageHook() so async
errors are handled properly.
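A sketch of the reordered startup path (names as above; signatures
assumed):

    declare function isHookCalled(): boolean;
    declare function prepareCommitMessageHook(): Promise<void>;
    declare function runMigrations(): Promise<void>;
    declare function checkIsLatestVersion(): Promise<void>;
    declare function commit(): Promise<void>;

    async function run(): Promise<void> {
      if (isHookCalled()) {
        // Rebase fast path: no migrations, no version check, no network.
        await prepareCommitMessageHook(); // awaited so errors surface
        return;
      }
      await runMigrations();
      await checkIsLatestVersion();
      await commit();
    }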
Closes #493
Signed-off-by: majiayu000 <1835304752@qq.com>
- Move hardcoded baseURL before ...config spread in constructor
- This allows user config to override the default DeepSeek API URL
- Fixes issue #539 where OCO_API_URL was ignored by DeepSeek engine
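With object spread, later properties win, so the default must come
first; a sketch (default URL per DeepSeek's OpenAI-compatible API):

    class DeepseekEngine {
      config: { baseURL: string; apiKey?: string };

      constructor(config: { baseURL?: string; apiKey?: string }) {
        this.config = {
          baseURL: 'https://api.deepseek.com/v1', // hardcoded default first
          ...config // user config spread after, so OCO_API_URL overrides
        };
      }
    }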
This update introduces centralized error handling for the AI engines.
A new `normalizeEngineError` function standardizes error responses,
and dedicated error classes cover insufficient credits, rate limits,
and service availability. Errors are formatted with user-friendly
messages and recovery suggestions, improving feedback when an AI
service fails.
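A sketch of the described shape; only normalizeEngineError is named
in the entry, so the class names, status codes, and messages here are
illustrative:

    class EngineError extends Error {
      constructor(message: string, readonly suggestion?: string) {
        super(message);
      }
    }
    class InsufficientCreditsError extends EngineError {}
    class RateLimitError extends EngineError {}
    class ServiceUnavailableError extends EngineError {}

    function normalizeEngineError(err: unknown): EngineError {
      const status = (err as { status?: number })?.status;
      if (status === 402)
        return new InsufficientCreditsError(
          'Insufficient credits.',
          'Top up your account or switch providers.'
        );
      if (status === 429)
        return new RateLimitError('Rate limit exceeded.', 'Retry shortly.');
      if (status === 503)
        return new ServiceUnavailableError(
          'Service unavailable.',
          'Try again later.'
        );
      return new EngineError(
        err instanceof Error ? err.message : String(err)
      );
    }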
Add comprehensive setup command with provider selection, API key
configuration, and model selection. Include error recovery for
model-not-found scenarios with suggested alternatives and automatic
retry functionality. Update Anthropic model list with latest versions
and add provider metadata for better user experience.
Introduces AIMLAPI as a supported AI provider, including model list, config validation, and engine implementation. Updates README and engine selection logic to integrate AIMLAPI for chat completions.
Refactor AimlApiEngine response handling
Removed dependency on removeContentTags and simplified message content extraction. Minor header formatting fix for HTTP-Referer. This streamlines the response handling and reduces unnecessary processing.
The changes:
1. Expanded the skip condition to include additional AI providers (DEEPSEEK, GROQ, MISTRAL, MLX, OPENROUTER) beyond just TEST
2. Maintained existing TEST provider skip behavior
3. Added explicit comment explaining the skip logic
The why:
Prevents migration execution for unsupported AI providers to avoid potential runtime errors or data inconsistencies, ensuring migrations only run for properly handled configurations.
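A sketch of the expanded guard (provider identifiers assumed to match
the config enum):

    const MIGRATION_UNSUPPORTED_PROVIDERS = new Set([
      'test', // existing skip behavior, kept
      'deepseek',
      'groq',
      'mistral',
      'mlx',
      'openrouter'
    ]);

    // Migrations only run for providers they were written against.
    const shouldSkipMigrations = (provider: string): boolean =>
      MIGRATION_UNSUPPORTED_PROVIDERS.has(provider);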
Add OpenRouterEngine class and integrate it into the configuration
system. OpenRouter provides access to 300+ AI models through a
unified API, expanding model availability for commit message
generation beyond existing providers.
When the user wants one-line commits, the system prompt and the
user/assistant one-shot example contradicted each other, confusing
the LLM. This fix modifies the assistant output so that the prompt
and the one-shot example are consistent.
This commit adds a new 'describe' mode to the config command, allowing users
to get detailed information about configuration parameters. It includes:
1. New CONFIG_MODES.describe enum value
2. Functions to generate and print help messages for config parameters
3. Updated configCommand to handle the new 'describe' mode
4. README updates to document the new 'describe' functionality
- export parseCustomHeaders from src/utils/engine.ts
- use parseCustomHeaders in OpenAiEngine for config.customHeaders
- remove try/catch and inline JSON.parse logic
- update config test to expect headers as object and drop JSON.parse
Centralize header parsing for reuse and simplify engine code
Update tests to match new header format for clarity
Use OpenAI.ClientOptions for stronger typing and clarity
Extract custom headers parsing into parseCustomHeaders util
Simplify getEngine by delegating header parsing to helper
Improve maintainability and reduce code duplication
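A sketch of the extracted helper as described across these entries
(the warning text is illustrative):

    function parseCustomHeaders(
      raw: string | Record<string, string> | undefined
    ): Record<string, string> {
      if (!raw) return {};
      if (typeof raw === 'object') return raw; // already parsed
      try {
        return JSON.parse(raw);
      } catch {
        console.warn(
          'Invalid OCO_API_CUSTOM_HEADERS: expected a JSON object; ignoring.'
        );
        return {};
      }
    }

    // e.g. OCO_API_CUSTOM_HEADERS='{"Authorization":"Bearer <token>"}'
    const headers = parseCustomHeaders(process.env.OCO_API_CUSTOM_HEADERS);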
Add OCO_API_CUSTOM_HEADERS variable to README, config enum,
and env parsing to allow JSON string of custom headers.
Validate that custom headers are valid JSON in config validator.
Extend AiEngineConfig with customHeaders and pass headers to
OllamaEngine and OpenAiEngine clients when creating requests.
Parse custom headers in utils/engine and warn on invalid format.
Add unit tests to ensure OCO_API_CUSTOM_HEADERS is handled
correctly and merged from env over global config.
This enables users to send additional headers such as
Authorization or tracing headers with LLM API calls.
The space normalization logic now replaces runs of spaces and tabs
with a single space while preserving newlines. This keeps the
content's formatting intact, in particular empty-line requirements
and max line length.
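A sketch of the narrowed replacement:

    // Collapse runs of spaces/tabs only; '\n' survives, so empty-line
    // requirements and line counts are preserved.
    // Before (illustrative): content.replace(/\s+/g, ' ') also ate '\n'.
    const normalizeSpaces = (content: string): string =>
      content.replace(/[ \t]+/g, ' ');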
The commitlint configuration and prompts are refactored for clarity
and consistency. The type assertion for commitLintConfig now uses
'as any' to sidestep type incompatibilities, and formatting in the
prompts is adjusted for readability and alignment with the defined
conventions. These changes streamline commit message generation and
improve maintainability.