Commit Graph

242 Commits

Author SHA1 Message Date
SOV710
42029fff4e refactor(prompts): null-safe, trim-aware user context handling
The previous userInputCodeContext only skipped the context block
when context was exactly '' or ' '. Anything else (e.g. a string of
whitespace, null, undefined) would inject an empty or
whitespace-only <context>…</context> tag into the system prompt.

Trim the input and guard against null/undefined:
  - accept string | undefined | null
  - normalize via `(context ?? '').trim()`
  - skip the injection whenever the trimmed value is empty

Also inline the INIT_MAIN_PROMPT IIFE into a normal function body
and introduce a `content` local, removing a layer of nesting that
obscured the prompt assembly. Behavior is unchanged.
2026-04-05 03:58:40 +00:00
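The normalization described in this commit can be sketched as follows. This is a minimal stand-in, assuming a function shape and `<context>` tag layout based only on the commit message; the real prompt assembly in the repo may differ.

```typescript
// Sketch of the null-safe, trim-aware context handling (assumed shape).
const userInputCodeContext = (context: string | null | undefined): string => {
  const trimmed = (context ?? '').trim();
  // Skip the injection whenever the trimmed value is empty
  // (covers '', whitespace-only strings, null, and undefined).
  if (trimmed === '') return '';
  return `<context>${trimmed}</context>`;
};
```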
SOV710
4d767da9e5 fix(prompts): make --fgm override OCO_EMOJI config
getCommitConvention gated the entire GitMoji branch on
config.OCO_EMOJI, so --fgm was silently ignored unless the user had
previously run `oco config set OCO_EMOJI true`. Since OCO_EMOJI
defaults to false, --fgm was a no-op for most users.

This violates the standard CLI convention that command-line flags
should override configuration. Restructure getCommitConvention so
that --fgm forces FULL_GITMOJI_SPEC regardless of OCO_EMOJI:

  --fgm=true                    → FULL_GITMOJI_SPEC
  --fgm=false + OCO_EMOJI=true  → GITMOJI_HELP (unchanged)
  --fgm=false + OCO_EMOJI=false → CONVENTIONAL_COMMIT_KEYWORDS (unchanged)

No other files need changes — the fgm flag was already threaded
correctly through cli.ts → commit.ts → generateCommitMessageByDiff
→ getMainCommitPrompt → getCommitConvention.
2026-04-05 03:58:40 +00:00
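The precedence table above can be sketched as a pure function. The constant names are placeholders for the repo's real prompt constants, and the two-boolean signature is a simplification of the actual `getCommitConvention`.

```typescript
// Hedged sketch: --fgm forces the full GitMoji spec regardless of config.
type Convention =
  | 'FULL_GITMOJI_SPEC'
  | 'GITMOJI_HELP'
  | 'CONVENTIONAL_COMMIT_KEYWORDS';

const getCommitConvention = (
  fullGitMojiSpec: boolean, // the --fgm flag
  ocoEmoji: boolean // the OCO_EMOJI config value
): Convention => {
  if (fullGitMojiSpec) return 'FULL_GITMOJI_SPEC'; // flag overrides config
  if (ocoEmoji) return 'GITMOJI_HELP';
  return 'CONVENTIONAL_COMMIT_KEYWORDS';
};
```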
SOV710
361327a8fe fix(generate): forward context through chunked large-diff prompt path
When a staged diff exceeds MAX_REQUEST_TOKENS, generateCommitMessageByDiff
routes through getCommitMsgsPromisesFromFileDiffs →
getMessagesPromisesByChangesInFile → generateCommitMessageChatCompletionPrompt
to produce one sub-prompt per chunk. That entire chain was threading
`fullGitMojiSpec` but never `context`, so `-c/--context` was silently
dropped for any diff large enough to trigger chunking, even though
the simple (non-chunked) path forwarded it correctly.

Add a `context` parameter to each of the three helpers and thread it
through to generateCommitMessageChatCompletionPrompt so the user's
context is present in every sub-prompt.
2026-04-05 03:58:40 +00:00
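The plumbing fix can be illustrated with simplified signatures (the real helpers return promises of chat messages; these stand-ins return strings, and the prompt layout is assumed):

```typescript
// After the fix, every chunk's sub-prompt receives the user context,
// not just the non-chunked path.
const generateCommitMessageChatCompletionPrompt = (
  diff: string,
  fullGitMojiSpec: boolean,
  context: string
): string => (context ? `<context>${context}</context>\n` : '') + diff;

const getMessagesPromisesByChangesInFile = (
  chunks: string[],
  fullGitMojiSpec: boolean,
  context: string // newly threaded parameter
): string[] =>
  chunks.map((chunk) =>
    generateCommitMessageChatCompletionPrompt(chunk, fullGitMojiSpec, context)
  );
```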
SOV710
3a2fa11fcd fix(commit): preserve context and skip-confirm flag across regenerate
When the user answers "No" at the confirmation prompt and chooses to
regenerate, the recursive call to generateCommitMessageFromGitDiff
forwarded only `diff`, `extraArgs`, and `fullGitMojiSpec`. Both
`context` and `skipCommitConfirmation` were silently dropped, so:

- `-c/--context` was honored only on the first attempt and lost on
  every regeneration;
- `-y/--yes` was honored only on the first attempt, forcing a manual
  confirmation after regeneration.

Forward both fields through the recursive call so the user's flags
are respected for the full lifetime of the commit() invocation.
2026-04-05 03:58:40 +00:00
SOV710
4056bfa547 fix(cli): strip -y/--fgm from extraArgs to prevent git commit conflict
Same class of bug as the -c/--context fix: these flags could leak
into extraArgs and be forwarded to the internal `git commit` call,
causing unexpected behavior.

Extend the extraArgs sanitization to also strip -y, --yes, --fgm,
and their values.
2026-04-05 03:58:40 +00:00
SOV710
a48d33096a fix(cli): strip -c/--context from extraArgs to prevent git commit conflict
cleye's ignoreArgv passes unconsumed flags and arguments through to
the internal `git commit` execa call. Although -c/--context is
defined as a known cleye flag, a defensive guard is needed to strip
it from extraArgs in case it leaks through, which would conflict
with git's own handling.

Add a sanitization step at the entry of commit() that filters -c,
--context, and their values from extraArgs before they are forwarded
to the git commit invocation.
2026-04-05 03:58:40 +00:00
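Both extraArgs commits describe the same sanitization step; a hypothetical version, with the flag list taken from the two messages and the inline `--context=` form assumed, might look like:

```typescript
// Flags consuming a separate value argument vs. standalone booleans.
const VALUE_FLAGS = new Set(['-c', '--context']);
const BOOLEAN_FLAGS = new Set(['-y', '--yes', '--fgm']);

const sanitizeExtraArgs = (args: string[]): string[] => {
  const out: string[] = [];
  for (let i = 0; i < args.length; i++) {
    const arg = args[i];
    if (VALUE_FLAGS.has(arg)) {
      i += 1; // drop the flag and its value
      continue;
    }
    if (BOOLEAN_FLAGS.has(arg)) continue;
    if (arg.startsWith('--context=')) continue; // inline value form
    out.push(arg); // anything else is forwarded to `git commit`
  }
  return out;
};
```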
keith666666
0ee82f7430 fix(engine): fix broken URL resolution in Ollama and MLX engines
Both OllamaEngine and MLXEngine had two bugs in URL construction:

1. `axios.create({url: ...})` was used instead of `baseURL`, but `url`
   in axios config sets a default request URL - not a base prefix. This
   caused the URL to be ignored when `.post()` was called with a path.

2. `this.client.getUri(this.config)` was used to resolve the POST URL,
   but passing the engine config (which contains non-axios properties
   like `apiKey`, `model`, etc.) produced malformed URLs. When
   `apiKey` is null (the default for Ollama), the URL resolved to
   `http://localhost:11434/null`, returning HTTP 405.

Fix: construct the full endpoint URL once in the constructor and pass
it directly to `axios.post()`, matching how FlowiseEngine already works.

Co-Authored-By: Claude <noreply@anthropic.com>
2026-04-02 10:52:32 +08:00
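The essence of the fix (an axios-free stand-in): build the full endpoint URL once from the engine config, instead of deriving it per-request from a config object that contains non-URL properties. The default host is Ollama's standard `localhost:11434`; the exact path and config fields here are assumptions.

```typescript
interface OllamaLikeConfig {
  apiUrl?: string;
  apiKey?: string | null; // null by default for Ollama; must not leak into the URL
  model: string;
}

// Construct the endpoint once (as the constructor now does) and reuse it
// for every POST, rather than calling client.getUri(config).
const buildEndpoint = (config: OllamaLikeConfig): string => {
  const base = config.apiUrl ?? 'http://localhost:11434';
  return `${base}/api/chat`; // real path may differ per engine
};
```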
majiayu000
f74ba2dfc6 fix: resolve CI failures — revert gemini test mock path and fix prettier formatting
Signed-off-by: majiayu000 <1835304752@qq.com>
2026-03-30 00:55:06 +08:00
majiayu000
6982e76cf5 fix: improve type safety for max_completion_tokens params
Remove Record<string, unknown> type annotation to let TypeScript infer
the params object type, preserving type checking on all properties.
Cast to ChatCompletionCreateParamsNonStreaming at the create() call site
to accommodate the SDK's missing max_completion_tokens type. Add unit
test for reasoning model detection regex.

Signed-off-by: majiayu000 <1835304752@qq.com>
2026-03-30 00:54:48 +08:00
majiayu000
dc7f7f6552 fix: use max_completion_tokens for reasoning models in OpenAI engine
Newer OpenAI models (o1, o3, o4, gpt-5 series) reject the max_tokens
parameter and require max_completion_tokens instead. These reasoning
models also do not support temperature and top_p parameters.

Conditionally set the correct token parameter and omit unsupported
sampling parameters based on the model name.

Fixes #529

Signed-off-by: majiayu000 <1835304752@qq.com>
2026-03-30 00:54:44 +08:00
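The conditional parameter selection can be sketched as below. The model-name regex is an assumption inferred from the series listed in the commit message, not the repo's exact pattern.

```typescript
// Reasoning models (o1/o3/o4/gpt-5 series) reject max_tokens and the
// usual sampling parameters.
const isReasoningModel = (model: string): boolean =>
  /^(o1|o3|o4|gpt-5)/.test(model);

const buildTokenParams = (model: string, maxTokens: number) =>
  isReasoningModel(model)
    ? { max_completion_tokens: maxTokens } // temperature/top_p omitted
    : { max_tokens: maxTokens };
```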
sky
e27007b6fe feat(proxy): add universal proxy support and fix Gemini model resolution (#536)
Integrated undici ProxyAgent for native fetch and HttpsProxyAgent for axios/openai/anthropic. Upgraded @google/generative-ai to fix #536. Added OCO_PROXY config.

Co-authored-by: uni <uni@hanwei.ink>
2026-03-29 14:54:45 +00:00
majiayu000
83f9193749 fix: stabilize e2e flow in clean CI env
Signed-off-by: majiayu000 <majiayu000@users.noreply.github.com>
2026-03-27 17:19:31 +08:00
majiayu000
bc608e97bd fix: skip migrations and version check when called as git hook
Move isHookCalled() check before runMigrations() and
checkIsLatestVersion() so that during git rebase, each pick commit
exits immediately without expensive I/O and network calls.

Also adds missing await on prepareCommitMessageHook() to properly
handle async errors.

Closes #493

Signed-off-by: majiayu000 <1835304752@qq.com>
2026-03-21 10:59:24 +08:00
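The reordered startup path can be sketched with injected stubs (function names come from the commit message; bodies here are fakes, and the real CLI wiring is more involved):

```typescript
type StartupDeps = {
  isHookCalled: () => boolean;
  prepareCommitMessageHook: () => Promise<void>;
  runMigrations: () => Promise<void>;
  checkIsLatestVersion: () => Promise<void>;
};

async function startup(deps: StartupDeps): Promise<void> {
  if (deps.isHookCalled()) {
    // Awaited so async errors propagate instead of being swallowed.
    await deps.prepareCommitMessageHook();
    return; // skip migrations and the network version check entirely
  }
  await deps.runMigrations();
  await deps.checkIsLatestVersion();
}
```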
gaozhenqian
62d56a5278 Fix: Allow OCO_API_URL to override DeepSeek engine baseURL
- Move hardcoded baseURL before ...config spread in constructor
- This allows user config to override the default DeepSeek API URL
- Fixes issue #539 where OCO_API_URL was ignored by DeepSeek engine
2026-03-12 14:22:45 +08:00
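The spread-order point is worth making concrete: in an object literal, later keys win, so the hardcoded default must come before `...config`. The default URL below is an assumption for illustration.

```typescript
const DEFAULT_DEEPSEEK_URL = 'https://api.deepseek.com/v1'; // assumed default

interface EngineConfig {
  baseURL?: string;
  apiKey?: string;
}

// Default first, user config after: OCO_API_URL (arriving via config)
// overrides the hardcoded baseURL instead of being clobbered by it.
const buildClientConfig = (config: EngineConfig) => ({
  baseURL: DEFAULT_DEEPSEEK_URL,
  ...config,
});
```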
GPT8
de5d5cbb95 Merge pull request #521 from muni-corn/claude-fix-top-p
fix(anthropic): remove `top_p` parameter for Claude 4.5 models
2026-02-21 23:32:04 +03:00
di-sukharev
6ed70d0382 add oco models command 2026-01-17 23:46:04 +03:00
di-sukharev
5b241ed2d0 refactor: enhance error handling and normalization across AI engines
This update introduces a centralized error handling mechanism for various AI engines, improving the consistency and clarity of error messages. The new `normalizeEngineError` function standardizes error responses, allowing for better user feedback and recovery suggestions. Additionally, specific error classes for insufficient credits, rate limits, and service availability have been implemented, along with user-friendly formatting for error messages. This refactor aims to enhance the overall user experience when interacting with the AI services.
2026-01-17 23:34:49 +03:00
di-sukharev
d70797b864 feat: add interactive setup wizard and model error handling
Add comprehensive setup command with provider selection, API key
configuration, and model selection. Include error recovery for
model-not-found scenarios with suggested alternatives and automatic
retry functionality. Update Anthropic model list with latest versions
and add provider metadata for better user experience.
2026-01-17 23:04:43 +03:00
municorn
74fff2861b refactor(anthropic): improve model version detection using regex pattern 2025-10-22 08:33:33 -06:00
municorn
a0dc1c87c5 fix(anthropic): correct model detection logic to properly identify Claude 4.5 models 2025-10-22 08:27:39 -06:00
municorn
d65547dcaa fix(anthropic): remove top_p parameter for Claude 4.5 models
Fixes #520.
2025-10-20 15:10:33 -06:00
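Taken together with the two follow-up commits above it, the detection plus parameter drop might look like the sketch below. The version regex is an assumption; the repo's actual pattern (from the regex refactor) may differ in detail.

```typescript
// Parse "claude-<family>-<major>-<minor>" style model IDs and compare
// against 4.5; anything unparseable is treated as older.
const isClaude45OrNewer = (model: string): boolean => {
  const m = /claude-(?:[a-z]+-)?(\d+)[.-](\d+)/.exec(model);
  if (!m) return false;
  const major = Number(m[1]);
  const minor = Number(m[2]);
  return major > 4 || (major === 4 && minor >= 5);
};

// Omit top_p for models that reject it.
const buildSamplingParams = (model: string, topP: number) =>
  isClaude45OrNewer(model) ? {} : { top_p: topP };
```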
di-sukharev
b318d1d882 Merge branch 'master' into dev 2025-08-01 16:02:44 +03:00
D1m7asis
c5ce50aaa3 feat: add AIML API provider support
Introduces AIMLAPI as a supported AI provider, including model list, config validation, and engine implementation. Updates README and engine selection logic to integrate AIMLAPI for chat completions.

Refactor AimlApiEngine response handling

Removed dependency on removeContentTags and simplified message content extraction. Minor header formatting fix for HTTP-Referer. This streamlines the response handling and reduces unnecessary processing.
2025-08-01 14:48:11 +02:00
GPT8
c1756b85af Merge pull request #498 from kykungz/fix-491
Fix TypeScript build error and add missing confirm import (regression from #491)
2025-07-23 17:12:44 +03:00
GPT8
dac1271782 Merge pull request #496 from kykungz/resolve-top-level-git-dir
Fix git commands when executed from subdirectories
2025-07-23 17:10:37 +03:00
Kongpon Charanwattanakit
1cc7a64f99 feat(commit.ts): add confirmation prompt and refactor commit message editing for better user experience 2025-07-23 16:15:20 +07:00
GPT8
4deb7bca65 Merge pull request #488 from anpigon/fix/i18n-ko
fix(i18n): correct typo in Korean translation for 'feat' commit type
2025-07-22 23:40:54 +03:00
GPT8
1a90485a10 Merge pull request #491 from leoliu0605/dev
feat(commit.ts): enable users to edit commit message before committing
2025-07-22 23:38:30 +03:00
Kongpon Charanwattanakit
7e60c68ba5 refactor(git): add getGitDir helper and update functions to use cwd option for better git repository handling 2025-07-14 21:50:58 +07:00
Phantas Weng
24adc16adf fix(run.ts): remove trailing comma from OCO_AI_PROVIDER_ENUM array to fix the prettier test 2025-07-08 09:27:40 +00:00
Phantas Weng
881f07eebe fix(prepare-commit-msg-hook): simplify commit message generation logic for clarity and maintainability 2025-07-08 05:38:42 +00:00
Phantas Weng
3a255a3ad9 feat(config): add OCO_HOOK_AUTO_UNCOMMENT config key and update commit message hook behavior to conditionally uncomment the message 2025-07-08 05:25:32 +00:00
Phantas Weng
66a5695d89 feat(prepare-commit-msg-hook): enhance commit message formatting with a divider and instructions for better user guidance 2025-07-01 06:02:32 +00:00
leoliu
43dc5e6c2b feat(commit.ts): enable users to edit commit message before committing 2025-06-26 23:41:58 +08:00
Yusheng Guo
3d42dde48c fix(migrations): skip unhandled AI providers during migration execution
The changes:
1. Expanded the skip condition to include additional AI providers (DEEPSEEK, GROQ, MISTRAL, MLX, OPENROUTER) beyond just TEST
2. Maintained existing TEST provider skip behavior
3. Added explicit comment explaining the skip logic

The why:
Prevents migration execution for unsupported AI providers to avoid potential runtime errors or data inconsistencies, ensuring migrations only run for properly handled configurations.
2025-06-23 15:34:22 +08:00
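The expanded skip condition reduces to a set-membership check; the list below is copied from the commit message, while the guard's surrounding migration loop is assumed.

```typescript
// Providers whose configs the migrations do not handle.
const MIGRATION_SKIPPED_PROVIDERS = new Set([
  'TEST',
  'DEEPSEEK',
  'GROQ',
  'MISTRAL',
  'MLX',
  'OPENROUTER',
]);

const shouldRunMigrations = (provider: string): boolean =>
  !MIGRATION_SKIPPED_PROVIDERS.has(provider);
```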
anpigon
19f32ca57d fix(i18n): correct typo in Korean translation for 'feat' commit type #487 2025-06-21 18:12:55 +09:00
frauniki
45aed936b1 ♻️ refactor: clean up code formatting and improve readability
- Fix inconsistent indentation across multiple engine files
- Remove trailing whitespace and add missing newlines
- Improve code formatting in prompt generation functions
- Break long lines for better readability
- Standardize spacing and bracket placement
2025-06-15 17:29:12 +09:00
frauniki
5725c776a7 add openrouter AI provider support with comprehensive model list
Add OpenRouterEngine class and integrate it into the configuration
system. OpenRouter provides access to 300+ AI models through a
unified API, expanding model availability for commit message
generation beyond existing providers.
2025-06-15 04:11:13 +09:00
di-sukharev
75147e91e7 refactor(git.ts): improve git add completion message for clarity 2025-06-08 10:42:07 +03:00
di-sukharev
59b6edb49c format 2025-06-08 10:41:16 +03:00
GPT8
55904155a8 Merge pull request #472 from kakakakakku/fgm
feat(cli.ts): enhance fgm flag to include description and default value for better usability
2025-05-30 10:15:00 +03:00
GPT8
c1be5138b6 Merge pull request #477 from jonsguez/fix/one-line-commit
fix(prompts.ts): edited contradictory assistant output
2025-05-30 10:13:35 +03:00
jonsguez
668e149ae3 fix(prompts.ts): edited contradictory assistant output
When the user wants one-line commits, the system prompt and the user/assistant one-shot example contradicted each other, confusing the LLM. This fix modifies the assistant output so that the prompt and the one-shot example are consistent.
2025-05-29 23:09:10 -04:00
Ben Leibowitz
b5fca3155f feat(config): add 'describe' mode to config command for detailed parameter info
This commit adds a new 'describe' mode to the config command, allowing users
to get detailed information about configuration parameters. It includes:

1. New CONFIG_MODES.describe enum value
2. Functions to generate and print help messages for config parameters
3. Updated configCommand to handle the new 'describe' mode
4. README updates to document the new 'describe' functionality
2025-05-29 15:46:48 -04:00
kakakakakku
f0381c8b12 feat(cli.ts): enhance fgm flag to include description and default value for better usability 2025-05-19 09:04:31 +09:00
EmilienMottet
6aae1c7bd7 ♻️(engine): extract custom header parsing and update OpenAiEngine
- export parseCustomHeaders from src/utils/engine.ts
- use parseCustomHeaders in OpenAiEngine for config.customHeaders
- remove try/catch and inline JSON.parse logic
- update config test to expect headers as object and drop JSON.parse

Centralize header parsing for reuse and simplify engine code
Update tests to match new header format for clarity
2025-04-30 21:43:44 +02:00
EmilienMottet
71a44fac28 ♻️ refactor OpenAI client options and unify custom headers parsing
Use OpenAI.ClientOptions for stronger typing and clarity
Extract custom headers parsing into parseCustomHeaders util
Simplify getEngine by delegating header parsing to helper
Improve maintainability and reduce code duplication
2025-04-30 14:46:54 +02:00
EmilienMottet
6c48c935e2 add custom HTTP headers support via OCO_API_CUSTOM_HEADERS
Add OCO_API_CUSTOM_HEADERS variable to README, config enum,
and env parsing to allow JSON string of custom headers.
Validate that custom headers are valid JSON in config validator.
Extend AiEngineConfig with customHeaders and pass headers to
OllamaEngine and OpenAiEngine clients when creating requests.
Parse custom headers in utils/engine and warn on invalid format.
Add unit tests to ensure OCO_API_CUSTOM_HEADERS is handled
correctly and merged from env over global config.

This enables users to send additional headers such as
Authorization or tracing headers with LLM API calls.
2025-04-29 20:51:24 +02:00
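A plausible shape for the parser described here (the real `parseCustomHeaders` lives in `src/utils/engine.ts` and may validate and warn differently):

```typescript
// Parse a JSON string of custom headers; warn and fall back to an empty
// object on invalid input instead of throwing.
const parseCustomHeaders = (raw?: string): Record<string, string> => {
  if (!raw) return {};
  try {
    const parsed = JSON.parse(raw);
    if (typeof parsed === 'object' && parsed !== null && !Array.isArray(parsed)) {
      return parsed as Record<string, string>;
    }
  } catch {
    console.warn('OCO_API_CUSTOM_HEADERS is not valid JSON; ignoring.');
  }
  return {};
};
```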
Jethro Yu
0ebff3b974 fix(removeContentTags): keep newlines to preserve formatting
The space normalization logic is updated to replace only multiple spaces
and tabs with a single space, while preserving newlines. This change
ensures that the formatting of the content is maintained, especially
when dealing with empty line requirements and max line length.
2025-04-25 18:40:50 +08:00
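The regex change described above amounts to restricting the whitespace class so newlines survive; a one-line sketch of that normalization:

```typescript
// Collapse runs of spaces and tabs to a single space, but leave
// newlines untouched so line breaks in the content are preserved.
const normalizeSpaces = (s: string): string => s.replace(/[ \t]+/g, ' ');
```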
Jethro Yu
9ffcdbdb3b refactor(commitlint): update commitlint configuration and prompts for improved clarity and consistency
The commitlint configuration and prompts have been refactored to enhance
clarity and maintain consistency throughout the codebase. The type
assertion for commitLintConfig is updated to use 'as any' for better
type handling. Additionally, formatting adjustments are made in the
prompts to ensure proper readability and alignment with the defined
conventions. These changes aim to streamline the commit message
generation process and improve overall code maintainability.
2025-04-15 14:00:09 +08:00