General Settings

The following settings can be applied to most models, but support may vary. Please check the documentation for each specific model to confirm which settings are supported.

Settings Reference

| Setting | Description | Default |
| --- | --- | --- |
| envKey | Custom environment variable name for the API key | - |
| systemPrompt | System prompt text | - |
| systemPromptPath | Path to system prompt file | - |
| exclude | Files to exclude from AI analysis | - |
| forceGit | Force Git detection even in Jujutsu repositories | false |
| type | Type of commit message to generate | conventional |
| locale | Locale for the generated commit messages | en |
| generate | Number of commit messages to generate | 1 |
| logging | Enable logging | true |
| includeBody | Whether the commit message includes a body | false |
| maxLength | Maximum character length of the subject of the generated commit message | 50 |
| disableLowerCase | Disable automatic lowercase conversion of commit messages | false |
| timeout | Request timeout (milliseconds) | 10000 |
| temperature | Model's creativity (0.0 - 2.0) | 0.7 |
| maxTokens | Maximum number of tokens to generate | 1024 |
| topP | Nucleus sampling | 0.9 |
| codeReview | Whether to include an automated code review in the process | false |
| codeReviewPromptPath | Path to code review prompt file | - |
| autoCopy | Auto-copy commit message to clipboard (commits normally) | false |
| useStats | Enable usage statistics tracking | true |
| statsDays | Days to retain statistics data (auto-cleanup) | 30 |
| modelNameDisplay | Model name display in CLI labels (none / short / full) | short |
| disabled | Whether a specific model is disabled | false |
| stream | Experimental. Enable streaming for real-time commit message generation | false |
| diffCompression | Diff compression mode (none / compact) | none |
| maxHunkLines | Max lines per hunk in compressed diff (0 = unlimited) | 0 |
| maxDiffLines | Max total lines in compressed diff (0 = unlimited) | 0 |
| diffContext | Number of context lines in git diff (0-10) | 3 |

Tip: To apply a General Setting to a specific model, use commands like the following:

aicommit2 config set OPENAI.locale="ja"
aicommit2 config set CODESTRAL.type="gitmoji"
aicommit2 config set GEMINI.includeBody=true

Detailed Settings

envKey

  • Allows users to specify a custom environment variable name for their API key.
  • If envKey is not explicitly set, the system defaults to using an environment variable named after the service, followed by _API_KEY (e.g., OPENAI_API_KEY for OpenAI, GEMINI_API_KEY for Gemini).
  • This setting provides flexibility for managing API keys, especially when multiple services are used or when specific naming conventions are required.
aicommit2 config set OPENAI.envKey="MY_CUSTOM_OPENAI_KEY"

envKey is used to retrieve the API key from your system's environment variables. Ensure the specified environment variable is set with your API key.
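
For example, after pointing a model at a custom variable name, export a key under that name in your shell profile (the variable name and key value below are placeholders):

```shell
# Tell aicommit2 which environment variable holds the OpenAI API key
aicommit2 config set OPENAI.envKey="MY_CUSTOM_OPENAI_KEY"

# Export the key under that name, e.g. in ~/.bashrc or ~/.zshrc
export MY_CUSTOM_OPENAI_KEY="sk-..."
```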

systemPrompt

  • Allows users to specify a custom system prompt
aicommit2 config set systemPrompt="Generate git commit message."

systemPrompt takes precedence over systemPromptPath; the two are never applied at the same time.

systemPromptPath

  • Allows users to specify a custom file path for their own system prompt template
  • Please see Custom Prompt Template
  • Note: Paths can be absolute or relative to the configuration file location.
aicommit2 config set systemPromptPath="/path/to/user/prompt.txt"

exclude

  • Files to exclude from AI analysis
  • It works together with the CLI --exclude option: files matched by either the CLI --exclude flag or this setting are excluded.
aicommit2 config set exclude="*.ts"
aicommit2 config set exclude="*.ts,*.json"

NOTE: The exclude option cannot be set per model; it is only supported in General Settings.
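
Patterns from both sources are merged; for example (the patterns below are illustrative):

```shell
# Persistent excludes from config, plus a one-off CLI exclude;
# files matching any of these patterns are skipped
aicommit2 config set exclude="*.lock,*.snap"
aicommit2 --exclude "dist/*"
```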

forceGit

Default: false

Force Git detection even in Jujutsu repositories (useful when you have both .jj and .git directories):

aicommit2 config set forceGit=true

This is equivalent to using the FORCE_GIT=true environment variable, but persistent across sessions.
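
For a one-off run without changing the config, the environment variable form can be used instead:

```shell
# Force Git detection for this invocation only
FORCE_GIT=true aicommit2
```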

type

Default: conventional

Supported: conventional, gitmoji

The type of commit message to generate:

Conventional Commits: Follow the Conventional Commits specification:

aicommit2 config set type="conventional"

Gitmoji: Use Gitmoji emojis in commit messages:

aicommit2 config set type="gitmoji"

locale

Default: en

The locale to use for the generated commit messages. See the list of ISO 639-1 codes at https://wikipedia.org/wiki/List_of_ISO_639_language_codes.

aicommit2 config set locale="ja"

generate

Default: 1

The number of commit messages to generate to pick from.

Note that generating more messages uses more tokens.

aicommit2 config set generate=2

logging

Default: true

This boolean option controls whether the application generates log files. When enabled, both the general application logs and the AI request/response logs are written to their respective paths. For a detailed explanation of all logging settings, including how to enable/disable logging and manage log files, please refer to the main Logging section.

  • Log File Example: log-path

includeBody

Default: false

This option determines whether the commit message includes a body. To include a body, set it to true.

aicommit2 config set includeBody="true"

aicommit2 config set includeBody="false"

maxLength

The maximum character length of the subject of the generated commit message.

Default: 50

aicommit2 config set maxLength=100

disableLowerCase

Disable automatic lowercase conversion of commit messages

Default: false

By default, AICommit2 converts the first character of commit types and descriptions to lowercase to follow conventional commit standards. Set this to true to preserve the original casing.

aicommit2 config set disableLowerCase=true

You can also use the CLI flag:

aicommit2 --disable-lowercase

timeout

The timeout for network requests in milliseconds.

Default: 10_000 (10 seconds)

aicommit2 config set timeout=20000 # 20s

Note: Each AI provider has its own default timeout value, and if the configured timeout is less than the provider's default, the setting will be ignored.

temperature

The temperature (0.0 - 2.0) controls the randomness of the output.

Default: 0.7

aicommit2 config set temperature=0.3

maxTokens

The maximum number of tokens that the AI models can generate.

Default: 1024

aicommit2 config set maxTokens=3000

topP

Default: 0.9

Nucleus sampling, where the model considers the results of the tokens with top_p probability mass.

Note: Claude 4.x models do not support using temperature and top_p simultaneously. For these models, top_p is automatically excluded.

aicommit2 config set topP=0.2

disabled

Default: false

This option determines whether a specific model is enabled or disabled. If you want to disable a particular model, you can set this option to true.

To disable a model, use the following commands:

aicommit2 config set GEMINI.disabled="true"
aicommit2 config set GROQ.disabled="true"

autoCopy

Default: false

When enabled, the selected commit message will be automatically copied to the clipboard while still proceeding with the commit. This is useful when you want to keep a copy of your commit messages.

aicommit2 config set autoCopy=true

Note: This differs from the --clipboard (-c) CLI flag:

  • Config autoCopy=true: Copies message AND commits normally
  • CLI --clipboard: Copies message and exits WITHOUT committing
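
Side by side:

```shell
# Future runs: copy the selected message AND commit as usual
aicommit2 config set autoCopy=true

# This run only: copy the message and exit WITHOUT committing
aicommit2 --clipboard
```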

useStats

Default: true

Controls whether usage statistics are collected. When enabled, aicommit2 tracks AI provider usage, success rates, response times, and selection counts. View statistics with aicommit2 stats.

# Disable statistics collection
aicommit2 config set useStats=false

# Re-enable statistics
aicommit2 config set useStats=true

statsDays

Default: 30

The number of days to retain statistics data. Data older than this threshold is automatically cleaned up during stats recording. This helps manage storage while keeping recent usage data.

# Keep 7 days of stats
aicommit2 config set statsDays=7

# Keep 90 days of stats
aicommit2 config set statsDays=90

modelNameDisplay

Default: short

Controls how model names appear in CLI labels (e.g., the colored prefix before each commit message suggestion).

| Value | Example Output |
| --- | --- |
| none | [OpenRouter] |
| short | [OpenRouter/llama-3.3-70b-versa…] (last segment, max 20 chars) |
| full | [OpenRouter/meta-llama/llama-3.3-70b-versatile] |

# Hide model names (provider only)
aicommit2 config set modelNameDisplay=none

# Show truncated model name (default)
aicommit2 config set modelNameDisplay=short

# Show full model path
aicommit2 config set modelNameDisplay=full

codeReview

Default: false

The codeReview parameter determines whether to include an automated code review in the process.

aicommit2 config set codeReview=true

NOTE: When enabled, aicommit2 will perform a code review before generating commit messages.

CAUTION

  • The codeReview feature is currently experimental.
  • It performs a code review before generating commit messages, which significantly increases overall processing time.
  • The review consumes a large number of tokens and can significantly increase cost.

codeReviewPromptPath

  • Allows users to specify a custom file path for the code review prompt
  • Note: Paths can be absolute or relative to the configuration file location.
aicommit2 config set codeReviewPromptPath="/path/to/user/prompt.txt"

stream

  • Experimental. Enable streaming mode for real-time commit message generation.
  • When enabled, commit messages appear progressively as tokens arrive from the AI provider, instead of waiting for the complete response.
  • Supported providers: OpenAI, Anthropic Claude, Gemini, Groq, DeepSeek, Ollama, OpenAI API-Compatible
  • Works best with includeBody=true for visible real-time streaming effect.
aicommit2 config set OPENAI.stream=true
aicommit2 config set ANTHROPIC.stream=true

CAUTION

  • The stream feature is currently experimental and may change in future releases.
  • Streaming is only applied to commit message generation, not code review.
  • May not be compatible with git hooks (e.g., prepare-commit-msg), external tools (e.g., lazygit), or non-interactive environments.

diffCompression

  • Controls how git diff output is compressed before sending to AI providers.
  • none (default): Sends the raw diff as-is, no compression applied.
  • compact: Strips diff metadata headers, minimizes context lines (keeps only lines adjacent to changes), and applies hunk/total line caps. Reduces token usage by 30-60% on typical diffs.
aicommit2 config set diffCompression=compact
aicommit2 config set diffCompression=none

# Per-model override
aicommit2 config set OLLAMA.diffCompression=compact
aicommit2 config set OPENAI.diffCompression=none

maxHunkLines

  • Maximum number of lines per hunk in compact mode. Hunks exceeding this limit are truncated with a [... N lines truncated] notice.
  • Default: 0 (unlimited). Set to a positive number to cap hunk size.
aicommit2 config set maxHunkLines=200  # cap at 200 lines per hunk
aicommit2 config set maxHunkLines=0    # unlimited (default)

maxDiffLines

  • Maximum total lines in the compressed diff output. When exceeded, remaining files are omitted with a notice.
  • Default: 0 (unlimited). Set to a positive number to cap total diff size.
aicommit2 config set maxDiffLines=1000 # cap at 1000 lines total
aicommit2 config set maxDiffLines=0    # unlimited (default)

diffContext

  • Number of context lines included in git diff output (equivalent to git diff -U<n>).
  • Reducing from the default 3 to 1 can further reduce token usage with minimal impact on commit message quality.
  • Range: 0-10. Default: 3.
aicommit2 config set diffContext=3
aicommit2 config set diffContext=1    # fewer context lines, saves tokens
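
The diff-related settings can be combined to trade context for token savings; the values below are illustrative, not recommendations:

```shell
# Compress diffs, cap their size, and reduce context lines
aicommit2 config set diffCompression=compact
aicommit2 config set maxHunkLines=200
aicommit2 config set maxDiffLines=1000
aicommit2 config set diffContext=1
```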

Available Settings by Model

The per-model settings timeout, temperature, maxTokens, topP, and stream are available for the following providers. Support varies by provider; streaming in particular is limited to the providers listed under stream.

  • OpenAI
  • Anthropic Claude
  • Gemini
  • Mistral AI
  • Codestral
  • Cohere
  • Groq
  • Perplexity
  • DeepSeek
  • Github Models
  • Ollama
  • OpenAI API-Compatible

All AI providers support the following options in General Settings.

  • systemPrompt, systemPromptPath, codeReview, codeReviewPromptPath, exclude, type, locale, generate, logging, includeBody, maxLength, disableLowerCase, autoCopy, modelNameDisplay, useStats, statsDays