Merge pull request #1710 from PeterDaveHelloKitchen/SwitchDefaultTo-o4-mini

Replace default o3-mini with o4-mini
This commit is contained in:
Tal
2025-04-19 09:00:35 +03:00
committed by GitHub
4 changed files with 5 additions and 5 deletions

View File

@@ -2,7 +2,7 @@
With a single-click installation you will gain access to a context-aware chat on your pull requests' code, a toolbar extension with multiple AI feedback options, Qodo Merge filters, and additional abilities.
-The extension is powered by top code models like Claude 3.7 Sonnet and o3-mini. All the extension's features are free to use on public repositories.
+The extension is powered by top code models like Claude 3.7 Sonnet and o4-mini. All the extension's features are free to use on public repositories.
For private repositories, you will need to install [Qodo Merge](https://github.com/apps/qodo-merge-pro){:target="_blank"} in addition to the extension (Quick GitHub app setup with a 14-day free trial. No credit card needed).
For a demonstration of how to install Qodo Merge and use it with the Chrome extension, please refer to the tutorial video at the provided [link](https://codium.ai/images/pr_agent/private_repos.mp4){:target="_blank"}.

View File

@@ -1,6 +1,6 @@
To run PR-Agent locally, you first need to acquire two keys:
-1. An OpenAI key from [here](https://platform.openai.com/api-keys){:target="_blank"}, with access to GPT-4 and o3-mini (or a key for other [language models](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/), if you prefer).
+1. An OpenAI key from [here](https://platform.openai.com/api-keys){:target="_blank"}, with access to GPT-4 and o4-mini (or a key for other [language models](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/), if you prefer).
2. A personal access token from your Git platform (GitHub, GitLab, Bitbucket) with repo scope. A GitHub token, for example, can be issued from [here](https://github.com/settings/tokens){:target="_blank"}. The sketch after this list shows where these keys typically go.
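Once acquired, both keys are provided to PR-Agent before running it locally, typically via its secrets file. Below is a minimal sketch assuming a `.secrets.toml` with `[openai]` and `[github]` sections; the exact section and key names are an assumption here, so verify them against the repository's secrets template:

```toml
# .secrets.toml -- illustrative sketch only; check the repository's
# secrets template for the authoritative section and key names.
[openai]
key = "sk-..."            # OpenAI API key from step 1

[github]
user_token = "ghp_..."    # personal access token with repo scope from step 2
```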
## Using Docker image

View File

@@ -1,7 +1,7 @@
## Changing a model in PR-Agent
See [here](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/algo/__init__.py) for a list of available models.
-To use a different model than the default (o3-mini), you need to edit the following fields in the [configuration file](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L2):
+To use a different model than the default (o4-mini), you need to edit the following fields in the [configuration file](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L2):
```toml
[config]
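# Fields to edit (a sketch -- the values below are illustrative, not required;
# pick any model listed in pr_agent/algo/__init__.py):
model = "o4-mini"                         # the main model
fallback_models = ["gpt-4o-2024-11-20"]   # fallback model(s), as in the default configuration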
@@ -311,7 +311,7 @@ To bypass chat templates and temperature controls, set `config.custom_reasoning_
reasoning_effort = "medium" # "low", "medium", "high"
```
-With the OpenAI models that support reasoning effort (e.g., o3-mini), you can specify the reasoning effort via the `config` section. The default value is `medium`; you can change it to `high` or `low` based on your usage.
+With the OpenAI models that support reasoning effort (e.g., o4-mini), you can specify the reasoning effort via the `config` section. The default value is `medium`; you can change it to `high` or `low` based on your usage.
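For example, raising the effort for such a model could look like the following sketch (the field name is taken from the snippet above; the chosen values are illustrative):

```toml
[config]
model = "o4-mini"           # a model that supports reasoning effort
reasoning_effort = "high"   # default is "medium"; "low" is also accepted
```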
### Anthropic models

View File

@@ -6,7 +6,7 @@
[config]
# models
model="o3-mini"
model="o4-mini"
fallback_models=["gpt-4o-2024-11-20"]
#model_weak="gpt-4o-mini-2024-07-18" # optional, a weaker model to use for some easier tasks
# CLI
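The `model_weak` line above is commented out by default; a user who wants a cheaper model for easier tasks would uncomment and set it in their own configuration. A minimal sketch of such an override, using the value suggested in the comment above:

```toml
[config]
model = "o4-mini"
fallback_models = ["gpt-4o-2024-11-20"]
model_weak = "gpt-4o-mini-2024-07-18"  # opt-in weaker model for easier tasks
```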