diff --git a/docs/docs/chrome-extension/index.md b/docs/docs/chrome-extension/index.md
index d5af4d88..a063d2c8 100644
--- a/docs/docs/chrome-extension/index.md
+++ b/docs/docs/chrome-extension/index.md
@@ -2,7 +2,7 @@
 
 With a single-click installation you will gain access to a context-aware chat on your pull requests code, a toolbar extension with multiple AI feedbacks, Qodo Merge filters, and additional abilities.
 
-The extension is powered by top code models like Claude 3.7 Sonnet and o3-mini. All the extension's features are free to use on public repositories.
+The extension is powered by top code models like Claude 3.7 Sonnet and o4-mini. All the extension's features are free to use on public repositories.
 
 For private repositories, you will need to install [Qodo Merge](https://github.com/apps/qodo-merge-pro){:target="_blank"} in addition to the extension (Quick GitHub app setup with a 14-day free trial. No credit card needed).
 For a demonstration of how to install Qodo Merge and use it with the Chrome extension, please refer to the tutorial video at the provided [link](https://codium.ai/images/pr_agent/private_repos.mp4){:target="_blank"}.
diff --git a/docs/docs/installation/locally.md b/docs/docs/installation/locally.md
index f2d23dbc..cd981f96 100644
--- a/docs/docs/installation/locally.md
+++ b/docs/docs/installation/locally.md
@@ -1,6 +1,6 @@
 To run PR-Agent locally, you first need to acquire two keys:
 
-1. An OpenAI key from [here](https://platform.openai.com/api-keys){:target="_blank"}, with access to GPT-4 and o3-mini (or a key for other [language models](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/), if you prefer).
+1. An OpenAI key from [here](https://platform.openai.com/api-keys){:target="_blank"}, with access to GPT-4 and o4-mini (or a key for other [language models](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/), if you prefer).
 2. A personal access token from your Git platform (GitHub, GitLab, BitBucket) with repo scope. GitHub token, for example, can be issued from [here](https://github.com/settings/tokens){:target="_blank"}
 
 ## Using Docker image
diff --git a/docs/docs/usage-guide/changing_a_model.md b/docs/docs/usage-guide/changing_a_model.md
index d05a73ab..76a67d29 100644
--- a/docs/docs/usage-guide/changing_a_model.md
+++ b/docs/docs/usage-guide/changing_a_model.md
@@ -1,7 +1,7 @@
 ## Changing a model in PR-Agent
 
 See [here](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/algo/__init__.py) for a list of available models.
-To use a different model than the default (o3-mini), you need to edit in the [configuration file](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L2) the fields:
+To use a different model than the default (o4-mini), you need to edit in the [configuration file](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L2) the fields:
 
 ```toml
 [config]
@@ -311,7 +311,7 @@ To bypass chat templates and temperature controls, set `config.custom_reasoning_
 reasoning_effort = "medium" # "low", "medium", "high"
 ```
 
-With the OpenAI models that support reasoning effort (eg: o3-mini), you can specify its reasoning effort via `config` section. The default value is `medium`. You can change it to `high` or `low` based on your usage.
+With the OpenAI models that support reasoning effort (eg: o4-mini), you can specify its reasoning effort via `config` section. The default value is `medium`. You can change it to `high` or `low` based on your usage.
 
 ### Anthropic models
 
diff --git a/pr_agent/settings/configuration.toml b/pr_agent/settings/configuration.toml
index cd3e4aed..edb5296f 100644
--- a/pr_agent/settings/configuration.toml
+++ b/pr_agent/settings/configuration.toml
@@ -6,7 +6,7 @@
 
 [config]
 # models
-model="o3-mini"
+model="o4-mini"
 fallback_models=["gpt-4o-2024-11-20"]
 #model_weak="gpt-4o-mini-2024-07-18" # optional, a weaker model to use for some easier tasks
 # CLI
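Taken together, the hunks above only rename the default model. As a reviewing aid, the sketch below shows what the resulting `[config]` defaults would look like in one place, using only keys that appear in this diff (`model`, `fallback_models`, `model_weak`, `reasoning_effort`); values and comments are copied from the hunks, not from the full configuration.toml, so treat it as an illustration rather than the complete file.

```toml
[config]
# Default model after this change (previously "o3-mini")
model = "o4-mini"
# Fallback model, as listed in configuration.toml above
fallback_models = ["gpt-4o-2024-11-20"]
# Optional weaker model for easier tasks; commented out by default in this diff
#model_weak = "gpt-4o-mini-2024-07-18"
# Reasoning effort for OpenAI models that support it (eg: o4-mini): "low", "medium" (default), or "high"
reasoning_effort = "medium"
```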