docs: fix typo in documentation about config response_language from pr-agent.toml

@@ -1,6 +1,6 @@
## Show possible configurations

The possible configurations of Qodo Merge are stored [here](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml){:target="_blank"}.

On the [tools](https://qodo-merge-docs.qodo.ai/tools/) page you can find explanations of how to use these configurations for each tool.

To print all the available configurations as a comment on your PR, you can use the following command:
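For example (assuming the standard Qodo Merge `/config` comment command; see the tools page if your setup differs), commenting the following on the PR prints all available configurations:

```
/config
```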
@@ -72,30 +72,23 @@ Example:

```toml
[config]
-response_language: "it-IT"
+response_language= "it-IT"
```

This will set the response language globally for all the commands to Italian.
> **Important:** Note that only dynamic text generated by the AI model is translated to the configured language. Static text such as labels and table headers that are not part of the AI model's response will remain in US English. In addition, the model you are using must have good support for the specified language.
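As a minimal sketch, assuming the setting is applied through a repository-level `.pr_agent.toml` file (one of the supported ways to set Qodo Merge configurations), it would look like:

```toml
# Illustrative repository-level .pr_agent.toml; any supported configuration method works
[config]
response_language = "it-IT"
```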

[//]: # "## Working with large PRs"
[//]: #
[//]: # "The default mode of CodiumAI is to have a single call per tool, using GPT-4, which has a token limit of 8000 tokens."
[//]: # "This mode provides a very good speed-quality-cost tradeoff, and can handle most PRs successfully."
[//]: # "When the PR is above the token limit, it employs a [PR Compression strategy](../core-abilities/index.md)."
[//]: #
[//]: # "However, for very large PRs, or in case you want to emphasize quality over speed and cost, there are two possible solutions:"
[//]: # "1) [Use a model](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/) with larger context, like GPT-32K, or claude-100K. This solution will be applicable for all the tools."
[//]: # "2) For the `/improve` tool, there is an ['extended' mode](https://qodo-merge-docs.qodo.ai/tools/improve/) (`/improve --extended`),"
[//]: # "which divides the PR into chunks, and processes each chunk separately. With this mode, regardless of the model, no compression will be done (but for large PRs, multiple model calls may occur)"
## Patch Extra Lines
@@ -236,4 +229,4 @@ ignore_pr_authors = ["my-special-bot-user", ...]
Where `ignore_pr_authors` is a list of usernames that you want to ignore.
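A minimal sketch of how this could look (assuming the setting sits under the `[config]` section; the second username is a hypothetical placeholder):

```toml
[config]
# PRs opened by these authors are not given automatic feedback
ignore_pr_authors = ["my-special-bot-user", "my-renovate-bot"]
```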
!!! note
    There is one specific case where bots will receive an automatic response - when they generated a PR with a _failed test_. In that case, the [`ci_feedback`](https://qodo-merge-docs.qodo.ai/tools/ci_feedback/) tool will be invoked.