From 564845adff8b827bef4dde6e28711a22aaba30b6 Mon Sep 17 00:00:00 2001
From: Tal
Date: Thu, 12 Sep 2024 09:27:45 +0300
Subject: [PATCH] Update additional_configurations.md

---
 docs/docs/usage-guide/additional_configurations.md | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/docs/docs/usage-guide/additional_configurations.md b/docs/docs/usage-guide/additional_configurations.md
index ff7898b9..59195b68 100644
--- a/docs/docs/usage-guide/additional_configurations.md
+++ b/docs/docs/usage-guide/additional_configurations.md
@@ -22,7 +22,7 @@ Will output an additional field showing the actual configurations used for the `
 
 ## Ignoring files from analysis
 
-In some cases, you may want to exclude specific files or directories from the analysis performed by CodiumAI PR-Agent. This can be useful, for example, when you have files that are generated automatically or files that shouldn't be reviewed, like vendored code.
+In some cases, you may want to exclude specific files or directories from the analysis performed by CodiumAI PR-Agent. This can be useful, for example, when you have files that are generated automatically or files that shouldn't be reviewed, like vendor code.
 
 You can ignore files or folders using the following methods:
 - `IGNORE.GLOB`
@@ -66,7 +66,7 @@ When the PR is above the token limit, it employs a [PR Compression strategy](../
 However, for very large PRs, or in case you want to emphasize quality over speed and cost, there are two possible solutions:
 1) [Use a model](https://codium-ai.github.io/Docs-PR-Agent/usage-guide/#changing-a-model) with larger context, like GPT-32K, or claude-100K. This solution will be applicable for all the tools.
 2) For the `/improve` tool, there is an ['extended' mode](https://codium-ai.github.io/Docs-PR-Agent/tools/#improve) (`/improve --extended`),
-which divides the PR to chunks, and processes each chunk separately. With this mode, regardless of the model, no compression will be done (but for large PRs, multiple model calls may occur)
+which divides the PR into chunks, and processes each chunk separately. With this mode, regardless of the model, no compression will be done (but for large PRs, multiple model calls may occur)
 
 
 
@@ -93,7 +93,8 @@ patch_extra_lines_after=2
 ```
 
 Increasing this number provides more context to the model, but will also increase the token budget, and may overwhelm the model with too much information, unrelated to the actual PR code changes.
-If the PR is too large (see [PR Compression strategy](https://github.com/Codium-ai/pr-agent/blob/main/PR_COMPRESSION.md)), PR-Agent automatically may set this number to 0, and will use the original git patch.
+
+If the PR is too large (see [PR Compression strategy](https://github.com/Codium-ai/pr-agent/blob/main/PR_COMPRESSION.md)), PR-Agent may automatically set this number to 0, and will use the original git patch.
 
 ## Editing the prompts
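
For reference, a minimal sketch of how the ignore patterns referenced by `IGNORE.GLOB` in the first hunk might be set in a repo-level `.pr_agent.toml`, assuming an `[ignore]` section with `glob` and `regex` keys; the patterns themselves are illustrative, not taken from the patch:

```
[ignore]
# glob patterns for files or folders to exclude from analysis (example patterns, adjust to your repo)
glob = ['vendor/**', '**/*.min.js']
# alternatively, exclude files by regular expression
regex = []
```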
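
Similarly, a sketch of the patch-context setting touched by the last hunk, assuming it lives under a `[config]` section and that `patch_extra_lines_before` exists as a companion key; only the `patch_extra_lines_after` value is quoted from the diff itself:

```
[config]
# extra lines of git-patch context given to the model around each change
patch_extra_lines_before=3  # assumed companion setting
patch_extra_lines_after=2   # value quoted in the hunk above
```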