diff --git a/README.md b/README.md
index 7ebcd23e..74c4a774 100644
--- a/README.md
+++ b/README.md
@@ -47,28 +47,25 @@ Making pull requests less painful with an AI agent
-
-- GitLab webhook now supports controlling which commands will [run automatically](./docs/USAGE.md#working-with-gitlab-webhook) when a new PR is opened.
 ### Feb 18, 2024
 - Introducing the `CI Feedback` tool 💎. The tool automatically triggers when a PR has a failed check. It analyzes the failed check, and provides summarized logs and analysis. Note that this feature requires read access to GitHub 'checks' and 'actions'. See [here](./docs/CI_FEEDBACK.md) for more details.
 - New ability - you can run `/ask` on specific lines of code in the PR from the PR's diff view. See [here](./docs/ASK.md#ask-lines) for more details.
 - Introducing support for [Azure DevOps Webhooks](./Usage.md#azure-devops-webhook), as well as bug fixes and improved support for several ADO commands.
-### Feb 11, 2024
-The `review` tool has been revamped, aiming to make the feedback clearer and more relevant, and better complement the `improve` tool.
-
-### Feb 6, 2024
-A new feature was added to the `review` tool - [Auto-approve PRs](./docs/REVIEW.md#auto-approval-1). If enabled, this feature enables to automatically approve PRs that meet specific criteria, by commenting on a PR: `/review auto_approve`.
-
-### Feb 2, 2024
-Added ["PR Actions"](https://www.codium.ai/images/pr_agent/pr-actions.mp4) 💎 - interactively trigger PR-Agent tools from the PR page.
-
-
 ## Overview
-
+
-
+
+
+
+
+
diff --git a/docs/IMPROVE.md b/docs/IMPROVE.md
index 4c4687fd..315f4ba0 100644
--- a/docs/IMPROVE.md
+++ b/docs/IMPROVE.md
@@ -36,6 +36,14 @@ An extended mode, which does not involve PR Compression and provides more compre
 ```
 /improve --extended
 ```
+
+or by setting:
+```
+[pr_code_suggestions]
+auto_extended_mode=true
+```
+(True by default).
+
 Note that the extended mode divides the PR code changes into chunks, up to the token limits, where each chunk is handled separately (might use multiple calls to GPT-4 for large PRs).
 Hence, the total number of suggestions is proportional to the number of chunks, i.e., the size of the PR.
@@ -53,7 +61,7 @@ To edit [configurations](./../pr_agent/settings/configuration.toml#L66) related
 - `summarize`: if set to true, the tool will display the suggestions in a single comment. Default is false.
 - `enable_help_text`: if set to true, the tool will display a help text in the comment. Default is true.
 #### params for '/improve --extended' mode
-- `auto_extended_mode`: enable extended mode automatically (no need for the `--extended` option). Default is false.
+- `auto_extended_mode`: enable extended mode automatically (no need for the `--extended` option). Default is true.
 - `num_code_suggestions_per_chunk`: number of code suggestions provided by the 'improve' tool, per chunk. Default is 8.
 - `rank_extended_suggestions`: if set to true, the tool will rank the suggestions, based on importance. Default is true.
 - `max_number_of_calls`: maximum number of chunks. Default is 5.
diff --git a/pr_agent/tools/pr_update_changelog.py b/pr_agent/tools/pr_update_changelog.py
index 3d2ad2c7..102d8a69 100644
--- a/pr_agent/tools/pr_update_changelog.py
+++ b/pr_agent/tools/pr_update_changelog.py
@@ -8,6 +8,7 @@ from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
 from pr_agent.algo.ai_handlers.litellm_ai_handler import LiteLLMAIHandler
 from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models
 from pr_agent.algo.token_handler import TokenHandler
+from pr_agent.algo.utils import ModelType
 from pr_agent.config_loader import get_settings
 from pr_agent.git_providers import get_git_provider, GithubProvider
 from pr_agent.git_providers.git_provider import get_main_pr_language
@@ -62,7 +63,7 @@ class PRUpdateChangelog:
         if get_settings().config.publish_output:
             self.git_provider.publish_comment("Preparing changelog updates...", is_temporary=True)

-        await retry_with_fallback_models(self._prepare_prediction)
+        await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.TURBO)

         new_file_content, answer = self._prepare_changelog_update()
         get_logger().debug(f"PR output", artifact=answer)
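The change above passes a `model_type` hint into the retry helper so the changelog tool can prefer a faster model before falling back. The general pattern can be sketched as follows (the `ModelType` enum name comes from the diff; the fallback chains, model names, and `retry_with_fallback` body here are hypothetical stand-ins, not PR-Agent's actual `retry_with_fallback_models` from `pr_agent.algo.pr_processing`):

```python
import asyncio
from enum import Enum

class ModelType(Enum):
    REGULAR = "regular"
    TURBO = "turbo"

# Hypothetical fallback chains; in a real system these would come from configuration.
FALLBACK_CHAINS = {
    ModelType.REGULAR: ["model-large", "model-large-backup"],
    ModelType.TURBO: ["model-fast", "model-large"],
}

async def retry_with_fallback(task, model_type=ModelType.REGULAR):
    """Try each model in the chain for model_type until the task succeeds."""
    last_exc = None
    for model in FALLBACK_CHAINS[model_type]:
        try:
            return await task(model)
        except Exception as exc:  # record the failure and try the next model
            last_exc = exc
    raise last_exc

async def prepare_prediction(model):
    # Stand-in for the tool's _prepare_prediction; fails on the fast model.
    if model == "model-fast":
        raise RuntimeError("rate limited")
    return f"changelog drafted by {model}"

result = asyncio.run(retry_with_fallback(prepare_prediction, ModelType.TURBO))
print(result)
```

The design point mirrored here is that the caller only declares a preference (`TURBO`); which concrete models are tried, and in what order, stays encapsulated in the retry helper.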