Compare commits

...

90 Commits

Author SHA1 Message Date
Tal
b073fb1a74 Update test_clip_tokens.py 2024-12-25 16:46:56 +02:00
Tal
12d603fdb4 Update README.md 2024-12-25 08:42:24 +02:00
Tal
6540e2c674 Merge pull request #1416 from Codium-ai/tr/remove_review_suggestions
refactor: remove legacy code suggestions feature from review tool
2024-12-25 08:26:03 +02:00
83e68f168a log verbosity 2024-12-25 08:22:53 +02:00
5e1b04980e refactor: remove reflection and incremental review features from docs and code 2024-12-25 08:21:33 +02:00
495c1ebe5f refactor: remove legacy code suggestions feature from review tool 2024-12-25 08:18:28 +02:00
Tal
ad71de82a9 Merge pull request #1413 from addianto/docs/1398-environment-variables
Document an example on how to configure PR Agent using environment variables
2024-12-24 21:25:54 +02:00
11676943b6 docs: Remind user to avoid commiting .env file (#1398) 2024-12-24 15:20:15 +07:00
b88507aa23 chore: Ignore .env file 2024-12-24 15:14:30 +07:00
a5c5e6f4ae docs: Add example how to define environment variables (#1398) 2024-12-24 15:01:26 +07:00
47cd361663 docs: Mention about the use of Dynaconf (#1398) 2024-12-24 15:01:13 +07:00
Tal
c84b3d04b9 Merge pull request #1412 from Codium-ai/tr/dedent_review
feat: add dedent option to code snippet formatting
2024-12-24 07:54:36 +02:00
7d9288bb1a feat: add dedent option to code snippet formatting 2024-12-24 07:49:27 +02:00
Tal
93e64367d2 Merge pull request #1410 from Codium-ai/tr/update_changelog_fix
Tr/update changelog fix
2024-12-23 19:37:28 +02:00
6c131b8406 feat: add PR link support in changelog updates 2024-12-23 19:35:52 +02:00
dd89e1f2dc feat: add PR link support in changelog updates 2024-12-23 19:34:21 +02:00
e8e4fb0afa feat: add PR link support in changelog updates 2024-12-23 17:20:29 +02:00
3360a28b3e fix: improve changelog update prompt and response handling 2024-12-23 17:06:21 +02:00
Tal
20c506d2e0 Merge pull request #1402 from KennyDizi/main
Add support for OpenAI `o1` model and snapshot version `o1-2024-12-17`
2024-12-22 09:36:04 +02:00
Tal
a1921d931c Merge pull request #1407 from Codium-ai/tr/publish_output_no_suggestions_fix
fix: only publish empty code suggestions when configured
2024-12-22 09:34:34 +02:00
31aa460f5f fix: only publish empty code suggestions when configured 2024-12-22 09:32:11 +02:00
23678c1d4d Update O1_MODEL_PREFIX to o1 based on new models released 2024-12-22 10:36:59 +07:00
8d7825233a Supported model gpt-o1 2024-12-22 10:33:26 +07:00
Tal
c9f02e63e1 Merge pull request #1403 from Codium-ai/tr/re_review
feat: enhance code review output with collapsible code snippets
2024-12-20 16:38:56 +02:00
c2f1f2dba0 fix: improve markdown rendering when git provider is unavailable 2024-12-19 21:08:27 +02:00
3ab2cac089 fix: improve markdown rendering when git provider is unavailable 2024-12-19 20:59:17 +02:00
989670b159 fix: improve markdown rendering when git provider is unavailable 2024-12-19 20:49:40 +02:00
7e8361b5fd feat: enhance code review output with collapsible code snippets and variable links 2024-12-19 20:30:56 +02:00
eaaaf6a6a2 Fix context windows token for model o1-2024-12-17 2024-12-19 23:11:45 +07:00
07f3933f6d Add support OpenAI model o1 snapshot version o1-2024-12-17 2024-12-19 23:00:47 +07:00
Tal
84786495ed Merge pull request #1401 from Codium-ai/tr/docs3
docs: simplify default tool configurations and update documentation
2024-12-19 16:43:25 +02:00
d09aa1b13e docs: remove unused automatic_review configuration option 2024-12-19 16:41:18 +02:00
Tal
e9615c6994 Merge pull request #1384 from MarkRx/feature/version-metadata
Add --version command and version metadata
2024-12-19 09:34:24 +02:00
f3ee4a75b5 docs: simplify default tool configurations and update documentation 2024-12-19 09:33:20 +02:00
452abe2e18 Move get_version to algo/util.py; fix version to 0.25 2024-12-17 08:44:53 -07:00
Tal
a768969d37 Merge pull request #1397 from KennyDizi/main
Add support for `gemini/gemini-2.0-flash-exp` model
2024-12-16 20:37:55 +02:00
Tal
9ef9198468 Update index.md 2024-12-16 20:37:09 +02:00
Tal
d0ea901bca Update fetching_ticket_context.md 2024-12-16 20:29:17 +02:00
Tal
57089c931b Merge pull request #1396 from ofir-frd/add-pr-body-license-documentation
Similar Code: Add PR Body License Documentation
2024-12-16 20:24:42 +02:00
03d2bea50b Add support model gemini-2.0-flash-exp 2024-12-16 23:37:19 +07:00
721d38d4ed docs: add license information to similar code documentation 2024-12-16 17:11:17 +02:00
Tal
bbec5d9cc9 Merge pull request #1395 from Codium-ai/mrT23-patch-6
Update review.md
2024-12-14 11:54:56 +02:00
Tal
0be2750dfa Update review.md 2024-12-14 11:53:47 +02:00
Tal
5e5c251cd0 Merge pull request #1394 from Codium-ai/mrT23-patch-6
Update README.md
2024-12-13 16:11:12 +02:00
Tal
048ae8ee9e Update README.md
Co-authored-by: qodo-merge-pro-for-open-source[bot] <189517486+qodo-merge-pro-for-open-source[bot]@users.noreply.github.com>
2024-12-13 16:11:04 +02:00
Tal
1a77e9afaf Update README.md 2024-12-13 16:09:18 +02:00
Tal
184a52d325 Update README.md 2024-12-12 21:40:04 +02:00
Tal
3d38060dff Update README.md 2024-12-12 21:35:56 +02:00
c4dc263f2c docs: update repository reference to be platform-agnostic 2024-12-11 18:31:49 +02:00
Tal
f67cc0dd18 Merge pull request #1392 from Codium-ai/tr/model_weak
docs: remove model_weak configuration and simplify model selection
2024-12-11 18:19:05 +02:00
872b27bfd8 docs: remove model_weak configuration and simplify model selection 2024-12-11 18:10:34 +02:00
Tal
cb88489dbe Merge pull request #1387 from KennyDizi/main
Introduce to weak model
2024-12-11 17:36:18 +02:00
Tal
da786b8020 Merge pull request #1391 from Codium-ai/tr/thumbs_up_down
feat: add thumbs up/down support and refactor code suggestions handling
2024-12-11 13:21:07 +02:00
6a51b8501d docs: add allow_thumbs_up_down configuration option and remove rank_extended_suggestions 2024-12-11 13:16:21 +02:00
d34edb83ff feat: add thumbs up/down support and refactor code suggestions handling 2024-12-11 13:03:43 +02:00
Tal
c6eb253ed1 Merge pull request #1390 from glebzhidkov/docs/escape-list-in-yaml-file
Fix doc for Github Actions
2024-12-10 22:07:55 +02:00
a61f1889d1 fix doc 2024-12-10 13:44:56 +01:00
75a120952c Add version metadata and --version command 2024-12-09 09:27:54 -07:00
f9a7b18073 Improve condition to pick up weak model 2024-12-09 22:36:07 +07:00
6352e6e3bf Change default model to regular model 2024-12-09 22:24:44 +07:00
f49217e058 docs: fix typos in improve.md documentation 2024-12-09 09:09:09 +02:00
Tal
e11cec7d9e Merge pull request #1388 from Codium-ai/tr/readme_enhancments
docs: reorganize and enhance best practices documentation in improve.md
2024-12-09 09:05:25 +02:00
c45cde93ef docs: fix typos in improve.md documentation 2024-12-09 09:04:53 +02:00
9e25667d97 docs: reorganize and enhance best practices documentation in improve.md 2024-12-09 09:02:40 +02:00
7dc9e73423 fix: move user_tag variable declaration outside conditional block 2024-12-09 08:27:30 +02:00
e3d779c30d Fix typo model_weak 2024-12-08 22:09:48 +07:00
88a93bdcd7 Update weak model document 2024-12-08 22:01:00 +07:00
3c31048afc Update model in git provider 2024-12-08 22:00:37 +07:00
fc5dda0957 Use weak model for the rest flows 2024-12-08 21:51:29 +07:00
936894e4d1 Use regular model for pr review and code suggestion flows 2024-12-08 21:51:09 +07:00
dec2859fc4 Set default model to weak model 2024-12-08 21:10:26 +07:00
a4d9a65fc6 Add model_week 2024-12-08 20:23:36 +07:00
683108d3a5 Removed model_turbo 2024-12-08 20:10:38 +07:00
e68e100117 typo 2024-12-08 11:46:10 +02:00
Tal
158892047b Merge pull request #1386 from Codium-ai/tr/self_check
Tr/self check
2024-12-08 11:42:53 +02:00
Tal
7d5e59cd40 Update tests/health_test/main.py
Co-authored-by: qodo-merge-pro-for-open-source[bot] <189517486+qodo-merge-pro-for-open-source[bot]@users.noreply.github.com>
2024-12-08 11:41:05 +02:00
e8fc351ce9 docs: add CLI health check section and reorganize automations documentation 2024-12-08 11:39:15 +02:00
43e91b0df7 feat: add health test for PR agent commands and improve output handling 2024-12-08 11:27:43 +02:00
39a461b3b2 docs: update badges and clarify Qodo Merge Pro description 2024-12-05 21:50:39 +02:00
19ade4acf0 fixed link 2024-12-04 14:29:12 +02:00
Tal
10f8b522db Merge pull request #1381 from Codium-ai/tr/qodo_installation
Tr/qodo installation
2024-12-03 18:17:37 +02:00
d26ca4f71c docs: update Qodo Merge Pro installation documentation with Bitbucket support 2024-12-03 18:16:02 +02:00
9160f756db docs: update Qodo Merge Pro installation documentation with Bitbucket support 2024-12-03 18:14:36 +02:00
84c3a7b969 docs: update Qodo Merge Pro installation documentation with Bitbucket support 2024-12-03 18:12:21 +02:00
d9f9cc65b3 Merge pull request #1380 from Codium-ai/hl/ticket_docs_update
Update fetching_ticket_context.md
2024-12-03 18:04:06 +02:00
7d99c0db8e Merge remote-tracking branch 'origin/main' 2024-12-03 17:48:14 +02:00
fe20a8c5e7 docs: update Qodo Merge Pro installation documentation with rebranding changes 2024-12-03 17:48:05 +02:00
43c95106d4 Update fetching_ticket_context.md 2024-12-03 17:18:40 +02:00
Tal
1dd5f0b848 Merge pull request #1379 from Codium-ai/tr/disable_auto_commands
Add disable_auto_feedback configuration option to control automatic feedback
2024-12-02 21:34:15 +02:00
8610aa27a4 Add disable_auto_feedback configuration option to control automatic PR feedback 2024-12-02 21:28:48 +02:00
45 changed files with 801 additions and 779 deletions

3
.gitignore vendored
View File

@ -1,6 +1,7 @@
.idea/
.lsp/
.vscode/
.env
venv/
pr_agent/settings/.secrets.toml
__pycache__
@ -8,4 +9,4 @@ dist/
*.egg-info/
build/
.DS_Store
docs/.cache/
docs/.cache/

112
README.md
View File

@ -13,15 +13,13 @@
Qode Merge PR-Agent aims to help efficiently review and handle pull requests, by providing AI feedback and suggestions
</div>
[![GitHub license](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://github.com/Codium-ai/pr-agent/blob/main/LICENSE)
[![Static Badge](https://img.shields.io/badge/Chrome-Extension-violet)](https://chromewebstore.google.com/detail/pr-agent-chrome-extension/ephlnjeghhogofkifjloamocljapahnl)
[![Static Badge](https://img.shields.io/badge/Code-Benchmark-blue)](https://pr-agent-docs.codium.ai/finetuning_benchmark/)
[![Static Badge](https://img.shields.io/badge/Pro-App-blue)](https://github.com/apps/qodo-merge-pro/)
[![Static Badge](https://img.shields.io/badge/OpenSource-App-red)](https://github.com/apps/qodo-merge-pro-for-open-source/)
[![Discord](https://badgen.net/badge/icon/discord?icon=discord&label&color=purple)](https://discord.com/channels/1057273017547378788/1126104260430528613)
[![Twitter](https://img.shields.io/twitter/follow/codiumai)](https://twitter.com/codiumai)
[![Cheat Sheet](https://img.shields.io/badge/Cheat-Sheet-red)](https://www.codium.ai/images/pr_agent/cheat_sheet.pdf)
<a href="https://github.com/Codium-ai/pr-agent/commits/main">
<img alt="GitHub" src="https://img.shields.io/github/last-commit/Codium-ai/pr-agent/main?style=for-the-badge" height="20">
</a>
<a href="https://github.com/Codium-ai/pr-agent/commits/main">
<img alt="GitHub" src="https://img.shields.io/github/last-commit/Codium-ai/pr-agent/main?style=for-the-badge" height="20">
</a>
</div>
### [Documentation](https://pr-agent-docs.codium.ai/)
@ -43,12 +41,18 @@ Qode Merge PR-Agent aims to help efficiently review and handle pull requests, by
## News and Updates
### December 25, 2024
The `review` tool previously included a legacy feature for providing code suggestions (controlled by `--pr_reviewer.num_code_suggestion`). This functionality has been deprecated. Use the [`improve`](https://qodo-merge-docs.qodo.ai/tools/improve/) tool instead, which offers higher-quality and more actionable code suggestions.
### December 2, 2024
Open-source repositories can now freely use Qodo Merge Pro, and enjoy easy one-click installation using our dedicated [app](https://github.com/apps/qodo-merge-pro-for-open-source).
Open-source repositories can now freely use Qodo Merge Pro, and enjoy easy one-click installation using a marketplace [app](https://github.com/apps/qodo-merge-pro-for-open-source).
<kbd><img src="https://github.com/user-attachments/assets/b0838724-87b9-43b0-ab62-73739a3a855c" width="512"></kbd>
See [here](https://qodo-merge-docs.qodo.ai/installation/pr_agent_pro/) for more details about installing Qodo Merge Pro for private repositories.
### November 18, 2024
@ -73,7 +77,7 @@ Focused mode
### November 4, 2024
Qodo Merge PR Agent will now leverage context from Jira or GitHub tickets to enhance the PR Feedback. Read more about this feature
Qodo Merge PR Agent will now leverage context from Jira or GitHub tickets to enhance the PR Feedback. Read more about this feature
[here](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/)
@ -82,39 +86,41 @@ Qodo Merge PR Agent will now leverage context from Jira or GitHub tickets to enh
Supported commands per platform:
| | | GitHub | Gitlab | Bitbucket | Azure DevOps |
| | | GitHub | GitLab | Bitbucket | Azure DevOps |
|-------|---------------------------------------------------------------------------------------------------------|:--------------------:|:--------------------:|:--------------------:|:------------:|
| TOOLS | Review | ✅ | ✅ | ✅ | ✅ |
| | ⮑ Incremental | ✅ | | | |
| | Describe | ✅ | ✅ | ✅ | ✅ |
| | ⮑ [Inline File Summary](https://pr-agent-docs.codium.ai/tools/describe#inline-file-summary) 💎 | ✅ | | | |
| | Improve | ✅ | ✅ | ✅ | ✅ |
| | ⮑ Extended | ✅ | ✅ | ✅ | ✅ |
| | Ask | ✅ | ✅ | ✅ | ✅ |
| TOOLS | [Review](https://qodo-merge-docs.qodo.ai/tools/review/) | ✅ | ✅ | ✅ | ✅ |
| | [Describe](https://qodo-merge-docs.qodo.ai/tools/describe/) | ✅ | ✅ | ✅ | |
| | [Improve](https://qodo-merge-docs.qodo.ai/tools/improve/) | ✅ | ✅ | ✅ | ✅ |
| | [Ask](https://qodo-merge-docs.qodo.ai/tools/ask/) | ✅ | ✅ | ✅ | ✅ |
| | ⮑ [Ask on code lines](https://pr-agent-docs.codium.ai/tools/ask#ask-lines) | ✅ | ✅ | | |
| | [Custom Prompt](https://pr-agent-docs.codium.ai/tools/custom_prompt/) 💎 | ✅ | ✅ | ✅ | |
| | [Test](https://pr-agent-docs.codium.ai/tools/test/) 💎 | ✅ | ✅ | | |
| | Reflect and Review | ✅ | ✅ | ✅ | ✅ |
| | Update CHANGELOG.md | ✅ | ✅ | ✅ | ✅ |
| | Find Similar Issue | ✅ | | | |
| | [Add PR Documentation](https://pr-agent-docs.codium.ai/tools/documentation/) 💎 | ✅ | | | |
| | [Update CHANGELOG](https://qodo-merge-docs.qodo.ai/tools/update_changelog/) | ✅ | ✅ | ✅ | |
| | [Ticket Context](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/) 💎 | ✅ | ✅ | | |
| | [Utilizing Best Practices](https://qodo-merge-docs.qodo.ai/tools/improve/#best-practices) 💎 | ✅ | ✅ | | |
| | [PR Chat](https://qodo-merge-docs.qodo.ai/chrome-extension/features/#pr-chat) 💎 | ✅ | | | |
| | [Suggestion Tracking](https://qodo-merge-docs.qodo.ai/tools/improve/#suggestion-tracking) 💎 | ✅ | ✅ | | |
| | [CI Feedback](https://pr-agent-docs.codium.ai/tools/ci_feedback/) 💎 | ✅ | | | |
| | [PR Documentation](https://pr-agent-docs.codium.ai/tools/documentation/) 💎 | ✅ | ✅ | | |
| | [Custom Labels](https://pr-agent-docs.codium.ai/tools/custom_labels/) 💎 | ✅ | ✅ | | |
| | [Analyze](https://pr-agent-docs.codium.ai/tools/analyze/) 💎 | ✅ | ✅ | | |
| | [CI Feedback](https://pr-agent-docs.codium.ai/tools/ci_feedback/) 💎 | ✅ | | | |
| | [Similar Code](https://pr-agent-docs.codium.ai/tools/similar_code/) 💎 | ✅ | | | |
| | [Custom Prompt](https://pr-agent-docs.codium.ai/tools/custom_prompt/) 💎 | ✅ | ✅ | ✅ | |
| | [Test](https://pr-agent-docs.codium.ai/tools/test/) 💎 | ✅ | ✅ | | |
| | | | | | |
| USAGE | CLI | ✅ | ✅ | ✅ | ✅ |
| | App / webhook | ✅ | ✅ | ✅ | ✅ |
| | Tagging bot | ✅ | | | |
| | Actions | ✅ |✅| ✅ |✅|
| USAGE | [CLI](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#local-repo-cli) | ✅ | ✅ | ✅ | ✅ |
| | [App / webhook](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#github-app) | ✅ | ✅ | ✅ | ✅ |
| | [Tagging bot](https://github.com/Codium-ai/pr-agent#try-it-now) | ✅ | | | |
| | [Actions](https://qodo-merge-docs.qodo.ai/installation/github/#run-as-a-github-action) | ✅ |✅| ✅ |✅|
| | | | | | |
| CORE | PR compression | ✅ | ✅ | ✅ | ✅ |
| | Repo language prioritization | ✅ | ✅ | ✅ | ✅ |
| CORE | [PR compression](https://qodo-merge-docs.qodo.ai/core-abilities/compression_strategy/) | ✅ | ✅ | ✅ | ✅ |
| | Adaptive and token-aware file patch fitting | ✅ | ✅ | ✅ | ✅ |
| | Multiple models support | ✅ | ✅ | ✅ | ✅ |
| | [Static code analysis](https://pr-agent-docs.codium.ai/core-abilities/#static-code-analysis) 💎 | ✅ | ✅ | ✅ | |
| | [Multiple models support](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/) | ✅ | ✅ | ✅ | ✅ |
| | [Local and global metadata](https://qodo-merge-docs.qodo.ai/core-abilities/metadata/) | ✅ | ✅ | ✅ | |
| | [Dynamic context](https://qodo-merge-docs.qodo.ai/core-abilities/dynamic_context/) | ✅ | ✅ | ✅ | ✅ |
| | [Self reflection](https://qodo-merge-docs.qodo.ai/core-abilities/self_reflection/) | ✅ | ✅ | ✅ | ✅ |
| | [Static code analysis](https://qodo-merge-docs.qodo.ai/core-abilities/static_code_analysis/) 💎 | ✅ | ✅ | ✅ | |
| | [Global and wiki configurations](https://pr-agent-docs.codium.ai/usage-guide/configuration_options/) 💎 | ✅ | ✅ | ✅ | |
| | [PR interactive actions](https://www.codium.ai/images/pr_agent/pr-actions.mp4) 💎 | ✅ | ✅ | | |
| | [Impact Evaluation](https://qodo-merge-docs.qodo.ai/core-abilities/impact_evaluation/) 💎 | ✅ | ✅ | | |
- 💎 means this feature is available only in [PR-Agent Pro](https://www.codium.ai/pricing/)
[//]: # (- Support for additional git providers is described in [here]&#40;./docs/Full_environments.md&#41;)
@ -175,50 +181,8 @@ ___
</kbd>
</p>
</div>
<hr>
<h4><a href="https://github.com/Codium-ai/pr-agent/pull/530">/generate_labels</a></h4>
<div align="center">
<p float="center">
<kbd><img src="https://www.codium.ai/images/pr_agent/geneare_custom_labels_main_short.png" width="300"></kbd>
</p>
</div>
[//]: # (<h4><a href="https://github.com/Codium-ai/pr-agent/pull/78#issuecomment-1639739496">/reflect_and_review:</a></h4>)
[//]: # (<div align="center">)
[//]: # (<p float="center">)
[//]: # (<img src="https://www.codium.ai/images/reflect_and_review.gif" width="800">)
[//]: # (</p>)
[//]: # (</div>)
[//]: # (<h4><a href="https://github.com/Codium-ai/pr-agent/pull/229#issuecomment-1695020538">/ask:</a></h4>)
[//]: # (<div align="center">)
[//]: # (<p float="center">)
[//]: # (<img src="https://www.codium.ai/images/ask-2.gif" width="800">)
[//]: # (</p>)
[//]: # (</div>)
[//]: # (<h4><a href="https://github.com/Codium-ai/pr-agent/pull/229#issuecomment-1695024952">/improve:</a></h4>)
[//]: # (<div align="center">)
[//]: # (<p float="center">)
[//]: # (<img src="https://www.codium.ai/images/improve-2.gif" width="800">)
[//]: # (</p>)
[//]: # (</div>)
<div align="left">

View File

@ -4,7 +4,7 @@ With a single-click installation you will gain access to a context-aware chat on
The extension is powered by top code models like Claude 3.5 Sonnet and GPT4. All the extension's features are free to use on public repositories.
For private repositories, you will need to install [Qodo Merge Pro](https://github.com/apps/codiumai-pr-agent-pro) in addition to the extension (Quick GitHub app setup with a 14-day free trial. No credit card needed).
For private repositories, you will need to install [Qodo Merge Pro](https://github.com/apps/qodo-merge-pro) in addition to the extension (Quick GitHub app setup with a 14-day free trial. No credit card needed).
For a demonstration of how to install Qodo Merge Pro and use it with the Chrome extension, please refer to the tutorial video at the provided [link](https://codium.ai/images/pr_agent/private_repos.mp4).
<img src="https://codium.ai/images/pr_agent/PR-AgentChat.gif" width="768">

View File

@ -1,16 +1,29 @@
# Fetching Ticket Context for PRs
`Supported Git Platforms : GitHub, GitLab, Bitbucket`
## Overview
Qodo Merge PR Agent streamlines code review workflows by seamlessly connecting with multiple ticket management systems.
This integration enriches the review process by automatically surfacing relevant ticket information and context alongside code changes.
## Ticket systems supported
- GitHub
- Jira (💎)
Ticket data fetched:
1. Ticket Title
2. Ticket Description
3. Custom Fields (Acceptance criteria)
4. Subtasks (linked tasks)
5. Labels
6. Attached Images/Screenshots
## Affected Tools
Ticket Recognition Requirements:
1. The PR description should contain a link to the ticket.
2. For Jira tickets, you should follow the instructions in [Jira Integration](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/#jira-integration) in order to authenticate with Jira.
- The PR description should contain a link to the ticket, or the branch name should start with the ticket ID/number.
- For Jira tickets, you should follow the instructions in [Jira Integration](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/#jira-integration) in order to authenticate with Jira.
### Describe tool
Qodo Merge PR Agent will recognize the ticket and use the ticket content (title, description, labels) to provide additional context for the code changes.
@ -49,12 +62,18 @@ Since Qodo Merge PR Agent is integrated with GitHub, it doesn't require any addi
### Jira Integration 💎
We support both Jira Cloud and Jira Server/Data Center.
To integrate with Jira, The PR Description should contain a link to the Jira ticket.
To integrate with Jira, you can link your PR to a ticket using either of these methods:
For Jira integration, include a ticket reference in your PR description using either the complete URL format `https://<JIRA_ORG>.atlassian.net/browse/ISSUE-123` or the shortened ticket ID `ISSUE-123`.
**Method 1: Description Reference:**
Include a ticket reference in your PR description using either the complete URL format https://<JIRA_ORG>.atlassian.net/browse/ISSUE-123 or the shortened ticket ID ISSUE-123.
**Method 2: Branch Name Detection:**
Name your branch with the ticket ID as a prefix (e.g., `ISSUE-123-feature-description` or `ISSUE-123/feature-description`).
!!! note "Jira Base URL"
If using the shortened format, ensure your configuration file contains the Jira base URL under the [jira] section like this:
For shortened ticket IDs or branch detection (method 2), you must configure the Jira base URL in your configuration file under the [jira] section:
```toml
[jira]
@ -112,4 +131,4 @@ Currently, we only support the Personal Access Token (PAT) Authentication method
[jira]
jira_base_url = "YOUR_JIRA_BASE_URL" # e.g. https://jira.example.com
jira_api_token = "YOUR_API_TOKEN"
```
```

View File

@ -61,7 +61,7 @@ Or be triggered interactively by using the `analyze` tool.
### Find Similar Code
The [`similar code`](https://qodo-merge-docs.qodo.ai/tools/similar_code/) tool retrieves the most similar code components from inside the organization's codebase, or from open-source code.
The [`similar code`](https://qodo-merge-docs.qodo.ai/tools/similar_code/) tool retrieves the most similar code components from inside the organization's codebase or from open-source code, including details about the license associated with each repository.
For example:

View File

@ -25,36 +25,43 @@ To search the documentation site using natural language:
Qodo Merge offers extensive pull request functionalities across various git providers.
| | | GitHub | Gitlab | Bitbucket | Azure DevOps |
|-------|-----------------------------------------------------------------------------------------------------------------------|:------:|:------:|:---------:|:------------:|
| TOOLS | Review | ✅ | ✅ | ✅ | ✅ |
| | ⮑ Incremental | ✅ | | | |
| | Ask | ✅ | ✅ | ✅ | ✅ |
| | Describe | ✅ | ✅ | ✅ | ✅ |
| | ⮑ [Inline file summary](https://qodo-merge-docs.qodo.ai/tools/describe/#inline-file-summary){:target="_blank"} 💎 | | | | |
| | Improve | ✅ | ✅ | ✅ | ✅ |
| | ⮑ Extended | ✅ | ✅ | | |
| | [Custom Prompt](./tools/custom_prompt.md){:target="_blank"} 💎 | ✅ | ✅ | | |
| | Reflect and Review | ✅ | ✅ | ✅ | |
| | Update CHANGELOG.md | | ✅ | ✅ | |
| | Find Similar Issue | | | | |
| | [Add PR Documentation](./tools/documentation.md){:target="_blank"} 💎 | ✅ | | | |
| | [Generate Custom Labels](./tools/describe.md#handle-custom-labels-from-the-repos-labels-page-💎){:target="_blank"} 💎 | ✅ | | | |
| | [Analyze PR Components](./tools/analyze.md){:target="_blank"} 💎 | | | | |
| | | | | | |
| USAGE | CLI | ✅ | ✅ | | |
| | App / webhook | | | | |
| | Actions | ✅ | | | |
| | | | | |
| CORE | PR compression | ✅ | ✅ | ✅ | ✅ |
| | Repo language prioritization | | | | |
| | Adaptive and token-aware file patch fitting | ✅ | ✅ | ✅ | ✅ |
| | Multiple models support | ✅ | ✅ | ✅ | |
| | Incremental PR review | ✅ | | | |
| | [Static code analysis](./tools/analyze.md/){:target="_blank"} 💎 | ✅ | ✅ | ✅ | |
| | [Multiple configuration options](./usage-guide/configuration_options.md){:target="_blank"} 💎 | | | | |
| | | GitHub | GitLab | Bitbucket | Azure DevOps |
|-------|---------------------------------------------------------------------------------------------------------|:--------------------:|:--------------------:|:--------------------:|:------------:|
| TOOLS | [Review](https://qodo-merge-docs.qodo.ai/tools/review/) | ✅ | ✅ | ✅ | ✅ |
| | [Describe](https://qodo-merge-docs.qodo.ai/tools/describe/) | ✅ | ✅ | ✅ | |
| | [Improve](https://qodo-merge-docs.qodo.ai/tools/improve/) | ✅ | ✅ | ✅ | ✅ |
| | [Ask](https://qodo-merge-docs.qodo.ai/tools/ask/) | ✅ | ✅ | ✅ | ✅ |
| | ⮑ [Ask on code lines](https://pr-agent-docs.codium.ai/tools/ask#ask-lines) | ✅ || | |
| | [Update CHANGELOG](https://qodo-merge-docs.qodo.ai/tools/update_changelog/) | ✅ | ✅ | ✅ | ✅ |
| | [Ticket Context](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/) 💎 | ✅ | ✅ || |
| | [Utilizing Best Practices](https://qodo-merge-docs.qodo.ai/tools/improve/#best-practices) 💎 | ✅ | ✅ || |
| | [PR Chat](https://qodo-merge-docs.qodo.ai/chrome-extension/features/#pr-chat) 💎 | ✅ | | | |
| | [Suggestion Tracking](https://qodo-merge-docs.qodo.ai/tools/improve/#suggestion-tracking) 💎 | ✅ | ✅ | | |
| | [CI Feedback](https://pr-agent-docs.codium.ai/tools/ci_feedback/) 💎 | ✅ | | | |
| | [PR Documentation](https://pr-agent-docs.codium.ai/tools/documentation/) 💎 | ✅ | ✅ | | |
| | [Custom Labels](https://pr-agent-docs.codium.ai/tools/custom_labels/) 💎 | ✅ || | |
| | [Analyze](https://pr-agent-docs.codium.ai/tools/analyze/) 💎 | ✅ | ✅ | | |
| | [Similar Code](https://pr-agent-docs.codium.ai/tools/similar_code/) 💎 | ✅ | | | |
| | [Custom Prompt](https://pr-agent-docs.codium.ai/tools/custom_prompt/) 💎 | ✅ | ✅ || |
| | [Test](https://pr-agent-docs.codium.ai/tools/test/) 💎 | ✅ || | |
| | | | | | |
| USAGE | [CLI](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#local-repo-cli) | ✅ | ✅ | ✅ | |
| | [App / webhook](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#github-app) | ✅ | ✅ | ✅ | ✅ |
| | [Tagging bot](https://github.com/Codium-ai/pr-agent#try-it-now) | ✅ | | | |
| | [Actions](https://qodo-merge-docs.qodo.ai/installation/github/#run-as-a-github-action) | ✅ |✅| ✅ |✅|
| | | | | | |
| CORE | [PR compression](https://qodo-merge-docs.qodo.ai/core-abilities/compression_strategy/) | ✅ | ✅ | ✅ | |
| | Adaptive and token-aware file patch fitting | ✅ | ✅ | ✅ | |
| | [Multiple models support](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/) || ✅ | ✅ | |
| | [Local and global metadata](https://qodo-merge-docs.qodo.ai/core-abilities/metadata/) | ✅ | ✅ | ✅ | ✅ |
| | [Dynamic context](https://qodo-merge-docs.qodo.ai/core-abilities/dynamic_context/) | ✅ | ✅ | ✅ | ✅ |
| | [Self reflection](https://qodo-merge-docs.qodo.ai/core-abilities/self_reflection/) | ✅ | ✅ | ✅ | ✅ |
| | [Static code analysis](https://qodo-merge-docs.qodo.ai/core-abilities/static_code_analysis/) 💎 | ✅ | ✅ | ✅ | |
| | [Global and wiki configurations](https://pr-agent-docs.codium.ai/usage-guide/configuration_options/) 💎 | ✅ | ✅ | ✅ | |
| | [PR interactive actions](https://www.codium.ai/images/pr_agent/pr-actions.mp4) 💎 | ✅ | ✅ | | |
| | [Impact Evaluation](https://qodo-merge-docs.qodo.ai/core-abilities/impact_evaluation/) 💎 | ✅ | ✅ | | |
💎 marks a feature available only in [Qodo Merge Pro](https://www.codium.ai/pricing/){:target="_blank"}
💎 marks a feature available only in [Qodo Merge Pro](https://www.qodo.ai/pricing/){:target="_blank"}
## Example Results

View File

@ -66,7 +66,30 @@ To invoke a tool (for example `review`), you can run directly from the Docker im
docker run --rm -it -e CONFIG.GIT_PROVIDER=bitbucket -e OPENAI.KEY=$OPENAI_API_KEY -e BITBUCKET.BEARER_TOKEN=$BITBUCKET_BEARER_TOKEN codiumai/pr-agent:latest --pr_url=<pr_url> review
```
For other git providers, update CONFIG.GIT_PROVIDER accordingly, and check the `pr_agent/settings/.secrets_template.toml` file for the environment variables expected names and values.
For other git providers, update `CONFIG.GIT_PROVIDER` accordingly and check the `pr_agent/settings/.secrets_template.toml` file for the expected environment variable names and values.
The `pr_agent` uses [Dynaconf](https://www.dynaconf.com/) to load settings from configuration files.
It is also possible to provide or override the configuration by setting the corresponding environment variables.
You can define these environment variables by following this convention: `<TABLE>__<KEY>=<VALUE>` or `<TABLE>.<KEY>=<VALUE>`.
The `<TABLE>` refers to a table/section in a configuration file and `<KEY>=<VALUE>` refers to the key/value pair of a setting in the configuration file.
For example, suppose you want to run `pr_agent` that connects to a self-hosted GitLab instance similar to an example above.
You can define the environment variables in a plain text file named `.env` with the following content:
> Warning: Never commit the `.env` file to a version control system, as it might contain sensitive credentials!
```
CONFIG__GIT_PROVIDER="gitlab"
GITLAB__URL="<your url>"
GITLAB__PERSONAL_ACCESS_TOKEN="<your token>"
OPENAI__KEY="<your key>"
```
Then, you can run `pr_agent` using Docker with the following command:
```shell
docker run --rm -it --env-file .env codiumai/pr-agent:latest <tool> <tool parameter>
```
---

View File

@ -1,31 +1,44 @@
## Getting Started with Qodo Merge Pro
Qodo Merge Pro is a versatile application compatible with GitHub, GitLab, and BitBucket, hosted by CodiumAI.
Qodo Merge Pro is a versatile application compatible with GitHub, GitLab, and BitBucket, hosted by QodoAI.
See [here](https://qodo-merge-docs.qodo.ai/overview/pr_agent_pro/) for more details about the benefits of using Qodo Merge Pro.
Interested parties can subscribe to Qodo Merge Pro through the following [link](https://www.codium.ai/pricing/).
After subscribing, you are granted the ability to easily install the application across any of your repositories.
A complimentary two-week trial is provided to all new users. Following the trial period, user licenses (seats) are required for continued access.
To purchase user licenses, please visit our [pricing page](https://www.qodo.ai/pricing/).
Once subscribed, users can seamlessly deploy the application across any of their code repositories.
## Install Qodo Merge Pro for GitHub
### GitHub Cloud
Qodo Merge Pro for GitHub cloud is available for installation through the [GitHub Marketplace](https://github.com/apps/qodo-merge-pro).
![Qodo Merge Pro](https://codium.ai/images/pr_agent/pr_agent_pro_install.png){width=468}
Each user who wants to use Qodo Merge pro needs to buy a seat.
Initially, CodiumAI offers a two-week trial period at no cost, after which continued access requires each user to secure a personal seat.
Once a user acquires a seat, they gain the flexibility to use Qodo Merge Pro across any repository where it was enabled.
Users without a purchased seat who interact with a repository featuring Qodo Merge Pro are entitled to receive up to five complimentary feedbacks.
Beyond this limit, Qodo Merge Pro will cease to respond to their inquiries unless a seat is purchased.
## Install Qodo Merge Pro for GitHub Enterprise Server
### GitHub Enterprise Server
To use the Qodo Merge Pro application on your private GitHub Enterprise Server, you will need to contact us to start an [Enterprise](https://www.codium.ai/pricing/) trial.
### GitHub Open Source Projects
For open-source projects, Qodo Merge Pro is available for free usage. To install Qodo Merge Pro for your open-source repositories, use the following marketplace [link](https://github.com/apps/qodo-merge-pro-for-open-source).
## Install Qodo Merge Pro for Bitbucket
### Bitbucket Cloud
Qodo Merge Pro for Bitbucket Cloud is available for installation through the following [link](https://bitbucket.org/site/addons/authorize?addon_key=d6df813252c37258)
![Qodo Merge Pro](https://qodo.ai/images/pr_agent/pr_agent_pro_bitbucket_install.png){width=468}
### Bitbucket Server
To use the Qodo Merge Pro application on your private Bitbucket Server, you will need to contact us to start an [Enterprise](https://www.codium.ai/pricing/) trial.
## Install Qodo Merge Pro for GitLab (Teams & Enterprise)
Since the GitLab platform does not support apps, installing Qodo Merge Pro for GitLab is a bit more involved and requires the following steps:
### Step 1
#### Step 1
Acquire a personal, project or group level access token. Enable the “api” scope in order to allow Qodo Merge to read pull requests, comment and respond to requests.
@ -35,14 +48,14 @@ Acquire a personal, project or group level access token. Enable the “api” sc
Store the token in a safe place; you won't be able to access it again after it is generated.
### Step 2
#### Step 2
Generate a shared secret and link it to the access token. Browse to [https://register.gitlab.pr-agent.codium.ai](https://register.gitlab.pr-agent.codium.ai).
Fill in your generated GitLab token and your company or personal name in the appropriate fields and click "Submit".
You should see "Success!" displayed above the Submit button, and a shared secret will be generated. Store it in a safe place; you won't be able to access it again after it is generated.
### Step 3
#### Step 3
Install a webhook for your repository or group by clicking "webhooks" in the settings menu, then click the "Add new webhook" button.
@ -53,7 +66,7 @@ Install a webhook for your repository or groups, by clicking “webhooks” on t
In the webhook definition form, fill in the following fields:
URL: https://pro.gitlab.pr-agent.codium.ai/webhook
Secret token: Your CodiumAI key
Secret token: Your QodoAI key
Trigger: Check the comments and merge request events boxes.
Enable SSL verification: Check the box.
@ -61,7 +74,7 @@ Enable SSL verification: Check the box.
![Step 3.2](https://www.codium.ai/images/pr_agent/gitlab_pro_webhooks.png){width=750}
</figure>
### Step 4
#### Step 4
You're all set!

View File

@ -1,6 +1,6 @@
### Overview
[Qodo Merge Pro](https://www.codium.ai/pricing/) is a hosted version of Qodo Merge, provided by Qodo. A complimentary two-week trial is offered, followed by a monthly subscription fee.
[Qodo Merge Pro](https://www.codium.ai/pricing/) is a hosted version of open-source [Qodo Merge (PR-Agent)](https://github.com/Codium-ai/pr-agent). A complimentary two-week trial is offered, followed by a monthly subscription fee.
Qodo Merge Pro is designed for companies and teams that require additional features and capabilities. It provides the following benefits:
1. **Fully managed** - We take care of everything for you - hosting, models, regular updates, and more. Installation is as simple as signing up and adding the Qodo Merge app to your GitHub\GitLab\BitBucket repo.

View File

@ -95,6 +95,112 @@ This feature is controlled by a boolean configuration parameter: `pr_code_sugges
Instead, we leverage a dedicated private page within your repository wiki to track suggestions. This approach offers convenient, secure suggestion tracking while avoiding pull requests or any noise in the main repository.
## `Extra instructions` and `best practices`
The `improve` tool can be further customized by providing additional instructions and best practices to the AI model.
### Extra instructions
>`Platforms supported: GitHub, GitLab, Bitbucket, Azure DevOps`
You can use the `extra_instructions` configuration option to give the AI model additional instructions for the `improve` tool.
Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter.
Examples for possible instructions:
```toml
[pr_code_suggestions]
extra_instructions="""\
(1) Answer in japanese
(2) Don't suggest to add try-except block
(3) Ignore changes in toml files
...
"""
```
Use triple quotes to write multi-line instructions. Use bullet points or numbers to make the instructions more readable.
### Best practices 💎
>`Platforms supported: GitHub, GitLab, Bitbucket`
Another option to give additional guidance to the AI model is by creating a dedicated [**wiki page**](https://github.com/Codium-ai/pr-agent/wiki) called `best_practices.md`.
This page can contain a list of best practices, coding standards, and guidelines that are specific to your repo/organization.
The AI model will use this wiki page as a reference, and if the PR code violates any of the guidelines, it will create additional suggestions with a dedicated label: `Organization
best practice`.
Example for a python `best_practices.md` content:
```markdown
## Project best practices
- Make sure that I/O operations are encapsulated in a try-except block
- Use the `logging` module for logging instead of `print` statements
- Use `is` and `is not` to compare with `None`
- Use `if __name__ == '__main__':` to run the code only when the script is executed
- Use `with` statement to open files
...
```
Tips for writing an effective `best_practices.md` file:
- Write clearly and concisely
- Include brief code examples when helpful
- Focus on project-specific guidelines that will result in relevant suggestions you actually want to get
- Keep the file relatively short, under 800 lines, since:
- AI models may not process very long documents effectively
- Long files tend to contain generic guidelines already known to AI
#### Local and global best practices
By default, Qodo Merge will look for a local `best_practices.md` wiki file in the root of the relevant local repo.
If you also want to enable a global `best_practices.md` wiki file, first set the following in the global configuration file:
```toml
[best_practices]
enable_global_best_practices = true
```
Then, create a `best_practices.md` wiki file in the root of the [global](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/#global-configuration-file) configuration repository, `pr-agent-settings`.
#### Best practices for multiple languages
For a git organization working with multiple programming languages, you can maintain a centralized global `best_practices.md` file containing language-specific guidelines.
When reviewing pull requests, Qodo Merge automatically identifies the programming language and applies the relevant best practices from this file.
To do this, structure your `best_practices.md` file using the following format:
```
# [Python]
...
# [Java]
...
# [JavaScript]
...
```
#### Dedicated label for best practices suggestions
Best practice suggestions are labeled as `Organization best practice` by default.
To customize this label, modify it in your configuration file:
```toml
[best_practices]
organization_name = "..."
```
And the label will be: `{organization_name} best practice`.
#### Example results
![best_practice](https://codium.ai/images/pr_agent/org_best_practice.png){width=512}
### How to combine `extra instructions` and `best practices`
The `extra instructions` configuration is more related to the `improve` tool prompt. It can be used, for example, to avoid specific suggestions ("Don't suggest to add try-except block", "Ignore changes in toml files", ...) or to emphasize specific aspects or formats ("Answer in Japanese", "Give only short suggestions", ...)
In contrast, the `best_practices.md` file is a general guideline for the way code should be written in the repo.
Using a combination of both can help the AI model to provide relevant and tailored suggestions.
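A minimal sketch of a configuration that uses both mechanisms side by side (the keys are the ones shown earlier on this page; the values, including the organization name, are illustrative):
```toml
[pr_code_suggestions]
# Tool-level prompt tweaks: formats to emphasize and suggestions to avoid
extra_instructions = """\
(1) Give only short suggestions
(2) Ignore changes in toml files
"""

[best_practices]
# Repo-wide guidelines live in the best_practices.md wiki file;
# also enable the global wiki file from the pr-agent-settings repo
enable_global_best_practices = true
organization_name = "Acme"  # illustrative; used in the "{organization_name} best practice" label
```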
## Usage Tips
### Implementing the proposed code suggestions
@ -191,99 +297,6 @@ This approach has two main benefits:
Note: Chunking is primarily relevant for large PRs. For most PRs (up to 500 lines of code), Qodo Merge will be able to process the entire code in a single call.
### 'Extra instructions' and 'best practices'
#### Extra instructions
>`Platforms supported: GitHub, GitLab, Bitbucket`
You can use the `extra_instructions` configuration option to give the AI model additional instructions for the `improve` tool.
Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Specify relevant aspects that you want the model to focus on.
Examples for possible instructions:
```toml
[pr_code_suggestions]
extra_instructions="""\
(1) Answer in japanese
(2) Don't suggest to add try-excpet block
(3) Ignore changes in toml files
...
"""
```
Use triple quotes to write multi-line instructions. Use bullet points or numbers to make the instructions more readable.
#### Best practices 💎
>`Platforms supported: GitHub, GitLab`
Another option to give additional guidance to the AI model is by creating a dedicated [**wiki page**](https://github.com/Codium-ai/pr-agent/wiki) called `best_practices.md`.
This page can contain a list of best practices, coding standards, and guidelines that are specific to your repo/organization.
The AI model will use this wiki page as a reference, and in case the PR code violates any of the guidelines, it will suggest improvements accordingly, with a dedicated label: `Organization
best practice`.
Example for a `best_practices.md` content can be found [here](https://github.com/Codium-ai/pr-agent/blob/main/docs/docs/usage-guide/EXAMPLE_BEST_PRACTICE.md) (adapted from Google's [pyguide](https://google.github.io/styleguide/pyguide.html)).
This file is only an example. Since it is used as a prompt for an AI model, we want to emphasize the following:
- It should be written in a clear and concise manner
- If needed, it should give short relevant code snippets as examples
- Recommended to limit the text to 800 lines or fewer. Heres why:
1) Extremely long best practices documents may not be fully processed by the AI model.
2) A lengthy file probably represent a more "**generic**" set of guidelines, which the AI model is already familiar with. The objective is to focus on a more targeted set of guidelines tailored to the specific needs of this project.
##### Local and global best practices
By default, Qodo Merge will look for a local `best_practices.md` wiki file in the root of the relevant local repo.
If you want to enable also a global `best_practices.md` wiki file, set first in the global configuration file:
```toml
[best_practices]
enable_global_best_practices = true
```
Then, create a `best_practices.md` wiki file in the root of [global](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/#global-configuration-file) configuration repository, `pr-agent-settings`.
##### Best practices for multiple languages
For a git organization working with multiple programming languages, you can maintain a centralized global `best_practices.md` file containing language-specific guidelines.
When reviewing pull requests, Qodo Merge automatically identifies the programming language and applies the relevant best practices from this file.
Structure your `best_practices.md` file using the following format:
```
# [Python]
...
# [Java]
...
# [JavaScript]
...
```
##### Dedicated label for best practices suggestions
Best practice suggestions are labeled as `Organization best practice` by default.
To customize this label, modify it in your configuration file:
```toml
[best_practices]
organization_name = ""
```
And the label will be: `{organization_name} best practice`.
##### Example results
![best_practice](https://codium.ai/images/pr_agent/org_best_practice.png){width=512}
#### How to combine `extra instructions` and `best practices`
The `extra instructions` configuration is more related to the `improve` tool prompt. It can be used, for example, to avoid specific suggestions ("Don't suggest to add try-except block", "Ignore changes in toml files", ...) or to emphasize specific aspects or formats ("Answer in Japanese", "Give only short suggestions", ...)
In contrast, the `best_practices.md` file is a general guideline for the way code should be written in the repo.
Using a combination of both can help the AI model to provide relevant and tailored suggestions.
## Configuration options
??? example "General options"
@ -329,6 +342,10 @@ Using a combination of both can help the AI model to provide relevant and tailor
<td><b>wiki_page_accepted_suggestions</b></td>
<td>If set to true, the tool will automatically track accepted suggestions in a dedicated wiki page called `.pr_agent_accepted_suggestions`. Default is true.</td>
</tr>
<tr>
<td><b>allow_thumbs_up_down</b></td>
<td>If set to true, all code suggestions will have thumbs up and thumbs down buttons, to encourage users to provide feedback on the suggestions. Default is false.</td>
</tr>
</table>
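For instance, assuming these options sit under the `[pr_code_suggestions]` section like the other `improve` settings on this page, enabling the feedback buttons would look like:
```toml
[pr_code_suggestions]
allow_thumbs_up_down = true  # default is false
```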
??? example "Params for number of suggestions and AI calls"
@ -346,10 +363,6 @@ Using a combination of both can help the AI model to provide relevant and tailor
<td><b>max_number_of_calls</b></td>
<td>Maximum number of chunks. Default is 3.</td>
</tr>
<tr>
<td><b>rank_extended_suggestions</b></td>
<td>If set to true, the tool will rank the suggestions, based on importance. Default is true.</td>
</tr>
</table>
## A note on code suggestions quality

View File

@ -39,68 +39,19 @@ pr_commands = [
]
[pr_reviewer]
num_code_suggestions = ...
extra_instructions = "..."
...
```
- The `pr_commands` lists commands that will be executed automatically when a PR is opened.
- The `[pr_reviewer]` section contains the configurations for the `review` tool you want to edit (if any).
[//]: # ()
[//]: # (### Incremental Mode)
[//]: # (Incremental review only considers changes since the last Qodo Merge review. This can be useful when working on the PR in an iterative manner, and you want to focus on the changes since the last review instead of reviewing the entire PR again.)
[//]: # (For invoking the incremental mode, the following command can be used:)
[//]: # (```)
[//]: # (/review -i)
[//]: # (```)
[//]: # (Note that the incremental mode is only available for GitHub.)
[//]: # ()
[//]: # (![incremental review]&#40;https://codium.ai/images/pr_agent/incremental_review_2.png&#41;{width=512})
[//]: # (### PR Reflection)
[//]: # ()
[//]: # (By invoking:)
[//]: # (```)
[//]: # (/reflect_and_review)
[//]: # (```)
[//]: # (The tool will first ask the author questions about the PR, and will guide the review based on their answers.)
[//]: # ()
[//]: # (![reflection questions]&#40;https://codium.ai/images/pr_agent/reflection_questions.png&#41;{width=512})
[//]: # ()
[//]: # (![reflection answers]&#40;https://codium.ai/images/pr_agent/reflection_answers.png&#41;{width=512})
[//]: # ()
[//]: # (![reflection insights]&#40;https://codium.ai/images/pr_agent/reflection_insights.png&#41;{width=512})
## Configuration options
!!! example "General options"
<table>
<tr>
<td><b>num_code_suggestions</b></td>
<td>Number of code suggestions provided by the 'review' tool. Default is 0, meaning no code suggestions will be provided by the `review` tool.</td>
</tr>
<tr>
<td><b>inline_code_comments</b></td>
<td>If set to true, the tool will publish the code suggestions as comments on the code diff. Default is false. Note that you need to set `num_code_suggestions`>0 to get code suggestions </td>
</tr>
<tr>
<td><b>persistent_comment</b></td>
<td>If set to true, the review comment will be persistent, meaning that every new review request will edit the previous one. Default is true.</td>
@ -189,9 +140,9 @@ If enabled, the `review` tool can approve a PR when a specific comment, `/review
!!! tip "Automation"
When you first install Qodo Merge app, the [default mode](../usage-guide/automations_and_usage.md#github-app-automatic-tools-when-a-new-pr-is-opened) for the `review` tool is:
```
pr_commands = ["/review --pr_reviewer.num_code_suggestions=0", ...]
pr_commands = ["/review", ...]
```
Meaning the `review` tool will run automatically on every PR, without providing code suggestions.
Meaning the `review` tool will run automatically on every PR, without any additional configurations.
Edit this field to enable/disable the tool, or to change the configurations used.
!!! tip "Possible labels from the review tool"
@ -249,12 +200,8 @@ If enabled, the `review` tool can approve a PR when a specific comment, `/review
maximal_review_effort = 5
```
[//]: # (!!! tip "Code suggestions")
!!! tip "Code suggestions"
[//]: # ()
[//]: # ( If you set `num_code_suggestions`>0 , the `review` tool will also provide code suggestions.)
The `review` tool previously included a legacy feature for providing code suggestions (controlled by `--pr_reviewer.num_code_suggestion`). This functionality has been deprecated and replaced by the [`improve`](./improve.md) tool, which offers higher quality and more actionable code suggestions.
[//]: # ( )
[//]: # ( Notice If you are interested **only** in the code suggestions, it is recommended to use the [`improve`]&#40;./improve.md&#41; feature instead, since it is a dedicated only to code suggestions, and usually gives better results.)
[//]: # ( Use the `review` tool if you want to get more comprehensive feedback, which includes code suggestions as well.)

View File

@ -49,9 +49,10 @@ It can be invoked automatically from the analyze table, can be accessed by:
/analyze
```
Choose the components you want to find similar code for, and click on the `similar` checkbox.
![analyze similar](https://codium.ai/images/pr_agent/analyze_similar.png){width=768}
If you are looking to search for similar code in the organization's codebase, you can click on the `Organization` checkbox, and it will invoke a new search command just for the organization's codebase.
You can search for similar code either within the organization's codebase or globally, which includes open-source repositories. Each result will include the relevant code components along with their associated license details.
![similar code global](https://codium.ai/images/pr_agent/similar_code_global.png){width=768}

View File

@ -17,3 +17,4 @@ Under the section `pr_update_changelog`, the [configuration file](https://github
- `push_changelog_changes`: whether to push the changes to CHANGELOG.md, or just print them. Default is false (print only).
- `extra_instructions`: Optional extra instructions to the tool. For example: "focus on the changes in the file X. Ignore change in ...
- `add_pr_link`: whether the model should try to add a link to the PR in the changelog. Default is true.
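Taken together, a minimal sketch of the `pr_update_changelog` section using the options above (values are illustrative):
```toml
[pr_update_changelog]
push_changelog_changes = false  # only print the proposed changes (default)
add_pr_link = true              # default is true
extra_instructions = "focus on the changes in the file X"  # optional free-text guidance
```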

View File

@ -1,4 +1,5 @@
## Local repo (CLI)
When running from your locally cloned Qodo Merge repo (CLI), your local configuration file will be used.
Examples of invoking the different tools via the CLI:
@ -35,9 +36,29 @@ This is useful for debugging or experimenting with different tools.
Default is "github".
### CLI Health Check
To verify that Qodo Merge has been configured correctly, you can run this health check command from the repository root:
```bash
python -m tests.health_test.main
```
### Online usage
If the health check passes, you will see the following output at the end of the run:
```
========
Health test passed successfully
========
```
Before running the health check, ensure you have:
- Configured your [LLM provider](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/)
- Added a valid GitHub token to your configuration file
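A minimal sketch of what such a configuration might contain; the key names follow the repository's `pr_agent/settings/.secrets_template.toml` and should be treated as assumptions rather than a definitive setup:
```toml
[openai]
key = "<your LLM provider API key>"

[github]
user_token = "<your GitHub personal access token>"
```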
## Online usage
Online usage means invoking Qodo Merge tools by [comments](https://github.com/Codium-ai/pr-agent/pull/229#issuecomment-1695021901) on a PR.
Commands for invoking the different tools via comments:
@ -58,50 +79,70 @@ For example, if you want to edit the `review` tool configurations, you can run:
Any configuration value in the [configuration file](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml) can be similarly edited. Comment `/config` to see the list of available configurations.
## GitHub App
## Qodo Merge Automatic Feedback
### Disabling all automatic feedback
To easily disable all automatic feedback from Qodo Merge (GitHub App, GitLab Webhook, BitBucket App, Azure DevOps Webhook), set in a configuration file:
```toml
[config]
disable_auto_feedback = true
```
When this parameter is set to `true`, Qodo Merge will not run any automatic tools (like `describe`, `review`, `improve`) when a new PR is opened, or when new code is pushed to an open PR.
### GitHub App
!!! note "Configurations for Qodo Merge Pro"
Qodo Merge Pro for GitHub is an App hosted by CodiumAI, so all the instructions below are also relevant for Qodo Merge Pro users.
Same goes for [GitLab webhook](#gitlab-webhook) and [BitBucket App](#bitbucket-app) sections.
### GitHub app automatic tools when a new PR is opened
#### GitHub app automatic tools when a new PR is opened
The [github_app](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L108) section defines GitHub app specific configurations.
The [github_app](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L220) section defines GitHub app specific configurations.
The configuration parameter `pr_commands` defines the list of tools that will be **run automatically** when a new PR is opened.
```
The configuration parameter `pr_commands` defines the list of tools that will be **run automatically** when a new PR is opened:
```toml
[github_app]
pr_commands = [
"/describe",
"/review",
"/improve --pr_code_suggestions.suggestions_score_threshold=5",
"/improve",
]
```
This means that when a new PR is opened/reopened or marked as ready for review, Qodo Merge will run the `describe`, `review` and `improve` tools.
For the `improve` tool, for example, the `suggestions_score_threshold` parameter will be set to 5 (suggestions below a score of 5 won't be presented)
You can override the default tool parameters by using one of the three options for a [configuration file](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/): **wiki**, **local**, or **global**.
For example, if your local `.pr_agent.toml` file contains:
```
For example, if your configuration file contains:
```toml
[pr_description]
generate_ai_title = true
```
Every time you run the `describe` tool, including automatic runs, the PR title will be generated by the AI.
To cancel the automatic run of all the tools, set:
```
Every time you run the `describe` tool (including automatic runs) the PR title will be generated by the AI.
You can customize configurations specifically for automated runs by using the `--config_path=<value>` parameter.
For instance, to modify the `review` tool settings only for newly opened PRs, use:
```toml
[github_app]
pr_commands = []
pr_commands = [
"/describe",
"/review --pr_reviewer.extra_instructions='focus on the file: ...'",
"/improve",
]
```
### GitHub app automatic tools for push actions (commits to an open PR)
#### GitHub app automatic tools for push actions (commits to an open PR)
In addition to running automatic tools when a PR is opened, the GitHub app can also respond to new code that is pushed to an open PR.
The configuration toggle `handle_push_trigger` can be used to enable this feature.
The configuration parameter `push_commands` defines the list of tools that will be **run automatically** when new code is pushed to the PR.
```
```toml
[github_app]
handle_push_trigger = true
push_commands = [
@ -111,7 +152,7 @@ push_commands = [
```
This means that when new code is pushed to the PR, Qodo Merge will run the `describe` and `review` tools with the specified parameters.
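A minimal sketch of such a push-trigger configuration (the command list is illustrative, based on the tools named above):
```toml
[github_app]
handle_push_trigger = true
push_commands = [
    "/describe",
    "/review",
]
```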
## GitHub Action
### GitHub Action
`GitHub Action` is a different way to trigger Qodo Merge tools, and uses a different configuration mechanism than `GitHub App`.<br>
You can configure settings for `GitHub Action` by adding environment variables under the env section in `.github/workflows/pr_agent.yml` file.
Specifically, start by setting the following environment variables:
@ -122,7 +163,7 @@ Specifically, start by setting the following environment variables:
github_action_config.auto_review: "true" # enable\disable auto review
github_action_config.auto_describe: "true" # enable\disable auto describe
github_action_config.auto_improve: "true" # enable\disable auto improve
github_action_config.pr_actions: ["opened", "reopened", "ready_for_review", "review_requested"]
github_action_config.pr_actions: '["opened", "reopened", "ready_for_review", "review_requested"]'
```
`github_action_config.auto_review`, `github_action_config.auto_describe` and `github_action_config.auto_improve` are used to enable/disable automatic tools that run when a new PR is opened.
If not set, the default configuration is for all three tools to run automatically when a new PR is opened.
@ -137,15 +178,18 @@ The JSON structure is equivalent to the yaml data structure defined in [pr_revie
Note that you can pass additional configuration parameters by adding environment variables to `.github/workflows/pr_agent.yml`, or by using a `.pr_agent.toml` [configuration file](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/#global-configuration-file) in the root of your repo.
For example, you can set an environment variable: `pr_description.publish_labels=false`, or add a `.pr_agent.toml` file with the following content:
```toml
[pr_description]
publish_labels = false
```
to prevent Qodo Merge from publishing labels when running the `describe` tool.
### GitLab Webhook
After setting up a GitLab webhook, to control which commands will run automatically when a new MR is opened, you can set the `pr_commands` parameter in the configuration file, similar to the GitHub App:
```toml
[gitlab]
pr_commands = [
"/describe",
@ -157,7 +201,7 @@ pr_commands = [
In addition to running automatic tools when an MR is opened, the GitLab webhook can also respond to new code that is pushed to an open MR.
The configuration toggle `handle_push_trigger` can be used to enable this feature.
The configuration parameter `push_commands` defines the list of tools that will be **run automatically** when new code is pushed to the MR.
```toml
[gitlab]
handle_push_trigger = true
push_commands = [
@ -168,13 +212,13 @@ push_commands = [
Note that to use the 'handle_push_trigger' feature, you also need to grant the GitLab webhook the "Push events" scope.
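Putting it together, a GitLab section that also handles push events might look like this sketch (the default `push_commands` in `configuration.toml` run `describe` and `review`):
```toml
[gitlab]
url = "https://gitlab.com"
handle_push_trigger = true
push_commands = [
    "/describe",
    "/review",
]
```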
### BitBucket App
Similar to the GitHub app, when running Qodo Merge from the BitBucket App, the default [configuration file](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml) from a pre-built Docker image will be initially loaded.
By uploading a local `.pr_agent.toml` file to the root of the repo's main branch, you can edit and customize any configuration parameter. Note that you need to upload `.pr_agent.toml` prior to creating a PR, in order for the configuration to take effect.
For example, if your local `.pr_agent.toml` file contains:
```toml
[pr_reviewer]
extra_instructions = "Answer in japanese"
```
@ -187,12 +231,12 @@ If you experience a lack of responses from Qodo Merge, you might want to set: `b
This will prevent Qodo Merge from acquiring the full file content, and will only use the diff content. This reduces the number of requests made to BitBucket, at the cost of a small decrease in accuracy, as dynamic context will not be applicable.
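For example, a minimal override in your `.pr_agent.toml` would be:
```toml
[bitbucket_app]
avoid_full_files = true  # rely on diff content only, reducing requests to BitBucket
```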
#### BitBucket Self-Hosted App automatic tools
To control which commands will run automatically when a new PR is opened, set the `pr_commands` parameter in the configuration file. Specifically, set the following values:
```toml
[bitbucket_app]
pr_commands = [
"/review",
@ -203,7 +247,7 @@ Note that we set specifically for bitbucket, we recommend using: `--pr_code_sugg
Since this platform only supports inline code suggestions, we recommend limiting the number of suggestions that are presented.
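One way to do that, as a sketch, is to add a score threshold to the `improve` command in `pr_commands` (the exact flags are up to you):
```toml
[bitbucket_app]
pr_commands = [
    "/review",
    "/improve --pr_code_suggestions.commitable_code_suggestions=true --pr_code_suggestions.suggestions_score_threshold=7",
]
```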
To enable BitBucket app to respond to each **push** to the PR, set (for example):
```toml
[bitbucket_app]
handle_push_trigger = true
push_commands = [
@ -212,10 +256,10 @@ push_commands = [
]
```
### Azure DevOps provider
To use the Azure DevOps provider, use the following settings in configuration.toml:
```toml
[config]
git_provider="azure"
```
@ -234,14 +278,14 @@ org = "https://dev.azure.com/YOUR_ORGANIZATION/"
# pat = "YOUR_PAT_TOKEN" needed only if using PAT for authentication
```
#### Azure DevOps Webhook
To control which commands will run automatically when a new PR is opened, you can set the `pr_commands` parameter in the configuration file, similar to the GitHub App:
```toml
[azure_devops_server]
pr_commands = [
"/describe",
"/review --pr_reviewer.num_code_suggestions=0",
"/review",
"/improve",
]
```

View File

@ -5,7 +5,6 @@ To use a different model than the default (GPT-4), you need to edit in the [conf
```
[config]
model = "..."
fallback_models = ["..."]
```
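For example, to pin specific OpenAI models (these values mirror the defaults in `configuration.toml`):
```toml
[config]
model = "gpt-4o-2024-11-20"
fallback_models = ["gpt-4o-2024-08-06"]
#model_weak = "gpt-4o-mini-2024-07-18"  # optional, a weaker model used for some easier tasks
```
When `model_weak` is set, some lighter tasks use it; otherwise they fall back to the main `model`.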
@ -27,9 +26,8 @@ deployment_id = "" # The deployment name you chose when you deployed the engine
and set in your configuration file:
```
[config]
model="" # the OpenAI model you've deployed on Azure (e.g. gpt-3.5-turbo)
model_turbo="" # the OpenAI model you've deployed on Azure (e.g. gpt-3.5-turbo)
fallback_models=["..."] # the OpenAI model you've deployed on Azure (e.g. gpt-3.5-turbo)
model="" # the OpenAI model you've deployed on Azure (e.g. gpt-4o)
fallback_models=["..."]
```
### Hugging Face
@ -52,7 +50,6 @@ MAX_TOKENS={
[config] # in configuration.toml
model = "ollama/llama2"
model_turbo = "ollama/llama2"
fallback_models=["ollama/llama2"]
[ollama] # in .secrets.toml
@ -76,7 +73,6 @@ MAX_TOKENS={
}
[config] # in configuration.toml
model = "huggingface/meta-llama/Llama-2-7b-chat-hf"
model_turbo = "huggingface/meta-llama/Llama-2-7b-chat-hf"
fallback_models=["huggingface/meta-llama/Llama-2-7b-chat-hf"]
[huggingface] # in .secrets.toml
@ -91,7 +87,6 @@ To use Llama2 model with Replicate, for example, set:
```
[config] # in configuration.toml
model = "replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1"
model_turbo = "replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1"
fallback_models=["replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1"]
[replicate] # in .secrets.toml
key = ...
@ -107,7 +102,6 @@ To use Llama3 model with Groq, for example, set:
```
[config] # in configuration.toml
model = "llama3-70b-8192"
model_turbo = "llama3-70b-8192"
fallback_models = ["groq/llama3-70b-8192"]
[groq] # in .secrets.toml
key = ... # your Groq api key
@ -121,7 +115,6 @@ To use Google's Vertex AI platform and its associated models (chat-bison/codecha
```
[config] # in configuration.toml
model = "vertex_ai/codechat-bison"
model_turbo = "vertex_ai/codechat-bison"
fallback_models="vertex_ai/codechat-bison"
[vertexai] # in .secrets.toml
@ -140,7 +133,6 @@ To use [Google AI Studio](https://aistudio.google.com/) models, set the relevant
```toml
[config] # in configuration.toml
model="google_ai_studio/gemini-1.5-flash"
model_turbo="google_ai_studio/gemini-1.5-flash"
fallback_models=["google_ai_studio/gemini-1.5-flash"]
[google_ai_studio] # in .secrets.toml
@ -156,7 +148,6 @@ To use Anthropic models, set the relevant models in the configuration section of
```
[config]
model="anthropic/claude-3-opus-20240229"
model_turbo="anthropic/claude-3-opus-20240229"
fallback_models=["anthropic/claude-3-opus-20240229"]
```
@ -173,7 +164,6 @@ To use Amazon Bedrock and its foundational models, add the below configuration:
```
[config] # in configuration.toml
model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
model_turbo="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
fallback_models=["bedrock/anthropic.claude-v2:1"]
```
@ -195,7 +185,6 @@ If the relevant model doesn't appear [here](https://github.com/Codium-ai/pr-agen
```
[config]
model="custom_model_name"
model_turbo="custom_model_name"
fallback_models=["custom_model_name"]
```
(2) Set the maximal tokens for the model:
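As a sketch, one way to cap the context window for an unlisted model is a max-tokens setting under `[config]`; the `custom_model_max_tokens` key below is an assumption, so check `configuration.toml` for the exact name:
```toml
[config]
model = "custom_model_name"
fallback_models = ["custom_model_name"]
custom_model_max_tokens = 8192  # assumed key; set this to your model's real context window
```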

View File

@ -13,7 +13,6 @@ from pr_agent.tools.pr_config import PRConfig
from pr_agent.tools.pr_description import PRDescription
from pr_agent.tools.pr_generate_labels import PRGenerateLabels
from pr_agent.tools.pr_help_message import PRHelpMessage
from pr_agent.tools.pr_information_from_user import PRInformationFromUser
from pr_agent.tools.pr_line_questions import PR_LineQuestions
from pr_agent.tools.pr_questions import PRQuestions
from pr_agent.tools.pr_reviewer import PRReviewer
@ -25,8 +24,6 @@ command2class = {
"answer": PRReviewer,
"review": PRReviewer,
"review_pr": PRReviewer,
"reflect": PRInformationFromUser,
"reflect_and_review": PRInformationFromUser,
"describe": PRDescription,
"describe_pr": PRDescription,
"improve": PRCodeSuggestions,
@ -76,12 +73,10 @@ class PRAgent:
action = action.lstrip("/").lower()
if action not in command2class:
get_logger().debug(f"Unknown command: {action}")
get_logger().error(f"Unknown command: {action}")
return False
with get_logger().contextualize(command=action, pr_url=pr_url):
get_logger().info("PR-Agent request handler started", analytics=True)
if action == "reflect_and_review":
get_settings().pr_reviewer.ask_and_reflect = True
if action == "answer":
if notify:
notify()

View File

@ -24,6 +24,8 @@ MAX_TOKENS = {
'o1-mini-2024-09-12': 128000, # 128K, but may be limited by config.max_model_tokens
'o1-preview': 128000, # 128K, but may be limited by config.max_model_tokens
'o1-preview-2024-09-12': 128000, # 128K, but may be limited by config.max_model_tokens
'o1-2024-12-17': 204800, # 200K, but may be limited by config.max_model_tokens
'o1': 204800, # 200K, but may be limited by config.max_model_tokens
'claude-instant-1': 100000,
'claude-2': 100000,
'command-nightly': 4096,
@ -42,6 +44,7 @@ MAX_TOKENS = {
'vertex_ai/gemma2': 8200,
'gemini/gemini-1.5-pro': 1048576,
'gemini/gemini-1.5-flash': 1048576,
'gemini/gemini-2.0-flash-exp': 1048576,
'codechat-bison': 6144,
'codechat-bison-32k': 32000,
'anthropic.claude-instant-v1': 100000,
@ -59,6 +62,7 @@ MAX_TOKENS = {
'bedrock/anthropic.claude-3-5-haiku-20241022-v1:0': 100000,
'bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0': 100000,
'bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0': 100000,
"bedrock/us.anthropic.claude-3-5-sonnet-20241022-v2:0": 100000,
'claude-3-5-sonnet': 100000,
'groq/llama3-8b-8192': 8192,
'groq/llama3-70b-8192': 8192,

View File

@ -7,6 +7,7 @@ from litellm import acompletion
from tenacity import retry, retry_if_exception_type, stop_after_attempt
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
from pr_agent.algo.utils import get_version
from pr_agent.config_loader import get_settings
from pr_agent.log import get_logger
@ -132,7 +133,7 @@ class LiteLLMAIHandler(BaseAiHandler):
if "langfuse" in callbacks:
metadata.update({
"trace_name": command,
"tags": [git_provider, command],
"tags": [git_provider, command, f'version:{get_version()}'],
"trace_metadata": {
"command": command,
"pr_url": pr_url,
@ -141,7 +142,7 @@ class LiteLLMAIHandler(BaseAiHandler):
if "langsmith" in callbacks:
metadata.update({
"run_name": command,
"tags": [git_provider, command],
"tags": [git_provider, command, f'version:{get_version()}'],
"extra": {
"metadata": {
"command": command,
@ -192,8 +193,8 @@ class LiteLLMAIHandler(BaseAiHandler):
messages[1]["content"] = [{"type": "text", "text": messages[1]["content"]},
{"type": "image_url", "image_url": {"url": img_path}}]
# Currently O1 does not support separate system and user prompts
O1_MODEL_PREFIX = 'o1-'
# Currently, model OpenAI o1 series does not support a separate system and user prompts
O1_MODEL_PREFIX = 'o1'
model_type = model.split('/')[-1] if '/' in model else model
if model_type.startswith(O1_MODEL_PREFIX):
user = f"{system}\n\n\n{user}"

View File

@ -11,7 +11,7 @@ from pr_agent.algo.git_patch_processing import (
from pr_agent.algo.language_handler import sort_files_by_main_languages
from pr_agent.algo.token_handler import TokenHandler
from pr_agent.algo.types import EDIT_TYPE, FilePatchInfo
from pr_agent.algo.utils import ModelType, clip_tokens, get_max_tokens
from pr_agent.algo.utils import ModelType, clip_tokens, get_max_tokens, get_weak_model
from pr_agent.config_loader import get_settings
from pr_agent.git_providers.git_provider import GitProvider
from pr_agent.log import get_logger
@ -354,8 +354,8 @@ async def retry_with_fallback_models(f: Callable, model_type: ModelType = ModelT
def _get_all_models(model_type: ModelType = ModelType.REGULAR) -> List[str]:
if model_type == ModelType.TURBO:
model = get_settings().config.model_turbo
if model_type == ModelType.WEAK:
model = get_weak_model()
else:
model = get_settings().config.model
fallback_models = get_settings().config.fallback_models

View File

@ -1,5 +1,6 @@
from dataclasses import dataclass
from enum import Enum
from typing import Optional
class EDIT_TYPE(Enum):
@ -21,4 +22,5 @@ class FilePatchInfo:
old_filename: str = None
num_plus_lines: int = -1
num_minus_lines: int = -1
language: Optional[str] = None
ai_file_summary: str = None

View File

@ -7,11 +7,13 @@ import html
import json
import os
import re
import sys
import textwrap
import time
import traceback
from datetime import datetime
from enum import Enum
from importlib.metadata import PackageNotFoundError, version
from typing import Any, List, Tuple
import html2text
@ -27,6 +29,12 @@ from pr_agent.config_loader import get_settings, global_settings
from pr_agent.log import get_logger
def get_weak_model() -> str:
if get_settings().get("config.model_weak"):
return get_settings().config.model_weak
return get_settings().config.model
class Range(BaseModel):
line_start: int # should be 0-indexed
line_end: int
@ -35,8 +43,7 @@ class Range(BaseModel):
class ModelType(str, Enum):
REGULAR = "regular"
TURBO = "turbo"
WEAK = "weak"
class PRReviewHeader(str, Enum):
REGULAR = "## PR Reviewer Guide"
@ -97,7 +104,8 @@ def unique_strings(input_list: List[str]) -> List[str]:
def convert_to_markdown_v2(output_data: dict,
gfm_supported: bool = True,
incremental_review=None,
git_provider=None) -> str:
git_provider=None,
files=None) -> str:
"""
Convert a dictionary of data into markdown format.
Args:
@ -221,9 +229,13 @@ def convert_to_markdown_v2(output_data: dict,
continue
relevant_file = issue.get('relevant_file', '').strip()
issue_header = issue.get('issue_header', '').strip()
if issue_header.lower() == 'possible bug':
issue_header = 'Possible Issue' # Make the header less frightening
issue_content = issue.get('issue_content', '').strip()
start_line = int(str(issue.get('start_line', 0)).strip())
end_line = int(str(issue.get('end_line', 0)).strip())
relevant_lines_str = extract_relevant_lines_str(end_line, files, relevant_file, start_line, dedent=True)
if git_provider:
reference_link = git_provider.get_line_link(relevant_file, start_line, end_line)
else:
@ -231,7 +243,10 @@ def convert_to_markdown_v2(output_data: dict,
if gfm_supported:
if reference_link is not None and len(reference_link) > 0:
issue_str = f"<a href='{reference_link}'><strong>{issue_header}</strong></a><br>{issue_content}"
if relevant_lines_str:
issue_str = f"<details><summary><a href='{reference_link}'><strong>{issue_header}</strong></a>\n\n{issue_content}</summary>\n\n{relevant_lines_str}\n\n</details>"
else:
issue_str = f"<a href='{reference_link}'><strong>{issue_header}</strong></a><br>{issue_content}"
else:
issue_str = f"<strong>{issue_header}</strong><br>{issue_content}"
else:
@ -255,24 +270,31 @@ def convert_to_markdown_v2(output_data: dict,
if gfm_supported:
markdown_text += "</table>\n"
if 'code_feedback' in output_data:
if gfm_supported:
markdown_text += f"\n\n"
markdown_text += f"<details><summary> <strong>Code feedback:</strong></summary>\n\n"
markdown_text += "<hr>"
else:
markdown_text += f"\n\n### Code feedback:\n\n"
for i, value in enumerate(output_data['code_feedback']):
if value is None or value == '' or value == {} or value == []:
continue
markdown_text += parse_code_suggestion(value, i, gfm_supported)+"\n\n"
if markdown_text.endswith('<hr>'):
markdown_text= markdown_text[:-4]
if gfm_supported:
markdown_text += f"</details>"
return markdown_text
def extract_relevant_lines_str(end_line, files, relevant_file, start_line, dedent=False):
try:
relevant_lines_str = ""
if files:
files = set_file_languages(files)
for file in files:
if file.filename.strip() == relevant_file:
if not file.head_file:
get_logger().warning(f"No content found in file: {file.filename}")
return ""
relevant_file_lines = file.head_file.splitlines()
relevant_lines_str = "\n".join(relevant_file_lines[start_line - 1:end_line])
if dedent and relevant_lines_str:
# Remove the longest leading string of spaces and tabs common to all lines.
relevant_lines_str = textwrap.dedent(relevant_lines_str)
relevant_lines_str = f"```{file.language}\n{relevant_lines_str}\n```"
break
return relevant_lines_str
except Exception as e:
get_logger().exception(f"Failed to extract relevant lines: {e}")
return ""
def ticket_markdown_logic(emoji, markdown_text, value, gfm_supported) -> str:
ticket_compliance_str = ""
@ -1106,3 +1128,48 @@ def process_description(description_full: str) -> Tuple[str, List]:
get_logger().exception(f"Failed to process description: {e}")
return base_description_str, files
def get_version() -> str:
# First check pyproject.toml if running directly out of repository
if os.path.exists("pyproject.toml"):
if sys.version_info >= (3, 11):
import tomllib
with open("pyproject.toml", "rb") as f:
data = tomllib.load(f)
if "project" in data and "version" in data["project"]:
return data["project"]["version"]
else:
get_logger().warning("Version not found in pyproject.toml")
else:
get_logger().warning("Unable to determine local version from pyproject.toml")
# Otherwise get the installed pip package version
try:
return version('pr-agent')
except PackageNotFoundError:
get_logger().warning("Unable to find package named 'pr-agent'")
return "unknown"
def set_file_languages(diff_files) -> List[FilePatchInfo]:
try:
# if the language is already set, do not change it
if hasattr(diff_files[0], 'language') and diff_files[0].language:
return diff_files
# map file extensions to programming languages
language_extension_map_org = get_settings().language_extension_map_org
extension_to_language = {}
for language, extensions in language_extension_map_org.items():
for ext in extensions:
extension_to_language[ext] = language
for file in diff_files:
extension_s = '.' + file.filename.rsplit('.')[-1]
language_name = "txt"
if extension_s and (extension_s in extension_to_language):
language_name = extension_to_language[extension_s]
file.language = language_name.lower()
except Exception as e:
get_logger().exception(f"Failed to set file languages: {e}")
return diff_files

View File

@ -3,6 +3,7 @@ import asyncio
import os
from pr_agent.agent.pr_agent import PRAgent, commands
from pr_agent.algo.utils import get_version
from pr_agent.config_loader import get_settings
from pr_agent.log import get_logger, setup_logger
@ -45,6 +46,7 @@ def set_parser():
To edit any configuration parameter from 'configuration.toml', just add -config_path=<value>.
For example: 'python cli.py --pr_url=... review --pr_reviewer.extra_instructions="focus on the file: ..."'
""")
parser.add_argument('--version', action='version', version=f'pr-agent {get_version()}')
parser.add_argument('--pr_url', type=str, help='The URL of the PR to review', default=None)
parser.add_argument('--issue_url', type=str, help='The URL of the Issue to review', default=None)
parser.add_argument('command', type=str, help='The', choices=commands, default='review')

View File

@ -19,7 +19,7 @@ from ..algo.language_handler import is_valid_file
from ..algo.types import EDIT_TYPE
from ..algo.utils import (PRReviewHeader, Range, clip_tokens,
find_line_number_of_relevant_line_in_file,
load_large_diff)
load_large_diff, set_file_languages)
from ..config_loader import get_settings
from ..log import get_logger
from ..servers.utils import RateLimitExceeded
@ -889,18 +889,7 @@ class GithubProvider(GitProvider):
RE_HUNK_HEADER = re.compile(
r"^@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@[ ]?(.*)")
# map file extensions to programming languages
language_extension_map_org = get_settings().language_extension_map_org
extension_to_language = {}
for language, extensions in language_extension_map_org.items():
for ext in extensions:
extension_to_language[ext] = language
for file in diff_files:
extension_s = '.' + file.filename.rsplit('.')[-1]
language_name = "txt"
if extension_s and (extension_s in extension_to_language):
language_name = extension_to_language[extension_s]
file.language = language_name.lower()
diff_files = set_file_languages(diff_files)
for suggestion in code_suggestions_copy:
try:

View File

@ -99,5 +99,5 @@ def set_claude_model():
"""
model_claude = "bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0"
get_settings().set('config.model', model_claude)
get_settings().set('config.model_turbo', model_claude)
get_settings().set('config.model_weak', model_claude)
get_settings().set('config.fallback_models', [model_claude])

View File

@ -64,6 +64,9 @@ def authorize(credentials: HTTPBasicCredentials = Depends(security)):
async def _perform_commands_azure(commands_conf: str, agent: PRAgent, api_url: str, log_context: dict):
apply_repo_settings(api_url)
if commands_conf == "pr_commands" and get_settings().config.disable_auto_feedback: # auto commands for PR, and auto feedback is disabled
get_logger().info(f"Auto feedback is disabled, skipping auto commands for PR {api_url=}", **log_context)
return
commands = get_settings().get(f"azure_devops_server.{commands_conf}")
get_settings().set("config.is_auto_command", True)
for command in commands:

View File

@ -77,6 +77,9 @@ async def handle_manifest(request: Request, response: Response):
async def _perform_commands_bitbucket(commands_conf: str, agent: PRAgent, api_url: str, log_context: dict, data: dict):
apply_repo_settings(api_url)
if commands_conf == "pr_commands" and get_settings().config.disable_auto_feedback: # auto commands for PR, and auto feedback is disabled
get_logger().info(f"Auto feedback is disabled, skipping auto commands for PR {api_url=}")
return
if data.get("event", "") == "pullrequest:created":
if not should_process_pr_logic(data):
return

View File

@ -72,6 +72,11 @@ async def handle_webhook(background_tasks: BackgroundTasks, request: Request):
commands_to_run = []
if data["eventKey"] == "pr:opened":
apply_repo_settings(pr_url)
if get_settings().config.disable_auto_feedback: # auto commands for PR, and auto feedback is disabled
get_logger().info(f"Auto feedback is disabled, skipping auto commands for PR {pr_url}", **log_context)
return
get_settings().set("config.is_auto_command", True)
commands_to_run.extend(_get_commands_list_from_settings('BITBUCKET_SERVER.PR_COMMANDS'))
elif data["eventKey"] == "pr:comment:added":
commands_to_run.append(data["comment"]["text"])

View File

@ -374,6 +374,9 @@ def _check_pull_request_event(action: str, body: dict, log_context: dict) -> Tup
async def _perform_auto_commands_github(commands_conf: str, agent: PRAgent, body: dict, api_url: str,
log_context: dict):
apply_repo_settings(api_url)
if commands_conf == "pr_commands" and get_settings().config.disable_auto_feedback: # auto commands for PR, and auto feedback is disabled
get_logger().info(f"Auto feedback is disabled, skipping auto commands for PR {api_url=}")
return
if not should_process_pr_logic(body): # Here we already updated the configuration with the repo settings
return {}
commands = get_settings().get(f"github_app.{commands_conf}")

View File

@ -84,6 +84,7 @@ async def is_valid_notification(notification, headers, handled_ids, session, use
return False, handled_ids
async with session.get(latest_comment, headers=headers) as comment_response:
check_prev_comments = False
user_tag = "@" + user_id
if comment_response.status == 200:
comment = await comment_response.json()
if 'id' in comment:
@ -101,7 +102,6 @@ async def is_valid_notification(notification, headers, handled_ids, session, use
get_logger().debug(f"no comment_body")
check_prev_comments = True
else:
user_tag = "@" + user_id
if user_tag not in comment_body:
get_logger().debug(f"user_tag not in comment_body")
check_prev_comments = True

View File

@ -61,6 +61,9 @@ async def handle_request(api_url: str, body: str, log_context: dict, sender_id:
async def _perform_commands_gitlab(commands_conf: str, agent: PRAgent, api_url: str,
log_context: dict, data: dict):
apply_repo_settings(api_url)
if commands_conf == "pr_commands" and get_settings().config.disable_auto_feedback: # auto commands for PR, and auto feedback is disabled
get_logger().info(f"Auto feedback is disabled, skipping auto commands for PR {api_url=}", **log_context)
return
if not should_process_pr_logic(data): # Here we already updated the configurations
return
commands = get_settings().get(f"gitlab.{commands_conf}", {})

View File

@ -1,8 +1,8 @@
[config]
# models
model="gpt-4-turbo-2024-04-09"
model_turbo="gpt-4o-2024-11-20"
model="gpt-4o-2024-11-20"
fallback_models=["gpt-4o-2024-08-06"]
#model_weak="gpt-4o-mini-2024-07-18" # optional, a weaker model to use for some easier tasks
# CLI
git_provider="github"
publish_output=true
@ -14,6 +14,7 @@ use_extra_bad_extensions=false
use_wiki_settings_file=true
use_repo_settings_file=true
use_global_settings_file=true
disable_auto_feedback = false
ai_timeout=120 # 2minutes
skip_keys = []
# token limits
@ -54,10 +55,6 @@ require_can_be_split_review=false
require_security_review=true
require_ticket_analysis_review=true
# general options
num_code_suggestions=0
inline_code_comments = false
ask_and_reflect=false
#automatic_review=true
persistent_comment=true
extra_instructions = ""
final_update_message = true
@ -114,7 +111,6 @@ dual_publishing_score_threshold=-1 # -1 to disable, [0-10] to set the threshold
focus_only_on_problems=true
#
extra_instructions = ""
rank_suggestions = false
enable_help_text=false
enable_chat_text=false
enable_intro_text=true
@ -129,7 +125,7 @@ auto_extended_mode=true
num_code_suggestions_per_chunk=4
max_number_of_calls = 3
parallel_calls = true
rank_extended_suggestions = false
final_clip_factor = 0.8
# self-review checkbox
demand_code_suggestions_self_review=false # add a checkbox for the author to self-review the code suggestions
@ -139,6 +135,7 @@ fold_suggestions_on_self_review=true # Pro feature. if true, the code suggestion
# Suggestion impact 💎
publish_post_process_suggestion_impact=true
wiki_page_accepted_suggestions=true
allow_thumbs_up_down=false
[pr_custom_prompt] # /custom_prompt #
prompt = """\
@ -162,6 +159,7 @@ class_name = "" # in case there are several methods with the same name in
[pr_update_changelog] # /update_changelog #
push_changelog_changes=false
extra_instructions = ""
add_pr_link=true
[pr_analyze] # /analyze #
enable_help_text=true
@ -218,7 +216,7 @@ override_deployment_type = true
handle_pr_actions = ['opened', 'reopened', 'ready_for_review']
pr_commands = [
"/describe --pr_description.final_update_message=false",
"/review --pr_reviewer.num_code_suggestions=0",
"/review",
"/improve",
]
# settings for "pull_request" event with "synchronize" action - used to detect and handle push triggers for new commits
@ -230,27 +228,27 @@ push_trigger_pending_tasks_backlog = true
push_trigger_pending_tasks_ttl = 300
push_commands = [
"/describe",
"/review --pr_reviewer.num_code_suggestions=0",
"/review",
]
[gitlab]
url = "https://gitlab.com"
pr_commands = [
"/describe --pr_description.final_update_message=false",
"/review --pr_reviewer.num_code_suggestions=0",
"/review",
"/improve",
]
handle_push_trigger = false
push_commands = [
"/describe",
"/review --pr_reviewer.num_code_suggestions=0",
"/review",
]
[bitbucket_app]
pr_commands = [
"/describe --pr_description.final_update_message=false",
"/review --pr_reviewer.num_code_suggestions=0",
"/improve --pr_code_suggestions.commitable_code_suggestions=true --pr_code_suggestions.suggestions_score_threshold=7",
"/review",
"/improve --pr_code_suggestions.commitable_code_suggestions=true",
]
avoid_full_files = false
@ -275,8 +273,8 @@ avoid_full_files = false
url = ""
pr_commands = [
"/describe --pr_description.final_update_message=false",
"/review --pr_reviewer.num_code_suggestions=0",
"/improve --pr_code_suggestions.commitable_code_suggestions=true --pr_code_suggestions.suggestions_score_threshold=7",
"/review",
"/improve --pr_code_suggestions.commitable_code_suggestions=true",
]
[litellm]

View File

@ -1,10 +1,6 @@
[pr_review_prompt]
system="""You are PR-Reviewer, a language model designed to review a Git Pull Request (PR).
{%- if num_code_suggestions > 0 %}
Your task is to provide constructive and concise feedback for the PR, and also provide meaningful code suggestions.
{%- else %}
Your task is to provide constructive and concise feedback for the PR.
{%- endif %}
The review should focus on new code added in the PR code diff (lines starting with '+')
@ -49,16 +45,6 @@ __new hunk__
{%- endif %}
- When quoting variables or names from the code, use backticks (`) instead of single quote (').
{%- if num_code_suggestions > 0 %}
Code suggestions guidelines:
- Provide up to {{ num_code_suggestions }} code suggestions. Try to provide diverse and insightful suggestions.
- Focus on important suggestions like fixing code problems, issues and bugs. As a second priority, provide suggestions for meaningful code improvements, like performance, vulnerability, modularity, and best practices.
- Avoid making suggestions that have already been implemented in the PR code. For example, if you want to add logs, or change a variable to const, or anything else, make sure it isn't already in the PR code.
- Don't suggest to add docstring, type hints, or comments.
- Suggestions should address the new code added in the PR diff (lines starting with '+')
{%- endif %}
{%- if extra_instructions %}
@ -80,8 +66,8 @@ class SubPR(BaseModel):
class KeyIssuesComponentLink(BaseModel):
relevant_file: str = Field(description="The full file path of the relevant file")
issue_header: str = Field(description="One or two word title for the the issue. For example: 'Possible Bug', 'Performance Issue', 'Code Smell', etc.")
issue_content: str = Field(description="A short and concise summary of what should be further inspected and validated during the PR review process for this issue. Don't state line numbers here")
issue_header: str = Field(description="One or two word title for the the issue. For example: 'Possible Bug', etc.")
issue_content: str = Field(description="A short and concise summary of what should be further inspected and validated during the PR review process for this issue. Do not reference line numbers in this field.")
start_line: int = Field(description="The start line that corresponds to this issue in the relevant file")
end_line: int = Field(description="The end line that corresponds to this issue in the relevant file")
@ -111,32 +97,16 @@ class Review(BaseModel):
{%- if question_str %}
insights_from_user_answers: str = Field(description="shortly summarize the insights you gained from the user's answers to the questions")
{%- endif %}
key_issues_to_review: List[KeyIssuesComponentLink] = Field("A diverse list of bugs, issue or major performance concerns introduced in this PR, which the PR reviewer should further investigate")
key_issues_to_review: List[KeyIssuesComponentLink] = Field("A short and diverse list (0-3 issues) of high-priority bugs, problems or performance concerns introduced in the PR code, which the PR reviewer should further focus on and validate during the review process.")
{%- if require_security_review %}
security_concerns: str = Field(description="Does this PR code introduce possible vulnerabilities such as exposure of sensitive information (e.g., API keys, secrets, passwords), or security concerns like SQL injection, XSS, CSRF, and others ? Answer 'No' (without explaining why) if there are no possible issues. If there are security concerns or issues, start your answer with a short header, such as: 'Sensitive information exposure: ...', 'SQL injection: ...' etc. Explain your answer. Be specific and give examples if possible")
{%- endif %}
{%- if require_can_be_split_review %}
can_be_split: List[SubPR] = Field(min_items=0, max_items=3, description="Can this PR, which contains {{ num_pr_files }} changed files in total, be divided into smaller sub-PRs with distinct tasks that can be reviewed and merged independently, regardless of the order ? Make sure that the sub-PRs are indeed independent, with no code dependencies between them, and that each sub-PR represent a meaningful independent task. Output an empty list if the PR code does not need to be split.")
{%- endif %}
{%- if num_code_suggestions > 0 %}
class CodeSuggestion(BaseModel):
relevant_file: str = Field(description="The full file path of the relevant file")
language: str = Field(description="The programming language of the relevant file")
suggestion: str = Field(description="a concrete suggestion for meaningfully improving the new PR code. Also describe how, specifically, the suggestion can be applied to new PR code. Add tags with importance measure that matches each suggestion ('important' or 'medium'). Do not make suggestions for updating or adding docstrings, renaming PR title and description, or linter like.")
relevant_line: str = Field(description="a single code line taken from the relevant file, to which the suggestion applies. The code line should start with a '+'. Make sure to output the line exactly as it appears in the relevant file")
{%- endif %}
{%- if num_code_suggestions > 0 %}
class PRReview(BaseModel):
review: Review
code_feedback: List[CodeSuggestion]
{%- else %}
class PRReview(BaseModel):
review: Review
{%- endif %}
=====
@ -185,18 +155,6 @@ review:
title: ...
- ...
{%- endif %}
{%- if num_code_suggestions > 0 %}
code_feedback:
- relevant_file: |
directory/xxx.py
language: |
python
suggestion: |
xxx [important]
relevant_line: |
xxx
{%- endif %}
```
Answer should be a valid YAML, and nothing else. Each YAML output MUST be after a newline, with proper indent, and block scalar indicator ('|')

View File

@ -1,9 +1,14 @@
[pr_update_changelog_prompt]
system="""You are a language model called PR-Changelog-Updater.
Your task is to update the CHANGELOG.md file of the project, to shortly summarize important changes introduced in this PR (the '+' lines).
- The output should match the existing CHANGELOG.md format, style and conventions, so it will look like a natural part of the file. For example, if previous changes were summarized in a single line, you should do the same.
- Don't repeat previous changes. Generate only new content, that is not already in the CHANGELOG.md file.
- Be general, and avoid specific details, files, etc. The output should be minimal, no more than 3-4 short lines. Ignore non-relevant subsections.
Your task is to add a brief summary of this PR's changes to CHANGELOG.md file of the project:
- Follow the file's existing format and style conventions like dates, section titles, etc.
- Only add new changes (don't repeat existing entries)
- Be general, and avoid specific details, files, etc. The output should be minimal, no more than 3-4 short lines.
- Write only the new content to be added to CHANGELOG.md, without any introduction or summary. The content should appear as if it's a natural part of the existing file.
{%- if pr_link %}
- If relevant, convert the changelog main header into a clickable link using the PR URL '{{ pr_link }}'. Format: header [*][pr_link]
{%- endif %}
{%- if extra_instructions %}
@ -47,16 +52,19 @@ The PR Git Diff:
{{ diff|trim }}
======
Current date:
```
{{today}}
```
The current CHANGELOG.md:
The current 'CHANGELOG.md' file
======
{{ changelog_file_str }}
======
Response:
```markdown
"""

View File

@ -21,7 +21,7 @@ from pr_agent.config_loader import get_settings
from pr_agent.git_providers import (AzureDevopsProvider, GithubProvider,
GitLabProvider, get_git_provider,
get_git_provider_with_context)
from pr_agent.git_providers.git_provider import get_main_pr_language
from pr_agent.git_providers.git_provider import get_main_pr_language, GitProvider
from pr_agent.log import get_logger
from pr_agent.servers.help import HelpMessage
from pr_agent.tools.pr_description import insert_br_after_x_chars
@ -103,6 +103,8 @@ class PRCodeSuggestions:
relevant_configs = {'pr_code_suggestions': dict(get_settings().pr_code_suggestions),
'config': dict(get_settings().config)}
get_logger().debug("Relevant configs", artifacts=relevant_configs)
# publish "Preparing suggestions..." comments
if (get_settings().config.publish_output and get_settings().config.publish_output_progress and
not get_settings().config.get('is_auto_command', False)):
if self.git_provider.is_supported("gfm_markdown"):
@ -110,33 +112,26 @@ class PRCodeSuggestions:
else:
self.git_provider.publish_comment("Preparing suggestions...", is_temporary=True)
# call the model to get the suggestions, and self-reflect on them
if not self.is_extended:
data = await retry_with_fallback_models(self._prepare_prediction)
data = await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.REGULAR)
else:
data = await retry_with_fallback_models(self._prepare_prediction_extended)
data = await retry_with_fallback_models(self._prepare_prediction_extended, model_type=ModelType.REGULAR)
if not data:
data = {"code_suggestions": []}
self.data = data
# Handle the case where the PR has no suggestions
if (data is None or 'code_suggestions' not in data or not data['code_suggestions']):
pr_body = "## PR Code Suggestions ✨\n\nNo code suggestions found for the PR."
get_logger().warning('No code suggestions found for the PR.')
if get_settings().config.publish_output and get_settings().config.publish_output_no_suggestions:
get_logger().debug(f"PR output", artifact=pr_body)
if self.progress_response:
self.git_provider.edit_comment(self.progress_response, body=pr_body)
else:
self.git_provider.publish_comment(pr_body)
else:
get_settings().data = {"artifact": ""}
await self.publish_no_suggestions()
return
if (not self.is_extended and get_settings().pr_code_suggestions.rank_suggestions) or \
(self.is_extended and get_settings().pr_code_suggestions.rank_extended_suggestions):
get_logger().info('Ranking Suggestions...')
data['code_suggestions'] = await self.rank_suggestions(data['code_suggestions'])
# publish the suggestions
if get_settings().config.publish_output:
# If a temporary comment was published, remove it
self.git_provider.remove_initial_comment()
# Publish table summarized suggestions
if ((not get_settings().pr_code_suggestions.commitable_code_suggestions) and
self.git_provider.is_supported("gfm_markdown")):
@ -146,10 +141,7 @@ class PRCodeSuggestions:
# require self-review
if get_settings().pr_code_suggestions.demand_code_suggestions_self_review:
text = get_settings().pr_code_suggestions.code_suggestions_self_review_text
pr_body += f"\n\n- [ ] {text}"
if get_settings().pr_code_suggestions.approve_pr_on_self_review:
pr_body += ' <!-- approve pr self-review -->'
pr_body = await self.add_self_review_text(pr_body)
# add usage guide
if (get_settings().pr_code_suggestions.enable_chat_text and get_settings().config.is_auto_command
@ -165,13 +157,13 @@ class PRCodeSuggestions:
pr_body += show_relevant_configurations(relevant_section='pr_code_suggestions')
# publish the PR comment
if get_settings().pr_code_suggestions.persistent_comment:
final_update_message = False
self.publish_persistent_comment_with_history(pr_body,
if get_settings().pr_code_suggestions.persistent_comment: # true by default
self.publish_persistent_comment_with_history(self.git_provider,
pr_body,
initial_header="## PR Code Suggestions ✨",
update_header=True,
name="suggestions",
final_update_message=final_update_message,
final_update_message=False,
max_previous_comments=get_settings().pr_code_suggestions.max_history_len,
progress_response=self.progress_response)
else:
@ -182,29 +174,15 @@ class PRCodeSuggestions:
# dual publishing mode
if int(get_settings().pr_code_suggestions.dual_publishing_score_threshold) > 0:
data_above_threshold = {'code_suggestions': []}
try:
for suggestion in data['code_suggestions']:
if int(suggestion.get('score', 0)) >= int(get_settings().pr_code_suggestions.dual_publishing_score_threshold) \
and suggestion.get('improved_code'):
data_above_threshold['code_suggestions'].append(suggestion)
if not data_above_threshold['code_suggestions'][-1]['existing_code']:
get_logger().info(f'Identical existing and improved code for dual publishing found')
data_above_threshold['code_suggestions'][-1]['existing_code'] = suggestion[
'improved_code']
if data_above_threshold['code_suggestions']:
get_logger().info(
f"Publishing {len(data_above_threshold['code_suggestions'])} suggestions in dual publishing mode")
self.push_inline_code_suggestions(data_above_threshold)
except Exception as e:
get_logger().error(f"Failed to publish dual publishing suggestions, error: {e}")
await self.dual_publishing(data)
else:
self.push_inline_code_suggestions(data)
await self.push_inline_code_suggestions(data)
if self.progress_response:
self.git_provider.remove_comment(self.progress_response)
else:
get_logger().info('Code suggestions generated for PR, but not published since publish_output is False.')
get_settings().data = {"artifact": data}
pr_body = self.generate_summarized_suggestions(data)
get_settings().data = {"artifact": pr_body}
return
except Exception as e:
get_logger().error(f"Failed to generate code suggestions for PR, error: {e}",
@ -217,47 +195,108 @@ class PRCodeSuggestions:
self.git_provider.remove_initial_comment()
self.git_provider.publish_comment(f"Failed to generate code suggestions for PR")
except Exception as e:
pass
get_logger().exception(f"Failed to update persistent review, error: {e}")
def publish_persistent_comment_with_history(self, pr_comment: str,
async def add_self_review_text(self, pr_body):
text = get_settings().pr_code_suggestions.code_suggestions_self_review_text
pr_body += f"\n\n- [ ] {text}"
approve_pr_on_self_review = get_settings().pr_code_suggestions.approve_pr_on_self_review
fold_suggestions_on_self_review = get_settings().pr_code_suggestions.fold_suggestions_on_self_review
if approve_pr_on_self_review and not fold_suggestions_on_self_review:
pr_body += ' <!-- approve pr self-review -->'
elif fold_suggestions_on_self_review and not approve_pr_on_self_review:
pr_body += ' <!-- fold suggestions self-review -->'
else:
pr_body += ' <!-- approve and fold suggestions self-review -->'
return pr_body
async def publish_no_suggestions(self):
pr_body = "## PR Code Suggestions ✨\n\nNo code suggestions found for the PR."
if get_settings().config.publish_output and get_settings().config.publish_output_no_suggestions:
get_logger().warning('No code suggestions found for the PR.')
get_logger().debug(f"PR output", artifact=pr_body)
if self.progress_response:
self.git_provider.edit_comment(self.progress_response, body=pr_body)
else:
self.git_provider.publish_comment(pr_body)
else:
get_settings().data = {"artifact": ""}
async def dual_publishing(self, data):
data_above_threshold = {'code_suggestions': []}
try:
for suggestion in data['code_suggestions']:
if int(suggestion.get('score', 0)) >= int(
get_settings().pr_code_suggestions.dual_publishing_score_threshold) \
and suggestion.get('improved_code'):
data_above_threshold['code_suggestions'].append(suggestion)
if not data_above_threshold['code_suggestions'][-1]['existing_code']:
get_logger().info(f'Identical existing and improved code for dual publishing found')
data_above_threshold['code_suggestions'][-1]['existing_code'] = suggestion[
'improved_code']
if data_above_threshold['code_suggestions']:
get_logger().info(
f"Publishing {len(data_above_threshold['code_suggestions'])} suggestions in dual publishing mode")
await self.push_inline_code_suggestions(data_above_threshold)
except Exception as e:
get_logger().error(f"Failed to publish dual publishing suggestions, error: {e}")
@staticmethod
def publish_persistent_comment_with_history(git_provider: GitProvider,
pr_comment: str,
initial_header: str,
update_header: bool = True,
name='review',
final_update_message=True,
max_previous_comments=4,
progress_response=None):
progress_response=None,
only_fold=False):
if isinstance(self.git_provider, AzureDevopsProvider): # get_latest_commit_url is not supported yet
def _extract_link(comment_text: str):
r = re.compile(r"<!--.*?-->")
match = r.search(comment_text)
up_to_commit_txt = ""
if match:
up_to_commit_txt = f" up to commit {match.group(0)[4:-3].strip()}"
return up_to_commit_txt
if isinstance(git_provider, AzureDevopsProvider): # get_latest_commit_url is not supported yet
if progress_response:
self.git_provider.edit_comment(progress_response, pr_comment)
git_provider.edit_comment(progress_response, pr_comment)
new_comment = progress_response
else:
self.git_provider.publish_comment(pr_comment)
return
new_comment = git_provider.publish_comment(pr_comment)
return new_comment
history_header = f"#### Previous suggestions\n"
last_commit_num = self.git_provider.get_latest_commit_url().split('/')[-1][:7]
latest_suggestion_header = f"Latest suggestions up to {last_commit_num}"
last_commit_num = git_provider.get_latest_commit_url().split('/')[-1][:7]
if only_fold: # A user clicked on the 'self-review' checkbox
text = get_settings().pr_code_suggestions.code_suggestions_self_review_text
latest_suggestion_header = f"\n\n- [x] {text}"
else:
latest_suggestion_header = f"Latest suggestions up to {last_commit_num}"
latest_commit_html_comment = f"<!-- {last_commit_num} -->"
found_comment = None
if max_previous_comments > 0:
try:
prev_comments = list(self.git_provider.get_issue_comments())
prev_comments = list(git_provider.get_issue_comments())
for comment in prev_comments:
if comment.body.startswith(initial_header):
prev_suggestions = comment.body
found_comment = comment
comment_url = self.git_provider.get_comment_url(comment)
comment_url = git_provider.get_comment_url(comment)
if history_header.strip() not in comment.body:
# no history section
# extract everything between <table> and </table> in comment.body including <table> and </table>
table_index = comment.body.find("<table>")
if table_index == -1:
self.git_provider.edit_comment(comment, pr_comment)
git_provider.edit_comment(comment, pr_comment)
continue
# find http link from comment.body[:table_index]
up_to_commit_txt = self.extract_link(comment.body[:table_index])
up_to_commit_txt = _extract_link(comment.body[:table_index])
prev_suggestion_table = comment.body[
table_index:comment.body.rfind("</table>") + len("</table>")]
@ -278,7 +317,7 @@ class PRCodeSuggestions:
# get text after the latest_suggestion_header in comment.body
table_ind = latest_table.find("<table>")
up_to_commit_txt = self.extract_link(latest_table[:table_ind])
up_to_commit_txt = _extract_link(latest_table[:table_ind])
latest_table = latest_table[table_ind:latest_table.rfind("</table>") + len("</table>")]
# enforce max_previous_comments
@ -305,11 +344,12 @@ class PRCodeSuggestions:
get_logger().info(f"Persistent mode - updating comment {comment_url} to latest {name} message")
if progress_response: # publish to 'progress_response' comment, because it refreshes immediately
self.git_provider.edit_comment(progress_response, pr_comment_updated)
self.git_provider.remove_comment(comment)
git_provider.edit_comment(progress_response, pr_comment_updated)
git_provider.remove_comment(comment)
comment = progress_response
else:
self.git_provider.edit_comment(comment, pr_comment_updated)
return
git_provider.edit_comment(comment, pr_comment_updated)
return comment
except Exception as e:
get_logger().exception(f"Failed to update persistent review, error: {e}")
pass
@ -318,9 +358,12 @@ class PRCodeSuggestions:
body = pr_comment.replace(initial_header, "").strip()
pr_comment = f"{initial_header}\n\n{latest_commit_html_comment}\n\n{body}\n\n"
if progress_response:
self.git_provider.edit_comment(progress_response, pr_comment)
git_provider.edit_comment(progress_response, pr_comment)
new_comment = progress_response
else:
self.git_provider.publish_comment(pr_comment)
new_comment = git_provider.publish_comment(pr_comment)
return new_comment
def extract_link(self, s):
r = re.compile(r"<!--.*?-->")
@ -371,50 +414,7 @@ class PRCodeSuggestions:
response_reflect = await self.self_reflect_on_suggestions(data["code_suggestions"],
patches_diff, model=model_reflection)
if response_reflect:
response_reflect_yaml = load_yaml(response_reflect)
code_suggestions_feedback = response_reflect_yaml["code_suggestions"]
if len(code_suggestions_feedback) == len(data["code_suggestions"]):
for i, suggestion in enumerate(data["code_suggestions"]):
try:
suggestion["score"] = code_suggestions_feedback[i]["suggestion_score"]
suggestion["score_why"] = code_suggestions_feedback[i]["why"]
if 'relevant_lines_start' not in suggestion:
relevant_lines_start = code_suggestions_feedback[i].get('relevant_lines_start', -1)
relevant_lines_end = code_suggestions_feedback[i].get('relevant_lines_end', -1)
suggestion['relevant_lines_start'] = relevant_lines_start
suggestion['relevant_lines_end'] = relevant_lines_end
if relevant_lines_start < 0 or relevant_lines_end < 0:
suggestion["score"] = 0
try:
if get_settings().config.publish_output:
suggestion_statistics_dict = {'score': int(suggestion["score"]),
'label': suggestion["label"].lower().strip()}
get_logger().info(f"PR-Agent suggestions statistics",
statistics=suggestion_statistics_dict, analytics=True)
except Exception as e:
get_logger().error(f"Failed to log suggestion statistics, error: {e}")
pass
except Exception as e: #
get_logger().error(f"Error processing suggestion score {i}",
artifact={"suggestion": suggestion,
"code_suggestions_feedback": code_suggestions_feedback[i]})
suggestion["score"] = 7
suggestion["score_why"] = ""
# if the before and after code is the same, clear one of them
try:
if suggestion['existing_code'] == suggestion['improved_code']:
get_logger().debug(
f"edited improved suggestion {i + 1}, because equal to existing code: {suggestion['existing_code']}")
if get_settings().pr_code_suggestions.commitable_code_suggestions:
suggestion['improved_code'] = "" # we need 'existing_code' to locate the code in the PR
else:
suggestion['existing_code'] = ""
except Exception as e:
get_logger().error(f"Error processing suggestion {i + 1}, error: {e}")
await self.analyze_self_reflection_response(data, response_reflect)
else:
# get_logger().error(f"Could not self-reflect on suggestions. using default score 7")
for i, suggestion in enumerate(data["code_suggestions"]):
@ -423,6 +423,58 @@ class PRCodeSuggestions:
return data
async def analyze_self_reflection_response(self, data, response_reflect):
response_reflect_yaml = load_yaml(response_reflect)
code_suggestions_feedback = response_reflect_yaml.get("code_suggestions", [])
if code_suggestions_feedback and len(code_suggestions_feedback) == len(data["code_suggestions"]):
for i, suggestion in enumerate(data["code_suggestions"]):
try:
suggestion["score"] = code_suggestions_feedback[i]["suggestion_score"]
suggestion["score_why"] = code_suggestions_feedback[i]["why"]
if 'relevant_lines_start' not in suggestion:
relevant_lines_start = code_suggestions_feedback[i].get('relevant_lines_start', -1)
relevant_lines_end = code_suggestions_feedback[i].get('relevant_lines_end', -1)
suggestion['relevant_lines_start'] = relevant_lines_start
suggestion['relevant_lines_end'] = relevant_lines_end
if relevant_lines_start < 0 or relevant_lines_end < 0:
suggestion["score"] = 0
try:
if get_settings().config.publish_output:
if not suggestion["score"]:
score = -1
else:
score = int(suggestion["score"])
label = suggestion["label"].lower().strip()
label = label.replace('<br>', ' ')
suggestion_statistics_dict = {'score': score,
'label': label}
get_logger().info(f"PR-Agent suggestions statistics",
statistics=suggestion_statistics_dict, analytics=True)
except Exception as e:
get_logger().error(f"Failed to log suggestion statistics, error: {e}")
pass
except Exception as e: #
get_logger().error(f"Error processing suggestion score {i}",
artifact={"suggestion": suggestion,
"code_suggestions_feedback": code_suggestions_feedback[i]})
suggestion["score"] = 7
suggestion["score_why"] = ""
# if the before and after code is the same, clear one of them
try:
if suggestion['existing_code'] == suggestion['improved_code']:
get_logger().debug(
f"edited improved suggestion {i + 1}, because equal to existing code: {suggestion['existing_code']}")
if get_settings().pr_code_suggestions.commitable_code_suggestions:
suggestion['improved_code'] = "" # we need 'existing_code' to locate the code in the PR
else:
suggestion['existing_code'] = ""
except Exception as e:
get_logger().error(f"Error processing suggestion {i + 1}, error: {e}")
@staticmethod
def _truncate_if_needed(suggestion):
max_code_suggestion_length = get_settings().get("PR_CODE_SUGGESTIONS.MAX_CODE_SUGGESTION_LENGTH", 0)
@ -486,7 +538,7 @@ class PRCodeSuggestions:
return data
def push_inline_code_suggestions(self, data):
async def push_inline_code_suggestions(self, data):
code_suggestions = []
if not data['code_suggestions']:
@ -584,7 +636,9 @@ class PRCodeSuggestions:
patches_diff_lines = patches_diff.splitlines()
for i, line in enumerate(patches_diff_lines):
if line.strip():
if line[0].isdigit():
if line.isnumeric():
patches_diff_lines[i] = ''
elif line[0].isdigit():
# find the first letter in the line that starts with a valid letter
for j, char in enumerate(line):
if not char.isdigit():
@ -642,62 +696,6 @@ class PRCodeSuggestions:
self.data = data = None
return data
async def rank_suggestions(self, data: List) -> List:
"""
Call a model to rank (sort) code suggestions based on their importance order.
Args:
data (List): A list of code suggestions to be ranked.
Returns:
List: The ranked list of code suggestions.
"""
suggestion_list = []
if not data:
return suggestion_list
for suggestion in data:
suggestion_list.append(suggestion)
data_sorted = [[]] * len(suggestion_list)
if len(suggestion_list) == 1:
return suggestion_list
try:
suggestion_str = ""
for i, suggestion in enumerate(suggestion_list):
suggestion_str += f"suggestion {i + 1}: " + str(suggestion) + '\n\n'
variables = {'suggestion_list': suggestion_list, 'suggestion_str': suggestion_str}
model = get_settings().config.model
environment = Environment(undefined=StrictUndefined)
system_prompt = environment.from_string(get_settings().pr_sort_code_suggestions_prompt.system).render(
variables)
user_prompt = environment.from_string(get_settings().pr_sort_code_suggestions_prompt.user).render(variables)
response, finish_reason = await self.ai_handler.chat_completion(model=model, system=system_prompt,
user=user_prompt)
sort_order = load_yaml(response)
for s in sort_order['Sort Order']:
suggestion_number = s['suggestion number']
importance_order = s['importance order']
data_sorted[importance_order - 1] = suggestion_list[suggestion_number - 1]
if get_settings().pr_code_suggestions.final_clip_factor != 1:
max_len = max(
len(data_sorted),
int(get_settings().pr_code_suggestions.num_code_suggestions_per_chunk),
)
new_len = int(0.5 + max_len * get_settings().pr_code_suggestions.final_clip_factor)
if new_len < len(data_sorted):
data_sorted = data_sorted[:new_len]
except Exception as e:
if get_settings().config.verbosity_level >= 1:
get_logger().info(f"Could not sort suggestions, error: {e}")
data_sorted = suggestion_list
return data_sorted
def generate_summarized_suggestions(self, data: Dict) -> str:
try:
pr_body = "## PR Code Suggestions ✨\n\n"
@ -813,7 +811,12 @@ class PRCodeSuggestions:
get_logger().info(f"Failed to publish summarized code suggestions, error: {e}")
return ""
async def self_reflect_on_suggestions(self, suggestion_list: List, patches_diff: str, model: str) -> str:
async def self_reflect_on_suggestions(self,
suggestion_list: List,
patches_diff: str,
model: str,
prev_suggestions_str: str = "",
dedicated_prompt: str = "") -> str:
if not suggestion_list:
return ""
@ -826,13 +829,21 @@ class PRCodeSuggestions:
'suggestion_str': suggestion_str,
"diff": patches_diff,
'num_code_suggestions': len(suggestion_list),
'prev_suggestions_str': prev_suggestions_str,
"is_ai_metadata": get_settings().get("config.enable_ai_metadata", False)}
environment = Environment(undefined=StrictUndefined)
system_prompt_reflect = environment.from_string(
get_settings().pr_code_suggestions_reflect_prompt.system).render(
variables)
user_prompt_reflect = environment.from_string(
get_settings().pr_code_suggestions_reflect_prompt.user).render(variables)
if dedicated_prompt:
system_prompt_reflect = environment.from_string(
get_settings().get(dedicated_prompt).system).render(variables)
user_prompt_reflect = environment.from_string(
get_settings().get(dedicated_prompt).user).render(variables)
else:
system_prompt_reflect = environment.from_string(
get_settings().pr_code_suggestions_reflect_prompt.system).render(variables)
user_prompt_reflect = environment.from_string(
get_settings().pr_code_suggestions_reflect_prompt.user).render(variables)
with get_logger().contextualize(command="self_reflect_on_suggestions"):
response_reflect, finish_reason_reflect = await self.ai_handler.chat_completion(model=model,
system=system_prompt_reflect,
@ -840,4 +851,4 @@ class PRCodeSuggestions:
except Exception as e:
get_logger().info(f"Could not reflect on suggestions, error: {e}")
return ""
return response_reflect
return response_reflect

View File

@ -99,7 +99,7 @@ class PRDescription:
# ticket extraction if exists
await extract_and_cache_pr_tickets(self.git_provider, self.vars)
await retry_with_fallback_models(self._prepare_prediction, ModelType.TURBO)
await retry_with_fallback_models(self._prepare_prediction, ModelType.WEAK)
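# describe now runs on the weaker/cheaper model tier; the review flow below still requests ModelType.REGULAR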
if self.prediction:
self._prepare_data()
@@ -171,6 +171,10 @@ class PRDescription:
update_comment = f"**[PR Description]({pr_url})** updated to latest commit ({latest_commit_url})"
self.git_provider.publish_comment(update_comment)
self.git_provider.remove_initial_comment()
else:
get_logger().info('PR description generated, but not published since publish_output is False.')
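# expose the generated description as an artifact so programmatic callers (e.g. the health test) can read it when publishing is disabled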
get_settings().data = {"artifact": pr_body}
return
except Exception as e:
get_logger().error(f"Error generating PR description {self.pr_id}: {e}")

View File

@@ -114,7 +114,7 @@ class PRHelpMessage:
self.vars['snippets'] = docs_prompt.strip()
# run the AI model
response = await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.REGULAR)
response = await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.WEAK)
response_yaml = load_yaml(response)
response_str = response_yaml.get('response')
relevant_sections = response_yaml.get('relevant_sections')

View File

@@ -1,79 +0,0 @@
import copy
from functools import partial
from jinja2 import Environment, StrictUndefined
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
from pr_agent.algo.ai_handlers.litellm_ai_handler import LiteLLMAIHandler
from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models
from pr_agent.algo.token_handler import TokenHandler
from pr_agent.config_loader import get_settings
from pr_agent.git_providers import get_git_provider
from pr_agent.git_providers.git_provider import get_main_pr_language
from pr_agent.log import get_logger
class PRInformationFromUser:
def __init__(self, pr_url: str, args: list = None,
ai_handler: partial[BaseAiHandler,] = LiteLLMAIHandler):
self.git_provider = get_git_provider()(pr_url)
self.main_pr_language = get_main_pr_language(
self.git_provider.get_languages(), self.git_provider.get_files()
)
self.ai_handler = ai_handler()
self.ai_handler.main_pr_language = self.main_pr_language
self.vars = {
"title": self.git_provider.pr.title,
"branch": self.git_provider.get_pr_branch(),
"description": self.git_provider.get_pr_description(),
"language": self.main_pr_language,
"diff": "", # empty diff for initial calculation
"commit_messages_str": self.git_provider.get_commit_messages(),
}
self.token_handler = TokenHandler(self.git_provider.pr,
self.vars,
get_settings().pr_information_from_user_prompt.system,
get_settings().pr_information_from_user_prompt.user)
self.patches_diff = None
self.prediction = None
async def run(self):
get_logger().info('Generating question to the user...')
if get_settings().config.publish_output:
self.git_provider.publish_comment("Preparing questions...", is_temporary=True)
await retry_with_fallback_models(self._prepare_prediction)
get_logger().info('Preparing questions...')
pr_comment = self._prepare_pr_answer()
if get_settings().config.publish_output:
get_logger().info('Pushing questions...')
self.git_provider.publish_comment(pr_comment)
self.git_provider.remove_initial_comment()
return ""
async def _prepare_prediction(self, model):
get_logger().info('Getting PR diff...')
self.patches_diff = get_pr_diff(self.git_provider, self.token_handler, model)
get_logger().info('Getting AI prediction...')
self.prediction = await self._get_prediction(model)
async def _get_prediction(self, model: str):
variables = copy.deepcopy(self.vars)
variables["diff"] = self.patches_diff # update diff
environment = Environment(undefined=StrictUndefined)
system_prompt = environment.from_string(get_settings().pr_information_from_user_prompt.system).render(variables)
user_prompt = environment.from_string(get_settings().pr_information_from_user_prompt.user).render(variables)
if get_settings().config.verbosity_level >= 2:
get_logger().info(f"\nSystem prompt:\n{system_prompt}")
get_logger().info(f"\nUser prompt:\n{user_prompt}")
response, finish_reason = await self.ai_handler.chat_completion(
model=model, temperature=get_settings().config.temperature, system=system_prompt, user=user_prompt)
return response
def _prepare_pr_answer(self) -> str:
model_output = self.prediction.strip()
if get_settings().config.verbosity_level >= 2:
get_logger().info(f"answer_str:\n{model_output}")
answer_str = f"{model_output}\n\n Please respond to the questions above in the following format:\n\n" +\
"\n>/answer\n>1) ...\n>2) ...\n>...\n"
return answer_str

View File

@@ -79,7 +79,7 @@ class PR_LineQuestions:
line_end=line_end,
side=side)
if self.patch_with_lines:
response = await retry_with_fallback_models(self._get_prediction, model_type=ModelType.TURBO)
response = await retry_with_fallback_models(self._get_prediction, model_type=ModelType.WEAK)
get_logger().info('Preparing answer...')
if comment_id:

View File

@@ -63,7 +63,7 @@ class PRQuestions:
if img_path:
get_logger().debug(f"Image path identified", artifact=img_path)
await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.TURBO)
await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.WEAK)
pr_comment = self._prepare_pr_answer()
get_logger().debug(f"PR output", artifact=pr_comment)

View File

@@ -86,7 +86,6 @@ class PRReviewer:
"require_estimate_effort_to_review": get_settings().pr_reviewer.require_estimate_effort_to_review,
'require_can_be_split_review': get_settings().pr_reviewer.require_can_be_split_review,
'require_security_review': get_settings().pr_reviewer.require_security_review,
'num_code_suggestions': get_settings().pr_reviewer.num_code_suggestions,
'question_str': question_str,
'answer_str': answer_str,
"extra_instructions": get_settings().pr_reviewer.extra_instructions,
@@ -148,7 +147,7 @@ class PRReviewer:
if get_settings().config.publish_output and not get_settings().config.get('is_auto_command', False):
self.git_provider.publish_comment("Preparing review...", is_temporary=True)
await retry_with_fallback_models(self._prepare_prediction)
await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.REGULAR)
if not self.prediction:
self.git_provider.remove_initial_comment()
return None
@@ -168,8 +167,10 @@ class PRReviewer:
self.git_provider.publish_comment(pr_review)
self.git_provider.remove_initial_comment()
if get_settings().pr_reviewer.inline_code_comments:
self._publish_inline_code_comments()
else:
get_logger().info("Review output is not published")
get_settings().data = {"artifact": pr_review}
return
except Exception as e:
get_logger().error(f"Failed to review PR: {e}")
@@ -231,33 +232,6 @@ class PRReviewer:
key_issues_to_review = data['review'].pop('key_issues_to_review')
data['review']['key_issues_to_review'] = key_issues_to_review
if 'code_feedback' in data:
code_feedback = data['code_feedback']
# Filter out code suggestions that can be submitted as inline comments
if get_settings().pr_reviewer.inline_code_comments:
del data['code_feedback']
else:
for suggestion in code_feedback:
if ('relevant_file' in suggestion) and (not suggestion['relevant_file'].startswith('``')):
suggestion['relevant_file'] = f"``{suggestion['relevant_file']}``"
if 'relevant_line' not in suggestion:
suggestion['relevant_line'] = ''
relevant_line_str = suggestion['relevant_line'].split('\n')[0]
# removing '+'
suggestion['relevant_line'] = relevant_line_str.lstrip('+').strip()
# try to add line numbers link to code suggestions
if hasattr(self.git_provider, 'generate_link_to_relevant_line_number'):
link = self.git_provider.generate_link_to_relevant_line_number(suggestion)
if link:
suggestion['relevant_line'] = f"[{suggestion['relevant_line']}]({link})"
else:
pass
incremental_review_markdown_text = None
# Add incremental review section
if self.incremental.is_incremental:
@@ -266,7 +240,9 @@ class PRReviewer:
incremental_review_markdown_text = f"Starting from commit {last_commit_url}"
markdown_text = convert_to_markdown_v2(data, self.git_provider.is_supported("gfm_markdown"),
incremental_review_markdown_text, git_provider=self.git_provider)
incremental_review_markdown_text,
git_provider=self.git_provider,
files=self.git_provider.get_diff_files())
# Add help text if gfm_markdown is supported
if self.git_provider.is_supported("gfm_markdown") and get_settings().pr_reviewer.enable_help_text:
@@ -286,38 +262,6 @@ class PRReviewer:
return markdown_text
def _publish_inline_code_comments(self) -> None:
"""
Publishes inline comments on a pull request with code suggestions generated by the AI model.
"""
if get_settings().pr_reviewer.num_code_suggestions == 0:
return
first_key = 'review'
last_key = 'security_concerns'
data = load_yaml(self.prediction.strip(),
keys_fix_yaml=["ticket_compliance_check", "estimated_effort_to_review_[1-5]:", "security_concerns:", "key_issues_to_review:",
"relevant_file:", "relevant_line:", "suggestion:"],
first_key=first_key, last_key=last_key)
comments: List[str] = []
for suggestion in data.get('code_feedback', []):
relevant_file = suggestion.get('relevant_file', '').strip()
relevant_line_in_file = suggestion.get('relevant_line', '').strip()
content = suggestion.get('suggestion', '')
if not relevant_file or not relevant_line_in_file or not content:
get_logger().info("Skipping inline comment with missing file/line/content")
continue
if self.git_provider.is_supported("create_inline_comment"):
comment = self.git_provider.create_inline_comment(content, relevant_file, relevant_line_in_file)
if comment:
comments.append(comment)
else:
self.git_provider.publish_inline_comment(content, relevant_file, relevant_line_in_file, suggestion)
if comments:
self.git_provider.publish_inline_comments(comments)
def _get_user_answers(self) -> Tuple[str, str]:
"""
Retrieves the question and answer strings from the discussion messages related to a pull request.

View File

@@ -41,6 +41,7 @@ class PRUpdateChangelog:
"description": self.git_provider.get_pr_description(),
"language": self.main_language,
"diff": "", # empty diff for initial calculation
"pr_link": "",
"changelog_file_str": self.changelog_file_str,
"today": date.today(),
"extra_instructions": get_settings().pr_update_changelog.extra_instructions,
@@ -73,7 +74,7 @@ class PRUpdateChangelog:
if get_settings().config.publish_output:
self.git_provider.publish_comment("Preparing changelog updates...", is_temporary=True)
await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.TURBO)
await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.WEAK)
new_file_content, answer = self._prepare_changelog_update()
@@ -102,12 +103,23 @@ class PRUpdateChangelog:
async def _get_prediction(self, model: str):
variables = copy.deepcopy(self.vars)
variables["diff"] = self.patches_diff # update diff
if get_settings().pr_update_changelog.add_pr_link:
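# when add_pr_link is enabled, hand the PR URL to the prompt so the changelog entry can link back to the PR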
variables["pr_link"] = self.git_provider.get_pr_url()
environment = Environment(undefined=StrictUndefined)
system_prompt = environment.from_string(get_settings().pr_update_changelog_prompt.system).render(variables)
user_prompt = environment.from_string(get_settings().pr_update_changelog_prompt.user).render(variables)
response, finish_reason = await self.ai_handler.chat_completion(
model=model, system=system_prompt, user=user_prompt, temperature=get_settings().config.temperature)
# post-process the response
response = response.strip()
if not response:
return ""
if response.startswith("```"):
response_lines = response.splitlines()
response_lines = response_lines[1:]
response = "\n".join(response_lines)
response = response.strip("`")
return response
def _prepare_changelog_update(self) -> Tuple[str, str]:

View File

@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "pr-agent"
version = "0.2.4"
version = "0.2.5"
authors = [{ name = "CodiumAI", email = "tal.r@codium.ai" }]

70 tests/health_test/main.py Normal file
View File

@@ -0,0 +1,70 @@
import argparse
import asyncio
import copy
import os
from pathlib import Path
from starlette_context import request_cycle_context, context
from pr_agent.cli import run_command
from pr_agent.config_loader import get_settings, global_settings
from pr_agent.agent.pr_agent import PRAgent, commands
from pr_agent.log import get_logger, setup_logger
from tests.e2e_tests import e2e_utils
log_level = os.environ.get("LOG_LEVEL", "INFO")
setup_logger(log_level)
async def run_async():
pr_url = os.getenv('TEST_PR_URL', 'https://github.com/Codium-ai/pr-agent/pull/1385')
get_settings().set("config.git_provider", "github")
get_settings().set("config.publish_output", False)
get_settings().set("config.fallback_models", [])
agent = PRAgent()
try:
# Run the 'describe' command
get_logger().info(f"\nSanity check for the 'describe' command...")
original_settings = copy.deepcopy(get_settings())
await agent.handle_request(pr_url, ['describe'])
pr_header_body = dict(get_settings().data)['artifact']
assert pr_header_body.startswith('###') and 'PR Type' in pr_header_body and 'Description' in pr_header_body
context['settings'] = copy.deepcopy(original_settings) # Restore settings state after each test to prevent test interference
get_logger().info("PR description generated successfully\n")
# Run the 'review' command
get_logger().info(f"\nSanity check for the 'review' command...")
original_settings = copy.deepcopy(get_settings())
await agent.handle_request(pr_url, ['review'])
pr_review_body = dict(get_settings().data)['artifact']
assert pr_review_body.startswith('##') and 'PR Reviewer Guide' in pr_review_body
context['settings'] = copy.deepcopy(original_settings) # Restore settings state after each test to prevent test interference
get_logger().info("PR review generated successfully\n")
# Run the 'improve' command
get_logger().info(f"\nSanity check for the 'improve' command...")
original_settings = copy.deepcopy(get_settings())
await agent.handle_request(pr_url, ['improve'])
pr_improve_body = dict(get_settings().data)['artifact']
assert pr_improve_body.startswith('##') and 'PR Code Suggestions' in pr_improve_body
context['settings'] = copy.deepcopy(original_settings) # Restore settings state after each test to prevent test interference
get_logger().info("PR improvements generated successfully\n")
get_logger().info(f"\n\n========\nHealth test passed successfully\n========")
except Exception as e:
get_logger().exception(f"\n\n========\nHealth test failed\n========")
raise e
def run():
with request_cycle_context({}):
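# seed starlette_context with a copy of the global settings so commands behave as they would inside a real request cycle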
context['settings'] = copy.deepcopy(global_settings)
asyncio.run(run_async())
if __name__ == '__main__':
run()
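# a minimal way to run this health check (assumption): `python -m tests.health_test.main` from the repo root, with TEST_PR_URL and provider credentials configured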

View File

@@ -11,7 +11,7 @@ class TestClipTokens:
text = "line1\nline2\nline3\nline4\nline5\nline6"
max_tokens = 25
result = clip_tokens(text, max_tokens)
assert result == text
assert result != text
max_tokens = 10
result = clip_tokens(text, max_tokens)

View File

@@ -47,13 +47,10 @@ class TestConvertToMarkdown:
def test_simple_dictionary_input(self):
input_data = {'review': {
'estimated_effort_to_review_[1-5]': '1, because the changes are minimal and straightforward, focusing on a single functionality addition.\n',
'relevant_tests': 'No\n', 'possible_issues': 'No\n', 'security_concerns': 'No\n'}, 'code_feedback': [
{'relevant_file': '``pr_agent/git_providers/git_provider.py\n``', 'language': 'python\n',
'suggestion': "Consider raising an exception or logging a warning when 'pr_url' attribute is not found. This can help in debugging issues related to the absence of 'pr_url' in instances where it's expected. [important]\n",
'relevant_line': '[return ""](https://github.com/Codium-ai/pr-agent-pro/pull/102/files#diff-52d45f12b836f77ed1aef86e972e65404634ea4e2a6083fb71a9b0f9bb9e062fR199)'}]}
'relevant_tests': 'No\n', 'possible_issues': 'No\n', 'security_concerns': 'No\n'}}
expected_output = f'{PRReviewHeader.REGULAR.value} 🔍\n\nHere are some key observations to aid the review process:\n\n<table>\n<tr><td>⏱️&nbsp;<strong>Estimated effort to review</strong>: 1 🔵⚪⚪⚪⚪</td></tr>\n<tr><td>🧪&nbsp;<strong>No relevant tests</strong></td></tr>\n<tr><td>&nbsp;<strong>Possible issues</strong>: No\n</td></tr>\n<tr><td>🔒&nbsp;<strong>No security concerns identified</strong></td></tr>\n</table>\n\n\n<details><summary> <strong>Code feedback:</strong></summary>\n\n<hr><table><tr><td>relevant file</td><td>pr_agent/git_providers/git_provider.py\n</td></tr><tr><td>suggestion &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</td><td>\n\n<strong>\n\nConsider raising an exception or logging a warning when \'pr_url\' attribute is not found. This can help in debugging issues related to the absence of \'pr_url\' in instances where it\'s expected. [important]\n\n</strong>\n</td></tr><tr><td>relevant line</td><td><a href=\'https://github.com/Codium-ai/pr-agent-pro/pull/102/files#diff-52d45f12b836f77ed1aef86e972e65404634ea4e2a6083fb71a9b0f9bb9e062fR199\'>return ""</a></td></tr></table><hr>\n\n</details>'
expected_output = f'{PRReviewHeader.REGULAR.value} 🔍\n\nHere are some key observations to aid the review process:\n\n<table>\n<tr><td>⏱️&nbsp;<strong>Estimated effort to review</strong>: 1 🔵⚪⚪⚪⚪</td></tr>\n<tr><td>🧪&nbsp;<strong>No relevant tests</strong></td></tr>\n<tr><td>&nbsp;<strong>Possible issues</strong>: No\n</td></tr>\n<tr><td>🔒&nbsp;<strong>No security concerns identified</strong></td></tr>\n</table>'
assert convert_to_markdown_v2(input_data).strip() == expected_output.strip()
@@ -67,7 +64,7 @@ class TestConvertToMarkdown:
assert convert_to_markdown_v2(input_data).strip() == expected_output.strip()
def test_dictionary_with_empty_dictionaries(self):
input_data = {'review': {}, 'code_feedback': [{}]}
input_data = {'review': {}}
expected_output = ''