Mirror of https://github.com/qodo-ai/pr-agent.git (synced 2025-07-04 21:00:40 +08:00)

Compare commits: 1 commit, branch `hl/detect_...print_test`

Author | SHA1 | Date
---|---|---
 | 401dc29dee |

README.md (61 changed lines)
@@ -73,7 +73,7 @@ Focused mode

### November 4, 2024

Qodo Merge PR Agent will now leverage context from Jira or GitHub tickets to enhance the PR Feedback. Read more about this feature
[here](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/)
@@ -82,41 +82,39 @@ Qodo Merge PR Agent will now leverage context from Jira or GitHub tickets to enh

Supported commands per platform:

| | | GitHub | GitLab | Bitbucket | Azure DevOps |
| | | GitHub | Gitlab | Bitbucket | Azure DevOps |
|-------|------|:----:|:----:|:----:|:----:|
| TOOLS | [Review](https://qodo-merge-docs.qodo.ai/tools/review/) | ✅ | ✅ | ✅ | ✅ |
| TOOLS | Review | ✅ | ✅ | ✅ | ✅ |
| | [Describe](https://qodo-merge-docs.qodo.ai/tools/describe/) | ✅ | ✅ | ✅ | ✅ |
| | ⮑ Incremental | ✅ | | | |
| | [Improve](https://qodo-merge-docs.qodo.ai/tools/improve/) | ✅ | ✅ | ✅ | ✅ |
| | Describe | ✅ | ✅ | ✅ | ✅ |
| | [Ask](https://qodo-merge-docs.qodo.ai/tools/ask/) | ✅ | ✅ | ✅ | ✅ |
| | ⮑ [Inline File Summary](https://pr-agent-docs.codium.ai/tools/describe#inline-file-summary) 💎 | ✅ | | | |
| | Improve | ✅ | ✅ | ✅ | ✅ |
| | ⮑ Extended | ✅ | ✅ | ✅ | ✅ |
| | Ask | ✅ | ✅ | ✅ | ✅ |
| | ⮑ [Ask on code lines](https://pr-agent-docs.codium.ai/tools/ask#ask-lines) | ✅ | ✅ | | |
| | [Update CHANGELOG](https://qodo-merge-docs.qodo.ai/tools/update_changelog/) | ✅ | ✅ | ✅ | ✅ |
| | [Ticket Context](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/) 💎 | ✅ | ✅ | ✅ | |
| | [Utilizing Best Practices](https://qodo-merge-docs.qodo.ai/tools/improve/#best-practices) 💎 | ✅ | ✅ | ✅ | |
| | [PR Chat](https://qodo-merge-docs.qodo.ai/chrome-extension/features/#pr-chat) 💎 | ✅ | | | |
| | [Suggestion Tracking](https://qodo-merge-docs.qodo.ai/tools/improve/#suggestion-tracking) 💎 | ✅ | ✅ | | |
| | [CI Feedback](https://pr-agent-docs.codium.ai/tools/ci_feedback/) 💎 | ✅ | | | |
| | [PR Documentation](https://pr-agent-docs.codium.ai/tools/documentation/) 💎 | ✅ | ✅ | | |
| | [Custom Labels](https://pr-agent-docs.codium.ai/tools/custom_labels/) 💎 | ✅ | ✅ | | |
| | [Analyze](https://pr-agent-docs.codium.ai/tools/analyze/) 💎 | ✅ | ✅ | | |
| | [Similar Code](https://pr-agent-docs.codium.ai/tools/similar_code/) 💎 | ✅ | | | |
| | [Custom Prompt](https://pr-agent-docs.codium.ai/tools/custom_prompt/) 💎 | ✅ | ✅ | ✅ | |
| | [Test](https://pr-agent-docs.codium.ai/tools/test/) 💎 | ✅ | ✅ | | |
| | Reflect and Review | ✅ | ✅ | ✅ | ✅ |
| | Update CHANGELOG.md | ✅ | ✅ | ✅ | ✅ |
| | Find Similar Issue | ✅ | | | |
| | [Add PR Documentation](https://pr-agent-docs.codium.ai/tools/documentation/) 💎 | ✅ | ✅ | | |
| | [Custom Labels](https://pr-agent-docs.codium.ai/tools/custom_labels/) 💎 | ✅ | ✅ | | |
| | [Analyze](https://pr-agent-docs.codium.ai/tools/analyze/) 💎 | ✅ | ✅ | | |
| | [CI Feedback](https://pr-agent-docs.codium.ai/tools/ci_feedback/) 💎 | ✅ | | | |
| | [Similar Code](https://pr-agent-docs.codium.ai/tools/similar_code/) 💎 | ✅ | | | |
| | | | | | |
| USAGE | [CLI](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#local-repo-cli) | ✅ | ✅ | ✅ | ✅ |
| USAGE | CLI | ✅ | ✅ | ✅ | ✅ |
| | [App / webhook](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#github-app) | ✅ | ✅ | ✅ | ✅ |
| | App / webhook | ✅ | ✅ | ✅ | ✅ |
| | [Tagging bot](https://github.com/Codium-ai/pr-agent#try-it-now) | ✅ | | | |
| | Tagging bot | ✅ | | | |
| | [Actions](https://qodo-merge-docs.qodo.ai/installation/github/#run-as-a-github-action) | ✅ | ✅ | ✅ | ✅ |
| | Actions | ✅ | ✅ | ✅ | ✅ |
| | | | | | |
| CORE | [PR compression](https://qodo-merge-docs.qodo.ai/core-abilities/compression_strategy/) | ✅ | ✅ | ✅ | ✅ |
| CORE | PR compression | ✅ | ✅ | ✅ | ✅ |
| | Repo language prioritization | ✅ | ✅ | ✅ | ✅ |
| | Adaptive and token-aware file patch fitting | ✅ | ✅ | ✅ | ✅ |
| | [Multiple models support](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/) | ✅ | ✅ | ✅ | ✅ |
| | Multiple models support | ✅ | ✅ | ✅ | ✅ |
| | [Local and global metadata](https://qodo-merge-docs.qodo.ai/core-abilities/metadata/) | ✅ | ✅ | ✅ | ✅ |
| | [Static code analysis](https://pr-agent-docs.codium.ai/core-abilities/#static-code-analysis) 💎 | ✅ | ✅ | ✅ | |
| | [Dynamic context](https://qodo-merge-docs.qodo.ai/core-abilities/dynamic_context/) | ✅ | ✅ | ✅ | ✅ |
| | [Self reflection](https://qodo-merge-docs.qodo.ai/core-abilities/self_reflection/) | ✅ | ✅ | ✅ | ✅ |
| | [Static code analysis](https://qodo-merge-docs.qodo.ai/core-abilities/static_code_analysis/) 💎 | ✅ | ✅ | ✅ | |
| | [Global and wiki configurations](https://pr-agent-docs.codium.ai/usage-guide/configuration_options/) 💎 | ✅ | ✅ | ✅ | |
| | [PR interactive actions](https://www.codium.ai/images/pr_agent/pr-actions.mp4) 💎 | ✅ | ✅ | | |
| | [Impact Evaluation](https://qodo-merge-docs.qodo.ai/core-abilities/impact_evaluation/) 💎 | ✅ | ✅ | | |

- 💎 means this feature is available only in [PR-Agent Pro](https://www.codium.ai/pricing/)

[//]: # (- Support for additional git providers is described in [here](./docs/Full_environments.md))
@@ -177,9 +175,14 @@ ___

</kbd>
</p>
</div>

<hr>

<h4><a href="https://github.com/Codium-ai/pr-agent/pull/530">/generate_labels</a></h4>
<div align="center">
<p float="center">
<kbd><img src="https://www.codium.ai/images/pr_agent/geneare_custom_labels_main_short.png" width="300"></kbd>
</p>
</div>

[//]: # (<h4><a href="https://github.com/Codium-ai/pr-agent/pull/78#issuecomment-1639739496">/reflect_and_review:</a></h4>)
@@ -5,25 +5,20 @@

Qodo Merge PR Agent streamlines code review workflows by seamlessly connecting with multiple ticket management systems.
This integration enriches the review process by automatically surfacing relevant ticket information and context alongside code changes.

## Ticket systems supported
## Affected Tools

- GitHub
- Jira (💎)

Ticket Recognition Requirements:

1. The PR description should contain a link to the ticket, or the branch name should start with the ticket ID / number.
2. For Jira tickets, you should follow the instructions in [Jira Integration](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/#jira-integration) in order to authenticate with Jira.

Ticket data fetched:

1. Ticket Title
2. Ticket Description
3. Custom Fields (Acceptance criteria)
4. Subtasks (linked tasks)
5. Labels
6. Attached Images/Screenshots
6. Attached Images/Screenshots 💎

## Affected Tools

Ticket Recognition Requirements:

- The PR description should contain a link to the ticket, or the branch name should start with the ticket ID / number.
- For Jira tickets, you should follow the instructions in [Jira Integration](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/#jira-integration) in order to authenticate with Jira.

### Describe tool

Qodo Merge PR Agent will recognize the ticket and use the ticket content (title, description, labels) to provide additional context for the code changes.
@@ -61,7 +61,7 @@ Or be triggered interactively by using the `analyze` tool.

### Find Similar Code

The [`similar code`](https://qodo-merge-docs.qodo.ai/tools/similar_code/) tool retrieves the most similar code components from inside the organization's codebase or from open-source code, including details about the license associated with each repository.
The [`similar code`](https://qodo-merge-docs.qodo.ai/tools/similar_code/) tool retrieves the most similar code components from inside the organization's codebase, or from open-source code.

For example:
@@ -25,43 +25,36 @@ To search the documentation site using natural language:

Qodo Merge offers extensive pull request functionalities across various git providers.

| | | GitHub | GitLab | Bitbucket | Azure DevOps |
| | | GitHub | Gitlab | Bitbucket | Azure DevOps |
|-------|------|:----:|:----:|:----:|:----:|
| TOOLS | [Review](https://qodo-merge-docs.qodo.ai/tools/review/) | ✅ | ✅ | ✅ | ✅ |
| TOOLS | Review | ✅ | ✅ | ✅ | ✅ |
| | [Describe](https://qodo-merge-docs.qodo.ai/tools/describe/) | ✅ | ✅ | ✅ | ✅ |
| | ⮑ Incremental | ✅ | | | |
| | [Improve](https://qodo-merge-docs.qodo.ai/tools/improve/) | ✅ | ✅ | ✅ | ✅ |
| | Ask | ✅ | ✅ | ✅ | ✅ |
| | [Ask](https://qodo-merge-docs.qodo.ai/tools/ask/) | ✅ | ✅ | ✅ | ✅ |
| | Describe | ✅ | ✅ | ✅ | ✅ |
| | ⮑ [Ask on code lines](https://pr-agent-docs.codium.ai/tools/ask#ask-lines) | ✅ | ✅ | | |
| | ⮑ [Inline file summary](https://qodo-merge-docs.qodo.ai/tools/describe/#inline-file-summary){:target="_blank"} 💎 | ✅ | ✅ | | |
| | [Update CHANGELOG](https://qodo-merge-docs.qodo.ai/tools/update_changelog/) | ✅ | ✅ | ✅ | ✅ |
| | Improve | ✅ | ✅ | ✅ | ✅ |
| | [Ticket Context](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/) 💎 | ✅ | ✅ | ✅ | |
| | ⮑ Extended | ✅ | ✅ | ✅ | ✅ |
| | [Utilizing Best Practices](https://qodo-merge-docs.qodo.ai/tools/improve/#best-practices) 💎 | ✅ | ✅ | ✅ | |
| | [Custom Prompt](./tools/custom_prompt.md){:target="_blank"} 💎 | ✅ | ✅ | ✅ | |
| | [PR Chat](https://qodo-merge-docs.qodo.ai/chrome-extension/features/#pr-chat) 💎 | ✅ | | | |
| | Reflect and Review | ✅ | ✅ | ✅ | |
| | [Suggestion Tracking](https://qodo-merge-docs.qodo.ai/tools/improve/#suggestion-tracking) 💎 | ✅ | ✅ | | |
| | Update CHANGELOG.md | ✅ | ✅ | ✅ | |
| | [CI Feedback](https://pr-agent-docs.codium.ai/tools/ci_feedback/) 💎 | ✅ | | | |
| | Find Similar Issue | ✅ | | | |
| | [PR Documentation](https://pr-agent-docs.codium.ai/tools/documentation/) 💎 | ✅ | ✅ | | |
| | [Add PR Documentation](./tools/documentation.md){:target="_blank"} 💎 | ✅ | ✅ | | |
| | [Custom Labels](https://pr-agent-docs.codium.ai/tools/custom_labels/) 💎 | ✅ | ✅ | | |
| | [Generate Custom Labels](./tools/describe.md#handle-custom-labels-from-the-repos-labels-page-💎){:target="_blank"} 💎 | ✅ | ✅ | | |
| | [Analyze](https://pr-agent-docs.codium.ai/tools/analyze/) 💎 | ✅ | ✅ | | |
| | [Analyze PR Components](./tools/analyze.md){:target="_blank"} 💎 | ✅ | ✅ | | |
| | [Similar Code](https://pr-agent-docs.codium.ai/tools/similar_code/) 💎 | ✅ | | | |
| | | | | | |
| | [Custom Prompt](https://pr-agent-docs.codium.ai/tools/custom_prompt/) 💎 | ✅ | ✅ | ✅ | |
| USAGE | CLI | ✅ | ✅ | ✅ | ✅ |
| | [Test](https://pr-agent-docs.codium.ai/tools/test/) 💎 | ✅ | ✅ | | |
| | App / webhook | ✅ | ✅ | ✅ | ✅ |
| | | | | | |
| | Actions | ✅ | | | |
| USAGE | [CLI](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#local-repo-cli) | ✅ | ✅ | ✅ | ✅ |
| | | | | | |
| | [App / webhook](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#github-app) | ✅ | ✅ | ✅ | ✅ |
| CORE | PR compression | ✅ | ✅ | ✅ | ✅ |
| | [Tagging bot](https://github.com/Codium-ai/pr-agent#try-it-now) | ✅ | | | |
| | Repo language prioritization | ✅ | ✅ | ✅ | ✅ |
| | [Actions](https://qodo-merge-docs.qodo.ai/installation/github/#run-as-a-github-action) | ✅ | ✅ | ✅ | ✅ |
| | Adaptive and token-aware file patch fitting | ✅ | ✅ | ✅ | ✅ |
| | | | | | |
| | Multiple models support | ✅ | ✅ | ✅ | ✅ |
| CORE | [PR compression](https://qodo-merge-docs.qodo.ai/core-abilities/compression_strategy/) | ✅ | ✅ | ✅ | ✅ |
| | Incremental PR review | ✅ | | | |
| | Adaptive and token-aware file patch fitting | ✅ | ✅ | ✅ | ✅ |
| | [Static code analysis](./tools/analyze.md/){:target="_blank"} 💎 | ✅ | ✅ | ✅ | |
| | [Multiple models support](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/) | ✅ | ✅ | ✅ | ✅ |
| | [Multiple configuration options](./usage-guide/configuration_options.md){:target="_blank"} 💎 | ✅ | ✅ | ✅ | |
| | [Local and global metadata](https://qodo-merge-docs.qodo.ai/core-abilities/metadata/) | ✅ | ✅ | ✅ | ✅ |
| | [Dynamic context](https://qodo-merge-docs.qodo.ai/core-abilities/dynamic_context/) | ✅ | ✅ | ✅ | ✅ |
| | [Self reflection](https://qodo-merge-docs.qodo.ai/core-abilities/self_reflection/) | ✅ | ✅ | ✅ | ✅ |
| | [Static code analysis](https://qodo-merge-docs.qodo.ai/core-abilities/static_code_analysis/) 💎 | ✅ | ✅ | ✅ | |
| | [Global and wiki configurations](https://pr-agent-docs.codium.ai/usage-guide/configuration_options/) 💎 | ✅ | ✅ | ✅ | |
| | [PR interactive actions](https://www.codium.ai/images/pr_agent/pr-actions.mp4) 💎 | ✅ | ✅ | | |
| | [Impact Evaluation](https://qodo-merge-docs.qodo.ai/core-abilities/impact_evaluation/) 💎 | ✅ | ✅ | | |

💎 marks a feature available only in [Qodo Merge Pro](https://www.qodo.ai/pricing/){:target="_blank"}
💎 marks a feature available only in [Qodo Merge Pro](https://www.codium.ai/pricing/){:target="_blank"}

## Example Results
@@ -3,7 +3,7 @@ See [here](https://qodo-merge-docs.qodo.ai/overview/pr_agent_pro/) for more deta

A complimentary two-week trial is provided to all new users. Following the trial period, user licenses (seats) are required for continued access.
To purchase user licenses, please visit our [pricing page](https://www.qodo.ai/pricing/).
Once subscribed, users can seamlessly deploy the application across any of their code repositories.
Once subscribed, users can seamlessly deploy the application across any of their GitHub repositories.

## Install Qodo Merge Pro for GitHub
@@ -95,112 +95,6 @@ This feature is controlled by a boolean configuration parameter: `pr_code_sugges

Instead, we leverage a dedicated private page, within your repository wiki, to track suggestions. This approach offers convenient secure suggestion tracking while avoiding pull requests or any noise to the main repository.

## `Extra instructions` and `best practices`

The `improve` tool can be further customized by providing additional instructions and best practices to the AI model.

### Extra instructions

>`Platforms supported: GitHub, GitLab, Bitbucket, Azure DevOps`

You can use the `extra_instructions` configuration option to give the AI model additional instructions for the `improve` tool.
Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter.

Examples for possible instructions:
```toml
[pr_code_suggestions]
extra_instructions="""\
(1) Answer in Japanese
(2) Don't suggest to add try-except block
(3) Ignore changes in toml files
...
"""
```
Use triple quotes to write multi-line instructions. Use bullet points or numbers to make the instructions more readable.

### Best practices 💎

>`Platforms supported: GitHub, GitLab, Bitbucket`

Another option to give additional guidance to the AI model is by creating a dedicated [**wiki page**](https://github.com/Codium-ai/pr-agent/wiki) called `best_practices.md`.
This page can contain a list of best practices, coding standards, and guidelines that are specific to your repo/organization.

The AI model will use this wiki page as a reference, and in case the PR code violates any of the guidelines, it will create additional suggestions, with a dedicated label: `Organization best practice`.

Example for a Python `best_practices.md` content:
```markdown
## Project best practices
- Make sure that I/O operations are encapsulated in a try-except block
- Use the `logging` module for logging instead of `print` statements
- Use `is` and `is not` to compare with `None`
- Use `if __name__ == '__main__':` to run the code only when the script is executed
- Use `with` statement to open files
...
```

Tips for writing an effective `best_practices.md` file:

- Write clearly and concisely
- Include brief code examples when helpful
- Focus on project-specific guidelines that will result in relevant suggestions you actually want to get
- Keep the file relatively short, under 800 lines, since:
    - AI models may not process very long documents effectively
    - Long files tend to contain generic guidelines already known to AI

#### Local and global best practices

By default, Qodo Merge will look for a local `best_practices.md` wiki file in the root of the relevant local repo.

If you also want to enable a global `best_practices.md` wiki file, first set in the global configuration file:

```toml
[best_practices]
enable_global_best_practices = true
```

Then, create a `best_practices.md` wiki file in the root of the [global](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/#global-configuration-file) configuration repository, `pr-agent-settings`.

#### Best practices for multiple languages

For a git organization working with multiple programming languages, you can maintain a centralized global `best_practices.md` file containing language-specific guidelines.
When reviewing pull requests, Qodo Merge automatically identifies the programming language and applies the relevant best practices from this file.

To do this, structure your `best_practices.md` file using the following format:

```
# [Python]
...
# [Java]
...
# [JavaScript]
...
```

#### Dedicated label for best practices suggestions

Best practice suggestions are labeled as `Organization best practice` by default.
To customize this label, modify it in your configuration file:

```toml
[best_practices]
organization_name = "..."
```

And the label will be: `{organization_name} best practice`.

#### Example results

{width=512}

### How to combine `extra instructions` and `best practices`

The `extra instructions` configuration is more related to the `improve` tool prompt. It can be used, for example, to avoid specific suggestions ("Don't suggest to add try-except block", "Ignore changes in toml files", ...) or to emphasize specific aspects or formats ("Answer in Japanese", "Give only short suggestions", ...)

In contrast, the `best_practices.md` file is a general guideline for the way code should be written in the repo.

Using a combination of both can help the AI model to provide relevant and tailored suggestions.
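A minimal sketch of combining both mechanisms in one configuration file; the instruction values are illustrative only, and the wiki `best_practices.md` page itself is maintained separately, as described above.

```toml
[pr_code_suggestions]
# focused, repo-specific prompting for the `improve` tool
extra_instructions="""\
- Answer in Japanese
- Don't suggest to add try-except block
"""

[best_practices]
# also consult a global best_practices.md in the pr-agent-settings repo
enable_global_best_practices = true
```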
## Usage Tips

### Implementing the proposed code suggestions
@@ -297,6 +191,99 @@ This approach has two main benefits:

Note: Chunking is primarily relevant for large PRs. For most PRs (up to 500 lines of code), Qodo Merge will be able to process the entire code in a single call.

### 'Extra instructions' and 'best practices'

#### Extra instructions

>`Platforms supported: GitHub, GitLab, Bitbucket`

You can use the `extra_instructions` configuration option to give the AI model additional instructions for the `improve` tool.
Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Specify relevant aspects that you want the model to focus on.

Examples for possible instructions:
```toml
[pr_code_suggestions]
extra_instructions="""\
(1) Answer in Japanese
(2) Don't suggest to add try-except block
(3) Ignore changes in toml files
...
"""
```
Use triple quotes to write multi-line instructions. Use bullet points or numbers to make the instructions more readable.

#### Best practices 💎

>`Platforms supported: GitHub, GitLab`

Another option to give additional guidance to the AI model is by creating a dedicated [**wiki page**](https://github.com/Codium-ai/pr-agent/wiki) called `best_practices.md`.
This page can contain a list of best practices, coding standards, and guidelines that are specific to your repo/organization.

The AI model will use this wiki page as a reference, and in case the PR code violates any of the guidelines, it will suggest improvements accordingly, with a dedicated label: `Organization best practice`.

An example of `best_practices.md` content can be found [here](https://github.com/Codium-ai/pr-agent/blob/main/docs/docs/usage-guide/EXAMPLE_BEST_PRACTICE.md) (adapted from Google's [pyguide](https://google.github.io/styleguide/pyguide.html)).
This file is only an example. Since it is used as a prompt for an AI model, we want to emphasize the following:

- It should be written in a clear and concise manner
- If needed, it should give short relevant code snippets as examples
- It is recommended to limit the text to 800 lines or fewer. Here’s why:

1) Extremely long best practices documents may not be fully processed by the AI model.

2) A lengthy file probably represents a more "**generic**" set of guidelines, which the AI model is already familiar with. The objective is to focus on a more targeted set of guidelines tailored to the specific needs of this project.

##### Local and global best practices

By default, Qodo Merge will look for a local `best_practices.md` wiki file in the root of the relevant local repo.

If you also want to enable a global `best_practices.md` wiki file, first set in the global configuration file:

```toml
[best_practices]
enable_global_best_practices = true
```

Then, create a `best_practices.md` wiki file in the root of the [global](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/#global-configuration-file) configuration repository, `pr-agent-settings`.

##### Best practices for multiple languages

For a git organization working with multiple programming languages, you can maintain a centralized global `best_practices.md` file containing language-specific guidelines.
When reviewing pull requests, Qodo Merge automatically identifies the programming language and applies the relevant best practices from this file.
Structure your `best_practices.md` file using the following format:

```
# [Python]
...
# [Java]
...
# [JavaScript]
...
```

##### Dedicated label for best practices suggestions

Best practice suggestions are labeled as `Organization best practice` by default.
To customize this label, modify it in your configuration file:

```toml
[best_practices]
organization_name = ""
```

And the label will be: `{organization_name} best practice`.

##### Example results

{width=512}

#### How to combine `extra instructions` and `best practices`

The `extra instructions` configuration is more related to the `improve` tool prompt. It can be used, for example, to avoid specific suggestions ("Don't suggest to add try-except block", "Ignore changes in toml files", ...) or to emphasize specific aspects or formats ("Answer in Japanese", "Give only short suggestions", ...)

In contrast, the `best_practices.md` file is a general guideline for the way code should be written in the repo.

Using a combination of both can help the AI model to provide relevant and tailored suggestions.

## Configuration options

??? example "General options"
@@ -342,10 +329,6 @@ Note: Chunking is primarily relevant for large PRs. For most PRs (up to 500 line

<td><b>wiki_page_accepted_suggestions</b></td>
<td>If set to true, the tool will automatically track accepted suggestions in a dedicated wiki page called `.pr_agent_accepted_suggestions`. Default is true.</td>
</tr>
<tr>
<td><b>allow_thumbs_up_down</b></td>
<td>If set to true, all code suggestions will have thumbs up and thumbs down buttons, to encourage users to provide feedback on the suggestions. Default is false.</td>
</tr>
</table>
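The two options above could be set together. A minimal sketch, assuming they live in the `[pr_code_suggestions]` section like the other `improve` options shown earlier:

```toml
[pr_code_suggestions]
wiki_page_accepted_suggestions = true  # track accepted suggestions in a dedicated wiki page
allow_thumbs_up_down = false           # add thumbs up / thumbs down buttons to each suggestion
```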
??? example "Params for number of suggestions and AI calls"

@@ -363,6 +346,10 @@ Note: Chunking is primarily relevant for large PRs. For most PRs (up to 500 line

<td><b>max_number_of_calls</b></td>
<td>Maximum number of chunks. Default is 3.</td>
</tr>
<tr>
<td><b>rank_extended_suggestions</b></td>
<td>If set to true, the tool will rank the suggestions, based on importance. Default is true.</td>
</tr>
</table>
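A hedged sketch of setting these two parameters explicitly, again assuming the `[pr_code_suggestions]` section:

```toml
[pr_code_suggestions]
max_number_of_calls = 3           # maximum number of chunks / AI calls per PR
rank_extended_suggestions = true  # rank suggestions by importance
```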
## A note on code suggestions quality
@@ -39,7 +39,7 @@ pr_commands = [

]

[pr_reviewer]
extra_instructions = "..."
num_code_suggestions = ...
...
```
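Pulling the two variants together, a hedged `[pr_reviewer]` sketch with illustrative values only:

```toml
[pr_reviewer]
extra_instructions = "Focus on security-related issues"  # illustrative instruction
num_code_suggestions = 0  # legacy option; 0 disables review-time code suggestions
```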
@@ -95,7 +95,7 @@ extra_instructions = "..."

<table>
<tr>
<td><b>num_code_suggestions</b></td>
<td>Number of code suggestions provided by the 'review' tool. Default is 0, meaning no code suggestions will be provided by the `review` tool. Note that this is a legacy feature that will be removed in future releases. Use the `improve` tool instead for code suggestions.</td>
<td>Number of code suggestions provided by the 'review' tool. Default is 0, meaning no code suggestions will be provided by the `review` tool.</td>
</tr>
<tr>
<td><b>inline_code_comments</b></td>
@@ -49,10 +49,9 @@ It can be invoked automatically from the analyze table, can be accessed by:

/analyze
```

Choose the components you want to find similar code for, and click on the `similar` checkbox.

{width=768}

You can search for similar code either within the organization's codebase or globally, which includes open-source repositories. Each result will include the relevant code components along with their associated license details.
If you are looking to search for similar code in the organization's codebase, you can click on the `Organization` checkbox, and it will invoke a new search command just for the organization's codebase.

{width=768}

@@ -1,5 +1,4 @@

## Local repo (CLI)

When running from your locally cloned Qodo Merge repo (CLI), your local configuration file will be used.
Examples of invoking the different tools via the CLI:
@@ -36,29 +35,9 @@ This is useful for debugging or experimenting with different tools.

Default is "github".

### CLI Health Check

To verify that Qodo Merge has been configured correctly, you can run this health check command from the repository root:

```bash
python -m tests.health_test.main
```

If the health check passes, you will see the following output:

```
========
Health test passed successfully
========
```

At the end of the run.

Before running the health check, ensure you have:

- Configured your [LLM provider](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/)
- Added a valid GitHub token to your configuration file

### Online usage
## Online usage

Online usage means invoking Qodo Merge tools by [comments](https://github.com/Codium-ai/pr-agent/pull/229#issuecomment-1695021901) on a PR.
Commands for invoking the different tools via comments:
@@ -78,11 +57,7 @@ For example, if you want to edit the `review` tool configurations, you can run:

```

Any configuration value in the [configuration file](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml) can be similarly edited. Comment `/config` to see the list of available configurations.

## Disabling automatic feedback
## Qodo Merge Automatic Feedback

### Disabling all automatic feedback

To easily disable all automatic feedback from Qodo Merge (GitHub App, GitLab Webhook, BitBucket App, Azure DevOps Webhook), set in a configuration file:
@@ -91,52 +66,46 @@ To easily disable all automatic feedback from Qodo Merge (GitHub App, GitLab Web

disable_auto_feedback = true
```

When this parameter is set to `true`, Qodo Merge will not run any automatic tools (like `describe`, `review`, `improve`) when a new PR is opened, or when new code is pushed to an open PR.
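For reference, a minimal sketch of the setting in context; the `[config]` section name is an assumption based on the project's default configuration layout.

```toml
[config]
disable_auto_feedback = true  # assumed to live under [config]; disables all automatic tools
```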
## GitHub App
### GitHub App

!!! note "Configurations for Qodo Merge Pro"
    Qodo Merge Pro for GitHub is an App, hosted by CodiumAI. So all the instructions below are relevant also for Qodo Merge Pro users.
    Same goes for [GitLab webhook](#gitlab-webhook) and [BitBucket App](#bitbucket-app) sections.

#### GitHub app automatic tools when a new PR is opened
### GitHub app automatic tools when a new PR is opened

The [github_app](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L220) section defines GitHub app specific configurations.
The [github_app](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L108) section defines GitHub app specific configurations.

The configuration parameter `pr_commands` defines the list of tools that will be **run automatically** when a new PR is opened:
The configuration parameter `pr_commands` defines the list of tools that will be **run automatically** when a new PR is opened.

```toml
[github_app]
pr_commands = [
    "/describe",
    "/review",
    "/improve",
    "/improve --pr_code_suggestions.suggestions_score_threshold=5",
]
```

This means that when a new PR is opened/reopened or marked as ready for review, Qodo Merge will run the `describe`, `review` and `improve` tools.
For the `improve` tool, for example, the `suggestions_score_threshold` parameter will be set to 5 (suggestions below a score of 5 won't be presented)

You can override the default tool parameters by using one of the three options for a [configuration file](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/): **wiki**, **local**, or **global**.
For example, if your configuration file contains:
For example, if your local `.pr_agent.toml` file contains:

```toml
[pr_description]
generate_ai_title = true
```

Every time you run the `describe` tool, including automatic runs, the PR title will be generated by the AI.
Every time you run the `describe` tool (including automatic runs) the PR title will be generated by the AI.
To change which tools will run automatically when a new PR is opened, you can set the `pr_commands` parameter in the configuration file.

You can customize configurations specifically for automated runs by using the `--config_path=<value>` parameter.
For instance, to modify the `review` tool settings only for newly opened PRs, use:

```toml
[github_app]
pr_commands = [
    "/describe",
    "/review --pr_reviewer.extra_instructions='focus on the file: ...'",
    "/improve",
]
```

```toml
[github_app]
pr_commands = ["describe", "review"]
```

In this case, only the `describe` and `review` tools will run automatically when a new PR is opened.

#### GitHub app automatic tools for push actions (commits to an open PR)
### GitHub app automatic tools for push actions (commits to an open PR)

In addition to running automatic tools when a PR is opened, the GitHub app can also respond to new code that is pushed to an open PR.
@@ -152,7 +121,7 @@ push_commands = [

```

This means that when new code is pushed to the PR, Qodo Merge will run the `describe` and `review` tools, with the specified parameters.
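The `push_commands` list itself is truncated in this diff. As a rough sketch (the `handle_push_trigger` flag name is an assumption based on the surrounding text), such a configuration might look like:

```toml
[github_app]
handle_push_trigger = true  # assumed flag; enables reacting to commits pushed to an open PR
push_commands = [
    "/describe",
    "/review",
]
```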
### GitHub Action
## GitHub Action

`GitHub Action` is a different way to trigger Qodo Merge tools, and uses a different configuration mechanism than `GitHub App`.<br>
You can configure settings for `GitHub Action` by adding environment variables under the env section in `.github/workflows/pr_agent.yml` file.
Specifically, start by setting the following environment variables:
@@ -163,7 +132,7 @@ Specifically, start by setting the following environment variables:

github_action_config.auto_review: "true" # enable\disable auto review
github_action_config.auto_describe: "true" # enable\disable auto describe
github_action_config.auto_improve: "true" # enable\disable auto improve
github_action_config.pr_actions: '["opened", "reopened", "ready_for_review", "review_requested"]'
github_action_config.pr_actions: ["opened", "reopened", "ready_for_review", "review_requested"]
```

`github_action_config.auto_review`, `github_action_config.auto_describe` and `github_action_config.auto_improve` are used to enable/disable automatic tools that run when a new PR is opened.
If not set, the default configuration is for all three tools to run automatically when a new PR is opened.
@@ -186,7 +155,7 @@ publish_labels = false

to prevent Qodo Merge from publishing labels when running the `describe` tool.

### GitLab Webhook
## GitLab Webhook

After setting up a GitLab webhook, to control which commands will run automatically when a new MR is opened, you can set the `pr_commands` parameter in the configuration file, similar to the GitHub App:

```toml
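# The block body is truncated in this diff. A minimal sketch, assuming the
# [gitlab] section name for the webhook configuration, might look like:
[gitlab]
pr_commands = [
    "/describe",
    "/review",
    "/improve",
]
```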
@@ -212,7 +181,7 @@ push_commands = [

Note that to use the 'handle_push_trigger' feature, you need to give the gitlab webhook also the "Push events" scope.

### BitBucket App
## BitBucket App

Similar to GitHub app, when running Qodo Merge from BitBucket App, the default [configuration file](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml) from a pre-built docker will be initially loaded.

By uploading a local `.pr_agent.toml` file to the root of the repo's main branch, you can edit and customize any configuration parameter. Note that you need to upload `.pr_agent.toml` prior to creating a PR, in order for the configuration to take effect.
@@ -231,7 +200,7 @@ If you experience a lack of responses from Qodo Merge, you might want to set: `b

This will prevent Qodo Merge from acquiring the full file content, and will only use the diff content. This will reduce the number of requests made to BitBucket, at the cost of a small decrease in accuracy, as dynamic context will not be applicable.

#### BitBucket Self-Hosted App automatic tools
### BitBucket Self-Hosted App automatic tools

To control which commands will run automatically when a new PR is opened, you can set the `pr_commands` parameter in the configuration file:
Specifically, set the following values:
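The values themselves are truncated in this diff. A minimal sketch, assuming the `[bitbucket_app]` section name, might look like:

```toml
[bitbucket_app]
pr_commands = [
    "/describe",
    "/review",
    "/improve",
]
```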
@@ -256,7 +225,7 @@ push_commands = [

]
```

### Azure DevOps provider
## Azure DevOps provider

To use the Azure DevOps provider, use the following settings in configuration.toml:

```toml
@@ -278,7 +247,7 @@ org = "https://dev.azure.com/YOUR_ORGANIZATION/"

# pat = "YOUR_PAT_TOKEN" needed only if using PAT for authentication
```

#### Azure DevOps Webhook
### Azure DevOps Webhook

To control which commands will run automatically when a new PR is opened, you can set the `pr_commands` parameter in the configuration file, similar to the GitHub App:

```toml
|
@ -5,6 +5,7 @@ To use a different model than the default (GPT-4), you need to edit in the [conf
|
|||||||
```
|
```
|
||||||
[config]
|
[config]
|
||||||
model = "..."
|
model = "..."
|
||||||
|
model_turbo = "..."
|
||||||
fallback_models = ["..."]
|
fallback_models = ["..."]
|
||||||
```
|
```
|
||||||
|
|
||||||
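To make the two variants above concrete, here is a hedged sketch with illustrative values only; the model names are placeholders, and `model_turbo` appears on only one side of this diff.

```toml
[config]
model = "gpt-4o"                   # primary model (placeholder value)
model_turbo = "gpt-4o"             # present only in one version of this file
fallback_models = ["gpt-4o-mini"]  # tried if the primary model fails (placeholder)
```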
@@ -26,8 +27,9 @@ deployment_id = "" # The deployment name you chose when you deployed the engine

and set in your configuration file:

```
[config]
model="" # the OpenAI model you've deployed on Azure (e.g. gpt-4o)
model="" # the OpenAI model you've deployed on Azure (e.g. gpt-3.5-turbo)
fallback_models=["..."]
model_turbo="" # the OpenAI model you've deployed on Azure (e.g. gpt-3.5-turbo)
fallback_models=["..."] # the OpenAI model you've deployed on Azure (e.g. gpt-3.5-turbo)
```

### Hugging Face

@@ -50,6 +52,7 @@ MAX_TOKENS={

[config] # in configuration.toml
model = "ollama/llama2"
model_turbo = "ollama/llama2"
fallback_models=["ollama/llama2"]

[ollama] # in .secrets.toml

@@ -73,6 +76,7 @@ MAX_TOKENS={

}
[config] # in configuration.toml
model = "huggingface/meta-llama/Llama-2-7b-chat-hf"
model_turbo = "huggingface/meta-llama/Llama-2-7b-chat-hf"
fallback_models=["huggingface/meta-llama/Llama-2-7b-chat-hf"]

[huggingface] # in .secrets.toml

@@ -87,6 +91,7 @@ To use Llama2 model with Replicate, for example, set:

```
[config] # in configuration.toml
model = "replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1"
model_turbo = "replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1"
fallback_models=["replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1"]
[replicate] # in .secrets.toml
key = ...

@@ -102,6 +107,7 @@ To use Llama3 model with Groq, for example, set:

```
[config] # in configuration.toml
model = "llama3-70b-8192"
model_turbo = "llama3-70b-8192"
fallback_models = ["groq/llama3-70b-8192"]
[groq] # in .secrets.toml
key = ... # your Groq api key

@@ -115,6 +121,7 @@ To use Google's Vertex AI platform and its associated models (chat-bison/codecha

```
[config] # in configuration.toml
model = "vertex_ai/codechat-bison"
model_turbo = "vertex_ai/codechat-bison"
fallback_models="vertex_ai/codechat-bison"

[vertexai] # in .secrets.toml

@@ -133,6 +140,7 @@ To use [Google AI Studio](https://aistudio.google.com/) models, set the relevant

```toml
[config] # in configuration.toml
model="google_ai_studio/gemini-1.5-flash"
model_turbo="google_ai_studio/gemini-1.5-flash"
fallback_models=["google_ai_studio/gemini-1.5-flash"]

[google_ai_studio] # in .secrets.toml

@@ -148,6 +156,7 @@ To use Anthropic models, set the relevant models in the configuration section of

```
[config]
model="anthropic/claude-3-opus-20240229"
model_turbo="anthropic/claude-3-opus-20240229"
fallback_models=["anthropic/claude-3-opus-20240229"]
```

@@ -164,6 +173,7 @@ To use Amazon Bedrock and its foundational models, add the below configuration:

```
[config] # in configuration.toml
model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
model_turbo="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
fallback_models=["bedrock/anthropic.claude-v2:1"]
```

@@ -185,6 +195,7 @@ If the relevant model doesn't appear [here](https://github.com/Codium-ai/pr-agen

```
[config]
model="custom_model_name"
model_turbo="custom_model_name"
fallback_models=["custom_model_name"]
```
(2) Set the maximal tokens for the model:
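The diff cuts off before the corresponding setting. As a hedged sketch (the `custom_model_max_tokens` key name is an assumption), this step might look like:

```toml
[config]
custom_model_max_tokens = 8192  # assumed key; illustrative context-window size for the custom model
```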
|
@ -24,8 +24,6 @@ MAX_TOKENS = {
|
|||||||
'o1-mini-2024-09-12': 128000, # 128K, but may be limited by config.max_model_tokens
|
'o1-mini-2024-09-12': 128000, # 128K, but may be limited by config.max_model_tokens
|
||||||
'o1-preview': 128000, # 128K, but may be limited by config.max_model_tokens
|
'o1-preview': 128000, # 128K, but may be limited by config.max_model_tokens
|
||||||
'o1-preview-2024-09-12': 128000, # 128K, but may be limited by config.max_model_tokens
|
'o1-preview-2024-09-12': 128000, # 128K, but may be limited by config.max_model_tokens
|
||||||
'o1-2024-12-17': 204800, # 200K, but may be limited by config.max_model_tokens
|
|
||||||
'o1': 204800, # 200K, but may be limited by config.max_model_tokens
|
|
||||||
'claude-instant-1': 100000,
|
'claude-instant-1': 100000,
|
||||||
'claude-2': 100000,
|
'claude-2': 100000,
|
||||||
'command-nightly': 4096,
|
'command-nightly': 4096,
|
||||||
@ -44,7 +42,6 @@ MAX_TOKENS = {
|
|||||||
'vertex_ai/gemma2': 8200,
|
'vertex_ai/gemma2': 8200,
|
||||||
'gemini/gemini-1.5-pro': 1048576,
|
'gemini/gemini-1.5-pro': 1048576,
|
||||||
'gemini/gemini-1.5-flash': 1048576,
|
'gemini/gemini-1.5-flash': 1048576,
|
||||||
'gemini/gemini-2.0-flash-exp': 1048576,
|
|
||||||
'codechat-bison': 6144,
|
'codechat-bison': 6144,
|
||||||
'codechat-bison-32k': 32000,
|
'codechat-bison-32k': 32000,
|
||||||
'anthropic.claude-instant-v1': 100000,
|
'anthropic.claude-instant-v1': 100000,
|
||||||
|
@ -7,7 +7,6 @@ from litellm import acompletion
|
|||||||
from tenacity import retry, retry_if_exception_type, stop_after_attempt
|
from tenacity import retry, retry_if_exception_type, stop_after_attempt
|
||||||
|
|
||||||
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
|
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
|
||||||
from pr_agent.algo.utils import get_version
|
|
||||||
from pr_agent.config_loader import get_settings
|
from pr_agent.config_loader import get_settings
|
||||||
from pr_agent.log import get_logger
|
from pr_agent.log import get_logger
|
||||||
|
|
||||||
@ -133,7 +132,7 @@ class LiteLLMAIHandler(BaseAiHandler):
|
|||||||
if "langfuse" in callbacks:
|
if "langfuse" in callbacks:
|
||||||
metadata.update({
|
metadata.update({
|
||||||
"trace_name": command,
|
"trace_name": command,
|
||||||
"tags": [git_provider, command, f'version:{get_version()}'],
|
"tags": [git_provider, command],
|
||||||
"trace_metadata": {
|
"trace_metadata": {
|
||||||
"command": command,
|
"command": command,
|
||||||
"pr_url": pr_url,
|
"pr_url": pr_url,
|
||||||
@ -142,7 +141,7 @@ class LiteLLMAIHandler(BaseAiHandler):
|
|||||||
if "langsmith" in callbacks:
|
if "langsmith" in callbacks:
|
||||||
metadata.update({
|
metadata.update({
|
||||||
"run_name": command,
|
"run_name": command,
|
||||||
"tags": [git_provider, command, f'version:{get_version()}'],
|
"tags": [git_provider, command],
|
||||||
"extra": {
|
"extra": {
|
||||||
"metadata": {
|
"metadata": {
|
||||||
"command": command,
|
"command": command,
|
||||||
@ -193,8 +192,8 @@ class LiteLLMAIHandler(BaseAiHandler):
|
|||||||
messages[1]["content"] = [{"type": "text", "text": messages[1]["content"]},
|
messages[1]["content"] = [{"type": "text", "text": messages[1]["content"]},
|
||||||
{"type": "image_url", "image_url": {"url": img_path}}]
|
{"type": "image_url", "image_url": {"url": img_path}}]
|
||||||
|
|
||||||
# Currently, model OpenAI o1 series does not support a separate system and user prompts
|
# Currently O1 does not support separate system and user prompts
|
||||||
O1_MODEL_PREFIX = 'o1'
|
O1_MODEL_PREFIX = 'o1-'
|
||||||
model_type = model.split('/')[-1] if '/' in model else model
|
model_type = model.split('/')[-1] if '/' in model else model
|
||||||
if model_type.startswith(O1_MODEL_PREFIX):
|
if model_type.startswith(O1_MODEL_PREFIX):
|
||||||
user = f"{system}\n\n\n{user}"
|
user = f"{system}\n\n\n{user}"
|
||||||
|
@@ -11,7 +11,7 @@ from pr_agent.algo.git_patch_processing import (
 from pr_agent.algo.language_handler import sort_files_by_main_languages
 from pr_agent.algo.token_handler import TokenHandler
 from pr_agent.algo.types import EDIT_TYPE, FilePatchInfo
-from pr_agent.algo.utils import ModelType, clip_tokens, get_max_tokens, get_weak_model
+from pr_agent.algo.utils import ModelType, clip_tokens, get_max_tokens
 from pr_agent.config_loader import get_settings
 from pr_agent.git_providers.git_provider import GitProvider
 from pr_agent.log import get_logger

@@ -354,8 +354,8 @@ async def retry_with_fallback_models(f: Callable, model_type: ModelType = ModelT
 
 
 def _get_all_models(model_type: ModelType = ModelType.REGULAR) -> List[str]:
-    if model_type == ModelType.WEAK:
-        model = get_weak_model()
+    if model_type == ModelType.TURBO:
+        model = get_settings().config.model_turbo
     else:
        model = get_settings().config.model
     fallback_models = get_settings().config.fallback_models
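For orientation, the hunk above is the whole behavioral switch: a weak-model lookup on the `-` side becomes a turbo-model lookup on the `+` side. Below is a condensed, hedged sketch of the `+` side's selection logic, paraphrased from the diff lines rather than copied from the repository; the `_Config` stand-in mirrors the defaults set in the `configuration.toml` hunk further down and is not the real settings object.

```python
from enum import Enum


class ModelType(str, Enum):
    REGULAR = "regular"
    TURBO = "turbo"  # the "-" side of the diff uses WEAK = "weak" here instead


class _Config:
    # stand-in for get_settings().config, using the defaults from the configuration hunk below
    model = "gpt-4-turbo-2024-04-09"
    model_turbo = "gpt-4o-2024-11-20"
    fallback_models = ["gpt-4o-2024-08-06"]


def get_all_models(model_type: ModelType = ModelType.REGULAR, config=_Config) -> list:
    # "+" side: a TURBO request resolves to config.model_turbo, everything else to config.model;
    # the "-" side instead routes ModelType.WEAK through get_weak_model()
    model = config.model_turbo if model_type == ModelType.TURBO else config.model
    return [model] + list(config.fallback_models)


print(get_all_models(ModelType.TURBO))  # ['gpt-4o-2024-11-20', 'gpt-4o-2024-08-06']
```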
@@ -7,13 +7,11 @@ import html
 import json
 import os
 import re
-import sys
 import textwrap
 import time
 import traceback
 from datetime import datetime
 from enum import Enum
-from importlib.metadata import PackageNotFoundError, version
 from typing import Any, List, Tuple
 
 import html2text

@@ -29,12 +27,6 @@ from pr_agent.config_loader import get_settings, global_settings
 from pr_agent.log import get_logger
 
 
-def get_weak_model() -> str:
-    if get_settings().get("config.model_weak"):
-        return get_settings().config.model_weak
-    return get_settings().config.model
-
-
 class Range(BaseModel):
     line_start: int # should be 0-indexed
     line_end: int

@@ -43,7 +35,8 @@ class Range(BaseModel):
 
 class ModelType(str, Enum):
     REGULAR = "regular"
-    WEAK = "weak"
+    TURBO = "turbo"
 
 
 class PRReviewHeader(str, Enum):
     REGULAR = "## PR Reviewer Guide"
@@ -104,8 +97,7 @@ def unique_strings(input_list: List[str]) -> List[str]:
 def convert_to_markdown_v2(output_data: dict,
                            gfm_supported: bool = True,
                            incremental_review=None,
-                           git_provider=None,
-                           files=None) -> str:
+                           git_provider=None) -> str:
     """
     Convert a dictionary of data into markdown format.
     Args:

@@ -229,13 +221,9 @@ def convert_to_markdown_v2(output_data: dict,
                 continue
             relevant_file = issue.get('relevant_file', '').strip()
             issue_header = issue.get('issue_header', '').strip()
-            if issue_header.lower() == 'possible bug':
-                issue_header = 'Possible Issue' # Make the header less frightening
             issue_content = issue.get('issue_content', '').strip()
             start_line = int(str(issue.get('start_line', 0)).strip())
             end_line = int(str(issue.get('end_line', 0)).strip())
-
-            relevant_lines_str = extract_relevant_lines_str(end_line, files, relevant_file, start_line)
             if git_provider:
                 reference_link = git_provider.get_line_link(relevant_file, start_line, end_line)
             else:

@@ -243,10 +231,7 @@ def convert_to_markdown_v2(output_data: dict,
 
             if gfm_supported:
                 if reference_link is not None and len(reference_link) > 0:
-                    if relevant_lines_str:
-                        issue_str = f"<details><summary><a href='{reference_link}'><strong>{issue_header}</strong></a>\n\n{issue_content}</summary>\n\n{relevant_lines_str}\n\n</details>"
-                    else:
-                        issue_str = f"<a href='{reference_link}'><strong>{issue_header}</strong></a><br>{issue_content}"
+                    issue_str = f"<a href='{reference_link}'><strong>{issue_header}</strong></a><br>{issue_content}"
                 else:
                     issue_str = f"<strong>{issue_header}</strong><br>{issue_content}"
             else:

@@ -288,25 +273,6 @@ def convert_to_markdown_v2(output_data: dict,
 
     return markdown_text
 
 
-def extract_relevant_lines_str(end_line, files, relevant_file, start_line):
-    try:
-        relevant_lines_str = ""
-        if files:
-            files = set_file_languages(files)
-            for file in files:
-                if file.filename.strip() == relevant_file:
-                    if not file.head_file:
-                        get_logger().warning(f"No content found in file: {file.filename}")
-                        return ""
-                    relevant_file_lines = file.head_file.splitlines()
-                    relevant_lines_str = "\n".join(relevant_file_lines[start_line - 1:end_line])
-                    relevant_lines_str = f"```{file.language}\n{relevant_lines_str}\n```"
-                    break
-        return relevant_lines_str
-    except Exception as e:
-        get_logger().exception(f"Failed to extract relevant lines: {e}")
-        return ""
-
-
 def ticket_markdown_logic(emoji, markdown_text, value, gfm_supported) -> str:
     ticket_compliance_str = ""
@@ -1140,48 +1106,3 @@ def process_description(description_full: str) -> Tuple[str, List]:
         get_logger().exception(f"Failed to process description: {e}")
 
     return base_description_str, files
-
-def get_version() -> str:
-    # First check pyproject.toml if running directly out of repository
-    if os.path.exists("pyproject.toml"):
-        if sys.version_info >= (3, 11):
-            import tomllib
-            with open("pyproject.toml", "rb") as f:
-                data = tomllib.load(f)
-            if "project" in data and "version" in data["project"]:
-                return data["project"]["version"]
-            else:
-                get_logger().warning("Version not found in pyproject.toml")
-        else:
-            get_logger().warning("Unable to determine local version from pyproject.toml")
-
-    # Otherwise get the installed pip package version
-    try:
-        return version('pr-agent')
-    except PackageNotFoundError:
-        get_logger().warning("Unable to find package named 'pr-agent'")
-    return "unknown"
-
-
-def set_file_languages(diff_files) -> List[FilePatchInfo]:
-    try:
-        # if the language is already set, do not change it
-        if hasattr(diff_files[0], 'language') and diff_files[0].language:
-            return diff_files
-
-        # map file extensions to programming languages
-        language_extension_map_org = get_settings().language_extension_map_org
-        extension_to_language = {}
-        for language, extensions in language_extension_map_org.items():
-            for ext in extensions:
-                extension_to_language[ext] = language
-        for file in diff_files:
-            extension_s = '.' + file.filename.rsplit('.')[-1]
-            language_name = "txt"
-            if extension_s and (extension_s in extension_to_language):
-                language_name = extension_to_language[extension_s]
-            file.language = language_name.lower()
-    except Exception as e:
-        get_logger().exception(f"Failed to set file languages: {e}")
-
-    return diff_files
@@ -3,7 +3,6 @@ import asyncio
 import os
 
 from pr_agent.agent.pr_agent import PRAgent, commands
-from pr_agent.algo.utils import get_version
 from pr_agent.config_loader import get_settings
 from pr_agent.log import get_logger, setup_logger
 

@@ -46,7 +45,6 @@ def set_parser():
 To edit any configuration parameter from 'configuration.toml', just add -config_path=<value>.
 For example: 'python cli.py --pr_url=... review --pr_reviewer.extra_instructions="focus on the file: ..."'
 """)
-    parser.add_argument('--version', action='version', version=f'pr-agent {get_version()}')
     parser.add_argument('--pr_url', type=str, help='The URL of the PR to review', default=None)
     parser.add_argument('--issue_url', type=str, help='The URL of the Issue to review', default=None)
     parser.add_argument('command', type=str, help='The', choices=commands, default='review')
@@ -19,7 +19,7 @@ from ..algo.language_handler import is_valid_file
 from ..algo.types import EDIT_TYPE
 from ..algo.utils import (PRReviewHeader, Range, clip_tokens,
                           find_line_number_of_relevant_line_in_file,
-                          load_large_diff, set_file_languages)
+                          load_large_diff)
 from ..config_loader import get_settings
 from ..log import get_logger
 from ..servers.utils import RateLimitExceeded

@@ -889,7 +889,18 @@ class GithubProvider(GitProvider):
         RE_HUNK_HEADER = re.compile(
             r"^@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@[ ]?(.*)")
 
-        diff_files = set_file_languages(diff_files)
+        # map file extensions to programming languages
+        language_extension_map_org = get_settings().language_extension_map_org
+        extension_to_language = {}
+        for language, extensions in language_extension_map_org.items():
+            for ext in extensions:
+                extension_to_language[ext] = language
+        for file in diff_files:
+            extension_s = '.' + file.filename.rsplit('.')[-1]
+            language_name = "txt"
+            if extension_s and (extension_s in extension_to_language):
+                language_name = extension_to_language[extension_s]
+            file.language = language_name.lower()
 
         for suggestion in code_suggestions_copy:
             try:
@@ -99,24 +99,5 @@ def set_claude_model():
     """
     model_claude = "bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0"
     get_settings().set('config.model', model_claude)
-    get_settings().set('config.model_weak', model_claude)
+    get_settings().set('config.model_turbo', model_claude)
     get_settings().set('config.fallback_models', [model_claude])
-
-
-def is_user_name_a_bot(name: str) -> bool:
-    if not name:
-        return False
-    bot_indicators = ['codium', 'bot_', 'bot-', '_bot', '-bot', 'qodo', "service", "github", "jenkins", "auto",
-                      "cicd", "validator", "ci-", "assistant", "srv-"]
-    return any(indicator in name.lower() for indicator in bot_indicators)
-
-
-def is_pr_description_indicating_bot(description: str) -> bool:
-    if not description:
-        return False
-    bot_descriptions = ["Snyk has created this PR", "This PR was created automatically by",
-                        "This PR was created by a bot",
-                        "This pull request was automatically generated by"]
-    # Check is it's a Snyk bot
-    if any(bot_description in description for bot_description in bot_descriptions):
-        return True
@@ -19,7 +19,7 @@ from starlette_context.middleware import RawContextMiddleware
 from pr_agent.agent.pr_agent import PRAgent
 from pr_agent.algo.utils import update_settings_from_args
 from pr_agent.config_loader import get_settings, global_settings
-from pr_agent.git_providers.utils import apply_repo_settings, is_user_name_a_bot, is_pr_description_indicating_bot
+from pr_agent.git_providers.utils import apply_repo_settings
 from pr_agent.identity_providers import get_identity_provider
 from pr_agent.identity_providers.identity_provider import Eligibility
 from pr_agent.log import LoggingFormat, get_logger, setup_logger

@@ -102,22 +102,11 @@ async def _perform_commands_bitbucket(commands_conf: str, agent: PRAgent, api_ur
 def is_bot_user(data) -> bool:
     try:
         actor = data.get("data", {}).get("actor", {})
-        description = data.get("data", {}).get("pullrequest", {}).get("description", "")
         # allow actor type: user . if it's "AppUser" or "team" then it is a bot user
         allowed_actor_types = {"user"}
         if actor and actor["type"].lower() not in allowed_actor_types:
             get_logger().info(f"BitBucket actor type is not 'user', skipping: {actor}")
             return True
-
-        username = actor.get("username", "")
-        if username and is_user_name_a_bot(username):
-            get_logger().info(f"BitBucket actor is a bot user, skipping: {username}")
-            return True
-
-        if description and is_pr_description_indicating_bot(description):
-            get_logger().info(f"Description indicates a bot user: {actor}",
-                              artifact={"description": description})
-            return True
     except Exception as e:
         get_logger().error(f"Failed 'is_bot_user' logic: {e}")
     return False
@@ -18,7 +18,7 @@ from pr_agent.config_loader import get_settings, global_settings
 from pr_agent.git_providers import (get_git_provider,
                                     get_git_provider_with_context)
 from pr_agent.git_providers.git_provider import IncrementalPR
-from pr_agent.git_providers.utils import apply_repo_settings, is_user_name_a_bot, is_pr_description_indicating_bot
+from pr_agent.git_providers.utils import apply_repo_settings
 from pr_agent.identity_providers import get_identity_provider
 from pr_agent.identity_providers.identity_provider import Eligibility
 from pr_agent.log import LoggingFormat, get_logger, setup_logger

@@ -238,22 +238,13 @@ def get_log_context(body, event, action, build_number):
     return log_context, sender, sender_id, sender_type
 
 
-def is_bot_user(sender, sender_type, user_description):
+def is_bot_user(sender, sender_type):
     try:
         # logic to ignore PRs opened by bot
-        if get_settings().get("GITHUB_APP.IGNORE_BOT_PR", False):
-            if sender_type.lower() == "bot":
-                if 'pr-agent' not in sender:
-                    get_logger().info(f"Ignoring PR from '{sender=}' because it is a bot")
-                    return True
-            if is_user_name_a_bot(sender):
+        if get_settings().get("GITHUB_APP.IGNORE_BOT_PR", False) and sender_type == "Bot":
+            if 'pr-agent' not in sender:
                 get_logger().info(f"Ignoring PR from '{sender=}' because it is a bot")
                 return True
-        # Ignore PRs opened by bot users based on their description
-        if isinstance(user_description, str) and is_pr_description_indicating_bot(user_description):
-            get_logger().info(f"Description indicates a bot user: {sender}",
-                              artifact={"description": user_description})
-            return True
     except Exception as e:
         get_logger().error(f"Failed 'is_bot_user' logic: {e}")
     return False

@@ -316,8 +307,7 @@ async def handle_request(body: Dict[str, Any], event: str):
     log_context, sender, sender_id, sender_type = get_log_context(body, event, action, build_number)
 
     # logic to ignore PRs opened by bot, PRs with specific titles, labels, source branches, or target branches
-    pr_description = body.get("pull_request", {}).get("body", "")
-    if is_bot_user(sender, sender_type, pr_description) and 'check_run' not in body:
+    if is_bot_user(sender, sender_type) and 'check_run' not in body:
         return {}
     if action != 'created' and 'check_run' not in body:
         if not should_process_pr_logic(body):
@@ -84,7 +84,6 @@ async def is_valid_notification(notification, headers, handled_ids, session, use
             return False, handled_ids
         async with session.get(latest_comment, headers=headers) as comment_response:
             check_prev_comments = False
-            user_tag = "@" + user_id
             if comment_response.status == 200:
                 comment = await comment_response.json()
                 if 'id' in comment:

@@ -102,6 +101,7 @@ async def is_valid_notification(notification, headers, handled_ids, session, use
                     get_logger().debug(f"no comment_body")
                     check_prev_comments = True
                 else:
+                    user_tag = "@" + user_id
                     if user_tag not in comment_body:
                         get_logger().debug(f"user_tag not in comment_body")
                         check_prev_comments = True
@@ -15,7 +15,7 @@ from starlette_context.middleware import RawContextMiddleware
 from pr_agent.agent.pr_agent import PRAgent
 from pr_agent.algo.utils import update_settings_from_args
 from pr_agent.config_loader import get_settings, global_settings
-from pr_agent.git_providers.utils import apply_repo_settings, is_user_name_a_bot, is_pr_description_indicating_bot
+from pr_agent.git_providers.utils import apply_repo_settings
 from pr_agent.log import LoggingFormat, get_logger, setup_logger
 from pr_agent.secret_providers import get_secret_provider
 

@@ -86,14 +86,10 @@ def is_bot_user(data) -> bool:
     try:
         # logic to ignore bot users (unlike Github, no direct flag for bot users in gitlab)
         sender_name = data.get("user", {}).get("name", "unknown").lower()
-        if is_user_name_a_bot(sender_name):
+        bot_indicators = ['codium', 'bot_', 'bot-', '_bot', '-bot']
+        if any(indicator in sender_name for indicator in bot_indicators):
             get_logger().info(f"Skipping GitLab bot user: {sender_name}")
             return True
-        pr_description = data.get('object_attributes', {}).get('description', '')
-        if pr_description and is_pr_description_indicating_bot(pr_description):
-            get_logger().info(f"Description indicates a bot user: {sender_name}",
-                              artifact={"description": pr_description})
-            return True
     except Exception as e:
         get_logger().error(f"Failed 'is_bot_user' logic: {e}")
     return False
@@ -1,8 +1,8 @@
 [config]
 # models
-model="gpt-4o-2024-11-20"
+model="gpt-4-turbo-2024-04-09"
+model_turbo="gpt-4o-2024-11-20"
 fallback_models=["gpt-4o-2024-08-06"]
-#model_weak="gpt-4o-mini-2024-07-18" # optional, a weaker model to use for some easier tasks
 # CLI
 git_provider="github"
 publish_output=true

@@ -55,9 +55,10 @@ require_can_be_split_review=false
 require_security_review=true
 require_ticket_analysis_review=true
 # general options
-num_code_suggestions=0 # legacy mode. use the `improve` command instead
+num_code_suggestions=0
 inline_code_comments = false
 ask_and_reflect=false
+#automatic_review=true
 persistent_comment=true
 extra_instructions = ""
 final_update_message = true

@@ -114,6 +115,7 @@ dual_publishing_score_threshold=-1 # -1 to disable, [0-10] to set the threshold
 focus_only_on_problems=true
 #
 extra_instructions = ""
+rank_suggestions = false
 enable_help_text=false
 enable_chat_text=false
 enable_intro_text=true

@@ -128,7 +130,7 @@ auto_extended_mode=true
 num_code_suggestions_per_chunk=4
 max_number_of_calls = 3
 parallel_calls = true
+rank_extended_suggestions = false
 final_clip_factor = 0.8
 # self-review checkbox
 demand_code_suggestions_self_review=false # add a checkbox for the author to self-review the code suggestions

@@ -138,7 +140,6 @@ fold_suggestions_on_self_review=true # Pro feature. if true, the code suggestion
 # Suggestion impact 💎
 publish_post_process_suggestion_impact=true
 wiki_page_accepted_suggestions=true
-allow_thumbs_up_down=false
 
 [pr_custom_prompt] # /custom_prompt #
 prompt = """\

@@ -218,7 +219,7 @@ override_deployment_type = true
 handle_pr_actions = ['opened', 'reopened', 'ready_for_review']
 pr_commands = [
     "/describe --pr_description.final_update_message=false",
-    "/review",
+    "/review --pr_reviewer.num_code_suggestions=0",
     "/improve",
 ]
 # settings for "pull_request" event with "synchronize" action - used to detect and handle push triggers for new commits

@@ -230,27 +231,27 @@ push_trigger_pending_tasks_backlog = true
 push_trigger_pending_tasks_ttl = 300
 push_commands = [
     "/describe",
-    "/review",
+    "/review --pr_reviewer.num_code_suggestions=0",
 ]
 
 [gitlab]
 url = "https://gitlab.com"
 pr_commands = [
     "/describe --pr_description.final_update_message=false",
-    "/review",
+    "/review --pr_reviewer.num_code_suggestions=0",
     "/improve",
 ]
 handle_push_trigger = false
 push_commands = [
     "/describe",
-    "/review",
+    "/review --pr_reviewer.num_code_suggestions=0",
 ]
 
 [bitbucket_app]
 pr_commands = [
     "/describe --pr_description.final_update_message=false",
-    "/review",
-    "/improve --pr_code_suggestions.commitable_code_suggestions=true",
+    "/review --pr_reviewer.num_code_suggestions=0",
+    "/improve --pr_code_suggestions.commitable_code_suggestions=true --pr_code_suggestions.suggestions_score_threshold=7",
 ]
 avoid_full_files = false
 

@@ -275,8 +276,8 @@ avoid_full_files = false
 url = ""
 pr_commands = [
     "/describe --pr_description.final_update_message=false",
-    "/review",
-    "/improve --pr_code_suggestions.commitable_code_suggestions=true",
+    "/review --pr_reviewer.num_code_suggestions=0",
+    "/improve --pr_code_suggestions.commitable_code_suggestions=true --pr_code_suggestions.suggestions_score_threshold=7",
 ]
 
 [litellm]
@@ -80,8 +80,8 @@ class SubPR(BaseModel):
 
 class KeyIssuesComponentLink(BaseModel):
     relevant_file: str = Field(description="The full file path of the relevant file")
-    issue_header: str = Field(description="One or two word title for the the issue. For example: 'Possible Bug', etc.")
-    issue_content: str = Field(description="A short and concise summary of what should be further inspected and validated during the PR review process for this issue. Do not reference line numbers in this field.")
+    issue_header: str = Field(description="One or two word title for the the issue. For example: 'Possible Bug', 'Performance Issue', 'Code Smell', etc.")
+    issue_content: str = Field(description="A short and concise summary of what should be further inspected and validated during the PR review process for this issue. Don't state line numbers here")
     start_line: int = Field(description="The start line that corresponds to this issue in the relevant file")
     end_line: int = Field(description="The end line that corresponds to this issue in the relevant file")
 

@@ -111,7 +111,7 @@ class Review(BaseModel):
 {%- if question_str %}
     insights_from_user_answers: str = Field(description="shortly summarize the insights you gained from the user's answers to the questions")
 {%- endif %}
-    key_issues_to_review: List[KeyIssuesComponentLink] = Field("A short and diverse list (0-3 issues) of high-priority bugs, problems or performance concerns introduced in the PR code, which the PR reviewer should further focus on and validate during the review process.")
+    key_issues_to_review: List[KeyIssuesComponentLink] = Field("A diverse list of bugs, issue or major performance concerns introduced in this PR, which the PR reviewer should further investigate")
 {%- if require_security_review %}
     security_concerns: str = Field(description="Does this PR code introduce possible vulnerabilities such as exposure of sensitive information (e.g., API keys, secrets, passwords), or security concerns like SQL injection, XSS, CSRF, and others ? Answer 'No' (without explaining why) if there are no possible issues. If there are security concerns or issues, start your answer with a short header, such as: 'Sensitive information exposure: ...', 'SQL injection: ...' etc. Explain your answer. Be specific and give examples if possible")
 {%- endif %}
@@ -21,7 +21,7 @@ from pr_agent.config_loader import get_settings
 from pr_agent.git_providers import (AzureDevopsProvider, GithubProvider,
                                     GitLabProvider, get_git_provider,
                                     get_git_provider_with_context)
-from pr_agent.git_providers.git_provider import get_main_pr_language, GitProvider
+from pr_agent.git_providers.git_provider import get_main_pr_language
 from pr_agent.log import get_logger
 from pr_agent.servers.help import HelpMessage
 from pr_agent.tools.pr_description import insert_br_after_x_chars
@@ -103,8 +103,6 @@ class PRCodeSuggestions:
             relevant_configs = {'pr_code_suggestions': dict(get_settings().pr_code_suggestions),
                                 'config': dict(get_settings().config)}
             get_logger().debug("Relevant configs", artifacts=relevant_configs)
-
-            # publish "Preparing suggestions..." comments
             if (get_settings().config.publish_output and get_settings().config.publish_output_progress and
                     not get_settings().config.get('is_auto_command', False)):
                 if self.git_provider.is_supported("gfm_markdown"):
@@ -112,26 +110,33 @@ class PRCodeSuggestions:
                 else:
                     self.git_provider.publish_comment("Preparing suggestions...", is_temporary=True)
 
-            # call the model to get the suggestions, and self-reflect on them
             if not self.is_extended:
-                data = await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.REGULAR)
+                data = await retry_with_fallback_models(self._prepare_prediction)
             else:
-                data = await retry_with_fallback_models(self._prepare_prediction_extended, model_type=ModelType.REGULAR)
+                data = await retry_with_fallback_models(self._prepare_prediction_extended)
             if not data:
                 data = {"code_suggestions": []}
-            self.data = data
 
-            # Handle the case where the PR has no suggestions
             if (data is None or 'code_suggestions' not in data or not data['code_suggestions']):
-                await self.publish_no_suggestions()
+                pr_body = "## PR Code Suggestions ✨\n\nNo code suggestions found for the PR."
+                get_logger().warning('No code suggestions found for the PR.')
+                if get_settings().config.publish_output and get_settings().config.publish_output_no_suggestions:
+                    get_logger().debug(f"PR output", artifact=pr_body)
+                    if self.progress_response:
+                        self.git_provider.edit_comment(self.progress_response, body=pr_body)
+                    else:
+                        self.git_provider.publish_comment(pr_body)
+                else:
+                    get_settings().data = {"artifact": ""}
                 return
 
-            # publish the suggestions
-            if get_settings().config.publish_output:
-                # If a temporary comment was published, remove it
-                self.git_provider.remove_initial_comment()
+            if (not self.is_extended and get_settings().pr_code_suggestions.rank_suggestions) or \
+                    (self.is_extended and get_settings().pr_code_suggestions.rank_extended_suggestions):
+                get_logger().info('Ranking Suggestions...')
+                data['code_suggestions'] = await self.rank_suggestions(data['code_suggestions'])
 
-                # Publish table summarized suggestions
+            if get_settings().config.publish_output:
+                self.git_provider.remove_initial_comment()
                 if ((not get_settings().pr_code_suggestions.commitable_code_suggestions) and
                         self.git_provider.is_supported("gfm_markdown")):
 
@@ -141,7 +146,10 @@ class PRCodeSuggestions:
 
                     # require self-review
                     if get_settings().pr_code_suggestions.demand_code_suggestions_self_review:
-                        pr_body = await self.add_self_review_text(pr_body)
+                        text = get_settings().pr_code_suggestions.code_suggestions_self_review_text
+                        pr_body += f"\n\n- [ ] {text}"
+                        if get_settings().pr_code_suggestions.approve_pr_on_self_review:
+                            pr_body += ' <!-- approve pr self-review -->'
 
                     # add usage guide
                     if (get_settings().pr_code_suggestions.enable_chat_text and get_settings().config.is_auto_command

@@ -157,13 +165,13 @@ class PRCodeSuggestions:
                         pr_body += show_relevant_configurations(relevant_section='pr_code_suggestions')
 
                     # publish the PR comment
-                    if get_settings().pr_code_suggestions.persistent_comment: # true by default
-                        self.publish_persistent_comment_with_history(self.git_provider,
-                                                                     pr_body,
+                    if get_settings().pr_code_suggestions.persistent_comment:
+                        final_update_message = False
+                        self.publish_persistent_comment_with_history(pr_body,
                                                                      initial_header="## PR Code Suggestions ✨",
                                                                      update_header=True,
                                                                      name="suggestions",
-                                                                     final_update_message=False,
+                                                                     final_update_message=final_update_message,
                                                                      max_previous_comments=get_settings().pr_code_suggestions.max_history_len,
                                                                      progress_response=self.progress_response)
                     else:
@@ -174,15 +182,29 @@ class PRCodeSuggestions:
 
                     # dual publishing mode
                     if int(get_settings().pr_code_suggestions.dual_publishing_score_threshold) > 0:
-                        await self.dual_publishing(data)
+                        data_above_threshold = {'code_suggestions': []}
+                        try:
+                            for suggestion in data['code_suggestions']:
+                                if int(suggestion.get('score', 0)) >= int(get_settings().pr_code_suggestions.dual_publishing_score_threshold) \
+                                        and suggestion.get('improved_code'):
+                                    data_above_threshold['code_suggestions'].append(suggestion)
+                                    if not data_above_threshold['code_suggestions'][-1]['existing_code']:
+                                        get_logger().info(f'Identical existing and improved code for dual publishing found')
+                                        data_above_threshold['code_suggestions'][-1]['existing_code'] = suggestion[
+                                            'improved_code']
+                            if data_above_threshold['code_suggestions']:
+                                get_logger().info(
+                                    f"Publishing {len(data_above_threshold['code_suggestions'])} suggestions in dual publishing mode")
+                                self.push_inline_code_suggestions(data_above_threshold)
+                        except Exception as e:
+                            get_logger().error(f"Failed to publish dual publishing suggestions, error: {e}")
                 else:
-                    await self.push_inline_code_suggestions(data)
+                    self.push_inline_code_suggestions(data)
                     if self.progress_response:
                         self.git_provider.remove_comment(self.progress_response)
             else:
                 get_logger().info('Code suggestions generated for PR, but not published since publish_output is False.')
-                pr_body = self.generate_summarized_suggestions(data)
-                get_settings().data = {"artifact": pr_body}
+                get_settings().data = {"artifact": data}
                 return
@@ -195,108 +217,47 @@ class PRCodeSuggestions:
                 self.git_provider.remove_initial_comment()
                 self.git_provider.publish_comment(f"Failed to generate code suggestions for PR")
             except Exception as e:
-                get_logger().exception(f"Failed to update persistent review, error: {e}")
+                pass
 
-    async def add_self_review_text(self, pr_body):
-        text = get_settings().pr_code_suggestions.code_suggestions_self_review_text
-        pr_body += f"\n\n- [ ] {text}"
-        approve_pr_on_self_review = get_settings().pr_code_suggestions.approve_pr_on_self_review
-        fold_suggestions_on_self_review = get_settings().pr_code_suggestions.fold_suggestions_on_self_review
-        if approve_pr_on_self_review and not fold_suggestions_on_self_review:
-            pr_body += ' <!-- approve pr self-review -->'
-        elif fold_suggestions_on_self_review and not approve_pr_on_self_review:
-            pr_body += ' <!-- fold suggestions self-review -->'
-        else:
-            pr_body += ' <!-- approve and fold suggestions self-review -->'
-        return pr_body
-
-    async def publish_no_suggestions(self):
-        pr_body = "## PR Code Suggestions ✨\n\nNo code suggestions found for the PR."
-        if get_settings().config.publish_output and get_settings().config.publish_output_no_suggestions:
-            get_logger().warning('No code suggestions found for the PR.')
-            get_logger().debug(f"PR output", artifact=pr_body)
-            if self.progress_response:
-                self.git_provider.edit_comment(self.progress_response, body=pr_body)
-            else:
-                self.git_provider.publish_comment(pr_body)
-        else:
-            get_settings().data = {"artifact": ""}
-
-    async def dual_publishing(self, data):
-        data_above_threshold = {'code_suggestions': []}
-        try:
-            for suggestion in data['code_suggestions']:
-                if int(suggestion.get('score', 0)) >= int(
-                        get_settings().pr_code_suggestions.dual_publishing_score_threshold) \
-                        and suggestion.get('improved_code'):
-                    data_above_threshold['code_suggestions'].append(suggestion)
-                    if not data_above_threshold['code_suggestions'][-1]['existing_code']:
-                        get_logger().info(f'Identical existing and improved code for dual publishing found')
-                        data_above_threshold['code_suggestions'][-1]['existing_code'] = suggestion[
-                            'improved_code']
-            if data_above_threshold['code_suggestions']:
-                get_logger().info(
-                    f"Publishing {len(data_above_threshold['code_suggestions'])} suggestions in dual publishing mode")
-                await self.push_inline_code_suggestions(data_above_threshold)
-        except Exception as e:
-            get_logger().error(f"Failed to publish dual publishing suggestions, error: {e}")
-
-    @staticmethod
-    def publish_persistent_comment_with_history(git_provider: GitProvider,
-                                                pr_comment: str,
+    def publish_persistent_comment_with_history(self, pr_comment: str,
                                                 initial_header: str,
                                                 update_header: bool = True,
                                                 name='review',
                                                 final_update_message=True,
                                                 max_previous_comments=4,
-                                                progress_response=None,
-                                                only_fold=False):
-
-        def _extract_link(comment_text: str):
-            r = re.compile(r"<!--.*?-->")
-            match = r.search(comment_text)
-
-            up_to_commit_txt = ""
-            if match:
-                up_to_commit_txt = f" up to commit {match.group(0)[4:-3].strip()}"
-            return up_to_commit_txt
-
-        if isinstance(git_provider, AzureDevopsProvider): # get_latest_commit_url is not supported yet
+                                                progress_response=None):
+        if isinstance(self.git_provider, AzureDevopsProvider): # get_latest_commit_url is not supported yet
             if progress_response:
-                git_provider.edit_comment(progress_response, pr_comment)
-                new_comment = progress_response
+                self.git_provider.edit_comment(progress_response, pr_comment)
             else:
-                new_comment = git_provider.publish_comment(pr_comment)
-            return new_comment
+                self.git_provider.publish_comment(pr_comment)
+            return
 
         history_header = f"#### Previous suggestions\n"
-        last_commit_num = git_provider.get_latest_commit_url().split('/')[-1][:7]
-        if only_fold: # A user clicked on the 'self-review' checkbox
-            text = get_settings().pr_code_suggestions.code_suggestions_self_review_text
-            latest_suggestion_header = f"\n\n- [x] {text}"
-        else:
-            latest_suggestion_header = f"Latest suggestions up to {last_commit_num}"
+        last_commit_num = self.git_provider.get_latest_commit_url().split('/')[-1][:7]
+        latest_suggestion_header = f"Latest suggestions up to {last_commit_num}"
         latest_commit_html_comment = f"<!-- {last_commit_num} -->"
         found_comment = None
 
         if max_previous_comments > 0:
             try:
-                prev_comments = list(git_provider.get_issue_comments())
+                prev_comments = list(self.git_provider.get_issue_comments())
                 for comment in prev_comments:
                     if comment.body.startswith(initial_header):
                         prev_suggestions = comment.body
                         found_comment = comment
-                        comment_url = git_provider.get_comment_url(comment)
+                        comment_url = self.git_provider.get_comment_url(comment)
 
                         if history_header.strip() not in comment.body:
                             # no history section
                             # extract everything between <table> and </table> in comment.body including <table> and </table>
                             table_index = comment.body.find("<table>")
                             if table_index == -1:
-                                git_provider.edit_comment(comment, pr_comment)
+                                self.git_provider.edit_comment(comment, pr_comment)
                                 continue
                             # find http link from comment.body[:table_index]
-                            up_to_commit_txt = _extract_link(comment.body[:table_index])
+                            up_to_commit_txt = self.extract_link(comment.body[:table_index])
                             prev_suggestion_table = comment.body[
                                 table_index:comment.body.rfind("</table>") + len("</table>")]
 
@@ -317,7 +278,7 @@ class PRCodeSuggestions:
 
                             # get text after the latest_suggestion_header in comment.body
                             table_ind = latest_table.find("<table>")
-                            up_to_commit_txt = _extract_link(latest_table[:table_ind])
+                            up_to_commit_txt = self.extract_link(latest_table[:table_ind])
 
                             latest_table = latest_table[table_ind:latest_table.rfind("</table>") + len("</table>")]
                             # enforce max_previous_comments
@@ -344,12 +305,11 @@ class PRCodeSuggestions:
 
                         get_logger().info(f"Persistent mode - updating comment {comment_url} to latest {name} message")
                         if progress_response: # publish to 'progress_response' comment, because it refreshes immediately
-                            git_provider.edit_comment(progress_response, pr_comment_updated)
-                            git_provider.remove_comment(comment)
-                            comment = progress_response
+                            self.git_provider.edit_comment(progress_response, pr_comment_updated)
+                            self.git_provider.remove_comment(comment)
                         else:
-                            git_provider.edit_comment(comment, pr_comment_updated)
-                        return comment
+                            self.git_provider.edit_comment(comment, pr_comment_updated)
+                        return
                     except Exception as e:
                         get_logger().exception(f"Failed to update persistent review, error: {e}")
                         pass
@@ -358,12 +318,9 @@ class PRCodeSuggestions:
         body = pr_comment.replace(initial_header, "").strip()
         pr_comment = f"{initial_header}\n\n{latest_commit_html_comment}\n\n{body}\n\n"
         if progress_response:
-            git_provider.edit_comment(progress_response, pr_comment)
-            new_comment = progress_response
+            self.git_provider.edit_comment(progress_response, pr_comment)
         else:
-            new_comment = git_provider.publish_comment(pr_comment)
-        return new_comment
+            self.git_provider.publish_comment(pr_comment)
 
 
     def extract_link(self, s):
         r = re.compile(r"<!--.*?-->")
@@ -414,7 +371,50 @@ class PRCodeSuggestions:
             response_reflect = await self.self_reflect_on_suggestions(data["code_suggestions"],
                                                                       patches_diff, model=model_reflection)
             if response_reflect:
-                await self.analyze_self_reflection_response(data, response_reflect)
+                response_reflect_yaml = load_yaml(response_reflect)
+                code_suggestions_feedback = response_reflect_yaml["code_suggestions"]
+                if len(code_suggestions_feedback) == len(data["code_suggestions"]):
+                    for i, suggestion in enumerate(data["code_suggestions"]):
+                        try:
+                            suggestion["score"] = code_suggestions_feedback[i]["suggestion_score"]
+                            suggestion["score_why"] = code_suggestions_feedback[i]["why"]
+
+                            if 'relevant_lines_start' not in suggestion:
+                                relevant_lines_start = code_suggestions_feedback[i].get('relevant_lines_start', -1)
+                                relevant_lines_end = code_suggestions_feedback[i].get('relevant_lines_end', -1)
+                                suggestion['relevant_lines_start'] = relevant_lines_start
+                                suggestion['relevant_lines_end'] = relevant_lines_end
+                                if relevant_lines_start < 0 or relevant_lines_end < 0:
+                                    suggestion["score"] = 0
+
+                            try:
+                                if get_settings().config.publish_output:
+                                    suggestion_statistics_dict = {'score': int(suggestion["score"]),
+                                                                  'label': suggestion["label"].lower().strip()}
+                                    get_logger().info(f"PR-Agent suggestions statistics",
+                                                      statistics=suggestion_statistics_dict, analytics=True)
+                            except Exception as e:
+                                get_logger().error(f"Failed to log suggestion statistics, error: {e}")
+                                pass
+
+                        except Exception as e: #
+                            get_logger().error(f"Error processing suggestion score {i}",
+                                               artifact={"suggestion": suggestion,
+                                                         "code_suggestions_feedback": code_suggestions_feedback[i]})
+                            suggestion["score"] = 7
+                            suggestion["score_why"] = ""
+
+                        # if the before and after code is the same, clear one of them
+                        try:
+                            if suggestion['existing_code'] == suggestion['improved_code']:
+                                get_logger().debug(
+                                    f"edited improved suggestion {i + 1}, because equal to existing code: {suggestion['existing_code']}")
+                                if get_settings().pr_code_suggestions.commitable_code_suggestions:
+                                    suggestion['improved_code'] = "" # we need 'existing_code' to locate the code in the PR
+                                else:
+                                    suggestion['existing_code'] = ""
+                        except Exception as e:
+                            get_logger().error(f"Error processing suggestion {i + 1}, error: {e}")
             else:
                 # get_logger().error(f"Could not self-reflect on suggestions. using default score 7")
                 for i, suggestion in enumerate(data["code_suggestions"]):
@@ -423,58 +423,6 @@ class PRCodeSuggestions:
 
         return data
 
-    async def analyze_self_reflection_response(self, data, response_reflect):
-        response_reflect_yaml = load_yaml(response_reflect)
-        code_suggestions_feedback = response_reflect_yaml.get("code_suggestions", [])
-        if code_suggestions_feedback and len(code_suggestions_feedback) == len(data["code_suggestions"]):
-            for i, suggestion in enumerate(data["code_suggestions"]):
-                try:
-                    suggestion["score"] = code_suggestions_feedback[i]["suggestion_score"]
-                    suggestion["score_why"] = code_suggestions_feedback[i]["why"]
-
-                    if 'relevant_lines_start' not in suggestion:
-                        relevant_lines_start = code_suggestions_feedback[i].get('relevant_lines_start', -1)
-                        relevant_lines_end = code_suggestions_feedback[i].get('relevant_lines_end', -1)
-                        suggestion['relevant_lines_start'] = relevant_lines_start
-                        suggestion['relevant_lines_end'] = relevant_lines_end
-                        if relevant_lines_start < 0 or relevant_lines_end < 0:
-                            suggestion["score"] = 0
-
-                    try:
-                        if get_settings().config.publish_output:
-                            if not suggestion["score"]:
-                                score = -1
-                            else:
-                                score = int(suggestion["score"])
-                            label = suggestion["label"].lower().strip()
-                            label = label.replace('<br>', ' ')
-                            suggestion_statistics_dict = {'score': score,
-                                                          'label': label}
-                            get_logger().info(f"PR-Agent suggestions statistics",
-                                              statistics=suggestion_statistics_dict, analytics=True)
-                    except Exception as e:
-                        get_logger().error(f"Failed to log suggestion statistics, error: {e}")
-                        pass
-
-                except Exception as e: #
-                    get_logger().error(f"Error processing suggestion score {i}",
-                                       artifact={"suggestion": suggestion,
-                                                 "code_suggestions_feedback": code_suggestions_feedback[i]})
-                    suggestion["score"] = 7
-                    suggestion["score_why"] = ""
-
-                # if the before and after code is the same, clear one of them
-                try:
-                    if suggestion['existing_code'] == suggestion['improved_code']:
-                        get_logger().debug(
-                            f"edited improved suggestion {i + 1}, because equal to existing code: {suggestion['existing_code']}")
-                        if get_settings().pr_code_suggestions.commitable_code_suggestions:
-                            suggestion['improved_code'] = "" # we need 'existing_code' to locate the code in the PR
-                        else:
-                            suggestion['existing_code'] = ""
-                except Exception as e:
-                    get_logger().error(f"Error processing suggestion {i + 1}, error: {e}")
-
     @staticmethod
     def _truncate_if_needed(suggestion):
         max_code_suggestion_length = get_settings().get("PR_CODE_SUGGESTIONS.MAX_CODE_SUGGESTION_LENGTH", 0)
@@ -538,7 +486,7 @@ class PRCodeSuggestions:
         return data

-    async def push_inline_code_suggestions(self, data):
+    def push_inline_code_suggestions(self, data):
         code_suggestions = []

         if not data['code_suggestions']:
@@ -636,9 +584,7 @@ class PRCodeSuggestions:
         patches_diff_lines = patches_diff.splitlines()
         for i, line in enumerate(patches_diff_lines):
             if line.strip():
-                if line.isnumeric():
-                    patches_diff_lines[i] = ''
-                elif line[0].isdigit():
+                if line[0].isdigit():
                     # find the first letter in the line that starts with a valid letter
                     for j, char in enumerate(line):
                         if not char.isdigit():
@@ -696,6 +642,62 @@ class PRCodeSuggestions:
             self.data = data = None
         return data

+    async def rank_suggestions(self, data: List) -> List:
+        """
+        Call a model to rank (sort) code suggestions based on their importance order.
+
+        Args:
+            data (List): A list of code suggestions to be ranked.
+
+        Returns:
+            List: The ranked list of code suggestions.
+        """
+
+        suggestion_list = []
+        if not data:
+            return suggestion_list
+        for suggestion in data:
+            suggestion_list.append(suggestion)
+        data_sorted = [[]] * len(suggestion_list)
+
+        if len(suggestion_list) == 1:
+            return suggestion_list
+
+        try:
+            suggestion_str = ""
+            for i, suggestion in enumerate(suggestion_list):
+                suggestion_str += f"suggestion {i + 1}: " + str(suggestion) + '\n\n'
+
+            variables = {'suggestion_list': suggestion_list, 'suggestion_str': suggestion_str}
+            model = get_settings().config.model
+            environment = Environment(undefined=StrictUndefined)
+            system_prompt = environment.from_string(get_settings().pr_sort_code_suggestions_prompt.system).render(
+                variables)
+            user_prompt = environment.from_string(get_settings().pr_sort_code_suggestions_prompt.user).render(variables)
+            response, finish_reason = await self.ai_handler.chat_completion(model=model, system=system_prompt,
+                                                                            user=user_prompt)
+
+            sort_order = load_yaml(response)
+            for s in sort_order['Sort Order']:
+                suggestion_number = s['suggestion number']
+                importance_order = s['importance order']
+                data_sorted[importance_order - 1] = suggestion_list[suggestion_number - 1]
+
+            if get_settings().pr_code_suggestions.final_clip_factor != 1:
+                max_len = max(
+                    len(data_sorted),
+                    int(get_settings().pr_code_suggestions.num_code_suggestions_per_chunk),
+                )
+                new_len = int(0.5 + max_len * get_settings().pr_code_suggestions.final_clip_factor)
+                if new_len < len(data_sorted):
+                    data_sorted = data_sorted[:new_len]
+        except Exception as e:
+            if get_settings().config.verbosity_level >= 1:
+                get_logger().info(f"Could not sort suggestions, error: {e}")
+            data_sorted = suggestion_list
+
+        return data_sorted
+
     def generate_summarized_suggestions(self, data: Dict) -> str:
         try:
             pr_body = "## PR Code Suggestions ✨\n\n"
@@ -811,12 +813,7 @@ class PRCodeSuggestions:
             get_logger().info(f"Failed to publish summarized code suggestions, error: {e}")
             return ""

-    async def self_reflect_on_suggestions(self,
-                                          suggestion_list: List,
-                                          patches_diff: str,
-                                          model: str,
-                                          prev_suggestions_str: str = "",
-                                          dedicated_prompt: str = "") -> str:
+    async def self_reflect_on_suggestions(self, suggestion_list: List, patches_diff: str, model: str) -> str:
         if not suggestion_list:
             return ""

@@ -829,21 +826,13 @@ class PRCodeSuggestions:
                          'suggestion_str': suggestion_str,
                          "diff": patches_diff,
                          'num_code_suggestions': len(suggestion_list),
-                         'prev_suggestions_str': prev_suggestions_str,
                          "is_ai_metadata": get_settings().get("config.enable_ai_metadata", False)}
             environment = Environment(undefined=StrictUndefined)
-            if dedicated_prompt:
-                system_prompt_reflect = environment.from_string(
-                    get_settings().get(dedicated_prompt).system).render(variables)
-                user_prompt_reflect = environment.from_string(
-                    get_settings().get(dedicated_prompt).user).render(variables)
-            else:
-                system_prompt_reflect = environment.from_string(
-                    get_settings().pr_code_suggestions_reflect_prompt.system).render(variables)
-                user_prompt_reflect = environment.from_string(
-                    get_settings().pr_code_suggestions_reflect_prompt.user).render(variables)
+            system_prompt_reflect = environment.from_string(
+                get_settings().pr_code_suggestions_reflect_prompt.system).render(
+                variables)
+            user_prompt_reflect = environment.from_string(
+                get_settings().pr_code_suggestions_reflect_prompt.user).render(variables)

             with get_logger().contextualize(command="self_reflect_on_suggestions"):
                 response_reflect, finish_reason_reflect = await self.ai_handler.chat_completion(model=model,
                                                                                                  system=system_prompt_reflect,
@@ -851,4 +840,4 @@ class PRCodeSuggestions:
         except Exception as e:
             get_logger().info(f"Could not reflect on suggestions, error: {e}")
             return ""
         return response_reflect
@@ -99,7 +99,7 @@ class PRDescription:
             # ticket extraction if exists
             await extract_and_cache_pr_tickets(self.git_provider, self.vars)

-            await retry_with_fallback_models(self._prepare_prediction, ModelType.WEAK)
+            await retry_with_fallback_models(self._prepare_prediction, ModelType.TURBO)

             if self.prediction:
                 self._prepare_data()
@@ -171,10 +171,6 @@ class PRDescription:
                     update_comment = f"**[PR Description]({pr_url})** updated to latest commit ({latest_commit_url})"
                     self.git_provider.publish_comment(update_comment)
                 self.git_provider.remove_initial_comment()
-            else:
-                get_logger().info('PR description, but not published since publish_output is False.')
-                get_settings().data = {"artifact": pr_body}
-                return
         except Exception as e:
             get_logger().error(f"Error generating PR description {self.pr_id}: {e}")

@@ -114,7 +114,7 @@ class PRHelpMessage:
                     self.vars['snippets'] = docs_prompt.strip()

                     # run the AI model
-                    response = await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.WEAK)
+                    response = await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.REGULAR)
                     response_yaml = load_yaml(response)
                     response_str = response_yaml.get('response')
                     relevant_sections = response_yaml.get('relevant_sections')
@@ -79,7 +79,7 @@ class PR_LineQuestions:
                                                                             line_end=line_end,
                                                                             side=side)
         if self.patch_with_lines:
-            response = await retry_with_fallback_models(self._get_prediction, model_type=ModelType.WEAK)
+            response = await retry_with_fallback_models(self._get_prediction, model_type=ModelType.TURBO)

         get_logger().info('Preparing answer...')
         if comment_id:
@@ -63,7 +63,7 @@ class PRQuestions:
             if img_path:
                 get_logger().debug(f"Image path identified", artifact=img_path)

-        await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.WEAK)
+        await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.TURBO)

         pr_comment = self._prepare_pr_answer()
         get_logger().debug(f"PR output", artifact=pr_comment)
@@ -148,7 +148,7 @@ class PRReviewer:
             if get_settings().config.publish_output and not get_settings().config.get('is_auto_command', False):
                 self.git_provider.publish_comment("Preparing review...", is_temporary=True)

-            await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.REGULAR)
+            await retry_with_fallback_models(self._prepare_prediction)
             if not self.prediction:
                 self.git_provider.remove_initial_comment()
                 return None
@@ -170,10 +170,6 @@ class PRReviewer:
                 self.git_provider.remove_initial_comment()
                 if get_settings().pr_reviewer.inline_code_comments:
                     self._publish_inline_code_comments()
-            else:
-                get_logger().info("Review output is not published")
-                get_settings().data = {"artifact": pr_review}
-                return
         except Exception as e:
             get_logger().error(f"Failed to review PR: {e}")

@@ -270,9 +266,7 @@ class PRReviewer:
             incremental_review_markdown_text = f"Starting from commit {last_commit_url}"

         markdown_text = convert_to_markdown_v2(data, self.git_provider.is_supported("gfm_markdown"),
-                                               incremental_review_markdown_text,
-                                               git_provider=self.git_provider,
-                                               files=self.git_provider.get_diff_files())
+                                               incremental_review_markdown_text, git_provider=self.git_provider)

         # Add help text if gfm_markdown is supported
         if self.git_provider.is_supported("gfm_markdown") and get_settings().pr_reviewer.enable_help_text:
@@ -73,7 +73,7 @@ class PRUpdateChangelog:
             if get_settings().config.publish_output:
                 self.git_provider.publish_comment("Preparing changelog updates...", is_temporary=True)

-            await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.WEAK)
+            await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.TURBO)

             new_file_content, answer = self._prepare_changelog_update()

@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"

 [project]
 name = "pr-agent"
-version = "0.2.5"
+version = "0.2.4"

 authors = [{ name = "CodiumAI", email = "tal.r@codium.ai" }]

setup.py
@@ -3,3 +3,4 @@
 from setuptools import setup

 setup()
+print("testing ...")
@@ -1,70 +0,0 @@
-import argparse
-import asyncio
-import copy
-import os
-from pathlib import Path
-
-from starlette_context import request_cycle_context, context
-
-from pr_agent.cli import run_command
-from pr_agent.config_loader import get_settings, global_settings
-
-from pr_agent.agent.pr_agent import PRAgent, commands
-from pr_agent.log import get_logger, setup_logger
-from tests.e2e_tests import e2e_utils
-
-log_level = os.environ.get("LOG_LEVEL", "INFO")
-setup_logger(log_level)
-
-
-async def run_async():
-    pr_url = os.getenv('TEST_PR_URL', 'https://github.com/Codium-ai/pr-agent/pull/1385')
-
-    get_settings().set("config.git_provider", "github")
-    get_settings().set("config.publish_output", False)
-    get_settings().set("config.fallback_models", [])
-
-    agent = PRAgent()
-    try:
-        # Run the 'describe' command
-        get_logger().info(f"\nSanity check for the 'describe' command...")
-        original_settings = copy.deepcopy(get_settings())
-        await agent.handle_request(pr_url, ['describe'])
-        pr_header_body = dict(get_settings().data)['artifact']
-        assert pr_header_body.startswith('###') and 'PR Type' in pr_header_body and 'Description' in pr_header_body
-        context['settings'] = copy.deepcopy(original_settings)  # Restore settings state after each test to prevent test interference
-        get_logger().info("PR description generated successfully\n")
-
-        # Run the 'review' command
-        get_logger().info(f"\nSanity check for the 'review' command...")
-        original_settings = copy.deepcopy(get_settings())
-        await agent.handle_request(pr_url, ['review'])
-        pr_review_body = dict(get_settings().data)['artifact']
-        assert pr_review_body.startswith('##') and 'PR Reviewer Guide' in pr_review_body
-        context['settings'] = copy.deepcopy(original_settings)  # Restore settings state after each test to prevent test interference
-        get_logger().info("PR review generated successfully\n")
-
-        # Run the 'improve' command
-        get_logger().info(f"\nSanity check for the 'improve' command...")
-        original_settings = copy.deepcopy(get_settings())
-        await agent.handle_request(pr_url, ['improve'])
-        pr_improve_body = dict(get_settings().data)['artifact']
-        assert pr_improve_body.startswith('##') and 'PR Code Suggestions' in pr_improve_body
-        context['settings'] = copy.deepcopy(original_settings)  # Restore settings state after each test to prevent test interference
-        get_logger().info("PR improvements generated successfully\n")
-
-        get_logger().info(f"\n\n========\nHealth test passed successfully\n========")
-
-    except Exception as e:
-        get_logger().exception(f"\n\n========\nHealth test failed\n========")
-        raise e
-
-
-def run():
-    with request_cycle_context({}):
-        context['settings'] = copy.deepcopy(global_settings)
-        asyncio.run(run_async())
-
-
-if __name__ == '__main__':
-    run()