Mirror of https://github.com/qodo-ai/pr-agent.git (synced 2025-07-04 21:00:40 +08:00)

Compare commits: 2 commits, hl/jira_se ... 1331

| Author | SHA1 | Date |
|--------|------|------|
| | ee4f5c765a | |
| | 23cc968331 | |
2 changes: .github/workflows/build-and-test.yaml (vendored)

@@ -37,3 +37,5 @@ jobs:
      name: Test dev docker
      run: |
        docker run --rm codiumai/pr-agent:test pytest -v tests/unittest
3 changes: .github/workflows/pr-agent-review.yaml (vendored)

@@ -30,3 +30,6 @@ jobs:
      GITHUB_ACTION_CONFIG.AUTO_DESCRIBE: true
      GITHUB_ACTION_CONFIG.AUTO_REVIEW: true
      GITHUB_ACTION_CONFIG.AUTO_IMPROVE: true
17 changes: .github/workflows/pre-commit.yml (vendored)

@@ -1,17 +0,0 @@
# disabled. We might run it manually if needed.
name: pre-commit

on:
  workflow_dispatch:
#  pull_request:
#  push:
#    branches: [main]

jobs:
  pre-commit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v5
      # SEE https://github.com/pre-commit/action
      - uses: pre-commit/action@v3.0.1
@@ -1,46 +0,0 @@
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks

default_language_version:
  python: python3

repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
  rev: v5.0.0
  hooks:
  - id: check-added-large-files
  - id: check-toml
  - id: check-yaml
  - id: end-of-file-fixer
  - id: trailing-whitespace
# - repo: https://github.com/rhysd/actionlint
#   rev: v1.7.3
#   hooks:
#   - id: actionlint
- repo: https://github.com/pycqa/isort
  # rev must match what's in dev-requirements.txt
  rev: 5.13.2
  hooks:
  - id: isort
# - repo: https://github.com/PyCQA/bandit
#   rev: 1.7.10
#   hooks:
#   - id: bandit
#     args: [
#       "-c", "pyproject.toml",
#     ]
# - repo: https://github.com/astral-sh/ruff-pre-commit
#   rev: v0.7.1
#   hooks:
#   - id: ruff
#     args:
#     - --fix
#   - id: ruff-format
# - repo: https://github.com/PyCQA/autoflake
#   rev: v2.3.1
#   hooks:
#   - id: autoflake
#     args:
#     - --in-place
#     - --remove-all-unused-imports
#     - --remove-unused-variables
129 changes: README.md

@@ -13,13 +13,15 @@
Qodo Merge PR-Agent aims to help efficiently review and handle pull requests, by providing AI feedback and suggestions
</div>

[](https://github.com/Codium-ai/pr-agent/blob/main/LICENSE)
[](https://chromewebstore.google.com/detail/pr-agent-chrome-extension/ephlnjeghhogofkifjloamocljapahnl)
[](https://github.com/apps/qodo-merge-pro/)
[](https://github.com/apps/qodo-merge-pro-for-open-source/)
[](https://pr-agent-docs.codium.ai/finetuning_benchmark/)
[](https://discord.com/channels/1057273017547378788/1126104260430528613)
<a href="https://github.com/Codium-ai/pr-agent/commits/main">
<img alt="GitHub" src="https://img.shields.io/github/last-commit/Codium-ai/pr-agent/main?style=for-the-badge" height="20">
</a>
[](https://twitter.com/codiumai)
[](https://www.codium.ai/images/pr_agent/cheat_sheet.pdf)
<a href="https://github.com/Codium-ai/pr-agent/commits/main">
<img alt="GitHub" src="https://img.shields.io/github/last-commit/Codium-ai/pr-agent/main?style=for-the-badge" height="20">
</a>
</div>

### [Documentation](https://pr-agent-docs.codium.ai/)
@@ -41,40 +43,50 @@ Qodo Merge PR-Agent aims to help efficiently review and handle pull requests, by

## News and Updates

### December 2, 2024
### November 3, 2024

Open-source repositories can now freely use Qodo Merge Pro, and enjoy easy one-click installation using a marketplace [app](https://github.com/apps/qodo-merge-pro-for-open-source).
Meaningful improvement to the quality of code suggestions by separating the code suggestion generation from [line number detection](https://github.com/Codium-ai/pr-agent/pull/1338)

<kbd><img src="https://github.com/user-attachments/assets/b0838724-87b9-43b0-ab62-73739a3a855c" width="512"></kbd>

See [here](https://qodo-merge-docs.qodo.ai/installation/pr_agent_pro/) for more details about installing Qodo Merge Pro for private repositories.
### November 18, 2024
### October 27, 2024

A new mode was enabled by default for code suggestions - `--pr_code_suggestions.focus_only_on_problems=true`:
Qodo Merge PR Agent will now automatically document accepted code suggestions in a dedicated wiki page (`.pr_agent_accepted_suggestions`), enabling users to track historical changes, assess the tool's effectiveness, and learn from previously implemented recommendations in the repository.

- This option reduces the number of code suggestions received
- The suggestions will focus more on identifying and fixing code problems, rather than style considerations like best practices, maintainability, or readability.
- The suggestions will be categorized into just two groups: "Possible Issues" and "General".

This dedicated wiki page will also serve as a foundation for future AI model improvements, allowing it to learn from historically implemented suggestions and generate more targeted, contextually relevant recommendations.
Read more about this novel feature [here](https://qodo-merge-docs.qodo.ai/tools/improve/#suggestion-tracking).

Still, if you prefer the previous mode, you can set `--pr_code_suggestions.focus_only_on_problems=false` in the [configuration file](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/).
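For reference, a minimal sketch of the equivalent entry in a configuration file (the `[pr_code_suggestions]` section is the one documented for the `improve` tool):

```toml
[pr_code_suggestions]
focus_only_on_problems = false  # restore the previous, broader suggestion mode
```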
**Example results:**

Original mode

<kbd><img src="https://qodo.ai/images/pr_agent/code_suggestions_original_mode.png" width="512"></kbd>

Focused mode

<kbd><img src="https://qodo.ai/images/pr_agent/code_suggestions_focused_mode.png" width="512"></kbd>

<kbd><img href="https://qodo.ai/images/pr_agent/pr_agent_accepted_suggestions1.png" src="https://qodo.ai/images/pr_agent/pr_agent_accepted_suggestions1.png" width="768"></kbd>
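The suggestion tracking shown above can be turned off via the `wiki_page_accepted_suggestions` flag documented in the `improve` tool's configuration options; a minimal sketch:

```toml
[pr_code_suggestions]
wiki_page_accepted_suggestions = false  # default is true
```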
### November 4, 2024

Qodo Merge PR Agent will now leverage context from Jira or GitHub tickets to enhance the PR Feedback. Read more about this feature
[here](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/)

### October 21, 2024
**Disable publishing labels by default:**

The default setting for `pr_description.publish_labels` has been updated to `false`. This means that labels generated by the `/describe` tool will no longer be published, unless this configuration is explicitly set to `true`.

We constantly strive to balance informative AI analysis with reducing unnecessary noise. User feedback indicated that in many cases, the original PR title alone provides sufficient information, making the generated labels (`enhancement`, `documentation`, `bug fix`, ...) redundant.
The [`review_effort`](https://qodo-merge-docs.qodo.ai/tools/review/#configuration-options) label, generated by the `review` tool, will still be published by default, as it provides valuable information enabling reviewers to prioritize small PRs first.

However, every user has different preferences. To still publish the `describe` labels, set `pr_description.publish_labels=true` in the [configuration file](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/).
For more tailored and relevant labeling, we recommend using the [`custom_labels 💎`](https://qodo-merge-docs.qodo.ai/tools/custom_labels/) tool, which allows generating labels specific to your project's needs.
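A minimal sketch of re-enabling the labels in a configuration file, using the key named above:

```toml
[pr_description]
publish_labels = true  # publish labels generated by the /describe tool
```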
### October 14, 2024
Improved support for GitHub enterprise server with [GitHub Actions](https://qodo-merge-docs.qodo.ai/installation/github/#action-for-github-enterprise-server)

### October 10, 2024
New ability for the `review` tool - **ticket compliance feedback**. If the PR contains a ticket number, PR-Agent will check if the PR code actually [complies](https://github.com/Codium-ai/pr-agent/pull/1279#issuecomment-2404042130) with the ticket requirements.

<kbd><img src="https://github.com/user-attachments/assets/4a2a728b-5f47-40fa-80cc-16efd296938c" width="768"></kbd>
## Overview

@@ -82,41 +94,39 @@ Qodo Merge PR Agent will now leverage context from Jira or GitHub tickets to enh

Supported commands per platform:

| | | GitHub | GitLab | Bitbucket | Azure DevOps |
| | | GitHub | Gitlab | Bitbucket | Azure DevOps |
|-------|---------------------------------------------------------------------------------------------------------|:--------------------:|:--------------------:|:--------------------:|:------------:|
| TOOLS | [Review](https://qodo-merge-docs.qodo.ai/tools/review/) | ✅ | ✅ | ✅ | ✅ |
| | [Describe](https://qodo-merge-docs.qodo.ai/tools/describe/) | ✅ | ✅ | ✅ | ✅ |
| | [Improve](https://qodo-merge-docs.qodo.ai/tools/improve/) | ✅ | ✅ | ✅ | ✅ |
| | [Ask](https://qodo-merge-docs.qodo.ai/tools/ask/) | ✅ | ✅ | ✅ | ✅ |
| TOOLS | Review | ✅ | ✅ | ✅ | ✅ |
| | ⮑ Incremental | ✅ | | | |
| | Describe | ✅ | ✅ | ✅ | ✅ |
| | ⮑ [Inline File Summary](https://pr-agent-docs.codium.ai/tools/describe#inline-file-summary) 💎 | ✅ | | | |
| | Improve | ✅ | ✅ | ✅ | ✅ |
| | ⮑ Extended | ✅ | ✅ | ✅ | ✅ |
| | Ask | ✅ | ✅ | ✅ | ✅ |
| | ⮑ [Ask on code lines](https://pr-agent-docs.codium.ai/tools/ask#ask-lines) | ✅ | ✅ | | |
| | [Update CHANGELOG](https://qodo-merge-docs.qodo.ai/tools/update_changelog/) | ✅ | ✅ | ✅ | ✅ |
| | [Ticket Context](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/) 💎 | ✅ | ✅ | ✅ | |
| | [Utilizing Best Practices](https://qodo-merge-docs.qodo.ai/tools/improve/#best-practices) 💎 | ✅ | ✅ | ✅ | |
| | [PR Chat](https://qodo-merge-docs.qodo.ai/chrome-extension/features/#pr-chat) 💎 | ✅ | | | |
| | [Suggestion Tracking](https://qodo-merge-docs.qodo.ai/tools/improve/#suggestion-tracking) 💎 | ✅ | ✅ | | |
| | [CI Feedback](https://pr-agent-docs.codium.ai/tools/ci_feedback/) 💎 | ✅ | | | |
| | [PR Documentation](https://pr-agent-docs.codium.ai/tools/documentation/) 💎 | ✅ | ✅ | | |
| | [Custom Labels](https://pr-agent-docs.codium.ai/tools/custom_labels/) 💎 | ✅ | ✅ | | |
| | [Analyze](https://pr-agent-docs.codium.ai/tools/analyze/) 💎 | ✅ | ✅ | | |
| | [Similar Code](https://pr-agent-docs.codium.ai/tools/similar_code/) 💎 | ✅ | | | |
| | [Custom Prompt](https://pr-agent-docs.codium.ai/tools/custom_prompt/) 💎 | ✅ | ✅ | ✅ | |
| | [Test](https://pr-agent-docs.codium.ai/tools/test/) 💎 | ✅ | ✅ | | |
| | Reflect and Review | ✅ | ✅ | ✅ | ✅ |
| | Update CHANGELOG.md | ✅ | ✅ | ✅ | ✅ |
| | Find Similar Issue | ✅ | | | |
| | [Add PR Documentation](https://pr-agent-docs.codium.ai/tools/documentation/) 💎 | ✅ | ✅ | | |
| | [Custom Labels](https://pr-agent-docs.codium.ai/tools/custom_labels/) 💎 | ✅ | ✅ | | |
| | [Analyze](https://pr-agent-docs.codium.ai/tools/analyze/) 💎 | ✅ | ✅ | | |
| | [CI Feedback](https://pr-agent-docs.codium.ai/tools/ci_feedback/) 💎 | ✅ | | | |
| | [Similar Code](https://pr-agent-docs.codium.ai/tools/similar_code/) 💎 | ✅ | | | |
| | | | | | |
| USAGE | [CLI](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#local-repo-cli) | ✅ | ✅ | ✅ | ✅ |
| | [App / webhook](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#github-app) | ✅ | ✅ | ✅ | ✅ |
| | [Tagging bot](https://github.com/Codium-ai/pr-agent#try-it-now) | ✅ | | | |
| | [Actions](https://qodo-merge-docs.qodo.ai/installation/github/#run-as-a-github-action) | ✅ | ✅ | ✅ | ✅ |
| USAGE | CLI | ✅ | ✅ | ✅ | ✅ |
| | App / webhook | ✅ | ✅ | ✅ | ✅ |
| | Tagging bot | ✅ | | | |
| | Actions | ✅ | ✅ | ✅ | ✅ |
| | | | | | |
| CORE | [PR compression](https://qodo-merge-docs.qodo.ai/core-abilities/compression_strategy/) | ✅ | ✅ | ✅ | ✅ |
| CORE | PR compression | ✅ | ✅ | ✅ | ✅ |
| | Repo language prioritization | ✅ | ✅ | ✅ | ✅ |
| | Adaptive and token-aware file patch fitting | ✅ | ✅ | ✅ | ✅ |
| | [Multiple models support](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/) | ✅ | ✅ | ✅ | ✅ |
| | [Local and global metadata](https://qodo-merge-docs.qodo.ai/core-abilities/metadata/) | ✅ | ✅ | ✅ | ✅ |
| | [Dynamic context](https://qodo-merge-docs.qodo.ai/core-abilities/dynamic_context/) | ✅ | ✅ | ✅ | ✅ |
| | [Self reflection](https://qodo-merge-docs.qodo.ai/core-abilities/self_reflection/) | ✅ | ✅ | ✅ | ✅ |
| | [Static code analysis](https://qodo-merge-docs.qodo.ai/core-abilities/static_code_analysis/) 💎 | ✅ | ✅ | ✅ | |
| | Multiple models support | ✅ | ✅ | ✅ | ✅ |
| | [Static code analysis](https://pr-agent-docs.codium.ai/core-abilities/#static-code-analysis) 💎 | ✅ | ✅ | ✅ | |
| | [Global and wiki configurations](https://pr-agent-docs.codium.ai/usage-guide/configuration_options/) 💎 | ✅ | ✅ | ✅ | |
| | [PR interactive actions](https://www.codium.ai/images/pr_agent/pr-actions.mp4) 💎 | ✅ | ✅ | | |
| | [Impact Evaluation](https://qodo-merge-docs.qodo.ai/core-abilities/impact_evaluation/) 💎 | ✅ | ✅ | | |

- 💎 means this feature is available only in [PR-Agent Pro](https://www.codium.ai/pricing/)

[//]: # (- Support for additional git providers is described in [here](./docs/Full_environments.md))
@@ -177,9 +187,14 @@ ___
</kbd>
</p>
</div>
<hr>

<h4><a href="https://github.com/Codium-ai/pr-agent/pull/530">/generate_labels</a></h4>
<div align="center">
<p float="center">
<kbd><img src="https://www.codium.ai/images/pr_agent/geneare_custom_labels_main_short.png" width="300"></kbd>
</p>
</div>

[//]: # (<h4><a href="https://github.com/Codium-ai/pr-agent/pull/78#issuecomment-1639739496">/reflect_and_review:</a></h4>)
@@ -2,3 +2,4 @@ We take your code's security and privacy seriously:

- The Chrome extension will not send your code to any external servers.
- For private repositories, we will first validate the user's identity and permissions. After authentication, we generate responses using the existing Qodo Merge Pro integration.
@@ -4,7 +4,7 @@ With a single-click installation you will gain access to a context-aware chat on

The extension is powered by top code models like Claude 3.5 Sonnet and GPT4. All the extension's features are free to use on public repositories.

For private repositories, you will need to install [Qodo Merge Pro](https://github.com/apps/qodo-merge-pro) in addition to the extension (Quick GitHub app setup with a 14-day free trial. No credit card needed).
For private repositories, you will need to install [Qodo Merge Pro](https://github.com/apps/codiumai-pr-agent-pro) in addition to the extension (Quick GitHub app setup with a 14-day free trial. No credit card needed).
For a demonstration of how to install Qodo Merge Pro and use it with the Chrome extension, please refer to the tutorial video at the provided [link](https://codium.ai/images/pr_agent/private_repos.mp4).

<img src="https://codium.ai/images/pr_agent/PR-AgentChat.gif" width="768">
@@ -1,169 +0,0 @@
# Fetching Ticket Context for PRs
`Supported Git Platforms: GitHub, GitLab, Bitbucket`

## Overview
Qodo Merge PR Agent streamlines code review workflows by seamlessly connecting with multiple ticket management systems.
This integration enriches the review process by automatically surfacing relevant ticket information and context alongside code changes.

## Ticket systems supported
- GitHub
- Jira (💎)

Ticket data fetched:

1. Ticket Title
2. Ticket Description
3. Custom Fields (Acceptance criteria)
4. Subtasks (linked tasks)
5. Labels
6. Attached Images/Screenshots

## Affected Tools

Ticket Recognition Requirements:

- The PR description should contain a link to the ticket, or the branch name should start with the ticket ID/number.
- For Jira tickets, you should follow the instructions in [Jira Integration](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/#jira-integration) in order to authenticate with Jira.
### Describe tool
Qodo Merge PR Agent will recognize the ticket and use the ticket content (title, description, labels) to provide additional context for the code changes.
By understanding the reasoning and intent behind modifications, the LLM can offer more insightful and relevant code analysis.

### Review tool
Similarly to the `describe` tool, the `review` tool will use the ticket content to provide additional context for the code changes.

In addition, this feature will evaluate how well a Pull Request (PR) adheres to its original purpose/intent as defined by the associated ticket or issue mentioned in the PR description.
Each ticket will be assigned a label (compliance/alignment level) indicating the degree to which the PR fulfills its original purpose: fully compliant, partially compliant, or not compliant.

By default, the tool will automatically validate if the PR complies with the referenced ticket.
If you want to disable this feedback, add the following line to your configuration file:

```toml
[pr_reviewer]
require_ticket_analysis_review=false
```
## Providers

### GitHub Issues Integration

Qodo Merge PR Agent will automatically recognize GitHub issues mentioned in the PR description and fetch the issue content.
Examples of valid GitHub issue references:

- `https://github.com/<ORG_NAME>/<REPO_NAME>/issues/<ISSUE_NUMBER>`
- `#<ISSUE_NUMBER>`
- `<ORG_NAME>/<REPO_NAME>#<ISSUE_NUMBER>`

Since Qodo Merge PR Agent is integrated with GitHub, it doesn't require any additional configuration to fetch GitHub issues.
### Jira Integration 💎

We support both Jira Cloud and Jira Server/Data Center.
To integrate with Jira, you can link your PR to a ticket using either of these methods:

**Method 1: Description Reference:**

Include a ticket reference in your PR description using either the complete URL format `https://<JIRA_ORG>.atlassian.net/browse/ISSUE-123` or the shortened ticket ID `ISSUE-123`.

**Method 2: Branch Name Detection:**

Name your branch with the ticket ID as a prefix (e.g., `ISSUE-123-feature-description` or `ISSUE-123/feature-description`).

!!! note "Jira Base URL"
    For shortened ticket IDs or branch detection (method 2), you must configure the Jira base URL in your configuration file under the [jira] section:

    ```toml
    [jira]
    jira_base_url = "https://<JIRA_ORG>.atlassian.net"
    ```
#### Jira Cloud 💎
There are two ways to authenticate with Jira Cloud:

**1) Jira App Authentication**

The recommended way to authenticate with Jira Cloud is to install the Qodo Merge app in your Jira Cloud instance. This will allow Qodo Merge to access Jira data on your behalf.

Installation steps:

1. Click [here](https://auth.atlassian.com/authorize?audience=api.atlassian.com&client_id=8krKmA4gMD8mM8z24aRCgPCSepZNP1xf&scope=read%3Ajira-work%20offline_access&redirect_uri=https%3A%2F%2Fregister.jira.pr-agent.codium.ai&state=qodomerge&response_type=code&prompt=consent) to install the Qodo Merge app in your Jira Cloud instance, and click the `accept` button.

2. After installing the app, you will be redirected to the Qodo Merge registration page, and you will see a success message.

3. Now you can use the Jira integration in Qodo Merge PR Agent.

**2) Email/Token Authentication**

You can create an API token from your Atlassian account:

1. Log in to https://id.atlassian.com/manage-profile/security/api-tokens.

2. Click Create API token.

3. From the dialog that appears, enter a name for your new token and click Create.

4. Click Copy to clipboard.

5. In your [configuration file](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/) add the following lines:

```toml
[jira]
jira_api_token = "YOUR_API_TOKEN"
jira_api_email = "YOUR_EMAIL"
```
#### Jira Data Center/Server 💎

##### Local App Authentication (For Qodo Merge On-Premise Customers)

##### Step 1: Set up an application link in Jira Data Center/Server
* Go to Jira Administration > Applications > Application Links > Click on `Create link`
* Choose `External application` and set the direction to `Incoming` and then click `Continue`

* In the following screen, enter the following details:
    * Name: `Qodo Merge`
    * Redirect URL: Enter your Qodo Merge URL followed by `/register_ticket_provider`, i.e. `https://{QODO_MERGE_ENDPOINT}/register_ticket_provider`
    * Permission: Select `Read`
    * Click `Save`

* Copy the `Client ID` and `Client secret` and set them in your `.secrets` file:

```toml
[jira]
jira_app_secret = "..."
jira_client_id = "..."
```

##### Step 2: Authenticate with Jira Data Center/Server
* Open this URL in your browser: `https://{QODO_MERGE_ENDPOINT}/jira_auth`
* Click on the link

* You will be redirected to Jira Data Center/Server, click `Allow`
* You will be redirected back to Qodo Merge PR Agent and you will see a success message.

##### Personal Access Token (PAT) Authentication
We also support the Personal Access Token (PAT) authentication method.

1. Create a [Personal Access Token (PAT)](https://confluence.atlassian.com/enterprise/using-personal-access-tokens-1026032365.html) in your Jira account
2. In your Configuration file/Environment variables/Secrets file, add the following lines:

```toml
[jira]
jira_base_url = "YOUR_JIRA_BASE_URL" # e.g. https://jira.example.com
jira_api_token = "YOUR_API_TOKEN"
```
@@ -1,7 +1,6 @@
# Core Abilities
Qodo Merge utilizes a variety of core abilities to provide a comprehensive and efficient code review experience. These abilities include:

- [Fetching ticket context](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/)
- [Local and global metadata](https://qodo-merge-docs.qodo.ai/core-abilities/metadata/)
- [Dynamic context](https://qodo-merge-docs.qodo.ai/core-abilities/dynamic_context/)
- [Self-reflection](https://qodo-merge-docs.qodo.ai/core-abilities/self_reflection/)
@@ -61,7 +61,7 @@ Or be triggered interactively by using the `analyze` tool.

### Find Similar Code

The [`similar code`](https://qodo-merge-docs.qodo.ai/tools/similar_code/) tool retrieves the most similar code components from inside the organization's codebase or from open-source code, including details about the license associated with each repository.
The [`similar code`](https://qodo-merge-docs.qodo.ai/tools/similar_code/) tool retrieves the most similar code components from inside the organization's codebase, or from open-source code.

For example:
@@ -25,43 +25,36 @@ To search the documentation site using natural language:

Qodo Merge offers extensive pull request functionalities across various git providers.

| | | GitHub | GitLab | Bitbucket | Azure DevOps |
|-------|---------------------------------------------------------------------------------------------------------|:--------------------:|:--------------------:|:--------------------:|:------------:|
| TOOLS | [Review](https://qodo-merge-docs.qodo.ai/tools/review/) | ✅ | ✅ | ✅ | ✅ |
| | [Describe](https://qodo-merge-docs.qodo.ai/tools/describe/) | ✅ | ✅ | ✅ | ✅ |
| | [Improve](https://qodo-merge-docs.qodo.ai/tools/improve/) | ✅ | ✅ | ✅ | ✅ |
| | [Ask](https://qodo-merge-docs.qodo.ai/tools/ask/) | ✅ | ✅ | ✅ | ✅ |
| | ⮑ [Ask on code lines](https://pr-agent-docs.codium.ai/tools/ask#ask-lines) | ✅ | ✅ | | |
| | [Update CHANGELOG](https://qodo-merge-docs.qodo.ai/tools/update_changelog/) | ✅ | ✅ | ✅ | ✅ |
| | [Ticket Context](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/) 💎 | ✅ | ✅ | ✅ | |
| | [Utilizing Best Practices](https://qodo-merge-docs.qodo.ai/tools/improve/#best-practices) 💎 | ✅ | ✅ | ✅ | |
| | [PR Chat](https://qodo-merge-docs.qodo.ai/chrome-extension/features/#pr-chat) 💎 | ✅ | | | |
| | [Suggestion Tracking](https://qodo-merge-docs.qodo.ai/tools/improve/#suggestion-tracking) 💎 | ✅ | ✅ | | |
| | [CI Feedback](https://pr-agent-docs.codium.ai/tools/ci_feedback/) 💎 | ✅ | | | |
| | [PR Documentation](https://pr-agent-docs.codium.ai/tools/documentation/) 💎 | ✅ | ✅ | | |
| | [Custom Labels](https://pr-agent-docs.codium.ai/tools/custom_labels/) 💎 | ✅ | ✅ | | |
| | [Analyze](https://pr-agent-docs.codium.ai/tools/analyze/) 💎 | ✅ | ✅ | | |
| | [Similar Code](https://pr-agent-docs.codium.ai/tools/similar_code/) 💎 | ✅ | | | |
| | [Custom Prompt](https://pr-agent-docs.codium.ai/tools/custom_prompt/) 💎 | ✅ | ✅ | ✅ | |
| | [Test](https://pr-agent-docs.codium.ai/tools/test/) 💎 | ✅ | ✅ | | |
| | | | | | |
| USAGE | [CLI](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#local-repo-cli) | ✅ | ✅ | ✅ | ✅ |
| | [App / webhook](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#github-app) | ✅ | ✅ | ✅ | ✅ |
| | [Tagging bot](https://github.com/Codium-ai/pr-agent#try-it-now) | ✅ | | | |
| | [Actions](https://qodo-merge-docs.qodo.ai/installation/github/#run-as-a-github-action) | ✅ | ✅ | ✅ | ✅ |
| | | | | | |
| CORE | [PR compression](https://qodo-merge-docs.qodo.ai/core-abilities/compression_strategy/) | ✅ | ✅ | ✅ | ✅ |
| | Adaptive and token-aware file patch fitting | ✅ | ✅ | ✅ | ✅ |
| | [Multiple models support](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/) | ✅ | ✅ | ✅ | ✅ |
| | [Local and global metadata](https://qodo-merge-docs.qodo.ai/core-abilities/metadata/) | ✅ | ✅ | ✅ | ✅ |
| | [Dynamic context](https://qodo-merge-docs.qodo.ai/core-abilities/dynamic_context/) | ✅ | ✅ | ✅ | ✅ |
| | [Self reflection](https://qodo-merge-docs.qodo.ai/core-abilities/self_reflection/) | ✅ | ✅ | ✅ | ✅ |
| | [Static code analysis](https://qodo-merge-docs.qodo.ai/core-abilities/static_code_analysis/) 💎 | ✅ | ✅ | ✅ | |
| | [Global and wiki configurations](https://pr-agent-docs.codium.ai/usage-guide/configuration_options/) 💎 | ✅ | ✅ | ✅ | |
| | [PR interactive actions](https://www.codium.ai/images/pr_agent/pr-actions.mp4) 💎 | ✅ | ✅ | | |
| | [Impact Evaluation](https://qodo-merge-docs.qodo.ai/core-abilities/impact_evaluation/) 💎 | ✅ | ✅ | | |
| | | GitHub | Gitlab | Bitbucket | Azure DevOps |
|-------|-----------------------------------------------------------------------------------------------------------------------|:------:|:------:|:---------:|:------------:|
| TOOLS | Review | ✅ | ✅ | ✅ | ✅ |
| | ⮑ Incremental | ✅ | | | |
| | Ask | ✅ | ✅ | ✅ | ✅ |
| | Describe | ✅ | ✅ | ✅ | ✅ |
| | ⮑ [Inline file summary](https://qodo-merge-docs.qodo.ai/tools/describe/#inline-file-summary){:target="_blank"} 💎 | ✅ | ✅ | | |
| | Improve | ✅ | ✅ | ✅ | ✅ |
| | ⮑ Extended | ✅ | ✅ | ✅ | ✅ |
| | [Custom Prompt](./tools/custom_prompt.md){:target="_blank"} 💎 | ✅ | ✅ | ✅ | |
| | Reflect and Review | ✅ | ✅ | ✅ | |
| | Update CHANGELOG.md | ✅ | ✅ | ✅ | |
| | Find Similar Issue | ✅ | | | |
| | [Add PR Documentation](./tools/documentation.md){:target="_blank"} 💎 | ✅ | ✅ | | |
| | [Generate Custom Labels](./tools/describe.md#handle-custom-labels-from-the-repos-labels-page-💎){:target="_blank"} 💎 | ✅ | ✅ | | |
| | [Analyze PR Components](./tools/analyze.md){:target="_blank"} 💎 | ✅ | ✅ | | |
| | | | | | |
| USAGE | CLI | ✅ | ✅ | ✅ | ✅ |
| | App / webhook | ✅ | ✅ | ✅ | ✅ |
| | Actions | ✅ | | | |
| | | | | | |
| CORE | PR compression | ✅ | ✅ | ✅ | ✅ |
| | Repo language prioritization | ✅ | ✅ | ✅ | ✅ |
| | Adaptive and token-aware file patch fitting | ✅ | ✅ | ✅ | ✅ |
| | Multiple models support | ✅ | ✅ | ✅ | ✅ |
| | Incremental PR review | ✅ | | | |
| | [Static code analysis](./tools/analyze.md/){:target="_blank"} 💎 | ✅ | ✅ | ✅ | |
| | [Multiple configuration options](./usage-guide/configuration_options.md){:target="_blank"} 💎 | ✅ | ✅ | ✅ | |

💎 marks a feature available only in [Qodo Merge Pro](https://www.qodo.ai/pricing/){:target="_blank"}
💎 marks a feature available only in [Qodo Merge Pro](https://www.codium.ai/pricing/){:target="_blank"}

## Example Results
@@ -51,12 +51,10 @@ stages:
```
This script will run Qodo Merge on every new merge request, with the `improve`, `review`, and `describe` commands.
Note that you need to export the `azure_devops__pat` and `OPENAI_KEY` variables in the Azure DevOps pipeline settings (Pipelines -> Library -> + Variable group):

Make sure to give pipeline permissions to the `pr_agent` variable group.

> Note that Azure Pipelines lacks support for triggering workflows from PR comments. If you find a viable solution, please contribute it to our [issue tracker](https://github.com/Codium-ai/pr-agent/issues)

## Azure DevOps from CLI
@@ -38,7 +38,6 @@ You can also modify the `script` section to run different Qodo Merge commands, o

Note that if your base branches are not protected, don't set the variables as `protected`, since the pipeline will not have access to them.

> **Note**: The `$CI_SERVER_FQDN` variable is available starting from GitLab version 16.10. If you're using an earlier version, this variable will not be available. However, you can combine `$CI_SERVER_HOST` and `$CI_SERVER_PORT` to achieve the same result. Please ensure you're using a compatible version or adjust your configuration.

## Run a GitLab webhook server
@@ -1,44 +1,31 @@
Qodo Merge Pro is a versatile application compatible with GitHub, GitLab, and BitBucket, hosted by QodoAI.

## Getting Started with Qodo Merge Pro

Qodo Merge Pro is a versatile application compatible with GitHub, GitLab, and BitBucket, hosted by CodiumAI.
See [here](https://qodo-merge-docs.qodo.ai/overview/pr_agent_pro/) for more details about the benefits of using Qodo Merge Pro.

A complimentary two-week trial is provided to all new users. Following the trial period, user licenses (seats) are required for continued access.
To purchase user licenses, please visit our [pricing page](https://www.qodo.ai/pricing/).
Once subscribed, users can seamlessly deploy the application across any of their code repositories.

## Install Qodo Merge Pro for GitHub

### GitHub Cloud

Qodo Merge Pro for GitHub cloud is available for installation through the [GitHub Marketplace](https://github.com/apps/qodo-merge-pro).
Interested parties can subscribe to Qodo Merge Pro through the following [link](https://www.codium.ai/pricing/).
After subscribing, you are granted the ability to easily install the application across any of your repositories.

### GitHub Enterprise Server
Each user who wants to use Qodo Merge Pro needs to buy a seat.
Initially, CodiumAI offers a two-week trial period at no cost, after which continued access requires each user to secure a personal seat.
Once a user acquires a seat, they gain the flexibility to use Qodo Merge Pro across any repository where it was enabled.

Users without a purchased seat who interact with a repository featuring Qodo Merge Pro are entitled to receive up to five complimentary feedbacks.
Beyond this limit, Qodo Merge Pro will cease to respond to their inquiries unless a seat is purchased.

## Install Qodo Merge Pro for GitHub Enterprise Server

To use the Qodo Merge Pro application on your private GitHub Enterprise Server, you will need to contact us for starting an [Enterprise](https://www.codium.ai/pricing/) trial.

### GitHub Open Source Projects

For open-source projects, Qodo Merge Pro is available for free usage. To install Qodo Merge Pro for your open-source repositories, use the following marketplace [link](https://github.com/apps/qodo-merge-pro-for-open-source).

## Install Qodo Merge Pro for Bitbucket

### Bitbucket Cloud

Qodo Merge Pro for Bitbucket Cloud is available for installation through the following [link](https://bitbucket.org/site/addons/authorize?addon_key=d6df813252c37258)

### Bitbucket Server

To use the Qodo Merge Pro application on your private Bitbucket Server, you will need to contact us for starting an [Enterprise](https://www.codium.ai/pricing/) trial.

## Install Qodo Merge Pro for GitLab (Teams & Enterprise)

Since the GitLab platform does not support apps, installing Qodo Merge Pro for GitLab is a bit more involved, and requires the following steps:

#### Step 1
### Step 1

Acquire a personal, project or group level access token. Enable the “api” scope in order to allow Qodo Merge to read pull requests, comment and respond to requests.
@@ -48,14 +35,14 @@ Acquire a personal, project or group level access token. Enable the “api” sc

Store the token in a safe place; you won’t be able to access it again after it is generated.

#### Step 2
### Step 2

Generate a shared secret and link it to the access token. Browse to [https://register.gitlab.pr-agent.codium.ai](https://register.gitlab.pr-agent.codium.ai).
Fill in your generated GitLab token and your company or personal name in the appropriate fields and click "Submit".

You should see "Success!" displayed above the Submit button, and a shared secret will be generated. Store it in a safe place; you won’t be able to access it again after it is generated.

#### Step 3
### Step 3
Install a webhook for your repository or groups, by clicking “webhooks” on the settings menu. Click the “Add new webhook” button.

@@ -66,7 +53,7 @@ Install a webhook for your repository or groups, by clicking “webhooks” on t

In the webhook definition form, fill in the following fields:
URL: https://pro.gitlab.pr-agent.codium.ai/webhook

Secret token: Your QodoAI key
Secret token: Your CodiumAI key
Trigger: Check the ‘comments’ and ‘merge request events’ boxes.
Enable SSL verification: Check the box.

@@ -74,7 +61,7 @@ Enable SSL verification: Check the box.
</figure>

#### Step 4
### Step 4

You’re all set!
@@ -1,6 +1,6 @@
### Overview

[Qodo Merge Pro](https://www.codium.ai/pricing/) is a hosted version of open-source [Qodo Merge (PR-Agent)](https://github.com/Codium-ai/pr-agent). A complimentary two-week trial is offered, followed by a monthly subscription fee.
[Qodo Merge Pro](https://www.codium.ai/pricing/) is a hosted version of Qodo Merge, provided by Qodo. A complimentary two-week trial is offered, followed by a monthly subscription fee.
Qodo Merge Pro is designed for companies and teams that require additional features and capabilities. It provides the following benefits:

1. **Fully managed** - We take care of everything for you - hosting, models, regular updates, and more. Installation is as simple as signing up and adding the Qodo Merge app to your GitHub/GitLab/BitBucket repo.
@@ -95,112 +95,6 @@ This feature is controlled by a boolean configuration parameter: `pr_code_sugges

Instead, we leverage a dedicated private page, within your repository wiki, to track suggestions. This approach offers convenient secure suggestion tracking while avoiding pull requests or any noise to the main repository.

## `Extra instructions` and `best practices`

The `improve` tool can be further customized by providing additional instructions and best practices to the AI model.

### Extra instructions

>`Platforms supported: GitHub, GitLab, Bitbucket, Azure DevOps`

You can use the `extra_instructions` configuration option to give the AI model additional instructions for the `improve` tool.
Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter.

Examples for possible instructions:
```toml
[pr_code_suggestions]
extra_instructions="""\
(1) Answer in Japanese
(2) Don't suggest to add try-except block
(3) Ignore changes in toml files
...
"""
```
Use triple quotes to write multi-line instructions. Use bullet points or numbers to make the instructions more readable.
### Best practices 💎

>`Platforms supported: GitHub, GitLab, Bitbucket`

Another option to give additional guidance to the AI model is by creating a dedicated [**wiki page**](https://github.com/Codium-ai/pr-agent/wiki) called `best_practices.md`.
This page can contain a list of best practices, coding standards, and guidelines that are specific to your repo/organization.

The AI model will use this wiki page as a reference, and in case the PR code violates any of the guidelines, it will create additional suggestions, with a dedicated label: `Organization best practice`.

Example of a Python `best_practices.md` content:
```markdown
## Project best practices
- Make sure that I/O operations are encapsulated in a try-except block
- Use the `logging` module for logging instead of `print` statements
- Use `is` and `is not` to compare with `None`
- Use `if __name__ == '__main__':` to run the code only when the script is executed
- Use `with` statement to open files
...
```

Tips for writing an effective `best_practices.md` file:

- Write clearly and concisely
- Include brief code examples when helpful
- Focus on project-specific guidelines that will result in relevant suggestions you actually want to get
- Keep the file relatively short, under 800 lines, since:
  - AI models may not effectively process very long documents
  - Long files tend to contain generic guidelines already known to AI
#### Local and global best practices
By default, Qodo Merge will look for a local `best_practices.md` wiki file in the root of the relevant local repo.

If you also want to enable a global `best_practices.md` wiki file, first set in the global configuration file:

```toml
[best_practices]
enable_global_best_practices = true
```

Then, create a `best_practices.md` wiki file in the root of the [global](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/#global-configuration-file) configuration repository, `pr-agent-settings`.
#### Best practices for multiple languages
For a git organization working with multiple programming languages, you can maintain a centralized global `best_practices.md` file containing language-specific guidelines.
When reviewing pull requests, Qodo Merge automatically identifies the programming language and applies the relevant best practices from this file.

To do this, structure your `best_practices.md` file using the following format:

```
# [Python]
...
# [Java]
...
# [JavaScript]
...
```
#### Dedicated label for best practices suggestions
Best practice suggestions are labeled as `Organization best practice` by default.
To customize this label, modify it in your configuration file:

```toml
[best_practices]
organization_name = "..."
```

And the label will be: `{organization_name} best practice`.

#### Example results
### How to combine `extra instructions` and `best practices`

The `extra instructions` configuration is more related to the `improve` tool prompt. It can be used, for example, to avoid specific suggestions ("Don't suggest to add try-except block", "Ignore changes in toml files", ...) or to emphasize specific aspects or formats ("Answer in Japanese", "Give only short suggestions", ...)

In contrast, the `best_practices.md` file is a general guideline for the way code should be written in the repo.

Using a combination of both can help the AI model to provide relevant and tailored suggestions.
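To make the combination concrete, here is a hedged sketch of a configuration that uses both mechanisms together (section and key names as documented above; the instruction text is only an illustration):

```toml
[pr_code_suggestions]
extra_instructions = "Give only short suggestions"  # steers the improve tool's prompt

[best_practices]
enable_global_best_practices = true  # also consult the global best_practices.md wiki file
```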
## Usage Tips

### Implementing the proposed code suggestions

@@ -297,6 +191,73 @@ This approach has two main benefits:

Note: Chunking is primarily relevant for large PRs. For most PRs (up to 500 lines of code), Qodo Merge will be able to process the entire code in a single call.
### 'Extra instructions' and 'best practices'

#### Extra instructions

>`Platforms supported: GitHub, GitLab, Bitbucket`

You can use the `extra_instructions` configuration option to give the AI model additional instructions for the `improve` tool.
Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Specify relevant aspects that you want the model to focus on.

Examples for possible instructions:
```toml
[pr_code_suggestions]
extra_instructions="""\
(1) Answer in Japanese
(2) Don't suggest to add try-except block
(3) Ignore changes in toml files
...
"""
```
Use triple quotes to write multi-line instructions. Use bullet points or numbers to make the instructions more readable.
#### Best practices 💎

>`Platforms supported: GitHub, GitLab`

Another option to give additional guidance to the AI model is by creating a dedicated [**wiki page**](https://github.com/Codium-ai/pr-agent/wiki) called `best_practices.md`.
This page can contain a list of best practices, coding standards, and guidelines that are specific to your repo/organization.

The AI model will use this wiki page as a reference, and in case the PR code violates any of the guidelines, it will suggest improvements accordingly, with a dedicated label: `Organization best practice`.

An example of `best_practices.md` content can be found [here](https://github.com/Codium-ai/pr-agent/blob/main/docs/docs/usage-guide/EXAMPLE_BEST_PRACTICE.md) (adapted from Google's [pyguide](https://google.github.io/styleguide/pyguide.html)).
This file is only an example. Since it is used as a prompt for an AI model, we want to emphasize the following:

- It should be written in a clear and concise manner
- If needed, it should give short relevant code snippets as examples
- It is recommended to limit the text to 800 lines or fewer. Here’s why:

1) Extremely long best practices documents may not be fully processed by the AI model.

2) A lengthy file probably represents a more "**generic**" set of guidelines, which the AI model is already familiar with. The objective is to focus on a more targeted set of guidelines tailored to the specific needs of this project.
##### Local and global best practices
By default, Qodo Merge will look for a local `best_practices.md` wiki file in the root of the relevant local repo.

If you also want to enable a global `best_practices.md` wiki file, first set in the global configuration file:

```toml
[best_practices]
enable_global_best_practices = true
```

Then, create a `best_practices.md` wiki file in the root of the [global](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/#global-configuration-file) configuration repository, `pr-agent-settings`.

##### Example results

#### How to combine `extra instructions` and `best practices`

The `extra instructions` configuration is more related to the `improve` tool prompt. It can be used, for example, to avoid specific suggestions ("Don't suggest to add try-except block", "Ignore changes in toml files", ...) or to emphasize specific aspects or formats ("Answer in Japanese", "Give only short suggestions", ...)

In contrast, the `best_practices.md` file is a general guideline for the way code should be written in the repo.

Using a combination of both can help the AI model to provide relevant and tailored suggestions.

## Configuration options

??? example "General options"
@@ -314,10 +275,6 @@ Note: Chunking is primarily relevant for large PRs. For most PRs (up to 500 line
<td><b>dual_publishing_score_threshold</b></td>
<td>Minimum score threshold for suggestions to be presented as commitable PR comments in addition to the table. Default is -1 (disabled).</td>
</tr>
<tr>
<td><b>focus_only_on_problems</b></td>
<td>If set to true, suggestions will focus primarily on identifying and fixing code problems, and less on style considerations like best practices, maintainability, or readability. Default is true.</td>
</tr>
<tr>
<td><b>persistent_comment</b></td>
<td>If set to true, the improve comment will be persistent, meaning that every new improve request will edit the previous one. Default is false.</td>
@@ -342,10 +299,6 @@ Note: Chunking is primarily relevant for large PRs. For most PRs (up to 500 line
<td><b>wiki_page_accepted_suggestions</b></td>
<td>If set to true, the tool will automatically track accepted suggestions in a dedicated wiki page called `.pr_agent_accepted_suggestions`. Default is true.</td>
</tr>
<tr>
<td><b>allow_thumbs_up_down</b></td>
<td>If set to true, all code suggestions will have thumbs up and thumbs down buttons, to encourage users to provide feedback on the suggestions. Default is false.</td>
</tr>
</table>
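A hedged sketch of setting several of these options together (assuming the `improve` tool's `[pr_code_suggestions]` section used elsewhere on this page; the threshold value is only an illustration):

```toml
[pr_code_suggestions]
focus_only_on_problems = true
persistent_comment = true
allow_thumbs_up_down = false
dual_publishing_score_threshold = 7  # illustrative value; the default -1 disables dual publishing
```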
??? example "Params for number of suggestions and AI calls"

@@ -363,6 +316,10 @@ Note: Chunking is primarily relevant for large PRs. For most PRs (up to 500 line
<td><b>max_number_of_calls</b></td>
<td>Maximum number of chunks. Default is 3.</td>
</tr>
<tr>
<td><b>rank_extended_suggestions</b></td>
<td>If set to true, the tool will rank the suggestions, based on importance. Default is true.</td>
</tr>
</table>
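Similarly, a minimal sketch of tuning the chunking parameters above (same assumed section; the values shown are the documented defaults):

```toml
[pr_code_suggestions]
max_number_of_calls = 3
rank_extended_suggestions = true
```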
## A note on code suggestions quality

@@ -39,7 +39,7 @@ pr_commands = [
]

[pr_reviewer]
extra_instructions = "..."
num_code_suggestions = ...
...
```
@@ -95,7 +95,7 @@ extra_instructions = "..."
<table>
<tr>
<td><b>num_code_suggestions</b></td>
<td>Number of code suggestions provided by the 'review' tool. Default is 0, meaning no code suggestions will be provided by the `review` tool. Note that this is a legacy feature, that will be removed in future releases. Use the `improve` tool instead for code suggestions</td>
<td>Number of code suggestions provided by the 'review' tool. Default is 0, meaning no code suggestions will be provided by the `review` tool.</td>
</tr>
<tr>
<td><b>inline_code_comments</b></td>
@@ -140,7 +140,7 @@ extra_instructions = "..."
</tr>
<tr>
<td><b>require_ticket_analysis_review</b></td>
<td>If set to true, and the PR contains a GitHub or Jira ticket link, the tool will add a section that checks if the PR in fact fulfilled the ticket requirements. Default is true.</td>
<td>If set to true, and the PR contains a GitHub ticket number, the tool will add a section that checks if the PR in fact fulfilled the ticket requirements. Default is true.</td>
</tr>
</table>
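A minimal sketch of disabling the ticket-compliance section, using the `[pr_reviewer]` section shown in the configuration example earlier on this page:

```toml
[pr_reviewer]
require_ticket_analysis_review = false
```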
@@ -258,3 +258,4 @@ If enabled, the `review` tool can approve a PR when a specific comment, `/review

[//]: # ( Notice If you are interested **only** in the code suggestions, it is recommended to use the [`improve`](./improve.md) feature instead, since it is dedicated only to code suggestions, and usually gives better results.)

[//]: # ( Use the `review` tool if you want to get more comprehensive feedback, which includes code suggestions as well.)
@@ -49,10 +49,9 @@ It can be invoked automatically from the analyze table, can be accessed by:
/analyze
```
Choose the components you want to find similar code for, and click on the `similar` checkbox.

You can search for similar code either within the organization's codebase or globally, which includes open-source repositories. Each result will include the relevant code components along with their associated license details.
If you are looking to search for similar code in the organization's codebase, you can click on the `Organization` checkbox, and it will invoke a new search command just for the organization's codebase.
@@ -1,5 +1,4 @@
## Local repo (CLI)

When running from your locally cloned Qodo Merge repo (CLI), your local configuration file will be used.
Examples of invoking the different tools via the CLI:
@@ -36,29 +35,9 @@ This is useful for debugging or experimenting with different tools.

Default is "github".

### CLI Health Check
To verify that Qodo Merge has been configured correctly, you can run this health check command from the repository root:

```bash
python -m tests.health_test.main
```

If the health check passes, you will see the following output at the end of the run:

```
========
Health test passed successfully
========
```

Before running the health check, ensure you have:

- Configured your [LLM provider](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/)
- Added a valid GitHub token to your configuration file
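As an illustration only, a GitHub token entry in a secrets file might look like the sketch below; the exact section and key names should be checked against the `.secrets.toml` template in the repo:

```toml
[github]
user_token = "ghp_..."  # assumed key name; replace with your personal access token
```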
## Online usage
### Online usage

Online usage means invoking Qodo Merge tools by [comments](https://github.com/Codium-ai/pr-agent/pull/229#issuecomment-1695021901) on a PR.
Commands for invoking the different tools via comments:
@@ -79,80 +58,59 @@ For example, if you want to edit the `review` tool configurations, you can run:

Any configuration value in the [configuration file](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml) can be similarly edited. Comment `/config` to see the list of available configurations.

## Qodo Merge Automatic Feedback

### Disabling all automatic feedback

To easily disable all automatic feedback from Qodo Merge (GitHub App, GitLab Webhook, BitBucket App, Azure DevOps Webhook), set in a configuration file:

```toml
[config]
disable_auto_feedback = true
```

When this parameter is set to `true`, Qodo Merge will not run any automatic tools (like `describe`, `review`, `improve`) when a new PR is opened, or when new code is pushed to an open PR.

### GitHub App
## GitHub App
!!! note "Configurations for Qodo Merge Pro"
    Qodo Merge Pro for GitHub is an App, hosted by CodiumAI. So all the instructions below are relevant also for Qodo Merge Pro users.
    Same goes for [GitLab webhook](#gitlab-webhook) and [BitBucket App](#bitbucket-app) sections.

#### GitHub app automatic tools when a new PR is opened
### GitHub app automatic tools when a new PR is opened

The [github_app](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L220) section defines GitHub app specific configurations.
The [github_app](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L108) section defines GitHub app specific configurations.

The configuration parameter `pr_commands` defines the list of tools that will be **run automatically** when a new PR is opened:
```toml
The configuration parameter `pr_commands` defines the list of tools that will be **run automatically** when a new PR is opened.
```
[github_app]
pr_commands = [
    "/describe",
    "/review",
    "/describe --pr_description.final_update_message=false",
    "/review --pr_reviewer.num_code_suggestions=0",
    "/improve",
]
```

This means that when a new PR is opened/reopened or marked as ready for review, Qodo Merge will run the `describe`, `review` and `improve` tools.
For the `review` tool, for example, the `num_code_suggestions` parameter will be set to 0.
You can override the default tool parameters by using one of the three options for a [configuration file](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/): **wiki**, **local**, or **global**.
For example, if your configuration file contains:

```toml
For example, if your local `.pr_agent.toml` file contains:
```
[pr_description]
generate_ai_title = true
```
Every time you run the `describe` tool, including automatic runs, the PR title will be generated by the AI.

Every time you run the `describe` tool (including automatic runs) the PR title will be generated by the AI.

You can customize configurations specifically for automated runs by using the `--config_path=<value>` parameter.
For instance, to modify the `review` tool settings only for newly opened PRs, use:
```toml
To cancel the automatic run of all the tools, set:
```
[github_app]
pr_commands = [
    "/describe",
    "/review --pr_reviewer.extra_instructions='focus on the file: ...'",
    "/improve",
]
pr_commands = []
```
#### GitHub app automatic tools for push actions (commits to an open PR)
|
||||
### GitHub app automatic tools for push actions (commits to an open PR)
|
||||
|
||||
In addition to running automatic tools when a PR is opened, the GitHub app can also respond to new code that is pushed to an open PR.
|
||||
|
||||
The configuration toggle `handle_push_trigger` can be used to enable this feature.
|
||||
The configuration parameter `push_commands` defines the list of tools that will be **run automatically** when new code is pushed to the PR.
|
||||
```toml
|
||||
```
|
||||
[github_app]
|
||||
handle_push_trigger = true
|
||||
push_commands = [
|
||||
"/describe",
|
||||
"/review",
|
||||
"/review --pr_reviewer.num_code_suggestions=0 --pr_reviewer.final_update_message=false",
|
||||
]
|
||||
```
|
||||
This means that when new code is pushed to the PR, the Qodo Merge will run the `describe` and `review` tools, with the specified parameters.
|
||||
|
||||
### GitHub Action
|
||||
## GitHub Action
|
||||
`GitHub Action` is a different way to trigger Qodo Merge tools, and uses a different configuration mechanism than `GitHub App`.<br>
|
||||
You can configure settings for `GitHub Action` by adding environment variables under the env section in `.github/workflows/pr_agent.yml` file.
|
||||
Specifically, start by setting the following environment variables:
|
||||
@ -163,7 +121,7 @@ Specifically, start by setting the following environment variables:
|
||||
github_action_config.auto_review: "true" # enable\disable auto review
|
||||
github_action_config.auto_describe: "true" # enable\disable auto describe
|
||||
github_action_config.auto_improve: "true" # enable\disable auto improve
|
||||
github_action_config.pr_actions: '["opened", "reopened", "ready_for_review", "review_requested"]'
|
||||
github_action_config.pr_actions: ["opened", "reopened", "ready_for_review", "review_requested"]
|
||||
```
|
||||
`github_action_config.auto_review`, `github_action_config.auto_describe` and `github_action_config.auto_improve` are used to enable/disable automatic tools that run when a new PR is opened.
|
||||
If not set, the default configuration is for all three tools to run automatically when a new PR is opened.
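
For reference, a complete `env` section might look like the sketch below (illustrative only; the secret names follow the usual installation guide, and every `github_action_config.*` value shown is optional):

```yaml
env:
  OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  github_action_config.auto_review: "true"
  github_action_config.auto_describe: "true"
  github_action_config.auto_improve: "true"
  github_action_config.pr_actions: '["opened", "reopened", "ready_for_review", "review_requested"]'
```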
@ -178,22 +136,19 @@ The JSON structure is equivalent to the yaml data structure defined in [pr_revie

Note that you can give additional config parameters by adding environment variables to `.github/workflows/pr_agent.yml`, or by using a `.pr_agent.toml` [configuration file](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/#global-configuration-file) in the root of your repo.

For example, you can set an environment variable `pr_description.publish_labels=false`, or add a `.pr_agent.toml` file with the following content:

```toml
[pr_description]
publish_labels = false
```

to prevent Qodo Merge from publishing labels when running the `describe` tool.

### GitLab Webhook

After setting up a GitLab webhook, to control which commands will run automatically when a new MR is opened, you can set the `pr_commands` parameter in the configuration file, similar to the GitHub App:

```toml
[gitlab]
pr_commands = [
    "/describe",
    "/review",
    "/improve",
]
```

@ -201,24 +156,24 @@ pr_commands = [

In addition, the GitLab webhook can also respond to new code that is pushed to an open MR.
The configuration toggle `handle_push_trigger` can be used to enable this feature.
The configuration parameter `push_commands` defines the list of tools that will be **run automatically** when new code is pushed to the MR.

```toml
[gitlab]
handle_push_trigger = true
push_commands = [
    "/describe",
    "/review",
]
```

Note that to use the `handle_push_trigger` feature, you also need to give the GitLab webhook the "Push events" scope.

### BitBucket App

Similar to the GitHub app, when running Qodo Merge from the BitBucket App, the default [configuration file](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml) from a pre-built docker image will be initially loaded.

By uploading a local `.pr_agent.toml` file to the root of the repo's main branch, you can edit and customize any configuration parameter. Note that you need to upload `.pr_agent.toml` prior to creating a PR, in order for the configuration to take effect.

For example, if your local `.pr_agent.toml` file contains:

```toml
[pr_reviewer]
extra_instructions = "Answer in japanese"
```

@ -227,39 +182,29 @@ Each time you invoke a `/review` tool, it will use the extra instructions you se

Note that among other limitations, BitBucket provides relatively low rate-limits for applications (up to 1000 requests per hour), and does not provide an API to track the actual rate-limit usage.
If you experience a lack of responses from Qodo Merge, you might want to set `bitbucket_app.avoid_full_files=true` in your configuration file.
This will prevent Qodo Merge from acquiring the full file content, and will only use the diff content. This reduces the number of requests made to BitBucket, at the cost of a small decrease in accuracy, as dynamic context will not be applicable.
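
For example, in your `.pr_agent.toml` (a minimal sketch):

```toml
[bitbucket_app]
avoid_full_files = true
```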

#### BitBucket Self-Hosted App automatic tools

To control which commands will run automatically when a new PR is opened, you can set the `pr_commands` parameter in the configuration file:

```toml
[bitbucket_app]
pr_commands = [
    "/review",
    "/improve --pr_code_suggestions.commitable_code_suggestions=true --pr_code_suggestions.suggestions_score_threshold=7",
]
```

Note that specifically for BitBucket we recommend using `--pr_code_suggestions.suggestions_score_threshold=7`, and this is the default value we set for BitBucket.
Since this platform only supports inline code suggestions, we want to limit the number of suggestions and present only a limited, high-scoring set.

To enable the BitBucket app to respond to each **push** to the PR, set (for example):

```toml
[bitbucket_app]
handle_push_trigger = true
push_commands = [
    "/describe",
    "/review",
]
```

### Azure DevOps provider

To use the Azure DevOps provider, use the following settings in configuration.toml:

```toml
[config]
git_provider="azure"
```

@ -278,10 +223,10 @@ org = "https://dev.azure.com/YOUR_ORGANIZATION/"

# pat = "YOUR_PAT_TOKEN" needed only if using PAT for authentication
```

#### Azure DevOps Webhook

To control which commands will run automatically when a new PR is opened, you can set the `pr_commands` parameter in the configuration file, similar to the GitHub App:

```toml
[azure_devops_server]
pr_commands = [
    "/describe",

@ -5,6 +5,7 @@ To use a different model than the default (GPT-4), you need to edit in the [conf

```
[config]
model = "..."
model_turbo = "..."
fallback_models = ["..."]
```

@ -26,8 +27,9 @@ deployment_id = "" # The deployment name you chose when you deployed the engine

and set in your configuration file:

```
[config]
model="" # the OpenAI model you've deployed on Azure (e.g. gpt-4o)
model_turbo="" # the OpenAI model you've deployed on Azure (e.g. gpt-4o)
fallback_models=["..."]
```

### Hugging Face

@ -50,6 +52,7 @@ MAX_TOKENS={

[config] # in configuration.toml
model = "ollama/llama2"
model_turbo = "ollama/llama2"
fallback_models=["ollama/llama2"]

[ollama] # in .secrets.toml

@ -73,6 +76,7 @@ MAX_TOKENS={

}
[config] # in configuration.toml
model = "huggingface/meta-llama/Llama-2-7b-chat-hf"
model_turbo = "huggingface/meta-llama/Llama-2-7b-chat-hf"
fallback_models=["huggingface/meta-llama/Llama-2-7b-chat-hf"]

[huggingface] # in .secrets.toml

@ -87,6 +91,7 @@ To use Llama2 model with Replicate, for example, set:

```
[config] # in configuration.toml
model = "replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1"
model_turbo = "replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1"
fallback_models=["replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1"]
[replicate] # in .secrets.toml
key = ...

@ -102,6 +107,7 @@ To use Llama3 model with Groq, for example, set:

```
[config] # in configuration.toml
model = "llama3-70b-8192"
model_turbo = "llama3-70b-8192"
fallback_models = ["groq/llama3-70b-8192"]
[groq] # in .secrets.toml
key = ... # your Groq api key

@ -115,6 +121,7 @@ To use Google's Vertex AI platform and its associated models (chat-bison/codecha

```
[config] # in configuration.toml
model = "vertex_ai/codechat-bison"
model_turbo = "vertex_ai/codechat-bison"
fallback_models="vertex_ai/codechat-bison"

[vertexai] # in .secrets.toml

@ -133,6 +140,7 @@ To use [Google AI Studio](https://aistudio.google.com/) models, set the relevant

```toml
[config] # in configuration.toml
model="google_ai_studio/gemini-1.5-flash"
model_turbo="google_ai_studio/gemini-1.5-flash"
fallback_models=["google_ai_studio/gemini-1.5-flash"]

[google_ai_studio] # in .secrets.toml

@ -148,6 +156,7 @@ To use Anthropic models, set the relevant models in the configuration section of

```
[config]
model="anthropic/claude-3-opus-20240229"
model_turbo="anthropic/claude-3-opus-20240229"
fallback_models=["anthropic/claude-3-opus-20240229"]
```

@ -164,6 +173,7 @@ To use Amazon Bedrock and its foundational models, add the below configuration:

```
[config] # in configuration.toml
model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
model_turbo="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
fallback_models=["bedrock/anthropic.claude-v2:1"]
```

@ -185,6 +195,7 @@ If the relevant model doesn't appear [here](https://github.com/Codium-ai/pr-agen

```
[config]
model="custom_model_name"
model_turbo="custom_model_name"
fallback_models=["custom_model_name"]
```
(2) Set the maximal tokens for the model:
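
The exact key name below is an assumption based on `configuration.toml` conventions; check the file for the authoritative name:

```
[config] # in configuration.toml
custom_model_max_tokens = 4096 # assumed key name; the context window of your custom model
```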

@ -10,3 +10,4 @@ Specifically, CLI commands can be issued by invoking a pre-built [docker image](

For online usage, you will need to set up either a [GitHub App](https://qodo-merge-docs.qodo.ai/installation/github/#run-as-a-github-app) or a [GitHub Action](https://qodo-merge-docs.qodo.ai/installation/github/#run-as-a-github-action) (GitHub), a [GitLab webhook](https://qodo-merge-docs.qodo.ai/installation/gitlab/#run-a-gitlab-webhook-server) (GitLab), or a [BitBucket App](https://qodo-merge-docs.qodo.ai/installation/bitbucket/#run-using-codiumai-hosted-bitbucket-app) (BitBucket).
These platforms also let you run Qodo Merge tools automatically when a new PR is opened, or on each push to a branch.

@ -43,7 +43,6 @@ nav:

    - 💎 Similar Code: 'tools/similar_code.md'
  - Core Abilities:
    - 'core-abilities/index.md'
    - Fetching ticket context: 'core-abilities/fetching_ticket_context.md'
    - Local and global metadata: 'core-abilities/metadata.md'
    - Dynamic context: 'core-abilities/dynamic_context.md'
    - Self-reflection: 'core-abilities/self_reflection.md'

@ -3,5 +3,5 @@

new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],
j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=
'https://www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);
})(window,document,'script','dataLayer','GTM-M6PJSFV');</script>
<!-- End Google Tag Manager -->
@ -3,6 +3,7 @@ from functools import partial

from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
from pr_agent.algo.ai_handlers.litellm_ai_handler import LiteLLMAIHandler
from pr_agent.algo.utils import update_settings_from_args
from pr_agent.config_loader import get_settings
from pr_agent.git_providers.utils import apply_repo_settings

@ -19,13 +19,10 @@ MAX_TOKENS = {

    'gpt-4o-mini': 128000,  # 128K, but may be limited by config.max_model_tokens
    'gpt-4o-mini-2024-07-18': 128000,  # 128K, but may be limited by config.max_model_tokens
    'gpt-4o-2024-08-06': 128000,  # 128K, but may be limited by config.max_model_tokens
    'gpt-4o-2024-11-20': 128000,  # 128K, but may be limited by config.max_model_tokens
    'o1-mini': 128000,  # 128K, but may be limited by config.max_model_tokens
    'o1-mini-2024-09-12': 128000,  # 128K, but may be limited by config.max_model_tokens
    'o1-preview': 128000,  # 128K, but may be limited by config.max_model_tokens
    'o1-preview-2024-09-12': 128000,  # 128K, but may be limited by config.max_model_tokens
    'o1-2024-12-17': 204800,  # 200K, but may be limited by config.max_model_tokens
    'o1': 204800,  # 200K, but may be limited by config.max_model_tokens
    'claude-instant-1': 100000,
    'claude-2': 100000,
    'command-nightly': 4096,

@ -34,7 +31,6 @@ MAX_TOKENS = {

    'vertex_ai/codechat-bison': 6144,
    'vertex_ai/codechat-bison-32k': 32000,
    'vertex_ai/claude-3-haiku@20240307': 100000,
    'vertex_ai/claude-3-5-haiku@20241022': 100000,
    'vertex_ai/claude-3-sonnet@20240229': 100000,
    'vertex_ai/claude-3-opus@20240229': 100000,
    'vertex_ai/claude-3-5-sonnet@20240620': 100000,

@ -44,7 +40,6 @@ MAX_TOKENS = {

    'vertex_ai/gemma2': 8200,
    'gemini/gemini-1.5-pro': 1048576,
    'gemini/gemini-1.5-flash': 1048576,
    'gemini/gemini-2.0-flash-exp': 1048576,
    'codechat-bison': 6144,
    'codechat-bison-32k': 32000,
    'anthropic.claude-instant-v1': 100000,

@ -53,13 +48,11 @@ MAX_TOKENS = {

    'anthropic/claude-3-opus-20240229': 100000,
    'anthropic/claude-3-5-sonnet-20240620': 100000,
    'anthropic/claude-3-5-sonnet-20241022': 100000,
    'anthropic/claude-3-5-haiku-20241022': 100000,
    'bedrock/anthropic.claude-instant-v1': 100000,
    'bedrock/anthropic.claude-v2': 100000,
    'bedrock/anthropic.claude-v2:1': 100000,
    'bedrock/anthropic.claude-3-sonnet-20240229-v1:0': 100000,
    'bedrock/anthropic.claude-3-haiku-20240307-v1:0': 100000,
    'bedrock/anthropic.claude-3-5-haiku-20241022-v1:0': 100000,
    'bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0': 100000,
    'bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0': 100000,
    'claude-3-5-sonnet': 100000,
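
# Editor's sketch (not part of this diff): how a MAX_TOKENS entry is typically
# combined with the optional cap hinted at in the comments above. Assumes
# pr-agent's get_settings(); treat the exact wiring as illustrative.
def effective_max_tokens(model: str) -> int:
    from pr_agent.config_loader import get_settings
    limit = MAX_TOKENS[model]  # raises KeyError for unregistered models
    cap = get_settings().config.max_model_tokens  # a falsy value means "no cap"
    return min(limit, cap) if cap else limit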

@ -1,18 +1,17 @@

try:
    from langchain_core.messages import HumanMessage, SystemMessage
    from langchain_openai import AzureChatOpenAI, ChatOpenAI
except:  # we don't enforce langchain as a dependency, so if it's not installed, just move on
    pass

import functools

from openai import APIError, RateLimitError, Timeout
from retry import retry

from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
from pr_agent.config_loader import get_settings
from pr_agent.log import get_logger

OPENAI_RETRIES = 5

@ -74,3 +73,4 @@ class LangChainOpenAIHandler(BaseAiHandler):

            raise ValueError(f"OpenAI {e.name} is required") from e
        else:
            raise e
@ -1,13 +1,11 @@

import os

import litellm
import openai
import requests
from litellm import acompletion
from tenacity import retry, retry_if_exception_type, stop_after_attempt

from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
from pr_agent.algo.utils import get_version
from pr_agent.config_loader import get_settings
from pr_agent.log import get_logger

@ -133,7 +131,7 @@ class LiteLLMAIHandler(BaseAiHandler):

        if "langfuse" in callbacks:
            metadata.update({
                "trace_name": command,
                "tags": [git_provider, command, f'version:{get_version()}'],
                "trace_metadata": {
                    "command": command,
                    "pr_url": pr_url,

@ -142,7+140,7 @@ class LiteLLMAIHandler(BaseAiHandler):

        if "langsmith" in callbacks:
            metadata.update({
                "run_name": command,
                "tags": [git_provider, command, f'version:{get_version()}'],
                "extra": {
                    "metadata": {
                        "command": command,

@ -193,8 +191,8 @@ class LiteLLMAIHandler(BaseAiHandler):

            messages[1]["content"] = [{"type": "text", "text": messages[1]["content"]},
                                      {"type": "image_url", "image_url": {"url": img_path}}]

        # Currently, the OpenAI o1 series does not support separate system and user prompts
        O1_MODEL_PREFIX = 'o1'
        model_type = model.split('/')[-1] if '/' in model else model
        if model_type.startswith(O1_MODEL_PREFIX):
            user = f"{system}\n\n\n{user}"
@ -4,7 +4,6 @@ import openai

from openai import APIError, AsyncOpenAI, RateLimitError, Timeout
from retry import retry

from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
from pr_agent.config_loader import get_settings
from pr_agent.log import get_logger

@ -42,6 +41,7 @@ class OpenAIHandler(BaseAiHandler):

           tries=OPENAI_RETRIES, delay=2, backoff=2, jitter=(1, 3))
    async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2):
        try:
            deployment_id = self.deployment_id
            get_logger().info("System: ", system)
            get_logger().info("User: ", user)
            messages = [{"role": "system", "content": system}, {"role": "user", "content": user}]
@ -3,8 +3,8 @@ from __future__ import annotations

import re
import traceback

from pr_agent.algo.types import EDIT_TYPE, FilePatchInfo
from pr_agent.config_loader import get_settings
from pr_agent.log import get_logger

@ -31,7 +31,7 @@ def extend_patch(original_file_str, patch_str, patch_extra_lines_before=0,

def decode_if_bytes(original_file_str):
    if isinstance(original_file_str, (bytes, bytearray)):
        try:
            return original_file_str.decode('utf-8')
        except UnicodeDecodeError:

@ -61,26 +61,23 @@ def process_patch_lines(patch_str, original_file_str, patch_extra_lines_before,

    patch_lines = patch_str.splitlines()
    extended_patch_lines = []

    is_valid_hunk = True
    start1, size1, start2, size2 = -1, -1, -1, -1
    RE_HUNK_HEADER = re.compile(
        r"^@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@[ ]?(.*)")
    try:
        for i, line in enumerate(patch_lines):
            if line.startswith('@@'):
                match = RE_HUNK_HEADER.match(line)
                # identify hunk header
                if match:
                    # finish processing previous hunk
                    if is_valid_hunk and (start1 != -1 and patch_extra_lines_after > 0):
                        delta_lines = [f' {line}' for line in original_lines[start1 + size1 - 1:start1 + size1 - 1 + patch_extra_lines_after]]
                        extended_patch_lines.extend(delta_lines)

                    section_header, size1, size2, start1, start2 = extract_hunk_headers(match)

                    is_valid_hunk = check_if_hunk_lines_matches_to_file(i, original_lines, patch_lines, start1)

                    if is_valid_hunk and (patch_extra_lines_before > 0 or patch_extra_lines_after > 0):
                        def _calc_context_limits(patch_lines_before):
                            extended_start1 = max(1, start1 - patch_lines_before)
                            extended_size1 = size1 + (start1 - extended_start1) + patch_extra_lines_after
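                            # Worked example (editor's illustration, not part of the diff):
                            # with start1=50, size1=10, patch_lines_before=3 and
                            # patch_extra_lines_after=2, the hunk start moves to
                            # max(1, 50 - 3) = 47 and the size becomes
                            # 10 + (50 - 47) + 2 = 15, i.e. three extra context
                            # lines before the hunk and two after it.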

@ -141,7 +138,7 @@ def process_patch_lines(patch_str, original_file_str, patch_extra_lines_before,

        return patch_str

    # finish processing last hunk
    if start1 != -1 and patch_extra_lines_after > 0 and is_valid_hunk:
        delta_lines = original_lines[start1 + size1 - 1:start1 + size1 - 1 + patch_extra_lines_after]
        # add space at the beginning of each extra line
        delta_lines = [f' {line}' for line in delta_lines]

@ -151,23 +148,6 @@ def process_patch_lines(patch_str, original_file_str, patch_extra_lines_before,

    return extended_patch_str


def check_if_hunk_lines_matches_to_file(i, original_lines, patch_lines, start1):
    """
    Check if the hunk lines match the original file content. We saw cases where the hunk header line doesn't match the original file content,
    and then extending the hunk with extra lines before the hunk header can cause the hunk to be invalid.
    """
    is_valid_hunk = True
    try:
        if i + 1 < len(patch_lines) and patch_lines[i + 1][0] == ' ':  # an existing line in the file
            if patch_lines[i + 1].strip() != original_lines[start1 - 1].strip():
                is_valid_hunk = False
                get_logger().error(
                    f"Invalid hunk in PR, line {start1} in hunk header doesn't match the original file content")
    except:
        pass
    return is_valid_hunk
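
# Editor's sketch (illustrative, not part of the diff): the first line after a
# hunk header must match the file content at start1, otherwise the hunk is rejected.
# original_lines = ["def foo():", "    return 1"]
# patch_lines = ["@@ -1,2 +1,2 @@", " def foo():", "-    return 1", "+    return 2"]
# check_if_hunk_lines_matches_to_file(0, original_lines, patch_lines, start1=1)  # -> True
# patch_lines = ["@@ -2,1 +2,1 @@", " something else", "-x", "+y"]
# check_if_hunk_lines_matches_to_file(0, original_lines, patch_lines, start1=2)  # -> False (mismatch is logged)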

def extract_hunk_headers(match):
    res = list(match.groups())
    for i in range(len(res)):
@ -4,6 +4,8 @@ from typing import Dict

from pr_agent.config_loader import get_settings


def filter_bad_extensions(files):
    # Bad Extensions, source: https://github.com/EleutherAI/github-downloader/blob/345e7c4cbb9e0dc8a0615fd995a08bf9d73b3fe6/download_repo_text.py  # noqa: E501
    bad_extensions = get_settings().bad_extensions.default
@ -5,15 +5,14 @@ from typing import Callable, List, Tuple

from github import RateLimitExceededException

from pr_agent.algo.file_filter import filter_ignored
from pr_agent.algo.git_patch_processing import (
    convert_to_hunks_with_lines_numbers, extend_patch, handle_patch_deletions)
from pr_agent.algo.language_handler import sort_files_by_main_languages
from pr_agent.algo.token_handler import TokenHandler
from pr_agent.algo.types import EDIT_TYPE, FilePatchInfo
from pr_agent.algo.utils import ModelType, clip_tokens, get_max_tokens, get_weak_model
from pr_agent.config_loader import get_settings
from pr_agent.git_providers.git_provider import GitProvider
from pr_agent.log import get_logger

DELETED_FILES_ = "Deleted files:\n"

@ -354,8 +353,8 @@ async def retry_with_fallback_models(f: Callable, model_type: ModelType = ModelT

def _get_all_models(model_type: ModelType = ModelType.REGULAR) -> List[str]:
    if model_type == ModelType.WEAK:
        model = get_weak_model()
    else:
        model = get_settings().config.model
    fallback_models = get_settings().config.fallback_models
@ -1,9 +1,8 @@

from threading import Lock

from jinja2 import Environment, StrictUndefined
from tiktoken import encoding_for_model, get_encoding

from pr_agent.config_loader import get_settings
from pr_agent.log import get_logger
@ -7,15 +7,14 @@ import html

import json
import os
import re
import sys
import textwrap
import time
import traceback
from datetime import datetime
from enum import Enum
from importlib.metadata import PackageNotFoundError, version
from typing import Any, List, Tuple

import html2text
import requests
import yaml

@ -24,17 +23,10 @@ from starlette_context import context

from pr_agent.algo import MAX_TOKENS
from pr_agent.algo.token_handler import TokenEncoder
from pr_agent.algo.types import FilePatchInfo
from pr_agent.config_loader import get_settings, global_settings
from pr_agent.log import get_logger


def get_weak_model() -> str:
    if get_settings().get("config.model_weak"):
        return get_settings().config.model_weak
    return get_settings().config.model


class Range(BaseModel):
    line_start: int  # should be 0-indexed
    line_end: int

@ -43,7 +35,8 @@ class Range(BaseModel):

class ModelType(str, Enum):
    REGULAR = "regular"
    WEAK = "weak"


class PRReviewHeader(str, Enum):
    REGULAR = "## PR Reviewer Guide"

@ -104,8 +97,7 @@ def unique_strings(input_list: List[str]) -> List[str]:

def convert_to_markdown_v2(output_data: dict,
                           gfm_supported: bool = True,
                           incremental_review=None,
                           git_provider=None,
                           files=None) -> str:
    """
    Convert a dictionary of data into markdown format.
    Args:

@ -181,7 +173,7 @@ def convert_to_markdown_v2(output_data: dict,

            if is_value_no(value):
                markdown_text += f'### {emoji} No relevant tests\n\n'
            else:
                markdown_text += f"### {emoji} PR contains tests\n\n"
        elif 'ticket compliance check' in key_nice.lower():
            markdown_text = ticket_markdown_logic(emoji, markdown_text, value, gfm_supported)
        elif 'security concerns' in key_nice.lower():

@ -229,31 +221,15 @@ def convert_to_markdown_v2(output_data: dict,

                    continue
                relevant_file = issue.get('relevant_file', '').strip()
                issue_header = issue.get('issue_header', '').strip()
                if issue_header.lower() == 'possible bug':
                    issue_header = 'Possible Issue'  # Make the header less frightening
                issue_content = issue.get('issue_content', '').strip()
                start_line = int(str(issue.get('start_line', 0)).strip())
                end_line = int(str(issue.get('end_line', 0)).strip())

                relevant_lines_str = extract_relevant_lines_str(end_line, files, relevant_file, start_line)
                if git_provider:
                    reference_link = git_provider.get_line_link(relevant_file, start_line, end_line)
                else:
                    reference_link = None

                if gfm_supported:
                    if reference_link is not None and len(reference_link) > 0:
                        if relevant_lines_str:
                            issue_str = f"<details><summary><a href='{reference_link}'><strong>{issue_header}</strong></a>\n\n{issue_content}</summary>\n\n{relevant_lines_str}\n\n</details>"
                        else:
                            issue_str = f"<a href='{reference_link}'><strong>{issue_header}</strong></a><br>{issue_content}"
                    else:
                        issue_str = f"<strong>{issue_header}</strong><br>{issue_content}"
                else:
                    if reference_link is not None and len(reference_link) > 0:
                        issue_str = f"[**{issue_header}**]({reference_link})\n\n{issue_content}\n\n"
                    else:
                        issue_str = f"**{issue_header}**\n\n{issue_content}\n\n"
                markdown_text += f"{issue_str}\n\n"
            except Exception as e:
                get_logger().exception(f"Failed to process 'Recommended focus areas for review': {e}")

@ -288,25 +264,6 @@ def convert_to_markdown_v2(output_data: dict,

    return markdown_text


def extract_relevant_lines_str(end_line, files, relevant_file, start_line):
    try:
        relevant_lines_str = ""
        if files:
            files = set_file_languages(files)
            for file in files:
                if file.filename.strip() == relevant_file:
                    if not file.head_file:
                        get_logger().warning(f"No content found in file: {file.filename}")
                        return ""
                    relevant_file_lines = file.head_file.splitlines()
                    relevant_lines_str = "\n".join(relevant_file_lines[start_line - 1:end_line])
                    relevant_lines_str = f"```{file.language}\n{relevant_lines_str}\n```"
                    break
        return relevant_lines_str
    except Exception as e:
        get_logger().exception(f"Failed to extract relevant lines: {e}")
        return ""


def ticket_markdown_logic(emoji, markdown_text, value, gfm_supported) -> str:
    ticket_compliance_str = ""

@ -1140,48 +1097,3 @@ def process_description(description_full: str) -> Tuple[str, List]:

        get_logger().exception(f"Failed to process description: {e}")

    return base_description_str, files


def get_version() -> str:
    # First check pyproject.toml if running directly out of repository
    if os.path.exists("pyproject.toml"):
        if sys.version_info >= (3, 11):
            import tomllib
            with open("pyproject.toml", "rb") as f:
                data = tomllib.load(f)
                if "project" in data and "version" in data["project"]:
                    return data["project"]["version"]
                else:
                    get_logger().warning("Version not found in pyproject.toml")
        else:
            get_logger().warning("Unable to determine local version from pyproject.toml")

    # Otherwise get the installed pip package version
    try:
        return version('pr-agent')
    except PackageNotFoundError:
        get_logger().warning("Unable to find package named 'pr-agent'")
        return "unknown"


def set_file_languages(diff_files) -> List[FilePatchInfo]:
    try:
        # if the language is already set, do not change it
        if hasattr(diff_files[0], 'language') and diff_files[0].language:
            return diff_files

        # map file extensions to programming languages
        language_extension_map_org = get_settings().language_extension_map_org
        extension_to_language = {}
        for language, extensions in language_extension_map_org.items():
            for ext in extensions:
                extension_to_language[ext] = language
        for file in diff_files:
            extension_s = '.' + file.filename.rsplit('.')[-1]
            language_name = "txt"
            if extension_s and (extension_s in extension_to_language):
                language_name = extension_to_language[extension_s]
            file.language = language_name.lower()
    except Exception as e:
        get_logger().exception(f"Failed to set file languages: {e}")

    return diff_files
@ -3,9 +3,8 @@ import asyncio

import os

from pr_agent.agent.pr_agent import PRAgent, commands
from pr_agent.algo.utils import get_version
from pr_agent.config_loader import get_settings
from pr_agent.log import get_logger, setup_logger

log_level = os.environ.get("LOG_LEVEL", "INFO")
setup_logger(log_level)

@ -46,7 +45,6 @@ def set_parser():

    To edit any configuration parameter from 'configuration.toml', just add -config_path=<value>.
    For example: 'python cli.py --pr_url=... review --pr_reviewer.extra_instructions="focus on the file: ..."'
    """)
    parser.add_argument('--version', action='version', version=f'pr-agent {get_version()}')
    parser.add_argument('--pr_url', type=str, help='The URL of the PR to review', default=None)
    parser.add_argument('--issue_url', type=str, help='The URL of the Issue to review', default=None)
    parser.add_argument('command', type=str, help='The', choices=commands, default='review')
@ -1,16 +1,14 @@

from starlette_context import context

from pr_agent.config_loader import get_settings
from pr_agent.git_providers.azuredevops_provider import AzureDevopsProvider
from pr_agent.git_providers.bitbucket_provider import BitbucketProvider
from pr_agent.git_providers.bitbucket_server_provider import \
    BitbucketServerProvider
from pr_agent.git_providers.codecommit_provider import CodeCommitProvider
from pr_agent.git_providers.gerrit_provider import GerritProvider
from pr_agent.git_providers.git_provider import GitProvider
from pr_agent.git_providers.github_provider import GithubProvider
from pr_agent.git_providers.gitlab_provider import GitLabProvider
from pr_agent.git_providers.local_git_provider import LocalGitProvider

_GIT_PROVIDERS = {
    'github': GithubProvider,
@ -2,16 +2,13 @@ import os

from typing import Optional, Tuple
from urllib.parse import urlparse

from pr_agent.algo.types import EDIT_TYPE, FilePatchInfo

from ..algo.file_filter import filter_ignored
from ..algo.language_handler import is_valid_file
from ..algo.utils import (PRDescriptionHeader, clip_tokens,
                          find_line_number_of_relevant_line_in_file,
                          load_large_diff)
from ..config_loader import get_settings
from ..log import get_logger
from .git_provider import GitProvider

AZURE_DEVOPS_AVAILABLE = True
ADO_APP_CLIENT_DEFAULT_ID = "499b84ac-1321-427f-aa17-267ca6975798/.default"
@ -19,16 +16,19 @@ MAX_PR_DESCRIPTION_AZURE_LENGTH = 4000-1

try:
    # noinspection PyUnresolvedReferences
    from azure.devops.connection import Connection
    # noinspection PyUnresolvedReferences
    from azure.devops.v7_1.git.models import (Comment, CommentThread,
                                              GitPullRequest,
                                              GitPullRequestIterationChanges,
                                              GitVersionDescriptor)
    # noinspection PyUnresolvedReferences
    from azure.identity import DefaultAzureCredential
    from msrest.authentication import BasicAuthentication
except ImportError:
    AZURE_DEVOPS_AVAILABLE = False

@ -67,14 +67,16 @@ class AzureDevopsProvider(GitProvider):

            relevant_lines_end = suggestion['relevant_lines_end']

            if not relevant_lines_start or relevant_lines_start == -1:
                get_logger().warning(
                    f"Failed to publish code suggestion, relevant_lines_start is {relevant_lines_start}")
                continue

            if relevant_lines_end < relevant_lines_start:
                get_logger().warning(f"Failed to publish code suggestion, "
                                     f"relevant_lines_end is {relevant_lines_end} and "
                                     f"relevant_lines_start is {relevant_lines_start}")
                continue

            if relevant_lines_end > relevant_lines_start:

@ -93,11 +95,9 @@ class AzureDevopsProvider(GitProvider):

                    "side": "RIGHT",
                }
                post_parameters_list.append(post_parameters)
        if not post_parameters_list:
            return False

        for post_parameters in post_parameters_list:
            try:
                comment = Comment(content=post_parameters["body"], comment_type=1)
                thread = CommentThread(comments=[comment],
                                       thread_context={

@ -117,11 +117,15 @@ class AzureDevopsProvider(GitProvider):

                    repository_id=self.repo_slug,
                    pull_request_id=self.pr_num
                )
            except Exception as e:
                get_logger().warning(f"Azure failed to publish code suggestion, error: {e}")
        return True

    def get_pr_description_full(self) -> str:
        return self.pr.description

@ -378,9 +382,6 @@ class AzureDevopsProvider(GitProvider):

        return []

    def publish_comment(self, pr_comment: str, is_temporary: bool = False, thread_context=None):
        if is_temporary and not get_settings().config.publish_output_progress:
            get_logger().debug(f"Skipping publish_comment for temporary comment: {pr_comment}")
            return None
        comment = Comment(content=pr_comment)
        thread = CommentThread(comments=[comment], thread_context=thread_context, status=5)
        thread_response = self.azure_devops_client.create_thread(

@ -619,3 +620,4 @@ class AzureDevopsProvider(GitProvider):

    def publish_file_comments(self, file_comments: list) -> bool:
        pass
@ -1,6 +1,4 @@

import difflib
import json
import re
from typing import Optional, Tuple
from urllib.parse import urlparse

@ -8,14 +6,13 @@ import requests

from atlassian.bitbucket import Cloud
from starlette_context import context

from pr_agent.algo.types import EDIT_TYPE, FilePatchInfo

from ..algo.file_filter import filter_ignored
from ..algo.language_handler import is_valid_file
from ..algo.utils import find_line_number_of_relevant_line_in_file
from ..config_loader import get_settings
from ..log import get_logger
from .git_provider import MAX_FILES_ALLOWED_FULL, GitProvider


def _gef_filename(diff):
@ -74,38 +71,24 @@ class BitbucketProvider(GitProvider):

        post_parameters_list = []
        for suggestion in code_suggestions:
            body = suggestion["body"]
            original_suggestion = suggestion.get('original_suggestion', None)  # needed for diff code
            if original_suggestion:
                try:
                    existing_code = original_suggestion['existing_code'].rstrip() + "\n"
                    improved_code = original_suggestion['improved_code'].rstrip() + "\n"
                    diff = difflib.unified_diff(existing_code.split('\n'),
                                                improved_code.split('\n'), n=999)
                    patch_orig = "\n".join(diff)
                    patch = "\n".join(patch_orig.splitlines()[5:]).strip('\n')
                    diff_code = f"\n\n```diff\n{patch.rstrip()}\n```"
                    # replace ```suggestion ... ``` with diff_code, using regex:
                    body = re.sub(r'```suggestion.*?```', diff_code, body, flags=re.DOTALL)
                except Exception as e:
                    get_logger().exception(f"Bitbucket failed to get diff code for publishing, error: {e}")
                    continue

            relevant_file = suggestion["relevant_file"]
            relevant_lines_start = suggestion["relevant_lines_start"]
            relevant_lines_end = suggestion["relevant_lines_end"]

            if not relevant_lines_start or relevant_lines_start == -1:
                get_logger().exception(
                    f"Failed to publish code suggestion, relevant_lines_start is {relevant_lines_start}"
                )
                continue

            if relevant_lines_end < relevant_lines_start:
                get_logger().exception(
                    f"Failed to publish code suggestion, "
                    f"relevant_lines_end is {relevant_lines_end} and "
                    f"relevant_lines_start is {relevant_lines_start}"
                )
                continue

            if relevant_lines_end > relevant_lines_start:

@ -129,7 +112,8 @@ class BitbucketProvider(GitProvider):

            self.publish_inline_comments(post_parameters_list)
            return True
        except Exception as e:
            get_logger().error(f"Bitbucket failed to publish code suggestion, error: {e}")
            return False

    def publish_file_comments(self, file_comments: list) -> bool:

@ -137,7 +121,7 @@ class BitbucketProvider(GitProvider):

    def is_supported(self, capability: str) -> bool:
        if capability in ['get_issue_comments', 'publish_inline_comments', 'get_labels', 'gfm_markdown',
                          'publish_file_comments']:
            return False
        return True

@ -325,9 +309,6 @@ class BitbucketProvider(GitProvider):

        self.publish_comment(pr_comment)

    def publish_comment(self, pr_comment: str, is_temporary: bool = False):
        if is_temporary and not get_settings().config.publish_output_progress:
            get_logger().debug(f"Skipping publish_comment for temporary comment: {pr_comment}")
            return None
        pr_comment = self.limit_output_characters(pr_comment, self.max_comment_length)
        comment = self.pr.comment(pr_comment)
        if is_temporary:
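
The suggestion-publishing loop above rewrites GitHub-style suggestion blocks into plain diff blocks, since Bitbucket cannot render committable suggestions. A standalone sketch of that conversion (editor's illustration; the inputs are made up):

import difflib
import re

existing_code = "total = 0\nfor x in items:\n    total += x\n"
improved_code = "total = sum(items)\n"
diff = difflib.unified_diff(existing_code.split('\n'), improved_code.split('\n'), n=999)
patch = "\n".join("\n".join(diff).splitlines()[5:]).strip('\n')  # drop the ---/+++/@@ preamble
diff_code = f"\n\n```diff\n{patch.rstrip()}\n```"
body = "```suggestion\ntotal = sum(items)\n```"
body = re.sub(r'```suggestion.*?```', diff_code, body, flags=re.DOTALL)
print(body)  # the comment body now carries a rendered diff instead of a suggestion block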
@ -1,21 +1,16 @@

import difflib
import re
from typing import Optional, Tuple
from urllib.parse import quote_plus, urlparse

from atlassian.bitbucket import Bitbucket
from packaging.version import parse as parse_version
from requests.exceptions import HTTPError

from ..algo.git_patch_processing import decode_if_bytes
from ..algo.language_handler import is_valid_file
from ..algo.types import EDIT_TYPE, FilePatchInfo
from ..algo.utils import (find_line_number_of_relevant_line_in_file,
                          load_large_diff)
from ..config_loader import get_settings
from ..log import get_logger
from .git_provider import GitProvider


class BitbucketServerProvider(GitProvider):
@ -40,7 +35,7 @@ class BitbucketServerProvider(GitProvider):

                                               token=get_settings().get("BITBUCKET_SERVER.BEARER_TOKEN",
                                                                        None))
        try:
            self.bitbucket_api_version = parse_version(self.bitbucket_client.get("rest/api/1.0/application-properties").get('version'))
        except Exception:
            self.bitbucket_api_version = None

@ -70,37 +65,24 @@ class BitbucketServerProvider(GitProvider):

        post_parameters_list = []
        for suggestion in code_suggestions:
            body = suggestion["body"]
            original_suggestion = suggestion.get('original_suggestion', None)  # needed for diff code
            if original_suggestion:
                try:
                    existing_code = original_suggestion['existing_code'].rstrip() + "\n"
                    improved_code = original_suggestion['improved_code'].rstrip() + "\n"
                    diff = difflib.unified_diff(existing_code.split('\n'),
                                                improved_code.split('\n'), n=999)
                    patch_orig = "\n".join(diff)
                    patch = "\n".join(patch_orig.splitlines()[5:]).strip('\n')
                    diff_code = f"\n\n```diff\n{patch.rstrip()}\n```"
                    # replace ```suggestion ... ``` with diff_code, using regex:
                    body = re.sub(r'```suggestion.*?```', diff_code, body, flags=re.DOTALL)
                except Exception as e:
                    get_logger().exception(f"Bitbucket failed to get diff code for publishing, error: {e}")
                    continue
            relevant_file = suggestion["relevant_file"]
            relevant_lines_start = suggestion["relevant_lines_start"]
            relevant_lines_end = suggestion["relevant_lines_end"]

            if not relevant_lines_start or relevant_lines_start == -1:
                get_logger().warning(
                    f"Failed to publish code suggestion, relevant_lines_start is {relevant_lines_start}"
                )
                continue

            if relevant_lines_end < relevant_lines_start:
                get_logger().warning(
                    f"Failed to publish code suggestion, "
                    f"relevant_lines_end is {relevant_lines_end} and "
                    f"relevant_lines_start is {relevant_lines_start}"
                )
                continue

            if relevant_lines_end > relevant_lines_start:

@ -177,7 +159,7 @@ class BitbucketServerProvider(GitProvider):

        head_sha = self.pr.fromRef['latestCommit']

        # if Bitbucket api version is >= 8.16 then use the merge-base api for 2-way diff calculation
        if self.bitbucket_api_version is not None and self.bitbucket_api_version >= parse_version("8.16"):
            try:
                base_sha = self.bitbucket_client.get(self._get_merge_base())['id']
            except Exception as e:

@ -192,7 +174,7 @@ class BitbucketServerProvider(GitProvider):

            # if Bitbucket api version is None or < 7.0 then do a simple diff with a guaranteed common ancestor
            base_sha = source_commits_list[-1]['parents'][0]['id']
            # if Bitbucket api version is 7.0-8.15 then use 2-way diff functionality for the base_sha
            if self.bitbucket_api_version is not None and self.bitbucket_api_version >= parse_version("7.0"):
                try:
                    destination_commits = list(
                        self.bitbucket_client.get_commits(self.workspace_slug, self.repo_slug, base_sha,

@ -218,21 +200,25 @@ class BitbucketServerProvider(GitProvider):

                case 'ADD':
                    edit_type = EDIT_TYPE.ADDED
                    new_file_content_str = self.get_file(file_path, head_sha)
                    new_file_content_str = decode_if_bytes(new_file_content_str)
                    original_file_content_str = ""
                case 'DELETE':
                    edit_type = EDIT_TYPE.DELETED
                    new_file_content_str = ""
                    original_file_content_str = self.get_file(file_path, base_sha)
                    original_file_content_str = decode_if_bytes(original_file_content_str)
                case 'RENAME':
                    edit_type = EDIT_TYPE.RENAMED
                case _:
                    edit_type = EDIT_TYPE.MODIFIED
                    original_file_content_str = self.get_file(file_path, base_sha)
                    original_file_content_str = decode_if_bytes(original_file_content_str)
                    new_file_content_str = self.get_file(file_path, head_sha)
                    new_file_content_str = decode_if_bytes(new_file_content_str)

            patch = load_large_diff(file_path, new_file_content_str, original_file_content_str)

@ -343,10 +329,10 @@ class BitbucketServerProvider(GitProvider):

        for comment in comments:
            if 'position' in comment:
                self.publish_inline_comment(comment['body'], comment['position'], comment['path'])
            elif 'start_line' in comment:  # multi-line comment
                # note that bitbucket does not seem to support range - only a comment on a single line - https://community.developer.atlassian.com/t/api-post-endpoint-for-inline-pull-request-comments/60452
                self.publish_inline_comment(comment['body'], comment['start_line'], comment['path'])
            elif 'line' in comment:  # single-line comment
                self.publish_inline_comment(comment['body'], comment['line'], comment['path'])
            else:
                get_logger().error(f"Could not publish inline comment: {comment}")
@ -4,15 +4,13 @@ from collections import Counter

from typing import List, Optional, Tuple
from urllib.parse import urlparse

from pr_agent.algo.language_handler import is_valid_file
from pr_agent.algo.types import EDIT_TYPE, FilePatchInfo
from pr_agent.git_providers.codecommit_client import CodeCommitClient

from ..algo.utils import load_large_diff
from ..config_loader import get_settings
from ..log import get_logger
from .git_provider import GitProvider


class PullRequestCCMimic:
    """
@ -12,9 +12,9 @@ import requests

import urllib3.util
from git import Repo

from pr_agent.algo.types import EDIT_TYPE, FilePatchInfo
from pr_agent.config_loader import get_settings
from pr_agent.git_providers.git_provider import GitProvider
from pr_agent.git_providers.local_git_provider import PullRequestMimic
from pr_agent.log import get_logger

@ -1,12 +1,12 @@

from abc import ABC, abstractmethod
# enum EDIT_TYPE (ADDED, DELETED, MODIFIED, RENAMED)
from typing import Optional

from pr_agent.algo.types import FilePatchInfo
from pr_agent.algo.utils import Range, process_description
from pr_agent.config_loader import get_settings
from pr_agent.log import get_logger

MAX_FILES_ALLOWED_FULL = 50

class GitProvider(ABC):

@ -62,8 +62,8 @@ class GitProvider(ABC):

        pass

    def get_pr_description(self, full: bool = True, split_changes_walkthrough=False) -> str or tuple:
        from pr_agent.algo.utils import clip_tokens
        from pr_agent.config_loader import get_settings
        max_tokens_description = get_settings().get("CONFIG.MAX_DESCRIPTION_TOKENS", None)
        description = self.get_pr_description_full() if full else self.get_user_description()
        if split_changes_walkthrough:
@ -1,30 +1,22 @@
|
||||
import copy
|
||||
import difflib
|
||||
import hashlib
|
||||
import itertools
|
||||
import re
|
||||
import time
|
||||
import hashlib
|
||||
import traceback
|
||||
from datetime import datetime
|
||||
from typing import Optional, Tuple
|
||||
from urllib.parse import urlparse
|
||||
|
||||
from github import AppAuthentication, Auth, Github
|
||||
from retry import retry
|
||||
from starlette_context import context
|
||||
|
||||
from ..algo.file_filter import filter_ignored
|
||||
from ..algo.git_patch_processing import extract_hunk_headers
|
||||
from ..algo.language_handler import is_valid_file
|
||||
from ..algo.types import EDIT_TYPE
|
||||
from ..algo.utils import (PRReviewHeader, Range, clip_tokens,
|
||||
find_line_number_of_relevant_line_in_file,
|
||||
load_large_diff, set_file_languages)
|
||||
from ..algo.utils import PRReviewHeader, load_large_diff, clip_tokens, find_line_number_of_relevant_line_in_file, Range
|
||||
from ..config_loader import get_settings
|
||||
from ..log import get_logger
|
||||
from ..servers.utils import RateLimitExceeded
|
||||
from .git_provider import (MAX_FILES_ALLOWED_FULL, FilePatchInfo, GitProvider,
|
||||
IncrementalPR)
|
||||
from .git_provider import FilePatchInfo, GitProvider, IncrementalPR, MAX_FILES_ALLOWED_FULL
|
||||
|
||||
|
||||
class GithubProvider(GitProvider):
|
||||
@ -203,24 +195,7 @@ class GithubProvider(GitProvider):
                if avoid_load:
                    original_file_content_str = ""
                else:
                    # The base.sha will point to the current state of the base branch (including parallel merges), not the original base commit when the PR was created
                    # We can fix this by finding the merge base commit between the PR head and base branches
                    # Note that The pr.head.sha is actually correct as is - it points to the latest commit in your PR branch.
                    # This SHA isn't affected by parallel merges to the base branch since it's specific to your PR's branch.
                    repo = self.repo_obj
                    pr = self.pr
                    try:
                        compare = repo.compare(pr.base.sha, pr.head.sha)
                        merge_base_commit = compare.merge_base_commit
                    except Exception as e:
                        get_logger().error(f"Failed to get merge base commit: {e}")
                        merge_base_commit = pr.base
                    if merge_base_commit.sha != pr.base.sha:
                        get_logger().info(
                            f"Using merge base commit {merge_base_commit.sha} instead of base commit "
                            f"{pr.base.sha} for {file.filename}")
                    original_file_content_str = self._get_pr_file_content(file, merge_base_commit.sha)

                    original_file_content_str = self._get_pr_file_content(file, self.pr.base.sha)
                if not patch:
                    patch = load_large_diff(file.filename, new_file_content_str, original_file_content_str)

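The comment block in this hunk explains the motivation: `pr.base.sha` tracks the moving tip of the base branch, so diffing against it can pick up unrelated parallel merges, while the merge base is the commit the PR actually forked from. A standalone sketch of the same lookup with PyGithub (the token, repository name and PR number below are placeholders, not values from this diff):

```python
# Minimal sketch, assuming PyGithub is installed; token, repo and PR number are placeholders.
from github import Github

gh = Github("<GITHUB_TOKEN>")                    # placeholder token
repo = gh.get_repo("example-org/example-repo")   # placeholder repository
pr = repo.get_pull(123)                          # placeholder PR number

try:
    # compare(base, head) exposes the common ancestor as merge_base_commit
    comparison = repo.compare(pr.base.sha, pr.head.sha)
    base_sha = comparison.merge_base_commit.sha
except Exception:
    # fall back to the (possibly drifted) tip of the base branch
    base_sha = pr.base.sha

print(f"Diffing file contents against {base_sha} instead of {pr.base.sha}")
```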
@ -304,7 +279,8 @@ class GithubProvider(GitProvider):
|
||||
relevant_line_in_file,
|
||||
absolute_position)
|
||||
if position == -1:
|
||||
get_logger().info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
|
||||
subject_type = "FILE"
|
||||
else:
|
||||
subject_type = "LINE"
|
||||
@ -316,9 +292,11 @@ class GithubProvider(GitProvider):
|
||||
# publish all comments in a single message
|
||||
self.pr.create_review(commit=self.last_commit_id, comments=comments)
|
||||
except Exception as e:
|
||||
get_logger().info(f"Initially failed to publish inline comments as committable")
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().error(f"Failed to publish inline comments")
|
||||
|
||||
if (getattr(e, "status", None) == 422 and not disable_fallback):
|
||||
if (getattr(e, "status", None) == 422
|
||||
and get_settings().github.publish_inline_comments_fallback_with_verification and not disable_fallback):
|
||||
pass # continue to try _publish_inline_comments_fallback_with_verification
|
||||
else:
|
||||
raise e # will end up with publishing the comments one by one
|
||||
@ -326,7 +304,8 @@ class GithubProvider(GitProvider):
|
||||
try:
|
||||
self._publish_inline_comments_fallback_with_verification(comments)
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed to publish inline code comments fallback, error: {e}")
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().error(f"Failed to publish inline code comments fallback, error: {e}")
|
||||
raise e
|
||||
|
||||
def _publish_inline_comments_fallback_with_verification(self, comments: list[dict]):
|
||||
@ -351,9 +330,11 @@ class GithubProvider(GitProvider):
|
||||
for comment in fixed_comments_as_one_liner:
|
||||
try:
|
||||
self.publish_inline_comments([comment], disable_fallback=True)
|
||||
get_logger().info(f"Published invalid comment as a single line comment: {comment}")
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"Published invalid comment as a single line comment: {comment}")
|
||||
except:
|
||||
get_logger().error(f"Failed to publish invalid comment as a single line comment: {comment}")
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().error(f"Failed to publish invalid comment as a single line comment: {comment}")
|
||||
|
||||
def _verify_code_comment(self, comment: dict):
|
||||
is_verified = False
|
||||
@ -411,7 +392,8 @@ class GithubProvider(GitProvider):
|
||||
if fixed_comment != comment:
|
||||
fixed_comments.append(fixed_comment)
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed to fix inline comment, error: {e}")
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().error(f"Failed to fix inline comment, error: {e}")
|
||||
return fixed_comments
|
||||
|
||||
def publish_code_suggestions(self, code_suggestions: list) -> bool:
|
||||
@ -419,24 +401,23 @@ class GithubProvider(GitProvider):
|
||||
Publishes code suggestions as comments on the PR.
|
||||
"""
|
||||
post_parameters_list = []
|
||||
|
||||
code_suggestions_validated = self.validate_comments_inside_hunks(code_suggestions)
|
||||
|
||||
for suggestion in code_suggestions_validated:
|
||||
for suggestion in code_suggestions:
|
||||
body = suggestion['body']
|
||||
relevant_file = suggestion['relevant_file']
|
||||
relevant_lines_start = suggestion['relevant_lines_start']
|
||||
relevant_lines_end = suggestion['relevant_lines_end']
|
||||
|
||||
if not relevant_lines_start or relevant_lines_start == -1:
|
||||
get_logger().exception(
|
||||
f"Failed to publish code suggestion, relevant_lines_start is {relevant_lines_start}")
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().exception(
|
||||
f"Failed to publish code suggestion, relevant_lines_start is {relevant_lines_start}")
|
||||
continue
|
||||
|
||||
if relevant_lines_end < relevant_lines_start:
|
||||
get_logger().exception(f"Failed to publish code suggestion, "
|
||||
f"relevant_lines_end is {relevant_lines_end} and "
|
||||
f"relevant_lines_start is {relevant_lines_start}")
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().exception(f"Failed to publish code suggestion, "
|
||||
f"relevant_lines_end is {relevant_lines_end} and "
|
||||
f"relevant_lines_start is {relevant_lines_start}")
|
||||
continue
|
||||
|
||||
if relevant_lines_end > relevant_lines_start:
|
||||
@ -460,7 +441,8 @@ class GithubProvider(GitProvider):
|
||||
self.publish_inline_comments(post_parameters_list)
|
||||
return True
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed to publish code suggestion, error: {e}")
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().error(f"Failed to publish code suggestion, error: {e}")
|
||||
return False
|
||||
|
||||
def edit_comment(self, comment, body: str):
|
||||
@ -519,7 +501,6 @@ class GithubProvider(GitProvider):
|
||||
elif self.deployment_type == 'user':
|
||||
same_comment_creator = self.github_user_id == existing_comment['user']['login']
|
||||
if existing_comment['subject_type'] == 'file' and comment['path'] == existing_comment['path'] and same_comment_creator:
|
||||
|
||||
headers, data_patch = self.pr._requester.requestJsonAndCheck(
|
||||
"PATCH", f"{self.base_url}/repos/{self.repo}/pulls/comments/{existing_comment['id']}", input={"body":comment['body']}
|
||||
)
|
||||
@ -531,7 +512,8 @@ class GithubProvider(GitProvider):
|
||||
)
|
||||
return True
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed to publish diffview file summary, error: {e}")
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().error(f"Failed to publish diffview file summary, error: {e}")
|
||||
return False
|
||||
|
||||
def remove_initial_comment(self):
|
||||
@ -819,7 +801,8 @@ class GithubProvider(GitProvider):
|
||||
link = f"{self.base_url_html}/{self.repo}/pull/{self.pr_num}/files#diff-{sha_file}R{absolute_position}"
|
||||
return link
|
||||
except Exception as e:
|
||||
get_logger().info(f"Failed adding line link, error: {e}")
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"Failed adding line link, error: {e}")
|
||||
|
||||
return ""
|
||||
|
||||
@ -879,89 +862,3 @@ class GithubProvider(GitProvider):

    def calc_pr_statistics(self, pull_request_data: dict):
        return {}

    def validate_comments_inside_hunks(self, code_suggestions):
        """
        validate that all committable comments are inside PR hunks - this is a must for committable comments in GitHub
        """
        code_suggestions_copy = copy.deepcopy(code_suggestions)
        diff_files = self.get_diff_files()
        RE_HUNK_HEADER = re.compile(
            r"^@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@[ ]?(.*)")

        diff_files = set_file_languages(diff_files)

        for suggestion in code_suggestions_copy:
            try:
                relevant_file_path = suggestion['relevant_file']
                for file in diff_files:
                    if file.filename == relevant_file_path:

                        # generate on-demand the patches range for the relevant file
                        patch_str = file.patch
                        if not hasattr(file, 'patches_range'):
                            file.patches_range = []
                            patch_lines = patch_str.splitlines()
                            for i, line in enumerate(patch_lines):
                                if line.startswith('@@'):
                                    match = RE_HUNK_HEADER.match(line)
                                    # identify hunk header
                                    if match:
                                        section_header, size1, size2, start1, start2 = extract_hunk_headers(match)
                                        file.patches_range.append({'start': start2, 'end': start2 + size2 - 1})

                        patches_range = file.patches_range
                        comment_start_line = suggestion.get('relevant_lines_start', None)
                        comment_end_line = suggestion.get('relevant_lines_end', None)
                        original_suggestion = suggestion.get('original_suggestion', None) # needed for diff code
                        if not comment_start_line or not comment_end_line or not original_suggestion:
                            continue

                        # check if the comment is inside a valid hunk
                        is_valid_hunk = False
                        min_distance = float('inf')
                        patch_range_min = None
                        # find the hunk that contains the comment, or the closest one
                        for i, patch_range in enumerate(patches_range):
                            d1 = comment_start_line - patch_range['start']
                            d2 = patch_range['end'] - comment_end_line
                            if d1 >= 0 and d2 >= 0: # found a valid hunk
                                is_valid_hunk = True
                                min_distance = 0
                                patch_range_min = patch_range
                                break
                            elif d1 * d2 <= 0: # comment is possibly inside the hunk
                                d1_clip = abs(min(0, d1))
                                d2_clip = abs(min(0, d2))
                                d = max(d1_clip, d2_clip)
                                if d < min_distance:
                                    patch_range_min = patch_range
                                    min_distance = min(min_distance, d)
                        if not is_valid_hunk:
                            if min_distance < 10: # 10 lines - a reasonable distance to consider the comment inside the hunk
                                # make the suggestion non-committable, yet multi line
                                suggestion['relevant_lines_start'] = max(suggestion['relevant_lines_start'], patch_range_min['start'])
                                suggestion['relevant_lines_end'] = min(suggestion['relevant_lines_end'], patch_range_min['end'])
                                body = suggestion['body'].strip()

                                # present new diff code in collapsible
                                existing_code = original_suggestion['existing_code'].rstrip() + "\n"
                                improved_code = original_suggestion['improved_code'].rstrip() + "\n"
                                diff = difflib.unified_diff(existing_code.split('\n'),
                                                            improved_code.split('\n'), n=999)
                                patch_orig = "\n".join(diff)
                                patch = "\n".join(patch_orig.splitlines()[5:]).strip('\n')
                                diff_code = f"\n\n<details><summary>New proposed code:</summary>\n\n```diff\n{patch.rstrip()}\n```"
                                # replace ```suggestion ... ``` with diff_code, using regex:
                                body = re.sub(r'```suggestion.*?```', diff_code, body, flags=re.DOTALL)
                                body += "\n\n</details>"
                                suggestion['body'] = body
                                get_logger().info(f"Comment was moved to a valid hunk, "
                                                  f"start_line={suggestion['relevant_lines_start']}, end_line={suggestion['relevant_lines_end']}, file={file.filename}")
                            else:
                                get_logger().error(f"Comment is not inside a valid hunk, "
                                                   f"start_line={suggestion['relevant_lines_start']}, end_line={suggestion['relevant_lines_end']}, file={file.filename}")
            except Exception as e:
                get_logger().error(f"Failed to process patch for committable comment, error: {e}")
        return code_suggestions_copy

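The `validate_comments_inside_hunks` helper in this hunk maps every `@@` hunk header to the range of new-file lines that hunk covers, then checks whether a suggestion falls inside (or within about 10 lines of) one of those ranges. A small self-contained sketch of that mapping, reading the header fields straight from the regex groups instead of the repo's `extract_hunk_headers` helper (the sample patch text is invented for illustration):

```python
import re

# Same pattern as RE_HUNK_HEADER above: "@@ -old_start,old_size +new_start,new_size @@ section"
RE_HUNK_HEADER = re.compile(r"^@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@[ ]?(.*)")

sample_patch = """@@ -10,6 +10,8 @@ def handler():
 context line
+added line
 context line
@@ -40,3 +42,3 @@ def other():
 context line
"""

patches_range = []
for line in sample_patch.splitlines():
    match = RE_HUNK_HEADER.match(line)
    if match:
        new_start = int(match.group(3))
        new_size = int(match.group(4) or 1)
        # new-file lines new_start .. new_start + new_size - 1 belong to this hunk
        patches_range.append({'start': new_start, 'end': new_start + new_size - 1})

print(patches_range)  # [{'start': 10, 'end': 17}, {'start': 42, 'end': 44}]

def is_inside_some_hunk(start_line: int, end_line: int) -> bool:
    return any(r['start'] <= start_line and end_line <= r['end'] for r in patches_range)

assert is_inside_some_hunk(11, 12) is True
assert is_inside_some_hunk(30, 31) is False
```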
@ -1,4 +1,3 @@
|
||||
import difflib
|
||||
import hashlib
|
||||
import re
|
||||
from typing import Optional, Tuple
|
||||
@ -8,16 +7,13 @@ import gitlab
|
||||
import requests
|
||||
from gitlab import GitlabGetError
|
||||
|
||||
from pr_agent.algo.types import EDIT_TYPE, FilePatchInfo
|
||||
|
||||
from ..algo.file_filter import filter_ignored
|
||||
from ..algo.language_handler import is_valid_file
|
||||
from ..algo.utils import (clip_tokens,
|
||||
find_line_number_of_relevant_line_in_file,
|
||||
load_large_diff)
|
||||
from ..algo.utils import load_large_diff, clip_tokens, find_line_number_of_relevant_line_in_file
|
||||
from ..config_loader import get_settings
|
||||
from .git_provider import GitProvider, MAX_FILES_ALLOWED_FULL
|
||||
from pr_agent.algo.types import EDIT_TYPE, FilePatchInfo
|
||||
from ..log import get_logger
|
||||
from .git_provider import MAX_FILES_ALLOWED_FULL, GitProvider
|
||||
|
||||
|
||||
class DiffNotFoundError(Exception):
|
||||
@ -194,9 +190,6 @@ class GitLabProvider(GitProvider):
|
||||
self.publish_persistent_comment_full(pr_comment, initial_header, update_header, name, final_update_message)
|
||||
|
||||
def publish_comment(self, mr_comment: str, is_temporary: bool = False):
|
||||
if is_temporary and not get_settings().config.publish_output_progress:
|
||||
get_logger().debug(f"Skipping publish_comment for temporary comment: {mr_comment}")
|
||||
return None
|
||||
mr_comment = self.limit_output_characters(mr_comment, self.max_comment_chars)
|
||||
comment = self.mr.notes.create({'body': mr_comment})
|
||||
if is_temporary:
|
||||
@ -282,23 +275,20 @@ class GitLabProvider(GitProvider):
            new_code_snippet = original_suggestion['improved_code']
            content = original_suggestion['suggestion_content']
            label = original_suggestion['label']
            score = original_suggestion.get('score', 7)
            if 'score' in original_suggestion:
                score = original_suggestion['score']
            else:
                score = 7

            if hasattr(self, 'main_language'):
                language = self.main_language
            else:
                language = ''
            link = self.get_line_link(relevant_file, line_start, line_end)
            body_fallback =f"**Suggestion:** {content} [{label}, importance: {score}]\n\n"
            body_fallback +=f"\n\n<details><summary>[{target_file.filename} [{line_start}-{line_end}]]({link}):</summary>\n\n"
            body_fallback += f"\n\n___\n\n`(Cannot implement directly - GitLab API allows committable suggestions strictly on MR diff lines)`"
            body_fallback+="</details>\n\n"
            diff_patch = difflib.unified_diff(old_code_snippet.split('\n'),
                                              new_code_snippet.split('\n'), n=999)
            patch_orig = "\n".join(diff_patch)
            patch = "\n".join(patch_orig.splitlines()[5:]).strip('\n')
            diff_code = f"\n\n```diff\n{patch.rstrip()}\n```"
            body_fallback += diff_code
            body_fallback =f"**Suggestion:** {content} [{label}, importance: {score}]\n___\n"
            body_fallback +=f"\n\nReplace lines ([{line_start}-{line_end}]({link}))\n\n```{language}\n{old_code_snippet}\n````\n\n"
            body_fallback +=f"with\n\n```{language}\n{new_code_snippet}\n````"
            body_fallback += f"\n\n___\n\n`(Cannot implement this suggestion directly, as gitlab API does not enable committing to a non -+ line in a PR)`"

            # Create a general note on the file in the MR
            self.mr.notes.create({
@ -311,7 +301,6 @@ class GitLabProvider(GitProvider):
                    'file_path': f'{target_file.filename}',
                }
            })
            get_logger().debug(f"Created fallback comment in MR {self.id_mr} with position {pos_obj}")

            # get_logger().debug(
            #     f"Failed to create comment in MR {self.id_mr} with position {pos_obj} (probably not a '+' line)")
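The fallback body above leans on a small `difflib` trick: `n=999` forces a single oversized hunk so the whole snippet renders as one contiguous block, and `splitlines()[5:]` drops the `---`/`+++` file headers and the `@@` hunk header that `unified_diff` prepends before the changed lines. A standalone sketch of just that trick (the code snippets here are invented for illustration):

```python
import difflib

# Illustrative snippets only - not taken from the diff above.
old_code_snippet = "total = 0\nfor x in items:\n    total += x\n"
new_code_snippet = "total = sum(items)\n"

# n=999 forces one hunk with full context, so the result is a single block.
diff_patch = difflib.unified_diff(old_code_snippet.split('\n'),
                                  new_code_snippet.split('\n'), n=999)
patch_orig = "\n".join(diff_patch)

# The first five joined lines are the ---/+++ file headers (each followed by a blank
# line from its embedded newline) plus the @@ hunk header; strip('\n') removes the
# stray blank line left by the hunk header's own newline.
patch = "\n".join(patch_orig.splitlines()[5:]).strip('\n')
print(patch)  # only -/+/context lines remain; the provider wraps this in a fenced diff block
```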
@ -4,9 +4,9 @@ from typing import List
|
||||
|
||||
from git import Repo
|
||||
|
||||
from pr_agent.algo.types import EDIT_TYPE, FilePatchInfo
|
||||
from pr_agent.config_loader import _find_repository_root, get_settings
|
||||
from pr_agent.git_providers.git_provider import GitProvider
|
||||
from pr_agent.algo.types import EDIT_TYPE, FilePatchInfo
|
||||
from pr_agent.log import get_logger
|
||||
|
||||
|
||||
|
@ -3,12 +3,11 @@ import os
|
||||
import tempfile
|
||||
|
||||
from dynaconf import Dynaconf
|
||||
from starlette_context import context
|
||||
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers import (get_git_provider,
|
||||
get_git_provider_with_context)
|
||||
from pr_agent.git_providers import get_git_provider, get_git_provider_with_context
|
||||
from pr_agent.log import get_logger
|
||||
from starlette_context import context
|
||||
|
||||
|
||||
def apply_repo_settings(pr_url):
|
||||
@ -99,24 +98,5 @@ def set_claude_model():
    """
    model_claude = "bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0"
    get_settings().set('config.model', model_claude)
    get_settings().set('config.model_weak', model_claude)
    get_settings().set('config.model_turbo', model_claude)
    get_settings().set('config.fallback_models', [model_claude])


def is_user_name_a_bot(name: str) -> bool:
    if not name:
        return False
    bot_indicators = ['codium', 'bot_', 'bot-', '_bot', '-bot', 'qodo', "service", "github", "jenkins", "auto",
                      "cicd", "validator", "ci-", "assistant", "srv-"]
    return any(indicator in name.lower() for indicator in bot_indicators)


def is_pr_description_indicating_bot(description: str) -> bool:
    if not description:
        return False
    bot_descriptions = ["Snyk has created this PR", "This PR was created automatically by",
                        "This PR was created by a bot",
                        "This pull request was automatically generated by"]
    # Check is it's a Snyk bot
    if any(bot_description in description for bot_description in bot_descriptions):
        return True
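Both helpers are intentionally simple substring checks, which is what lets the webhook servers further down in this diff share them. A rough usage sketch (the handler and payload shape are illustrative, not part of the diff; only the two helper functions and their import path come from it):

```python
# Sketch: skipping bot-authored PRs in a webhook handler.
# The `payload` layout and the handler itself are hypothetical.
from pr_agent.git_providers.utils import is_user_name_a_bot, is_pr_description_indicating_bot

def should_skip_event(payload: dict) -> bool:
    sender = payload.get("sender", {}).get("login", "")
    description = payload.get("pull_request", {}).get("body", "") or ""
    if is_user_name_a_bot(sender):
        return True          # e.g. "dependabot", "jenkins-deployer", "qodo-merge"
    if is_pr_description_indicating_bot(description):
        return True          # e.g. "Snyk has created this PR ..."
    return False

assert should_skip_event({"sender": {"login": "jenkins-deployer"}}) is True
```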
@ -1,6 +1,5 @@
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.identity_providers.default_identity_provider import \
|
||||
DefaultIdentityProvider
|
||||
from pr_agent.identity_providers.default_identity_provider import DefaultIdentityProvider
|
||||
|
||||
_IDENTITY_PROVIDERS = {
|
||||
'default': DefaultIdentityProvider
|
||||
|
@ -1,5 +1,4 @@
|
||||
from pr_agent.identity_providers.identity_provider import (Eligibility,
|
||||
IdentityProvider)
|
||||
from pr_agent.identity_providers.identity_provider import Eligibility, IdentityProvider
|
||||
|
||||
|
||||
class DefaultIdentityProvider(IdentityProvider):
|
||||
|
@ -8,10 +8,12 @@ def get_secret_provider():
|
||||
provider_id = get_settings().config.secret_provider
|
||||
if provider_id == 'google_cloud_storage':
|
||||
try:
|
||||
from pr_agent.secret_providers.google_cloud_storage_secret_provider import \
|
||||
GoogleCloudStorageSecretProvider
|
||||
from pr_agent.secret_providers.google_cloud_storage_secret_provider import GoogleCloudStorageSecretProvider
|
||||
return GoogleCloudStorageSecretProvider()
|
||||
except Exception as e:
|
||||
raise ValueError(f"Failed to initialize google_cloud_storage secret provider {provider_id}") from e
|
||||
else:
|
||||
raise ValueError("Unknown SECRET_PROVIDER")
|
||||
|
||||
|
||||
|
||||
|
@ -9,9 +9,9 @@ import secrets
|
||||
from urllib.parse import unquote
|
||||
|
||||
import uvicorn
|
||||
from fastapi import APIRouter, Depends, FastAPI, HTTPException, Request
|
||||
from fastapi.encoders import jsonable_encoder
|
||||
from fastapi import APIRouter, Depends, FastAPI, HTTPException
|
||||
from fastapi.security import HTTPBasic, HTTPBasicCredentials
|
||||
from fastapi.encoders import jsonable_encoder
|
||||
from starlette import status
|
||||
from starlette.background import BackgroundTasks
|
||||
from starlette.middleware import Middleware
|
||||
@ -23,6 +23,9 @@ from pr_agent.agent.pr_agent import PRAgent, command2class
|
||||
from pr_agent.algo.utils import update_settings_from_args
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers.utils import apply_repo_settings
|
||||
from pr_agent.log import get_logger
|
||||
from fastapi import Request, Depends
|
||||
from fastapi.security import HTTPBasic, HTTPBasicCredentials
|
||||
from pr_agent.log import LoggingFormat, get_logger, setup_logger
|
||||
|
||||
setup_logger(fmt=LoggingFormat.JSON, level="DEBUG")
|
||||
@ -64,9 +67,6 @@ def authorize(credentials: HTTPBasicCredentials = Depends(security)):
|
||||
|
||||
async def _perform_commands_azure(commands_conf: str, agent: PRAgent, api_url: str, log_context: dict):
|
||||
apply_repo_settings(api_url)
|
||||
if commands_conf == "pr_commands" and get_settings().config.disable_auto_feedback: # auto commands for PR, and auto feedback is disabled
|
||||
get_logger().info(f"Auto feedback is disabled, skipping auto commands for PR {api_url=}", **log_context)
|
||||
return
|
||||
commands = get_settings().get(f"azure_devops_server.{commands_conf}")
|
||||
get_settings().set("config.is_auto_command", True)
|
||||
for command in commands:
|
||||
|
@ -19,7 +19,7 @@ from starlette_context.middleware import RawContextMiddleware
|
||||
from pr_agent.agent.pr_agent import PRAgent
|
||||
from pr_agent.algo.utils import update_settings_from_args
|
||||
from pr_agent.config_loader import get_settings, global_settings
|
||||
from pr_agent.git_providers.utils import apply_repo_settings, is_user_name_a_bot, is_pr_description_indicating_bot
|
||||
from pr_agent.git_providers.utils import apply_repo_settings
|
||||
from pr_agent.identity_providers import get_identity_provider
|
||||
from pr_agent.identity_providers.identity_provider import Eligibility
|
||||
from pr_agent.log import LoggingFormat, get_logger, setup_logger
|
||||
@ -77,9 +77,6 @@ async def handle_manifest(request: Request, response: Response):
|
||||
|
||||
async def _perform_commands_bitbucket(commands_conf: str, agent: PRAgent, api_url: str, log_context: dict, data: dict):
|
||||
apply_repo_settings(api_url)
|
||||
if commands_conf == "pr_commands" and get_settings().config.disable_auto_feedback: # auto commands for PR, and auto feedback is disabled
|
||||
get_logger().info(f"Auto feedback is disabled, skipping auto commands for PR {api_url=}")
|
||||
return
|
||||
if data.get("event", "") == "pullrequest:created":
|
||||
if not should_process_pr_logic(data):
|
||||
return
|
||||
@ -101,25 +98,11 @@ async def _perform_commands_bitbucket(commands_conf: str, agent: PRAgent, api_ur
|
||||
|
||||
def is_bot_user(data) -> bool:
|
||||
try:
|
||||
actor = data.get("data", {}).get("actor", {})
|
||||
description = data.get("data", {}).get("pullrequest", {}).get("description", "")
|
||||
# allow actor type: user . if it's "AppUser" or "team" then it is a bot user
|
||||
allowed_actor_types = {"user"}
|
||||
if actor and actor["type"].lower() not in allowed_actor_types:
|
||||
get_logger().info(f"BitBucket actor type is not 'user', skipping: {actor}")
|
||||
return True
|
||||
|
||||
username = actor.get("username", "")
|
||||
if username and is_user_name_a_bot(username):
|
||||
get_logger().info(f"BitBucket actor is a bot user, skipping: {username}")
|
||||
return True
|
||||
|
||||
if description and is_pr_description_indicating_bot(description):
|
||||
get_logger().info(f"Description indicates a bot user: {actor}",
|
||||
artifact={"description": description})
|
||||
if data["data"]["actor"]["type"] != "user":
|
||||
get_logger().info(f"BitBucket actor type is not 'user': {data['data']['actor']['type']}")
|
||||
return True
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed 'is_bot_user' logic: {e}")
|
||||
get_logger().error("Failed 'is_bot_user' logic: {e}")
|
||||
return False
|
||||
|
||||
|
||||
@ -178,18 +161,16 @@ async def handle_github_webhooks(background_tasks: BackgroundTasks, request: Req
|
||||
return "OK"
|
||||
|
||||
# Get the username of the sender
|
||||
actor = data.get("data", {}).get("actor", {})
|
||||
if actor:
|
||||
try:
|
||||
username = data["data"]["actor"]["username"]
|
||||
except KeyError:
|
||||
try:
|
||||
username = actor["username"]
|
||||
username = data["data"]["actor"]["display_name"]
|
||||
except KeyError:
|
||||
try:
|
||||
username = actor["display_name"]
|
||||
except KeyError:
|
||||
username = actor["nickname"]
|
||||
log_context["sender"] = username
|
||||
username = data["data"]["actor"]["nickname"]
|
||||
log_context["sender"] = username
|
||||
|
||||
sender_id = data.get("data", {}).get("actor", {}).get("account_id", "")
|
||||
sender_id = data["data"]["actor"]["account_id"]
|
||||
log_context["sender_id"] = sender_id
|
||||
jwt_parts = input_jwt.split(".")
|
||||
claim_part = jwt_parts[1]
|
||||
|
@ -6,20 +6,20 @@ from typing import List
|
||||
import uvicorn
|
||||
from fastapi import APIRouter, FastAPI
|
||||
from fastapi.encoders import jsonable_encoder
|
||||
from fastapi.responses import RedirectResponse
|
||||
from starlette import status
|
||||
from starlette.background import BackgroundTasks
|
||||
from starlette.middleware import Middleware
|
||||
from starlette.requests import Request
|
||||
from starlette.responses import JSONResponse
|
||||
from starlette_context.middleware import RawContextMiddleware
|
||||
|
||||
from pr_agent.agent.pr_agent import PRAgent
|
||||
from pr_agent.algo.utils import update_settings_from_args
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers.utils import apply_repo_settings
|
||||
from pr_agent.log import LoggingFormat, get_logger, setup_logger
|
||||
from pr_agent.servers.utils import verify_signature
|
||||
from fastapi.responses import RedirectResponse
|
||||
|
||||
|
||||
setup_logger(fmt=LoggingFormat.JSON, level="DEBUG")
|
||||
router = APIRouter()
|
||||
@ -72,11 +72,6 @@ async def handle_webhook(background_tasks: BackgroundTasks, request: Request):
|
||||
commands_to_run = []
|
||||
|
||||
if data["eventKey"] == "pr:opened":
|
||||
apply_repo_settings(pr_url)
|
||||
if get_settings().config.disable_auto_feedback: # auto commands for PR, and auto feedback is disabled
|
||||
get_logger().info(f"Auto feedback is disabled, skipping auto commands for PR {pr_url}", **log_context)
|
||||
return
|
||||
get_settings().set("config.is_auto_command", True)
|
||||
commands_to_run.extend(_get_commands_list_from_settings('BITBUCKET_SERVER.PR_COMMANDS'))
|
||||
elif data["eventKey"] == "pr:comment:added":
|
||||
commands_to_run.append(data["comment"]["text"])
|
||||
|
@ -15,10 +15,9 @@ from starlette_context.middleware import RawContextMiddleware
|
||||
from pr_agent.agent.pr_agent import PRAgent
|
||||
from pr_agent.algo.utils import update_settings_from_args
|
||||
from pr_agent.config_loader import get_settings, global_settings
|
||||
from pr_agent.git_providers import (get_git_provider,
|
||||
get_git_provider_with_context)
|
||||
from pr_agent.git_providers import get_git_provider, get_git_provider_with_context
|
||||
from pr_agent.git_providers.git_provider import IncrementalPR
|
||||
from pr_agent.git_providers.utils import apply_repo_settings, is_user_name_a_bot, is_pr_description_indicating_bot
|
||||
from pr_agent.git_providers.utils import apply_repo_settings
|
||||
from pr_agent.identity_providers import get_identity_provider
|
||||
from pr_agent.identity_providers.identity_provider import Eligibility
|
||||
from pr_agent.log import LoggingFormat, get_logger, setup_logger
|
||||
@ -238,22 +237,13 @@ def get_log_context(body, event, action, build_number):
|
||||
return log_context, sender, sender_id, sender_type
|
||||
|
||||
|
||||
def is_bot_user(sender, sender_type, user_description):
|
||||
def is_bot_user(sender, sender_type):
|
||||
try:
|
||||
# logic to ignore PRs opened by bot
|
||||
if get_settings().get("GITHUB_APP.IGNORE_BOT_PR", False):
|
||||
if sender_type.lower() == "bot":
|
||||
if 'pr-agent' not in sender:
|
||||
get_logger().info(f"Ignoring PR from '{sender=}' because it is a bot")
|
||||
return True
|
||||
if is_user_name_a_bot(sender):
|
||||
if get_settings().get("GITHUB_APP.IGNORE_BOT_PR", False) and sender_type == "Bot":
|
||||
if 'pr-agent' not in sender:
|
||||
get_logger().info(f"Ignoring PR from '{sender=}' because it is a bot")
|
||||
return True
|
||||
# Ignore PRs opened by bot users based on their description
|
||||
if isinstance(user_description, str) and is_pr_description_indicating_bot(user_description):
|
||||
get_logger().info(f"Description indicates a bot user: {sender}",
|
||||
artifact={"description": user_description})
|
||||
return True
|
||||
return True
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed 'is_bot_user' logic: {e}")
|
||||
return False
|
||||
@ -316,8 +306,7 @@ async def handle_request(body: Dict[str, Any], event: str):
|
||||
log_context, sender, sender_id, sender_type = get_log_context(body, event, action, build_number)
|
||||
|
||||
# logic to ignore PRs opened by bot, PRs with specific titles, labels, source branches, or target branches
|
||||
pr_description = body.get("pull_request", {}).get("body", "")
|
||||
if is_bot_user(sender, sender_type, pr_description) and 'check_run' not in body:
|
||||
if is_bot_user(sender, sender_type) and 'check_run' not in body:
|
||||
return {}
|
||||
if action != 'created' and 'check_run' not in body:
|
||||
if not should_process_pr_logic(body):
|
||||
@ -384,9 +373,6 @@ def _check_pull_request_event(action: str, body: dict, log_context: dict) -> Tup
|
||||
async def _perform_auto_commands_github(commands_conf: str, agent: PRAgent, body: dict, api_url: str,
|
||||
log_context: dict):
|
||||
apply_repo_settings(api_url)
|
||||
if commands_conf == "pr_commands" and get_settings().config.disable_auto_feedback: # auto commands for PR, and auto feedback is disabled
|
||||
get_logger().info(f"Auto feedback is disabled, skipping auto commands for PR {api_url=}")
|
||||
return
|
||||
if not should_process_pr_logic(body): # Here we already updated the configuration with the repo settings
|
||||
return {}
|
||||
commands = get_settings().get(f"github_app.{commands_conf}")
|
||||
|
@ -1,12 +1,11 @@
|
||||
import asyncio
|
||||
import multiprocessing
|
||||
import time
|
||||
import traceback
|
||||
from collections import deque
|
||||
import traceback
|
||||
from datetime import datetime, timezone
|
||||
|
||||
import aiohttp
|
||||
import time
|
||||
import requests
|
||||
import aiohttp
|
||||
|
||||
from pr_agent.agent.pr_agent import PRAgent
|
||||
from pr_agent.config_loader import get_settings
|
||||
@ -84,7 +83,6 @@ async def is_valid_notification(notification, headers, handled_ids, session, use
|
||||
return False, handled_ids
|
||||
async with session.get(latest_comment, headers=headers) as comment_response:
|
||||
check_prev_comments = False
|
||||
user_tag = "@" + user_id
|
||||
if comment_response.status == 200:
|
||||
comment = await comment_response.json()
|
||||
if 'id' in comment:
|
||||
@ -102,6 +100,7 @@ async def is_valid_notification(notification, headers, handled_ids, session, use
|
||||
get_logger().debug(f"no comment_body")
|
||||
check_prev_comments = True
|
||||
else:
|
||||
user_tag = "@" + user_id
|
||||
if user_tag not in comment_body:
|
||||
get_logger().debug(f"user_tag not in comment_body")
|
||||
check_prev_comments = True
|
||||
|
@ -1,6 +1,6 @@
|
||||
import copy
|
||||
import json
|
||||
import re
|
||||
import json
|
||||
from datetime import datetime
|
||||
|
||||
import uvicorn
|
||||
@ -15,7 +15,7 @@ from starlette_context.middleware import RawContextMiddleware
|
||||
from pr_agent.agent.pr_agent import PRAgent
|
||||
from pr_agent.algo.utils import update_settings_from_args
|
||||
from pr_agent.config_loader import get_settings, global_settings
|
||||
from pr_agent.git_providers.utils import apply_repo_settings, is_user_name_a_bot, is_pr_description_indicating_bot
|
||||
from pr_agent.git_providers.utils import apply_repo_settings
|
||||
from pr_agent.log import LoggingFormat, get_logger, setup_logger
|
||||
from pr_agent.secret_providers import get_secret_provider
|
||||
|
||||
@ -61,9 +61,6 @@ async def handle_request(api_url: str, body: str, log_context: dict, sender_id:
|
||||
async def _perform_commands_gitlab(commands_conf: str, agent: PRAgent, api_url: str,
|
||||
log_context: dict, data: dict):
|
||||
apply_repo_settings(api_url)
|
||||
if commands_conf == "pr_commands" and get_settings().config.disable_auto_feedback: # auto commands for PR, and auto feedback is disabled
|
||||
get_logger().info(f"Auto feedback is disabled, skipping auto commands for PR {api_url=}", **log_context)
|
||||
return
|
||||
if not should_process_pr_logic(data): # Here we already updated the configurations
|
||||
return
|
||||
commands = get_settings().get(f"gitlab.{commands_conf}", {})
|
||||
@ -86,14 +83,10 @@ def is_bot_user(data) -> bool:
|
||||
try:
|
||||
# logic to ignore bot users (unlike Github, no direct flag for bot users in gitlab)
|
||||
sender_name = data.get("user", {}).get("name", "unknown").lower()
|
||||
if is_user_name_a_bot(sender_name):
|
||||
bot_indicators = ['codium', 'bot_', 'bot-', '_bot', '-bot']
|
||||
if any(indicator in sender_name for indicator in bot_indicators):
|
||||
get_logger().info(f"Skipping GitLab bot user: {sender_name}")
|
||||
return True
|
||||
pr_description = data.get('object_attributes', {}).get('description', '')
|
||||
if pr_description and is_pr_description_indicating_bot(pr_description):
|
||||
get_logger().info(f"Description indicates a bot user: {sender_name}",
|
||||
artifact={"description": pr_description})
|
||||
return True
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed 'is_bot_user' logic: {e}")
|
||||
return False
|
||||
|
@ -5,6 +5,7 @@ from starlette_context.middleware import RawContextMiddleware
|
||||
|
||||
from pr_agent.servers.github_app import router
|
||||
|
||||
|
||||
middleware = [Middleware(RawContextMiddleware)]
|
||||
app = FastAPI(middleware=middleware)
|
||||
app.include_router(router)
|
||||
|
@ -2,7 +2,7 @@ import hashlib
|
||||
import hmac
|
||||
import time
|
||||
from collections import defaultdict
|
||||
from typing import Any, Callable
|
||||
from typing import Callable, Any
|
||||
|
||||
from fastapi import HTTPException
|
||||
|
||||
|
@ -1,8 +1,8 @@
|
||||
[config]
|
||||
# models
|
||||
model="gpt-4o-2024-11-20"
|
||||
fallback_models=["gpt-4o-2024-08-06"]
|
||||
#model_weak="gpt-4o-mini-2024-07-18" # optional, a weaker model to use for some easier tasks
|
||||
model="gpt-4-turbo-2024-04-09"
|
||||
model_turbo="gpt-4o-2024-08-06"
|
||||
fallback_models=["gpt-4o-2024-05-13"]
|
||||
# CLI
|
||||
git_provider="github"
|
||||
publish_output=true
|
||||
@ -14,7 +14,6 @@ use_extra_bad_extensions=false
|
||||
use_wiki_settings_file=true
|
||||
use_repo_settings_file=true
|
||||
use_global_settings_file=true
|
||||
disable_auto_feedback = false
|
||||
ai_timeout=120 # 2minutes
|
||||
skip_keys = []
|
||||
# token limits
|
||||
@ -55,9 +54,10 @@ require_can_be_split_review=false
|
||||
require_security_review=true
|
||||
require_ticket_analysis_review=true
|
||||
# general options
|
||||
num_code_suggestions=0 # legacy mode. use the `improve` command instead
|
||||
num_code_suggestions=0
|
||||
inline_code_comments = false
|
||||
ask_and_reflect=false
|
||||
#automatic_review=true
|
||||
persistent_comment=true
|
||||
extra_instructions = ""
|
||||
final_update_message = true
|
||||
@ -107,13 +107,13 @@ enable_help_text=false
|
||||
|
||||
|
||||
[pr_code_suggestions] # /improve #
max_context_tokens=16000
max_context_tokens=14000
#
commitable_code_suggestions = false
dual_publishing_score_threshold=-1 # -1 to disable, [0-10] to set the threshold (>=) for publishing a code suggestion both in a table and as commitable
focus_only_on_problems=true
#
extra_instructions = ""
rank_suggestions = false
enable_help_text=false
enable_chat_text=false
enable_intro_text=true
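`dual_publishing_score_threshold` controls whether a suggestion that already appears in the summary table is also posted as a committable comment: only suggestions whose self-reported score meets the threshold and that actually carry improved code qualify. A rough sketch of that gating, assuming suggestion dictionaries shaped like the ones used elsewhere in this diff (the sample data is invented):

```python
# Sketch: filter suggestions for dual publishing; keys mirror the fields used in this diff.
def select_for_dual_publishing(suggestions: list[dict], threshold: int) -> list[dict]:
    if threshold <= 0:  # -1 (or 0) disables dual publishing
        return []
    return [s for s in suggestions
            if int(s.get('score', 0)) >= threshold and s.get('improved_code')]

suggestions = [
    {'score': 9, 'improved_code': 'x = compute()', 'existing_code': 'x = 0'},
    {'score': 4, 'improved_code': 'y = 1', 'existing_code': 'y = 2'},
]
print(select_for_dual_publishing(suggestions, threshold=7))  # only the score-9 suggestion
```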
@ -128,7 +128,7 @@ auto_extended_mode=true
|
||||
num_code_suggestions_per_chunk=4
|
||||
max_number_of_calls = 3
|
||||
parallel_calls = true
|
||||
|
||||
rank_extended_suggestions = false
|
||||
final_clip_factor = 0.8
|
||||
# self-review checkbox
|
||||
demand_code_suggestions_self_review=false # add a checkbox for the author to self-review the code suggestions
|
||||
@ -138,7 +138,6 @@ fold_suggestions_on_self_review=true # Pro feature. if true, the code suggestion
|
||||
# Suggestion impact 💎
|
||||
publish_post_process_suggestion_impact=true
|
||||
wiki_page_accepted_suggestions=true
|
||||
allow_thumbs_up_down=false
|
||||
|
||||
[pr_custom_prompt] # /custom_prompt #
|
||||
prompt = """\
|
||||
@ -218,7 +217,7 @@ override_deployment_type = true
|
||||
handle_pr_actions = ['opened', 'reopened', 'ready_for_review']
|
||||
pr_commands = [
|
||||
"/describe --pr_description.final_update_message=false",
|
||||
"/review",
|
||||
"/review --pr_reviewer.num_code_suggestions=0",
|
||||
"/improve",
|
||||
]
|
||||
# settings for "pull_request" event with "synchronize" action - used to detect and handle push triggers for new commits
|
||||
@ -230,27 +229,27 @@ push_trigger_pending_tasks_backlog = true
|
||||
push_trigger_pending_tasks_ttl = 300
|
||||
push_commands = [
|
||||
"/describe",
|
||||
"/review",
|
||||
"/review --pr_reviewer.num_code_suggestions=0",
|
||||
]
|
||||
|
||||
[gitlab]
|
||||
url = "https://gitlab.com"
|
||||
pr_commands = [
|
||||
"/describe --pr_description.final_update_message=false",
|
||||
"/review",
|
||||
"/review --pr_reviewer.num_code_suggestions=0",
|
||||
"/improve",
|
||||
]
|
||||
handle_push_trigger = false
|
||||
push_commands = [
|
||||
"/describe",
|
||||
"/review",
|
||||
"/review --pr_reviewer.num_code_suggestions=0",
|
||||
]
|
||||
|
||||
[bitbucket_app]
|
||||
pr_commands = [
|
||||
"/describe --pr_description.final_update_message=false",
|
||||
"/review",
|
||||
"/improve --pr_code_suggestions.commitable_code_suggestions=true",
|
||||
"/review --pr_reviewer.num_code_suggestions=0",
|
||||
"/improve --pr_code_suggestions.commitable_code_suggestions=true --pr_code_suggestions.suggestions_score_threshold=7",
|
||||
]
|
||||
avoid_full_files = false
|
||||
|
||||
@ -275,8 +274,8 @@ avoid_full_files = false
|
||||
url = ""
|
||||
pr_commands = [
|
||||
"/describe --pr_description.final_update_message=false",
|
||||
"/review",
|
||||
"/improve --pr_code_suggestions.commitable_code_suggestions=true",
|
||||
"/review --pr_reviewer.num_code_suggestions=0",
|
||||
"/improve --pr_code_suggestions.commitable_code_suggestions=true --pr_code_suggestions.suggestions_score_threshold=7",
|
||||
]
|
||||
|
||||
[litellm]
|
||||
|
@ -1,10 +1,7 @@
|
||||
[pr_code_suggestions_prompt]
|
||||
system="""You are PR-Reviewer, an AI specializing in Pull Request (PR) code analysis and suggestions.
|
||||
{%- if not focus_only_on_problems %}
|
||||
Your task is to examine the provided code diff, focusing on new code (lines prefixed with '+'), and offer concise, actionable suggestions to fix possible bugs and problems, and enhance code quality and performance.
|
||||
{%- else %}
|
||||
Your task is to examine the provided code diff, focusing on new code (lines prefixed with '+'), and offer concise, actionable suggestions to fix critical bugs and problems.
|
||||
{%- endif %}
|
||||
Your task is to examine the provided code diff, focusing on new code (lines prefixed with '+'), and offer concise, actionable suggestions to fix possible bugs and problems, and enhance code quality, readability, and performance.
|
||||
|
||||
|
||||
The PR code diff will be in the following structured format:
|
||||
======
|
||||
@ -45,17 +42,9 @@ __new hunk__
|
||||
|
||||
|
||||
Specific guidelines for generating code suggestions:
|
||||
{%- if not focus_only_on_problems %}
|
||||
- Provide up to {{ num_code_suggestions }} distinct and insightful code suggestions.
|
||||
{%- else %}
|
||||
- Provide up to {{ num_code_suggestions }} distinct and insightful code suggestions. Return less suggestions if no pertinent ones are applicable.
|
||||
{%- endif %}
|
||||
- Focus solely on enhancing new code introduced in the PR, identified by '+' prefixes in '__new hunk__' sections.
|
||||
{%- if not focus_only_on_problems %}
|
||||
- Prioritize suggestions that address potential issues, critical problems, and bugs in the PR code. Avoid repeating changes already implemented in the PR. If no pertinent suggestions are applicable, return an empty list.
|
||||
{%- else %}
|
||||
- Only give suggestions that address critical problems and bugs in the PR code. If no relevant suggestions are applicable, return an empty list.
|
||||
{%- endif %}
|
||||
- Don't suggest to add docstring, type hints, or comments, to remove unused imports, or to use more specific exception types.
|
||||
- When referencing variables or names from the code, enclose them in backticks (`). Example: "ensure that `variable_name` is..."
|
||||
- Be mindful you are viewing a partial PR code diff, not the full codebase. Avoid suggestions that might conflict with unseen code or alerting variables not declared in the visible scope, as the context is incomplete.
|
||||
@ -80,11 +69,7 @@ class CodeSuggestion(BaseModel):
|
||||
existing_code: str = Field(description="A short code snippet from a '__new hunk__' section that the suggestion aims to enhance or fix. Include only complete code lines. Use ellipsis (...) for brevity if needed. This snippet should represent the specific PR code targeted for improvement.")
|
||||
improved_code: str = Field(description="A refined code snippet that replaces the 'existing_code' snippet after implementing the suggestion.")
|
||||
one_sentence_summary: str = Field(description="A concise, single-sentence overview of the suggested improvement. Focus on the 'what'. Be general, and avoid method or variable names.")
|
||||
{%- if not focus_only_on_problems %}
|
||||
label: str = Field(description="A single, descriptive label that best characterizes the suggestion type. Possible labels include 'security', 'possible bug', 'possible issue', 'performance', 'enhancement', 'best practice', 'maintainability', 'typo'. Other relevant labels are also acceptable.")
|
||||
{%- else %}
|
||||
label: str = Field(description="A single, descriptive label that best characterizes the suggestion type. Possible labels include 'security', 'critical bug', 'general'. The 'general' section should be used for suggestions that address a major issue, but are necessarily on a critical level.")
|
||||
{%- endif %}
|
||||
|
||||
|
||||
class PRCodeSuggestions(BaseModel):
|
||||
|
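The `{%- if not focus_only_on_problems %}` blocks in the prompt above are plain Jinja2 conditionals, so a single template yields different system prompts depending on one boolean setting. A minimal rendering sketch (the template string is a shortened stand-in, not the real prompt; pr-agent renders its prompts through its own settings and token handling):

```python
from jinja2 import Environment, StrictUndefined

# Shortened stand-in for the pr_code_suggestions_prompt system template.
template = (
    "{%- if not focus_only_on_problems %}"
    "Offer suggestions to fix possible bugs and enhance code quality and performance."
    "{%- else %}"
    "Offer suggestions to fix critical bugs and problems only."
    "{%- endif %}"
)

env = Environment(undefined=StrictUndefined)
print(env.from_string(template).render(focus_only_on_problems=False))
print(env.from_string(template).render(focus_only_on_problems=True))
```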
@ -80,8 +80,8 @@ class SubPR(BaseModel):
|
||||
|
||||
class KeyIssuesComponentLink(BaseModel):
|
||||
relevant_file: str = Field(description="The full file path of the relevant file")
|
||||
issue_header: str = Field(description="One or two word title for the the issue. For example: 'Possible Bug', etc.")
|
||||
issue_content: str = Field(description="A short and concise summary of what should be further inspected and validated during the PR review process for this issue. Do not reference line numbers in this field.")
|
||||
issue_header: str = Field(description="One or two word title for the the issue. For example: 'Possible Bug', 'Performance Issue', 'Code Smell', etc.")
|
||||
issue_content: str = Field(description="A short and concise summary of what should be further inspected and validated during the PR review process for this issue. Don't state line numbers here")
|
||||
start_line: int = Field(description="The start line that corresponds to this issue in the relevant file")
|
||||
end_line: int = Field(description="The end line that corresponds to this issue in the relevant file")
|
||||
|
||||
@ -111,7 +111,7 @@ class Review(BaseModel):
|
||||
{%- if question_str %}
|
||||
insights_from_user_answers: str = Field(description="shortly summarize the insights you gained from the user's answers to the questions")
|
||||
{%- endif %}
|
||||
key_issues_to_review: List[KeyIssuesComponentLink] = Field("A short and diverse list (0-3 issues) of high-priority bugs, problems or performance concerns introduced in the PR code, which the PR reviewer should further focus on and validate during the review process.")
|
||||
key_issues_to_review: List[KeyIssuesComponentLink] = Field("A diverse list of bugs, issue or major performance concerns introduced in this PR, which the PR reviewer should further investigate")
|
||||
{%- if require_security_review %}
|
||||
security_concerns: str = Field(description="Does this PR code introduce possible vulnerabilities such as exposure of sensitive information (e.g., API keys, secrets, passwords), or security concerns like SQL injection, XSS, CSRF, and others ? Answer 'No' (without explaining why) if there are no possible issues. If there are security concerns or issues, start your answer with a short header, such as: 'Sensitive information exposure: ...', 'SQL injection: ...' etc. Explain your answer. Be specific and give examples if possible")
|
||||
{%- endif %}
|
||||
|
@ -1,30 +1,26 @@
|
||||
import asyncio
|
||||
import copy
|
||||
import difflib
|
||||
import re
|
||||
import textwrap
|
||||
import traceback
|
||||
from functools import partial
|
||||
from typing import Dict, List
|
||||
|
||||
from jinja2 import Environment, StrictUndefined
|
||||
|
||||
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
|
||||
from pr_agent.algo.ai_handlers.litellm_ai_handler import LiteLLMAIHandler
|
||||
from pr_agent.algo.pr_processing import (add_ai_metadata_to_diff_files,
|
||||
get_pr_diff, get_pr_multi_diffs,
|
||||
retry_with_fallback_models)
|
||||
from pr_agent.algo.pr_processing import get_pr_diff, get_pr_multi_diffs, retry_with_fallback_models, \
|
||||
add_ai_metadata_to_diff_files
|
||||
from pr_agent.algo.token_handler import TokenHandler
|
||||
from pr_agent.algo.utils import (ModelType, load_yaml, replace_code_tags,
|
||||
show_relevant_configurations)
|
||||
from pr_agent.algo.utils import load_yaml, replace_code_tags, ModelType, show_relevant_configurations
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers import (AzureDevopsProvider, GithubProvider,
|
||||
GitLabProvider, get_git_provider,
|
||||
get_git_provider_with_context)
|
||||
from pr_agent.git_providers.git_provider import get_main_pr_language, GitProvider
|
||||
from pr_agent.git_providers import get_git_provider, get_git_provider_with_context, GithubProvider, GitLabProvider, \
|
||||
AzureDevopsProvider
|
||||
from pr_agent.git_providers.git_provider import get_main_pr_language
|
||||
from pr_agent.log import get_logger
|
||||
from pr_agent.servers.help import HelpMessage
|
||||
from pr_agent.tools.pr_description import insert_br_after_x_chars
|
||||
import difflib
|
||||
import re
|
||||
|
||||
|
||||
class PRCodeSuggestions:
|
||||
@ -80,7 +76,6 @@ class PRCodeSuggestions:
|
||||
"commit_messages_str": self.git_provider.get_commit_messages(),
|
||||
"relevant_best_practices": "",
|
||||
"is_ai_metadata": get_settings().get("config.enable_ai_metadata", False),
|
||||
"focus_only_on_problems": get_settings().get("pr_code_suggestions.focus_only_on_problems", False),
|
||||
}
|
||||
self.pr_code_suggestions_prompt_system = get_settings().pr_code_suggestions_prompt.system
|
||||
|
||||
@ -103,8 +98,6 @@ class PRCodeSuggestions:
|
||||
relevant_configs = {'pr_code_suggestions': dict(get_settings().pr_code_suggestions),
|
||||
'config': dict(get_settings().config)}
|
||||
get_logger().debug("Relevant configs", artifacts=relevant_configs)
|
||||
|
||||
# publish "Preparing suggestions..." comments
|
||||
if (get_settings().config.publish_output and get_settings().config.publish_output_progress and
|
||||
not get_settings().config.get('is_auto_command', False)):
|
||||
if self.git_provider.is_supported("gfm_markdown"):
|
||||
@ -112,26 +105,33 @@ class PRCodeSuggestions:
|
||||
else:
|
||||
self.git_provider.publish_comment("Preparing suggestions...", is_temporary=True)
|
||||
|
||||
# call the model to get the suggestions, and self-reflect on them
|
||||
if not self.is_extended:
|
||||
data = await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.REGULAR)
|
||||
data = await retry_with_fallback_models(self._prepare_prediction)
|
||||
else:
|
||||
data = await retry_with_fallback_models(self._prepare_prediction_extended, model_type=ModelType.REGULAR)
|
||||
data = await retry_with_fallback_models(self._prepare_prediction_extended)
|
||||
if not data:
|
||||
data = {"code_suggestions": []}
|
||||
self.data = data
|
||||
|
||||
# Handle the case where the PR has no suggestions
|
||||
if (data is None or 'code_suggestions' not in data or not data['code_suggestions']):
|
||||
await self.publish_no_suggestions()
|
||||
pr_body = "## PR Code Suggestions ✨\n\nNo code suggestions found for the PR."
|
||||
get_logger().warning('No code suggestions found for the PR.')
|
||||
if get_settings().config.publish_output and get_settings().config.publish_output_no_suggestions:
|
||||
get_logger().debug(f"PR output", artifact=pr_body)
|
||||
if self.progress_response:
|
||||
self.git_provider.edit_comment(self.progress_response, body=pr_body)
|
||||
else:
|
||||
self.git_provider.publish_comment(pr_body)
|
||||
else:
|
||||
get_settings().data = {"artifact": ""}
|
||||
return
|
||||
|
||||
# publish the suggestions
|
||||
if get_settings().config.publish_output:
|
||||
# If a temporary comment was published, remove it
|
||||
self.git_provider.remove_initial_comment()
|
||||
if (not self.is_extended and get_settings().pr_code_suggestions.rank_suggestions) or \
|
||||
(self.is_extended and get_settings().pr_code_suggestions.rank_extended_suggestions):
|
||||
get_logger().info('Ranking Suggestions...')
|
||||
data['code_suggestions'] = await self.rank_suggestions(data['code_suggestions'])
|
||||
|
||||
# Publish table summarized suggestions
|
||||
if get_settings().config.publish_output:
|
||||
self.git_provider.remove_initial_comment()
|
||||
if ((not get_settings().pr_code_suggestions.commitable_code_suggestions) and
|
||||
self.git_provider.is_supported("gfm_markdown")):
|
||||
|
||||
@ -141,7 +141,10 @@ class PRCodeSuggestions:
|
||||
|
||||
# require self-review
|
||||
if get_settings().pr_code_suggestions.demand_code_suggestions_self_review:
|
||||
pr_body = await self.add_self_review_text(pr_body)
|
||||
text = get_settings().pr_code_suggestions.code_suggestions_self_review_text
|
||||
pr_body += f"\n\n- [ ] {text}"
|
||||
if get_settings().pr_code_suggestions.approve_pr_on_self_review:
|
||||
pr_body += ' <!-- approve pr self-review -->'
|
||||
|
||||
# add usage guide
|
||||
if (get_settings().pr_code_suggestions.enable_chat_text and get_settings().config.is_auto_command
|
||||
@ -157,13 +160,13 @@ class PRCodeSuggestions:
|
||||
pr_body += show_relevant_configurations(relevant_section='pr_code_suggestions')
|
||||
|
||||
# publish the PR comment
|
||||
if get_settings().pr_code_suggestions.persistent_comment: # true by default
|
||||
self.publish_persistent_comment_with_history(self.git_provider,
|
||||
pr_body,
|
||||
if get_settings().pr_code_suggestions.persistent_comment:
|
||||
final_update_message = False
|
||||
self.publish_persistent_comment_with_history(pr_body,
|
||||
initial_header="## PR Code Suggestions ✨",
|
||||
update_header=True,
|
||||
name="suggestions",
|
||||
final_update_message=False,
|
||||
final_update_message=final_update_message,
|
||||
max_previous_comments=get_settings().pr_code_suggestions.max_history_len,
|
||||
progress_response=self.progress_response)
|
||||
else:
|
||||
@ -174,15 +177,29 @@ class PRCodeSuggestions:
|
||||
|
||||
# dual publishing mode
|
||||
if int(get_settings().pr_code_suggestions.dual_publishing_score_threshold) > 0:
|
||||
await self.dual_publishing(data)
|
||||
data_above_threshold = {'code_suggestions': []}
|
||||
try:
|
||||
for suggestion in data['code_suggestions']:
|
||||
if int(suggestion.get('score', 0)) >= int(get_settings().pr_code_suggestions.dual_publishing_score_threshold) \
|
||||
and suggestion.get('improved_code'):
|
||||
data_above_threshold['code_suggestions'].append(suggestion)
|
||||
if not data_above_threshold['code_suggestions'][-1]['existing_code']:
|
||||
get_logger().info(f'Identical existing and improved code for dual publishing found')
|
||||
data_above_threshold['code_suggestions'][-1]['existing_code'] = suggestion[
|
||||
'improved_code']
|
||||
if data_above_threshold['code_suggestions']:
|
||||
get_logger().info(
|
||||
f"Publishing {len(data_above_threshold['code_suggestions'])} suggestions in dual publishing mode")
|
||||
self.push_inline_code_suggestions(data_above_threshold)
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed to publish dual publishing suggestions, error: {e}")
|
||||
else:
|
||||
await self.push_inline_code_suggestions(data)
|
||||
self.push_inline_code_suggestions(data)
|
||||
if self.progress_response:
|
||||
self.git_provider.remove_comment(self.progress_response)
|
||||
else:
|
||||
get_logger().info('Code suggestions generated for PR, but not published since publish_output is False.')
|
||||
pr_body = self.generate_summarized_suggestions(data)
|
||||
get_settings().data = {"artifact": pr_body}
|
||||
get_settings().data = {"artifact": data}
|
||||
return
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed to generate code suggestions for PR, error: {e}",
|
||||
@ -195,108 +212,47 @@ class PRCodeSuggestions:
|
||||
self.git_provider.remove_initial_comment()
|
||||
self.git_provider.publish_comment(f"Failed to generate code suggestions for PR")
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Failed to update persistent review, error: {e}")
|
||||
pass
|
||||
|
||||
async def add_self_review_text(self, pr_body):
|
||||
text = get_settings().pr_code_suggestions.code_suggestions_self_review_text
|
||||
pr_body += f"\n\n- [ ] {text}"
|
||||
approve_pr_on_self_review = get_settings().pr_code_suggestions.approve_pr_on_self_review
|
||||
fold_suggestions_on_self_review = get_settings().pr_code_suggestions.fold_suggestions_on_self_review
|
||||
if approve_pr_on_self_review and not fold_suggestions_on_self_review:
|
||||
pr_body += ' <!-- approve pr self-review -->'
|
||||
elif fold_suggestions_on_self_review and not approve_pr_on_self_review:
|
||||
pr_body += ' <!-- fold suggestions self-review -->'
|
||||
else:
|
||||
pr_body += ' <!-- approve and fold suggestions self-review -->'
|
||||
return pr_body
|
||||
|
||||
async def publish_no_suggestions(self):
|
||||
pr_body = "## PR Code Suggestions ✨\n\nNo code suggestions found for the PR."
|
||||
if get_settings().config.publish_output and get_settings().config.publish_output_no_suggestions:
|
||||
get_logger().warning('No code suggestions found for the PR.')
|
||||
get_logger().debug(f"PR output", artifact=pr_body)
|
||||
if self.progress_response:
|
||||
self.git_provider.edit_comment(self.progress_response, body=pr_body)
|
||||
else:
|
||||
self.git_provider.publish_comment(pr_body)
|
||||
else:
|
||||
get_settings().data = {"artifact": ""}
|
||||
|
||||
async def dual_publishing(self, data):
|
||||
data_above_threshold = {'code_suggestions': []}
|
||||
try:
|
||||
for suggestion in data['code_suggestions']:
|
||||
if int(suggestion.get('score', 0)) >= int(
|
||||
get_settings().pr_code_suggestions.dual_publishing_score_threshold) \
|
||||
and suggestion.get('improved_code'):
|
||||
data_above_threshold['code_suggestions'].append(suggestion)
|
||||
if not data_above_threshold['code_suggestions'][-1]['existing_code']:
|
||||
get_logger().info(f'Identical existing and improved code for dual publishing found')
|
||||
data_above_threshold['code_suggestions'][-1]['existing_code'] = suggestion[
|
||||
'improved_code']
|
||||
if data_above_threshold['code_suggestions']:
|
||||
get_logger().info(
|
||||
f"Publishing {len(data_above_threshold['code_suggestions'])} suggestions in dual publishing mode")
|
||||
await self.push_inline_code_suggestions(data_above_threshold)
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed to publish dual publishing suggestions, error: {e}")
|
||||
|
||||
@staticmethod
|
||||
def publish_persistent_comment_with_history(git_provider: GitProvider,
|
||||
pr_comment: str,
|
||||
def publish_persistent_comment_with_history(self, pr_comment: str,
|
||||
initial_header: str,
|
||||
update_header: bool = True,
|
||||
name='review',
|
||||
final_update_message=True,
|
||||
max_previous_comments=4,
|
||||
progress_response=None,
|
||||
only_fold=False):
|
||||
progress_response=None):
|
||||
|
||||
def _extract_link(comment_text: str):
|
||||
r = re.compile(r"<!--.*?-->")
|
||||
match = r.search(comment_text)
|
||||
|
||||
up_to_commit_txt = ""
|
||||
if match:
|
||||
up_to_commit_txt = f" up to commit {match.group(0)[4:-3].strip()}"
|
||||
return up_to_commit_txt
|
||||
|
||||
        if isinstance(git_provider, AzureDevopsProvider):  # get_latest_commit_url is not supported yet
        if isinstance(self.git_provider, AzureDevopsProvider):  # get_latest_commit_url is not supported yet
            if progress_response:
                git_provider.edit_comment(progress_response, pr_comment)
                new_comment = progress_response
                self.git_provider.edit_comment(progress_response, pr_comment)
            else:
                new_comment = git_provider.publish_comment(pr_comment)
            return new_comment
                self.git_provider.publish_comment(pr_comment)
            return

        history_header = f"#### Previous suggestions\n"
        last_commit_num = git_provider.get_latest_commit_url().split('/')[-1][:7]
        if only_fold:  # A user clicked on the 'self-review' checkbox
            text = get_settings().pr_code_suggestions.code_suggestions_self_review_text
            latest_suggestion_header = f"\n\n- [x] {text}"
        else:
            latest_suggestion_header = f"Latest suggestions up to {last_commit_num}"
        last_commit_num = self.git_provider.get_latest_commit_url().split('/')[-1][:7]
        latest_suggestion_header = f"Latest suggestions up to {last_commit_num}"
        latest_commit_html_comment = f"<!-- {last_commit_num} -->"
        found_comment = None

        if max_previous_comments > 0:
            try:
                prev_comments = list(git_provider.get_issue_comments())
                prev_comments = list(self.git_provider.get_issue_comments())
                for comment in prev_comments:
                    if comment.body.startswith(initial_header):
                        prev_suggestions = comment.body
                        found_comment = comment
                        comment_url = git_provider.get_comment_url(comment)
                        comment_url = self.git_provider.get_comment_url(comment)

                        if history_header.strip() not in comment.body:
                            # no history section
                            # extract everything between <table> and </table> in comment.body including <table> and </table>
                            table_index = comment.body.find("<table>")
                            if table_index == -1:
                                git_provider.edit_comment(comment, pr_comment)
                                self.git_provider.edit_comment(comment, pr_comment)
                                continue
                            # find http link from comment.body[:table_index]
                            up_to_commit_txt = _extract_link(comment.body[:table_index])
                            up_to_commit_txt = self.extract_link(comment.body[:table_index])
                            prev_suggestion_table = comment.body[
                                                    table_index:comment.body.rfind("</table>") + len("</table>")]

@ -317,7 +273,7 @@ class PRCodeSuggestions:

                            # get text after the latest_suggestion_header in comment.body
                            table_ind = latest_table.find("<table>")
                            up_to_commit_txt = _extract_link(latest_table[:table_ind])
                            up_to_commit_txt = self.extract_link(latest_table[:table_ind])

                            latest_table = latest_table[table_ind:latest_table.rfind("</table>") + len("</table>")]
                        # enforce max_previous_comments

@ -344,12 +300,11 @@ class PRCodeSuggestions:

                        get_logger().info(f"Persistent mode - updating comment {comment_url} to latest {name} message")
                        if progress_response:  # publish to 'progress_response' comment, because it refreshes immediately
                            git_provider.edit_comment(progress_response, pr_comment_updated)
                            git_provider.remove_comment(comment)
                            comment = progress_response
                            self.git_provider.edit_comment(progress_response, pr_comment_updated)
                            self.git_provider.remove_comment(comment)
                        else:
                            git_provider.edit_comment(comment, pr_comment_updated)
                        return comment
                            self.git_provider.edit_comment(comment, pr_comment_updated)
                        return
            except Exception as e:
                get_logger().exception(f"Failed to update persistent review, error: {e}")
                pass

@ -358,12 +313,9 @@ class PRCodeSuggestions:

        body = pr_comment.replace(initial_header, "").strip()
        pr_comment = f"{initial_header}\n\n{latest_commit_html_comment}\n\n{body}\n\n"
        if progress_response:
            git_provider.edit_comment(progress_response, pr_comment)
            new_comment = progress_response
            self.git_provider.edit_comment(progress_response, pr_comment)
        else:
            new_comment = git_provider.publish_comment(pr_comment)
        return new_comment

        self.git_provider.publish_comment(pr_comment)

    def extract_link(self, s):
        r = re.compile(r"<!--.*?-->")
@ -380,8 +332,6 @@ class PRCodeSuggestions:
                                                model,
                                                add_line_numbers_to_hunks=True,
                                                disable_extra_lines=False)
            self.patches_diff_list = [self.patches_diff]
            self.patches_diff_no_line_number = self.remove_line_numbers([self.patches_diff])[0]

            if self.patches_diff:
                get_logger().debug(f"PR diff", artifact=self.patches_diff)
@ -414,7 +364,50 @@ class PRCodeSuggestions:
            response_reflect = await self.self_reflect_on_suggestions(data["code_suggestions"],
                                                                      patches_diff, model=model_reflection)
            if response_reflect:
                await self.analyze_self_reflection_response(data, response_reflect)
                response_reflect_yaml = load_yaml(response_reflect)
                code_suggestions_feedback = response_reflect_yaml["code_suggestions"]
                if len(code_suggestions_feedback) == len(data["code_suggestions"]):
                    for i, suggestion in enumerate(data["code_suggestions"]):
                        try:
                            suggestion["score"] = code_suggestions_feedback[i]["suggestion_score"]
                            suggestion["score_why"] = code_suggestions_feedback[i]["why"]

                            if 'relevant_lines_start' not in suggestion:
                                relevant_lines_start = code_suggestions_feedback[i].get('relevant_lines_start', -1)
                                relevant_lines_end = code_suggestions_feedback[i].get('relevant_lines_end', -1)
                                suggestion['relevant_lines_start'] = relevant_lines_start
                                suggestion['relevant_lines_end'] = relevant_lines_end
                                if relevant_lines_start < 0 or relevant_lines_end < 0:
                                    suggestion["score"] = 0

                            try:
                                if get_settings().config.publish_output:
                                    suggestion_statistics_dict = {'score': int(suggestion["score"]),
                                                                  'label': suggestion["label"].lower().strip()}
                                    get_logger().info(f"PR-Agent suggestions statistics",
                                                      statistics=suggestion_statistics_dict, analytics=True)
                            except Exception as e:
                                get_logger().error(f"Failed to log suggestion statistics, error: {e}")
                                pass

                        except Exception as e:  #
                            get_logger().error(f"Error processing suggestion score {i}",
                                               artifact={"suggestion": suggestion,
                                                         "code_suggestions_feedback": code_suggestions_feedback[i]})
                            suggestion["score"] = 7
                            suggestion["score_why"] = ""

                        # if the before and after code is the same, clear one of them
                        try:
                            if suggestion['existing_code'] == suggestion['improved_code']:
                                get_logger().debug(
                                    f"edited improved suggestion {i + 1}, because equal to existing code: {suggestion['existing_code']}")
                                if get_settings().pr_code_suggestions.commitable_code_suggestions:
                                    suggestion['improved_code'] = ""  # we need 'existing_code' to locate the code in the PR
                                else:
                                    suggestion['existing_code'] = ""
                        except Exception as e:
                            get_logger().error(f"Error processing suggestion {i + 1}, error: {e}")
                else:
                    # get_logger().error(f"Could not self-reflect on suggestions. using default score 7")
                    for i, suggestion in enumerate(data["code_suggestions"]):
@ -423,58 +416,6 @@ class PRCodeSuggestions:

            return data
    async def analyze_self_reflection_response(self, data, response_reflect):
        response_reflect_yaml = load_yaml(response_reflect)
        code_suggestions_feedback = response_reflect_yaml.get("code_suggestions", [])
        if code_suggestions_feedback and len(code_suggestions_feedback) == len(data["code_suggestions"]):
            for i, suggestion in enumerate(data["code_suggestions"]):
                try:
                    suggestion["score"] = code_suggestions_feedback[i]["suggestion_score"]
                    suggestion["score_why"] = code_suggestions_feedback[i]["why"]

                    if 'relevant_lines_start' not in suggestion:
                        relevant_lines_start = code_suggestions_feedback[i].get('relevant_lines_start', -1)
                        relevant_lines_end = code_suggestions_feedback[i].get('relevant_lines_end', -1)
                        suggestion['relevant_lines_start'] = relevant_lines_start
                        suggestion['relevant_lines_end'] = relevant_lines_end
                        if relevant_lines_start < 0 or relevant_lines_end < 0:
                            suggestion["score"] = 0

                    try:
                        if get_settings().config.publish_output:
                            if not suggestion["score"]:
                                score = -1
                            else:
                                score = int(suggestion["score"])
                            label = suggestion["label"].lower().strip()
                            label = label.replace('<br>', ' ')
                            suggestion_statistics_dict = {'score': score,
                                                          'label': label}
                            get_logger().info(f"PR-Agent suggestions statistics",
                                              statistics=suggestion_statistics_dict, analytics=True)
                    except Exception as e:
                        get_logger().error(f"Failed to log suggestion statistics, error: {e}")
                        pass

                except Exception as e:  #
                    get_logger().error(f"Error processing suggestion score {i}",
                                       artifact={"suggestion": suggestion,
                                                 "code_suggestions_feedback": code_suggestions_feedback[i]})
                    suggestion["score"] = 7
                    suggestion["score_why"] = ""

                # if the before and after code is the same, clear one of them
                try:
                    if suggestion['existing_code'] == suggestion['improved_code']:
                        get_logger().debug(
                            f"edited improved suggestion {i + 1}, because equal to existing code: {suggestion['existing_code']}")
                        if get_settings().pr_code_suggestions.commitable_code_suggestions:
                            suggestion['improved_code'] = ""  # we need 'existing_code' to locate the code in the PR
                        else:
                            suggestion['existing_code'] = ""
                except Exception as e:
                    get_logger().error(f"Error processing suggestion {i + 1}, error: {e}")
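A small, self-contained restatement of the statistics payload built above may help when reading the two variants side by side; the sample suggestion dict is invented, and a plain print stands in for pr-agent's get_logger():

def build_suggestion_statistics(suggestion: dict) -> dict:
    # score falls back to -1 when the self-reflection stage produced no usable score
    score = int(suggestion["score"]) if suggestion.get("score") else -1
    # labels are normalized to lowercase and stripped of <br> markup before analytics logging
    label = suggestion.get("label", "").lower().strip().replace('<br>', ' ')
    return {'score': score, 'label': label}

if __name__ == "__main__":
    print(build_suggestion_statistics({"score": None, "label": "Possible<br>issue "}))
    # -> {'score': -1, 'label': 'possible issue'}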
    @staticmethod
    def _truncate_if_needed(suggestion):
        max_code_suggestion_length = get_settings().get("PR_CODE_SUGGESTIONS.MAX_CODE_SUGGESTION_LENGTH", 0)
@ -510,11 +451,6 @@ class PRCodeSuggestions:
                if not is_valid_keys:
                    continue

                if get_settings().get("pr_code_suggestions.focus_only_on_problems", False):
                    CRITICAL_LABEL = 'critical'
                    if CRITICAL_LABEL in suggestion['label'].lower():  # we want the published labels to be less declarative
                        suggestion['label'] = 'possible issue'

                if suggestion['one_sentence_summary'] in one_sentence_summary_list:
                    get_logger().debug(f"Skipping suggestion {i + 1}, because it is a duplicate: {suggestion}")
                    continue
@ -538,7 +474,7 @@ class PRCodeSuggestions:

            return data
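_truncate_if_needed is only stubbed in the hunk above (we see just the settings lookup). As a hedged sketch of what such a length guard could do, assuming the suggestion dict shape used elsewhere in this class (the 'truncated' key is hypothetical):

def truncate_if_needed(suggestion: dict, max_length: int) -> dict:
    # illustrative only: cap the improved code at max_length characters and flag the cut
    if max_length > 0 and len(suggestion.get("improved_code", "")) > max_length:
        suggestion["improved_code"] = suggestion["improved_code"][:max_length]
        suggestion["truncated"] = True
    return suggestion

print(truncate_if_needed({"improved_code": "x" * 500}, max_length=100)["truncated"])  # True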
    async def push_inline_code_suggestions(self, data):
    def push_inline_code_suggestions(self, data):
        code_suggestions = []

        if not data['code_suggestions']:
@ -636,9 +572,7 @@ class PRCodeSuggestions:
        patches_diff_lines = patches_diff.splitlines()
        for i, line in enumerate(patches_diff_lines):
            if line.strip():
                if line.isnumeric():
                    patches_diff_lines[i] = ''
                elif line[0].isdigit():
                if line[0].isdigit():
                    # find the first letter in the line that starts with a valid letter
                    for j, char in enumerate(line):
                        if not char.isdigit():
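A hedged, standalone version of the numbered-diff cleanup shown in this hunk (simplified; the real remove_line_numbers works on the list of patch strings kept on the class):

def strip_leading_line_numbers(patches_diff: str) -> str:
    lines = patches_diff.splitlines()
    for i, line in enumerate(lines):
        if not line.strip():
            continue
        if line.isnumeric():
            # a line that is only a line number becomes empty
            lines[i] = ''
        elif line[0].isdigit():
            # drop the numeric prefix and keep everything from the first non-digit character
            j = 0
            while j < len(line) and line[j].isdigit():
                j += 1
            lines[i] = line[j:].lstrip()
    return '\n'.join(lines)

print(strip_leading_line_numbers("12 def foo():\n13\n    return 1"))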
@ -696,6 +630,62 @@ class PRCodeSuggestions:
            self.data = data = None
        return data

    async def rank_suggestions(self, data: List) -> List:
        """
        Call a model to rank (sort) code suggestions based on their importance order.

        Args:
            data (List): A list of code suggestions to be ranked.

        Returns:
            List: The ranked list of code suggestions.
        """

        suggestion_list = []
        if not data:
            return suggestion_list
        for suggestion in data:
            suggestion_list.append(suggestion)
        data_sorted = [[]] * len(suggestion_list)

        if len(suggestion_list) == 1:
            return suggestion_list

        try:
            suggestion_str = ""
            for i, suggestion in enumerate(suggestion_list):
                suggestion_str += f"suggestion {i + 1}: " + str(suggestion) + '\n\n'

            variables = {'suggestion_list': suggestion_list, 'suggestion_str': suggestion_str}
            model = get_settings().config.model
            environment = Environment(undefined=StrictUndefined)
            system_prompt = environment.from_string(get_settings().pr_sort_code_suggestions_prompt.system).render(
                variables)
            user_prompt = environment.from_string(get_settings().pr_sort_code_suggestions_prompt.user).render(variables)
            response, finish_reason = await self.ai_handler.chat_completion(model=model, system=system_prompt,
                                                                            user=user_prompt)

            sort_order = load_yaml(response)
            for s in sort_order['Sort Order']:
                suggestion_number = s['suggestion number']
                importance_order = s['importance order']
                data_sorted[importance_order - 1] = suggestion_list[suggestion_number - 1]

            if get_settings().pr_code_suggestions.final_clip_factor != 1:
                max_len = max(
                    len(data_sorted),
                    int(get_settings().pr_code_suggestions.num_code_suggestions_per_chunk),
                )
                new_len = int(0.5 + max_len * get_settings().pr_code_suggestions.final_clip_factor)
                if new_len < len(data_sorted):
                    data_sorted = data_sorted[:new_len]
        except Exception as e:
            if get_settings().config.verbosity_level >= 1:
                get_logger().info(f"Could not sort suggestions, error: {e}")
            data_sorted = suggestion_list

        return data_sorted
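For readers of rank_suggestions above, here is a hedged, standalone illustration of how the 'Sort Order' YAML returned by the sorting prompt is applied; the response text and suggestion dicts are invented for the demo, and plain yaml.safe_load stands in for pr-agent's load_yaml:

import yaml

response = """
Sort Order:
- suggestion number: 2
  importance order: 1
- suggestion number: 1
  importance order: 2
"""

suggestions = [{"summary": "rename variable"}, {"summary": "fix off-by-one"}]
sort_order = yaml.safe_load(response)

ranked = [None] * len(suggestions)
for item in sort_order["Sort Order"]:
    # 'importance order' gives the target rank, 'suggestion number' the source index (both 1-based)
    ranked[item["importance order"] - 1] = suggestions[item["suggestion number"] - 1]

print(ranked)  # the off-by-one fix comes first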
|
||||
    def generate_summarized_suggestions(self, data: Dict) -> str:
        try:
            pr_body = "## PR Code Suggestions ✨\n\n"
@ -811,12 +801,7 @@ class PRCodeSuggestions:
            get_logger().info(f"Failed to publish summarized code suggestions, error: {e}")
            return ""

    async def self_reflect_on_suggestions(self,
                                          suggestion_list: List,
                                          patches_diff: str,
                                          model: str,
                                          prev_suggestions_str: str = "",
                                          dedicated_prompt: str = "") -> str:
    async def self_reflect_on_suggestions(self, suggestion_list: List, patches_diff: str, model: str) -> str:
        if not suggestion_list:
            return ""

@ -829,21 +814,13 @@ class PRCodeSuggestions:
                         'suggestion_str': suggestion_str,
                         "diff": patches_diff,
                         'num_code_suggestions': len(suggestion_list),
                         'prev_suggestions_str': prev_suggestions_str,
                         "is_ai_metadata": get_settings().get("config.enable_ai_metadata", False)}
            environment = Environment(undefined=StrictUndefined)

            if dedicated_prompt:
                system_prompt_reflect = environment.from_string(
                    get_settings().get(dedicated_prompt).system).render(variables)
                user_prompt_reflect = environment.from_string(
                    get_settings().get(dedicated_prompt).user).render(variables)
            else:
                system_prompt_reflect = environment.from_string(
                    get_settings().pr_code_suggestions_reflect_prompt.system).render(variables)
                user_prompt_reflect = environment.from_string(
                    get_settings().pr_code_suggestions_reflect_prompt.user).render(variables)

            system_prompt_reflect = environment.from_string(
                get_settings().pr_code_suggestions_reflect_prompt.system).render(
                variables)
            user_prompt_reflect = environment.from_string(
                get_settings().pr_code_suggestions_reflect_prompt.user).render(variables)
            with get_logger().contextualize(command="self_reflect_on_suggestions"):
                response_reflect, finish_reason_reflect = await self.ai_handler.chat_completion(model=model,
                                                                                                system=system_prompt_reflect,
@ -852,3 +829,4 @@ class PRCodeSuggestions:
            get_logger().info(f"Could not reflect on suggestions, error: {e}")
            return ""
        return response_reflect
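Both self_reflect_on_suggestions variants above render their prompts through Jinja2 with StrictUndefined; a minimal sketch of that pattern (with a made-up template string) shows why a missing variable fails loudly instead of silently rendering empty:

from jinja2 import Environment, StrictUndefined

environment = Environment(undefined=StrictUndefined)
template = environment.from_string("Reflect on {{ num_code_suggestions }} suggestions for this diff:\n{{ diff }}")

print(template.render(num_code_suggestions=2, diff="--- a/foo.py\n+++ b/foo.py"))
# template.render(diff="...") alone would raise jinja2.exceptions.UndefinedError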
|
||||
|
@ -9,24 +9,19 @@ from jinja2 import Environment, StrictUndefined

from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
from pr_agent.algo.ai_handlers.litellm_ai_handler import LiteLLMAIHandler
from pr_agent.algo.pr_processing import (OUTPUT_BUFFER_TOKENS_HARD_THRESHOLD,
                                         get_pr_diff,
                                         get_pr_diff_multiple_patchs,
                                         retry_with_fallback_models)
from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models, get_pr_diff_multiple_patchs, \
    OUTPUT_BUFFER_TOKENS_HARD_THRESHOLD
from pr_agent.algo.token_handler import TokenHandler
from pr_agent.algo.utils import (ModelType, PRDescriptionHeader, clip_tokens,
                                 get_max_tokens, get_user_labels, load_yaml,
                                 set_custom_labels,
                                 show_relevant_configurations)
from pr_agent.algo.utils import set_custom_labels, PRDescriptionHeader
from pr_agent.algo.utils import load_yaml, get_user_labels, ModelType, show_relevant_configurations, get_max_tokens, \
    clip_tokens
from pr_agent.config_loader import get_settings
from pr_agent.git_providers import (GithubProvider, get_git_provider,
                                    get_git_provider_with_context)
from pr_agent.git_providers import get_git_provider, GithubProvider, get_git_provider_with_context
from pr_agent.git_providers.git_provider import get_main_pr_language
from pr_agent.log import get_logger
from pr_agent.servers.help import HelpMessage
from pr_agent.tools.ticket_pr_compliance_check import (
    extract_and_cache_pr_tickets, extract_ticket_links_from_pr_description,
    extract_tickets)
from pr_agent.tools.ticket_pr_compliance_check import extract_ticket_links_from_pr_description, extract_tickets, \
    extract_and_cache_pr_tickets


class PRDescription:
@ -99,7 +94,7 @@ class PRDescription:
            # ticket extraction if exists
            await extract_and_cache_pr_tickets(self.git_provider, self.vars)

            await retry_with_fallback_models(self._prepare_prediction, ModelType.WEAK)
            await retry_with_fallback_models(self._prepare_prediction, ModelType.TURBO)

            if self.prediction:
                self._prepare_data()
@ -171,10 +166,6 @@ class PRDescription:
                    update_comment = f"**[PR Description]({pr_url})** updated to latest commit ({latest_commit_url})"
                    self.git_provider.publish_comment(update_comment)
                self.git_provider.remove_initial_comment()
            else:
                get_logger().info('PR description, but not published since publish_output is False.')
                get_settings().data = {"artifact": pr_body}
                return
        except Exception as e:
            get_logger().error(f"Error generating PR description {self.pr_id}: {e}")
|
||||
|
@ -9,7 +9,7 @@ from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
from pr_agent.algo.ai_handlers.litellm_ai_handler import LiteLLMAIHandler
from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models
from pr_agent.algo.token_handler import TokenHandler
from pr_agent.algo.utils import get_user_labels, load_yaml, set_custom_labels
from pr_agent.algo.utils import load_yaml, set_custom_labels, get_user_labels
from pr_agent.config_loader import get_settings
from pr_agent.git_providers import get_git_provider
from pr_agent.git_providers.git_provider import get_main_pr_language
|
@ -9,10 +9,10 @@ from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
from pr_agent.algo.ai_handlers.litellm_ai_handler import LiteLLMAIHandler
from pr_agent.algo.pr_processing import retry_with_fallback_models
from pr_agent.algo.token_handler import TokenHandler
from pr_agent.algo.utils import ModelType, clip_tokens, load_yaml
from pr_agent.algo.utils import ModelType, load_yaml, clip_tokens
from pr_agent.config_loader import get_settings
from pr_agent.git_providers import (BitbucketServerProvider, GithubProvider,
                                    get_git_provider_with_context)
from pr_agent.git_providers import GithubProvider, BitbucketServerProvider, \
    get_git_provider_with_context
from pr_agent.log import get_logger


@ -114,7 +114,7 @@ class PRHelpMessage:
            self.vars['snippets'] = docs_prompt.strip()

            # run the AI model
            response = await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.WEAK)
            response = await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.REGULAR)
            response_yaml = load_yaml(response)
            response_str = response_yaml.get('response')
            relevant_sections = response_yaml.get('relevant_sections')
|
@ -6,8 +6,8 @@ from jinja2 import Environment, StrictUndefined
|
||||
|
||||
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
|
||||
from pr_agent.algo.ai_handlers.litellm_ai_handler import LiteLLMAIHandler
|
||||
from pr_agent.algo.git_patch_processing import (
|
||||
convert_to_hunks_with_lines_numbers, extract_hunk_lines_from_patch)
|
||||
from pr_agent.algo.git_patch_processing import convert_to_hunks_with_lines_numbers, \
|
||||
extract_hunk_lines_from_patch
|
||||
from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models
|
||||
from pr_agent.algo.token_handler import TokenHandler
|
||||
from pr_agent.algo.utils import ModelType
|
||||
@ -79,7 +79,7 @@ class PR_LineQuestions:
|
||||
line_end=line_end,
|
||||
side=side)
|
||||
if self.patch_with_lines:
|
||||
response = await retry_with_fallback_models(self._get_prediction, model_type=ModelType.WEAK)
|
||||
response = await retry_with_fallback_models(self._get_prediction, model_type=ModelType.TURBO)
|
||||
|
||||
get_logger().info('Preparing answer...')
|
||||
if comment_id:
|
||||
|
@ -63,7 +63,7 @@ class PRQuestions:
        if img_path:
            get_logger().debug(f"Image path identified", artifact=img_path)

        await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.WEAK)
        await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.TURBO)

        pr_comment = self._prepare_pr_answer()
        get_logger().debug(f"PR output", artifact=pr_comment)
|
@ -4,27 +4,19 @@ import traceback
|
||||
from collections import OrderedDict
|
||||
from functools import partial
|
||||
from typing import List, Tuple
|
||||
|
||||
from jinja2 import Environment, StrictUndefined
|
||||
|
||||
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
|
||||
from pr_agent.algo.ai_handlers.litellm_ai_handler import LiteLLMAIHandler
|
||||
from pr_agent.algo.pr_processing import (add_ai_metadata_to_diff_files,
|
||||
get_pr_diff,
|
||||
retry_with_fallback_models)
|
||||
from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models, add_ai_metadata_to_diff_files
|
||||
from pr_agent.algo.token_handler import TokenHandler
|
||||
from pr_agent.algo.utils import (ModelType, PRReviewHeader,
|
||||
convert_to_markdown_v2, github_action_output,
|
||||
load_yaml, show_relevant_configurations)
|
||||
from pr_agent.algo.utils import github_action_output, load_yaml, ModelType, \
|
||||
show_relevant_configurations, convert_to_markdown_v2, PRReviewHeader
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers import (get_git_provider,
|
||||
get_git_provider_with_context)
|
||||
from pr_agent.git_providers.git_provider import (IncrementalPR,
|
||||
get_main_pr_language)
|
||||
from pr_agent.git_providers import get_git_provider, get_git_provider_with_context
|
||||
from pr_agent.git_providers.git_provider import IncrementalPR, get_main_pr_language
|
||||
from pr_agent.log import get_logger
|
||||
from pr_agent.servers.help import HelpMessage
|
||||
from pr_agent.tools.ticket_pr_compliance_check import (
|
||||
extract_and_cache_pr_tickets, extract_tickets)
|
||||
from pr_agent.tools.ticket_pr_compliance_check import extract_tickets, extract_and_cache_pr_tickets
|
||||
|
||||
|
||||
class PRReviewer:
|
||||
@ -148,7 +140,7 @@ class PRReviewer:
|
||||
if get_settings().config.publish_output and not get_settings().config.get('is_auto_command', False):
|
||||
self.git_provider.publish_comment("Preparing review...", is_temporary=True)
|
||||
|
||||
await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.REGULAR)
|
||||
await retry_with_fallback_models(self._prepare_prediction)
|
||||
if not self.prediction:
|
||||
self.git_provider.remove_initial_comment()
|
||||
return None
|
||||
@ -170,10 +162,6 @@ class PRReviewer:
|
||||
self.git_provider.remove_initial_comment()
|
||||
if get_settings().pr_reviewer.inline_code_comments:
|
||||
self._publish_inline_code_comments()
|
||||
else:
|
||||
get_logger().info("Review output is not published")
|
||||
get_settings().data = {"artifact": pr_review}
|
||||
return
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed to review PR: {e}")
|
||||
|
||||
@ -270,9 +258,7 @@ class PRReviewer:
|
||||
incremental_review_markdown_text = f"Starting from commit {last_commit_url}"
|
||||
|
||||
markdown_text = convert_to_markdown_v2(data, self.git_provider.is_supported("gfm_markdown"),
|
||||
incremental_review_markdown_text,
|
||||
git_provider=self.git_provider,
|
||||
files=self.git_provider.get_diff_files())
|
||||
incremental_review_markdown_text, git_provider=self.git_provider)
|
||||
|
||||
# Add help text if gfm_markdown is supported
|
||||
if self.git_provider.is_supported("gfm_markdown") and get_settings().pr_reviewer.enable_help_text:
|
||||
|
@ -34,9 +34,9 @@ class PRSimilarIssue:
|
||||
|
||||
if get_settings().pr_similar_issue.vectordb == "pinecone":
|
||||
try:
|
||||
import pandas as pd
|
||||
import pinecone
|
||||
from pinecone_datasets import Dataset, DatasetMetadata
|
||||
import pandas as pd
|
||||
except:
|
||||
raise Exception("Please install 'pinecone' and 'pinecone_datasets' to use pinecone as vectordb")
|
||||
# assuming pinecone api key and environment are set in secrets file
|
||||
@ -111,7 +111,7 @@ class PRSimilarIssue:
|
||||
|
||||
elif get_settings().pr_similar_issue.vectordb == "lancedb":
|
||||
try:
|
||||
import lancedb # import lancedb only if needed
|
||||
import lancedb # import lancedb only if needed
|
||||
except:
|
||||
raise Exception("Please install lancedb to use lancedb as vectordb")
|
||||
self.db = lancedb.connect(get_settings().lancedb.uri)
|
||||
|
@ -3,16 +3,14 @@ from datetime import date
|
||||
from functools import partial
|
||||
from time import sleep
|
||||
from typing import Tuple
|
||||
|
||||
from jinja2 import Environment, StrictUndefined
|
||||
|
||||
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
|
||||
from pr_agent.algo.ai_handlers.litellm_ai_handler import LiteLLMAIHandler
|
||||
from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models
|
||||
from pr_agent.algo.token_handler import TokenHandler
|
||||
from pr_agent.algo.utils import ModelType, show_relevant_configurations
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers import GithubProvider, get_git_provider
|
||||
from pr_agent.git_providers import get_git_provider, GithubProvider
|
||||
from pr_agent.git_providers.git_provider import get_main_pr_language
|
||||
from pr_agent.log import get_logger
|
||||
|
||||
@ -73,7 +71,7 @@ class PRUpdateChangelog:
|
||||
if get_settings().config.publish_output:
|
||||
self.git_provider.publish_comment("Preparing changelog updates...", is_temporary=True)
|
||||
|
||||
await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.WEAK)
|
||||
await retry_with_fallback_models(self._prepare_prediction, model_type=ModelType.TURBO)
|
||||
|
||||
new_file_content, answer = self._prepare_changelog_update()
|
||||
|
||||
|
@ -108,7 +108,7 @@ async def extract_tickets(git_provider):


async def extract_and_cache_pr_tickets(git_provider, vars):
    if not get_settings().get('pr_reviewer.require_ticket_analysis_review', False):
    if get_settings().get('config.require_ticket_analysis_review', False):
        return
    related_tickets = get_settings().get('related_tickets', [])
    if not related_tickets:
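Note that the two guard lines above differ in both the settings key and the polarity. A small comparison with a stand-in settings dict (not pr-agent's real get_settings()) makes the behavioural difference visible:

def should_skip_old(cfg: dict) -> bool:
    # first variant: skip ticket extraction unless the review explicitly requires it
    return not cfg.get('pr_reviewer.require_ticket_analysis_review', False)

def should_skip_new(cfg: dict) -> bool:
    # second variant: skip ticket extraction when this config flag is set
    return cfg.get('config.require_ticket_analysis_review', False)

print(should_skip_old({}), should_skip_new({}))  # True False -> the defaults diverge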
|
@ -4,21 +4,21 @@ build-backend = "setuptools.build_meta"
|
||||
|
||||
[project]
|
||||
name = "pr-agent"
|
||||
version = "0.2.5"
|
||||
version = "0.2.4"
|
||||
|
||||
authors = [{ name = "CodiumAI", email = "tal.r@codium.ai" }]
|
||||
authors = [{name= "CodiumAI", email = "tal.r@codium.ai"}]
|
||||
|
||||
maintainers = [
|
||||
{ name = "Tal Ridnik", email = "tal.r@codium.ai" },
|
||||
{ name = "Ori Kotek", email = "ori.k@codium.ai" },
|
||||
{ name = "Hussam Lawen", email = "hussam.l@codium.ai" },
|
||||
{name = "Tal Ridnik", email = "tal.r@codium.ai"},
|
||||
{name = "Ori Kotek", email = "ori.k@codium.ai"},
|
||||
{name = "Hussam Lawen", email = "hussam.l@codium.ai"},
|
||||
]
|
||||
|
||||
description = "CodiumAI PR-Agent aims to help efficiently review and handle pull requests, by providing AI feedbacks and suggestions."
|
||||
readme = "README.md"
|
||||
requires-python = ">=3.10"
|
||||
keywords = ["AI", "Agents", "Pull Request", "Automation", "Code Review"]
|
||||
license = { name = "Apache 2.0", file = "LICENSE" }
|
||||
license = {name = "Apache 2.0", file = "LICENSE"}
|
||||
|
||||
classifiers = [
|
||||
"Intended Audience :: Developers",
|
||||
@ -28,7 +28,7 @@ dynamic = ["dependencies"]
|
||||
|
||||
|
||||
[tool.setuptools.dynamic]
|
||||
dependencies = { file = ["requirements.txt"] }
|
||||
dependencies = {file = ["requirements.txt"]}
|
||||
|
||||
[project.urls]
|
||||
"Homepage" = "https://github.com/Codium-ai/pr-agent"
|
||||
@ -40,43 +40,41 @@ license-files = ["LICENSE"]
|
||||
|
||||
[tool.setuptools.packages.find]
|
||||
where = ["."]
|
||||
include = [
|
||||
"pr_agent*",
|
||||
] # include pr_agent and any sub-packages it finds under it.
|
||||
include = ["pr_agent*"] # include pr_agent and any sub-packages it finds under it.
|
||||
|
||||
[project.scripts]
|
||||
pr-agent = "pr_agent.cli:run"
|
||||
|
||||
|
||||
[tool.ruff]
|
||||
|
||||
line-length = 120
|
||||
|
||||
lint.select = [
|
||||
"E", # Pyflakes
|
||||
"F", # Pyflakes
|
||||
"B", # flake8-bugbear
|
||||
"I001", # isort basic checks
|
||||
"I002", # isort missing-required-import
|
||||
]
|
||||
select = [
|
||||
"E", # Pyflakes
|
||||
"F", # Pyflakes
|
||||
"B", # flake8-bugbear
|
||||
"I001", # isort basic checks
|
||||
"I002", # isort missing-required-import
|
||||
]
|
||||
|
||||
# First commit - only fixing isort
|
||||
lint.fixable = [
|
||||
"I001", # isort basic checks
|
||||
fixable = [
|
||||
"I001", # isort basic checks
|
||||
]
|
||||
|
||||
lint.unfixable = [
|
||||
"B", # Avoid trying to fix flake8-bugbear (`B`) violations.
|
||||
unfixable = [
|
||||
"B", # Avoid trying to fix flake8-bugbear (`B`) violations.
|
||||
]
|
||||
|
||||
exclude = [
|
||||
"api/code_completions",
|
||||
]
|
||||
|
||||
lint.exclude = ["api/code_completions"]
|
||||
ignore = [
|
||||
"E999", "B008"
|
||||
]
|
||||
|
||||
lint.ignore = ["E999", "B008"]
|
||||
|
||||
[tool.ruff.lint.per-file-ignores]
|
||||
"__init__.py" = [
|
||||
"E402",
|
||||
] # Ignore `E402` (import violations) in all `__init__.py` files, and in `path/to/file.py`.
|
||||
|
||||
[tool.bandit]
|
||||
exclude_dirs = ["tests"]
|
||||
skips = ["B101"]
|
||||
tests = []
|
||||
[tool.ruff.per-file-ignores]
|
||||
"__init__.py" = ["E402"] # Ignore `E402` (import violations) in all `__init__.py` files, and in `path/to/file.py`.
|
||||
# TODO: should decide if maybe not to ignore these.
|
||||
|
@ -1,4 +1,3 @@
pytest==7.4.0
poetry
twine
pre-commit>=4,<5
|
@ -1,5 +1,5 @@
|
||||
aiohttp==3.9.5
|
||||
anthropic[vertex]==0.39.0
|
||||
anthropic[vertex]==0.37.1
|
||||
atlassian-python-api==3.41.4
|
||||
azure-devops==7.1.0b3
|
||||
azure-identity==1.15.0
|
||||
@ -12,17 +12,17 @@ google-cloud-aiplatform==1.38.0
|
||||
google-generativeai==0.8.3
|
||||
google-cloud-storage==2.10.0
|
||||
Jinja2==3.1.2
|
||||
litellm==1.52.12
|
||||
litellm==1.50.2
|
||||
loguru==0.7.2
|
||||
msrest==0.7.1
|
||||
openai==1.55.3
|
||||
openai==1.52.1
|
||||
pytest==7.4.0
|
||||
PyGithub==1.59.*
|
||||
PyYAML==6.0.1
|
||||
python-gitlab==3.15.0
|
||||
retry==0.9.2
|
||||
starlette-context==0.3.6
|
||||
tiktoken==0.8.0
|
||||
tiktoken==0.7.0
|
||||
ujson==5.8.0
|
||||
uvicorn==0.22.0
|
||||
tenacity==8.2.3
|
||||
|
@ -32,3 +32,4 @@ def main():
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
"""
|
||||
|
||||
|
@ -5,16 +5,16 @@ import time
|
||||
from datetime import datetime
|
||||
|
||||
import jwt
|
||||
import requests
|
||||
from atlassian.bitbucket import Cloud
|
||||
|
||||
import requests
|
||||
from requests.auth import HTTPBasicAuth
|
||||
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.log import get_logger, setup_logger
|
||||
from tests.e2e_tests.e2e_utils import (FILE_PATH,
|
||||
IMPROVE_START_WITH_REGEX_PATTERN,
|
||||
NEW_FILE_CONTENT, NUM_MINUTES,
|
||||
PR_HEADER_START_WITH, REVIEW_START_WITH)
|
||||
from pr_agent.log import setup_logger, get_logger
|
||||
from tests.e2e_tests.e2e_utils import NEW_FILE_CONTENT, FILE_PATH, PR_HEADER_START_WITH, REVIEW_START_WITH, \
|
||||
IMPROVE_START_WITH_REGEX_PATTERN, NUM_MINUTES
|
||||
|
||||
|
||||
log_level = os.environ.get("LOG_LEVEL", "INFO")
|
||||
setup_logger(log_level)
|
||||
|
@ -5,11 +5,9 @@ from datetime import datetime
|
||||
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers import get_git_provider
|
||||
from pr_agent.log import get_logger, setup_logger
|
||||
from tests.e2e_tests.e2e_utils import (FILE_PATH,
|
||||
IMPROVE_START_WITH_REGEX_PATTERN,
|
||||
NEW_FILE_CONTENT, NUM_MINUTES,
|
||||
PR_HEADER_START_WITH, REVIEW_START_WITH)
|
||||
from pr_agent.log import setup_logger, get_logger
|
||||
from tests.e2e_tests.e2e_utils import NEW_FILE_CONTENT, FILE_PATH, PR_HEADER_START_WITH, REVIEW_START_WITH, \
|
||||
IMPROVE_START_WITH_REGEX_PATTERN, NUM_MINUTES
|
||||
|
||||
log_level = os.environ.get("LOG_LEVEL", "INFO")
|
||||
setup_logger(log_level)
|
||||
|
@ -7,11 +7,9 @@ import gitlab
|
||||
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers import get_git_provider
|
||||
from pr_agent.log import get_logger, setup_logger
|
||||
from tests.e2e_tests.e2e_utils import (FILE_PATH,
|
||||
IMPROVE_START_WITH_REGEX_PATTERN,
|
||||
NEW_FILE_CONTENT, NUM_MINUTES,
|
||||
PR_HEADER_START_WITH, REVIEW_START_WITH)
|
||||
from pr_agent.log import setup_logger, get_logger
|
||||
from tests.e2e_tests.e2e_utils import NEW_FILE_CONTENT, FILE_PATH, PR_HEADER_START_WITH, REVIEW_START_WITH, \
|
||||
IMPROVE_START_WITH_REGEX_PATTERN, NUM_MINUTES
|
||||
|
||||
log_level = os.environ.get("LOG_LEVEL", "INFO")
|
||||
setup_logger(log_level)
|
||||
|
@ -1,70 +0,0 @@
|
||||
import argparse
|
||||
import asyncio
|
||||
import copy
|
||||
import os
|
||||
from pathlib import Path
|
||||
|
||||
from starlette_context import request_cycle_context, context
|
||||
|
||||
from pr_agent.cli import run_command
|
||||
from pr_agent.config_loader import get_settings, global_settings
|
||||
|
||||
from pr_agent.agent.pr_agent import PRAgent, commands
|
||||
from pr_agent.log import get_logger, setup_logger
|
||||
from tests.e2e_tests import e2e_utils
|
||||
|
||||
log_level = os.environ.get("LOG_LEVEL", "INFO")
|
||||
setup_logger(log_level)
|
||||
|
||||
|
||||
async def run_async():
|
||||
pr_url = os.getenv('TEST_PR_URL', 'https://github.com/Codium-ai/pr-agent/pull/1385')
|
||||
|
||||
get_settings().set("config.git_provider", "github")
|
||||
get_settings().set("config.publish_output", False)
|
||||
get_settings().set("config.fallback_models", [])
|
||||
|
||||
agent = PRAgent()
|
||||
try:
|
||||
# Run the 'describe' command
|
||||
get_logger().info(f"\nSanity check for the 'describe' command...")
|
||||
original_settings = copy.deepcopy(get_settings())
|
||||
await agent.handle_request(pr_url, ['describe'])
|
||||
pr_header_body = dict(get_settings().data)['artifact']
|
||||
assert pr_header_body.startswith('###') and 'PR Type' in pr_header_body and 'Description' in pr_header_body
|
||||
context['settings'] = copy.deepcopy(original_settings) # Restore settings state after each test to prevent test interference
|
||||
get_logger().info("PR description generated successfully\n")
|
||||
|
||||
# Run the 'review' command
|
||||
get_logger().info(f"\nSanity check for the 'review' command...")
|
||||
original_settings = copy.deepcopy(get_settings())
|
||||
await agent.handle_request(pr_url, ['review'])
|
||||
pr_review_body = dict(get_settings().data)['artifact']
|
||||
assert pr_review_body.startswith('##') and 'PR Reviewer Guide' in pr_review_body
|
||||
context['settings'] = copy.deepcopy(original_settings) # Restore settings state after each test to prevent test interference
|
||||
get_logger().info("PR review generated successfully\n")
|
||||
|
||||
# Run the 'improve' command
|
||||
get_logger().info(f"\nSanity check for the 'improve' command...")
|
||||
original_settings = copy.deepcopy(get_settings())
|
||||
await agent.handle_request(pr_url, ['improve'])
|
||||
pr_improve_body = dict(get_settings().data)['artifact']
|
||||
assert pr_improve_body.startswith('##') and 'PR Code Suggestions' in pr_improve_body
|
||||
context['settings'] = copy.deepcopy(original_settings) # Restore settings state after each test to prevent test interference
|
||||
get_logger().info("PR improvements generated successfully\n")
|
||||
|
||||
get_logger().info(f"\n\n========\nHealth test passed successfully\n========")
|
||||
|
||||
except Exception as e:
|
||||
get_logger().exception(f"\n\n========\nHealth test failed\n========")
|
||||
raise e
|
||||
|
||||
|
||||
def run():
|
||||
with request_cycle_context({}):
|
||||
context['settings'] = copy.deepcopy(global_settings)
|
||||
asyncio.run(run_async())
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
run()
|
@ -1,10 +1,8 @@
|
||||
from unittest.mock import MagicMock
|
||||
|
||||
from atlassian.bitbucket import Bitbucket
|
||||
|
||||
from pr_agent.algo.types import EDIT_TYPE, FilePatchInfo
|
||||
from pr_agent.git_providers import BitbucketServerProvider
|
||||
from pr_agent.git_providers.bitbucket_provider import BitbucketProvider
|
||||
from unittest.mock import MagicMock
|
||||
from atlassian.bitbucket import Bitbucket
|
||||
from pr_agent.algo.types import EDIT_TYPE, FilePatchInfo
|
||||
|
||||
|
||||
class TestBitbucketProvider:
|
||||
|
@ -1,5 +1,4 @@
|
||||
from unittest.mock import MagicMock
|
||||
|
||||
from pr_agent.git_providers.codecommit_client import CodeCommitClient
|
||||
|
||||
|
||||
|
@ -1,11 +1,9 @@
|
||||
from unittest.mock import patch
|
||||
|
||||
import pytest
|
||||
|
||||
from unittest.mock import patch
|
||||
from pr_agent.git_providers.codecommit_provider import CodeCommitFile
|
||||
from pr_agent.git_providers.codecommit_provider import CodeCommitProvider
|
||||
from pr_agent.git_providers.codecommit_provider import PullRequestCCMimic
|
||||
from pr_agent.algo.types import EDIT_TYPE, FilePatchInfo
|
||||
from pr_agent.git_providers.codecommit_provider import (CodeCommitFile,
|
||||
CodeCommitProvider,
|
||||
PullRequestCCMimic)
|
||||
|
||||
|
||||
class TestCodeCommitFile:
|
||||
|
@ -1,5 +1,4 @@
|
||||
import pytest
|
||||
|
||||
from pr_agent.algo.git_patch_processing import extend_patch
|
||||
from pr_agent.algo.pr_processing import pr_generate_extended_diff
|
||||
from pr_agent.algo.token_handler import TokenHandler
|
||||
|
@ -1,9 +1,7 @@
|
||||
import pytest
|
||||
|
||||
from pr_agent.algo.file_filter import filter_ignored
|
||||
from pr_agent.config_loader import global_settings
|
||||
|
||||
|
||||
class TestIgnoreFilter:
|
||||
def test_no_ignores(self):
|
||||
"""
|
||||
|
@ -1,10 +1,9 @@
|
||||
|
||||
# Generated by CodiumAI
|
||||
import pytest
|
||||
|
||||
from pr_agent.algo.types import FilePatchInfo
|
||||
from pr_agent.algo.utils import find_line_number_of_relevant_line_in_file
|
||||
|
||||
import pytest
|
||||
|
||||
class TestFindLineNumberOfRelevantLineInFile:
|
||||
# Tests that the function returns the correct line number and absolute position when the relevant line is found in the patch
|
||||
|
@ -1,9 +1,7 @@
|
||||
import json
|
||||
import os
|
||||
|
||||
import json
|
||||
from pr_agent.algo.utils import get_settings, github_action_output
|
||||
|
||||
|
||||
class TestGitHubOutput:
|
||||
def test_github_action_output_enabled(self, monkeypatch, tmp_path):
|
||||
get_settings().set('GITHUB_ACTION_CONFIG.ENABLE_OUTPUT', True)
|
||||
|
@ -47,3 +47,7 @@ PR Feedback:
|
||||
|
||||
expected_output = [{'relevant file': 'src/app.py:\n', 'suggestion content': 'The print statement is outside inside the if __name__ ==:'}]
|
||||
assert load_yaml(yaml_str) == expected_output
|
||||
|
||||
|
||||
|
||||
|
||||
|
@ -1,10 +1,10 @@
|
||||
|
||||
# Generated by CodiumAI
|
||||
import pytest
|
||||
|
||||
from pr_agent.algo.utils import try_fix_yaml
|
||||
|
||||
|
||||
import pytest
|
||||
|
||||
class TestTryFixYaml:
|
||||
|
||||
# The function successfully parses a valid YAML string.
|
||||
|