Mirror of https://github.com/qodo-ai/pr-agent.git, synced 2025-07-14 01:30:37 +08:00
Compare commits: 38 commits (mrT23-patc ... mrT23-patc)

SHA1: 5430e9ab5a, e650fe9ce9, daeca42ae8, 4a982a849d, 6e3544f523, bf3ebbb95f, eb44ecb1be, 45bae48701, b2181e4c79, 5939d3b17b, c1f4964a55, 022e407d84, 93ba2d239a, fa49dd5167, 16029e66ad, 7bd6713335, ef3241285d, d9ef26dc1c, 02949b2b96, d301c76b65, dacb45dd8a, 443d06df06, 15e8c988a4, 60fab1b301, 84c1c1b1ca, 7419a6d51a, ee58a92fb3, 6b64924355, 2f5e8472b9, 7186bf4bb3, 115fca58a3, cbf60ca636, 64ac45d03b, db062e3e35, e85472f367, 597f1c6f83, 67b46e7f30, 68f2cec077
README.md (12 added lines)

@@ -43,6 +43,18 @@ Qode Merge PR-Agent aims to help efficiently review and handle pull requests, by

 ## News and Updates

+### November 4, 2024
+
+Qodo Merge PR Agent will now leverage context from Jira or GitHub tickets to enhance the PR Feedback. Read more about this feature [here](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/)
+
+### November 3, 2024
+
+Meaningful improvement to the quality of code suggestions by separating the code suggestion generation from [line number detection](https://github.com/Codium-ai/pr-agent/pull/1338)
+
 ### October 27, 2024

 Qodo Merge PR Agent will now automatically document accepted code suggestions in a dedicated wiki page (`.pr_agent_accepted_suggestions`), enabling users to track historical changes, assess the tool's effectiveness, and learn from previously implemented recommendations in the repository.
docs/docs/core-abilities/fetching_ticket_context.md (new file, 115 lines)

# Fetching Ticket Context for PRs

## Overview

Qodo Merge PR Agent streamlines code review workflows by seamlessly connecting with multiple ticket management systems.
This integration enriches the review process by automatically surfacing relevant ticket information and context alongside code changes.

## Affected Tools

Ticket Recognition Requirements:

1. The PR description should contain a link to the ticket.
2. For Jira tickets, you should follow the instructions in [Jira Integration](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/#jira-integration) in order to authenticate with Jira.

### Describe tool

Qodo Merge PR Agent will recognize the ticket and use the ticket content (title, description, labels) to provide additional context for the code changes.
By understanding the reasoning and intent behind modifications, the LLM can offer more insightful and relevant code analysis.

### Review tool

Similarly to the `describe` tool, the `review` tool will use the ticket content to provide additional context for the code changes.

In addition, this feature will evaluate how well a Pull Request (PR) adheres to its original purpose/intent as defined by the associated ticket or issue mentioned in the PR description.
Each ticket will be assigned a Compliance/Alignment label that indicates the degree to which the PR fulfills its original purpose: Fully compliant, Partially compliant, or Not compliant.

By default, the tool will automatically validate whether the PR complies with the referenced ticket.
If you want to disable this feedback, add the following line to your configuration file:

```toml
[pr_reviewer]
require_ticket_analysis_review=false
```

## Providers

### GitHub Issues Integration

Qodo Merge PR Agent will automatically recognize GitHub issues mentioned in the PR description and fetch the issue content.
Examples of valid GitHub issue references:

- `https://github.com/<ORG_NAME>/<REPO_NAME>/issues/<ISSUE_NUMBER>`
- `#<ISSUE_NUMBER>`
- `<ORG_NAME>/<REPO_NAME>#<ISSUE_NUMBER>`

Since Qodo Merge PR Agent is integrated with GitHub, it doesn't require any additional configuration to fetch GitHub issues.
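As a rough illustration only (this is not the agent's actual implementation, and the helper name `find_issue_refs` is hypothetical), the three reference formats above could be recognized with patterns along these lines:

```python
import re

# One pattern per reference format listed above. These are simplified sketches;
# a production matcher would need to handle edge cases (URLs inside code blocks, etc.).
ISSUE_PATTERNS = [
    re.compile(r"https://github\.com/([\w.-]+)/([\w.-]+)/issues/(\d+)"),  # full issue URL
    re.compile(r"(?<![\w/])#(\d+)"),                                      # bare "#123"
    re.compile(r"([\w.-]+)/([\w.-]+)#(\d+)"),                             # "org/repo#123"
]

def find_issue_refs(pr_description: str) -> list[str]:
    """Return the raw GitHub issue references found in a PR description."""
    refs = []
    for pattern in ISSUE_PATTERNS:
        refs.extend(match.group(0) for match in pattern.finditer(pr_description))
    return refs

print(find_issue_refs("Fixes #42, see also acme/widgets#7"))
# ['#42', 'acme/widgets#7']
```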
### Jira Integration 💎

We support both Jira Cloud and Jira Server/Data Center.
To integrate with Jira, the PR description should contain a link to the Jira ticket.

For Jira integration, include a ticket reference in your PR description using either the complete URL format `https://<JIRA_ORG>.atlassian.net/browse/ISSUE-123` or the shortened ticket ID `ISSUE-123`.

!!! note "Jira Base URL"
    If using the shortened format, ensure your configuration file contains the Jira base URL under the [jira] section, like this:

    ```toml
    [jira]
    jira_base_url = "https://<JIRA_ORG>.atlassian.net"
    ```

#### Jira Cloud 💎

There are two ways to authenticate with Jira Cloud:

**1) Jira App Authentication**

The recommended way to authenticate with Jira Cloud is to install the Qodo Merge app in your Jira Cloud instance. This will allow Qodo Merge to access Jira data on your behalf.

Installation steps:

1. Click [here](https://auth.atlassian.com/authorize?audience=api.atlassian.com&client_id=8krKmA4gMD8mM8z24aRCgPCSepZNP1xf&scope=read%3Ajira-work%20offline_access&redirect_uri=https%3A%2F%2Fregister.jira.pr-agent.codium.ai&state=qodomerge&response_type=code&prompt=consent) to install the Qodo Merge app in your Jira Cloud instance, then click the `accept` button.
2. After installing the app, you will be redirected to the Qodo Merge registration page, where you will see a success message.
3. Now you can use the Jira integration in Qodo Merge PR Agent.

**2) Email/Token Authentication**

You can create an API token from your Atlassian account:

1. Log in to https://id.atlassian.com/manage-profile/security/api-tokens.
2. Click Create API token.
3. From the dialog that appears, enter a name for your new token and click Create.
4. Click Copy to clipboard.
5. In your [configuration file](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/) add the following lines:

```toml
[jira]
jira_api_token = "YOUR_API_TOKEN"
jira_api_email = "YOUR_EMAIL"
```

#### Jira Server/Data Center 💎

Currently, we only support the Personal Access Token (PAT) authentication method.

1. Create a [Personal Access Token (PAT)](https://confluence.atlassian.com/enterprise/using-personal-access-tokens-1026032365.html) in your Jira account
2. In your configuration file/environment variables/secrets file, add the following lines:

```toml
[jira]
jira_base_url = "YOUR_JIRA_BASE_URL" # e.g. https://jira.example.com
jira_api_token = "YOUR_API_TOKEN"
```
@@ -1,6 +1,7 @@
 # Core Abilities

 Qodo Merge utilizes a variety of core abilities to provide a comprehensive and efficient code review experience. These abilities include:

+- [Fetching ticket context](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/)
 - [Local and global metadata](https://qodo-merge-docs.qodo.ai/core-abilities/metadata/)
 - [Dynamic context](https://qodo-merge-docs.qodo.ai/core-abilities/dynamic_context/)
 - [Self-reflection](https://qodo-merge-docs.qodo.ai/core-abilities/self_reflection/)
@@ -46,6 +46,5 @@ This results in a more refined and valuable set of suggestions for the user, sav
 ## Appendix - Relevant Configuration Options

 ```
 [pr_code_suggestions]
-self_reflect_on_suggestions = true # Enable self-reflection on code suggestions
 suggestions_score_threshold = 0 # Filter out suggestions with a score below this threshold (0-10)
 ```
@@ -3,7 +3,7 @@

 You can use the Bitbucket Pipeline system to run Qodo Merge on every pull request open or update.

-1. Add the following file in your repository bitbucket_pipelines.yml
+1. Add the following file in your repository bitbucket-pipelines.yml

 ```yaml
 pipelines:
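The hunk ends inside the pipeline file, so only its first lines are shown. For orientation, a minimal `bitbucket-pipelines.yml` sketch is shown below; the step layout and the repository variables (`$OPENAI_API_KEY`, `$BITBUCKET_BEARER_TOKEN`) are assumptions to adapt, not the exact file from the repository:

```yaml
pipelines:
  pull-requests:
    '**':
      - step:
          name: PR Agent Review   # hypothetical step name
          services:
            - docker
          script:
            # Run the published pr-agent image against the PR that triggered this pipeline.
            # CONFIG/OPENAI/BITBUCKET settings are passed as environment variables.
            - docker run
              -e CONFIG.GIT_PROVIDER=bitbucket
              -e OPENAI.KEY=$OPENAI_API_KEY
              -e BITBUCKET.BEARER_TOKEN=$BITBUCKET_BEARER_TOKEN
              codiumai/pr-agent:latest
              --pr_url=https://bitbucket.org/$BITBUCKET_WORKSPACE/$BITBUCKET_REPO_SLUG/pull-requests/$BITBUCKET_PR_ID
              review
```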
@@ -42,21 +42,36 @@ Note that if your base branches are not protected, don't set the variables as `p

 ## Run a GitLab webhook server

-1. From the GitLab workspace or group, create an access token. Enable the "api" scope only.
+1. From the GitLab workspace or group, create an access token with "Reporter" role ("Developer" if using the Pro version of the agent) and "api" scope.

 2. Generate a random secret for your app, and save it for later. For example, you can use:

 ```
 WEBHOOK_SECRET=$(python -c "import secrets; print(secrets.token_hex(10))")
 ```

-3. Follow the instructions to build the Docker image, set up a secrets file and deploy on your own server from [here](https://qodo-merge-docs.qodo.ai/installation/github/#run-as-a-github-app), steps 4-7.
-
-4. In the secrets file, fill in the following:
-   - Your OpenAI key.
-   - In the [gitlab] section, fill in personal_access_token and shared_secret. The access token can be a personal access token, or a group or project access token.
-   - Set deployment_type to 'gitlab' in [configuration.toml](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml)
-
-5. Create a webhook in GitLab. Set the URL to `http[s]://<PR_AGENT_HOSTNAME>/webhook`. Set the secret token to the generated secret from step 2.
-   In the "Trigger" section, check the 'comments' and 'merge request events' boxes.
-
-6. Test your installation by opening a merge request or commenting on a merge request using one of CodiumAI's commands.
+3. Clone this repository:
+
+   ```
+   git clone https://github.com/Codium-ai/pr-agent.git
+   ```
+
+4. Prepare variables and secrets. Skip this step if you plan on setting these as environment variables when running the agent:
+
+   1. In the configuration file/variables:
+      - Set `deployment_type` to "gitlab"
+
+   2. In the secrets file/variables:
+      - Set your AI model key in the respective section
+      - In the [gitlab] section, set `personal_access_token` (with the token from step 1) and `shared_secret` (with the secret from step 2)
+
+5. Build a Docker image for the app and optionally push it to a Docker repository. We'll use Dockerhub as an example:
+
+   ```
+   docker build . -t gitlab_pr_agent --target gitlab_webhook -f docker/Dockerfile
+   docker push codiumai/pr-agent:gitlab_webhook  # Push to your Docker repository
+   ```
+
+6. Create a webhook in GitLab. Set the URL to `http[s]://<PR_AGENT_HOSTNAME>/webhook`, the secret token to the generated secret from step 2, and enable the triggers `push`, `comments` and `merge request events`.
+
+7. Test your installation by opening a merge request or commenting on a merge request using one of CodiumAI's commands.
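Before opening a test MR, you can sanity-check that the webhook endpoint is reachable and that the shared secret is accepted. A hedged smoke test; `X-Gitlab-Token` is the header GitLab itself sends, but the JSON body below is a minimal stand-in rather than a full GitLab event payload, so the server may still reject the event itself:

```
curl -X POST "https://<PR_AGENT_HOSTNAME>/webhook" \
     -H "Content-Type: application/json" \
     -H "X-Gitlab-Token: $WEBHOOK_SECRET" \
     -d '{"object_kind": "note"}'
```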
@@ -279,10 +279,6 @@ Using a combination of both can help the AI model to provide relevant and tailor
   <td><b>persistent_comment</b></td>
   <td>If set to true, the improve comment will be persistent, meaning that every new improve request will edit the previous one. Default is false.</td>
 </tr>
-<tr>
-  <td><b>self_reflect_on_suggestions</b></td>
-  <td>If set to true, the improve tool will calculate an importance score for each suggestion [1-10], and sort the suggestion labels group based on this score. Default is true.</td>
-</tr>
 <tr>
   <td><b>suggestions_score_threshold</b></td>
   <td>Any suggestion with an importance score below this threshold will be removed. Default is 0. We highly recommend not setting this value above 7-8, since higher values may clip relevant and useful suggestions.</td>
 </tr>
@@ -140,7 +140,7 @@ num_code_suggestions = ...
 </tr>
 <tr>
   <td><b>require_ticket_analysis_review</b></td>
-  <td>If set to true, and the PR contains a GitHub ticket number, the tool will add a section that checks if the PR in fact fulfilled the ticket requirements. Default is true.</td>
+  <td>If set to true, and the PR contains a GitHub or Jira ticket link, the tool will add a section that checks if the PR in fact fulfilled the ticket requirements. Default is true.</td>
 </tr>
 </table>
@@ -160,3 +160,13 @@ ignore_pr_target_branches = ["qa"]

 Where the `ignore_pr_source_branches` and `ignore_pr_target_branches` are lists of regex patterns to match the source and target branches you want to ignore.
 They are not mutually exclusive; you can use them together or separately.
+
+To allow only specific folders (often needed in large monorepos), set:
+
+```
+[config]
+allow_only_specific_folders=['folder1','folder2']
+```
+
+For the configuration above, automatic feedback will only be triggered when the PR changes include files from 'folder1' or 'folder2'.
@@ -72,13 +72,13 @@ The configuration parameter `pr_commands` defines the list of tools that will be
 ```
 [github_app]
 pr_commands = [
-    "/describe --pr_description.final_update_message=false",
-    "/review --pr_reviewer.num_code_suggestions=0",
-    "/improve",
+    "/describe",
+    "/review",
+    "/improve --pr_code_suggestions.suggestions_score_threshold=5",
 ]
 ```
 This means that when a new PR is opened/reopened or marked as ready for review, Qodo Merge will run the `describe`, `review` and `improve` tools.
-For the `review` tool, for example, the `num_code_suggestions` parameter will be set to 0.
+For the `improve` tool, for example, the `suggestions_score_threshold` parameter will be set to 5 (suggestions below a score of 5 won't be presented).

 You can override the default tool parameters by using one of the three options for a [configuration file](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/): **wiki**, **local**, or **global**.
 For example, if your local `.pr_agent.toml` file contains:
@@ -105,7 +105,7 @@ The configuration parameter `push_commands` defines the list of tools that will
 handle_push_trigger = true
 push_commands = [
     "/describe",
-    "/review --pr_reviewer.num_code_suggestions=0 --pr_reviewer.final_update_message=false",
+    "/review",
 ]
 ```
 This means that when new code is pushed to the PR, Qodo Merge will run the `describe` and `review` tools, with the specified parameters.
@@ -148,12 +148,12 @@ After setting up a GitLab webhook, to control which commands will run automatica
 [gitlab]
 pr_commands = [
     "/describe",
-    "/review --pr_reviewer.num_code_suggestions=0",
+    "/review",
     "/improve",
 ]
 ```

-the GitLab webhook can also respond to new code that is pushed to an open MR.
+The GitLab webhook can also respond to new code that is pushed to an open MR.
 The configuration toggle `handle_push_trigger` can be used to enable this feature.
 The configuration parameter `push_commands` defines the list of tools that will be **run automatically** when new code is pushed to the MR.
 ```
@@ -161,7 +161,7 @@ The configuration parameter `push_commands` defines the list of tools that will
 handle_push_trigger = true
 push_commands = [
     "/describe",
-    "/review --pr_reviewer.num_code_suggestions=0 --pr_reviewer.final_update_message=false",
+    "/review",
 ]
 ```
@@ -182,7 +182,7 @@ Each time you invoke a `/review` tool, it will use the extra instructions you se

 Note that among other limitations, BitBucket provides relatively low rate-limits for applications (up to 1000 requests per hour), and does not provide an API to track the actual rate-limit usage.
-If you experience lack of responses from Qodo Merge, you might want to set: `bitbucket_app.avoid_full_files=true` in your configuration file.
+If you experience a lack of responses from Qodo Merge, you might want to set `bitbucket_app.avoid_full_files=true` in your configuration file.
 This will prevent Qodo Merge from acquiring the full file content, and will only use the diff content. This will reduce the number of requests made to BitBucket, at the cost of a small decrease in accuracy, as dynamic context will not be applicable.
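In configuration-file form, the setting mentioned above is simply:

```
[bitbucket_app]
avoid_full_files = true
```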
@@ -194,13 +194,23 @@ Specifically, set the following values:
 ```
 [bitbucket_app]
 pr_commands = [
-    "/review --pr_reviewer.num_code_suggestions=0",
+    "/review",
     "/improve --pr_code_suggestions.commitable_code_suggestions=true --pr_code_suggestions.suggestions_score_threshold=7",
 ]
 ```
 Note that specifically for Bitbucket we recommend using `--pr_code_suggestions.suggestions_score_threshold=7`, and that is the default value we set for Bitbucket.
 Since this platform only supports inline code suggestions, we want to limit the number of suggestions, and only present a limited number.
+
+To enable the BitBucket app to respond to each **push** to the PR, set (for example):
+
+```
+[bitbucket_app]
+handle_push_trigger = true
+push_commands = [
+    "/describe",
+    "/review",
+]
+```

 ## Azure DevOps provider

 To use the Azure DevOps provider, use the following settings in configuration.toml:
@@ -210,7 +220,7 @@ git_provider="azure"
 ```

 Azure DevOps provider supports [PAT token](https://learn.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=azure-devops&tabs=Windows) or [DefaultAzureCredential](https://learn.microsoft.com/en-us/azure/developer/python/sdk/authentication-overview#authentication-in-server-environments) authentication.
-PAT is faster to create, but has build in expiration date, and will use the user identity for API calls.
+PAT is faster to create, but has a built-in expiration date, and will use the user identity for API calls.
 Using DefaultAzureCredential you can use a managed identity or service principal, which are more secure and will create a separate ADO user identity (via AAD) for the agent.

 If PAT was chosen, you can assign the value in .secrets.toml.
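For reference, a PAT-based .secrets.toml sketch; the exact key names under `[azure_devops]` (`org`, `pat`) are assumptions based on the template secrets file, so verify them against your version of the repository:

```
[azure_devops]
org = "https://dev.azure.com/<YOUR_ORG>/"   # assumed key name
pat = "<YOUR_PERSONAL_ACCESS_TOKEN>"        # assumed key name
```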
@@ -133,9 +133,26 @@ Your [application default credentials](https://cloud.google.com/docs/authenticat

 If you do want to set explicit credentials, then you can use the `GOOGLE_APPLICATION_CREDENTIALS` environment variable set to a path to a json credentials file.

+### Google AI Studio
+
+To use [Google AI Studio](https://aistudio.google.com/) models, set the relevant models in the configuration section of the configuration file:
+
+```toml
+[config] # in configuration.toml
+model="google_ai_studio/gemini-1.5-flash"
+model_turbo="google_ai_studio/gemini-1.5-flash"
+fallback_models=["google_ai_studio/gemini-1.5-flash"]
+
+[google_ai_studio] # in .secrets.toml
+gemini_api_key = "..."
+```
+
+If you don't want to set the API key in the .secrets.toml file, you can set the `GOOGLE_AI_STUDIO.GEMINI_API_KEY` environment variable.
+
 ### Anthropic

 To use Anthropic models, set the relevant models in the configuration section of the configuration file:

 ```
 [config]
 model="anthropic/claude-3-opus-20240229"
@@ -43,6 +43,7 @@ nav:
           - 💎 Similar Code: 'tools/similar_code.md'
       - Core Abilities:
           - 'core-abilities/index.md'
+          - Fetching ticket context: 'core-abilities/fetching_ticket_context.md'
          - Local and global metadata: 'core-abilities/metadata.md'
           - Dynamic context: 'core-abilities/dynamic_context.md'
           - Self-reflection: 'core-abilities/self_reflection.md'
@@ -38,6 +38,8 @@ MAX_TOKENS = {
     'vertex_ai/gemini-1.5-pro': 1048576,
     'vertex_ai/gemini-1.5-flash': 1048576,
     'vertex_ai/gemma2': 8200,
+    'gemini/gemini-1.5-pro': 1048576,
+    'gemini/gemini-1.5-flash': 1048576,
     'codechat-bison': 6144,
     'codechat-bison-32k': 32000,
     'anthropic.claude-instant-v1': 100000,
@@ -83,6 +83,11 @@ class LiteLLMAIHandler(BaseAiHandler):
             litellm.vertex_location = get_settings().get(
                 "VERTEXAI.VERTEX_LOCATION", None
             )
+        # Google AI Studio
+        # SEE https://docs.litellm.ai/docs/providers/gemini
+        if get_settings().get("GOOGLE_AI_STUDIO.GEMINI_API_KEY", None):
+            os.environ["GEMINI_API_KEY"] = get_settings().google_ai_studio.gemini_api_key
+
     def prepare_logs(self, response, system, user, resp, finish_reason):
         response_log = response.dict().copy()
         response_log['system'] = system
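With `GEMINI_API_KEY` exported as the handler above does, LiteLLM routes `gemini/...` model names to Google AI Studio (see the LiteLLM docs linked in the code). A minimal standalone sketch of that routing, separate from the handler; the prompt content is a placeholder:

```python
import os

import litellm

os.environ["GEMINI_API_KEY"] = "..."  # normally taken from the [google_ai_studio] settings

# litellm.completion resolves the "gemini/" prefix to the Google AI Studio provider.
response = litellm.completion(
    model="gemini/gemini-1.5-flash",
    messages=[{"role": "user", "content": "Summarize this diff in one sentence: ..."}],
)
print(response.choices[0].message.content)
```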
@@ -1,6 +1,7 @@
+from os import environ
 from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
 import openai
-from openai.error import APIError, RateLimitError, Timeout, TryAgain
+from openai import APIError, AsyncOpenAI, RateLimitError, Timeout
 from retry import retry

 from pr_agent.config_loader import get_settings
@@ -14,7 +15,7 @@ class OpenAIHandler(BaseAiHandler):
         # Initialize OpenAIHandler specific attributes here
         try:
             super().__init__()
-            openai.api_key = get_settings().openai.key
+            environ["OPENAI_API_KEY"] = get_settings().openai.key
             if get_settings().get("OPENAI.ORG", None):
                 openai.organization = get_settings().openai.org
             if get_settings().get("OPENAI.API_TYPE", None):
@@ -24,7 +25,7 @@ class OpenAIHandler(BaseAiHandler):
             if get_settings().get("OPENAI.API_VERSION", None):
                 openai.api_version = get_settings().openai.api_version
             if get_settings().get("OPENAI.API_BASE", None):
-                openai.api_base = get_settings().openai.api_base
+                environ["OPENAI_BASE_URL"] = get_settings().openai.api_base

         except AttributeError as e:
             raise ValueError("OpenAI key is required") from e
@@ -36,7 +37,7 @@ class OpenAIHandler(BaseAiHandler):
         """
         return get_settings().get("OPENAI.DEPLOYMENT_ID", None)

-    @retry(exceptions=(APIError, Timeout, TryAgain, AttributeError, RateLimitError),
+    @retry(exceptions=(APIError, Timeout, AttributeError, RateLimitError),
            tries=OPENAI_RETRIES, delay=2, backoff=2, jitter=(1, 3))
     async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2):
         try:
@@ -44,20 +45,19 @@ class OpenAIHandler(BaseAiHandler):
             get_logger().info("System: ", system)
             get_logger().info("User: ", user)
             messages = [{"role": "system", "content": system}, {"role": "user", "content": user}]
+            client = AsyncOpenAI()
-            chat_completion = await openai.ChatCompletion.acreate(
+            chat_completion = await client.chat.completions.create(
                 model=model,
-                deployment_id=deployment_id,
                 messages=messages,
                 temperature=temperature,
             )
-            resp = chat_completion["choices"][0]['message']['content']
-            finish_reason = chat_completion["choices"][0]["finish_reason"]
-            usage = chat_completion.get("usage")
+            resp = chat_completion.choices[0].message.content
+            finish_reason = chat_completion.choices[0].finish_reason
+            usage = chat_completion.usage
             get_logger().info("AI response", response=resp, messages=messages, finish_reason=finish_reason,
                               model=model, usage=usage)
             return resp, finish_reason
-        except (APIError, Timeout, TryAgain) as e:
+        except (APIError, Timeout) as e:
             get_logger().error("Error during OpenAI inference: ", e)
             raise
         except (RateLimitError) as e:
@@ -65,4 +65,4 @@ class OpenAIHandler(BaseAiHandler):
             raise
         except (Exception) as e:
             get_logger().error("Unknown error during OpenAI inference: ", e)
-            raise TryAgain from e
+            raise
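The diff above migrates the handler from the pre-1.0 `openai.ChatCompletion.acreate` API to the openai>=1.0 client, which returns typed objects instead of dicts. A self-contained sketch of the new call pattern; the model name and prompts are placeholders, not values from the repository:

```python
import asyncio

from openai import AsyncOpenAI


async def main() -> None:
    # AsyncOpenAI() reads OPENAI_API_KEY / OPENAI_BASE_URL from the environment,
    # which is why the handler now exports them instead of setting module globals.
    client = AsyncOpenAI()
    chat_completion = await client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a code reviewer."},
            {"role": "user", "content": "Review this diff: ..."},
        ],
        temperature=0.2,
    )
    # Responses are typed objects now, not dicts.
    print(chat_completion.choices[0].message.content)
    print(chat_completion.usage)


asyncio.run(main())
```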
@@ -43,6 +43,10 @@ class PRReviewHeader(str, Enum):
     INCREMENTAL = "## Incremental PR Reviewer Guide"


+class PRDescriptionHeader(str, Enum):
+    CHANGES_WALKTHROUGH = "### **Changes walkthrough** 📝"
+
+
 def get_setting(key: str) -> Any:
     try:
         key = key.upper()
@@ -1024,8 +1028,7 @@ def process_description(description_full: str) -> Tuple[str, List]:
     if not description_full:
         return "", []

-    split_str = "### **Changes walkthrough** 📝"
-    description_split = description_full.split(split_str)
+    description_split = description_full.split(PRDescriptionHeader.CHANGES_WALKTHROUGH.value)
     base_description_str = description_split[0]
     changes_walkthrough_str = ""
     files = []
@@ -1060,6 +1063,9 @@ def process_description(description_full: str) -> Tuple[str, List]:
             if not res or res.lastindex != 4:
                 pattern_back = r'<details>\s*<summary><strong>(.*?)</strong><dd><code>(.*?)</code>.*?</summary>\s*<hr>\s*(.*?)\n\n\s*(.*?)</details>'
                 res = re.search(pattern_back, file_data, re.DOTALL)
+            if not res or res.lastindex != 4:
+                pattern_back = r'<details>\s*<summary><strong>(.*?)</strong>\s*<dd><code>(.*?)</code>.*?</summary>\s*<hr>\s*(.*?)\s*-\s*(.*?)\s*</details>'  # looking for a hyphen ('- ')
+                res = re.search(pattern_back, file_data, re.DOTALL)
             if res and res.lastindex == 4:
                 short_filename = res.group(1).strip()
                 short_summary = res.group(2).strip()
@@ -5,7 +5,7 @@ from urllib.parse import urlparse
 from ..algo.file_filter import filter_ignored
 from ..log import get_logger
 from ..algo.language_handler import is_valid_file
-from ..algo.utils import clip_tokens, find_line_number_of_relevant_line_in_file, load_large_diff
+from ..algo.utils import clip_tokens, find_line_number_of_relevant_line_in_file, load_large_diff, PRDescriptionHeader
 from ..config_loader import get_settings
 from .git_provider import GitProvider
 from pr_agent.algo.types import EDIT_TYPE, FilePatchInfo
@@ -404,7 +404,7 @@ class AzureDevopsProvider(GitProvider):
             pr_body = pr_body[:ind]

         if len(pr_body) > MAX_PR_DESCRIPTION_AZURE_LENGTH:
-            changes_walkthrough_text = '## **Changes walkthrough**'
+            changes_walkthrough_text = PRDescriptionHeader.CHANGES_WALKTHROUGH.value
             ind = pr_body.find(changes_walkthrough_text)
             if ind != -1:
                 pr_body = pr_body[:ind]
@@ -43,6 +43,9 @@ api_base = "" # the base url for your local Llama 2, Code Llama, and other model
 vertex_project = "" # the google cloud platform project name for your vertexai deployment
 vertex_location = "" # the google cloud platform location for your vertexai deployment

+[google_ai_studio]
+gemini_api_key = "" # the Google AI Studio API key
+
 [github]
 # ---- Set the following only for deployment type == "user"
 user_token = "" # A GitHub personal access token with 'repo' scope.
@@ -60,6 +63,7 @@ webhook_secret = "<WEBHOOK SECRET>" # Optional, may be commented out.
 [gitlab]
 # Gitlab personal access token
 personal_access_token = ""
+shared_secret = "" # webhook secret

 [bitbucket]
 # For Bitbucket personal/repository bearer token
@@ -7,6 +7,7 @@ fallback_models=["gpt-4o-2024-05-13"]
 git_provider="github"
 publish_output=true
 publish_output_progress=true
+publish_output_no_suggestions=true
 verbosity_level=0 # 0,1,2
 use_extra_bad_extensions=false
 # Configurations
@@ -121,7 +122,6 @@ max_history_len=4
 # enable to apply suggestion 💎
 apply_suggestions_checkbox=true
 # suggestions scoring
-self_reflect_on_suggestions=true
 suggestions_score_threshold=0 # [0-10] | recommend not to set this value above 8, since above it may clip highly relevant suggestions
 # params for '/improve --extended' mode
 auto_extended_mode=true
@@ -14,10 +14,10 @@ The PR code diff will be in the following structured format:

 @@ ... @@ def func1():
 __new hunk__
-11 unchanged code line0 in the PR
-12 unchanged code line1 in the PR
-13 +new code line2 added in the PR
-14 unchanged code line3 in the PR
+unchanged code line0 in the PR
+unchanged code line1 in the PR
++new code line2 added in the PR
+unchanged code line3 in the PR
 __old hunk__
 unchanged code line0
 unchanged code line1
@@ -35,7 +35,6 @@ __new hunk__
 ======

 - In the format above, the diff is organized into separate '__new hunk__' and '__old hunk__' sections for each code chunk. '__new hunk__' contains the updated code, while '__old hunk__' shows the removed code. If no code was removed in a specific chunk, the __old hunk__ section will be omitted.
-- Line numbers were added for the '__new hunk__' sections to help referencing specific lines in the code suggestions. These numbers are for reference only and are not part of the actual code.
 - Code lines are prefixed with symbols: '+' for new code added in the PR, '-' for code removed, and ' ' for unchanged code.
 {%- if is_ai_metadata %}
 - When available, an AI-generated summary will precede each file's diff, with a high-level overview of the changes. Note that this summary may not be fully accurate or complete.
@@ -44,7 +43,7 @@ __new hunk__

 Specific guidelines for generating code suggestions:
 - Provide up to {{ num_code_suggestions }} distinct and insightful code suggestions.
-- Focus solely on enhancing new code introduced in the PR, identified by '+' prefixes in '__new hunk__' sections (after the line numbers).
+- Focus solely on enhancing new code introduced in the PR, identified by '+' prefixes in '__new hunk__' sections.
 - Prioritize suggestions that address potential issues, critical problems, and bugs in the PR code. Avoid repeating changes already implemented in the PR. If no pertinent suggestions are applicable, return an empty list.
 - Don't suggest to add docstring, type hints, or comments, to remove unused imports, or to use more specific exception types.
 - When referencing variables or names from the code, enclose them in backticks (`). Example: "ensure that `variable_name` is..."
@@ -67,12 +66,10 @@ class CodeSuggestion(BaseModel):
     relevant_file: str = Field(description="Full path of the relevant file")
     language: str = Field(description="Programming language used by the relevant file")
     suggestion_content: str = Field(description="An actionable suggestion to enhance, improve or fix the new code introduced in the PR. Don't present here actual code snippets, just the suggestion. Be short and concise")
-    existing_code: str = Field(description="A short code snippet from a '__new hunk__' section that the suggestion aims to enhance or fix. Include only complete code lines, without line numbers. Use ellipsis (...) for brevity if needed. This snippet should represent the specific PR code targeted for improvement.")
+    existing_code: str = Field(description="A short code snippet from a '__new hunk__' section that the suggestion aims to enhance or fix. Include only complete code lines. Use ellipsis (...) for brevity if needed. This snippet should represent the specific PR code targeted for improvement.")
     improved_code: str = Field(description="A refined code snippet that replaces the 'existing_code' snippet after implementing the suggestion.")
     one_sentence_summary: str = Field(description="A concise, single-sentence overview of the suggested improvement. Focus on the 'what'. Be general, and avoid method or variable names.")
-    relevant_lines_start: int = Field(description="The relevant line number, from a '__new hunk__' section, where the suggestion starts (inclusive). Should be derived from the hunk line numbers, and correspond to the beginning of the 'existing code' snippet above")
-    relevant_lines_end: int = Field(description="The relevant line number, from a '__new hunk__' section, where the suggestion ends (inclusive). Should be derived from the hunk line numbers, and correspond to the end of the 'existing code' snippet above")
-    label: str = Field(description="A single, descriptive label that best characterizes the suggestion type. Possible labels include 'security', 'possible bug', 'possible issue', 'performance', 'enhancement', 'best practice', 'maintainability'. Other relevant labels are also acceptable.")
+    label: str = Field(description="A single, descriptive label that best characterizes the suggestion type. Possible labels include 'security', 'possible bug', 'possible issue', 'performance', 'enhancement', 'best practice', 'maintainability', 'typo'. Other relevant labels are also acceptable.")


 class PRCodeSuggestions(BaseModel):
@@ -95,8 +92,6 @@ code_suggestions:
       ...
     one_sentence_summary: |
       ...
-    relevant_lines_start: 12
-    relevant_lines_end: 13
     label: |
       ...
 ```
@@ -112,7 +107,7 @@ Title: '{{title}}'

 The PR Diff:
 ======
-{{ diff|trim }}
+{{ diff_no_line_numbers|trim }}
 ======
@@ -15,8 +15,8 @@ Be particularly vigilant for suggestions that:
 - Contradict or ignore parts of the PR's modifications
 In such cases, assign the suggestion a score of 0.

-For valid suggestions, your role is to provide an impartial and precise score assessment that accurately reflects each suggestion's potential impact on the PR's correctness, quality and functionality.
+Evaluate each valid suggestion by scoring its potential impact on the PR's correctness, quality and functionality.
+In addition, you should also detect the line numbers in the '__new hunk__' section that correspond to the 'existing_code' snippet.

 Key guidelines for evaluation:
 - Thoroughly examine both the suggestion content and the corresponding PR code diff. Be vigilant for potential errors in each suggestion, ensuring they are logically sound, accurate, and directly derived from the PR code diff.
@@ -82,6 +82,8 @@ The output must be a YAML object equivalent to type $PRCodeSuggestionsFeedback,
 class CodeSuggestionFeedback(BaseModel):
     suggestion_summary: str = Field(description="Repeated from the input")
     relevant_file: str = Field(description="Repeated from the input")
+    relevant_lines_start: int = Field(description="The relevant line number, from a '__new hunk__' section, where the suggestion starts (inclusive). Should be derived from the hunk line numbers, and correspond to the beginning of the relevant 'existing code' snippet")
+    relevant_lines_end: int = Field(description="The relevant line number, from a '__new hunk__' section, where the suggestion ends (inclusive). Should be derived from the hunk line numbers, and correspond to the end of the relevant 'existing code' snippet")
     suggestion_score: int = Field(description="Evaluate the suggestion and assign a score from 0 to 10. Give 0 if the suggestion is wrong. For valid suggestions, score from 1 (lowest impact/importance) to 10 (highest impact/importance).")
     why: str = Field(description="Briefly explain the score given in 1-2 sentences, focusing on the suggestion's impact, relevance, and accuracy.")
|
|||||||
- suggestion_summary: |
|
- suggestion_summary: |
|
||||||
Use a more descriptive variable name here
|
Use a more descriptive variable name here
|
||||||
relevant_file: "src/file1.py"
|
relevant_file: "src/file1.py"
|
||||||
|
relevant_lines_start: 13
|
||||||
|
relevant_lines_end: 14
|
||||||
suggestion_score: 6
|
suggestion_score: 6
|
||||||
why: |
|
why: |
|
||||||
The variable name 't' is not descriptive enough
|
The variable name 't' is not descriptive enough
|
||||||
|
@@ -1,6 +1,7 @@
 import asyncio
 import copy
 import textwrap
+import traceback
 from functools import partial
 from typing import Dict, List
 from jinja2 import Environment, StrictUndefined
@@ -44,7 +45,7 @@ class PRCodeSuggestions:
             self.is_extended = self._get_is_extended(args or [])
         except:
             self.is_extended = False
-        num_code_suggestions = get_settings().pr_code_suggestions.num_code_suggestions_per_chunk
+        num_code_suggestions = int(get_settings().pr_code_suggestions.num_code_suggestions_per_chunk)

         self.ai_handler = ai_handler()
@@ -69,6 +70,7 @@ class PRCodeSuggestions:
             "description": self.pr_description,
             "language": self.main_language,
             "diff": "",  # empty diff for initial calculation
+            "diff_no_line_numbers": "",  # empty diff for initial calculation
             "num_code_suggestions": num_code_suggestions,
             "extra_instructions": get_settings().pr_code_suggestions.extra_instructions,
             "commit_messages_str": self.git_provider.get_commit_messages(),
@@ -110,15 +112,17 @@ class PRCodeSuggestions:
             if not data:
                 data = {"code_suggestions": []}

-            if (data is None or 'code_suggestions' not in data or not data['code_suggestions']
-                    and get_settings().config.publish_output):
-                get_logger().warning('No code suggestions found for the PR.')
-                pr_body = "## PR Code Suggestions ✨\n\nNo code suggestions found for the PR."
-                get_logger().debug(f"PR output", artifact=pr_body)
-                if self.progress_response:
-                    self.git_provider.edit_comment(self.progress_response, body=pr_body)
-                else:
-                    self.git_provider.publish_comment(pr_body)
+            if (data is None or 'code_suggestions' not in data or not data['code_suggestions']):
+                pr_body = "## PR Code Suggestions ✨\n\nNo code suggestions found for the PR."
+                get_logger().warning('No code suggestions found for the PR.')
+                if get_settings().config.publish_output and get_settings().config.publish_output_no_suggestions:
+                    get_logger().debug(f"PR output", artifact=pr_body)
+                    if self.progress_response:
+                        self.git_provider.edit_comment(self.progress_response, body=pr_body)
+                    else:
+                        self.git_provider.publish_comment(pr_body)
+                else:
+                    get_settings().data = {"artifact": ""}
                 return

             if (not self.is_extended and get_settings().pr_code_suggestions.rank_suggestions) or \
@@ -195,8 +199,11 @@ class PRCodeSuggestions:
                     self.git_provider.remove_comment(self.progress_response)
             else:
                 get_logger().info('Code suggestions generated for PR, but not published since publish_output is False.')
+                get_settings().data = {"artifact": data}
+                return
         except Exception as e:
-            get_logger().error(f"Failed to generate code suggestions for PR, error: {e}")
+            get_logger().error(f"Failed to generate code suggestions for PR, error: {e}",
+                               artifact={"traceback": traceback.format_exc()})
             if get_settings().config.publish_output:
                 if self.progress_response:
                     self.progress_response.delete()
@@ -328,7 +335,7 @@ class PRCodeSuggestions:

         if self.patches_diff:
             get_logger().debug(f"PR diff", artifact=self.patches_diff)
-            self.prediction = await self._get_prediction(model, self.patches_diff)
+            self.prediction = await self._get_prediction(model, self.patches_diff, self.patches_diff_no_line_number)
         else:
             get_logger().warning(f"Empty PR diff")
             self.prediction = None
@@ -336,42 +343,76 @@ class PRCodeSuggestions:
         data = self.prediction
         return data

-    async def _get_prediction(self, model: str, patches_diff: str) -> dict:
+    async def _get_prediction(self, model: str, patches_diff: str, patches_diff_no_line_number: str) -> dict:
         variables = copy.deepcopy(self.vars)
         variables["diff"] = patches_diff  # update diff
+        variables["diff_no_line_numbers"] = patches_diff_no_line_number  # update diff
         environment = Environment(undefined=StrictUndefined)
         system_prompt = environment.from_string(self.pr_code_suggestions_prompt_system).render(variables)
         user_prompt = environment.from_string(get_settings().pr_code_suggestions_prompt.user).render(variables)
         response, finish_reason = await self.ai_handler.chat_completion(
             model=model, temperature=get_settings().config.temperature, system=system_prompt, user=user_prompt)
+        if not get_settings().config.publish_output:
+            get_settings().system_prompt = system_prompt
+            get_settings().user_prompt = user_prompt

         # load suggestions from the AI response
         data = self._prepare_pr_code_suggestions(response)

-        # self-reflect on suggestions
-        if get_settings().pr_code_suggestions.self_reflect_on_suggestions:
-            model_turbo = get_settings().config.model_turbo  # use turbo model for self-reflection, since it is an easier task
-            response_reflect = await self.self_reflect_on_suggestions(data["code_suggestions"],
-                                                                      patches_diff, model=model_turbo)
-            if response_reflect:
-                response_reflect_yaml = load_yaml(response_reflect)
-                code_suggestions_feedback = response_reflect_yaml["code_suggestions"]
-                if len(code_suggestions_feedback) == len(data["code_suggestions"]):
-                    for i, suggestion in enumerate(data["code_suggestions"]):
-                        try:
-                            suggestion["score"] = code_suggestions_feedback[i]["suggestion_score"]
-                            suggestion["score_why"] = code_suggestions_feedback[i]["why"]
-                        except Exception as e:  #
-                            get_logger().error(f"Error processing suggestion score {i}",
-                                               artifact={"suggestion": suggestion,
-                                                         "code_suggestions_feedback": code_suggestions_feedback[i]})
-                            suggestion["score"] = 7
-                            suggestion["score_why"] = ""
-            else:
-                # get_logger().error(f"Could not self-reflect on suggestions. using default score 7")
-                for i, suggestion in enumerate(data["code_suggestions"]):
-                    suggestion["score"] = 7
-                    suggestion["score_why"] = ""
+        # self-reflect on suggestions (mandatory, since line numbers are generated now here)
+        model_reflection = get_settings().config.model
+        response_reflect = await self.self_reflect_on_suggestions(data["code_suggestions"],
+                                                                  patches_diff, model=model_reflection)
+        if response_reflect:
+            response_reflect_yaml = load_yaml(response_reflect)
+            code_suggestions_feedback = response_reflect_yaml["code_suggestions"]
+            if len(code_suggestions_feedback) == len(data["code_suggestions"]):
+                for i, suggestion in enumerate(data["code_suggestions"]):
+                    try:
+                        suggestion["score"] = code_suggestions_feedback[i]["suggestion_score"]
+                        suggestion["score_why"] = code_suggestions_feedback[i]["why"]
+
+                        if 'relevant_lines_start' not in suggestion:
+                            relevant_lines_start = code_suggestions_feedback[i].get('relevant_lines_start', -1)
+                            relevant_lines_end = code_suggestions_feedback[i].get('relevant_lines_end', -1)
+                            suggestion['relevant_lines_start'] = relevant_lines_start
+                            suggestion['relevant_lines_end'] = relevant_lines_end
+                            if relevant_lines_start < 0 or relevant_lines_end < 0:
+                                suggestion["score"] = 0
+
+                        try:
+                            if get_settings().config.publish_output:
+                                suggestion_statistics_dict = {'score': int(suggestion["score"]),
+                                                              'label': suggestion["label"].lower().strip()}
+                                get_logger().info(f"PR-Agent suggestions statistics",
+                                                  statistics=suggestion_statistics_dict, analytics=True)
+                        except Exception as e:
+                            get_logger().error(f"Failed to log suggestion statistics, error: {e}")
+                            pass
+
+                    except Exception as e:  #
+                        get_logger().error(f"Error processing suggestion score {i}",
+                                           artifact={"suggestion": suggestion,
+                                                     "code_suggestions_feedback": code_suggestions_feedback[i]})
+                        suggestion["score"] = 7
+                        suggestion["score_why"] = ""
+
+                    # if the before and after code is the same, clear one of them
+                    try:
+                        if suggestion['existing_code'] == suggestion['improved_code']:
+                            get_logger().debug(
+                                f"edited improved suggestion {i + 1}, because equal to existing code: {suggestion['existing_code']}")
+                            if get_settings().pr_code_suggestions.commitable_code_suggestions:
+                                suggestion['improved_code'] = ""  # we need 'existing_code' to locate the code in the PR
+                            else:
+                                suggestion['existing_code'] = ""
+                    except Exception as e:
+                        get_logger().error(f"Error processing suggestion {i + 1}, error: {e}")
+        else:
+            # get_logger().error(f"Could not self-reflect on suggestions. using default score 7")
+            for i, suggestion in enumerate(data["code_suggestions"]):
+                suggestion["score"] = 7
+                suggestion["score_why"] = ""

         return data
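This hunk is the heart of the November 3 change: suggestion generation and line-number detection are now separate concerns. `_get_prediction` receives both a numbered and an unnumbered rendering of the diff, self-reflection becomes mandatory (the `self_reflect_on_suggestions` guard is gone), and when the generation model did not return `relevant_lines_start`/`relevant_lines_end`, the reflection feedback supplies them; a suggestion whose recovered range is invalid gets score 0 so the threshold filter later drops it. A compact sketch of that merge logic, with field names taken from the hunk and illustrative sample data:

```python
def apply_reflection_feedback(suggestions: list[dict], feedback: list[dict]) -> None:
    """Merge self-reflection output into suggestions, as the new hunk does."""
    for i, suggestion in enumerate(suggestions):
        try:
            suggestion["score"] = feedback[i]["suggestion_score"]
            suggestion["score_why"] = feedback[i]["why"]
            if 'relevant_lines_start' not in suggestion:
                start = feedback[i].get('relevant_lines_start', -1)
                end = feedback[i].get('relevant_lines_end', -1)
                suggestion['relevant_lines_start'] = start
                suggestion['relevant_lines_end'] = end
                if start < 0 or end < 0:
                    suggestion["score"] = 0  # no usable line range, dropped later
        except Exception:
            suggestion["score"], suggestion["score_why"] = 7, ""  # neutral default

suggestions = [{"one_sentence_summary": "use a constant"}]
feedback = [{"suggestion_score": 8, "why": "improves readability",
             "relevant_lines_start": 12, "relevant_lines_end": 14}]
apply_reflection_feedback(suggestions, feedback)
assert suggestions[0]["relevant_lines_start"] == 12
```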
@@ -381,10 +422,10 @@ class PRCodeSuggestions:
         suggestion_truncation_message = get_settings().get("PR_CODE_SUGGESTIONS.SUGGESTION_TRUNCATION_MESSAGE", "")
         if max_code_suggestion_length > 0:
             if len(suggestion['improved_code']) > max_code_suggestion_length:
-                suggestion['improved_code'] = suggestion['improved_code'][:max_code_suggestion_length]
-                suggestion['improved_code'] += f"\n{suggestion_truncation_message}"
                 get_logger().info(f"Truncated suggestion from {len(suggestion['improved_code'])} "
                                   f"characters to {max_code_suggestion_length} characters")
+                suggestion['improved_code'] = suggestion['improved_code'][:max_code_suggestion_length]
+                suggestion['improved_code'] += f"\n{suggestion_truncation_message}"
         return suggestion

     def _prepare_pr_code_suggestions(self, predictions: str) -> Dict:
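The truncation hunk only reorders statements, but the order matters: previously the suggestion was clipped before the log statement ran, so "Truncated suggestion from X characters" reported the already-clipped length. Logging first and clipping second makes the reported length accurate. A minimal reproduction of the fixed ordering (`truncate` is an illustrative stand-in for `_truncate_if_needed`):

```python
def truncate(improved_code: str, max_len: int, marker: str = "[...]") -> str:
    if 0 < max_len < len(improved_code):
        # log BEFORE mutating, so the original length is reported (the fix)
        print(f"Truncated suggestion from {len(improved_code)} characters to {max_len} characters")
        improved_code = improved_code[:max_len] + f"\n{marker}"
    return improved_code

print(truncate("x" * 500, 100))
```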
@@ -399,8 +440,7 @@ class PRCodeSuggestions:
             one_sentence_summary_list = []
             for i, suggestion in enumerate(data['code_suggestions']):
                 try:
-                    needed_keys = ['one_sentence_summary', 'label', 'relevant_file', 'relevant_lines_start',
-                                   'relevant_lines_end']
+                    needed_keys = ['one_sentence_summary', 'label', 'relevant_file']
                     is_valid_keys = True
                     for key in needed_keys:
                         if key not in suggestion:
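Since line numbers are now recovered during self-reflection, the first-pass response is validated only for the three fields the generation model still owns. The `needed_keys` check, reduced to its essence:

```python
needed_keys = ['one_sentence_summary', 'label', 'relevant_file']  # line-range keys dropped

def is_valid(suggestion: dict) -> bool:
    # equivalent to the hunk's is_valid_keys loop
    return all(key in suggestion for key in needed_keys)

assert is_valid({'one_sentence_summary': 's', 'label': 'bug', 'relevant_file': 'a.py'})
assert not is_valid({'label': 'bug'})
```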
@@ -422,13 +462,6 @@ class PRCodeSuggestions:
                         continue

                     if ('existing_code' in suggestion) and ('improved_code' in suggestion):
-                        if suggestion['existing_code'] == suggestion['improved_code']:
-                            get_logger().debug(
-                                f"edited improved suggestion {i + 1}, because equal to existing code: {suggestion['existing_code']}")
-                            if get_settings().pr_code_suggestions.commitable_code_suggestions:
-                                suggestion['improved_code'] = ""  # we need 'existing_code' to locate the code in the PR
-                            else:
-                                suggestion['existing_code'] = ""
                         suggestion = self._truncate_if_needed(suggestion)
                     one_sentence_summary_list.append(suggestion['one_sentence_summary'])
                     suggestion_list.append(suggestion)
@@ -531,9 +564,33 @@ class PRCodeSuggestions:
             return True
         return False

+    def remove_line_numbers(self, patches_diff_list: List[str]) -> List[str]:
+        # create a copy of the patches_diff_list, without line numbers for '__new hunk__' sections
+        try:
+            self.patches_diff_list_no_line_numbers = []
+            for patches_diff in self.patches_diff_list:
+                patches_diff_lines = patches_diff.splitlines()
+                for i, line in enumerate(patches_diff_lines):
+                    if line.strip():
+                        if line[0].isdigit():
+                            # find the first letter in the line that starts with a valid letter
+                            for j, char in enumerate(line):
+                                if not char.isdigit():
+                                    patches_diff_lines[i] = line[j + 1:]
+                                    break
+                self.patches_diff_list_no_line_numbers.append('\n'.join(patches_diff_lines))
+            return self.patches_diff_list_no_line_numbers
+        except Exception as e:
+            get_logger().error(f"Error removing line numbers from patches_diff_list, error: {e}")
+            return patches_diff_list
+
     async def _prepare_prediction_extended(self, model: str) -> dict:
         self.patches_diff_list = get_pr_multi_diffs(self.git_provider, self.token_handler, model,
                                                     max_calls=get_settings().pr_code_suggestions.max_number_of_calls)
+
+        # create a copy of the patches_diff_list, without line numbers for '__new hunk__' sections
+        self.patches_diff_list_no_line_numbers = self.remove_line_numbers(self.patches_diff_list)
+
         if self.patches_diff_list:
             get_logger().info(f"Number of PR chunk calls: {len(self.patches_diff_list)}")
             get_logger().debug(f"PR diff:", artifact=self.patches_diff_list)
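The new `remove_line_numbers` helper produces the unnumbered diff variant wired into `_prepare_prediction_extended` above: for every line that begins with a digit it finds the first non-digit character and drops everything up to and including it, which strips both the line number and its separator from `__new hunk__` sections. A standalone version with a usage example (the sample input is illustrative):

```python
def remove_line_numbers(patches_diff: str) -> str:
    lines = patches_diff.splitlines()
    for i, line in enumerate(lines):
        if line.strip() and line[0].isdigit():
            # cut through the first non-digit character (number plus separator)
            for j, char in enumerate(line):
                if not char.isdigit():
                    lines[i] = line[j + 1:]
                    break
    return '\n'.join(lines)

numbered = "__new hunk__\n12 +def foo():\n13 +    return 1"
print(remove_line_numbers(numbered))
# __new hunk__
# +def foo():
# +    return 1
```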
@@ -541,12 +598,14 @@ class PRCodeSuggestions:
             # parallelize calls to AI:
             if get_settings().pr_code_suggestions.parallel_calls:
                 prediction_list = await asyncio.gather(
-                    *[self._get_prediction(model, patches_diff) for patches_diff in self.patches_diff_list])
+                    *[self._get_prediction(model, patches_diff, patches_diff_no_line_numbers) for
+                      patches_diff, patches_diff_no_line_numbers in
+                      zip(self.patches_diff_list, self.patches_diff_list_no_line_numbers)])
                 self.prediction_list = prediction_list
             else:
                 prediction_list = []
-                for i, patches_diff in enumerate(self.patches_diff_list):
-                    prediction = await self._get_prediction(model, patches_diff)
+                for patches_diff, patches_diff_no_line_numbers in zip(self.patches_diff_list, self.patches_diff_list_no_line_numbers):
+                    prediction = await self._get_prediction(model, patches_diff, patches_diff_no_line_numbers)
                     prediction_list.append(prediction)

             data = {"code_suggestions": []}
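Both the parallel and the sequential paths now walk the numbered and unnumbered chunk lists in lockstep, so each `_get_prediction` call receives a matching pair. The same zip-and-gather shape in isolation, with a stub coroutine standing in for `_get_prediction`:

```python
import asyncio

async def get_prediction(numbered: str, plain: str) -> str:
    return f"prediction for {plain!r}"  # stub for self._get_prediction

async def main() -> None:
    patches = ["1 +a", "2 +b"]
    patches_plain = ["+a", "+b"]
    # parallel path: one task per (numbered, unnumbered) pair
    results = await asyncio.gather(
        *[get_prediction(n, p) for n, p in zip(patches, patches_plain)])
    print(results)

asyncio.run(main())
```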
@@ -555,18 +614,16 @@ class PRCodeSuggestions:
                 score_threshold = max(1, int(get_settings().pr_code_suggestions.suggestions_score_threshold))
                 for i, prediction in enumerate(predictions["code_suggestions"]):
                     try:
-                        if get_settings().pr_code_suggestions.self_reflect_on_suggestions:
-                            score = int(prediction.get("score", 1))
-                            if score >= score_threshold:
-                                data["code_suggestions"].append(prediction)
-                            else:
-                                get_logger().info(
-                                    f"Removing suggestions {i} from call {j}, because score is {score}, and score_threshold is {score_threshold}",
-                                    artifact=prediction)
-                        else:
-                            data["code_suggestions"].append(prediction)
+                        score = int(prediction.get("score", 1))
+                        if score >= score_threshold:
+                            data["code_suggestions"].append(prediction)
+                        else:
+                            get_logger().info(
+                                f"Removing suggestions {i} from call {j}, because score is {score}, and score_threshold is {score_threshold}",
+                                artifact=prediction)
                     except Exception as e:
-                        get_logger().error(f"Error getting PR diff for suggestion {i} in call {j}, error: {e}")
+                        get_logger().error(f"Error getting PR diff for suggestion {i} in call {j}, error: {e}",
+                                           artifact={"prediction": prediction})
                 self.data = data
             else:
                 get_logger().warning(f"Empty PR diff list")
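With self-reflection mandatory, every suggestion arrives with a score, so the old `self_reflect_on_suggestions` branch that bypassed filtering disappears and the threshold check applies unconditionally; failures are also logged with the offending prediction attached. The filter in isolation:

```python
def filter_by_score(predictions: list[dict], threshold: int) -> list[dict]:
    threshold = max(1, int(threshold))  # same clamp as the hunk
    kept = []
    for prediction in predictions:
        if int(prediction.get("score", 1)) >= threshold:
            kept.append(prediction)
    return kept

preds = [{"score": 9}, {"score": 2}, {}]
assert filter_by_score(preds, 7) == [{"score": 9}]
```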
@@ -617,7 +674,7 @@ class PRCodeSuggestions:
         if get_settings().pr_code_suggestions.final_clip_factor != 1:
             max_len = max(
                 len(data_sorted),
-                get_settings().pr_code_suggestions.num_code_suggestions_per_chunk,
+                int(get_settings().pr_code_suggestions.num_code_suggestions_per_chunk),
             )
             new_len = int(0.5 + max_len * get_settings().pr_code_suggestions.final_clip_factor)
             if new_len < len(data_sorted):
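The only change here is a defensive `int(...)` cast: if `num_code_suggestions_per_chunk` arrives as a string (for instance via an environment-variable override), the bare value would make `max(...)` raise a `TypeError` in Python 3. For example:

```python
# without the cast, max(3, "4") raises TypeError in Python 3
num_per_chunk = "4"  # e.g. injected via an env-var override
max_len = max(3, int(num_per_chunk))
assert max_len == 4
```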
@@ -650,10 +707,7 @@ class PRCodeSuggestions:
         header = f"Suggestion"
         delta = 66
         header += " " * delta
-        if get_settings().pr_code_suggestions.self_reflect_on_suggestions:
-            pr_body += f"""<thead><tr><td>Category</td><td align=left>{header}</td><td align=center>Score</td></tr>"""
-        else:
-            pr_body += f"""<thead><tr><td>Category</td><td align=left>{header}</td></tr>"""
+        pr_body += f"""<thead><tr><td>Category</td><td align=left>{header}</td><td align=center>Score</td></tr>"""
         pr_body += """<tbody>"""
         suggestions_labels = dict()
         # add all suggestions related to each label
@@ -664,12 +718,11 @@ class PRCodeSuggestions:
             suggestions_labels[label].append(suggestion)

         # sort suggestions_labels by the suggestion with the highest score
-        if get_settings().pr_code_suggestions.self_reflect_on_suggestions:
-            suggestions_labels = dict(
-                sorted(suggestions_labels.items(), key=lambda x: max([s['score'] for s in x[1]]), reverse=True))
-            # sort the suggestions inside each label group by score
-            for label, suggestions in suggestions_labels.items():
-                suggestions_labels[label] = sorted(suggestions, key=lambda x: x['score'], reverse=True)
+        suggestions_labels = dict(
+            sorted(suggestions_labels.items(), key=lambda x: max([s['score'] for s in x[1]]), reverse=True))
+        # sort the suggestions inside each label group by score
+        for label, suggestions in suggestions_labels.items():
+            suggestions_labels[label] = sorted(suggestions, key=lambda x: x['score'], reverse=True)

         counter_suggestions = 0
         for label, suggestions in suggestions_labels.items():
@@ -728,16 +781,14 @@ class PRCodeSuggestions:

 {example_code.rstrip()}
 """
-                if get_settings().pr_code_suggestions.self_reflect_on_suggestions:
-                    pr_body += f"<details><summary>Suggestion importance[1-10]: {suggestion['score']}</summary>\n\n"
-                    pr_body += f"Why: {suggestion['score_why']}\n\n"
-                    pr_body += f"</details>"
+                pr_body += f"<details><summary>Suggestion importance[1-10]: {suggestion['score']}</summary>\n\n"
+                pr_body += f"Why: {suggestion['score_why']}\n\n"
+                pr_body += f"</details>"

                 pr_body += f"</details>"

                 # # add another column for 'score'
-                if get_settings().pr_code_suggestions.self_reflect_on_suggestions:
-                    pr_body += f"</td><td align=center>{suggestion['score']}\n\n"
+                pr_body += f"</td><td align=center>{suggestion['score']}\n\n"

                 pr_body += f"</td></tr>"
                 counter_suggestions += 1
@@ -12,7 +12,7 @@ from pr_agent.algo.ai_handlers.litellm_ai_handler import LiteLLMAIHandler
 from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models, get_pr_diff_multiple_patchs, \
     OUTPUT_BUFFER_TOKENS_HARD_THRESHOLD
 from pr_agent.algo.token_handler import TokenHandler
-from pr_agent.algo.utils import set_custom_labels
+from pr_agent.algo.utils import set_custom_labels, PRDescriptionHeader
 from pr_agent.algo.utils import load_yaml, get_user_labels, ModelType, show_relevant_configurations, get_max_tokens, \
     clip_tokens
 from pr_agent.config_loader import get_settings
@@ -501,7 +501,7 @@ extra_file_yaml =
                 pr_body += "</details>\n"
             elif 'pr_files' in key.lower() and get_settings().pr_description.enable_semantic_files_types:
                 changes_walkthrough, pr_file_changes = self.process_pr_files_prediction(changes_walkthrough, value)
-                changes_walkthrough = f"### **Changes walkthrough** 📝\n{changes_walkthrough}"
+                changes_walkthrough = f"{PRDescriptionHeader.CHANGES_WALKTHROUGH.value}\n{changes_walkthrough}"
             else:
                 # if the value is a list, join its items by comma
                 if isinstance(value, list):
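The hard-coded walkthrough heading is replaced by a shared `PRDescriptionHeader` enum imported from `pr_agent.algo.utils`, so any component that later needs to find or split on this header can reference one constant instead of duplicating the string. The enum itself is not part of this diff; judging from the literal it replaces, it is presumably shaped roughly like:

```python
from enum import Enum

class PRDescriptionHeader(str, Enum):
    # value inferred from the string literal this hunk replaces
    CHANGES_WALKTHROUGH = "### **Changes walkthrough** 📝"

changes_walkthrough = f"{PRDescriptionHeader.CHANGES_WALKTHROUGH.value}\n..."
print(changes_walkthrough)
```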
@@ -108,7 +108,7 @@ async def extract_tickets(git_provider)


 async def extract_and_cache_pr_tickets(git_provider, vars):
-    if get_settings().get('config.require_ticket_analysis_review', False):
+    if not get_settings().get('pr_reviewer.require_ticket_analysis_review', False):
         return
     related_tickets = get_settings().get('related_tickets', [])
     if not related_tickets:
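This one-line change fixes two things at once: the `require_ticket_analysis_review` setting moves from the global `config` section to `pr_reviewer`, and the inverted guard is corrected so the function returns early when ticket analysis is disabled rather than when it is enabled. The corrected guard, with a plain dict standing in for the settings object:

```python
def extract_and_cache_pr_tickets(settings: dict) -> str:
    # corrected guard: skip ticket extraction only when the feature is off
    if not settings.get('pr_reviewer.require_ticket_analysis_review', False):
        return "skipped"
    return "extracted"

assert extract_and_cache_pr_tickets({}) == "skipped"
assert extract_and_cache_pr_tickets(
    {'pr_reviewer.require_ticket_analysis_review': True}) == "extracted"
```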
@@ -4,10 +4,12 @@ atlassian-python-api==3.41.4
 azure-devops==7.1.0b3
 azure-identity==1.15.0
 boto3==1.33.6
+certifi==2024.8.30
 dynaconf==3.2.4
 fastapi==0.111.0
 GitPython==3.1.41
 google-cloud-aiplatform==1.38.0
+google-generativeai==0.8.3
 google-cloud-storage==2.10.0
 Jinja2==3.1.2
 litellm==1.50.2