Mirror of https://github.com/qodo-ai/pr-agent.git (synced 2025-07-05 05:10:38 +08:00)

Compare commits: v0.12 ... ok/update_ (2 commits: eee6252f6d, dd8c992dad)

.github/workflows/pr-agent-review.yaml (vendored): 12 changed lines
@@ -5,9 +5,8 @@
name: PR-Agent

on:
# pull_request:
# issue_comment:
workflow_dispatch:
pull_request:
issue_comment:

permissions:
issues: write

@@ -25,11 +24,4 @@ jobs:
OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
OPENAI_ORG: ${{ secrets.OPENAI_ORG }} # optional
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PINECONE.API_KEY: ${{ secrets.PINECONE_API_KEY }}
PINECONE.ENVIRONMENT: ${{ secrets.PINECONE_ENVIRONMENT }}
GITHUB_ACTION_CONFIG.AUTO_DESCRIBE: true
GITHUB_ACTION_CONFIG.AUTO_REVIEW: true
GITHUB_ACTION_CONFIG.AUTO_IMPROVE: true
.gitignore (vendored): 1 changed line

@@ -6,4 +6,3 @@ dist/
*.egg-info/
build/
review.md
.DS_Store
@@ -1,6 +0,0 @@
[pr_reviewer]
enable_review_labels_effort = true

[pr_code_suggestions]
summarize=true
INSTALL.md: 219 changed lines

@@ -3,75 +3,60 @@

To get started with PR-Agent quickly, you first need to acquire two tokens:

1. An OpenAI key from [here](https://platform.openai.com/api-keys), with access to GPT-4.
2. A GitHub\GitLab\BitBucket personal access token (classic), with the repo scope. [GitHub from [here](https://github.com/settings/tokens)]
1. An OpenAI key from [here](https://platform.openai.com/), with access to GPT-4.
2. A GitHub personal access token (classic) with the repo scope.

There are several ways to use PR-Agent:

**Locally**
- [Using Docker image (no installation required)](INSTALL.md#use-docker-image-no-installation-required)
- [Run from source](INSTALL.md#run-from-source)

**GitHub specific methods**
- [Run as a GitHub Action](INSTALL.md#run-as-a-github-action)
- [Run as a polling server](INSTALL.md#run-as-a-polling-server)
- [Run as a GitHub App](INSTALL.md#run-as-a-github-app)
- [Deploy as a Lambda Function](INSTALL.md#deploy-as-a-lambda-function)
- [AWS CodeCommit](INSTALL.md#aws-codecommit-setup)

**GitLab specific methods**
- [Run a GitLab webhook server](INSTALL.md#run-a-gitlab-webhook-server)

**BitBucket specific methods**
- [Run as a Bitbucket Pipeline](INSTALL.md#run-as-a-bitbucket-pipeline)
- [Run on a hosted app](INSTALL.md#run-on-a-hosted-bitbucket-app)
- [Bitbucket server and data center](INSTALL.md#bitbucket-server-and-data-center)
- [Method 1: Use Docker image (no installation required)](INSTALL.md#method-1-use-docker-image-no-installation-required)
- [Method 2: Run from source](INSTALL.md#method-2-run-from-source)
- [Method 3: Run as a GitHub Action](INSTALL.md#method-3-run-as-a-github-action)
- [Method 4: Run as a polling server](INSTALL.md#method-4-run-as-a-polling-server)
- [Method 5: Run as a GitHub App](INSTALL.md#method-5-run-as-a-github-app)
- [Method 6: Deploy as a Lambda Function](INSTALL.md#method-6---deploy-as-a-lambda-function)
- [Method 7: AWS CodeCommit](INSTALL.md#method-7---aws-codecommit-setup)
- [Method 8: Run a GitLab webhook server](INSTALL.md#method-8---run-a-gitlab-webhook-server)

---

### Use Docker image (no installation required)
### Method 1: Use Docker image (no installation required)

A list of the relevant tools can be found in the [tools guide](./docs/TOOLS_GUIDE.md).
To request a review for a PR, or ask a question about a PR, you can run directly from the Docker image. Here's how:

To invoke a tool (for example `review`), you can run directly from the Docker image. Here's how:
1. To request a review for a PR, run the following command:

- For GitHub:
```
docker run --rm -it -e OPENAI.KEY=<your key> -e GITHUB.USER_TOKEN=<your token> codiumai/pr-agent:latest --pr_url <pr_url> review
docker run --rm -it -e OPENAI.KEY=<your key> -e GITHUB.USER_TOKEN=<your token> codiumai/pr-agent --pr_url <pr_url> review
```

- For GitLab:
```
docker run --rm -it -e OPENAI.KEY=<your key> -e CONFIG.GIT_PROVIDER=gitlab -e GITLAB.PERSONAL_ACCESS_TOKEN=<your token> codiumai/pr-agent:latest --pr_url <pr_url> review
```
2. To ask a question about a PR, run the following command:

Note: If you have a dedicated GitLab instance, you need to specify the custom url as variable:
```
docker run --rm -it -e OPENAI.KEY=<your key> -e CONFIG.GIT_PROVIDER=gitlab -e GITLAB.PERSONAL_ACCESS_TOKEN=<your token> -e GITLAB.URL=<your gitlab instance url> codiumai/pr-agent:latest --pr_url <pr_url> review
docker run --rm -it -e OPENAI.KEY=<your key> -e GITHUB.USER_TOKEN=<your token> codiumai/pr-agent --pr_url <pr_url> ask "<your question>"
```
Note: If you want to ensure you're running a specific version of the Docker image, consider using the image's digest.
The digest is a unique identifier for a specific version of an image. You can pull and run an image using its digest by referencing it like so: repository@sha256:digest. Always ensure you're using the correct and trusted digest for your operations.

- For BitBucket:
```
docker run --rm -it -e CONFIG.GIT_PROVIDER=bitbucket -e OPENAI.KEY=$OPENAI_API_KEY -e BITBUCKET.BEARER_TOKEN=$BITBUCKET_BEARER_TOKEN codiumai/pr-agent:latest --pr_url=<pr_url> review
```

For other git providers, update CONFIG.GIT_PROVIDER accordingly, and check the `pr_agent/settings/.secrets_template.toml` file for the expected environment variable names and values.
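The invocation pattern is the same for every provider; only the provider name and its token variable change. A rough sketch with placeholders (the exact token variable name for your provider is an assumption here and should be taken from `.secrets_template.toml`):

```
docker run --rm -it -e CONFIG.GIT_PROVIDER=<provider> -e OPENAI.KEY=<your key> -e <PROVIDER.TOKEN_VARIABLE>=<your token> codiumai/pr-agent:latest --pr_url <pr_url> review
```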

---

If you want to ensure you're running a specific version of the Docker image, consider using the image's digest:
1. To request a review for a PR using a specific digest, run the following command:
```bash
docker run --rm -it -e OPENAI.KEY=<your key> -e GITHUB.USER_TOKEN=<your token> codiumai/pr-agent@sha256:71b5ee15df59c745d352d84752d01561ba64b6d51327f97d46152f0c58a5f678 --pr_url <pr_url> review
```

Or you can run a [specific released versions](./RELEASE_NOTES.md) of pr-agent, for example:
```
codiumai/pr-agent@v0.9
2. To ask a question about a PR using the same digest, run the following command:
```bash
docker run --rm -it -e OPENAI.KEY=<your key> -e GITHUB.USER_TOKEN=<your token> codiumai/pr-agent@sha256:71b5ee15df59c745d352d84752d01561ba64b6d51327f97d46152f0c58a5f678 --pr_url <pr_url> ask "<your question>"
```

Possible questions you can ask include:

- What is the main theme of this PR?
- Is the PR ready for merge?
- What are the main changes in this PR?
- Should this PR be split into smaller parts?
- Can you compose a rhymed song about this PR?

---

### Run from source
### Method 2: Run from source

1. Clone this repository:

@@ -79,14 +64,12 @@ codiumai/pr-agent@v0.9
git clone https://github.com/Codium-ai/pr-agent.git
```

2. Navigate to the `/pr-agent` folder and install the requirements in your favorite virtual environment:
2. Install the requirements in your favorite virtual environment:

```
pip install -e .
pip install -r requirements.txt
```

*Note: If you get an error related to Rust in the dependency installation then make sure Rust is installed and in your `PATH`, instructions: https://rustup.rs*
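If Rust is not installed, the standard rustup one-liner (taken from rustup.rs) is one way to set it up; review the script before piping it to a shell:

```
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```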

3. Copy the secrets template file and fill in your OpenAI key and your GitHub user token:

```
@@ -95,27 +78,19 @@ chmod 600 pr_agent/settings/.secrets.toml
# Edit .secrets.toml file
```

4. Run the cli.py script:
4. Add the pr_agent folder to your PYTHONPATH, then run the cli.py script:

```
python3 -m pr_agent.cli --pr_url <pr_url> review
python3 -m pr_agent.cli --pr_url <pr_url> ask <your question>
python3 -m pr_agent.cli --pr_url <pr_url> describe
python3 -m pr_agent.cli --pr_url <pr_url> improve
python3 -m pr_agent.cli --pr_url <pr_url> add_docs
python3 -m pr_agent.cli --pr_url <pr_url> generate_labels
python3 -m pr_agent.cli --issue_url <issue_url> similar_issue
...
```

[Optional] Add the pr_agent folder to your PYTHONPATH
```
export PYTHONPATH=$PYTHONPATH:<PATH to pr_agent folder>
export PYTHONPATH=[$PYTHONPATH:]<PATH to pr_agent folder>
python pr_agent/cli.py --pr_url <pr_url> /review
python pr_agent/cli.py --pr_url <pr_url> /ask <your question>
python pr_agent/cli.py --pr_url <pr_url> /describe
python pr_agent/cli.py --pr_url <pr_url> /improve
```

---

### Run as a GitHub Action
### Method 3: Run as a GitHub Action

You can use our pre-built Github Action Docker image to run PR-Agent as a Github Action.

@@ -141,7 +116,7 @@ jobs:
OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
** if you want to pin your action to a specific release (v0.7 for example) for stability reasons, use:
** if you want to pin your action to a specific commit for stability reasons
```yaml
on:
pull_request:
@@ -158,16 +133,15 @@ jobs:
steps:
- name: PR Agent action step
id: pragent
uses: Codium-ai/pr-agent@v0.7
uses: Codium-ai/pr-agent@<commit_sha>
env:
OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
2. Add the following secret to your repository under `Settings > Secrets and variables > Actions > New repository secret > Add secret`:
2. Add the following secret to your repository under `Settings > Secrets`:

```
Name = OPENAI_KEY
Secret = <your key>
OPENAI_KEY: <your key>
```

The GITHUB_TOKEN secret is automatically created by GitHub.
@@ -175,7 +149,7 @@ The GITHUB_TOKEN secret is automatically created by GitHub.
3. Merge this change to your main branch.
When you open your next PR, you should see a comment from `github-actions` bot with a review of your PR, and instructions on how to use the rest of the tools.

4. You may configure PR-Agent by adding environment variables under the env section corresponding to any configurable property in the [configuration](pr_agent/settings/configuration.toml) file. Some examples:
4. You may configure PR-Agent by adding environment variables under the env section corresponding to any configurable property in the [configuration](./Usage.md) file. Some examples:
```yaml
env:
# ... previous environment values
@@ -186,11 +160,10 @@ When you open your next PR, you should see a comment from `github-actions` bot w

---

### Run as a polling server
Request reviews by tagging your GitHub user on a PR

Follow [steps 1-3](#run-as-a-github-action) of the GitHub Action setup.
### Method 4: Run as a polling server
Request reviews by tagging your Github user on a PR

Follow steps 1-3 of method 2.
Run the following command to start the server:

```
@@ -199,7 +172,7 @@ python pr_agent/servers/github_polling.py

---

### Run as a GitHub App
### Method 5: Run as a GitHub App
Allowing you to automate the review process on your private or public repositories.

1. Create a GitHub App from the [Github Developer Portal](https://docs.github.com/en/developers/apps/creating-a-github-app).
@@ -212,7 +185,6 @@ Allowing you to automate the review process on your private or public repositori
- Set the following events:
- Issue comment
- Pull request
- Push (if you need to enable triggering on PR update)

2. Generate a random secret for your app, and save it for later. For example, you can use:
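A minimal sketch of one way to do this (the GitLab webhook section later in this guide uses the same approach); any sufficiently long random string works:

```
# generate a 20-character hex secret and keep it in a shell variable
WEBHOOK_SECRET=$(python -c "import secrets; print(secrets.token_hex(10))")
echo "$WEBHOOK_SECRET"  # save this value for the later steps
```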

@@ -281,13 +253,13 @@ docker push codiumai/pr-agent:github_app # Push to your Docker repository
9. Install the app by navigating to the "Install App" tab and selecting your desired repositories.

> **Note:** When running PR-Agent from GitHub App, the default configuration file (configuration.toml) will be loaded.<br>
> However, you can override the default tool parameters by uploading a local configuration file `.pr_agent.toml`<br>
> For more information please check out the [USAGE GUIDE](./Usage.md#working-with-github-app)
> However, you can override the default tool parameters by uploading a local configuration file<br>
> For more information please check out [CONFIGURATION.md](Usage.md#working-from-github-app-pre-built-repo)
---

### Deploy as a Lambda Function
### Method 6 - Deploy as a Lambda Function

1. Follow steps 1-5 of [Method 5](#run-as-a-github-app).
1. Follow steps 1-5 of [Method 5](#method-5-run-as-a-github-app).
2. Build a docker image that can be used as a lambda function
```shell
docker buildx build --platform=linux/amd64 . -t codiumai/pr-agent:serverless -f docker/Dockerfile.lambda
@@ -299,13 +271,12 @@ docker push codiumai/pr-agent:github_app # Push to your Docker repository
```
4. Create a lambda function that uses the uploaded image. Set the lambda timeout to be at least 3m.
5. Configure the lambda function to have a Function URL.
6. In the environment variables of the Lambda function, specify `AZURE_DEVOPS_CACHE_DIR` to a writable location such as /tmp. (see [link](https://github.com/Codium-ai/pr-agent/pull/450#issuecomment-1840242269))
7. Go back to steps 8-9 of [Method 5](#run-as-a-github-app) with the function url as your Webhook URL.
6. Go back to steps 8-9 of [Method 5](#method-5-run-as-a-github-app) with the function url as your Webhook URL.
The Webhook URL would look like `https://<LAMBDA_FUNCTION_URL>/api/v1/github_webhooks`
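If you prefer scripting steps 4-6 instead of using the AWS console, a hedged sketch with the AWS CLI is shown below; the function name, region, account id and role ARN are placeholders, and your account's actual values must be substituted:

```shell
aws lambda create-function \
  --function-name pr-agent \
  --package-type Image \
  --code ImageUri=<account_id>.dkr.ecr.<region>.amazonaws.com/codiumai/pr-agent:serverless \
  --role <lambda_execution_role_arn> \
  --timeout 180 \
  --environment "Variables={AZURE_DEVOPS_CACHE_DIR=/tmp}"

# expose the function through a public Function URL (step 5)
aws lambda create-function-url-config --function-name pr-agent --auth-type NONE
```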

---

### AWS CodeCommit Setup
### Method 7 - AWS CodeCommit Setup

Not all features have been added to CodeCommit yet. As of right now, CodeCommit has been implemented to run the pr-agent CLI on the command line, using AWS credentials stored in environment variables. (More features will be added in the future.) The following is a set of instructions to have pr-agent do a review of your CodeCommit pull request from the command line:

@@ -375,7 +346,7 @@ PYTHONPATH="/PATH/TO/PROJECTS/pr-agent" python pr_agent/cli.py \

---

### Run a GitLab webhook server
### Method 8 - Run a GitLab webhook server

1. From the GitLab workspace or group, create an access token. Enable the "api" scope only.
2. Generate a random secret for your app, and save it for later. For example, you can use:
@@ -383,7 +354,7 @@ PYTHONPATH="/PATH/TO/PROJECTS/pr-agent" python pr_agent/cli.py \
```
WEBHOOK_SECRET=$(python -c "import secrets; print(secrets.token_hex(10))")
```
3. Follow the instructions to build the Docker image, setup a secrets file and deploy on your own server from [Method 5](#run-as-a-github-app) steps 4-7.
3. Follow the instructions to build the Docker image, setup a secrets file and deploy on your own server from [Method 5](#method-5-run-as-a-github-app).
4. In the secrets file, fill in the following:
- Your OpenAI key.
- In the [gitlab] section, fill in personal_access_token and shared_secret. The access token can be a personal access token, or a group or project access token.
@@ -392,77 +363,11 @@ WEBHOOK_SECRET=$(python -c "import secrets; print(secrets.token_hex(10))")
In the "Trigger" section, check the ‘comments’ and ‘merge request events’ boxes.
6. Test your installation by opening a merge request or commenting on a merge request using one of CodiumAI's commands.

---

### Appendix - **Debugging LLM API Calls**
If you're testing your codium/pr-agent server, and need to see if calls were made successfully + the exact call logs, you can use the [LiteLLM Debugger tool](https://docs.litellm.ai/docs/debugging/hosted_debugging).

### Run as a Bitbucket Pipeline
You can do this by setting `litellm_debugger=true` in configuration.toml. Your Logs will be viewable in real-time @ `admin.litellm.ai/<your_email>`. Set your email in the `.secrets.toml` under 'user_email'.

You can use the Bitbucket Pipeline system to run PR-Agent on every pull request open or update.

1. Add the following file in your repository bitbucket_pipelines.yml

```yaml
pipelines:
pull-requests:
'**':
- step:
name: PR Agent Review
image: python:3.10
services:
- docker
script:
- docker run -e CONFIG.GIT_PROVIDER=bitbucket -e OPENAI.KEY=$OPENAI_API_KEY -e BITBUCKET.BEARER_TOKEN=$BITBUCKET_BEARER_TOKEN codiumai/pr-agent:latest --pr_url=https://bitbucket.org/$BITBUCKET_WORKSPACE/$BITBUCKET_REPO_SLUG/pull-requests/$BITBUCKET_PR_ID review
```

2. Add the following secure variables to your repository under Repository settings > Pipelines > Repository variables.
OPENAI_API_KEY: <your key>
BITBUCKET_BEARER_TOKEN: <your token>

You can get a Bitbucket token for your repository by following Repository Settings -> Security -> Access Tokens.

Note that comments on a PR are not supported in Bitbucket Pipeline.

### Run using CodiumAI-hosted Bitbucket app

Please contact <support@codium.ai> or visit [CodiumAI pricing page](https://www.codium.ai/pricing/) if you're interested in a hosted BitBucket app solution that provides full functionality including PR reviews and comment handling. It's based on the [bitbucket_app.py](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/git_providers/bitbucket_provider.py) implementation.

### Bitbucket Server and Data Center

Log in to your on-prem instance of Bitbucket with your service account username and password.
Navigate to `Manage account`, `HTTP Access tokens`, `Create Token`.
Generate the token and add it to .secret.toml under the `bitbucket_server` section

```toml
[bitbucket_server]
bearer_token = "<your key>"
```

#### Run it as CLI

Modify `configuration.toml`:

```toml
git_provider="bitbucket_server"
```

and pass the Pull request URL:
```shell
python cli.py --pr_url https://git.onpreminstanceofbitbucket.com/projects/PROJECT/repos/REPO/pull-requests/1 review
```

#### Run it as service

To run pr-agent as webhook, build the docker image:
```
docker build . -t codiumai/pr-agent:bitbucket_server_webhook --target bitbucket_server_webhook -f docker/Dockerfile
docker push codiumai/pr-agent:bitbucket_server_webhook # Push to your Docker repository
```

Navigate to `Projects` or `Repositories`, `Settings`, `Webhooks`, `Create Webhook`.
Fill in the name and URL, set Authentication to None, and select the Pull Request Opened checkbox to receive that event as a webhook.

The URL should end with `/webhook`, for example: https://domain.com/webhook

<img src="./pics/debugger.png" width="800"/>
@@ -1,4 +1,4 @@
# PR Compression Strategy
# Git Patch Logic
There are two scenarios:
1. The PR is small enough to fit in a single prompt (including system and user prompt)
2. The PR is too large to fit in a single prompt (including system and user prompt)
@@ -16,7 +16,7 @@ We prioritize the languages of the repo based on the following criteria:
## Small PR
In this case, we can fit the entire PR in a single prompt:
1. Exclude binary files and non code files (e.g. images, pdfs, etc)
2. We Expand the surrounding context of each patch to 3 lines above and below the patch
2. We Expand the surrounding context of each patch to 6 lines above and below the patch
## Large PR

### Motivation
@@ -25,7 +25,7 @@ We want to be able to pack as much information as possible in a single LMM promp

#### Compression strategy
#### PR compression strategy
We prioritize additions over deletions:
- Combine all deleted files into a single list (`deleted files`)
- File patches are a list of hunks, remove all hunks of type deletion-only from the hunks in the file patch
@@ -39,4 +39,4 @@ We use [tiktoken](https://github.com/openai/tiktoken) to tokenize the patches af
4. If we haven't reached the max token length, add the `deleted files` to the prompt until the prompt reaches the max token length (hard stop), skip the rest of the patches.

### Example
<kbd><img src=https://codium.ai/images/git_patch_logic.png width="768"></kbd>

README.md: 234 changed lines

@@ -2,152 +2,47 @@

<div align="center">

<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://codium.ai/images/pr_agent/logo-dark.png" width="330">
<source media="(prefers-color-scheme: light)" srcset="https://codium.ai/images/pr_agent/logo-light.png" width="330">
<img alt="logo">
</picture>
<br/>
<img src="./pics/logo-dark.png#gh-dark-mode-only" width="330"/>
<img src="./pics/logo-light.png#gh-light-mode-only" width="330"/><br/>
Making pull requests less painful with an AI agent
</div>

[](https://github.com/Codium-ai/pr-agent/blob/main/LICENSE)
[](https://discord.com/channels/1057273017547378788/1126104260430528613)
[](https://twitter.com/codiumai)
<a href="https://github.com/Codium-ai/pr-agent/commits/main">
<img alt="GitHub" src="https://img.shields.io/github/last-commit/Codium-ai/pr-agent/main?style=for-the-badge" height="20">
</a>
</div>

## Table of Contents
- [News and Updates](#news-and-updates)
- [Overview](#overview)
- [Example results](#example-results)
- [Try it now](#try-it-now)
- [Installation](#installation)
- [PR-Agent Pro 💎](#pr-agent-pro-)
- [How it works](#how-it-works)
- [Why use PR-Agent?](#why-use-pr-agent)

## News and Updates
### Jan 28, 2024
- 💎 Test - A new tool, [`/test component_name`](https://github.com/Codium-ai/pr-agent/blob/main/docs/TEST.md), was added to PR-Agent Pro. The tool will generate tests for a selected component, based on the PR code changes.
- 💎 Analyze - The [`/analyze`](https://github.com/Codium-ai/pr-agent/blob/main/docs/Analyze.md) tool was updated and simplified. It now presents a summary of the code components that were changed in the PR.
### Jan 21, 2024
- 💎 Custom suggestions - A new tool, `/custom_suggestions`, was added to PR-Agent Pro. The tool will propose only suggestions that follow specific guidelines defined by the user.
See [here](https://github.com/Codium-ai/pr-agent/blob/main/docs/CUSTOM_SUGGESTIONS.md) for more details.

### Jan 17, 2024
- 💎 Inline file summary - The `describe` tool has a new option `--pr_description.inline_file_summary`, which allows to add a summary of each file changes to the Diffview page. See [here](https://github.com/Codium-ai/pr-agent/blob/main/docs/DESCRIBE.md#inline-file-summary-)
- The `improve` tool can now present suggestions in a nice collapsible format, which significantly reduces the PR footprint. See [here](https://github.com/Codium-ai/pr-agent/blob/main/docs/IMPROVE.md#summarized-vs-commitable-code-suggestions) for more details.
- To accompany the improved interface of the `improve` tool, we change the [default automation settings](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L116) of our GitHub App to:
```
pr_commands = [
"/describe --pr_description.add_original_user_description=true --pr_description.keep_original_user_title=true",
"/review --pr_reviewer.num_code_suggestions=0",
"/improve --pr_code_suggestions.summarize=true",
]
```
Meaning that by default, for each PR the `describe`, `review`, and `improve` tools will be triggered automatically, and the `improve` tool will present the suggestions in a single comment.
You can of course overwrite these defaults by adding a `.pr_agent.toml` file to your repo. See [here](https://github.com/Codium-ai/pr-agent/blob/main/Usage.md#working-with-github-app).

### Jan 10, 2024
[LanceDB](https://lancedb.com/) is now supported as a locally hosted VectorDB for the `similar_issue` tool. See [here](./docs/SIMILAR_ISSUE.md) for more details.

## Overview
<div style="text-align:left;">

CodiumAI PR-Agent is an open-source tool to help efficiently review and handle pull requests. It automatically analyzes the pull request and can provide several types of commands:
CodiumAI `PR-Agent` is an open-source tool aiming to help developers review pull requests faster and more efficiently. It automatically analyzes the pull request and can provide several types of PR feedback:

| | | GitHub | Gitlab | Bitbucket |
|-------|------------------------------------------------|:------:|:------:|:---------:|
| TOOLS | Review | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | ⮑ Incremental | :white_check_mark: | | |
| | ⮑ [SOC2 Compliance](https://github.com/Codium-ai/pr-agent/blob/main/docs/REVIEW.md#soc2-ticket-compliance-) 💎 | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | Describe | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | ⮑ [Inline File Summary](https://github.com/Codium-ai/pr-agent/blob/main/docs/DESCRIBE.md#inline-file-summary-) 💎 | :white_check_mark: | | |
| | Improve | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | ⮑ Extended | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | Ask | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | [Custom Suggestions](https://github.com/Codium-ai/pr-agent/blob/main/docs/CUSTOM_SUGGESTIONS.md) 💎 | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | [Test](https://github.com/Codium-ai/pr-agent/blob/main/docs/TEST.md) 💎 | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | Reflect and Review | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | Update CHANGELOG.md | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | Find Similar Issue | :white_check_mark: | | |
| | [Add PR Documentation](https://github.com/Codium-ai/pr-agent/blob/main/docs/ADD_DOCUMENTATION.md) 💎 | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | [Custom Labels](https://github.com/Codium-ai/pr-agent/blob/main/docs/DESCRIBE.md#handle-custom-labels-from-the-repos-labels-page-gem) 💎 | :white_check_mark: | :white_check_mark: | |
| | [Analyze](https://github.com/Codium-ai/pr-agent/blob/main/docs/Analyze.md) 💎 | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | | | | |
| USAGE | CLI | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | App / webhook | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | Tagging bot | :white_check_mark: | | |
| | Actions | :white_check_mark: | | :white_check_mark: |
| | | | | |
| CORE | PR compression | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | Repo language prioritization | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | Adaptive and token-aware<br />file patch fitting | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | Multiple models support | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | [Static code analysis](https://github.com/Codium-ai/pr-agent/blob/main/docs/Analyze.md) 💎 | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | [Global configuration](https://github.com/Codium-ai/pr-agent/blob/main/Usage.md#global-configuration-file-) 💎 | :white_check_mark: | :white_check_mark: | :white_check_mark: |
- 💎 means this feature is available only in [PR-Agent Pro](https://www.codium.ai/pricing/)
- Support for additional git providers is described in [here](./docs/Full_environments.md)
___

‣ **Auto Description ([`/describe`](./docs/DESCRIBE.md))**: Automatically generating PR description - title, type, summary, code walkthrough and labels.
**Auto Description (/describe)**: Automatically generating [PR description](https://github.com/Codium-ai/pr-agent/pull/229#issue-1860711415) - title, type, summary, code walkthrough and labels.
\
‣ **Auto Review ([`/review`](./docs/REVIEW.md))**: Adjustable feedback about the PR main theme, type, relevant tests, security issues, score, and various suggestions for the PR content.
**Auto Review (/review)**: [Adjustable feedback](https://github.com/Codium-ai/pr-agent/pull/229#issuecomment-1695022908) about the PR main theme, type, relevant tests, security issues, score, and various suggestions for the PR content.
\
‣ **Question Answering ([`/ask ...`](./docs/ASK.md))**: Answering free-text questions about the PR.
**Question Answering (/ask ...)**: Answering [free-text questions](https://github.com/Codium-ai/pr-agent/pull/229#issuecomment-1695021332) about the PR.
\
‣ **Code Suggestions ([`/improve`](./docs/IMPROVE.md))**: Committable code suggestions for improving the PR.
**Code Suggestions (/improve)**: [Committable code suggestions](https://github.com/Codium-ai/pr-agent/pull/229#discussion_r1306919276) for improving the PR.
\
‣ **Update Changelog ([`/update_changelog`](./docs/UPDATE_CHANGELOG.md))**: Automatically updating the CHANGELOG.md file with the PR changes.
\
‣ **Find Similar Issue ([`/similar_issue`](./docs/SIMILAR_ISSUE.md))**: Automatically retrieves and presents similar issues.
\
‣ **Add Documentation 💎 ([`/add_docs`](./docs/ADD_DOCUMENTATION.md))**: Automatically adds documentation to methods/functions/classes that changed in the PR.
\
‣ **Generate Custom Labels 💎 ([`/generate_labels`](./docs/GENERATE_CUSTOM_LABELS.md))**: Automatically suggests custom labels based on the PR code changes.
\
‣ **Analyze 💎 ([`/analyze`](./docs/Analyze.md))**: Automatically analyzes the PR, and presents changes walkthrough for each component.
\
‣ **Custom Suggestions 💎 ([`/custom_suggestions`](./docs/CUSTOM_SUGGESTIONS.md))**: Automatically generates custom suggestions for improving the PR code, based on specific guidelines defined by the user.
\
‣ **Generate Tests 💎 ([`/test component_name`](./docs/TEST.md))**: Automatically generates unit tests for a selected component, based on the PR code changes.

See the [Installation Guide](./INSTALL.md) for instructions on installing and running the tool on different git platforms.

See the [Usage Guide](./Usage.md) for running the PR-Agent commands via different interfaces, including _CLI_, _online usage_, or by _automatically triggering_ them when a new PR is opened.

See the [Tools Guide](./docs/TOOLS_GUIDE.md) for a detailed description of the different tools (tools are run via the commands).
**Update Changelog (/update_changelog)**: Automatically updating the CHANGELOG.md file with the [PR changes](https://github.com/Codium-ai/pr-agent/pull/168#discussion_r1282077645).

## Example results
See the [usage guide](./Usage.md) for instructions how to run the different tools from [CLI](./Usage.md#working-from-a-local-repo-cli), or by [online usage](./Usage.md#online-usage).

<h3>Example results:</h3>
</div>
<h4><a href="https://github.com/Codium-ai/pr-agent/pull/530">/describe</a></h4>
<h4><a href="https://github.com/Codium-ai/pr-agent/pull/229#issuecomment-1687561986">/describe:</a></h4>
<div align="center">
<p float="center">
<img src="https://www.codium.ai/images/pr_agent/describe_new_short_main.png" width="800">
<img src="https://www.codium.ai/images/describe-2.gif" width="800">
</p>
</div>
<hr>
<h4><a href="https://github.com/Codium-ai/pr-agent/pull/472#discussion_r1435819374">/improve</a></h4>

<h4><a href="https://github.com/Codium-ai/pr-agent/pull/229#issuecomment-1695021901">/review:</a></h4>
<div align="center">
<p float="center">
<kbd>
<img src="https://www.codium.ai/images/pr_agent/improve_short_main.png" width="768">
</kbd>
</p>

</div>
<hr>
<h4><a href="https://github.com/Codium-ai/pr-agent/pull/530">/generate_labels</a></h4>
<div align="center">
<p float="center">
<kbd><img src="https://www.codium.ai/images/pr_agent/geneare_custom_labels_main_short.png" width="300"></kbd>
<img src="https://www.codium.ai/images/review-2.gif" width="800">
</p>
</div>

@@ -188,14 +83,46 @@ See the [Tools Guide](./docs/TOOLS_GUIDE.md) for a detailed description of the d
[//]: # (</div>)
<div align="left">

## Table of Contents
- [Overview](#overview)
- [Try it now](#try-it-now)
- [Installation](#installation)
- [Usage guide](./Usage.md)
- [How it works](#how-it-works)
- [Why use PR-Agent](#why-use-pr-agent)
- [Roadmap](#roadmap)
</div>
<hr>

## Overview
`PR-Agent` offers extensive pull request functionalities across various git providers:
| | | GitHub | Gitlab | Bitbucket | CodeCommit | Azure DevOps | Gerrit |
|-------|---------------------------------------------|:------:|:------:|:---------:|:----------:|:----------:|:----------:|
| TOOLS | Review | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | Ask | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | Auto-Description | :white_check_mark: | :white_check_mark: | | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | Improve Code | :white_check_mark: | :white_check_mark: | | :white_check_mark: | | :white_check_mark: |
| | ⮑ Extended | :white_check_mark: | :white_check_mark: | | :white_check_mark: | | :white_check_mark: |
| | Reflect and Review | :white_check_mark: | | | | :white_check_mark: | :white_check_mark: |
| | Update CHANGELOG.md | :white_check_mark: | | | | | |
| | | | | | | |
| USAGE | CLI | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | App / webhook | :white_check_mark: | :white_check_mark: | | | |
| | Tagging bot | :white_check_mark: | | | | |
| | Actions | :white_check_mark: | | | | |
| | Web server | | | | | | :white_check_mark: |
| | | | | | | |
| CORE | PR compression | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | Repo language prioritization | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | Adaptive and token-aware<br />file patch fitting | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | Multiple models support | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | Incremental PR Review | :white_check_mark: | | | | | |

Review the **[usage guide](./Usage.md)** section for detailed instructions how to use the different tools, select the relevant git provider (GitHub, Gitlab, Bitbucket,...), and adjust the configuration file to your needs.

## Try it now

Try the GPT-4 powered PR-Agent instantly on _your public GitHub repository_. Just mention `@CodiumAI-Agent` and add the desired command in any PR comment. The agent will generate a response based on your command.
You can try GPT-4 powered PR-Agent, on your public GitHub repository, instantly. Just mention `@CodiumAI-Agent` and add the desired command in any PR comment. The agent will generate a response based on your command.
For example, add a comment to any pull request with the following text:
```
@CodiumAI-Agent /review
@@ -206,12 +133,12 @@ and the agent will respond with a review of your PR

To set up your own PR-Agent, see the [Installation](#installation) section below.
Note that when you set your own PR-Agent or use CodiumAI hosted PR-Agent, there is no need to mention `@CodiumAI-Agent ...`. Instead, directly start with the command, e.g., `/ask ...`.

---

## Installation
To use your own version of PR-Agent, you first need to acquire two tokens:

To get started with PR-Agent quickly, you first need to acquire two tokens:

1. An OpenAI key from [here](https://platform.openai.com/), with access to GPT-4.
2. A GitHub personal access token (classic) with the repo scope.
@@ -228,57 +155,48 @@ There are several ways to use PR-Agent:
- [Method 6: Deploy as a Lambda Function](INSTALL.md#method-6---deploy-as-a-lambda-function)
- [Method 7: AWS CodeCommit](INSTALL.md#method-7---aws-codecommit-setup)
- [Method 8: Run a GitLab webhook server](INSTALL.md#method-8---run-a-gitlab-webhook-server)
- [Method 9: Run as a Bitbucket Pipeline](INSTALL.md#method-9-run-as-a-bitbucket-pipeline)

## PR-Agent Pro 💎
[PR-Agent Pro](https://www.codium.ai/pricing/) is a hosted version of PR-Agent, provided by CodiumAI. It is available for a monthly fee, and provides the following benefits:
1. **Fully managed** - We take care of everything for you - hosting, models, regular updates, and more. Installation is as simple as signing up and adding the PR-Agent app to your GitHub\BitBucket repo.
2. **Improved privacy** - No data will be stored or used to train models. PR-Agent Pro will employ zero data retention, and will use an OpenAI account with zero data retention.
3. **Improved support** - PR-Agent Pro users will receive priority support, and will be able to request new features and capabilities.
4. **Extra features** - In addition to the benefits listed above, PR-Agent Pro will emphasize more customization, and the usage of static code analysis, in addition to LLM logic, to improve results. It has the following additional features:
- [**SOC2 compliance check**](https://github.com/Codium-ai/pr-agent/blob/main/docs/REVIEW.md#soc2-ticket-compliance-)
- [**PR documentation**](https://github.com/Codium-ai/pr-agent/blob/main/docs/ADD_DOCUMENTATION.md)
- [**Custom labels**](https://github.com/Codium-ai/pr-agent/blob/main/docs/DESCRIBE.md#handle-custom-labels-from-the-repos-labels-page-gem)
- [**Global configuration**](https://github.com/Codium-ai/pr-agent/blob/main/Usage.md#global-configuration-file-)
- [**Analyze PR components**](https://github.com/Codium-ai/pr-agent/blob/main/docs/Analyze.md)
- **Custom Code Suggestions** [WIP]
- **Chat on Specific Code Lines** [WIP]

## How it works

The following diagram illustrates PR-Agent tools and their flow:




Check out the [PR Compression strategy](./PR_COMPRESSION.md) page for more details on how we convert a code diff to a manageable LLM prompt

## Why use PR-Agent?

A reasonable question that can be asked is: `"Why use PR-Agent? What makes it stand out from existing tools?"`
A reasonable question that can be asked is: `"Why use PR-Agent? What make it stand out from existing tools?"`

Here are some advantages of PR-Agent:

- We emphasize **real-life practical usage**. Each tool (review, improve, ask, ...) has a single GPT-4 call, no more. We feel that this is critical for realistic team usage - obtaining an answer quickly (~30 seconds) and affordably.
- Our [PR Compression strategy](./PR_COMPRESSION.md) is a core ability that enables to effectively tackle both short and long PRs.
- Our JSON prompting strategy enables to have **modular, customizable tools**. For example, the '/review' tool categories can be controlled via the [configuration](pr_agent/settings/configuration.toml) file. Adding additional categories is easy and accessible.
- We support **multiple git providers** (GitHub, Gitlab, Bitbucket), **multiple ways** to use the tool (CLI, GitHub Action, GitHub App, Docker, ...), and **multiple models** (GPT-4, GPT-3.5, Anthropic, Cohere, Llama2).
- We support **multiple git providers** (GitHub, Gitlab, Bitbucket, CodeCommit), **multiple ways** to use the tool (CLI, GitHub Action, GitHub App, Docker, ...), and **multiple models** (GPT-4, GPT-3.5, Anthropic, Cohere, Llama2).
- We are open-source, and welcome contributions from the community.

## Data privacy
## Roadmap

If you host PR-Agent with your OpenAI API key, it is between you and OpenAI. You can read their API data privacy policy here:
https://openai.com/enterprise-privacy
- [x] Support additional models, as a replacement for OpenAI (see [here](https://github.com/Codium-ai/pr-agent/pull/172))
- [x] Develop additional logic for handling large PRs (see [here](https://github.com/Codium-ai/pr-agent/pull/229))
- [ ] Add additional context to the prompt. For example, repo (or relevant files) summarization, with tools such as [ctags](https://github.com/universal-ctags/ctags)
- [ ] PR-Agent for issues, and just for pull requests
- [ ] Adding more tools. Possible directions:
- [x] PR description
- [x] Inline code suggestions
- [x] Reflect and review
- [x] Rank the PR (see [here](https://github.com/Codium-ai/pr-agent/pull/89))
- [ ] Enforcing CONTRIBUTING.md guidelines
- [ ] Performance (are there any performance issues)
- [ ] Documentation (is the PR properly documented)
- [ ] ...

When using PR-Agent Pro 💎, hosted by CodiumAI, we will not store any of your data, nor will we use it for training.
You will also benefit from an OpenAI account with zero data retention.
## Similar Projects

## Links

[](https://discord.gg/kG35uSHDBc)

- Discord community: https://discord.gg/kG35uSHDBc
- CodiumAI site: https://codium.ai
- Blog: https://www.codium.ai/blog/
- Troubleshooting: https://www.codium.ai/blog/technical-faq-and-troubleshooting/
- Support: support@codium.ai
- [CodiumAI - Meaningful tests for busy devs](https://github.com/Codium-ai/codiumai-vscode-release) (although various capabilities are much more advanced in the CodiumAI IDE plugins)
- [Aider - GPT powered coding in your terminal](https://github.com/paul-gauthier/aider)
- [openai-pr-reviewer](https://github.com/coderabbitai/openai-pr-reviewer)
- [CodeReview BOT](https://github.com/anc95/ChatGPT-CodeReview)
- [AI-Maintainer](https://github.com/merwanehamadi/AI-Maintainer)
RELEASE_NOTES.md: 103 changed lines

@@ -1,103 +0,0 @@
## [Version 0.11] - 2023-12-07
- codiumai/pr-agent:0.11
- codiumai/pr-agent:0.11-github_app
- codiumai/pr-agent:0.11-bitbucket-app
- codiumai/pr-agent:0.11-gitlab_webhook
- codiumai/pr-agent:0.11-github_polling
- codiumai/pr-agent:0.11-github_action

### Added::Algo
- New section in `/describe` tool - [PR changes walkthrough](https://github.com/Codium-ai/pr-agent/pull/509)
- Improving PR Agent [prompts](https://github.com/Codium-ai/pr-agent/pull/501)
- Persistent tools (`/review`, `/describe`) now send an [update message](https://github.com/Codium-ai/pr-agent/pull/499) after finishing
- Add Amazon Bedrock [support](https://github.com/Codium-ai/pr-agent/pull/483)

### Fixed
- Update [dependencies](https://github.com/Codium-ai/pr-agent/pull/503) in requirements.txt for Python 3.12

## [Version 0.10] - 2023-11-15
- codiumai/pr-agent:0.10
- codiumai/pr-agent:0.10-github_app
- codiumai/pr-agent:0.10-bitbucket-app
- codiumai/pr-agent:0.10-gitlab_webhook
- codiumai/pr-agent:0.10-github_polling
- codiumai/pr-agent:0.10-github_action

### Added::Algo
- Review tool now works with [persistent comments](https://github.com/Codium-ai/pr-agent/pull/451) by default
- Bitbucket now publishes review suggestions with [code links](https://github.com/Codium-ai/pr-agent/pull/428)
- Enabling to limit [max number of tokens](https://github.com/Codium-ai/pr-agent/pull/437/files)
- Support ['gpt-4-1106-preview'](https://github.com/Codium-ai/pr-agent/pull/437/files) model
- Support for Google's [Vertex AI](https://github.com/Codium-ai/pr-agent/pull/436)
- Implementing [thresholds](https://github.com/Codium-ai/pr-agent/pull/423) for incremental PR reviews
- Decoupled custom labels from [PR type](https://github.com/Codium-ai/pr-agent/pull/431)

### Fixed
- Fixed bug in [parsing quotes](https://github.com/Codium-ai/pr-agent/pull/446) in CLI
- Preserve [user-added labels](https://github.com/Codium-ai/pr-agent/pull/433) in pull requests
- Bug fixes in GitLab and BitBucket

## [Version 0.9] - 2023-10-29
- codiumai/pr-agent:0.9
- codiumai/pr-agent:0.9-github_app
- codiumai/pr-agent:0.9-bitbucket-app
- codiumai/pr-agent:0.9-gitlab_webhook
- codiumai/pr-agent:0.9-github_polling
- codiumai/pr-agent:0.9-github_action

### Added::Algo
- New tool - [generate_labels](https://github.com/Codium-ai/pr-agent/blob/main/docs/GENERATE_CUSTOM_LABELS.md)
- New ability to use [customize labels](https://github.com/Codium-ai/pr-agent/blob/main/docs/GENERATE_CUSTOM_LABELS.md#how-to-enable-custom-labels) on the `review` and `describe` tools.
- New tool - [add_docs](https://github.com/Codium-ai/pr-agent/blob/main/docs/ADD_DOCUMENTATION.md)
- GitHub Action: Can now use a `.pr_agent.toml` file to control configuration parameters (see [Usage Guide](./Usage.md#working-with-github-action)).
- GitHub App: Added ability to trigger tools on [push events](https://github.com/Codium-ai/pr-agent/blob/main/Usage.md#github-app-automatic-tools-for-new-code-pr-push)
- Support custom domain URLs for Azure devops integration (see [link](https://github.com/Codium-ai/pr-agent/pull/381)).
- PR Description default mode is now in [bullet points](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L35).

### Added::Documentation
Significant documentation updates (see [Installation Guide](https://github.com/Codium-ai/pr-agent/blob/main/INSTALL.md), [Usage Guide](https://github.com/Codium-ai/pr-agent/blob/main/Usage.md), and [Tools Guide](https://github.com/Codium-ai/pr-agent/blob/main/docs/TOOLS_GUIDE.md))

### Fixed
- Fixed support for BitBucket pipeline (see [link](https://github.com/Codium-ai/pr-agent/pull/386))
- Fixed a bug in `review -i` tool
- Added blacklist for specific file extensions in `add_docs` tool (see [link](https://github.com/Codium-ai/pr-agent/pull/385/))

## [Version 0.8] - 2023-09-27
- codiumai/pr-agent:0.8
- codiumai/pr-agent:0.8-github_app
- codiumai/pr-agent:0.8-bitbucket-app
- codiumai/pr-agent:0.8-gitlab_webhook
- codiumai/pr-agent:0.8-github_polling
- codiumai/pr-agent:0.8-github_action

### Added::Algo
- GitHub Action: Can control which tools will run automatically when a new PR is created. (see usage guide: https://github.com/Codium-ai/pr-agent/blob/main/Usage.md#working-with-github-action)
- Code suggestion tool: Will try to avoid an 'add comments' suggestion (see https://github.com/Codium-ai/pr-agent/pull/327)

### Fixed
- Gitlab: Fixed a bug of improper usage of pr_id

## [Version 0.7] - 2023-09-20

### Docker Tags
- codiumai/pr-agent:0.7
- codiumai/pr-agent:0.7-github_app
- codiumai/pr-agent:0.7-bitbucket-app
- codiumai/pr-agent:0.7-gitlab_webhook
- codiumai/pr-agent:0.7-github_polling
- codiumai/pr-agent:0.7-github_action

### Added::Algo
- New tool /similar_issue - Currently on GitHub app and CLI: indexes the issues in the repo, find the most similar issues to the target issue.
- Describe markers: Empower the /describe tool with a templating capability (see more details in https://github.com/Codium-ai/pr-agent/pull/273).
- New feature in the /review tool - added an estimated effort estimation to the review (https://github.com/Codium-ai/pr-agent/pull/306).

### Added::Infrastructure
- Implementation of a GitLab webhook.
- Implementation of a BitBucket app.

### Fixed
- Protection against no code suggestions generated.
- Resilience to repositories where the languages cannot be automatically detected.
Usage.md: 379 changed lines

@@ -1,92 +1,61 @@
## Usage Guide
## Usage guide

### Table of Contents
- [Introduction](#introduction)
- [Local Repo (CLI)](#working-from-a-local-repo-cli)
- [Online Usage](#online-usage)
- [GitHub App](#working-with-github-app)
- [GitHub Action](#working-with-github-action)
- [BitBucket App](#working-with-bitbucket-self-hosted-app)
- [Additional Configurations Walkthrough](#appendix---additional-configurations-walkthrough)
- [Working from a local repo (CLI)](#working-from-a-local-repo-cli)
- [Online usage](#online-usage)
- [Working with GitHub App](#working-with-github-app)
- [Working with GitHub Action](#working-with-github-action)
- [Appendix - additional configurations walkthrough](#appendix---additional-configurations-walkthrough)

### Introduction

After [installation](/INSTALL.md), there are three basic ways to invoke CodiumAI PR-Agent:
There are 3 basic ways to invoke CodiumAI PR-Agent:
1. Locally running a CLI command
2. Online usage - by [commenting](https://github.com/Codium-ai/pr-agent/pull/229#issuecomment-1695021901) on a PR
3. Enabling PR-Agent tools to run automatically when a new PR is opened

See the [installation guide](/INSTALL.md) for instructions on how to setup your own PR-Agent.

Specifically, CLI commands can be issued by invoking a pre-built [docker image](/INSTALL.md#running-from-source), or by invoking a [locally cloned repo](INSTALL.md#method-2-run-from-source).

For online usage, you will need to setup either a [GitHub App](INSTALL.md#method-5-run-as-a-github-app), or a [GitHub Action](INSTALL.md#method-3-run-as-a-github-action).
GitHub App and GitHub Action also enable running PR-Agent tools automatically when a new PR is opened.
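As a quick orientation, the same `review` command can be issued either way; both forms below are taken from the installation guide and use placeholder values:

```
# pre-built Docker image
docker run --rm -it -e OPENAI.KEY=<your key> -e GITHUB.USER_TOKEN=<your token> codiumai/pr-agent:latest --pr_url <pr_url> review

# locally cloned repo
python3 -m pr_agent.cli --pr_url <pr_url> review
```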
|
||||
|
||||
|
||||
#### The configuration file
|
||||
- The different tools and sub-tools used by CodiumAI PR-Agent are adjustable via the **[configuration file](pr_agent/settings/configuration.toml)**.
|
||||
The different tools and sub-tools used by CodiumAI PR-Agent are adjustable via the **[configuration file](pr_agent/settings/configuration.toml)**.
|
||||
In addition to general configuration options, each tool has its own configurations. For example, the `review` tool will use parameters from the [pr_reviewer](/pr_agent/settings/configuration.toml#L16) section in the configuration file.
|
||||
|
||||
- The [Tools Guide](./docs/TOOLS_GUIDE.md) provides a detailed description of the different tools and their configurations.
|
||||
|
||||
|
||||
- By uploading a local `.pr_agent.toml` file to the root of the repo's main branch, you can edit and customize any configuration parameter. Note that you need to upload `.pr_agent.toml` prior to creating a PR, in order for the configuration to take effect.
|
||||
|
||||
For example, if you set in `.pr_agent.toml`:
|
||||
|
||||
```
|
||||
[pr_reviewer]
|
||||
extra_instructions="""\
|
||||
- instruction a
|
||||
- instruction b
|
||||
...
|
||||
"""
|
||||
```
|
||||
|
||||
Then you can give a list of extra instructions to the `review` tool.
|
||||
|
||||
|
||||
#### Global configuration file 💎
|
||||
|
||||
If you create a repo called `pr-agent-settings` in your **organization**, it's configuration file `.pr_agent.toml` will be used as a global configuration file for any other repo that belongs to the same organization.
|
||||
Parameters from a local `.pr_agent.toml` file, in a specific repo, will override the global configuration parameters.
|
||||
|
||||
For example, in the GitHub organization `Codium-ai`:
|
||||
- The repo [`https://github.com/Codium-ai/pr-agent-settings`](https://github.com/Codium-ai/pr-agent-settings/blob/main/.pr_agent.toml) contains a `.pr_agent.toml` file that serves as a global configuration file for all the repos in the GitHub organization `Codium-ai`.
|
||||
- The repo [`https://github.com/Codium-ai/pr-agent`](https://github.com/Codium-ai/pr-agent/blob/main/.pr_agent.toml) inherits the global configuration file from `pr-agent-settings`.
|
||||
|
||||
#### Ignoring files from analysis
|
||||
In some cases, you may want to exclude specific files or directories from the analysis performed by CodiumAI PR-Agent. This can be useful, for example, when you have files that are generated automatically or files that shouldn't be reviewed, like vendored code.
|
||||
|
||||
To ignore files or directories, edit the **[ignore.toml](/pr_agent/settings/ignore.toml)** configuration file. This setting also exposes the following environment variables:
|
||||
|
||||
- `IGNORE.GLOB`
|
||||
- `IGNORE.REGEX`
|
||||
|
||||
For example, to ignore python files in a PR with online usage, comment on a PR:
|
||||
`/review --ignore.glob=['*.py']`
|
||||
|
||||
To ignore python files in all PRs, set in a configuration file:
|
||||
```
|
||||
[ignore]
|
||||
glob = ['*.py']
|
||||
```
|
||||
|
||||
#### git provider
|
||||
**git provider:**
|
||||
The [git_provider](pr_agent/settings/configuration.toml#L4) field in the configuration file determines the GIT provider that will be used by PR-Agent. Currently, the following providers are supported:
|
||||
`
|
||||
"github", "gitlab", "azure", "codecommit", "local", "gerrit"
|
||||
"github", "gitlab", "azure", "codecommit", "local"
|
||||
`
|
||||
[//]: # (** online usage:**)

[//]: # (Options that are available in the configuration file can be specified at run time when calling actions. Two examples:)

[//]: # (```)

[//]: # (- /review --pr_reviewer.extra_instructions="focus on the file: ...")

[//]: # (- /describe --pr_description.add_original_user_description=false -pr_description.extra_instructions="make sure to mention: ...")

[//]: # (```)
### Working from a local repo (CLI)

When running from your local repo (CLI), your local configuration file will be used.
Examples of invoking the different tools via the CLI:

- **Review**: `python -m pr_agent.cli --pr_url=<pr_url> review`
- **Describe**: `python -m pr_agent.cli --pr_url=<pr_url> describe`
- **Improve**: `python -m pr_agent.cli --pr_url=<pr_url> improve`
- **Ask**: `python -m pr_agent.cli --pr_url=<pr_url> ask "Write me a poem about this PR"`
- **Reflect**: `python -m pr_agent.cli --pr_url=<pr_url> reflect`
- **Update Changelog**: `python -m pr_agent.cli --pr_url=<pr_url> update_changelog`

`<pr_url>` is the URL of the relevant PR (for example: https://github.com/Codium-ai/pr-agent/pull/50).
(1) In addition to editing your local configuration file, you can also change any configuration value by adding it to the command line:
```
python -m pr_agent.cli --pr_url=<pr_url> /review --pr_reviewer.extra_instructions="focus on the file: ..."
```

(2) You can print results locally, without publishing them, by setting in `configuration.toml`:
```
publish_output=false
verbosity_level=2
```
This is useful for debugging or experimenting with the different tools.
### Online usage

To edit a specific configuration value, just add `--config_path=<value>` to any command.
For example, if you want to edit the `review` tool configurations, you can run:
```
/review --pr_reviewer.extra_instructions="..." --pr_reviewer.require_score_review=false
```
Any configuration value in the [configuration file](pr_agent/settings/configuration.toml) can be similarly edited. Comment `/config` to see the list of available configurations.
### Working with GitHub App
When running PR-Agent from the GitHub App, the default [configuration file](pr_agent/settings/configuration.toml) from a pre-built docker image will be initially loaded.

By uploading a local `.pr_agent.toml` file to the root of the repo's main branch, you can edit and customize any configuration parameter. Note that you need to upload `.pr_agent.toml` prior to creating a PR, in order for the configuration to take effect.

For example, if you set in `.pr_agent.toml`:

```
[pr_reviewer]
num_code_suggestions=1
```

Then you will override the default number of code suggestions, setting it to 1.

#### GitHub app automatic tools
The [github_app](pr_agent/settings/configuration.toml#L108) section defines GitHub app specific configurations.

##### GitHub app automatic tools for PR actions
The configuration parameter `pr_commands` defines the list of tools that will be **run automatically** when a new PR is opened:
```
[github_app]
pr_commands = [
    "/describe --pr_description.add_original_user_description=true --pr_description.keep_original_user_title=true",
    "/review --pr_reviewer.num_code_suggestions=0",
    "/improve",
]
```
This means that when a new PR is opened/reopened or marked as ready for review, PR-Agent will run the `describe`, `review` and `improve` tools.
For the `describe` tool, for example, the `add_original_user_description` and `keep_original_user_title` parameters will be set to true.

You can override the default tool parameters by uploading a local configuration file called `.pr_agent.toml` to the root of your repo.
For example, if your local `.pr_agent.toml` file contains:
```
[pr_description]
keep_original_user_title = false
```
When a new PR is opened, PR-Agent will run the `describe` tool with the above parameters.

To cancel the automatic run of all the tools, set:
```
[github_app]
handle_pr_actions = []
```
##### GitHub app automatic tools for push actions (commits to an open PR)
In addition to running automatic tools when a PR is opened, the GitHub app can also respond to new code that is pushed to an open PR.

The configuration toggle `handle_push_trigger` can be used to enable this feature.
The configuration parameter `push_commands` defines the list of tools that will be **run automatically** when new code is pushed to the PR.
```
[github_app]
handle_push_trigger = true
push_commands = [
    "/describe --pr_description.add_original_user_description=true --pr_description.keep_original_user_title=true",
    "/review -i --pr_reviewer.remove_previous_review_comment=true",
]
```
This means that when new code is pushed to the PR, PR-Agent will run the `describe` and incremental `review` tools.
For the `describe` tool, the `add_original_user_description` and `keep_original_user_title` parameters will be set to true.
For the `review` tool, it will run in incremental mode, and the `remove_previous_review_comment` parameter will be set to true.

Much like the configurations for `pr_commands`, you can override the default tool parameters by uploading a local configuration file to the root of your repo.
Note that a local `.pr_agent.toml` file enables you to edit and customize the default parameters of any tool, not just the ones that are run automatically.

#### Editing the prompts
The prompts for the various PR-Agent tools are defined in the `pr_agent/settings` folder.

In practice, the prompts are loaded and stored as a standard setting object.
Hence, editing them is similar to editing any other configuration value - just place the relevant key in a `.pr_agent.toml` file, and override the default value.
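For example, a minimal sketch of overriding the `describe` prompt from `.pr_agent.toml` (the section name `[pr_description_prompt]` and its `system`/`user` keys are assumptions based on the prompt files under `pr_agent/settings`; check the relevant `*_prompts.toml` file for the exact key names):

```
[pr_description_prompt]
system="""
...
"""
user="""
...
"""
```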
Note that the new prompt will need to generate an output compatible with the relevant [post-process function](./pr_agent/tools/pr_description.py#L137).
### Working with GitHub Action
`GitHub Action` is a different way to trigger PR-Agent tools, and it uses a different configuration mechanism than `GitHub App`.
You can configure settings for `GitHub Action` by adding environment variables under the `env` section in the `.github/workflows/pr_agent.yml` file.
Specifically, start by setting the following environment variables:
```yaml
env:
  OPENAI_KEY: ${{ secrets.OPENAI_KEY }} # Make sure to add your OpenAI key to your repo secrets
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Make sure to add your GitHub token to your repo secrets
  github_action_config.auto_review: "true" # enable\disable auto review
  github_action_config.auto_describe: "true" # enable\disable auto describe
  github_action_config.auto_improve: "true" # enable\disable auto improve
```
`github_action_config.auto_review`, `github_action_config.auto_describe` and `github_action_config.auto_improve` are used to enable/disable the automatic tools that run when a new PR is opened.
If not set, the default configuration is for all three tools to run automatically when a new PR is opened.

Note that you can pass additional configuration parameters by adding environment variables to `.github/workflows/pr_agent.yml`, or by using a `.pr_agent.toml` file in the root of your repo, similar to the GitHub App usage.

For example, you can set an environment variable `pr_description.add_original_user_description=false`, or add a `.pr_agent.toml` file with the following content:
```
[pr_description]
add_original_user_description = false
```
### Working with BitBucket Self-Hosted App
Similar to the GitHub App, when running PR-Agent from the BitBucket App, the default [configuration file](pr_agent/settings/configuration.toml) from a pre-built docker image will be initially loaded.

By uploading a local `.pr_agent.toml` file to the root of the repo's main branch, you can edit and customize any configuration parameter. Note that you need to upload `.pr_agent.toml` prior to creating a PR, in order for the configuration to take effect.

For example, if your local `.pr_agent.toml` file contains:
```
[pr_reviewer]
inline_code_comments = true
```

Each time you invoke the `/review` tool, it will use inline code comments.

#### BitBucket Self-Hosted App automatic tools
You can configure in your local `.pr_agent.toml` file which tools will **run automatically** when a new PR is opened.

Specifically, set the following values:
```
[bitbucket_app]
auto_review = true # set as config var in .pr_agent.toml
auto_describe = true # set as config var in .pr_agent.toml
auto_improve = true # set as config var in .pr_agent.toml
```

`bitbucket_app.auto_review`, `bitbucket_app.auto_describe` and `bitbucket_app.auto_improve` are used to enable/disable the automatic tools.
If not set, the default option is that only the `review` tool will run automatically when a new PR is opened.

Note that due to limitations of the Bitbucket platform, the `auto_describe` tool will be able to publish a PR description only as a comment.
In addition, some subsections, like `PR changes walkthrough`, will not appear, since they require the use of collapsible sections, which are not supported by Bitbucket.

TBD
### Appendix - additional configurations walkthrough
#### Extra instructions
All PR-Agent tools have a parameter called `extra_instructions`, which enables adding free-text extra instructions. Example usage:
```
/update_changelog --pr_update_changelog.extra_instructions="Make sure to update also the version ..."
```
#### Working with large PRs

The default mode of CodiumAI PR-Agent is to make a single call per tool, using GPT-4, which has a token limit of 8000 tokens.
This mode provides a very good speed-quality-cost tradeoff, and can handle most PRs successfully.
When the PR is above the token limit, it employs a [PR Compression strategy](./PR_COMPRESSION.md).

However, for very large PRs, or in case you want to emphasize quality over speed and cost, there are two possible solutions:
1) [Use a model](#changing-a-model) with a larger context, like GPT-32K or claude-100K. This solution is applicable to all the tools.
2) For the `/improve` tool, there is an ['extended' mode](./docs/IMPROVE.md) (`/improve --extended`),
which divides the PR into chunks and processes each chunk separately. With this mode, regardless of the model, no compression will be done (but for large PRs, multiple model calls may occur).
#### Changing a model

See [here](pr_agent/algo/__init__.py) for the list of available models.
To use a different model than the default (GPT-4), you need to edit the [configuration file](pr_agent/settings/configuration.toml#L2).
For models and environments not from OpenAI, you might need to provide additional keys and other parameters. See below for instructions.

##### Azure
To use Azure, set the following in your `.secrets.toml` (working from CLI), or in the GitHub `Settings > Secrets and variables` (working from GitHub App or GitHub Action):
```
api_key = "" # your azure api key
api_type = "azure"
api_version = '2023-05-15' # Check Azure documentation for the current API version
api_base = "" # The base URL for your Azure OpenAI resource. e.g. "https://<your resource name>.openai.azure.com"
openai.deployment_id = "" # The deployment name you chose when you deployed the engine
```

and set in your configuration file:
```
[config]
model="" # the OpenAI model you've deployed on Azure (e.g. gpt-3.5-turbo)
```
##### Huggingface

**Local**
You can run Huggingface models locally through either [VLLM](https://docs.litellm.ai/docs/providers/vllm) or [Ollama](https://docs.litellm.ai/docs/providers/ollama).

E.g. to use a new Huggingface model locally via Ollama, set:
```
[__init__.py]
MAX_TOKENS = {
    "model-name-on-ollama": <max_tokens>
}
e.g.
MAX_TOKENS={
    ...,
    "ollama/llama2": 4096
}


[config] # in configuration.toml
model = "ollama/llama2"

[ollama] # in .secrets.toml
api_base = ... # the base url for your local inference endpoint
# e.g. if running Ollama locally, you may use:
api_base = "http://localhost:11434/"
```

**Inference Endpoints**

To use a new model with Huggingface Inference Endpoints, for example, set:
```
[__init__.py]
MAX_TOKENS = {
    "model-name-on-huggingface": <max_tokens>
}
e.g.
MAX_TOKENS={
    ...,
    "meta-llama/Llama-2-7b-chat-hf": 4096
}

[config] # in configuration.toml
model = "huggingface/meta-llama/Llama-2-7b-chat-hf"

[huggingface] # in .secrets.toml
key = ... # your huggingface api key
api_base = ... # the base url for your huggingface inference endpoint
```
##### Replicate

To use the Llama2 model with Replicate, for example, set:
```
[config] # in configuration.toml
model = "replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1"
[replicate] # in .secrets.toml
key = ...
```
(you can obtain a Llama2 key from [here](https://replicate.com/replicate/llama-2-70b-chat/api))

Also review the [AiHandler](pr_agent/algo/ai_handler.py) file for instructions on how to set keys for other models.
##### Vertex AI

To use Google's Vertex AI platform and its associated models (chat-bison/codechat-bison), set:

```
[config] # in configuration.toml
model = "vertex_ai/codechat-bison"
fallback_models="vertex_ai/codechat-bison"

[vertexai] # in .secrets.toml
vertex_project = "my-google-cloud-project"
vertex_location = ""
```

Your [application default credentials](https://cloud.google.com/docs/authentication/application-default-credentials) will be used for authentication, so there is no need to set explicit credentials in most environments.

If you do want to set explicit credentials, you can use the `GOOGLE_APPLICATION_CREDENTIALS` environment variable set to a path to a JSON credentials file.
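For example, when running from the CLI, you can point PR-Agent at an explicit service-account key (the path is illustrative):

```
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
```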
##### Amazon Bedrock

To use Amazon Bedrock and its foundational models, add the below configuration:

```
[config] # in configuration.toml
model = "anthropic.claude-v2"
fallback_models="anthropic.claude-instant-v1"

[aws] # in .secrets.toml
bedrock_region = "us-east-1"
```

Note that you have to add access to the foundational models before using them. Please refer to [this document](https://docs.aws.amazon.com/bedrock/latest/userguide/setting-up.html) for more details.

The AWS session is automatically authenticated from your environment, but you can also explicitly set the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables.
#### Patch Extra Lines
By default, around any change in your PR, the git patch provides 3 lines of context above and below the change.
```
@@ -12,5 +12,5 @@ def func1():
 code line that already existed in the file...
 code line that already existed in the file...
 code line that already existed in the file....
-code line that was removed in the PR
+new code line added in the PR
 code line that already existed in the file...
 code line that already existed in the file...
 code line that already existed in the file...
```

For the `review`, `describe`, `ask` and `add_docs` tools, if the token budget allows, PR-Agent tries to increase the number of lines of context, via the parameter:
```
[config]
patch_extra_lines=3
```

Increasing this number provides more context to the model, but will also increase the token budget.
If the PR is too large (see the [PR Compression strategy](./PR_COMPRESSION.md)), PR-Agent automatically sets this number to 0, and uses the original git patch.
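For example, to give the model a bit more surrounding context when the token budget allows (the value 5 is illustrative):

```
[config]
patch_extra_lines=5
```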
#### Azure DevOps provider
To use the Azure DevOps provider, use the following settings in `configuration.toml`:
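A minimal sketch of the provider switch, using the `git_provider` field documented above (any Azure DevOps credentials belong in `.secrets.toml` and are not shown here):

```
[config]
git_provider="azure"
```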
@@ -14,10 +14,6 @@ FROM base as bitbucket_app
ADD pr_agent pr_agent
CMD ["python", "pr_agent/servers/bitbucket_app.py"]

FROM base as bitbucket_server_webhook
ADD pr_agent pr_agent
CMD ["python", "pr_agent/servers/bitbucket_server_webhook.py"]

FROM base as github_polling
ADD pr_agent pr_agent
CMD ["python", "pr_agent/servers/github_polling.py"]
@@ -1,24 +0,0 @@
# Add Documentation Tool 💎
The `add_docs` tool scans the PR code changes, and automatically suggests documentation for any code components that changed in the PR (functions, classes, etc.).

It can be invoked manually by commenting on any PR:
```
/add_docs
```
For example:

<kbd><img src=https://codium.ai/images/pr_agent/docs_command.png width="768"></kbd>
___
<kbd><img src=https://codium.ai/images/pr_agent/docs_components.png width="768"></kbd>
___
<kbd><img src=https://codium.ai/images/pr_agent/docs_single_component.png width="768"></kbd>

### Configuration options
- `docs_style`: The exact style of the documentation (for Python docstrings). You can choose between: `google`, `numpy`, `sphinx`, `restructuredtext`, `plain`. Default is `sphinx`.
- `extra_instructions`: Optional extra instructions to the tool. For example: "focus on the changes in the file X. Ignore change in ...".

Notes
- Languages that are currently fully supported: Python, Java, C++, JavaScript, TypeScript.
- For languages that are not fully supported, the tool will suggest documentation only for new components in the PR.
- A previous version of the tool, which offered support only for new components, was deprecated.
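For example, to request Google-style docstrings for a single run (assuming the tool's configuration section is named `pr_add_docs`, following the naming pattern of the other tools):

```
/add_docs --pr_add_docs.docs_style="google"
```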
15 docs/ASK.md
@@ -1,15 +0,0 @@
# ASK Tool

The `ask` tool answers questions about the PR, based on the PR code changes. Make sure to be specific and clear in your questions.
It can be invoked manually by commenting on any PR:
```
/ask "..."
```
For example:
___
<kbd><img src=https://codium.ai/images/pr_agent/ask_comment.png width="768"></kbd>
___
<kbd><img src=https://codium.ai/images/pr_agent/ask.png width="768"></kbd>
___

Note that the tool does not have "memory" of previous questions, and answers each question independently.
@@ -1,21 +0,0 @@
# Analyze Tool 💎
The `analyze` tool combines static code analysis with LLM capabilities to provide a comprehensive analysis of the PR code changes.

The tool scans the PR code changes, finds the code components (methods, functions, classes) that changed, and summarizes the changes in each component.

It can be invoked manually by commenting on any PR:
```
/analyze
```

An example [result](https://github.com/Codium-ai/pr-agent/pull/546#issuecomment-1868524805):

<kbd><img src=https://codium.ai/images/pr_agent/analyze_1.png width="768"></kbd>
___
<kbd><img src=https://codium.ai/images/pr_agent/analyze_2.png width="768"></kbd>
___
<kbd><img src=https://codium.ai/images/pr_agent/analyze_3.png width="768"></kbd>

Notes
- Languages that are currently supported: Python, Java, C++, JavaScript, TypeScript.
@@ -1,65 +0,0 @@
|
||||
# Custom Suggestions Tool 💎
|
||||
|
||||
## Table of Contents
|
||||
- [Overview](#overview)
|
||||
- [Example usage](#example-usage)
|
||||
- [Configuration options](#configuration-options)
|
||||
|
||||
|
||||
## Overview
|
||||
The `custom_suggestions` tool scans the PR code changes, and automatically generates custom suggestions for improving the PR code.
|
||||
It shares similarities with the `improve` tool, but with one main difference: the `custom_suggestions` tool will only propose suggestions that follow specific guidelines defined by the prompt in the `pr_custom_suggestions.prompt` configuration.
|
||||
|
||||
The tool can be triggered [automatically](https://github.com/Codium-ai/pr-agent/blob/main/Usage.md#github-app-automatic-tools) every time a new PR is opened, or can be invoked manually by commenting on a PR.
|
||||
|
||||
When commenting, use the following template:
|
||||
|
||||
```
|
||||
/custom_suggestions --pr_custom_suggestions.prompt="The suggestions should focus only on the following:\n-...\n-...\n-..."
|
||||
```
|
||||
|
||||
With a [configuration file](https://github.com/Codium-ai/pr-agent/blob/main/Usage.md#working-with-github-app), use the following template:
|
||||
|
||||
```
|
||||
[pr_custom_suggestions]
|
||||
prompt="""\
|
||||
The suggestions should focus only on the following:
|
||||
-...
|
||||
-...
|
||||
-...
|
||||
"""
|
||||
```
|
||||
Using a configuration file is recommended, since it allows using multi-line instructions.
|
||||
|
||||
Don't forget - with this tool, you are the prompter. Be specific, clear, and concise in the instructions. Specify relevant aspects that you want the model to focus on. \
|
||||
You might benefit from several trial-and-error iterations, until you get the correct prompt for your use case.
|
||||
|
||||
## Example usage
|
||||
|
||||
Here is an example of a possible prompt:
|
||||
```
|
||||
[pr_custom_suggestions]
|
||||
prompt="""\
|
||||
The suggestions should focus only on the following:
|
||||
- look for edge cases when implementing a new function
|
||||
- make sure every variable has a meaningful name
|
||||
- make sure the code is efficient
|
||||
"""
|
||||
```
|
||||
|
||||
The instructions above are just an example. We want to emphasize that the prompt should be specific and clear, and be tailored to the needs of your project.
|
||||
|
||||
Results obtained with the prompt above:
|
||||
___
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/custom_suggestions_prompt.png width="512"></kbd>
|
||||
___
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/custom_suggestions_result.png width="768"></kbd>
|
||||
___
|
||||
|
||||
## Configuration options
|
||||
|
||||
`prompt`: the prompt for the tool. It should be a multi-line string.
|
||||
|
||||
`num_code_suggestions`: number of code suggestions provided by the 'custom_suggestions' tool. Default is 4.
|
||||
|
||||
`enable_help_text`: if set to true, the tool will display a help text in the comment. Default is true.
|
155 docs/DESCRIBE.md
@@ -1,155 +0,0 @@
|
||||
# Describe Tool
|
||||
## Table of Contents
|
||||
- [Overview](#overview)
|
||||
- [Configuration options](#configuration-options)
|
||||
- [Inline file summary 💎](#inline-file-summary-)
|
||||
- [Handle custom labels from the Repo's labels page :gem:](#handle-custom-labels-from-the-repos-labels-page-gem)
|
||||
- [Markers template](#markers-template)
|
||||
- [Usage Tips](#usage-tips)
|
||||
- [Automation](#automation)
|
||||
- [Custom labels](#custom-labels)
|
||||
|
||||
## Overview
|
||||
The `describe` tool scans the PR code changes, and generates a description for the PR - title, type, summary, walkthrough and labels.
|
||||
|
||||
The tool can be triggered automatically every time a new PR is [opened](https://github.com/Codium-ai/pr-agent/blob/main/Usage.md#github-app-automatic-tools), or it can be invoked manually by commenting on any PR:
|
||||
```
|
||||
/describe
|
||||
```
|
||||
For example:
|
||||
___
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/describe_comment.png width="768"></kbd>
|
||||
___
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/describe_new.png width="768"></kbd>
|
||||
___
|
||||
|
||||
### Configuration options
|
||||
To edit [configurations](./../pr_agent/settings/configuration.toml#L46) related to the describe tool (`pr_description` section), use the following template:
|
||||
```
|
||||
/describe --pr_description.some_config1=... --pr_description.some_config2=...
|
||||
```
|
||||
|
||||
**Possible configurations:**
|
||||
- `publish_labels`: if set to true, the tool will publish the labels to the PR. Default is true.
|
||||
|
||||
- `publish_description_as_comment`: if set to true, the tool will publish the description as a comment to the PR. If false, it will overwrite the original description. Default is false.
|
||||
|
||||
- `add_original_user_description`: if set to true, the tool will add the original user description to the generated description. Default is false.
|
||||
|
||||
- `keep_original_user_title`: if set to true, the tool will keep the original PR title, and won't change it. Default is false.
|
||||
|
||||
- `extra_instructions`: Optional extra instructions to the tool. For example: "focus on the changes in the file X. Ignore change in ...".
|
||||
|
||||
- To enable `custom labels`, apply the configuration changes described [here](./GENERATE_CUSTOM_LABELS.md#configuration-changes)
|
||||
|
||||
- `enable_pr_type`: if set to false, it will not show the `PR type` as a text value in the description content. Default is true.
|
||||
|
||||
- `final_update_message`: if set to true, it will add a comment message [`PR Description updated to latest commit...`](https://github.com/Codium-ai/pr-agent/pull/499#issuecomment-1837412176) after finishing calling `/describe`. Default is true.
|
||||
|
||||
- `enable_semantic_files_types`: if set to true, "Changes walkthrough" section will be generated. Default is true.
|
||||
- `collapsible_file_list`: if set to true, the file list in the "Changes walkthrough" section will be collapsible. If set to "adaptive", the file list will be collapsible only if there are more than 8 files. Default is "adaptive".
|
||||
|
||||
### Inline file summary 💎
|
||||
> This feature is available only in PR-Agent Pro
|
||||
|
||||
This will enable you to quickly understand the changes in each file, while reviewing the code changes (diff view).
|
||||
To enable inline file summary, set `pr_description.inline_file_summary` in the configuration file, possible values are:
|
||||
- `'table'`: File changes walkthrough table will be displayed on the top of the "Files changed" tab, in addition to the "Conversation" tab.
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/diffview-table.png width="1024"></kbd>
|
||||
- `true`: A collapsible file comment with a changes title and a changes summary for each file in the PR.
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/diffview_changes.png width="1024"></kbd>
|
||||
- `false` (`default`): File changes walkthrough will be added only to the "Conversation" tab.
|
||||
|
||||
*Note that this feature is currently available only for GitHub.
|
||||
|
||||
|
||||
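For example, a minimal `.pr_agent.toml` sketch that enables the table variant described above:

```
[pr_description]
inline_file_summary = "table"
```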
### Handle custom labels from the Repo's labels page :gem:
|
||||
> This feature is available only in PR-Agent Pro
|
||||
|
||||
You can control the custom labels that will be suggested by the `describe` tool, from the repo's labels page:
|
||||
|
||||
* GitHub : go to `https://github.com/{owner}/{repo}/labels` (or click on the "Labels" tab in the issues or PRs page)
|
||||
* GitLab : go to `https://gitlab.com/{owner}/{repo}/-/labels` (or click on "Manage" -> "Labels" on the left menu)
|
||||
|
||||
Now add/edit the custom labels. They should be formatted as follows:
|
||||
* Label name: The name of the custom label.
|
||||
* Description: Start the description with the prefix `pr_agent:`, for example: `pr_agent: Description of when AI should suggest this label`.<br>
|
||||
|
||||
The description should be comprehensive and detailed, indicating when to add the desired label. For example:
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/add_native_custom_labels.png width="880"></kbd>
|
||||
|
||||
|
||||
### Markers template
|
||||
To enable markers, set `pr_description.use_description_markers=true`.
|
||||
Markers make it easy to integrate the user's content and auto-generated content, with a template-like mechanism.
|
||||
|
||||
For example, if the PR original description was:
|
||||
```
|
||||
User content...
|
||||
|
||||
## PR Type:
|
||||
pr_agent:type
|
||||
|
||||
## PR Description:
|
||||
pr_agent:summary
|
||||
|
||||
## PR Walkthrough:
|
||||
pr_agent:walkthrough
|
||||
```
|
||||
The marker `pr_agent:type` will be replaced with the PR type, `pr_agent:summary` will be replaced with the PR summary, and `pr_agent:walkthrough` will be replaced with the PR walkthrough.
|
||||
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/describe_markers_before.png width="768"></kbd>
|
||||
|
||||
==>
|
||||
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/describe_markers_after.png width="768"></kbd>
|
||||
|
||||
**Configuration params:**
|
||||
|
||||
- `use_description_markers`: if set to true, the tool will use markers template. It replaces every marker of the form `pr_agent:marker_name` with the relevant content. Default is false.
|
||||
- `include_generated_by_header`: if set to true, the tool will add a dedicated header: 'Generated by PR Agent at ...' to any automatic content. Default is true.
|
||||
|
||||
|
||||
## Usage Tips
|
||||
1) [Automation](#automation)
|
||||
2) [Custom labels](#custom-labels)
|
||||
### Automation
|
||||
- When you first install the app, the [default mode](https://github.com/Codium-ai/pr-agent/blob/main/Usage.md#github-app-automatic-tools) for the describe tool is:
|
||||
```
|
||||
pr_commands = ["/describe --pr_description.add_original_user_description=true"
|
||||
"--pr_description.keep_original_user_title=true", ...]
|
||||
```
|
||||
meaning the `describe` tool will run automatically on every PR, will keep the original title, and will add the original user description above the generated description.
|
||||
<br> These default settings aim to strike a good balance between automation and control:
|
||||
If you want more automation, just give the PR a title, and the tool will auto-write a full description; If you want more control, you can add a detailed description, and the tool will add the complementary description below it.
|
||||
- For maximal automation, you can change the default mode to:
|
||||
```
|
||||
pr_commands = ["/describe --pr_description.add_original_user_description=false"
|
||||
"--pr_description.keep_original_user_title=true", ...]
|
||||
```
|
||||
so the title will be auto-generated as well.
|
||||
- Markers are an alternative way to control the generated description, to give maximal control to the user. If you set:
|
||||
```
|
||||
pr_commands = ["/describe --pr_description.use_description_markers=true", ...]
|
||||
```
|
||||
the tool will replace every marker of the form `pr_agent:marker_name` in the PR description with the relevant content, where `marker_name` is one of the following:
|
||||
- `type`: the PR type.
|
||||
- `summary`: the PR summary.
|
||||
- `walkthrough`: the PR walkthrough.
|
||||
|
||||
Note that when markers are enabled, if the original PR description does not contain any markers, the tool will not alter the description at all.
|
||||
|
||||
### Custom labels
|
||||
The default labels of the describe tool are quite generic, since they are meant to be used in any repo: [`Bug fix`, `Tests`, `Enhancement`, `Documentation`, `Other`].
|
||||
|
||||
If you specify [custom labels](#handle-custom-labels-from-the-repos-labels-page-gem) in the repo's labels page, you can get tailored labels for your use cases.
|
||||
Examples for custom labels:
|
||||
- `Main topic:performance` - pr_agent:The main topic of this PR is performance
|
||||
- `New endpoint` - pr_agent:A new endpoint was added in this PR
|
||||
- `SQL query` - pr_agent:A new SQL query was added in this PR
|
||||
- `Dockerfile changes` - pr_agent:The PR contains changes in the Dockerfile
|
||||
- ...
|
||||
|
||||
The list above is eclectic, and aims to give an idea of different possibilities. Define custom labels that are relevant for your repo and use cases.
|
||||
Note that Labels are not mutually exclusive, so you can add multiple label categories.
|
||||
<br>Make sure to provide a proper title, and a detailed and well-phrased description for each label, so the tool will know when to suggest it.
|
@@ -1,27 +0,0 @@
## Overview
`PR-Agent` offers extensive pull request functionalities across various git providers:

| | | GitHub | Gitlab | Bitbucket | CodeCommit | Azure DevOps | Gerrit |
|-------|---------------------------------------------|:------:|:------:|:---------:|:----------:|:----------:|:----------:|
| TOOLS | Review | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | ⮑ Incremental | :white_check_mark: | | | | | |
| | Ask | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | Auto-Description | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | Improve Code | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | | :white_check_mark: |
| | ⮑ Extended | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | | :white_check_mark: |
| | Reflect and Review | :white_check_mark: | :white_check_mark: | :white_check_mark: | | :white_check_mark: | :white_check_mark: |
| | Update CHANGELOG.md | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | | |
| | Find similar issue | :white_check_mark: | | | | | |
| | Add Documentation | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | | :white_check_mark: |
| | Generate Custom Labels 💎 | :white_check_mark: | :white_check_mark: | | | | |
| | | | | | | | |
| USAGE | CLI | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | |
| | App / webhook | :white_check_mark: | :white_check_mark: | | | | |
| | Tagging bot | :white_check_mark: | | | | | |
| | Actions | :white_check_mark: | | | | | |
| | Web server | | | | | | :white_check_mark: |
| | | | | | | | |
| CORE | PR compression | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | Repo language prioritization | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | Adaptive and token-aware<br />file patch fitting | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | Multiple models support | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | Incremental PR Review | :white_check_mark: | | | | | |
@@ -1,57 +0,0 @@
|
||||
# Generate Custom Labels 💎
|
||||
The `generate_labels` tool scans the PR code changes, and given a list of labels and their descriptions, it automatically suggests labels that match the PR code changes.
|
||||
|
||||
It can be invoked manually by commenting on any PR:
|
||||
```
|
||||
/generate_labels
|
||||
```
|
||||
For example:
|
||||
|
||||
If we wish to detect changes to SQL queries in a given PR, we can add the following custom label along with its description:
|
||||
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/custom_labels_list.png width="768"></kbd>
|
||||
|
||||
When running the `generate_labels` tool on a PR that includes changes in SQL queries, it will automatically suggest the custom label:
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/custom_label_published.png width="768"></kbd>
|
||||
|
||||
Note that in addition to the dedicated tool `generate_labels`, the custom labels will also be used by the `describe` tool.
|
||||
|
||||
### How to enable custom labels
|
||||
There are 3 ways to enable custom labels:
|
||||
|
||||
#### 1. CLI (local configuration file)
|
||||
When working from CLI, you need to apply the [configuration changes](#configuration-changes) to the [custom_labels file](./../pr_agent/settings/custom_labels.toml):
|
||||
|
||||
#### 2. Repo configuration file
|
||||
To enable custom labels, you need to apply the [configuration changes](#configuration-changes) to the local `.pr_agent.toml` file in your repository.
|
||||
|
||||
#### 3. Handle custom labels from the Repo's labels page
|
||||
> This feature is available only in PR-Agent Pro
|
||||
* GitHub : `https://github.com/{owner}/{repo}/labels`, or click on the "Labels" tab in the issues or PRs page.
|
||||
* GitLab : `https://gitlab.com/{owner}/{repo}/-/labels`, or click on "Manage" -> "Labels" on the left menu.
|
||||
|
||||
b. Add/edit the custom labels. They should be formatted as follows:
|
||||
* Label name: The name of the custom label.
|
||||
* Description: Start the description with the prefix `pr_agent:`, for example: `pr_agent: Description of when AI should suggest this label`.<br>
|
||||
The description should be comprehensive and detailed, indicating when to add the desired label.
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/add_native_custom_labels.png width="880"></kbd>
|
||||
|
||||
c. Now the custom labels will be included in the `generate_labels` tool.
|
||||
|
||||
*This feature is supported in GitHub and GitLab.
|
||||
|
||||
#### Configuration changes
|
||||
- Change `enable_custom_labels` to True: This will turn off the default labels and enable the custom labels provided in the custom_labels.toml file.
|
||||
- Add the custom labels. It should be formatted as follows:
|
||||
|
||||
```
|
||||
[config]
|
||||
enable_custom_labels=true
|
||||
|
||||
[custom_labels."Custom Label Name"]
|
||||
description = "Description of when AI should suggest this label"
|
||||
|
||||
[custom_labels."Custom Label 2"]
|
||||
description = "Description of when AI should suggest this label 2"
|
||||
```
|
||||
|
@@ -1,90 +0,0 @@
|
||||
# Improve Tool
|
||||
|
||||
## Table of Contents
|
||||
- [Overview](#overview)
|
||||
- [Configuration options](#configuration-options)
|
||||
- [Summarize mode](#summarize-mode)
|
||||
- [Usage Tips](#usage-tips)
|
||||
- [Extra instructions](#extra-instructions)
|
||||
- [PR footprint - regular vs summarize mode](#pr-footprint---regular-vs-summarize-mode)
|
||||
- [A note on code suggestions quality](#a-note-on-code-suggestions-quality)
|
||||
|
||||
## Overview
|
||||
The `improve` tool scans the PR code changes, and automatically generates suggestions for improving the PR code.
|
||||
The tool can be triggered automatically every time a new PR is [opened](https://github.com/Codium-ai/pr-agent/blob/main/Usage.md#github-app-automatic-tools), or it can be invoked manually by commenting on any PR:
|
||||
```
|
||||
/improve
|
||||
```
|
||||
|
||||
### Summarized vs committable code suggestions
|
||||
|
||||
The code suggestions can be presented as a single comment (via `pr_code_suggestions.summarize=true`):
|
||||
___
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/code_suggestions_as_comment.png width="768"></kbd>
|
||||
___
|
||||
|
||||
Or as a separate committable code comment for each suggestion:
|
||||
___
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/improve.png width="768"></kbd>
|
||||
|
||||
---
|
||||
Note that a single comment has a significantly smaller PR footprint. We recommend this mode for most cases.
|
||||
Also note that collapsible sections are not supported in _Bitbucket_. Hence, the suggestions are presented there as code comments.
|
||||
|
||||
### Extended mode
|
||||
|
||||
An extended mode, which does not involve PR Compression and provides more comprehensive suggestions, can be invoked by commenting on any PR:
|
||||
```
|
||||
/improve --extended
|
||||
```
|
||||
Note that the extended mode divides the PR code changes into chunks, up to the token limits, where each chunk is handled separately (might use multiple calls to GPT-4 for large PRs).
|
||||
Hence, the total number of suggestions is proportional to the number of chunks, i.e., the size of the PR.
|
||||
|
||||
### Configuration options
|
||||
|
||||
To edit [configurations](./../pr_agent/settings/configuration.toml#L66) related to the improve tool (`pr_code_suggestions` section), use the following template:
|
||||
```
|
||||
/improve --pr_code_suggestions.some_config1=... --pr_code_suggestions.some_config2=...
|
||||
```
|
||||
|
||||
#### General options
|
||||
- `num_code_suggestions`: number of code suggestions provided by the 'improve' tool. Default is 4.
|
||||
- `extra_instructions`: Optional extra instructions to the tool. For example: "focus on the changes in the file X. Ignore change in ...".
|
||||
- `rank_suggestions`: if set to true, the tool will rank the suggestions, based on importance. Default is false.
|
||||
- `summarize`: if set to true, the tool will display the suggestions in a single comment. Default is false.
|
||||
- `enable_help_text`: if set to true, the tool will display a help text in the comment. Default is true.
|
||||
#### params for '/improve --extended' mode
|
||||
- `auto_extended_mode`: enable extended mode automatically (no need for the `--extended` option). Default is false.
|
||||
- `num_code_suggestions_per_chunk`: number of code suggestions provided by the 'improve' tool, per chunk. Default is 8.
|
||||
- `rank_extended_suggestions`: if set to true, the tool will rank the suggestions, based on importance. Default is true.
|
||||
- `max_number_of_calls`: maximum number of chunks. Default is 5.
|
||||
- `final_clip_factor`: factor to remove suggestions with low confidence. Default is 0.9.
|
||||
|
||||
|
||||
## Usage Tips
|
||||
|
||||
### Extra instructions
|
||||
Extra instructions are very important for the `improve` tool, since they enable you to guide the model to suggestions that are more relevant to the specific needs of the project.
|
||||
|
||||
Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Specify relevant aspects that you want the model to focus on.
|
||||
|
||||
Examples for extra instructions:
|
||||
```
|
||||
[pr_code_suggestions] # /improve #
|
||||
extra_instructions="""
|
||||
Emphasize the following aspects:
|
||||
- Does the code logic cover relevant edge cases?
|
||||
- Is the code logic clear and easy to understand?
|
||||
- Is the code logic efficient?
|
||||
...
|
||||
"""
|
||||
```
|
||||
Use triple quotes to write multi-line instructions. Use bullet points to make the instructions more readable.
|
||||
|
||||
### A note on code suggestions quality
|
||||
|
||||
- While the current AI for code is getting better and better (GPT-4), it's not flawless. Not all the suggestions will be perfect, and a user should not accept all of them automatically.
|
||||
- Suggestions are not meant to be [simplistic](./../pr_agent/settings/pr_code_suggestions_prompts.toml#L34). Instead, they aim to give deep feedback and raise questions, ideas and thoughts to the user, who can then use his judgment, experience, and understanding of the code base.
|
||||
- It is recommended to use the 'extra_instructions' field to guide the model to suggestions that are more relevant to the specific needs of the project.
- The best quality will be obtained by using 'improve --extended' mode.
|
||||
|
148 docs/REVIEW.md
@@ -1,148 +0,0 @@
|
||||
# Review Tool
|
||||
|
||||
## Table of Contents
|
||||
- [Overview](#overview)
|
||||
- [Configuration options](#configuration-options)
|
||||
- [Incremental Mode](#incremental-mode)
|
||||
- [PR Reflection](#pr-reflection)
|
||||
- [Usage Tips](#usage-tips)
|
||||
- [General guidelines](#general-guidelines)
|
||||
- [Code suggestions](#code-suggestions)
|
||||
- [Automation](#automation)
|
||||
- [Auto-labels](#auto-labels)
|
||||
- [Extra instructions](#extra-instructions)
|
||||
|
||||
## Overview
|
||||
The `review` tool scans the PR code changes, and automatically generates a PR review.
|
||||
The tool can be triggered automatically every time a new PR is [opened](https://github.com/Codium-ai/pr-agent/blob/main/Usage.md#github-app-automatic-tools), or can be invoked manually by commenting on any PR:
|
||||
```
|
||||
/review
|
||||
```
|
||||
For example:
|
||||
___
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/review_comment.png width="768"></kbd>
|
||||
___
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/review.png width="768"></kbd>
|
||||
___
|
||||
|
||||
### Configuration options
|
||||
|
||||
To edit [configurations](./../pr_agent/settings/configuration.toml#L19) related to the review tool (`pr_reviewer` section), use the following template:
|
||||
```
|
||||
/review --pr_reviewer.some_config1=... --pr_reviewer.some_config2=...
|
||||
```
|
||||
|
||||
#### General options
|
||||
- `num_code_suggestions`: number of code suggestions provided by the 'review' tool. Default is 4.
|
||||
- `inline_code_comments`: if set to true, the tool will publish the code suggestions as comments on the code diff. Default is false.
|
||||
- `persistent_comment`: if set to true, the review comment will be persistent, meaning that every new review request will edit the previous one. Default is true.
|
||||
- `extra_instructions`: Optional extra instructions to the tool. For example: "focus on the changes in the file X. Ignore change in ...".
|
||||
#### Enable\\disable features
|
||||
- `require_focused_review`: if set to true, the tool will add a section - 'is the PR a focused one'. Default is false.
|
||||
- `require_score_review`: if set to true, the tool will add a section that scores the PR. Default is false.
|
||||
- `require_tests_review`: if set to true, the tool will add a section that checks if the PR contains tests. Default is true.
|
||||
- `require_security_review`: if set to true, the tool will add a section that checks if the PR contains security issues. Default is true.
|
||||
- `require_estimate_effort_to_review`: if set to true, the tool will add a section that estimates the effort needed to review the PR. Default is true.
|
||||
#### SOC2 ticket compliance 💎
|
||||
This sub-tool checks if the PR description properly contains a ticket to a project management system (e.g., Jira, Asana, Trello, etc.), as required by SOC2 compliance. If not, it will add a label to the PR: "Missing SOC2 ticket".
|
||||
- `require_soc2_ticket`: If set to true, the SOC2 ticket checker sub-tool will be enabled. Default is false.
|
||||
- `soc2_ticket_prompt`: The prompt for the SOC2 ticket review. Default is: `Does the PR description include a link to ticket in a project management system (e.g., Jira, Asana, Trello, etc.) ?`. Edit this field if your compliance requirements are different.
|
||||
#### Adding PR labels
|
||||
- `enable_review_labels_security`: if set to true, the tool will publish a 'possible security issue' label if it detects a security issue. Default is true.
|
||||
- `enable_review_labels_effort`: if set to true, the tool will publish a 'Review effort [1-5]: x' label. Default is true.
|
||||
|
||||
### Incremental Mode
|
||||
Incremental review only considers changes since the last PR-Agent review. This can be useful when working on the PR in an iterative manner, and you want to focus on the changes since the last review instead of reviewing the entire PR again.
|
||||
For invoking the incremental mode, the following command can be used:
|
||||
```
|
||||
/review -i
|
||||
```
|
||||
Note that the incremental mode is only available for GitHub.
|
||||
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/incremental_review.png width="768"></kbd>
|
||||
|
||||
Under the section 'pr_reviewer', the [configuration file](./../pr_agent/settings/configuration.toml#L19) contains options to customize the 'review -i' tool.
|
||||
These configurations can be used to control the rate at which the incremental review tool will create new review comments when invoked automatically, to prevent making too much noise in the PR.
|
||||
- `minimal_commits_for_incremental_review`: Minimal number of commits since the last review that are required to create an incremental review.
If there are fewer than the specified number of commits since the last review, the tool will not perform any action.
Default is 0 - the tool will always run, no matter how many commits have been made since the last review.
- `minimal_minutes_for_incremental_review`: Minimal number of minutes that need to pass since the last reviewed commit to create an incremental review.
If less than the specified number of minutes have passed between the last reviewed commit and running this command, the tool will not perform any action.
Default is 0 - the tool will always run, no matter how much time has passed since the last reviewed commit.
- `require_all_thresholds_for_incremental_review`: If set to true, all the previous thresholds must be met for the incremental review to run. If false, only one is enough to run the tool.
For example, if `minimal_commits_for_incremental_review=2` and `minimal_minutes_for_incremental_review=2`, and we have 3 commits since the last review, but the last reviewed commit is from 1 minute ago:
When `require_all_thresholds_for_incremental_review=true` the incremental review __will not__ run, because only 1 out of 2 conditions were met (we have enough commits, but the last review is too recent),
but when `require_all_thresholds_for_incremental_review=false` the incremental review __will__ run, because one condition is enough (we have 3 commits, which is more than the configured 2).
Default is false - the tool will run as long as at least one condition is met.
|
||||
- `remove_previous_review_comment`: if set to true, the tool will remove the previous review comment before adding a new one. Default is false.
|
||||
|
||||
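Putting the thresholds above together, a sketch of a `.pr_agent.toml` override (the values are illustrative):

```
[pr_reviewer]
minimal_commits_for_incremental_review = 2
minimal_minutes_for_incremental_review = 30
require_all_thresholds_for_incremental_review = false
```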
### PR Reflection
|
||||
By invoking:
|
||||
```
|
||||
/reflect_and_review
|
||||
```
|
||||
The tool will first ask the author questions about the PR, and will guide the review based on their answers.
|
||||
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/reflection_questions.png width="768"></kbd>
|
||||
___
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/reflection_answers.png width="768"></kbd>
|
||||
___
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/reflection_insights.png width="768"></kbd>
|
||||
___
|
||||
|
||||
|
||||
## Usage Tips
|
||||
1) [General guidelines](#general-guidelines)
|
||||
2) [Code suggestions](#code-suggestions)
|
||||
3) [Automation](#automation)
|
||||
4) [Auto-labels](#auto-labels)
|
||||
5) [Extra instructions](#extra-instructions)
|
||||
|
||||
### General guidelines
|
||||
The `review` tool provides a collection of possible types of feedback about a PR.
|
||||
It is recommended to review the [Configuration options](#configuration-options) section, and choose the relevant options for your use case.
|
||||
|
||||
Some of the features that are disabled by default are quite useful, and should be considered for enabling. For example:
|
||||
`require_score_review`, `require_soc2_ticket`, and more.
|
||||
|
||||
On the other hand, if you find one of the enabled features to be irrelevant for your use case, disable it. No default configuration can fit all use cases.
|
||||
|
||||
### Code suggestions
|
||||
The `review` tool provides several types of feedback; one of them is code suggestions.
If you are interested **only** in the code suggestions, it is recommended to use the [`improve`](./IMPROVE.md) feature instead, since it is dedicated only to code suggestions, and usually gives better results.
Use the `review` tool if you want to get more comprehensive feedback, which includes code suggestions as well.
|
||||
|
||||
### Automation
|
||||
- When you first install the app, the [default mode](https://github.com/Codium-ai/pr-agent/blob/main/Usage.md#github-app-automatic-tools) for the `review` tool is:
|
||||
```
|
||||
pr_commands = ["/review", ...]
|
||||
```
|
||||
meaning the `review` tool will run automatically on every PR, with the default configuration.
|
||||
Edit this field to enable/disable the tool, or to change the used configurations
|
||||
|
||||
### Auto-labels
|
||||
The `review` tool can auto-generate two specific types of labels for a PR:
|
||||
- a `possible security issue` label that detects a possible [security issue](https://github.com/Codium-ai/pr-agent/blob/tr/user_description/pr_agent/settings/pr_reviewer_prompts.toml#L136) (`enable_review_labels_security` flag)
|
||||
- a `Review effort [1-5]: x` label, where x is the estimated effort to review the PR (`enable_review_labels_effort` flag)
|
||||
|
||||
Both labels are useful, and we recommend enabling them.
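A configuration sketch enabling both labels (flag names as listed above):

```
[pr_reviewer]
enable_review_labels_security=true
enable_review_labels_effort=true
```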
|
||||
|
||||
### Extra instructions
|
||||
Extra instructions are important.
|
||||
The `review` tool can be configured with extra instructions, which can be used to guide the model toward feedback tailored to the needs of your project.
|
||||
|
||||
Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Specify the relevant sub-tool, and the relevant aspects of the PR that you want to emphasize.
|
||||
|
||||
Examples for extra instructions:
|
||||
```
|
||||
[pr_reviewer] # /review #
|
||||
extra_instructions="""
|
||||
In the code feedback section, emphasize the following:
|
||||
- Does the code logic cover relevant edge cases?
|
||||
- Is the code logic clear and easy to understand?
|
||||
- Is the code logic efficient?
|
||||
...
|
||||
"""
|
||||
```
|
||||
Use triple quotes to write multi-line instructions. Use bullet points to make the instructions more readable.
|
||||
|
@ -1,39 +0,0 @@
|
||||
# Similar Issue Tool
|
||||
The similar issue tool retrieves the most similar issues to the current issue.
|
||||
It can be invoked manually by commenting on any PR:
|
||||
```
|
||||
/similar_issue
|
||||
```
|
||||
For example:
|
||||
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/similar_issue_original_issue.png width="768"></kbd>
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/similar_issue_comment.png width="768"></kbd>
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/similar_issue.png width="768"></kbd>
|
||||
|
||||
Note that to perform retrieval, the `similar_issue` tool indexes all of the repo's previous issues (once).
|
||||
|
||||
|
||||
**Select a VectorDB** by changing the `pr_similar_issue` parameter in the `configuration.toml` file.
|
||||
|
||||
Two VectorDBs are available:
|
||||
1. LanceDB
|
||||
2. Pinecone
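For illustration, a configuration sketch for selecting the VectorDB (the exact key name under `[pr_similar_issue]` is an assumption here; check `configuration.toml` for the actual key):

```
[pr_similar_issue]
vectordb = "pinecone" # or "lancedb"
```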
|
||||
|
||||
To enable usage of the '**similar issue**' tool for Pinecone, you need to set the following keys in `.secrets.toml` (or in the relevant environment variables):
|
||||
|
||||
```
|
||||
[pinecone]
|
||||
api_key = "..."
|
||||
environment = "..."
|
||||
```
|
||||
These parameters can be obtained by registering with [Pinecone](https://app.pinecone.io/?sessionType=signup/).
|
||||
|
||||
|
||||
### How to use:
|
||||
- To invoke the 'similar issue' tool from **CLI**, run:
|
||||
`python3 cli.py --issue_url=... similar_issue`
|
||||
|
||||
- To invoke the 'similar issue' tool via online usage, [comment](https://github.com/Codium-ai/pr-agent/issues/178#issuecomment-1716934893) on a PR:
|
||||
`/similar_issue`
|
||||
|
||||
- You can also enable the 'similar issue' tool to run automatically when a new issue is opened, by adding it to the [pr_commands list in the github_app section](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L66)
|
30
docs/TEST.md
@ -1,30 +0,0 @@
|
||||
# Test Tool 💎
|
||||
By combining LLM abilities with static code analysis, the `test` tool generates tests for a selected component, based on the PR code changes.
|
||||
It can be invoked manually by commenting on any PR:
|
||||
```
|
||||
/test component_name
|
||||
```
|
||||
where 'component_name' is the name of a specific component in the PR.
|
||||
To get a list of the components that changed in the PR, use the [`analyze`](https://github.com/Codium-ai/pr-agent/blob/main/docs/Analyze.md) tool.
|
||||
|
||||
|
||||
An example [result](https://github.com/Codium-ai/pr-agent/pull/598#issuecomment-1913679429):
|
||||
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/test1.png width="704"></kbd>
|
||||
___
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/test2.png width="768"></kbd>
|
||||
___
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/test3.png width="768"></kbd>
|
||||
|
||||
Languages currently supported by the tool: Python, Java, C++, JavaScript, TypeScript.
|
||||
|
||||
|
||||
|
||||
### Configuration options
|
||||
- `num_tests`: number of tests to generate. Default is 3.
|
||||
- `testing_framework`: the testing framework to use. If not set, for Python it will use `pytest`, for Java it will use `JUnit`, for C++ it will use `Catch2`, and for JavaScript and TypeScript it will use `jest`.
|
||||
- `avoid_mocks`: if set to true, the tool will try to avoid using mocks in the generated tests. Note that even if this option is set to true, the tool might still use mocks if it cannot generate a test without them. Default is true.
|
||||
- `extra_instructions`: Optional extra instructions to the tool. For example: "use the following mock injection scheme: ...".
|
||||
- `file`: in case there are several components with the same name, you can specify the relevant file.
|
||||
- `class_name`: in case there are several methods with the same name in the same file, you can specify the relevant class name.
|
||||
- `enable_help_text`: if set to true, the tool will add a help text to the PR comment. Default is true.
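Putting these options together, a configuration sketch (the section name `[pr_tests]` is an assumption for illustration; the option names are those listed above):

```
[pr_tests]
num_tests=3
testing_framework="pytest"
avoid_mocks=true
enable_help_text=true
```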
|
@ -1,13 +0,0 @@
|
||||
## Tools Guide
|
||||
- [DESCRIBE](./DESCRIBE.md)
|
||||
- [REVIEW](./REVIEW.md)
|
||||
- [IMPROVE](./IMPROVE.md)
|
||||
- [ASK](./ASK.md)
|
||||
- [SIMILAR_ISSUE](./SIMILAR_ISSUE.md)
|
||||
- [UPDATE CHANGELOG](./UPDATE_CHANGELOG.md)
|
||||
- [ADD DOCUMENTATION](./ADD_DOCUMENTATION.md) 💎
|
||||
- [GENERATE CUSTOM LABELS](./GENERATE_CUSTOM_LABELS.md) 💎
|
||||
- [Analyze](./Analyze.md) 💎
|
||||
- [Test](./TEST.md) 💎
|
||||
|
||||
See the **[installation guide](/INSTALL.md)** for instructions on setting up PR-Agent.
|
@ -1,19 +0,0 @@
|
||||
# Update Changelog Tool
|
||||
|
||||
The `update_changelog` tool automatically updates the CHANGELOG.md file with the PR changes.
|
||||
It can be invoked manually by commenting on any PR:
|
||||
```
|
||||
/update_changelog
|
||||
```
|
||||
For example:
|
||||
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/update_changelog_comment.png width="768"></kbd>
|
||||
<kbd><img src=https://codium.ai/images/pr_agent/update_changelog.png width="768"></kbd>
|
||||
|
||||
|
||||
### Configuration options
|
||||
|
||||
Under the section 'pr_update_changelog', the [configuration file](./../pr_agent/settings/configuration.toml#L50) contains options to customize the 'update changelog' tool:
|
||||
|
||||
- `push_changelog_changes`: whether to push the changes to CHANGELOG.md, or just print them. Default is false (print only).
|
||||
- `extra_instructions`: Optional extra instructions to the tool. For example: "focus on the changes in the file X. Ignore changes in ...".
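A minimal configuration sketch for this tool (the values shown are the defaults described above):

```
[pr_update_changelog]
push_changelog_changes=false
extra_instructions=""
```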
|
BIN
pics/debugger.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 534 KiB |
BIN
pics/logo-dark.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 22 KiB |
BIN
pics/logo-light.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 25 KiB |
@ -1,23 +1,18 @@
|
||||
import logging
|
||||
import os
|
||||
import shlex
|
||||
from functools import partial
|
||||
|
||||
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
|
||||
from pr_agent.algo.ai_handlers.litellm_ai_handler import LiteLLMAIHandler
|
||||
import tempfile
|
||||
|
||||
from pr_agent.algo.utils import update_settings_from_args
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers.utils import apply_repo_settings
|
||||
from pr_agent.log import get_logger
|
||||
from pr_agent.tools.pr_add_docs import PRAddDocs
|
||||
from pr_agent.git_providers import get_git_provider
|
||||
from pr_agent.tools.pr_code_suggestions import PRCodeSuggestions
|
||||
from pr_agent.tools.pr_config import PRConfig
|
||||
from pr_agent.tools.pr_description import PRDescription
|
||||
from pr_agent.tools.pr_generate_labels import PRGenerateLabels
|
||||
from pr_agent.tools.pr_information_from_user import PRInformationFromUser
|
||||
from pr_agent.tools.pr_questions import PRQuestions
|
||||
from pr_agent.tools.pr_reviewer import PRReviewer
|
||||
from pr_agent.tools.pr_similar_issue import PRSimilarIssue
|
||||
from pr_agent.tools.pr_update_changelog import PRUpdateChangelog
|
||||
from pr_agent.tools.pr_config import PRConfig
|
||||
|
||||
command2class = {
|
||||
"auto_review": PRReviewer,
|
||||
@ -35,46 +30,53 @@ command2class = {
|
||||
"update_changelog": PRUpdateChangelog,
|
||||
"config": PRConfig,
|
||||
"settings": PRConfig,
|
||||
"similar_issue": PRSimilarIssue,
|
||||
"add_docs": PRAddDocs,
|
||||
"generate_labels": PRGenerateLabels,
|
||||
}
|
||||
|
||||
commands = list(command2class.keys())
|
||||
|
||||
class PRAgent:
|
||||
def __init__(self, ai_handler: partial[BaseAiHandler,] = LiteLLMAIHandler):
|
||||
self.ai_handler = ai_handler # will be initialized in run_action
|
||||
def __init__(self):
|
||||
pass
|
||||
|
||||
async def handle_request(self, pr_url, request, notify=None) -> bool:
|
||||
# First, apply repo specific settings if exists
|
||||
apply_repo_settings(pr_url)
|
||||
if get_settings().config.use_repo_settings_file:
|
||||
repo_settings_file = None
|
||||
try:
|
||||
git_provider = get_git_provider()(pr_url)
|
||||
repo_settings = git_provider.get_repo_settings()
|
||||
if repo_settings:
|
||||
repo_settings_file = None
|
||||
fd, repo_settings_file = tempfile.mkstemp(suffix='.toml')
|
||||
os.write(fd, repo_settings)
|
||||
get_settings().load_file(repo_settings_file)
|
||||
finally:
|
||||
if repo_settings_file:
|
||||
try:
|
||||
os.remove(repo_settings_file)
|
||||
except Exception as e:
|
||||
logging.error(f"Failed to remove temporary settings file {repo_settings_file}", e)
|
||||
|
||||
# Then, apply user specific settings if exists
|
||||
if isinstance(request, str):
|
||||
request = request.replace("'", "\\'")
|
||||
lexer = shlex.shlex(request, posix=True)
|
||||
lexer.whitespace_split = True
|
||||
action, *args = list(lexer)
|
||||
else:
|
||||
action, *args = request
|
||||
args = update_settings_from_args(args)
|
||||
|
||||
action = action.lstrip("/").lower()
|
||||
if action == "reflect_and_review":
|
||||
get_settings().pr_reviewer.ask_and_reflect = True
|
||||
if action == "reflect_and_review" and not get_settings().pr_reviewer.ask_and_reflect:
|
||||
action = "review"
|
||||
if action == "answer":
|
||||
if notify:
|
||||
notify()
|
||||
await PRReviewer(pr_url, is_answer=True, args=args, ai_handler=self.ai_handler).run()
|
||||
await PRReviewer(pr_url, is_answer=True, args=args).run()
|
||||
elif action == "auto_review":
|
||||
await PRReviewer(pr_url, is_auto=True, args=args, ai_handler=self.ai_handler).run()
|
||||
await PRReviewer(pr_url, is_auto=True, args=args).run()
|
||||
elif action in command2class:
|
||||
if notify:
|
||||
notify()
|
||||
|
||||
await command2class[action](pr_url, ai_handler=self.ai_handler, args=args).run()
|
||||
await command2class[action](pr_url, args=args).run()
|
||||
else:
|
||||
return False
|
||||
return True
|
||||
|
||||
|
@ -1,5 +1,4 @@
|
||||
MAX_TOKENS = {
|
||||
'text-embedding-ada-002': 8000,
|
||||
'gpt-3.5-turbo': 4000,
|
||||
'gpt-3.5-turbo-0613': 4000,
|
||||
'gpt-3.5-turbo-0301': 4000,
|
||||
@ -8,17 +7,8 @@ MAX_TOKENS = {
|
||||
'gpt-4': 8000,
|
||||
'gpt-4-0613': 8000,
|
||||
'gpt-4-32k': 32000,
|
||||
'gpt-4-1106-preview': 128000, # 128K, but may be limited by config.max_model_tokens
|
||||
'claude-instant-1': 100000,
|
||||
'claude-2': 100000,
|
||||
'command-nightly': 4096,
|
||||
'replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1': 4096,
|
||||
'meta-llama/Llama-2-7b-chat-hf': 4096,
|
||||
'vertex_ai/codechat-bison': 6144,
|
||||
'vertex_ai/codechat-bison-32k': 32000,
|
||||
'codechat-bison': 6144,
|
||||
'codechat-bison-32k': 32000,
|
||||
'anthropic.claude-v2': 100000,
|
||||
'anthropic.claude-instant-v1': 100000,
|
||||
'anthropic.claude-v1': 100000,
|
||||
}
|
||||
|
117
pr_agent/algo/ai_handler.py
Normal file
@ -0,0 +1,117 @@
|
||||
import logging
|
||||
|
||||
import litellm
|
||||
import openai
|
||||
from litellm import acompletion
|
||||
from openai.error import APIError, RateLimitError, Timeout, TryAgain
|
||||
from retry import retry
|
||||
|
||||
from pr_agent.config_loader import get_settings
|
||||
|
||||
OPENAI_RETRIES = 5
|
||||
|
||||
|
||||
class AiHandler:
|
||||
"""
|
||||
This class handles interactions with the OpenAI API for chat completions.
|
||||
It initializes the API key and other settings from a configuration file,
|
||||
and provides a method for performing chat completions using the OpenAI ChatCompletion API.
|
||||
"""
|
||||
|
||||
def __init__(self):
|
||||
"""
|
||||
Initializes the OpenAI API key and other settings from a configuration file.
|
||||
Raises a ValueError if the OpenAI key is missing.
|
||||
"""
|
||||
try:
|
||||
openai.api_key = get_settings().openai.key
|
||||
litellm.openai_key = get_settings().openai.key
|
||||
litellm.debugger = get_settings().litellm.debugger
|
||||
self.azure = False
|
||||
if get_settings().get("OPENAI.ORG", None):
|
||||
litellm.organization = get_settings().openai.org
|
||||
if get_settings().get("OPENAI.API_TYPE", None):
|
||||
if get_settings().openai.api_type == "azure":
|
||||
self.azure = True
|
||||
litellm.azure_key = get_settings().openai.key
|
||||
if get_settings().get("OPENAI.API_VERSION", None):
|
||||
litellm.api_version = get_settings().openai.api_version
|
||||
if get_settings().get("OPENAI.API_BASE", None):
|
||||
litellm.api_base = get_settings().openai.api_base
|
||||
if get_settings().get("ANTHROPIC.KEY", None):
|
||||
litellm.anthropic_key = get_settings().anthropic.key
|
||||
if get_settings().get("COHERE.KEY", None):
|
||||
litellm.cohere_key = get_settings().cohere.key
|
||||
if get_settings().get("REPLICATE.KEY", None):
|
||||
litellm.replicate_key = get_settings().replicate.key
|
||||
if get_settings().get("REPLICATE.KEY", None):
|
||||
litellm.replicate_key = get_settings().replicate.key
|
||||
if get_settings().get("HUGGINGFACE.KEY", None):
|
||||
litellm.huggingface_key = get_settings().huggingface.key
|
||||
if get_settings().get("LITELLM.DEBUGGER") and get_settings().get("LITELLM.EMAIL"):
|
||||
litellm.email = get_settings().get("LITELLM.EMAIL", None)
|
||||
except AttributeError as e:
|
||||
raise ValueError("OpenAI key is required") from e
|
||||
|
||||
@property
|
||||
def deployment_id(self):
|
||||
"""
|
||||
Returns the deployment ID for the OpenAI API.
|
||||
"""
|
||||
return get_settings().get("OPENAI.DEPLOYMENT_ID", None)
|
||||
|
||||
@retry(exceptions=(APIError, Timeout, TryAgain, AttributeError, RateLimitError),
|
||||
tries=OPENAI_RETRIES, delay=2, backoff=2, jitter=(1, 3))
|
||||
async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2):
|
||||
"""
|
||||
Performs a chat completion using the OpenAI ChatCompletion API.
|
||||
Retries in case of API errors or timeouts.
|
||||
|
||||
Args:
|
||||
model (str): The model to use for chat completion.
|
||||
temperature (float): The temperature parameter for chat completion.
|
||||
system (str): The system message for chat completion.
|
||||
user (str): The user message for chat completion.
|
||||
|
||||
Returns:
|
||||
tuple: A tuple containing the response and finish reason from the API.
|
||||
|
||||
Raises:
|
||||
TryAgain: If the API response is empty or there are no choices in the response.
|
||||
APIError: If there is an error during OpenAI inference.
|
||||
Timeout: If there is a timeout during OpenAI inference.
|
||||
TryAgain: If there is an attribute error during OpenAI inference.
|
||||
"""
|
||||
try:
|
||||
deployment_id = self.deployment_id
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
logging.debug(
|
||||
f"Generating completion with {model}"
|
||||
f"{(' from deployment ' + deployment_id) if deployment_id else ''}"
|
||||
)
|
||||
response = await acompletion(
|
||||
model=model,
|
||||
deployment_id=deployment_id,
|
||||
messages=[
|
||||
{"role": "system", "content": system},
|
||||
{"role": "user", "content": user}
|
||||
],
|
||||
temperature=temperature,
|
||||
azure=self.azure,
|
||||
force_timeout=get_settings().config.ai_timeout
|
||||
)
|
||||
except (APIError, Timeout, TryAgain) as e:
|
||||
logging.error("Error during OpenAI inference: ", e)
|
||||
raise
|
||||
except (RateLimitError) as e:
|
||||
logging.error("Rate limit error during OpenAI inference: ", e)
|
||||
raise
|
||||
except (Exception) as e:
|
||||
logging.error("Unknown error during OpenAI inference: ", e)
|
||||
raise TryAgain from e
|
||||
if response is None or len(response["choices"]) == 0:
|
||||
raise TryAgain
|
||||
resp = response["choices"][0]['message']['content']
|
||||
finish_reason = response["choices"][0]["finish_reason"]
|
||||
print(resp, finish_reason)
|
||||
return resp, finish_reason
|
@ -1,28 +0,0 @@
|
||||
from abc import ABC, abstractmethod
|
||||
|
||||
class BaseAiHandler(ABC):
|
||||
"""
|
||||
This class defines the interface for an AI handler to be used by the PR Agents.
|
||||
"""
|
||||
|
||||
@abstractmethod
|
||||
def __init__(self):
|
||||
pass
|
||||
|
||||
@property
|
||||
@abstractmethod
|
||||
def deployment_id(self):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2):
|
||||
"""
|
||||
This method should be implemented to return a chat completion from the AI model.
|
||||
Args:
|
||||
model (str): the name of the model to use for the chat completion
|
||||
system (str): the system message string to use for the chat completion
|
||||
user (str): the user message string to use for the chat completion
|
||||
temperature (float): the temperature to use for the chat completion
|
||||
"""
|
||||
pass
|
||||
|
@ -1,67 +0,0 @@
|
||||
try:
|
||||
from langchain.chat_models import ChatOpenAI, AzureChatOpenAI
|
||||
from langchain.schema import SystemMessage, HumanMessage
|
||||
except: # we don't enforce langchain as a dependency, so if it's not installed, just move on
|
||||
pass
|
||||
|
||||
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.log import get_logger
|
||||
|
||||
from openai.error import APIError, RateLimitError, Timeout, TryAgain
|
||||
from retry import retry
|
||||
import functools
|
||||
|
||||
OPENAI_RETRIES = 5
|
||||
|
||||
class LangChainOpenAIHandler(BaseAiHandler):
|
||||
def __init__(self):
|
||||
# Initialize OpenAIHandler specific attributes here
|
||||
super().__init__()
|
||||
self.azure = get_settings().get("OPENAI.API_TYPE", "").lower() == "azure"
|
||||
try:
|
||||
if self.azure:
|
||||
# using a partial function so we can set the deployment_id later to support fallback_deployments
|
||||
# but still need to access the other settings now so we can raise a proper exception if they're missing
|
||||
self._chat = functools.partial(
|
||||
lambda **kwargs: AzureChatOpenAI(**kwargs),
|
||||
openai_api_key=get_settings().openai.key,
|
||||
openai_api_base=get_settings().openai.api_base,
|
||||
openai_api_version=get_settings().openai.api_version,
|
||||
)
|
||||
else:
|
||||
self._chat = ChatOpenAI(openai_api_key=get_settings().openai.key)
|
||||
except AttributeError as e:
|
||||
if getattr(e, "name"):
|
||||
raise ValueError(f"OpenAI {e.name} is required") from e
|
||||
else:
|
||||
raise e
|
||||
|
||||
@property
|
||||
def chat(self):
|
||||
if self.azure:
|
||||
# we must set the deployment_id only here (instead of the __init__ method) to support fallback_deployments
|
||||
return self._chat(deployment_name=self.deployment_id)
|
||||
else:
|
||||
return self._chat
|
||||
|
||||
@property
|
||||
def deployment_id(self):
|
||||
"""
|
||||
Returns the deployment ID for the OpenAI API.
|
||||
"""
|
||||
return get_settings().get("OPENAI.DEPLOYMENT_ID", None)
|
||||
@retry(exceptions=(APIError, Timeout, TryAgain, AttributeError, RateLimitError),
|
||||
tries=OPENAI_RETRIES, delay=2, backoff=2, jitter=(1, 3))
|
||||
async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2):
|
||||
try:
|
||||
messages=[SystemMessage(content=system), HumanMessage(content=user)]
|
||||
|
||||
# get a chat completion from the formatted messages
|
||||
resp = self.chat(messages, model=model, temperature=temperature)
|
||||
finish_reason="completed"
|
||||
return resp.content, finish_reason
|
||||
|
||||
except (Exception) as e:
|
||||
get_logger().error("Unknown error during OpenAI inference: ", e)
|
||||
raise e
|
@ -1,133 +0,0 @@
|
||||
import os
|
||||
|
||||
import boto3
|
||||
import litellm
|
||||
import openai
|
||||
from litellm import acompletion
|
||||
from openai.error import APIError, RateLimitError, Timeout, TryAgain
|
||||
from retry import retry
|
||||
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.log import get_logger
|
||||
|
||||
OPENAI_RETRIES = 5
|
||||
|
||||
|
||||
class LiteLLMAIHandler(BaseAiHandler):
|
||||
"""
|
||||
This class handles interactions with the OpenAI API for chat completions.
|
||||
It initializes the API key and other settings from a configuration file,
|
||||
and provides a method for performing chat completions using the OpenAI ChatCompletion API.
|
||||
"""
|
||||
|
||||
def __init__(self):
|
||||
"""
|
||||
Initializes the OpenAI API key and other settings from a configuration file.
|
||||
Raises a ValueError if the OpenAI key is missing.
|
||||
"""
|
||||
self.azure = False
|
||||
self.aws_bedrock_client = None
|
||||
|
||||
if get_settings().get("OPENAI.KEY", None):
|
||||
openai.api_key = get_settings().openai.key
|
||||
litellm.openai_key = get_settings().openai.key
|
||||
if get_settings().get("litellm.use_client"):
|
||||
litellm_token = get_settings().get("litellm.LITELLM_TOKEN")
|
||||
assert litellm_token, "LITELLM_TOKEN is required"
|
||||
os.environ["LITELLM_TOKEN"] = litellm_token
|
||||
litellm.use_client = True
|
||||
if get_settings().get("OPENAI.ORG", None):
|
||||
litellm.organization = get_settings().openai.org
|
||||
if get_settings().get("OPENAI.API_TYPE", None):
|
||||
if get_settings().openai.api_type == "azure":
|
||||
self.azure = True
|
||||
litellm.azure_key = get_settings().openai.key
|
||||
if get_settings().get("OPENAI.API_VERSION", None):
|
||||
litellm.api_version = get_settings().openai.api_version
|
||||
if get_settings().get("OPENAI.API_BASE", None):
|
||||
litellm.api_base = get_settings().openai.api_base
|
||||
if get_settings().get("ANTHROPIC.KEY", None):
|
||||
litellm.anthropic_key = get_settings().anthropic.key
|
||||
if get_settings().get("COHERE.KEY", None):
|
||||
litellm.cohere_key = get_settings().cohere.key
|
||||
if get_settings().get("REPLICATE.KEY", None):
|
||||
litellm.replicate_key = get_settings().replicate.key
|
||||
if get_settings().get("REPLICATE.KEY", None):
|
||||
litellm.replicate_key = get_settings().replicate.key
|
||||
if get_settings().get("HUGGINGFACE.KEY", None):
|
||||
litellm.huggingface_key = get_settings().huggingface.key
|
||||
if get_settings().get("HUGGINGFACE.API_BASE", None):
|
||||
litellm.api_base = get_settings().huggingface.api_base
|
||||
if get_settings().get("VERTEXAI.VERTEX_PROJECT", None):
|
||||
litellm.vertex_project = get_settings().vertexai.vertex_project
|
||||
litellm.vertex_location = get_settings().get(
|
||||
"VERTEXAI.VERTEX_LOCATION", None
|
||||
)
|
||||
if get_settings().get("AWS.BEDROCK_REGION", None):
|
||||
litellm.AmazonAnthropicConfig.max_tokens_to_sample = 2000
|
||||
self.aws_bedrock_client = boto3.client(
|
||||
service_name="bedrock-runtime",
|
||||
region_name=get_settings().aws.bedrock_region,
|
||||
)
|
||||
|
||||
@property
|
||||
def deployment_id(self):
|
||||
"""
|
||||
Returns the deployment ID for the OpenAI API.
|
||||
"""
|
||||
return get_settings().get("OPENAI.DEPLOYMENT_ID", None)
|
||||
|
||||
@retry(exceptions=(APIError, Timeout, TryAgain, AttributeError, RateLimitError),
|
||||
tries=OPENAI_RETRIES, delay=2, backoff=2, jitter=(1, 3))
|
||||
async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2):
|
||||
"""
|
||||
Performs a chat completion using the OpenAI ChatCompletion API.
|
||||
Retries in case of API errors or timeouts.
|
||||
|
||||
Args:
|
||||
model (str): The model to use for chat completion.
|
||||
temperature (float): The temperature parameter for chat completion.
|
||||
system (str): The system message for chat completion.
|
||||
user (str): The user message for chat completion.
|
||||
|
||||
Returns:
|
||||
tuple: A tuple containing the response and finish reason from the API.
|
||||
|
||||
Raises:
|
||||
TryAgain: If the API response is empty or there are no choices in the response.
|
||||
APIError: If there is an error during OpenAI inference.
|
||||
Timeout: If there is a timeout during OpenAI inference.
|
||||
TryAgain: If there is an attribute error during OpenAI inference.
|
||||
"""
|
||||
try:
|
||||
deployment_id = self.deployment_id
|
||||
if self.azure:
|
||||
model = 'azure/' + model
|
||||
messages = [{"role": "system", "content": system}, {"role": "user", "content": user}]
|
||||
kwargs = {
|
||||
"model": model,
|
||||
"deployment_id": deployment_id,
|
||||
"messages": messages,
|
||||
"temperature": temperature,
|
||||
"force_timeout": get_settings().config.ai_timeout,
|
||||
}
|
||||
if self.aws_bedrock_client:
|
||||
kwargs["aws_bedrock_client"] = self.aws_bedrock_client
|
||||
response = await acompletion(**kwargs)
|
||||
except (APIError, Timeout, TryAgain) as e:
|
||||
get_logger().error("Error during OpenAI inference: ", e)
|
||||
raise
|
||||
except (RateLimitError) as e:
|
||||
get_logger().error("Rate limit error during OpenAI inference: ", e)
|
||||
raise
|
||||
except (Exception) as e:
|
||||
get_logger().error("Unknown error during OpenAI inference: ", e)
|
||||
raise TryAgain from e
|
||||
if response is None or len(response["choices"]) == 0:
|
||||
raise TryAgain
|
||||
resp = response["choices"][0]['message']['content']
|
||||
finish_reason = response["choices"][0]["finish_reason"]
|
||||
usage = response.get("usage")
|
||||
get_logger().info("AI response", response=resp, messages=messages, finish_reason=finish_reason,
|
||||
model=model, usage=usage)
|
||||
return resp, finish_reason
|
@ -1,67 +0,0 @@
|
||||
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
|
||||
import openai
|
||||
from openai.error import APIError, RateLimitError, Timeout, TryAgain
|
||||
from retry import retry
|
||||
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.log import get_logger
|
||||
|
||||
OPENAI_RETRIES = 5
|
||||
|
||||
|
||||
class OpenAIHandler(BaseAiHandler):
|
||||
def __init__(self):
|
||||
# Initialize OpenAIHandler specific attributes here
|
||||
try:
|
||||
super().__init__()
|
||||
openai.api_key = get_settings().openai.key
|
||||
if get_settings().get("OPENAI.ORG", None):
|
||||
openai.organization = get_settings().openai.org
|
||||
if get_settings().get("OPENAI.API_TYPE", None):
|
||||
if get_settings().openai.api_type == "azure":
|
||||
self.azure = True
|
||||
openai.azure_key = get_settings().openai.key
|
||||
if get_settings().get("OPENAI.API_VERSION", None):
|
||||
openai.api_version = get_settings().openai.api_version
|
||||
if get_settings().get("OPENAI.API_BASE", None):
|
||||
openai.api_base = get_settings().openai.api_base
|
||||
|
||||
except AttributeError as e:
|
||||
raise ValueError("OpenAI key is required") from e
|
||||
@property
|
||||
def deployment_id(self):
|
||||
"""
|
||||
Returns the deployment ID for the OpenAI API.
|
||||
"""
|
||||
return get_settings().get("OPENAI.DEPLOYMENT_ID", None)
|
||||
|
||||
@retry(exceptions=(APIError, Timeout, TryAgain, AttributeError, RateLimitError),
|
||||
tries=OPENAI_RETRIES, delay=2, backoff=2, jitter=(1, 3))
|
||||
async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2):
|
||||
try:
|
||||
deployment_id = self.deployment_id
|
||||
get_logger().info("System: ", system)
|
||||
get_logger().info("User: ", user)
|
||||
messages = [{"role": "system", "content": system}, {"role": "user", "content": user}]
|
||||
|
||||
chat_completion = await openai.ChatCompletion.acreate(
|
||||
model=model,
|
||||
deployment_id=deployment_id,
|
||||
messages=messages,
|
||||
temperature=temperature,
|
||||
)
|
||||
resp = chat_completion["choices"][0]['message']['content']
|
||||
finish_reason = chat_completion["choices"][0]["finish_reason"]
|
||||
usage = chat_completion.get("usage")
|
||||
get_logger().info("AI response", response=resp, messages=messages, finish_reason=finish_reason,
|
||||
model=model, usage=usage)
|
||||
return resp, finish_reason
|
||||
except (APIError, Timeout, TryAgain) as e:
|
||||
get_logger().error("Error during OpenAI inference: ", e)
|
||||
raise
|
||||
except (RateLimitError) as e:
|
||||
get_logger().error("Rate limit error during OpenAI inference: ", e)
|
||||
raise
|
||||
except (Exception) as e:
|
||||
get_logger().error("Unknown error during OpenAI inference: ", e)
|
||||
raise TryAgain from e
|
@ -1,36 +0,0 @@
|
||||
import fnmatch
|
||||
import re
|
||||
|
||||
from pr_agent.config_loader import get_settings
|
||||
|
||||
def filter_ignored(files):
|
||||
"""
|
||||
Filter out files that match the ignore patterns.
|
||||
"""
|
||||
|
||||
try:
|
||||
# load regex patterns, and translate glob patterns to regex
|
||||
patterns = get_settings().ignore.regex
|
||||
if isinstance(patterns, str):
|
||||
patterns = [patterns]
|
||||
glob_setting = get_settings().ignore.glob
|
||||
if isinstance(glob_setting, str): # --ignore.glob=[.*utils.py], --ignore.glob=.*utils.py
|
||||
glob_setting = glob_setting.strip('[]').split(",")
|
||||
patterns += [fnmatch.translate(glob) for glob in glob_setting]
|
||||
|
||||
# compile all valid patterns
|
||||
compiled_patterns = []
|
||||
for r in patterns:
|
||||
try:
|
||||
compiled_patterns.append(re.compile(r))
|
||||
except re.error:
|
||||
pass
|
||||
|
||||
# keep filenames that _don't_ match the ignore regex
|
||||
for r in compiled_patterns:
|
||||
files = [f for f in files if (f.filename and not r.match(f.filename))]
|
||||
|
||||
except Exception as e:
|
||||
print(f"Could not filter file list: {e}")
|
||||
|
||||
return files
|
@ -1,10 +1,8 @@
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
import re
|
||||
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers.git_provider import EDIT_TYPE
|
||||
from pr_agent.log import get_logger
|
||||
|
||||
|
||||
def extend_patch(original_file_str, patch_str, num_lines) -> str:
|
||||
@ -42,16 +40,12 @@ def extend_patch(original_file_str, patch_str, num_lines) -> str:
|
||||
extended_patch_lines.extend(
|
||||
original_lines[start1 + size1 - 1:start1 + size1 - 1 + num_lines])
|
||||
|
||||
res = list(match.groups())
|
||||
for i in range(len(res)):
|
||||
if res[i] is None:
|
||||
res[i] = 0
|
||||
try:
|
||||
start1, size1, start2, size2 = map(int, res[:4])
|
||||
start1, size1, start2, size2 = map(int, match.groups()[:4])
|
||||
except: # '@@ -0,0 +1 @@' case
|
||||
start1, size1, size2 = map(int, res[:3])
|
||||
start1, size1, size2 = map(int, match.groups()[:3])
|
||||
start2 = 0
|
||||
section_header = res[4]
|
||||
section_header = match.groups()[4]
|
||||
extended_start1 = max(1, start1 - num_lines)
|
||||
extended_size1 = size1 + (start1 - extended_start1) + num_lines
|
||||
extended_start2 = max(1, start2 - num_lines)
|
||||
@ -65,7 +59,7 @@ def extend_patch(original_file_str, patch_str, num_lines) -> str:
|
||||
extended_patch_lines.append(line)
|
||||
except Exception as e:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().error(f"Failed to extend patch: {e}")
|
||||
logging.error(f"Failed to extend patch: {e}")
|
||||
return patch_str
|
||||
|
||||
# finish previous hunk
|
||||
@ -116,7 +110,7 @@ def omit_deletion_hunks(patch_lines) -> str:
|
||||
|
||||
|
||||
def handle_patch_deletions(patch: str, original_file_content_str: str,
|
||||
new_file_content_str: str, file_name: str, edit_type: EDIT_TYPE = EDIT_TYPE.UNKNOWN) -> str:
|
||||
new_file_content_str: str, file_name: str) -> str:
|
||||
"""
|
||||
Handle entire file or deletion patches.
|
||||
|
||||
@ -133,17 +127,17 @@ def handle_patch_deletions(patch: str, original_file_content_str: str,
|
||||
str: The modified patch with deletion hunks omitted.
|
||||
|
||||
"""
|
||||
if not new_file_content_str and edit_type != EDIT_TYPE.ADDED:
|
||||
if not new_file_content_str:
|
||||
# logic for handling deleted files - don't show patch, just show that the file was deleted
|
||||
if get_settings().config.verbosity_level > 0:
|
||||
get_logger().info(f"Processing file: {file_name}, minimizing deletion file")
|
||||
logging.info(f"Processing file: {file_name}, minimizing deletion file")
|
||||
patch = None # file was deleted
|
||||
else:
|
||||
patch_lines = patch.splitlines()
|
||||
patch_new = omit_deletion_hunks(patch_lines)
|
||||
if patch != patch_new:
|
||||
if get_settings().config.verbosity_level > 0:
|
||||
get_logger().info(f"Processing file: {file_name}, hunks were deleted")
|
||||
logging.info(f"Processing file: {file_name}, hunks were deleted")
|
||||
patch = patch_new
|
||||
return patch
|
||||
|
||||
@ -181,7 +175,7 @@ __old hunk__
|
||||
...
|
||||
"""
|
||||
|
||||
patch_with_lines_str = f"\n\n## file: '{file.filename.strip()}'\n"
|
||||
patch_with_lines_str = f"\n\n## {file.filename}\n"
|
||||
patch_lines = patch.splitlines()
|
||||
RE_HUNK_HEADER = re.compile(
|
||||
r"^@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@[ ]?(.*)")
|
||||
@ -202,26 +196,21 @@ __old hunk__
|
||||
if new_content_lines:
|
||||
if prev_header_line:
|
||||
patch_with_lines_str += f'\n{prev_header_line}\n'
|
||||
patch_with_lines_str = patch_with_lines_str.rstrip()+'\n__new hunk__\n'
|
||||
patch_with_lines_str += '__new hunk__\n'
|
||||
for i, line_new in enumerate(new_content_lines):
|
||||
patch_with_lines_str += f"{start2 + i} {line_new}\n"
|
||||
if old_content_lines:
|
||||
patch_with_lines_str = patch_with_lines_str.rstrip()+'\n__old hunk__\n'
|
||||
patch_with_lines_str += '__old hunk__\n'
|
||||
for line_old in old_content_lines:
|
||||
patch_with_lines_str += f"{line_old}\n"
|
||||
new_content_lines = []
|
||||
old_content_lines = []
|
||||
if match:
|
||||
prev_header_line = header_line
|
||||
|
||||
res = list(match.groups())
|
||||
for i in range(len(res)):
|
||||
if res[i] is None:
|
||||
res[i] = 0
|
||||
try:
|
||||
start1, size1, start2, size2 = map(int, res[:4])
|
||||
start1, size1, start2, size2 = map(int, match.groups()[:4])
|
||||
except: # '@@ -0,0 +1 @@' case
|
||||
start1, size1, size2 = map(int, res[:3])
|
||||
start1, size1, size2 = map(int, match.groups()[:3])
|
||||
start2 = 0
|
||||
|
||||
elif line.startswith('+'):
|
||||
@ -236,11 +225,11 @@ __old hunk__
|
||||
if match and new_content_lines:
|
||||
if new_content_lines:
|
||||
patch_with_lines_str += f'\n{header_line}\n'
|
||||
patch_with_lines_str = patch_with_lines_str.rstrip()+ '\n__new hunk__\n'
|
||||
patch_with_lines_str += '\n__new hunk__\n'
|
||||
for i, line_new in enumerate(new_content_lines):
|
||||
patch_with_lines_str += f"{start2 + i} {line_new}\n"
|
||||
if old_content_lines:
|
||||
patch_with_lines_str = patch_with_lines_str.rstrip() + '\n__old hunk__\n'
|
||||
patch_with_lines_str += '\n__old hunk__\n'
|
||||
for line_old in old_content_lines:
|
||||
patch_with_lines_str += f"{line_old}\n"
|
||||
|
||||
|
@ -3,7 +3,8 @@ from typing import Dict
|
||||
|
||||
from pr_agent.config_loader import get_settings
|
||||
|
||||
|
||||
language_extension_map_org = get_settings().language_extension_map_org
|
||||
language_extension_map = {k.lower(): v for k, v in language_extension_map_org.items()}
|
||||
|
||||
# Bad Extensions, source: https://github.com/EleutherAI/github-downloader/blob/345e7c4cbb9e0dc8a0615fd995a08bf9d73b3fe6/download_repo_text.py # noqa: E501
|
||||
bad_extensions = get_settings().bad_extensions.default
|
||||
@ -28,8 +29,6 @@ def sort_files_by_main_languages(languages: Dict, files: list):
|
||||
# languages_sorted = sorted(languages, key=lambda x: x[1], reverse=True)
|
||||
# get all extensions for the languages
|
||||
main_extensions = []
|
||||
language_extension_map_org = get_settings().language_extension_map_org
|
||||
language_extension_map = {k.lower(): v for k, v in language_extension_map_org.items()}
|
||||
for language in languages_sorted_list:
|
||||
if language.lower() in language_extension_map:
|
||||
main_extensions.append(language_extension_map[language.lower()])
|
||||
@ -43,11 +42,6 @@ def sort_files_by_main_languages(languages: Dict, files: list):
|
||||
files_sorted = []
|
||||
rest_files = {}
|
||||
|
||||
# if no languages detected, put all files in the "Other" category
|
||||
if not languages:
|
||||
files_sorted = [({"language": "Other", "files": list(files_filtered)})]
|
||||
return files_sorted
|
||||
|
||||
main_extensions_flat = []
|
||||
for ext in main_extensions:
|
||||
main_extensions_flat.extend(ext)
|
||||
|
@ -1,29 +1,27 @@
|
||||
from __future__ import annotations
|
||||
|
||||
import difflib
|
||||
import logging
|
||||
import re
|
||||
import traceback
|
||||
from typing import Any, Callable, List, Tuple
|
||||
|
||||
from github import RateLimitExceededException
|
||||
|
||||
from pr_agent.algo import MAX_TOKENS
|
||||
from pr_agent.algo.git_patch_processing import convert_to_hunks_with_lines_numbers, extend_patch, handle_patch_deletions
|
||||
from pr_agent.algo.language_handler import sort_files_by_main_languages
|
||||
from pr_agent.algo.file_filter import filter_ignored
|
||||
from pr_agent.algo.token_handler import TokenHandler
|
||||
from pr_agent.algo.utils import get_max_tokens
|
||||
from pr_agent.algo.token_handler import TokenHandler, get_token_encoder
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers.git_provider import FilePatchInfo, GitProvider, EDIT_TYPE
|
||||
from pr_agent.log import get_logger
|
||||
from pr_agent.git_providers.git_provider import FilePatchInfo, GitProvider
|
||||
|
||||
DELETED_FILES_ = "Deleted files:\n"
|
||||
|
||||
MORE_MODIFIED_FILES_ = "Additional modified files (insufficient token budget to process):\n"
|
||||
|
||||
ADDED_FILES_ = "Additional added files (insufficient token budget to process):\n"
|
||||
MORE_MODIFIED_FILES_ = "More modified files:\n"
|
||||
|
||||
OUTPUT_BUFFER_TOKENS_SOFT_THRESHOLD = 1000
|
||||
OUTPUT_BUFFER_TOKENS_HARD_THRESHOLD = 600
|
||||
PATCH_EXTRA_LINES = 3
|
||||
|
||||
def get_pr_diff(git_provider: GitProvider, token_handler: TokenHandler, model: str,
|
||||
add_line_numbers_to_hunks: bool = False, disable_extra_lines: bool = False) -> str:
|
||||
@ -46,37 +44,31 @@ def get_pr_diff(git_provider: GitProvider, token_handler: TokenHandler, model: s
|
||||
"""
|
||||
|
||||
if disable_extra_lines:
|
||||
global PATCH_EXTRA_LINES
|
||||
PATCH_EXTRA_LINES = 0
|
||||
else:
|
||||
PATCH_EXTRA_LINES = get_settings().config.patch_extra_lines
|
||||
|
||||
try:
|
||||
diff_files = git_provider.get_diff_files()
|
||||
except RateLimitExceededException as e:
|
||||
get_logger().error(f"Rate limit exceeded for git provider API. original message {e}")
|
||||
logging.error(f"Rate limit exceeded for git provider API. original message {e}")
|
||||
raise
|
||||
|
||||
diff_files = filter_ignored(diff_files)
|
||||
|
||||
# get pr languages
|
||||
pr_languages = sort_files_by_main_languages(git_provider.get_languages(), diff_files)
|
||||
|
||||
# generate a standard diff string, with patch extension
|
||||
patches_extended, total_tokens, patches_extended_tokens = pr_generate_extended_diff(
|
||||
pr_languages, token_handler, add_line_numbers_to_hunks, patch_extra_lines=PATCH_EXTRA_LINES)
|
||||
patches_extended, total_tokens, patches_extended_tokens = pr_generate_extended_diff(pr_languages, token_handler,
|
||||
add_line_numbers_to_hunks)
|
||||
|
||||
# if we are under the limit, return the full diff
|
||||
if total_tokens + OUTPUT_BUFFER_TOKENS_SOFT_THRESHOLD < get_max_tokens(model):
|
||||
if total_tokens + OUTPUT_BUFFER_TOKENS_SOFT_THRESHOLD < MAX_TOKENS[model]:
|
||||
return "\n".join(patches_extended)
|
||||
|
||||
# if we are over the limit, start pruning
|
||||
patches_compressed, modified_file_names, deleted_file_names, added_file_names = \
|
||||
patches_compressed, modified_file_names, deleted_file_names = \
|
||||
pr_generate_compressed_diff(pr_languages, token_handler, model, add_line_numbers_to_hunks)
|
||||
|
||||
final_diff = "\n".join(patches_compressed)
|
||||
if added_file_names:
|
||||
added_list_str = ADDED_FILES_ + "\n".join(added_file_names)
|
||||
final_diff = final_diff + "\n\n" + added_list_str
|
||||
if modified_file_names:
|
||||
modified_list_str = MORE_MODIFIED_FILES_ + "\n".join(modified_file_names)
|
||||
final_diff = final_diff + "\n\n" + modified_list_str
|
||||
@ -88,8 +80,7 @@ def get_pr_diff(git_provider: GitProvider, token_handler: TokenHandler, model: s
|
||||
|
||||
def pr_generate_extended_diff(pr_languages: list,
|
||||
token_handler: TokenHandler,
|
||||
add_line_numbers_to_hunks: bool,
|
||||
patch_extra_lines: int = 0) -> Tuple[list, int, list]:
|
||||
add_line_numbers_to_hunks: bool) -> Tuple[list, int, list]:
|
||||
"""
|
||||
Generate a standard diff string with patch extension, while counting the number of tokens used and applying diff
|
||||
minimization techniques if needed.
|
||||
@ -111,7 +102,7 @@ def pr_generate_extended_diff(pr_languages: list,
|
||||
continue
|
||||
|
||||
# extend each patch with extra lines of context
|
||||
extended_patch = extend_patch(original_file_content_str, patch, num_lines=patch_extra_lines)
|
||||
extended_patch = extend_patch(original_file_content_str, patch, num_lines=PATCH_EXTRA_LINES)
|
||||
full_extended_patch = f"\n\n## {file.filename}\n\n{extended_patch}\n"
|
||||
|
||||
if add_line_numbers_to_hunks:
|
||||
@ -127,7 +118,7 @@ def pr_generate_extended_diff(pr_languages: list,
|
||||
|
||||
|
||||
def pr_generate_compressed_diff(top_langs: list, token_handler: TokenHandler, model: str,
|
||||
convert_hunks_to_line_numbers: bool) -> Tuple[list, list, list, list]:
|
||||
convert_hunks_to_line_numbers: bool) -> Tuple[list, list, list]:
|
||||
"""
|
||||
Generate a compressed diff string for a pull request, using diff minimization techniques to reduce the number of
|
||||
tokens used.
|
||||
@ -153,7 +144,6 @@ def pr_generate_compressed_diff(top_langs: list, token_handler: TokenHandler, mo
|
||||
"""
|
||||
|
||||
patches = []
|
||||
added_files_list = []
|
||||
modified_files_list = []
|
||||
deleted_files_list = []
|
||||
# sort each one of the languages in top_langs by the number of tokens in the diff
|
||||
@ -171,7 +161,7 @@ def pr_generate_compressed_diff(top_langs: list, token_handler: TokenHandler, mo
|
||||
|
||||
# removing delete-only hunks
|
||||
patch = handle_patch_deletions(patch, original_file_content_str,
|
||||
new_file_content_str, file.filename, file.edit_type)
|
||||
new_file_content_str, file.filename)
|
||||
if patch is None:
|
||||
if not deleted_files_list:
|
||||
total_tokens += token_handler.count_tokens(DELETED_FILES_)
|
||||
@ -185,22 +175,17 @@ def pr_generate_compressed_diff(top_langs: list, token_handler: TokenHandler, mo
|
||||
new_patch_tokens = token_handler.count_tokens(patch)
|
||||
|
||||
# Hard Stop, no more tokens
|
||||
if total_tokens > get_max_tokens(model) - OUTPUT_BUFFER_TOKENS_HARD_THRESHOLD:
|
||||
get_logger().warning(f"File was fully skipped, no more tokens: {file.filename}.")
|
||||
if total_tokens > MAX_TOKENS[model] - OUTPUT_BUFFER_TOKENS_HARD_THRESHOLD:
|
||||
logging.warning(f"File was fully skipped, no more tokens: {file.filename}.")
|
||||
continue
|
||||
|
||||
# If the patch is too large, just show the file name
|
||||
if total_tokens + new_patch_tokens > get_max_tokens(model) - OUTPUT_BUFFER_TOKENS_SOFT_THRESHOLD:
|
||||
if total_tokens + new_patch_tokens > MAX_TOKENS[model] - OUTPUT_BUFFER_TOKENS_SOFT_THRESHOLD:
|
||||
# Current logic is to skip the patch if it's too large
|
||||
# TODO: Option for alternative logic to remove hunks from the patch to reduce the number of tokens
|
||||
# until we meet the requirements
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().warning(f"Patch too large, minimizing it, {file.filename}")
|
||||
if file.edit_type == EDIT_TYPE.ADDED:
|
||||
if not added_files_list:
|
||||
total_tokens += token_handler.count_tokens(ADDED_FILES_)
|
||||
added_files_list.append(file.filename)
|
||||
else:
|
||||
logging.warning(f"Patch too large, minimizing it, {file.filename}")
|
||||
if not modified_files_list:
|
||||
total_tokens += token_handler.count_tokens(MORE_MODIFIED_FILES_)
|
||||
modified_files_list.append(file.filename)
|
||||
@ -209,15 +194,15 @@ def pr_generate_compressed_diff(top_langs: list, token_handler: TokenHandler, mo
|
||||
|
||||
if patch:
|
||||
if not convert_hunks_to_line_numbers:
|
||||
patch_final = f"\n\n## file: '{file.filename.strip()}\n\n{patch.strip()}\n'"
|
||||
patch_final = f"## {file.filename}\n\n{patch}\n"
|
||||
else:
|
||||
patch_final = "\n\n" + patch.strip()
|
||||
patch_final = patch
|
||||
patches.append(patch_final)
|
||||
total_tokens += token_handler.count_tokens(patch_final)
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"Tokens: {total_tokens}, last filename: {file.filename}")
|
||||
logging.info(f"Tokens: {total_tokens}, last filename: {file.filename}")
|
||||
|
||||
return patches, modified_files_list, deleted_files_list, added_files_list
|
||||
return patches, modified_files_list, deleted_files_list
|
||||
|
||||
|
||||
async def retry_with_fallback_models(f: Callable):
|
||||
@ -226,15 +211,10 @@ async def retry_with_fallback_models(f: Callable):
|
||||
# try each (model, deployment_id) pair until one is successful, otherwise raise exception
|
||||
for i, (model, deployment_id) in enumerate(zip(all_models, all_deployments)):
|
||||
try:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().debug(
|
||||
f"Generating prediction with {model}"
|
||||
f"{(' from deployment ' + deployment_id) if deployment_id else ''}"
|
||||
)
|
||||
get_settings().set("openai.deployment_id", deployment_id)
|
||||
return await f(model)
|
||||
except Exception as e:
|
||||
get_logger().warning(
|
||||
logging.warning(
|
||||
f"Failed to generate prediction with {model}"
|
||||
f"{(' from deployment ' + deployment_id) if deployment_id else ''}: "
|
||||
f"{traceback.format_exc()}"
|
||||
@ -269,44 +249,36 @@ def _get_all_deployments(all_models: List[str]) -> List[str]:
|
||||
|
||||
def find_line_number_of_relevant_line_in_file(diff_files: List[FilePatchInfo],
|
||||
relevant_file: str,
|
||||
relevant_line_in_file: str,
|
||||
absolute_position: int = None) -> Tuple[int, int]:
|
||||
relevant_line_in_file: str) -> Tuple[int, int]:
|
||||
"""
|
||||
Find the line number and absolute position of a relevant line in a file.
|
||||
|
||||
Args:
|
||||
diff_files (List[FilePatchInfo]): A list of FilePatchInfo objects representing the patches of files.
|
||||
relevant_file (str): The name of the file where the relevant line is located.
|
||||
relevant_line_in_file (str): The content of the relevant line.
|
||||
|
||||
Returns:
|
||||
Tuple[int, int]: A tuple containing the line number and absolute position of the relevant line in the file.
|
||||
"""
|
||||
position = -1
|
||||
if absolute_position is None:
|
||||
absolute_position = -1
|
||||
re_hunk_header = re.compile(
|
||||
r"^@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@[ ]?(.*)")
|
||||
|
||||
for file in diff_files:
|
||||
if file.filename and (file.filename.strip() == relevant_file):
|
||||
if file.filename.strip() == relevant_file:
|
||||
patch = file.patch
|
||||
patch_lines = patch.splitlines()
|
||||
delta = 0
|
||||
start1, size1, start2, size2 = 0, 0, 0, 0
|
||||
if absolute_position != -1: # matching absolute to relative
|
||||
for i, line in enumerate(patch_lines):
|
||||
# new hunk
|
||||
if line.startswith('@@'):
|
||||
delta = 0
|
||||
match = re_hunk_header.match(line)
|
||||
start1, size1, start2, size2 = map(int, match.groups()[:4])
|
||||
elif not line.startswith('-'):
|
||||
delta += 1
|
||||
|
||||
#
|
||||
absolute_position_curr = start2 + delta - 1
|
||||
|
||||
if absolute_position_curr == absolute_position:
|
||||
position = i
|
||||
break
|
||||
else:
|
||||
# try to find the line in the patch using difflib, with some margin of error
|
||||
matches_difflib: list[str | Any] = difflib.get_close_matches(relevant_line_in_file,
|
||||
patch_lines, n=3, cutoff=0.93)
|
||||
if len(matches_difflib) == 1 and matches_difflib[0].startswith('+'):
|
||||
relevant_line_in_file = matches_difflib[0]
|
||||
|
||||
|
||||
delta = 0
|
||||
start1, size1, start2, size2 = 0, 0, 0, 0
|
||||
for i, line in enumerate(patch_lines):
|
||||
if line.startswith('@@'):
|
||||
delta = 0
|
||||
@ -339,6 +311,35 @@ def find_line_number_of_relevant_line_in_file(diff_files: List[FilePatchInfo],
|
||||
return position, absolute_position
|
||||
|
||||
|
||||
def clip_tokens(text: str, max_tokens: int) -> str:
|
||||
"""
|
||||
Clip the number of tokens in a string to a maximum number of tokens.
|
||||
|
||||
Args:
|
||||
text (str): The string to clip.
|
||||
max_tokens (int): The maximum number of tokens allowed in the string.
|
||||
|
||||
Returns:
|
||||
str: The clipped string.
|
||||
"""
|
||||
if not text:
|
||||
return text
|
||||
|
||||
try:
|
||||
encoder = get_token_encoder()
|
||||
num_input_tokens = len(encoder.encode(text))
|
||||
if num_input_tokens <= max_tokens:
|
||||
return text
|
||||
num_chars = len(text)
|
||||
chars_per_token = num_chars / num_input_tokens
|
||||
num_output_chars = int(chars_per_token * max_tokens)
|
||||
clipped_text = text[:num_output_chars]
|
||||
return clipped_text
|
||||
except Exception as e:
|
||||
logging.warning(f"Failed to clip tokens: {e}")
|
||||
return text
|
||||
|
||||
|
||||
def get_pr_multi_diffs(git_provider: GitProvider,
|
||||
token_handler: TokenHandler,
|
||||
model: str,
|
||||
@ -362,11 +363,9 @@ def get_pr_multi_diffs(git_provider: GitProvider,
|
||||
try:
|
||||
diff_files = git_provider.get_diff_files()
|
||||
except RateLimitExceededException as e:
|
||||
get_logger().error(f"Rate limit exceeded for git provider API. original message {e}")
|
||||
logging.error(f"Rate limit exceeded for git provider API. original message {e}")
|
||||
raise
|
||||
|
||||
diff_files = filter_ignored(diff_files)
|
||||
|
||||
# Sort files by main language
|
||||
pr_languages = sort_files_by_main_languages(git_provider.get_languages(), diff_files)
|
||||
|
||||
@ -375,13 +374,6 @@ def get_pr_multi_diffs(git_provider: GitProvider,
|
||||
for lang in pr_languages:
|
||||
sorted_files.extend(sorted(lang['files'], key=lambda x: x.tokens, reverse=True))
|
||||
|
||||
|
||||
# try first a single run with standard diff string, with patch extension, and no deletions
|
||||
patches_extended, total_tokens, patches_extended_tokens = pr_generate_extended_diff(
|
||||
pr_languages, token_handler, add_line_numbers_to_hunks=True)
|
||||
if total_tokens + OUTPUT_BUFFER_TOKENS_SOFT_THRESHOLD < get_max_tokens(model):
|
||||
return ["\n".join(patches_extended)]
|
||||
|
||||
patches = []
|
||||
final_diff_list = []
|
||||
total_tokens = token_handler.prompt_tokens
|
||||
@ -389,7 +381,7 @@ def get_pr_multi_diffs(git_provider: GitProvider,
|
||||
for file in sorted_files:
|
||||
if call_number > max_calls:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"Reached max calls ({max_calls})")
|
||||
logging.info(f"Reached max calls ({max_calls})")
|
||||
break
|
||||
|
||||
original_file_content_str = file.base_file
|
||||
@ -399,26 +391,26 @@ def get_pr_multi_diffs(git_provider: GitProvider,
|
||||
continue
|
||||
|
||||
# Remove delete-only hunks
|
||||
patch = handle_patch_deletions(patch, original_file_content_str, new_file_content_str, file.filename, file.edit_type)
|
||||
patch = handle_patch_deletions(patch, original_file_content_str, new_file_content_str, file.filename)
|
||||
if patch is None:
|
||||
continue
|
||||
|
||||
patch = convert_to_hunks_with_lines_numbers(patch, file)
|
||||
new_patch_tokens = token_handler.count_tokens(patch)
|
||||
if patch and (total_tokens + new_patch_tokens > get_max_tokens(model) - OUTPUT_BUFFER_TOKENS_SOFT_THRESHOLD):
|
||||
if patch and (total_tokens + new_patch_tokens > MAX_TOKENS[model] - OUTPUT_BUFFER_TOKENS_SOFT_THRESHOLD):
|
||||
final_diff = "\n".join(patches)
|
||||
final_diff_list.append(final_diff)
|
||||
patches = []
|
||||
total_tokens = token_handler.prompt_tokens
|
||||
call_number += 1
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"Call number: {call_number}")
|
||||
logging.info(f"Call number: {call_number}")
|
||||
|
||||
if patch:
|
||||
patches.append(patch)
|
||||
total_tokens += new_patch_tokens
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"Tokens: {total_tokens}, last filename: {file.filename}")
|
||||
logging.info(f"Tokens: {total_tokens}, last filename: {file.filename}")
|
||||
|
||||
# Add the last chunk
|
||||
if patches:
|
||||
|
@ -21,7 +21,7 @@ class TokenHandler:
|
||||
method.
|
||||
"""
|
||||
|
||||
def __init__(self, pr=None, vars: dict = {}, system="", user=""):
|
||||
def __init__(self, pr, vars: dict, system, user):
|
||||
"""
|
||||
Initializes the TokenHandler object.
|
||||
|
||||
@ -32,7 +32,6 @@ class TokenHandler:
|
||||
- user: The user string.
|
||||
"""
|
||||
self.encoder = get_token_encoder()
|
||||
if pr is not None:
|
||||
self.prompt_tokens = self._get_system_user_tokens(pr, self.encoder, vars, system, user)
|
||||
|
||||
def _get_system_user_tokens(self, pr, encoder, vars: dict, system, user):
|
||||
|
@ -2,6 +2,7 @@ from __future__ import annotations
|
||||
|
||||
import difflib
|
||||
import json
|
||||
import logging
|
||||
import re
|
||||
import textwrap
|
||||
from datetime import datetime
|
||||
@ -9,11 +10,7 @@ from typing import Any, List
|
||||
|
||||
import yaml
|
||||
from starlette_context import context
|
||||
|
||||
from pr_agent.algo import MAX_TOKENS
|
||||
from pr_agent.algo.token_handler import get_token_encoder
|
||||
from pr_agent.config_loader import get_settings, global_settings
|
||||
from pr_agent.log import get_logger
|
||||
|
||||
|
||||
def get_setting(key: str) -> Any:
|
||||
@ -23,7 +20,7 @@ def get_setting(key: str) -> Any:
|
||||
except Exception:
|
||||
return global_settings.get(key, None)
|
||||
|
||||
def convert_to_markdown(output_data: dict, gfm_supported: bool=True) -> str:
|
||||
def convert_to_markdown(output_data: dict) -> str:
|
||||
"""
|
||||
Convert a dictionary of data into markdown format.
|
||||
Args:
|
||||
@ -45,48 +42,34 @@ def convert_to_markdown(output_data: dict, gfm_supported: bool=True) -> str:
"General suggestions": "💡",
"Insights from user's answers": "📝",
"Code feedback": "🤖",
"Estimated effort to review [1-5]": "⏱️",
}

for key, value in output_data.items():
if value is None or value == '' or value == {} or value == []:
if value is None or value == '' or value == {}:
continue
if isinstance(value, dict):
markdown_text += f"## {key}\n\n"
markdown_text += convert_to_markdown(value, gfm_supported)
markdown_text += convert_to_markdown(value)
elif isinstance(value, list):
emoji = emojis.get(key, "")
if key.lower() == 'code feedback':
if gfm_supported:
markdown_text += f"\n\n"
markdown_text += f"<details><summary> <strong>{ emoji } Code feedback:</strong></summary>"
else:
markdown_text += f"\n\n**{emoji} Code feedback:**\n\n"
markdown_text += f"\n\n- **<details><summary> { emoji } Code feedback:**</summary>\n\n"
else:
markdown_text += f"- {emoji} **{key}:**\n\n"
for i, item in enumerate(value):
for item in value:
if isinstance(item, dict) and key.lower() == 'code feedback':
markdown_text += parse_code_suggestion(item, i, gfm_supported)
markdown_text += parse_code_suggestion(item)
elif item:
markdown_text += f" - {item}\n"
if key.lower() == 'code feedback':
if gfm_supported:
markdown_text += "</details>\n\n"
else:
markdown_text += "\n\n"
elif value != 'n/a':
emoji = emojis.get(key, "")
if key.lower() == 'general suggestions':
if gfm_supported:
markdown_text += f"\n\n<strong>{emoji} General suggestions:</strong> {value}\n"
else:
markdown_text += f"{emoji} **General suggestions:** {value}\n"
else:
markdown_text += f"- {emoji} **{key}:** {value}\n"
return markdown_text
def parse_code_suggestion(code_suggestions: dict, i: int = 0, gfm_supported: bool = True) -> str:
def parse_code_suggestion(code_suggestions: dict) -> str:
"""
Convert a dictionary of data into markdown format.

@ -97,35 +80,6 @@ def parse_code_suggestion(code_suggestions: dict, i: int = 0, gfm_supported: boo
str: A string containing the markdown formatted text generated from the input dictionary.
"""
markdown_text = ""
if gfm_supported and 'relevant line' in code_suggestions:
if i == 0:
markdown_text += "<hr>"
markdown_text += '<table>'
for sub_key, sub_value in code_suggestions.items():
try:
if sub_key.lower() == 'relevant file':
relevant_file = sub_value.strip('`').strip('"').strip("'")
markdown_text += f"<tr><td>{sub_key}</td><td>{relevant_file}</td></tr>"
# continue
elif sub_key.lower() == 'suggestion':
markdown_text += (f"<tr><td>{sub_key} </td>"
f"<td><br>\n\n**{sub_value.strip()}**\n<br></td></tr>")
elif sub_key.lower() == 'relevant line':
markdown_text += f"<tr><td>relevant line</td>"
sub_value_list = sub_value.split('](')
relevant_line = sub_value_list[0].lstrip('`').lstrip('[')
if len(sub_value_list) > 1:
link = sub_value_list[1].rstrip(')').strip('`')
markdown_text += f"<td><a href='{link}'>{relevant_line}</a></td>"
else:
markdown_text += f"<td>{relevant_line}</td>"
markdown_text += "</tr>"
except Exception as e:
get_logger().exception(f"Failed to parse code suggestion: {e}")
pass
markdown_text += '</table>'
markdown_text += "<hr>"
else:
for sub_key, sub_value in code_suggestions.items():
if isinstance(sub_value, dict): # "code example"
markdown_text += f" - **{sub_key}:**\n"

@ -135,13 +89,9 @@ def parse_code_suggestion(code_suggestions: dict, i: int = 0, gfm_supported: boo
markdown_text += f" - **{code_key}:**\n{code_str_indented}\n"
else:
if "relevant file" in sub_key.lower():
markdown_text += f"\n - **{sub_key}:** {sub_value} \n"
markdown_text += f"\n - **{sub_key}:** {sub_value}\n"
else:
markdown_text += f" **{sub_key}:** {sub_value} \n"
if not gfm_supported:
if "relevant line" not in sub_key.lower(): # nicer presentation
# markdown_text = markdown_text.rstrip('\n') + "\\\n" # works for gitlab
markdown_text = markdown_text.rstrip('\n') + " \n" # works for gitlab and bitbucker
markdown_text += f" **{sub_key}:** {sub_value}\n"

markdown_text += "\n"
return markdown_text
@ -199,7 +149,7 @@ def try_fix_json(review, max_iter=10, code_suggestions=False):
iter_count += 1

if not valid_json:
get_logger().error("Unable to decode JSON response from AI")
logging.error("Unable to decode JSON response from AI")
data = {}

return data

@ -270,7 +220,7 @@ def load_large_diff(filename, new_file_content_str: str, original_file_content_s
diff = difflib.unified_diff(original_file_content_str.splitlines(keepends=True),
new_file_content_str.splitlines(keepends=True))
if get_settings().config.verbosity_level >= 2:
get_logger().warning(f"File was modified, but no patch was found. Manually creating patch: {filename}.")
logging.warning(f"File was modified, but no patch was found. Manually creating patch: {filename}.")
patch = ''.join(diff)
except Exception:
pass

@ -302,12 +252,12 @@ def update_settings_from_args(args: List[str]) -> List[str]:
vals = arg.split('=', 1)
if len(vals) != 2:
if len(vals) > 2: # --extended is a valid argument
get_logger().error(f'Invalid argument format: {arg}')
logging.error(f'Invalid argument format: {arg}')
other_args.append(arg)
continue
key, value = _fix_key_value(*vals)
get_settings().set(key, value)
get_logger().info(f'Updated setting {key} to: "{value}"')
logging.info(f'Updated setting {key} to: "{value}"')
else:
other_args.append(arg)
return other_args
@ -319,168 +269,28 @@ def _fix_key_value(key: str, value: str):
|
||||
try:
|
||||
value = yaml.safe_load(value)
|
||||
except Exception as e:
|
||||
get_logger().debug(f"Failed to parse YAML for config override {key}={value}", exc_info=e)
|
||||
logging.error(f"Failed to parse YAML for config override {key}={value}", exc_info=e)
|
||||
return key, value
|
||||
|
||||
|
||||
def load_yaml(response_text: str, keys_fix_yaml: List[str] = []) -> dict:
|
||||
response_text = response_text.removeprefix('```yaml').rstrip('`')
|
||||
def load_yaml(review_text: str) -> dict:
|
||||
review_text = review_text.removeprefix('```yaml').rstrip('`')
|
||||
try:
|
||||
data = yaml.safe_load(response_text)
|
||||
data = yaml.safe_load(review_text)
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed to parse AI prediction: {e}")
|
||||
data = try_fix_yaml(response_text, keys_fix_yaml=keys_fix_yaml)
|
||||
logging.error(f"Failed to parse AI prediction: {e}")
|
||||
data = try_fix_yaml(review_text)
|
||||
return data
|
||||
|
||||
|
||||
def try_fix_yaml(response_text: str, keys_fix_yaml: List[str] = []) -> dict:
|
||||
response_text_lines = response_text.split('\n')
|
||||
|
||||
keys = ['relevant line:', 'suggestion content:', 'relevant file:', 'existing code:', 'improved code:']
|
||||
keys = keys + keys_fix_yaml
|
||||
# first fallback - try to convert 'relevant line: ...' to relevant line: |-\n ...'
|
||||
response_text_lines_copy = response_text_lines.copy()
|
||||
for i in range(0, len(response_text_lines_copy)):
|
||||
for key in keys:
|
||||
if key in response_text_lines_copy[i] and not '|-' in response_text_lines_copy[i]:
|
||||
response_text_lines_copy[i] = response_text_lines_copy[i].replace(f'{key}',
|
||||
f'{key} |-\n ')
|
||||
try:
|
||||
data = yaml.safe_load('\n'.join(response_text_lines_copy))
|
||||
get_logger().info(f"Successfully parsed AI prediction after adding |-\n")
|
||||
return data
|
||||
except:
|
||||
get_logger().info(f"Failed to parse AI prediction after adding |-\n")
|
||||
|
||||
# second fallback - try to extract only range from first ```yaml to ````
|
||||
snippet_pattern = r'```(yaml)?[\s\S]*?```'
|
||||
snippet = re.search(snippet_pattern, '\n'.join(response_text_lines_copy))
|
||||
if snippet:
|
||||
snippet_text = snippet.group()
|
||||
try:
|
||||
data = yaml.safe_load(snippet_text.removeprefix('```yaml').rstrip('`'))
|
||||
get_logger().info(f"Successfully parsed AI prediction after extracting yaml snippet")
|
||||
return data
|
||||
except:
|
||||
pass
|
||||
|
||||
# third fallback - try to remove leading and trailing curly brackets
|
||||
response_text_copy = response_text.strip().rstrip().removeprefix('{').removesuffix('}')
|
||||
try:
|
||||
data = yaml.safe_load(response_text_copy,)
|
||||
get_logger().info(f"Successfully parsed AI prediction after removing curly brackets")
|
||||
return data
|
||||
except:
|
||||
pass
|
||||
|
||||
# fourth fallback - try to remove last lines
|
||||
def try_fix_yaml(review_text: str) -> dict:
|
||||
review_text_lines = review_text.split('\n')
|
||||
data = {}
|
||||
for i in range(1, len(response_text_lines)):
|
||||
response_text_lines_tmp = '\n'.join(response_text_lines[:-i])
|
||||
for i in range(1, len(review_text_lines)):
|
||||
review_text_lines_tmp = '\n'.join(review_text_lines[:-i])
|
||||
try:
|
||||
data = yaml.safe_load(response_text_lines_tmp,)
|
||||
get_logger().info(f"Successfully parsed AI prediction after removing {i} lines")
|
||||
return data
|
||||
data = yaml.load(review_text_lines_tmp, Loader=yaml.SafeLoader)
|
||||
logging.info(f"Successfully parsed AI prediction after removing {i} lines")
|
||||
break
|
||||
except:
|
||||
pass
|
||||
|
||||
|
||||
def set_custom_labels(variables, git_provider=None):
|
||||
if not get_settings().config.enable_custom_labels:
|
||||
return
|
||||
|
||||
labels = get_settings().custom_labels
|
||||
if not labels:
|
||||
# set default labels
|
||||
labels = ['Bug fix', 'Tests', 'Bug fix with tests', 'Enhancement', 'Documentation', 'Other']
|
||||
labels_list = "\n - ".join(labels) if labels else ""
|
||||
labels_list = f" - {labels_list}" if labels_list else ""
|
||||
variables["custom_labels"] = labels_list
|
||||
return
|
||||
|
||||
# Set custom labels
|
||||
variables["custom_labels_class"] = "class Label(str, Enum):"
|
||||
counter = 0
|
||||
labels_minimal_to_labels_dict = {}
|
||||
for k, v in labels.items():
|
||||
description = "'" + v['description'].strip('\n').replace('\n', '\\n') + "'"
|
||||
# variables["custom_labels_class"] += f"\n {k.lower().replace(' ', '_')} = '{k}' # {description}"
|
||||
variables["custom_labels_class"] += f"\n {k.lower().replace(' ', '_')} = {description}"
|
||||
labels_minimal_to_labels_dict[k.lower().replace(' ', '_')] = k
|
||||
counter += 1
|
||||
variables["labels_minimal_to_labels_dict"] = labels_minimal_to_labels_dict
|
||||
|
||||
def get_user_labels(current_labels: List[str] = None):
|
||||
"""
|
||||
Only keep labels that has been added by the user
|
||||
"""
|
||||
try:
|
||||
if current_labels is None:
|
||||
current_labels = []
|
||||
user_labels = []
|
||||
for label in current_labels:
|
||||
if label.lower() in ['bug fix', 'tests', 'enhancement', 'documentation', 'other']:
|
||||
continue
|
||||
if get_settings().config.enable_custom_labels:
|
||||
if label in get_settings().custom_labels:
|
||||
continue
|
||||
user_labels.append(label)
|
||||
if user_labels:
|
||||
get_logger().info(f"Keeping user labels: {user_labels}")
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Failed to get user labels: {e}")
|
||||
return current_labels
|
||||
return user_labels
|
||||
|
||||
|
||||
def get_max_tokens(model):
|
||||
settings = get_settings()
|
||||
if model in MAX_TOKENS:
|
||||
max_tokens_model = MAX_TOKENS[model]
|
||||
else:
|
||||
raise Exception(f"MAX_TOKENS must be set for model {model} in ./pr_agent/algo/__init__.py")
|
||||
|
||||
if settings.config.max_model_tokens:
|
||||
max_tokens_model = min(settings.config.max_model_tokens, max_tokens_model)
|
||||
# get_logger().debug(f"limiting max tokens to {max_tokens_model}")
|
||||
return max_tokens_model
|
||||
|
||||
|
||||
def clip_tokens(text: str, max_tokens: int, add_three_dots=True) -> str:
|
||||
"""
|
||||
Clip the number of tokens in a string to a maximum number of tokens.
|
||||
|
||||
Args:
|
||||
text (str): The string to clip.
|
||||
max_tokens (int): The maximum number of tokens allowed in the string.
|
||||
add_three_dots (bool, optional): A boolean indicating whether to add three dots at the end of the clipped
|
||||
Returns:
|
||||
str: The clipped string.
|
||||
"""
|
||||
if not text:
|
||||
return text
|
||||
|
||||
try:
|
||||
encoder = get_token_encoder()
|
||||
num_input_tokens = len(encoder.encode(text))
|
||||
if num_input_tokens <= max_tokens:
|
||||
return text
|
||||
num_chars = len(text)
|
||||
chars_per_token = num_chars / num_input_tokens
|
||||
num_output_chars = int(chars_per_token * max_tokens)
|
||||
clipped_text = text[:num_output_chars]
|
||||
if add_three_dots:
|
||||
clipped_text += "...(truncated)"
|
||||
return clipped_text
|
||||
except Exception as e:
|
||||
get_logger().warning(f"Failed to clip tokens: {e}")
|
||||
return text
|
||||
|
||||
def replace_code_tags(text):
|
||||
"""
|
||||
Replace odd instances of ` with <code> and even instances of ` with </code>
|
||||
"""
|
||||
parts = text.split('`')
|
||||
for i in range(1, len(parts), 2):
|
||||
parts[i] = '<code>' + parts[i] + '</code>'
|
||||
return ''.join(parts)
|
||||
return data
|
||||
|
@ -1,13 +1,10 @@
import argparse
import asyncio
import logging
import os

from pr_agent.agent.pr_agent import PRAgent, commands
from pr_agent.config_loader import get_settings
from pr_agent.log import setup_logger

setup_logger()


def run(inargs=None):

@ -20,46 +17,34 @@ For example:
- cli.py --pr_url=... improve
- cli.py --pr_url=... ask "write me a poem about this PR"
- cli.py --pr_url=... reflect
- cli.py --issue_url=... similar_issue

Supported commands:
- review / review_pr - Add a review that includes a summary of the PR and specific suggestions for improvement.
-review / review_pr - Add a review that includes a summary of the PR and specific suggestions for improvement.

- ask / ask_question [question] - Ask a question about the PR.
-ask / ask_question [question] - Ask a question about the PR.

- describe / describe_pr - Modify the PR title and description based on the PR's contents.
-describe / describe_pr - Modify the PR title and description based on the PR's contents.

- improve / improve_code - Suggest improvements to the code in the PR as pull request comments ready to commit.
-improve / improve_code - Suggest improvements to the code in the PR as pull request comments ready to commit.
Extended mode ('improve --extended') employs several calls, and provides a more thorough feedback

- reflect - Ask the PR author questions about the PR.
-reflect - Ask the PR author questions about the PR.

- update_changelog - Update the changelog based on the PR's contents.

- add_docs

- generate_labels
-update_changelog - Update the changelog based on the PR's contents.


Configuration:
To edit any configuration parameter from 'configuration.toml', just add -config_path=<value>.
For example: 'python cli.py --pr_url=... review --pr_reviewer.extra_instructions="focus on the file: ..."'
""")
parser.add_argument('--pr_url', type=str, help='The URL of the PR to review', default=None)
parser.add_argument('--issue_url', type=str, help='The URL of the Issue to review', default=None)
parser.add_argument('--pr_url', type=str, help='The URL of the PR to review', required=True)
parser.add_argument('command', type=str, help='The', choices=commands, default='review')
parser.add_argument('rest', nargs=argparse.REMAINDER, default=[])
args = parser.parse_args(inargs)
if not args.pr_url and not args.issue_url:
parser.print_help()
return

logging.basicConfig(level=os.environ.get("LOGLEVEL", "INFO"))
command = args.command.lower()
get_settings().set("CONFIG.CLI_MODE", True)
if args.issue_url:
result = asyncio.run(PRAgent().handle_request(args.issue_url, [command] + args.rest))
else:
result = asyncio.run(PRAgent().handle_request(args.pr_url, [command] + args.rest))
result = asyncio.run(PRAgent().handle_request(args.pr_url, command + " " + " ".join(args.rest)))
if not result:
parser.print_help()
@ -14,7 +14,6 @@ global_settings = Dynaconf(
settings_files=[join(current_dir, f) for f in [
"settings/.secrets.toml",
"settings/configuration.toml",
"settings/ignore.toml",
"settings/language_extensions.toml",
"settings/pr_reviewer_prompts.toml",
"settings/pr_questions_prompts.toml",

@ -23,10 +22,7 @@ global_settings = Dynaconf(
"settings/pr_sort_code_suggestions_prompts.toml",
"settings/pr_information_from_user_prompts.toml",
"settings/pr_update_changelog_prompts.toml",
"settings/pr_custom_labels.toml",
"settings/pr_add_docs.toml",
"settings_prod/.secrets.toml",
"settings/custom_labels.toml"
"settings_prod/.secrets.toml"
]]
)
@ -1,6 +1,5 @@
from pr_agent.config_loader import get_settings
from pr_agent.git_providers.bitbucket_provider import BitbucketProvider
from pr_agent.git_providers.bitbucket_server_provider import BitbucketServerProvider
from pr_agent.git_providers.codecommit_provider import CodeCommitProvider
from pr_agent.git_providers.github_provider import GithubProvider
from pr_agent.git_providers.gitlab_provider import GitLabProvider

@ -13,7 +12,6 @@ _GIT_PROVIDERS = {
'github': GithubProvider,
'gitlab': GitLabProvider,
'bitbucket': BitbucketProvider,
'bitbucket_server': BitbucketServerProvider,
'azure': AzureDevopsProvider,
'codecommit': CodeCommitProvider,
'local' : LocalGitProvider,
@ -1,40 +1,29 @@
import os
import json
import logging
from typing import Optional, Tuple
from urllib.parse import urlparse

from ..log import get_logger
from ..algo.language_handler import is_valid_file
from ..algo.utils import clip_tokens, load_large_diff
from ..config_loader import get_settings
from .git_provider import EDIT_TYPE, FilePatchInfo, GitProvider
import os

AZURE_DEVOPS_AVAILABLE = True

try:
# noinspection PyUnresolvedReferences
from msrest.authentication import BasicAuthentication
# noinspection PyUnresolvedReferences
|
||||
from azure.devops.connection import Connection
|
||||
# noinspection PyUnresolvedReferences
|
||||
from azure.devops.v7_1.git.models import (
|
||||
Comment,
|
||||
CommentThread,
|
||||
GitVersionDescriptor,
|
||||
GitPullRequest,
|
||||
)
|
||||
from azure.devops.v7_1.git.models import Comment, CommentThread, GitVersionDescriptor, GitPullRequest
|
||||
except ImportError:
|
||||
AZURE_DEVOPS_AVAILABLE = False
|
||||
|
||||
from ..algo.pr_processing import clip_tokens
|
||||
from ..config_loader import get_settings
|
||||
from ..algo.utils import load_large_diff
|
||||
from ..algo.language_handler import is_valid_file
|
||||
from .git_provider import EDIT_TYPE, FilePatchInfo
|
||||
|
||||
class AzureDevopsProvider(GitProvider):
|
||||
|
||||
def __init__(
|
||||
self, pr_url: Optional[str] = None, incremental: Optional[bool] = False
|
||||
):
|
||||
class AzureDevopsProvider:
|
||||
def __init__(self, pr_url: Optional[str] = None, incremental: Optional[bool] = False):
|
||||
if not AZURE_DEVOPS_AVAILABLE:
|
||||
raise ImportError(
|
||||
"Azure DevOps provider is not available. Please install the required dependencies."
|
||||
)
|
||||
raise ImportError("Azure DevOps provider is not available. Please install the required dependencies.")
|
||||
|
||||
self.azure_devops_client = self._get_azure_devops_client()
|
||||
|
||||
@ -48,123 +37,8 @@ class AzureDevopsProvider(GitProvider):
|
||||
if pr_url:
|
||||
self.set_pr(pr_url)
|
||||
|
||||
def publish_code_suggestions(self, code_suggestions: list) -> bool:
|
||||
"""
|
||||
Publishes code suggestions as comments on the PR.
|
||||
"""
|
||||
post_parameters_list = []
|
||||
for suggestion in code_suggestions:
|
||||
body = suggestion['body']
|
||||
relevant_file = suggestion['relevant_file']
|
||||
relevant_lines_start = suggestion['relevant_lines_start']
|
||||
relevant_lines_end = suggestion['relevant_lines_end']
|
||||
|
||||
if not relevant_lines_start or relevant_lines_start == -1:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().exception(
|
||||
f"Failed to publish code suggestion, relevant_lines_start is {relevant_lines_start}")
|
||||
continue
|
||||
|
||||
if relevant_lines_end < relevant_lines_start:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().exception(f"Failed to publish code suggestion, "
|
||||
f"relevant_lines_end is {relevant_lines_end} and "
|
||||
f"relevant_lines_start is {relevant_lines_start}")
|
||||
continue
|
||||
|
||||
if relevant_lines_end > relevant_lines_start:
|
||||
post_parameters = {
|
||||
"body": body,
|
||||
"path": relevant_file,
|
||||
"line": relevant_lines_end,
|
||||
"start_line": relevant_lines_start,
|
||||
"start_side": "RIGHT",
|
||||
}
|
||||
else: # API is different for single line comments
|
||||
post_parameters = {
|
||||
"body": body,
|
||||
"path": relevant_file,
|
||||
"line": relevant_lines_start,
|
||||
"side": "RIGHT",
|
||||
}
|
||||
post_parameters_list.append(post_parameters)
|
||||
|
||||
try:
|
||||
for post_parameters in post_parameters_list:
|
||||
comment = Comment(content=post_parameters["body"], comment_type=1)
|
||||
thread = CommentThread(comments=[comment],
|
||||
thread_context={
|
||||
"filePath": post_parameters["path"],
|
||||
"rightFileStart": {
|
||||
"line": post_parameters["start_line"],
|
||||
"offset": 1,
|
||||
},
|
||||
"rightFileEnd": {
|
||||
"line": post_parameters["line"],
|
||||
"offset": 1,
|
||||
},
|
||||
})
|
||||
self.azure_devops_client.create_thread(
|
||||
comment_thread=thread,
|
||||
project=self.workspace_slug,
|
||||
repository_id=self.repo_slug,
|
||||
pull_request_id=self.pr_num
|
||||
)
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(
|
||||
f"Published code suggestion on {self.pr_num} at {post_parameters['path']}"
|
||||
)
|
||||
return True
|
||||
except Exception as e:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().error(f"Failed to publish code suggestion, error: {e}")
|
||||
return False
|
||||
|
||||
def get_pr_description_full(self) -> str:
|
||||
return self.pr.description
|
||||
|
||||
def remove_comment(self, comment):
|
||||
try:
|
||||
self.azure_devops_client.delete_comment(
|
||||
repository_id=self.repo_slug,
|
||||
pull_request_id=self.pr_num,
|
||||
thread_id=comment["thread_id"],
|
||||
comment_id=comment["comment_id"],
|
||||
project=self.workspace_slug,
|
||||
)
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Failed to remove comment, error: {e}")
|
||||
|
||||
def publish_labels(self, pr_types):
|
||||
try:
|
||||
for pr_type in pr_types:
|
||||
self.azure_devops_client.create_pull_request_label(
|
||||
label={"name": pr_type},
|
||||
project=self.workspace_slug,
|
||||
repository_id=self.repo_slug,
|
||||
pull_request_id=self.pr_num,
|
||||
)
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Failed to publish labels, error: {e}")
|
||||
|
||||
def get_pr_labels(self):
|
||||
try:
|
||||
labels = self.azure_devops_client.get_pull_request_labels(
|
||||
project=self.workspace_slug,
|
||||
repository_id=self.repo_slug,
|
||||
pull_request_id=self.pr_num,
|
||||
)
|
||||
return [label.name for label in labels]
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Failed to get labels, error: {e}")
|
||||
return []
|
||||
|
||||
def is_supported(self, capability: str) -> bool:
|
||||
if capability in [
|
||||
"get_issue_comments",
|
||||
"create_inline_comment",
|
||||
"publish_inline_comments",
|
||||
]:
|
||||
if capability in ['get_issue_comments', 'create_inline_comment', 'publish_inline_comments', 'get_labels', 'remove_initial_comment']:
|
||||
return False
|
||||
return True
|
||||
|
||||
@ -174,35 +48,26 @@ class AzureDevopsProvider(GitProvider):
|
||||
|
||||
def get_repo_settings(self):
|
||||
try:
|
||||
contents = self.azure_devops_client.get_item_content(
|
||||
repository_id=self.repo_slug,
|
||||
project=self.workspace_slug,
|
||||
download=False,
|
||||
include_content_metadata=False,
|
||||
include_content=True,
|
||||
path=".pr_agent.toml",
|
||||
)
|
||||
contents = self.azure_devops_client.get_item_content(repository_id=self.repo_slug,
|
||||
project=self.workspace_slug, download=False,
|
||||
include_content_metadata=False, include_content=True,
|
||||
path=".pr_agent.toml")
|
||||
return contents
|
||||
except Exception as e:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().error(f"Failed to get repo settings, error: {e}")
|
||||
logging.exception("get repo settings error")
|
||||
return ""
|
||||
|
||||
def get_files(self):
|
||||
files = []
|
||||
for i in self.azure_devops_client.get_pull_request_commits(
|
||||
project=self.workspace_slug,
|
||||
for i in self.azure_devops_client.get_pull_request_commits(project=self.workspace_slug,
|
||||
repository_id=self.repo_slug,
|
||||
pull_request_id=self.pr_num,
|
||||
):
|
||||
changes_obj = self.azure_devops_client.get_changes(
|
||||
project=self.workspace_slug,
|
||||
repository_id=self.repo_slug,
|
||||
commit_id=i.commit_id,
|
||||
)
|
||||
pull_request_id=self.pr_num):
|
||||
|
||||
changes_obj = self.azure_devops_client.get_changes(project=self.workspace_slug,
|
||||
repository_id=self.repo_slug, commit_id=i.commit_id)
|
||||
|
||||
for c in changes_obj.changes:
|
||||
files.append(c["item"]["path"])
|
||||
files.append(c['item']['path'])
|
||||
return list(set(files))
|
||||
|
||||
def get_diff_files(self) -> list[FilePatchInfo]:
|
||||
@ -210,27 +75,20 @@ class AzureDevopsProvider(GitProvider):
|
||||
base_sha = self.pr.last_merge_target_commit
|
||||
head_sha = self.pr.last_merge_source_commit
|
||||
|
||||
commits = self.azure_devops_client.get_pull_request_commits(
|
||||
project=self.workspace_slug,
|
||||
commits = self.azure_devops_client.get_pull_request_commits(project=self.workspace_slug,
|
||||
repository_id=self.repo_slug,
|
||||
pull_request_id=self.pr_num,
|
||||
)
|
||||
pull_request_id=self.pr_num)
|
||||
|
||||
diff_files = []
|
||||
diffs = []
|
||||
diff_types = {}
|
||||
|
||||
for c in commits:
|
||||
changes_obj = self.azure_devops_client.get_changes(
|
||||
project=self.workspace_slug,
|
||||
repository_id=self.repo_slug,
|
||||
commit_id=c.commit_id,
|
||||
)
|
||||
changes_obj = self.azure_devops_client.get_changes(project=self.workspace_slug,
|
||||
repository_id=self.repo_slug, commit_id=c.commit_id)
|
||||
for i in changes_obj.changes:
|
||||
if i["item"]["gitObjectType"] == "tree":
|
||||
continue
|
||||
diffs.append(i["item"]["path"])
|
||||
diff_types[i["item"]["path"]] = i["changeType"]
|
||||
diffs.append(i['item']['path'])
|
||||
diff_types[i['item']['path']] = i['changeType']
|
||||
|
||||
diffs = list(set(diffs))
|
||||
|
||||
@ -238,73 +96,41 @@ class AzureDevopsProvider(GitProvider):
|
||||
if not is_valid_file(file):
|
||||
continue
|
||||
|
||||
version = GitVersionDescriptor(
|
||||
version=head_sha.commit_id, version_type="commit"
|
||||
)
|
||||
try:
|
||||
new_file_content_str = self.azure_devops_client.get_item(
|
||||
repository_id=self.repo_slug,
|
||||
version = GitVersionDescriptor(version=head_sha.commit_id, version_type='commit')
|
||||
new_file_content_str = self.azure_devops_client.get_item(repository_id=self.repo_slug,
|
||||
path=file,
|
||||
project=self.workspace_slug,
|
||||
version_descriptor=version,
|
||||
download=False,
|
||||
include_content=True,
|
||||
)
|
||||
include_content=True)
|
||||
|
||||
new_file_content_str = new_file_content_str.content
|
||||
except Exception as error:
|
||||
get_logger().error(
|
||||
"Failed to retrieve new file content of %s at version %s. Error: %s",
|
||||
file,
|
||||
version,
|
||||
str(error),
|
||||
)
|
||||
new_file_content_str = ""
|
||||
|
||||
edit_type = EDIT_TYPE.MODIFIED
|
||||
if diff_types[file] == "add":
|
||||
if diff_types[file] == 'add':
|
||||
edit_type = EDIT_TYPE.ADDED
|
||||
elif diff_types[file] == "delete":
|
||||
elif diff_types[file] == 'delete':
|
||||
edit_type = EDIT_TYPE.DELETED
|
||||
elif diff_types[file] == "rename":
|
||||
elif diff_types[file] == 'rename':
|
||||
edit_type = EDIT_TYPE.RENAMED
|
||||
|
||||
version = GitVersionDescriptor(
|
||||
version=base_sha.commit_id, version_type="commit"
|
||||
)
|
||||
try:
|
||||
original_file_content_str = self.azure_devops_client.get_item(
|
||||
repository_id=self.repo_slug,
|
||||
version = GitVersionDescriptor(version=base_sha.commit_id, version_type='commit')
|
||||
original_file_content_str = self.azure_devops_client.get_item(repository_id=self.repo_slug,
|
||||
path=file,
|
||||
project=self.workspace_slug,
|
||||
version_descriptor=version,
|
||||
download=False,
|
||||
include_content=True,
|
||||
)
|
||||
include_content=True)
|
||||
original_file_content_str = original_file_content_str.content
|
||||
except Exception as error:
|
||||
get_logger().error(
|
||||
"Failed to retrieve original file content of %s at version %s. Error: %s",
|
||||
file,
|
||||
version,
|
||||
str(error),
|
||||
)
|
||||
original_file_content_str = ""
|
||||
|
||||
patch = load_large_diff(
|
||||
file, new_file_content_str, original_file_content_str
|
||||
)
|
||||
patch = load_large_diff(file, new_file_content_str, original_file_content_str)
|
||||
|
||||
diff_files.append(
|
||||
FilePatchInfo(
|
||||
original_file_content_str,
|
||||
new_file_content_str,
|
||||
diff_files.append(FilePatchInfo(original_file_content_str, new_file_content_str,
|
||||
patch=patch,
|
||||
filename=file,
|
||||
edit_type=edit_type,
|
||||
)
|
||||
)
|
||||
edit_type=edit_type))
|
||||
|
||||
self.diff_files = diff_files
|
||||
return diff_files
|
||||
except Exception as e:
|
||||
print(f"Error: {str(e)}")
|
||||
@ -313,92 +139,67 @@ class AzureDevopsProvider(GitProvider):
|
||||
def publish_comment(self, pr_comment: str, is_temporary: bool = False):
|
||||
comment = Comment(content=pr_comment)
|
||||
thread = CommentThread(comments=[comment])
|
||||
thread_response = self.azure_devops_client.create_thread(
|
||||
comment_thread=thread,
|
||||
project=self.workspace_slug,
|
||||
thread_response = self.azure_devops_client.create_thread(comment_thread=thread, project=self.workspace_slug,
|
||||
repository_id=self.repo_slug,
|
||||
pull_request_id=self.pr_num,
|
||||
)
|
||||
pull_request_id=self.pr_num)
|
||||
if is_temporary:
|
||||
self.temp_comments.append(
|
||||
{"thread_id": thread_response.id, "comment_id": thread_response.comments[0].id}
|
||||
)
|
||||
self.temp_comments.append({'thread_id': thread_response.id, 'comment_id': comment.id})
|
||||
|
||||
def publish_description(self, pr_title: str, pr_body: str):
|
||||
try:
|
||||
updated_pr = GitPullRequest()
|
||||
updated_pr.title = pr_title
|
||||
updated_pr.description = pr_body
|
||||
self.azure_devops_client.update_pull_request(
|
||||
project=self.workspace_slug,
|
||||
self.azure_devops_client.update_pull_request(project=self.workspace_slug,
|
||||
repository_id=self.repo_slug,
|
||||
pull_request_id=self.pr_num,
|
||||
git_pull_request_to_update=updated_pr,
|
||||
)
|
||||
git_pull_request_to_update=updated_pr)
|
||||
except Exception as e:
|
||||
get_logger().exception(
|
||||
f"Could not update pull request {self.pr_num} description: {e}"
|
||||
)
|
||||
logging.exception(f"Could not update pull request {self.pr_num} description: {e}")
|
||||
|
||||
def remove_initial_comment(self):
|
||||
try:
|
||||
for comment in self.temp_comments:
|
||||
self.remove_comment(comment)
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Failed to remove temp comments, error: {e}")
|
||||
return "" # not implemented yet
|
||||
|
||||
def publish_inline_comment(
|
||||
self, body: str, relevant_file: str, relevant_line_in_file: str
|
||||
):
|
||||
raise NotImplementedError(
|
||||
"Azure DevOps provider does not support publishing inline comment yet"
|
||||
)
|
||||
def publish_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str):
|
||||
raise NotImplementedError("Azure DevOps provider does not support publishing inline comment yet")
|
||||
|
||||
def create_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str):
|
||||
raise NotImplementedError("Azure DevOps provider does not support creating inline comments yet")
|
||||
|
||||
def publish_inline_comments(self, comments: list[dict]):
|
||||
raise NotImplementedError(
|
||||
"Azure DevOps provider does not support publishing inline comments yet"
|
||||
)
|
||||
raise NotImplementedError("Azure DevOps provider does not support publishing inline comments yet")
|
||||
|
||||
def get_title(self):
|
||||
return self.pr.title
|
||||
|
||||
def get_languages(self):
|
||||
languages = []
|
||||
files = self.azure_devops_client.get_items(
|
||||
project=self.workspace_slug,
|
||||
repository_id=self.repo_slug,
|
||||
recursion_level="Full",
|
||||
include_content_metadata=True,
|
||||
include_links=False,
|
||||
download=False,
|
||||
)
|
||||
files = self.azure_devops_client.get_items(project=self.workspace_slug, repository_id=self.repo_slug,
|
||||
recursion_level="Full", include_content_metadata=True,
|
||||
include_links=False, download=False)
|
||||
for f in files:
|
||||
if f.git_object_type == "blob":
|
||||
if f.git_object_type == 'blob':
|
||||
file_name, file_extension = os.path.splitext(f.path)
|
||||
languages.append(file_extension[1:])
|
||||
|
||||
extension_counts = {}
|
||||
for ext in languages:
|
||||
if ext != "":
|
||||
if ext != '':
|
||||
extension_counts[ext] = extension_counts.get(ext, 0) + 1
|
||||
|
||||
total_extensions = sum(extension_counts.values())
|
||||
|
||||
extension_percentages = {
|
||||
ext: (count / total_extensions) * 100
|
||||
for ext, count in extension_counts.items()
|
||||
}
|
||||
extension_percentages = {ext: (count / total_extensions) * 100 for ext, count in extension_counts.items()}
|
||||
|
||||
return extension_percentages
|
||||
|
||||
def get_pr_branch(self):
|
||||
pr_info = self.azure_devops_client.get_pull_request_by_id(
|
||||
project=self.workspace_slug, pull_request_id=self.pr_num
|
||||
)
|
||||
source_branch = pr_info.source_ref_name.split("/")[-1]
|
||||
pr_info = self.azure_devops_client.get_pull_request_by_id(project=self.workspace_slug,
|
||||
pull_request_id=self.pr_num)
|
||||
source_branch = pr_info.source_ref_name.split('/')[-1]
|
||||
return source_branch
|
||||
|
||||
def get_pr_description(self, *, full: bool = True) -> str:
|
||||
def get_pr_description(self):
|
||||
max_tokens = get_settings().get("CONFIG.MAX_DESCRIPTION_TOKENS", None)
|
||||
if max_tokens:
|
||||
return clip_tokens(self.pr.description, max_tokens)
|
||||
@ -408,9 +209,7 @@ class AzureDevopsProvider(GitProvider):
|
||||
return 0
|
||||
|
||||
def get_issue_comments(self):
|
||||
raise NotImplementedError(
|
||||
"Azure DevOps provider does not support issue comments yet"
|
||||
)
|
||||
raise NotImplementedError("Azure DevOps provider does not support issue comments yet")
|
||||
|
||||
def add_eyes_reaction(self, issue_comment_id: int) -> Optional[int]:
|
||||
return True
|
||||
@ -418,16 +217,20 @@ class AzureDevopsProvider(GitProvider):
|
||||
def remove_reaction(self, issue_comment_id: int, reaction_id: int) -> bool:
|
||||
return True
|
||||
|
||||
def get_issue_comments(self):
|
||||
raise NotImplementedError("Azure DevOps provider does not support issue comments yet")
|
||||
|
||||
@staticmethod
|
||||
def _parse_pr_url(pr_url: str) -> Tuple[str, str, int]:
|
||||
def _parse_pr_url(pr_url: str) -> Tuple[str, int]:
|
||||
parsed_url = urlparse(pr_url)
|
||||
|
||||
path_parts = parsed_url.path.strip("/").split("/")
|
||||
if 'azure.com' not in parsed_url.netloc:
|
||||
raise ValueError("The provided URL is not a valid Azure DevOps URL")
|
||||
|
||||
if len(path_parts) < 6 or path_parts[4] != "pullrequest":
|
||||
raise ValueError(
|
||||
"The provided URL does not appear to be a Azure DevOps PR URL"
|
||||
)
|
||||
path_parts = parsed_url.path.strip('/').split('/')
|
||||
|
||||
if len(path_parts) < 6 or path_parts[4] != 'pullrequest':
|
||||
raise ValueError("The provided URL does not appear to be a Azure DevOps PR URL")
|
||||
|
||||
workspace_slug = path_parts[1]
|
||||
repo_slug = path_parts[3]
|
||||
@ -438,15 +241,15 @@ class AzureDevopsProvider(GitProvider):
|
||||
|
||||
return workspace_slug, repo_slug, pr_number
|
||||
|
||||
@staticmethod
|
||||
def _get_azure_devops_client():
|
||||
def _get_azure_devops_client(self):
|
||||
try:
|
||||
pat = get_settings().azure_devops.pat
|
||||
org = get_settings().azure_devops.org
|
||||
except AttributeError as e:
|
||||
raise ValueError("Azure DevOps PAT token is required ") from e
|
||||
raise ValueError(
|
||||
"Azure DevOps PAT token is required ") from e
|
||||
|
||||
credentials = BasicAuthentication("", pat)
|
||||
credentials = BasicAuthentication('', pat)
|
||||
azure_devops_connection = Connection(base_url=org, creds=credentials)
|
||||
azure_devops_client = azure_devops_connection.clients.get_git_client()
|
||||
|
||||
@ -454,25 +257,13 @@ class AzureDevopsProvider(GitProvider):
|
||||
|
||||
def _get_repo(self):
|
||||
if self.repo is None:
|
||||
self.repo = self.azure_devops_client.get_repository(
|
||||
project=self.workspace_slug, repository_id=self.repo_slug
|
||||
)
|
||||
self.repo = self.azure_devops_client.get_repository(project=self.workspace_slug,
|
||||
repository_id=self.repo_slug)
|
||||
return self.repo
|
||||
|
||||
def _get_pr(self):
|
||||
self.pr = self.azure_devops_client.get_pull_request_by_id(
|
||||
pull_request_id=self.pr_num, project=self.workspace_slug
|
||||
)
|
||||
self.pr = self.azure_devops_client.get_pull_request_by_id(pull_request_id=self.pr_num, project=self.workspace_slug)
|
||||
return self.pr
|
||||
|
||||
def get_commit_messages(self):
|
||||
return "" # not implemented yet
|
||||
|
||||
def get_pr_id(self):
|
||||
try:
|
||||
pr_id = f"{self.workspace_slug}/{self.repo_slug}/{self.pr_num}"
|
||||
return pr_id
|
||||
except Exception as e:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().error(f"Failed to get pr id, error: {e}")
|
||||
return ""
|
||||
|
@ -1,4 +1,5 @@
import json
import logging
from typing import Optional, Tuple
from urllib.parse import urlparse

@ -6,10 +7,8 @@ import requests
from atlassian.bitbucket import Cloud
from starlette_context import context

from ..algo.pr_processing import find_line_number_of_relevant_line_in_file
from ..config_loader import get_settings
from ..log import get_logger
from .git_provider import FilePatchInfo, GitProvider, EDIT_TYPE
from .git_provider import FilePatchInfo, GitProvider


class BitbucketProvider(GitProvider):
@ -32,23 +31,19 @@ class BitbucketProvider(GitProvider):
|
||||
self.repo = None
|
||||
self.pr_num = None
|
||||
self.pr = None
|
||||
self.pr_url = pr_url
|
||||
self.temp_comments = []
|
||||
self.incremental = incremental
|
||||
self.diff_files = None
|
||||
if pr_url:
|
||||
self.set_pr(pr_url)
|
||||
self.bitbucket_comment_api_url = self.pr._BitbucketBase__data["links"]["comments"]["href"]
|
||||
self.bitbucket_pull_request_api_url = self.pr._BitbucketBase__data["links"]['self']['href']
|
||||
self.bitbucket_comment_api_url = self.pr._BitbucketBase__data["links"][
|
||||
"comments"
|
||||
]["href"]
|
||||
|
||||
def get_repo_settings(self):
|
||||
try:
|
||||
url = (f"https://api.bitbucket.org/2.0/repositories/{self.workspace_slug}/{self.repo_slug}/src/"
|
||||
f"{self.pr.destination_branch}/.pr_agent.toml")
|
||||
response = requests.request("GET", url, headers=self.headers)
|
||||
if response.status_code == 404: # not found
|
||||
return ""
|
||||
contents = response.text.encode('utf-8')
|
||||
contents = self.repo_obj.get_contents(
|
||||
".pr_agent.toml", ref=self.pr.head.sha
|
||||
).decoded_content
|
||||
return contents
|
||||
except Exception:
|
||||
return ""
|
||||
@ -66,14 +61,14 @@ class BitbucketProvider(GitProvider):
|
||||
|
||||
if not relevant_lines_start or relevant_lines_start == -1:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().exception(
|
||||
logging.exception(
|
||||
f"Failed to publish code suggestion, relevant_lines_start is {relevant_lines_start}"
|
||||
)
|
||||
continue
|
||||
|
||||
if relevant_lines_end < relevant_lines_start:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().exception(
|
||||
logging.exception(
|
||||
f"Failed to publish code suggestion, "
|
||||
f"relevant_lines_end is {relevant_lines_end} and "
|
||||
f"relevant_lines_start is {relevant_lines_start}"
|
||||
@ -102,11 +97,16 @@ class BitbucketProvider(GitProvider):
|
||||
return True
|
||||
except Exception as e:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().error(f"Failed to publish code suggestion, error: {e}")
|
||||
logging.error(f"Failed to publish code suggestion, error: {e}")
|
||||
return False
|
||||
|
||||
def is_supported(self, capability: str) -> bool:
|
||||
if capability in ['get_issue_comments', 'publish_inline_comments', 'get_labels', 'gfm_markdown']:
|
||||
if capability in [
|
||||
"get_issue_comments",
|
||||
"create_inline_comment",
|
||||
"publish_inline_comments",
|
||||
"get_labels",
|
||||
]:
|
||||
return False
|
||||
return True
|
||||
|
||||
@ -118,9 +118,6 @@ class BitbucketProvider(GitProvider):
|
||||
return [diff.new.path for diff in self.pr.diffstat()]
|
||||
|
||||
def get_diff_files(self) -> list[FilePatchInfo]:
|
||||
if self.diff_files:
|
||||
return self.diff_files
|
||||
|
||||
diffs = self.pr.diffstat()
|
||||
diff_split = [
|
||||
"diff --git%s" % x for x in self.pr.diff().split("diff --git") if x.strip()
|
||||
@ -132,56 +129,16 @@ class BitbucketProvider(GitProvider):
|
||||
diff.old.get_data("links")
|
||||
)
|
||||
new_file_content_str = self._get_pr_file_content(diff.new.get_data("links"))
|
||||
file_patch_canonic_structure = FilePatchInfo(
|
||||
diff_files.append(
|
||||
FilePatchInfo(
|
||||
original_file_content_str,
|
||||
new_file_content_str,
|
||||
diff_split[index],
|
||||
diff.new.path,
|
||||
)
|
||||
|
||||
if diff.data['status'] == 'added':
|
||||
file_patch_canonic_structure.edit_type = EDIT_TYPE.ADDED
|
||||
elif diff.data['status'] == 'removed':
|
||||
file_patch_canonic_structure.edit_type = EDIT_TYPE.DELETED
|
||||
elif diff.data['status'] == 'modified':
|
||||
file_patch_canonic_structure.edit_type = EDIT_TYPE.MODIFIED
|
||||
elif diff.data['status'] == 'renamed':
|
||||
file_patch_canonic_structure.edit_type = EDIT_TYPE.RENAMED
|
||||
diff_files.append(file_patch_canonic_structure)
|
||||
|
||||
|
||||
self.diff_files = diff_files
|
||||
)
|
||||
return diff_files
|
||||
|
||||
def get_latest_commit_url(self):
|
||||
return self.pr.data['source']['commit']['links']['html']['href']
|
||||
|
||||
def get_comment_url(self, comment):
|
||||
return comment.data['links']['html']['href']
|
||||
|
||||
def publish_persistent_comment(self, pr_comment: str, initial_header: str, update_header: bool = True):
|
||||
try:
|
||||
for comment in self.pr.comments():
|
||||
body = comment.raw
|
||||
if initial_header in body:
|
||||
latest_commit_url = self.get_latest_commit_url()
|
||||
comment_url = self.get_comment_url(comment)
|
||||
if update_header:
|
||||
updated_header = f"{initial_header}\n\n### (review updated until commit {latest_commit_url})\n"
|
||||
pr_comment_updated = pr_comment.replace(initial_header, updated_header)
|
||||
else:
|
||||
pr_comment_updated = pr_comment
|
||||
get_logger().info(f"Persistent mode- updating comment {comment_url} to latest review message")
|
||||
d = {"content": {"raw": pr_comment_updated}}
|
||||
response = comment._update_data(comment.put(None, data=d))
|
||||
self.publish_comment(
|
||||
f"**[Persistent review]({comment_url})** updated to latest commit {latest_commit_url}")
|
||||
return
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Failed to update persistent review, error: {e}")
|
||||
pass
|
||||
self.publish_comment(pr_comment)
|
||||
|
||||
def publish_comment(self, pr_comment: str, is_temporary: bool = False):
|
||||
comment = self.pr.comment(pr_comment)
|
||||
if is_temporary:
|
||||
@ -190,84 +147,31 @@ class BitbucketProvider(GitProvider):
|
||||
def remove_initial_comment(self):
|
||||
try:
|
||||
for comment in self.temp_comments:
|
||||
self.remove_comment(comment)
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Failed to remove temp comments, error: {e}")
|
||||
|
||||
def remove_comment(self, comment):
|
||||
try:
|
||||
self.pr.delete(f"comments/{comment}")
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Failed to remove comment, error: {e}")
|
||||
logging.exception(f"Failed to remove temp comments, error: {e}")
|
||||
|
||||
# funtion to create_inline_comment
|
||||
def create_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str, absolute_position: int = None):
|
||||
position, absolute_position = find_line_number_of_relevant_line_in_file(self.get_diff_files(),
|
||||
relevant_file.strip('`'),
|
||||
relevant_line_in_file, absolute_position)
|
||||
if position == -1:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
|
||||
subject_type = "FILE"
|
||||
else:
|
||||
subject_type = "LINE"
|
||||
path = relevant_file.strip()
|
||||
return dict(body=body, path=path, position=absolute_position) if subject_type == "LINE" else {}
|
||||
|
||||
|
||||
def publish_inline_comment(self, comment: str, from_line: int, file: str):
|
||||
payload = json.dumps( {
|
||||
def publish_inline_comment(
|
||||
self, comment: str, from_line: int, to_line: int, file: str
|
||||
):
|
||||
payload = json.dumps(
|
||||
{
|
||||
"content": {
|
||||
"raw": comment,
|
||||
},
|
||||
"inline": {
|
||||
"to": from_line,
|
||||
"path": file
|
||||
},
|
||||
})
|
||||
"inline": {"to": from_line, "path": file},
|
||||
}
|
||||
)
|
||||
response = requests.request(
|
||||
"POST", self.bitbucket_comment_api_url, data=payload, headers=self.headers
|
||||
)
|
||||
return response
|
||||
|
||||
def get_line_link(self, relevant_file: str, relevant_line_start: int, relevant_line_end: int = None) -> str:
|
||||
if relevant_line_start == -1:
|
||||
link = f"{self.pr_url}/#L{relevant_file}"
|
||||
else:
|
||||
link = f"{self.pr_url}/#L{relevant_file}T{relevant_line_start}"
|
||||
return link
|
||||
|
||||
def generate_link_to_relevant_line_number(self, suggestion) -> str:
|
||||
try:
|
||||
relevant_file = suggestion['relevant file'].strip('`').strip("'")
|
||||
relevant_line_str = suggestion['relevant line']
|
||||
if not relevant_line_str:
|
||||
return ""
|
||||
|
||||
diff_files = self.get_diff_files()
|
||||
position, absolute_position = find_line_number_of_relevant_line_in_file \
|
||||
(diff_files, relevant_file, relevant_line_str)
|
||||
|
||||
if absolute_position != -1 and self.pr_url:
|
||||
link = f"{self.pr_url}/#L{relevant_file}T{absolute_position}"
|
||||
return link
|
||||
except Exception as e:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"Failed adding line link, error: {e}")
|
||||
|
||||
return ""
|
||||
|
||||
def publish_inline_comments(self, comments: list[dict]):
|
||||
for comment in comments:
|
||||
if 'position' in comment:
|
||||
self.publish_inline_comment(comment['body'], comment['position'], comment['path'])
|
||||
elif 'start_line' in comment: # multi-line comment
|
||||
# note that bitbucket does not seem to support range - only a comment on a single line - https://community.developer.atlassian.com/t/api-post-endpoint-for-inline-pull-request-comments/60452
|
||||
self.publish_inline_comment(comment['body'], comment['start_line'], comment['path'])
|
||||
elif 'line' in comment: # single-line comment
|
||||
self.publish_inline_comment(comment['body'], comment['line'], comment['path'])
|
||||
else:
|
||||
get_logger().error(f"Could not publish inline comment {comment}")
|
||||
self.publish_inline_comment(
|
||||
comment["body"], comment["start_line"], comment["line"], comment["path"]
|
||||
)
|
||||
|
||||
def get_title(self):
|
||||
return self.pr.title
|
||||
@ -335,26 +239,15 @@ class BitbucketProvider(GitProvider):
|
||||
def get_commit_messages(self):
|
||||
return "" # not implemented yet
|
||||
|
||||
# bitbucket does not support labels
|
||||
def publish_description(self, pr_title: str, description: str):
|
||||
payload = json.dumps({
|
||||
"description": description,
|
||||
"title": pr_title
|
||||
|
||||
})
|
||||
|
||||
response = requests.request("PUT", self.bitbucket_pull_request_api_url, headers=self.headers, data=payload)
|
||||
try:
|
||||
if response.status_code != 200:
|
||||
get_logger().info(f"Failed to update description, error code: {response.status_code}")
|
||||
except:
|
||||
def publish_description(self, pr_title: str, pr_body: str):
|
||||
pass
|
||||
return response
|
||||
|
||||
# bitbucket does not support labels
|
||||
def publish_labels(self, pr_types: list):
|
||||
def create_inline_comment(
|
||||
self, body: str, relevant_file: str, relevant_line_in_file: str
|
||||
):
|
||||
pass
|
||||
|
||||
# bitbucket does not support labels
|
||||
def get_pr_labels(self):
|
||||
def publish_labels(self, labels):
|
||||
pass
|
||||
|
||||
def get_labels(self):
|
||||
pass
|
||||
|
@ -1,354 +0,0 @@
|
||||
import json
|
||||
from typing import Optional, Tuple
|
||||
from urllib.parse import urlparse
|
||||
|
||||
import requests
|
||||
from atlassian.bitbucket import Bitbucket
|
||||
from starlette_context import context
|
||||
|
||||
from .git_provider import FilePatchInfo, GitProvider, EDIT_TYPE
|
||||
from ..algo.pr_processing import find_line_number_of_relevant_line_in_file
|
||||
from ..algo.utils import load_large_diff
|
||||
from ..config_loader import get_settings
|
||||
from ..log import get_logger
|
||||
|
||||
|
||||
class BitbucketServerProvider(GitProvider):
|
||||
def __init__(
|
||||
self, pr_url: Optional[str] = None, incremental: Optional[bool] = False
|
||||
):
|
||||
s = requests.Session()
|
||||
try:
|
||||
bearer = context.get("bitbucket_bearer_token", None)
|
||||
s.headers["Authorization"] = f"Bearer {bearer}"
|
||||
except Exception:
|
||||
s.headers[
|
||||
"Authorization"
|
||||
] = f'Bearer {get_settings().get("BITBUCKET_SERVER.BEARER_TOKEN", None)}'
|
||||
|
||||
s.headers["Content-Type"] = "application/json"
|
||||
self.headers = s.headers
|
||||
self.bitbucket_server_url = None
|
||||
self.workspace_slug = None
|
||||
self.repo_slug = None
|
||||
self.repo = None
|
||||
self.pr_num = None
|
||||
self.pr = None
|
||||
self.pr_url = pr_url
|
||||
self.temp_comments = []
|
||||
self.incremental = incremental
|
||||
self.diff_files = None
|
||||
self.bitbucket_pull_request_api_url = pr_url
|
||||
|
||||
self.bitbucket_server_url = self._parse_bitbucket_server(url=pr_url)
|
||||
self.bitbucket_client = Bitbucket(url=self.bitbucket_server_url,
|
||||
token=get_settings().get("BITBUCKET_SERVER.BEARER_TOKEN", None))
|
||||
|
||||
if pr_url:
|
||||
self.set_pr(pr_url)
|
||||
|
||||
def get_repo_settings(self):
|
||||
try:
|
||||
url = (f"{self.bitbucket_server_url}/projects/{self.workspace_slug}/repos/{self.repo_slug}/src/"
|
||||
f"{self.pr.destination_branch}/.pr_agent.toml")
|
||||
response = requests.request("GET", url, headers=self.headers)
|
||||
if response.status_code == 404: # not found
|
||||
return ""
|
||||
contents = response.text.encode('utf-8')
|
||||
return contents
|
||||
except Exception:
|
||||
return ""
|
||||
|
||||
def publish_code_suggestions(self, code_suggestions: list) -> bool:
|
||||
"""
|
||||
Publishes code suggestions as comments on the PR.
|
||||
"""
|
||||
post_parameters_list = []
|
||||
for suggestion in code_suggestions:
|
||||
body = suggestion["body"]
|
||||
relevant_file = suggestion["relevant_file"]
|
||||
relevant_lines_start = suggestion["relevant_lines_start"]
|
||||
relevant_lines_end = suggestion["relevant_lines_end"]
|
||||
|
||||
if not relevant_lines_start or relevant_lines_start == -1:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().exception(
|
||||
f"Failed to publish code suggestion, relevant_lines_start is {relevant_lines_start}"
|
||||
)
|
||||
continue
|
||||
|
||||
if relevant_lines_end < relevant_lines_start:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().exception(
|
||||
f"Failed to publish code suggestion, "
|
||||
f"relevant_lines_end is {relevant_lines_end} and "
|
||||
f"relevant_lines_start is {relevant_lines_start}"
|
||||
)
|
||||
continue
|
||||
|
||||
if relevant_lines_end > relevant_lines_start:
|
||||
post_parameters = {
|
||||
"body": body,
|
||||
"path": relevant_file,
|
||||
"line": relevant_lines_end,
|
||||
"start_line": relevant_lines_start,
|
||||
"start_side": "RIGHT",
|
||||
}
|
||||
else: # API is different for single line comments
|
||||
post_parameters = {
|
||||
"body": body,
|
||||
"path": relevant_file,
|
||||
"line": relevant_lines_start,
|
||||
"side": "RIGHT",
|
||||
}
|
||||
post_parameters_list.append(post_parameters)
|
||||
|
||||
try:
|
||||
self.publish_inline_comments(post_parameters_list)
|
||||
return True
|
||||
except Exception as e:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().error(f"Failed to publish code suggestion, error: {e}")
|
||||
return False
|
||||
|
||||
def is_supported(self, capability: str) -> bool:
|
||||
if capability in ['get_issue_comments', 'get_labels', 'gfm_markdown']:
|
||||
return False
|
||||
return True
|
||||
|
||||
def set_pr(self, pr_url: str):
|
||||
self.workspace_slug, self.repo_slug, self.pr_num = self._parse_pr_url(pr_url)
|
||||
self.pr = self._get_pr()
|
||||
|
||||
def get_file(self, path: str, commit_id: str):
|
||||
file_content = ""
|
||||
try:
|
||||
file_content = self.bitbucket_client.get_content_of_file(self.workspace_slug,
|
||||
self.repo_slug,
|
||||
path,
|
||||
commit_id)
|
||||
except requests.HTTPError as e:
|
||||
get_logger().debug(f"File {path} not found at commit id: {commit_id}")
|
||||
return file_content
|
||||
|
||||
def get_files(self):
|
||||
changes = self.bitbucket_client.get_pull_requests_changes(self.workspace_slug, self.repo_slug, self.pr_num)
|
||||
diffstat = [change["path"]['toString'] for change in changes]
|
||||
return diffstat
|
||||
|
||||
def get_diff_files(self) -> list[FilePatchInfo]:
|
||||
if self.diff_files:
|
||||
return self.diff_files
|
||||
|
||||
commits_in_pr = self.bitbucket_client.get_pull_requests_commits(
|
||||
self.workspace_slug,
|
||||
self.repo_slug,
|
||||
self.pr_num
|
||||
)
|
||||
|
||||
commit_list = list(commits_in_pr)
|
||||
base_sha, head_sha = commit_list[0]['parents'][0]['id'], commit_list[-1]['id']
|
||||
|
||||
diff_files = []
|
||||
original_file_content_str = ""
|
||||
new_file_content_str = ""
|
||||
|
||||
changes = self.bitbucket_client.get_pull_requests_changes(self.workspace_slug, self.repo_slug, self.pr_num)
|
||||
for change in changes:
|
||||
file_path = change['path']['toString']
|
||||
match change['type']:
|
||||
case 'ADD':
|
||||
edit_type = EDIT_TYPE.ADDED
|
||||
new_file_content_str = self.get_file(file_path, head_sha)
|
||||
if isinstance(new_file_content_str, (bytes, bytearray)):
|
||||
new_file_content_str = new_file_content_str.decode("utf-8")
|
||||
original_file_content_str = ""
|
||||
case 'DELETE':
|
||||
edit_type = EDIT_TYPE.DELETED
|
||||
new_file_content_str = ""
|
||||
original_file_content_str = self.get_file(file_path, base_sha)
|
||||
if isinstance(original_file_content_str, (bytes, bytearray)):
|
||||
original_file_content_str = original_file_content_str.decode("utf-8")
|
||||
case 'RENAME':
|
||||
edit_type = EDIT_TYPE.RENAMED
|
||||
case _:
|
||||
edit_type = EDIT_TYPE.MODIFIED
|
||||
original_file_content_str = self.get_file(file_path, base_sha)
|
||||
if isinstance(original_file_content_str, (bytes, bytearray)):
|
||||
original_file_content_str = original_file_content_str.decode("utf-8")
|
||||
new_file_content_str = self.get_file(file_path, head_sha)
|
||||
if isinstance(new_file_content_str, (bytes, bytearray)):
|
||||
new_file_content_str = new_file_content_str.decode("utf-8")
|
||||
|
||||
patch = load_large_diff(file_path, new_file_content_str, original_file_content_str)
|
||||
|
||||
diff_files.append(
|
||||
FilePatchInfo(
|
||||
original_file_content_str,
|
||||
new_file_content_str,
|
||||
patch,
|
||||
file_path,
|
||||
edit_type=edit_type,
|
||||
)
|
||||
)
|
||||
|
||||
self.diff_files = diff_files
|
||||
return diff_files
|
||||
|
||||
def publish_comment(self, pr_comment: str, is_temporary: bool = False):
|
||||
if not is_temporary:
|
||||
self.bitbucket_client.add_pull_request_comment(self.workspace_slug, self.repo_slug, self.pr_num, pr_comment)
|
||||
|
||||
def remove_initial_comment(self):
|
||||
try:
|
||||
for comment in self.temp_comments:
|
||||
self.remove_comment(comment)
|
||||
except ValueError as e:
|
||||
get_logger().exception(f"Failed to remove temp comments, error: {e}")
|
||||
|
||||
def remove_comment(self, comment):
pass

# function to create an inline comment
def create_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str,
absolute_position: int = None):

position, absolute_position = find_line_number_of_relevant_line_in_file(
self.get_diff_files(),
relevant_file.strip('`'),
relevant_line_in_file,
absolute_position
)
if position == -1:
if get_settings().config.verbosity_level >= 2:
get_logger().info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
subject_type = "FILE"
else:
subject_type = "LINE"
path = relevant_file.strip()
return dict(body=body, path=path, position=absolute_position) if subject_type == "LINE" else {}
|
||||
|
||||
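# Illustrative return value only (the file path and position below are made up):
# when the relevant line is found, create_inline_comment above returns something like
#   {"body": "Consider ...", "path": "pr_agent/cli.py", "position": 42}
# and an empty dict when the position could not be resolved.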
def publish_inline_comment(self, comment: str, from_line: int, file: str):
|
||||
payload = {
|
||||
"text": comment,
|
||||
"severity": "NORMAL",
|
||||
"anchor": {
|
||||
"diffType": "EFFECTIVE",
|
||||
"path": file,
|
||||
"lineType": "ADDED",
|
||||
"line": from_line,
|
||||
"fileType": "TO"
|
||||
}
|
||||
}
|
||||
|
||||
response = requests.post(url=self._get_pr_comments_url(), json=payload, headers=self.headers)
|
||||
return response
|
||||
|
||||
def generate_link_to_relevant_line_number(self, suggestion) -> str:
|
||||
try:
|
||||
relevant_file = suggestion['relevant file'].strip('`').strip("'")
|
||||
relevant_line_str = suggestion['relevant line']
|
||||
if not relevant_line_str:
|
||||
return ""
|
||||
|
||||
diff_files = self.get_diff_files()
|
||||
position, absolute_position = find_line_number_of_relevant_line_in_file \
|
||||
(diff_files, relevant_file, relevant_line_str)
|
||||
|
||||
if absolute_position != -1 and self.pr_url:
|
||||
link = f"{self.pr_url}/#L{relevant_file}T{absolute_position}"
|
||||
return link
|
||||
except Exception as e:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"Failed adding line link, error: {e}")
|
||||
|
||||
return ""
|
||||
|
||||
def publish_inline_comments(self, comments: list[dict]):
|
||||
for comment in comments:
|
||||
self.publish_inline_comment(comment['body'], comment['position'], comment['path'])
|
||||
|
||||
def get_title(self):
|
||||
return self.pr.title
|
||||
|
||||
def get_languages(self):
|
||||
return {"yaml": 0} # devops LOL
|
||||
|
||||
def get_pr_branch(self):
|
||||
return self.pr.fromRef['displayId']
|
||||
|
||||
def get_pr_description_full(self):
|
||||
return self.pr.description
|
||||
|
||||
def get_user_id(self):
|
||||
return 0
|
||||
|
||||
def get_issue_comments(self):
|
||||
raise NotImplementedError(
|
||||
"Bitbucket provider does not support issue comments yet"
|
||||
)
|
||||
|
||||
def add_eyes_reaction(self, issue_comment_id: int) -> Optional[int]:
|
||||
return True
|
||||
|
||||
def remove_reaction(self, issue_comment_id: int, reaction_id: int) -> bool:
|
||||
return True
|
||||
|
||||
@staticmethod
def _parse_bitbucket_server(url: str) -> str:
parsed_url = urlparse(url)
return f"{parsed_url.scheme}://{parsed_url.netloc}"

@staticmethod
def _parse_pr_url(pr_url: str) -> Tuple[str, str, int]:
parsed_url = urlparse(pr_url)
path_parts = parsed_url.path.strip("/").split("/")
if len(path_parts) < 6 or path_parts[4] != "pull-requests":
raise ValueError(
"The provided URL does not appear to be a Bitbucket PR URL"
)

workspace_slug = path_parts[1]
repo_slug = path_parts[3]
try:
pr_number = int(path_parts[5])
except ValueError as e:
raise ValueError("Unable to convert PR number to integer") from e

return workspace_slug, repo_slug, pr_number

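# Illustrative walk-through of _parse_pr_url above (host and slugs are made up):
#   pr_url = "https://bitbucket.example.com/projects/PROJ/repos/my-repo/pull-requests/42"
#   path_parts -> ["projects", "PROJ", "repos", "my-repo", "pull-requests", "42"]
#   returns ("PROJ", "my-repo", 42)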
def _get_repo(self):
|
||||
if self.repo is None:
|
||||
self.repo = self.bitbucket_client.get_repo(self.workspace_slug, self.repo_slug)
|
||||
return self.repo
|
||||
|
||||
def _get_pr(self):
|
||||
pr = self.bitbucket_client.get_pull_request(self.workspace_slug, self.repo_slug, pull_request_id=self.pr_num)
|
||||
return type('new_dict', (object,), pr)
|
||||
|
||||
def _get_pr_file_content(self, remote_link: str):
|
||||
return ""
|
||||
|
||||
def get_commit_messages(self):
raise NotImplementedError("Get commit messages function not implemented yet.")
|
||||
# bitbucket does not support labels
|
||||
def publish_description(self, pr_title: str, description: str):
|
||||
payload = json.dumps({
|
||||
"description": description,
|
||||
"title": pr_title
|
||||
})
|
||||
|
||||
response = requests.put(url=self.bitbucket_pull_request_api_url, headers=self.headers, data=payload)
|
||||
return response
|
||||
|
||||
# bitbucket does not support labels
|
||||
def publish_labels(self, pr_types: list):
|
||||
pass
|
||||
|
||||
# bitbucket does not support labels
|
||||
def get_pr_labels(self):
|
||||
pass
|
||||
|
||||
def _get_pr_comments_url(self):
|
||||
return f"{self.bitbucket_server_url}/rest/api/latest/projects/{self.workspace_slug}/repos/{self.repo_slug}/pull-requests/{self.pr_num}/comments"
|
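# Illustrative result only (hypothetical server and slugs): for workspace "PROJ",
# repo "my-repo" and PR 42 on "https://bitbucket.example.com", _get_pr_comments_url()
# above would return
# "https://bitbucket.example.com/rest/api/latest/projects/PROJ/repos/my-repo/pull-requests/42/comments"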
@ -54,16 +54,11 @@ class CodeCommitClient:
|
||||
def __init__(self):
|
||||
self.boto_client = None
|
||||
|
||||
def is_supported(self, capability: str) -> bool:
|
||||
if capability in ["gfm_markdown"]:
|
||||
return False
|
||||
return True
|
||||
|
||||
def _connect_boto_client(self):
|
||||
try:
|
||||
self.boto_client = boto3.client("codecommit")
|
||||
except Exception as e:
|
||||
raise ValueError(f"Failed to connect to AWS CodeCommit: {e}") from e
|
||||
raise ValueError(f"Failed to connect to AWS CodeCommit: {e}")
|
||||
|
||||
def get_differences(self, repo_name: int, destination_commit: str, source_commit: str):
|
||||
"""
|
||||
|
@ -1,15 +1,16 @@
|
||||
import logging
|
||||
import os
|
||||
import re
|
||||
from collections import Counter
|
||||
from typing import List, Optional, Tuple
|
||||
from urllib.parse import urlparse
|
||||
|
||||
from pr_agent.git_providers.codecommit_client import CodeCommitClient
|
||||
|
||||
from ..algo.language_handler import is_valid_file, language_extension_map
|
||||
from ..algo.pr_processing import clip_tokens
|
||||
from ..algo.utils import load_large_diff
|
||||
from .git_provider import EDIT_TYPE, FilePatchInfo, GitProvider
|
||||
from ..config_loader import get_settings
|
||||
from ..log import get_logger
|
||||
from .git_provider import EDIT_TYPE, FilePatchInfo, GitProvider, IncrementalPR
|
||||
from pr_agent.git_providers.codecommit_client import CodeCommitClient
|
||||
|
||||
|
||||
class PullRequestCCMimic:
|
||||
@ -61,7 +62,6 @@ class CodeCommitProvider(GitProvider):
|
||||
self.pr = None
|
||||
self.diff_files = None
|
||||
self.git_files = None
|
||||
self.pr_url = pr_url
|
||||
if pr_url:
|
||||
self.set_pr(pr_url)
|
||||
|
||||
@ -74,7 +74,6 @@ class CodeCommitProvider(GitProvider):
|
||||
"create_inline_comment",
|
||||
"publish_inline_comments",
|
||||
"get_labels",
|
||||
"gfm_markdown"
|
||||
]:
|
||||
return False
|
||||
return True
|
||||
@ -166,7 +165,7 @@ class CodeCommitProvider(GitProvider):
|
||||
|
||||
def publish_comment(self, pr_comment: str, is_temporary: bool = False):
|
||||
if is_temporary:
|
||||
get_logger().info(pr_comment)
|
||||
logging.info(pr_comment)
|
||||
return
|
||||
|
||||
pr_comment = CodeCommitProvider._remove_markdown_html(pr_comment)
|
||||
@ -188,12 +187,12 @@ class CodeCommitProvider(GitProvider):
|
||||
for suggestion in code_suggestions:
|
||||
# Verify that each suggestion has the required keys
|
||||
if not all(key in suggestion for key in ["body", "relevant_file", "relevant_lines_start"]):
|
||||
get_logger().warning(f"Skipping code suggestion #{counter}: Each suggestion must have 'body', 'relevant_file', 'relevant_lines_start' keys")
|
||||
logging.warning(f"Skipping code suggestion #{counter}: Each suggestion must have 'body', 'relevant_file', 'relevant_lines_start' keys")
|
||||
continue
|
||||
|
||||
# Publish the code suggestion to CodeCommit
|
||||
try:
|
||||
get_logger().debug(f"Code Suggestion #{counter} in file: {suggestion['relevant_file']}: {suggestion['relevant_lines_start']}")
|
||||
logging.debug(f"Code Suggestion #{counter} in file: {suggestion['relevant_file']}: {suggestion['relevant_lines_start']}")
|
||||
self.codecommit_client.publish_comment(
|
||||
repo_name=self.repo_name,
|
||||
pr_number=self.pr_num,
|
||||
@ -216,36 +215,24 @@ class CodeCommitProvider(GitProvider):
|
||||
def publish_labels(self, labels):
|
||||
return [""] # not implemented yet
|
||||
|
||||
def get_pr_labels(self):
|
||||
def get_labels(self):
|
||||
return [""] # not implemented yet
|
||||
|
||||
def remove_initial_comment(self):
|
||||
return "" # not implemented yet
|
||||
|
||||
def remove_comment(self, comment):
|
||||
return "" # not implemented yet
|
||||
|
||||
def publish_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str):
|
||||
# https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/codecommit/client/post_comment_for_compared_commit.html
|
||||
raise NotImplementedError("CodeCommit provider does not support publishing inline comments yet")
|
||||
|
||||
def create_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str):
|
||||
raise NotImplementedError("CodeCommit provider does not support creating inline comments yet")
|
||||
|
||||
def publish_inline_comments(self, comments: list[dict]):
|
||||
raise NotImplementedError("CodeCommit provider does not support publishing inline comments yet")
|
||||
|
||||
def get_title(self):
|
||||
return self.pr.title
|
||||
|
||||
def get_pr_id(self):
|
||||
"""
|
||||
Returns the PR ID in the format: "repo_name/pr_number".
|
||||
Note: This is an internal identifier for PR-Agent,
|
||||
and is not the same as the CodeCommit PR identifier.
|
||||
"""
|
||||
try:
|
||||
pr_id = f"{self.repo_name}/{self.pr_num}"
|
||||
return pr_id
|
||||
except:
|
||||
return ""
|
||||
return self.pr.get("title", "")
|
||||
|
||||
def get_languages(self):
|
||||
"""
|
||||
@ -267,8 +254,6 @@ class CodeCommitProvider(GitProvider):
|
||||
# where each dictionary item is a language name.
|
||||
# We build that language->extension dictionary here in main_extensions_flat.
|
||||
main_extensions_flat = {}
|
||||
language_extension_map_org = get_settings().language_extension_map_org
|
||||
language_extension_map = {k.lower(): v for k, v in language_extension_map_org.items()}
|
||||
for language, extensions in language_extension_map.items():
|
||||
for ext in extensions:
|
||||
main_extensions_flat[ext] = language
|
||||
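# Sketch of the flattened extension->language map built above (actual entries depend
# on the configured language_extension_map_org; these values are only an illustration):
#   main_extensions_flat = {".py": "python", ".js": "javascript", ".ts": "typescript"}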
@ -298,11 +283,11 @@ class CodeCommitProvider(GitProvider):
|
||||
return self.codecommit_client.get_file(self.repo_name, settings_filename, self.pr.source_commit, optional=True)
|
||||
|
||||
def add_eyes_reaction(self, issue_comment_id: int) -> Optional[int]:
|
||||
get_logger().info("CodeCommit provider does not support eyes reaction yet")
|
||||
logging.info("CodeCommit provider does not support eyes reaction yet")
|
||||
return True
|
||||
|
||||
def remove_reaction(self, issue_comment_id: int, reaction_id: int) -> bool:
|
||||
get_logger().info("CodeCommit provider does not support removing reactions yet")
|
||||
logging.info("CodeCommit provider does not support removing reactions yet")
|
||||
return True
|
||||
|
||||
@staticmethod
|
||||
@ -368,7 +353,7 @@ class CodeCommitProvider(GitProvider):
|
||||
# TODO: implement support for multiple targets in one CodeCommit PR
|
||||
# for now, we are only using the first target in the PR
|
||||
if len(response.targets) > 1:
|
||||
get_logger().warning(
|
||||
logging.warning(
|
||||
"Multiple targets in one PR is not supported for CodeCommit yet. Continuing, using the first target only..."
|
||||
)
|
||||
|
||||
|
@ -1,4 +1,5 @@
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import pathlib
|
||||
import shutil
|
||||
@ -6,16 +7,18 @@ import subprocess
|
||||
import uuid
|
||||
from collections import Counter, namedtuple
|
||||
from pathlib import Path
|
||||
from tempfile import NamedTemporaryFile, mkdtemp
|
||||
from tempfile import mkdtemp, NamedTemporaryFile
|
||||
|
||||
import requests
|
||||
import urllib3.util
|
||||
from git import Repo
|
||||
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers.git_provider import EDIT_TYPE, FilePatchInfo, GitProvider
|
||||
from pr_agent.git_providers.git_provider import GitProvider, FilePatchInfo, \
|
||||
EDIT_TYPE
|
||||
from pr_agent.git_providers.local_git_provider import PullRequestMimic
|
||||
from pr_agent.log import get_logger
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def _call(*command, **kwargs) -> (int, str, str):
|
||||
@ -30,42 +33,42 @@ def _call(*command, **kwargs) -> (int, str, str):
|
||||
|
||||
|
||||
def clone(url, directory):
|
||||
get_logger().info("Cloning %s to %s", url, directory)
|
||||
logger.info("Cloning %s to %s", url, directory)
|
||||
stdout = _call('git', 'clone', "--depth", "1", url, directory)
|
||||
get_logger().info(stdout)
|
||||
logger.info(stdout)
|
||||
|
||||
|
||||
def fetch(url, refspec, cwd):
|
||||
get_logger().info("Fetching %s %s", url, refspec)
|
||||
logger.info("Fetching %s %s", url, refspec)
|
||||
stdout = _call(
|
||||
'git', 'fetch', '--depth', '2', url, refspec,
|
||||
cwd=cwd
|
||||
)
|
||||
get_logger().info(stdout)
|
||||
logger.info(stdout)
|
||||
|
||||
|
||||
def checkout(cwd):
|
||||
get_logger().info("Checking out")
|
||||
logger.info("Checking out")
|
||||
stdout = _call('git', 'checkout', "FETCH_HEAD", cwd=cwd)
|
||||
get_logger().info(stdout)
|
||||
logger.info(stdout)
|
||||
|
||||
|
||||
def show(*args, cwd=None):
|
||||
get_logger().info("Show")
|
||||
logger.info("Show")
|
||||
return _call('git', 'show', *args, cwd=cwd)
|
||||
|
||||
|
||||
def diff(*args, cwd=None):
|
||||
get_logger().info("Diff")
|
||||
logger.info("Diff")
|
||||
patch = _call('git', 'diff', *args, cwd=cwd)
|
||||
if not patch:
|
||||
get_logger().warning("No changes found")
|
||||
logger.warning("No changes found")
|
||||
return
|
||||
return patch
|
||||
|
||||
|
||||
def reset_local_changes(cwd):
|
||||
get_logger().info("Reset local changes")
|
||||
logger.info("Reset local changes")
|
||||
_call('git', 'checkout', "--force", cwd=cwd)
|
||||
|
||||
|
||||
@ -112,14 +115,7 @@ def adopt_to_gerrit_message(message):
|
||||
lines = message.splitlines()
|
||||
buf = []
|
||||
for line in lines:
|
||||
# remove markdown formatting
|
||||
line = (line.replace("*", "")
|
||||
.replace("``", "`")
|
||||
.replace("<details>", "")
|
||||
.replace("</details>", "")
|
||||
.replace("<summary>", "")
|
||||
.replace("</summary>", ""))
|
||||
|
||||
line = line.replace("*", "").replace("``", "`")
|
||||
line = line.strip()
|
||||
if line.startswith('#'):
|
||||
buf.append("\n" +
|
||||
@ -192,7 +188,7 @@ class GerritProvider(GitProvider):
|
||||
)
|
||||
self.repo = Repo(self.repo_path)
|
||||
assert self.repo
|
||||
self.pr_url = base_url
|
||||
|
||||
self.pr = PullRequestMimic(self.get_pr_title(), self.get_diff_files())
|
||||
|
||||
def get_pr_title(self):
|
||||
@ -207,7 +203,7 @@ class GerritProvider(GitProvider):
|
||||
Comment = namedtuple('Comment', ['body'])
|
||||
return Comments([Comment(c['message']) for c in reversed(comments)])
|
||||
|
||||
def get_pr_labels(self):
|
||||
def get_labels(self):
|
||||
raise NotImplementedError(
|
||||
'Getting labels is not implemented for the gerrit provider')
|
||||
|
||||
@ -223,12 +219,10 @@ class GerritProvider(GitProvider):
|
||||
return [self.repo.head.commit.message]
|
||||
|
||||
def get_repo_settings(self):
|
||||
try:
|
||||
with open(self.repo_path / ".pr_agent.toml", 'rb') as f:
|
||||
contents = f.read()
|
||||
return contents
|
||||
except OSError:
|
||||
return b""
|
||||
"""
|
||||
TODO: Implement support of .pr_agent.toml
|
||||
"""
|
||||
return ""
|
||||
|
||||
def get_diff_files(self) -> list[FilePatchInfo]:
|
||||
diffs = self.repo.head.commit.diff(
|
||||
@ -310,8 +304,7 @@ class GerritProvider(GitProvider):
|
||||
# 'get_issue_comments',
|
||||
'create_inline_comment',
|
||||
'publish_inline_comments',
|
||||
'get_labels',
|
||||
'gfm_markdown'
|
||||
'get_labels'
|
||||
]:
|
||||
return False
|
||||
return True
|
||||
@ -380,6 +373,11 @@ class GerritProvider(GitProvider):
|
||||
'Publishing inline comments is not implemented for the gerrit '
|
||||
'provider')
|
||||
|
||||
def create_inline_comment(self, body: str, relevant_file: str,
|
||||
relevant_line_in_file: str):
|
||||
raise NotImplementedError(
|
||||
'Creating inline comments is not implemented for the gerrit '
|
||||
'provider')
|
||||
|
||||
def publish_labels(self, labels):
|
||||
# Not applicable to the local git provider,
|
||||
@ -391,8 +389,5 @@ class GerritProvider(GitProvider):
|
||||
# shutil.rmtree(self.repo_path)
|
||||
pass
|
||||
|
||||
def remove_comment(self, comment):
|
||||
pass
|
||||
|
||||
def get_pr_branch(self):
|
||||
return self.repo.head
|
||||
|
@ -1,3 +1,4 @@
|
||||
import logging
|
||||
from abc import ABC, abstractmethod
|
||||
from dataclasses import dataclass
|
||||
|
||||
@ -5,16 +6,12 @@ from dataclasses import dataclass
|
||||
from enum import Enum
|
||||
from typing import Optional
|
||||
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.log import get_logger
|
||||
|
||||
|
||||
class EDIT_TYPE(Enum):
|
||||
ADDED = 1
|
||||
DELETED = 2
|
||||
MODIFIED = 3
|
||||
RENAMED = 4
|
||||
UNKNOWN = 5
|
||||
|
||||
|
||||
@dataclass
|
||||
@ -24,10 +21,8 @@ class FilePatchInfo:
|
||||
patch: str
|
||||
filename: str
|
||||
tokens: int = -1
|
||||
edit_type: EDIT_TYPE = EDIT_TYPE.UNKNOWN
|
||||
edit_type: EDIT_TYPE = EDIT_TYPE.MODIFIED
|
||||
old_filename: str = None
|
||||
num_plus_lines: int = -1
|
||||
num_minus_lines: int = -1
|
||||
|
||||
|
||||
class GitProvider(ABC):
|
||||
@ -43,10 +38,38 @@ class GitProvider(ABC):
|
||||
def publish_description(self, pr_title: str, pr_body: str):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def publish_comment(self, pr_comment: str, is_temporary: bool = False):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def publish_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def create_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def publish_inline_comments(self, comments: list[dict]):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def publish_code_suggestions(self, code_suggestions: list) -> bool:
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def publish_labels(self, labels):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def get_labels(self):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def remove_initial_comment(self):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def get_languages(self):
|
||||
pass
|
||||
@ -65,116 +88,31 @@ class GitProvider(ABC):
|
||||
|
||||
def get_pr_description(self, *, full: bool = True) -> str:
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.algo.utils import clip_tokens
|
||||
max_tokens_description = get_settings().get("CONFIG.MAX_DESCRIPTION_TOKENS", None)
|
||||
from pr_agent.algo.pr_processing import clip_tokens
|
||||
max_tokens = get_settings().get("CONFIG.MAX_DESCRIPTION_TOKENS", None)
|
||||
description = self.get_pr_description_full() if full else self.get_user_description()
|
||||
if max_tokens_description:
|
||||
return clip_tokens(description, max_tokens_description)
|
||||
if max_tokens:
|
||||
return clip_tokens(description, max_tokens)
|
||||
return description
|
||||
|
||||
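# Illustrative configuration (the key comes from the code above, the value is made up):
# setting CONFIG.MAX_DESCRIPTION_TOKENS = 500 makes get_pr_description() clip the
# returned description via clip_tokens(); when it is unset, the full text is returned.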
def get_user_description(self) -> str:
|
||||
description = (self.get_pr_description_full() or "").strip()
|
||||
description_lowercase = description.lower()
|
||||
get_logger().info(f"Existing description:\n{description_lowercase}")
|
||||
|
||||
# if the existing description wasn't generated by the pr-agent, just return it as-is
|
||||
if not self._is_generated_by_pr_agent(description_lowercase):
|
||||
get_logger().info(f"Existing description was not generated by the pr-agent")
|
||||
if not description.startswith("## PR Type"):
|
||||
return description
|
||||
|
||||
# if the existing description was generated by the pr-agent, but it doesn't contain a user description,
|
||||
# if the existing description was generated by the pr-agent, but it doesn't contain the user description,
|
||||
# return nothing (empty string) because it means there is no user description
|
||||
user_description_header = "## **user description**"
|
||||
if user_description_header not in description_lowercase:
|
||||
get_logger().info(f"Existing description was generated by the pr-agent, but it doesn't contain a user description")
|
||||
if "## User Description:" not in description:
|
||||
return ""
|
||||
|
||||
# otherwise, extract the original user description from the existing pr-agent description and return it
|
||||
# user_description_start_position = description_lowercase.find(user_description_header) + len(user_description_header)
|
||||
# return description[user_description_start_position:].split("\n", 1)[-1].strip()
|
||||
|
||||
# the 'user description' is in the beginning. extract and return it
|
||||
possible_headers = self._possible_headers()
|
||||
start_position = description_lowercase.find(user_description_header) + len(user_description_header)
|
||||
end_position = len(description)
|
||||
for header in possible_headers: # try to clip at the next header
|
||||
if header != user_description_header and header in description_lowercase:
|
||||
end_position = min(end_position, description_lowercase.find(header))
|
||||
if end_position != len(description) and end_position > start_position:
|
||||
original_user_description = description[start_position:end_position].strip()
|
||||
if original_user_description.endswith("___"):
|
||||
original_user_description = original_user_description[:-3].strip()
|
||||
else:
|
||||
original_user_description = description.split("___")[0].strip()
|
||||
if original_user_description.lower().startswith(user_description_header):
|
||||
original_user_description = original_user_description[len(user_description_header):].strip()
|
||||
|
||||
get_logger().info(f"Extracted user description from existing description:\n{original_user_description}")
|
||||
return original_user_description
|
||||
|
||||
def _possible_headers(self):
|
||||
return ("## **user description**", "## **pr type**", "## **pr description**", "## **pr labels**", "## **type**", "## **description**",
|
||||
"## **labels**", "### 🤖 generated by pr agent")
|
||||
|
||||
def _is_generated_by_pr_agent(self, description_lowercase: str) -> bool:
|
||||
possible_headers = self._possible_headers()
|
||||
return any(description_lowercase.startswith(header) for header in possible_headers)
|
||||
|
||||
@abstractmethod
|
||||
def get_repo_settings(self):
|
||||
pass
|
||||
|
||||
def get_pr_id(self):
|
||||
return ""
|
||||
|
||||
def get_line_link(self, relevant_file: str, relevant_line_start: int, relevant_line_end: int = None) -> str:
|
||||
return ""
|
||||
|
||||
#### comments operations ####
|
||||
@abstractmethod
|
||||
def publish_comment(self, pr_comment: str, is_temporary: bool = False):
|
||||
pass
|
||||
|
||||
def publish_persistent_comment(self, pr_comment: str, initial_header: str, update_header: bool):
|
||||
self.publish_comment(pr_comment)
|
||||
|
||||
@abstractmethod
|
||||
def publish_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str):
|
||||
pass
|
||||
|
||||
def create_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str,
|
||||
absolute_position: int = None):
|
||||
raise NotImplementedError("This git provider does not support creating inline comments yet")
|
||||
|
||||
@abstractmethod
|
||||
def publish_inline_comments(self, comments: list[dict]):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def remove_initial_comment(self):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def remove_comment(self, comment):
|
||||
pass
|
||||
return description.split("## User Description:", 1)[1].strip()
|
||||
|
||||
@abstractmethod
|
||||
def get_issue_comments(self):
|
||||
pass
|
||||
|
||||
def get_comment_url(self, comment) -> str:
|
||||
return ""
|
||||
|
||||
#### labels operations ####
|
||||
@abstractmethod
|
||||
def publish_labels(self, labels):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def get_pr_labels(self):
|
||||
pass
|
||||
|
||||
def get_repo_labels(self):
|
||||
def get_repo_settings(self):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
@ -185,78 +123,49 @@ class GitProvider(ABC):
|
||||
def remove_reaction(self, issue_comment_id: int, reaction_id: int) -> bool:
|
||||
pass
|
||||
|
||||
#### commits operations ####
|
||||
@abstractmethod
|
||||
def get_commit_messages(self):
|
||||
pass
|
||||
|
||||
def get_latest_commit_url(self) -> str:
|
||||
return ""
|
||||
|
||||
def get_main_pr_language(languages, files) -> str:
|
||||
"""
|
||||
Get the main language of the commit. Return an empty string if cannot determine.
|
||||
"""
|
||||
main_language_str = ""
|
||||
if not languages:
|
||||
get_logger().info("No languages detected")
|
||||
return main_language_str
|
||||
if not files:
|
||||
get_logger().info("No files in diff")
|
||||
return main_language_str
|
||||
|
||||
try:
|
||||
top_language = max(languages, key=languages.get).lower()
|
||||
|
||||
# validate that the specific commit uses the main language
|
||||
extension_list = []
|
||||
for file in files:
|
||||
if not file:
|
||||
continue
|
||||
if isinstance(file, str):
|
||||
file = FilePatchInfo(base_file=None, head_file=None, patch=None, filename=file)
|
||||
extension_list.append(file.filename.rsplit('.')[-1])
|
||||
|
||||
# get the most common extension
|
||||
most_common_extension = '.' + max(set(extension_list), key=extension_list.count)
|
||||
try:
|
||||
language_extension_map_org = get_settings().language_extension_map_org
|
||||
language_extension_map = {k.lower(): v for k, v in language_extension_map_org.items()}
|
||||
most_common_extension = max(set(extension_list), key=extension_list.count)
|
||||
|
||||
if top_language in language_extension_map and most_common_extension in language_extension_map[top_language]:
|
||||
# look for a match. TBD: add more languages, do this systematically
|
||||
if most_common_extension == 'py' and top_language == 'python' or \
|
||||
most_common_extension == 'js' and top_language == 'javascript' or \
|
||||
most_common_extension == 'ts' and top_language == 'typescript' or \
|
||||
most_common_extension == 'go' and top_language == 'go' or \
|
||||
most_common_extension == 'java' and top_language == 'java' or \
|
||||
most_common_extension == 'c' and top_language == 'c' or \
|
||||
most_common_extension == 'cpp' and top_language == 'c++' or \
|
||||
most_common_extension == 'cs' and top_language == 'c#' or \
|
||||
most_common_extension == 'swift' and top_language == 'swift' or \
|
||||
most_common_extension == 'php' and top_language == 'php' or \
|
||||
most_common_extension == 'rb' and top_language == 'ruby' or \
|
||||
most_common_extension == 'rs' and top_language == 'rust' or \
|
||||
most_common_extension == 'scala' and top_language == 'scala' or \
|
||||
most_common_extension == 'kt' and top_language == 'kotlin' or \
|
||||
most_common_extension == 'pl' and top_language == 'perl' or \
|
||||
most_common_extension == top_language:
|
||||
main_language_str = top_language
|
||||
else:
|
||||
for language, extensions in language_extension_map.items():
|
||||
if most_common_extension in extensions:
|
||||
main_language_str = language
|
||||
break
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Failed to get main language: {e}")
|
||||
pass
|
||||
|
||||
## old approach:
|
||||
# most_common_extension = max(set(extension_list), key=extension_list.count)
|
||||
# if most_common_extension == 'py' and top_language == 'python' or \
|
||||
# most_common_extension == 'js' and top_language == 'javascript' or \
|
||||
# most_common_extension == 'ts' and top_language == 'typescript' or \
|
||||
# most_common_extension == 'tsx' and top_language == 'typescript' or \
|
||||
# most_common_extension == 'go' and top_language == 'go' or \
|
||||
# most_common_extension == 'java' and top_language == 'java' or \
|
||||
# most_common_extension == 'c' and top_language == 'c' or \
|
||||
# most_common_extension == 'cpp' and top_language == 'c++' or \
|
||||
# most_common_extension == 'cs' and top_language == 'c#' or \
|
||||
# most_common_extension == 'swift' and top_language == 'swift' or \
|
||||
# most_common_extension == 'php' and top_language == 'php' or \
|
||||
# most_common_extension == 'rb' and top_language == 'ruby' or \
|
||||
# most_common_extension == 'rs' and top_language == 'rust' or \
|
||||
# most_common_extension == 'scala' and top_language == 'scala' or \
|
||||
# most_common_extension == 'kt' and top_language == 'kotlin' or \
|
||||
# most_common_extension == 'pl' and top_language == 'perl' or \
|
||||
# most_common_extension == top_language:
|
||||
# main_language_str = top_language
|
||||
|
||||
except Exception as e:
|
||||
get_logger().exception(e)
|
||||
logging.exception(e)
|
||||
pass
|
||||
|
||||
return main_language_str
|
||||
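# Illustrative call (inputs are made up): with languages={"Python": 120000, "Shell": 500}
# and files=["pr_agent/cli.py", "pr_agent/agent.py"], the top language is "python",
# the most common extension is "py", the two match, and the function would return "python".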
@ -266,13 +175,6 @@ class IncrementalPR:
|
||||
def __init__(self, is_incremental: bool = False):
|
||||
self.is_incremental = is_incremental
|
||||
self.commits_range = None
|
||||
self.first_new_commit = None
|
||||
self.last_seen_commit = None
|
||||
self.first_new_commit_sha = None
|
||||
self.last_seen_commit_sha = None
|
||||
|
||||
@property
|
||||
def first_new_commit_sha(self):
|
||||
return None if self.first_new_commit is None else self.first_new_commit.sha
|
||||
|
||||
@property
|
||||
def last_seen_commit_sha(self):
|
||||
return None if self.last_seen_commit is None else self.last_seen_commit.sha
|
||||
|
@ -1,20 +1,20 @@
|
||||
import time
|
||||
import logging
|
||||
import hashlib
|
||||
|
||||
from datetime import datetime
|
||||
from typing import Optional, Tuple
|
||||
from typing import Optional, Tuple, Any
|
||||
from urllib.parse import urlparse
|
||||
|
||||
from github import AppAuthentication, Auth, Github, GithubException
|
||||
from github import AppAuthentication, Auth, Github, GithubException, Reaction
|
||||
from retry import retry
|
||||
from starlette_context import context
|
||||
|
||||
from .git_provider import FilePatchInfo, GitProvider, IncrementalPR
|
||||
from ..algo.language_handler import is_valid_file
|
||||
from ..algo.pr_processing import find_line_number_of_relevant_line_in_file
|
||||
from ..algo.utils import load_large_diff, clip_tokens
|
||||
from ..algo.utils import load_large_diff
|
||||
from ..algo.pr_processing import find_line_number_of_relevant_line_in_file, clip_tokens
|
||||
from ..config_loader import get_settings
|
||||
from ..log import get_logger
|
||||
from ..servers.utils import RateLimitExceeded
|
||||
from .git_provider import FilePatchInfo, GitProvider, IncrementalPR, EDIT_TYPE
|
||||
|
||||
|
||||
class GithubProvider(GitProvider):
|
||||
@ -32,10 +32,9 @@ class GithubProvider(GitProvider):
|
||||
self.diff_files = None
|
||||
self.git_files = None
|
||||
self.incremental = incremental
|
||||
if pr_url and 'pull' in pr_url:
|
||||
if pr_url:
|
||||
self.set_pr(pr_url)
|
||||
self.last_commit_id = list(self.pr.get_commits())[-1]
|
||||
self.pr_url = self.get_pr_url() # pr_url for github actions can be as api.github.com, so we need to get the url from the pr object
|
||||
|
||||
def is_supported(self, capability: str) -> bool:
|
||||
return True
|
||||
@ -52,44 +51,36 @@ class GithubProvider(GitProvider):
|
||||
def get_incremental_commits(self):
|
||||
self.commits = list(self.pr.get_commits())
|
||||
|
||||
self.previous_review = self.get_previous_review(full=True, incremental=True)
|
||||
self.get_previous_review()
|
||||
if self.previous_review:
|
||||
self.incremental.commits_range = self.get_commit_range()
|
||||
# Get all files changed during the commit range
|
||||
self.file_set = dict()
|
||||
for commit in self.incremental.commits_range:
|
||||
if commit.commit.message.startswith(f"Merge branch '{self._get_repo().default_branch}'"):
|
||||
get_logger().info(f"Skipping merge commit {commit.commit.message}")
|
||||
logging.info(f"Skipping merge commit {commit.commit.message}")
|
||||
continue
|
||||
self.file_set.update({file.filename: file for file in commit.files})
|
||||
else:
|
||||
raise ValueError("No previous review found")
|
||||
|
||||
def get_commit_range(self):
|
||||
last_review_time = self.previous_review.created_at
|
||||
first_new_commit_index = None
|
||||
first_new_commit_index = 0
|
||||
for index in range(len(self.commits) - 1, -1, -1):
|
||||
if self.commits[index].commit.author.date > last_review_time:
|
||||
self.incremental.first_new_commit = self.commits[index]
|
||||
self.incremental.first_new_commit_sha = self.commits[index].sha
|
||||
first_new_commit_index = index
|
||||
else:
|
||||
self.incremental.last_seen_commit = self.commits[index]
|
||||
self.incremental.last_seen_commit_sha = self.commits[index].sha
|
||||
break
|
||||
return self.commits[first_new_commit_index:] if first_new_commit_index is not None else []
|
||||
return self.commits[first_new_commit_index:]
|
||||
|
||||
def get_previous_review(self, *, full: bool, incremental: bool):
|
||||
if not (full or incremental):
|
||||
raise ValueError("At least one of full or incremental must be True")
|
||||
if not getattr(self, "comments", None):
|
||||
def get_previous_review(self):
|
||||
self.previous_review = None
|
||||
self.comments = list(self.pr.get_issue_comments())
|
||||
prefixes = []
|
||||
if full:
|
||||
prefixes.append("## PR Analysis")
|
||||
if incremental:
|
||||
prefixes.append("## Incremental PR Review")
|
||||
for index in range(len(self.comments) - 1, -1, -1):
|
||||
if any(self.comments[index].body.startswith(prefix) for prefix in prefixes):
|
||||
return self.comments[index]
|
||||
if self.comments[index].body.startswith("## PR Analysis"):
|
||||
self.previous_review = self.comments[index]
|
||||
break
|
||||
|
||||
def get_files(self):
|
||||
if self.incremental.is_incremental and self.file_set:
|
||||
@ -133,68 +124,22 @@ class GithubProvider(GitProvider):
|
||||
if not patch:
|
||||
patch = load_large_diff(file.filename, new_file_content_str, original_file_content_str)
|
||||
|
||||
if file.status == 'added':
|
||||
edit_type = EDIT_TYPE.ADDED
|
||||
elif file.status == 'removed':
|
||||
edit_type = EDIT_TYPE.DELETED
|
||||
elif file.status == 'renamed':
|
||||
edit_type = EDIT_TYPE.RENAMED
|
||||
elif file.status == 'modified':
|
||||
edit_type = EDIT_TYPE.MODIFIED
|
||||
else:
|
||||
get_logger().error(f"Unknown edit type: {file.status}")
|
||||
edit_type = EDIT_TYPE.UNKNOWN
|
||||
|
||||
# count number of lines added and removed
|
||||
patch_lines = patch.splitlines(keepends=True)
|
||||
num_plus_lines = len([line for line in patch_lines if line.startswith('+')])
|
||||
num_minus_lines = len([line for line in patch_lines if line.startswith('-')])
|
||||
file_patch_canonical_structure = FilePatchInfo(original_file_content_str, new_file_content_str, patch,
|
||||
file.filename, edit_type=edit_type,
|
||||
num_plus_lines=num_plus_lines,
|
||||
num_minus_lines=num_minus_lines,)
|
||||
diff_files.append(file_patch_canonical_structure)
|
||||
diff_files.append(FilePatchInfo(original_file_content_str, new_file_content_str, patch, file.filename))
|
||||
|
||||
self.diff_files = diff_files
|
||||
return diff_files
|
||||
|
||||
except GithubException.RateLimitExceededException as e:
|
||||
get_logger().error(f"Rate limit exceeded for GitHub API. Original message: {e}")
|
||||
logging.error(f"Rate limit exceeded for GitHub API. Original message: {e}")
|
||||
raise RateLimitExceeded("Rate limit exceeded for GitHub API.") from e
|
||||
|
||||
def publish_description(self, pr_title: str, pr_body: str):
|
||||
self.pr.edit(title=pr_title, body=pr_body)
|
||||
|
||||
def get_latest_commit_url(self) -> str:
|
||||
return self.last_commit_id.html_url
|
||||
|
||||
def get_comment_url(self, comment) -> str:
|
||||
return comment.html_url
|
||||
|
||||
def publish_persistent_comment(self, pr_comment: str, initial_header: str, update_header: bool = True):
|
||||
prev_comments = list(self.pr.get_issue_comments())
|
||||
for comment in prev_comments:
|
||||
body = comment.body
|
||||
if body.startswith(initial_header):
|
||||
latest_commit_url = self.get_latest_commit_url()
|
||||
comment_url = self.get_comment_url(comment)
|
||||
if update_header:
|
||||
updated_header = f"{initial_header}\n\n### (review updated until commit {latest_commit_url})\n"
|
||||
pr_comment_updated = pr_comment.replace(initial_header, updated_header)
|
||||
else:
|
||||
pr_comment_updated = pr_comment
|
||||
get_logger().info(f"Persistent mode- updating comment {comment_url} to latest review message")
|
||||
response = comment.edit(pr_comment_updated)
|
||||
self.publish_comment(
|
||||
f"**[Persistent review]({comment_url})** updated to latest commit {latest_commit_url}")
|
||||
return
|
||||
self.publish_comment(pr_comment)
|
||||
|
||||
def publish_comment(self, pr_comment: str, is_temporary: bool = False):
|
||||
if is_temporary and not get_settings().config.publish_output_progress:
|
||||
get_logger().debug(f"Skipping publish_comment for temporary comment: {pr_comment}")
|
||||
logging.debug(f"Skipping publish_comment for temporary comment: {pr_comment}")
|
||||
return
|
||||
|
||||
response = self.pr.create_issue_comment(pr_comment)
|
||||
if hasattr(response, "user") and hasattr(response.user, "login"):
|
||||
self.github_user_id = response.user.login
|
||||
@ -207,129 +152,19 @@ class GithubProvider(GitProvider):
|
||||
self.publish_inline_comments([self.create_inline_comment(body, relevant_file, relevant_line_in_file)])
|
||||
|
||||
|
||||
def create_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str,
|
||||
absolute_position: int = None):
|
||||
position, absolute_position = find_line_number_of_relevant_line_in_file(self.diff_files,
|
||||
relevant_file.strip('`'),
|
||||
relevant_line_in_file,
|
||||
absolute_position)
|
||||
def create_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str):
|
||||
position, absolute_position = find_line_number_of_relevant_line_in_file(self.diff_files, relevant_file.strip('`'), relevant_line_in_file)
|
||||
if position == -1:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
|
||||
logging.info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
|
||||
subject_type = "FILE"
|
||||
else:
|
||||
subject_type = "LINE"
|
||||
path = relevant_file.strip()
|
||||
return dict(body=body, path=path, position=position) if subject_type == "LINE" else {}
|
||||
|
||||
def publish_inline_comments(self, comments: list[dict], disable_fallback: bool = False):
|
||||
try:
|
||||
# publish all comments in a single message
|
||||
def publish_inline_comments(self, comments: list[dict]):
|
||||
self.pr.create_review(commit=self.last_commit_id, comments=comments)
|
||||
except Exception as e:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().error(f"Failed to publish inline comments")
|
||||
|
||||
if (getattr(e, "status", None) == 422
|
||||
and get_settings().github.publish_inline_comments_fallback_with_verification and not disable_fallback):
|
||||
pass # continue to try _publish_inline_comments_fallback_with_verification
|
||||
else:
|
||||
raise e # will end up with publishing the comments one by one
|
||||
|
||||
try:
|
||||
self._publish_inline_comments_fallback_with_verification(comments)
|
||||
except Exception as e:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().error(f"Failed to publish inline code comments fallback, error: {e}")
|
||||
raise e
|
||||
|
||||
def _publish_inline_comments_fallback_with_verification(self, comments: list[dict]):
|
||||
"""
|
||||
Check each inline comment separately against the GitHub API and discard of invalid comments,
|
||||
then publish all the remaining valid comments in a single review.
|
||||
For invalid comments, also try removing the suggestion part and posting the comment just on the first line.
|
||||
"""
|
||||
verified_comments, invalid_comments = self._verify_code_comments(comments)
|
||||
|
||||
# publish as a group the verified comments
|
||||
if verified_comments:
|
||||
try:
|
||||
self.pr.create_review(commit=self.last_commit_id, comments=verified_comments)
|
||||
except:
|
||||
pass
|
||||
|
||||
# try to publish one by one the invalid comments as a one-line code comment
|
||||
if invalid_comments and get_settings().github.try_fix_invalid_inline_comments:
|
||||
fixed_comments_as_one_liner = self._try_fix_invalid_inline_comments(
|
||||
[comment for comment, _ in invalid_comments])
|
||||
for comment in fixed_comments_as_one_liner:
|
||||
try:
|
||||
self.publish_inline_comments([comment], disable_fallback=True)
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"Published invalid comment as a single line comment: {comment}")
|
||||
except:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().error(f"Failed to publish invalid comment as a single line comment: {comment}")
|
||||

def _verify_code_comment(self, comment: dict):
is_verified = False
e = None
try:
# event ="" # By leaving this blank, you set the review action state to PENDING
input = dict(commit_id=self.last_commit_id.sha, comments=[comment])
headers, data = self.pr._requester.requestJsonAndCheck(
"POST", f"{self.pr.url}/reviews", input=input)
pending_review_id = data["id"]
is_verified = True
except Exception as err:
is_verified = False
pending_review_id = None
e = err
if pending_review_id is not None:
try:
self.pr._requester.requestJsonAndCheck("DELETE", f"{self.pr.url}/reviews/{pending_review_id}")
except Exception:
pass
return is_verified, e

def _verify_code_comments(self, comments: list[dict]) -> tuple[list[dict], list[tuple[dict, Exception]]]:
"""Verify each comment against the GitHub API and return two lists: one of verified and one of invalid comments"""
verified_comments = []
invalid_comments = []
for comment in comments:
time.sleep(1)  # to avoid the secondary rate limit
is_verified, e = self._verify_code_comment(comment)
if is_verified:
verified_comments.append(comment)
else:
invalid_comments.append((comment, e))
return verified_comments, invalid_comments
|
||||
|
||||
def _try_fix_invalid_inline_comments(self, invalid_comments: list[dict]) -> list[dict]:
"""
Try fixing invalid comments by removing the suggestion part and setting the comment just on the first line.
Return only comments that have been modified in some way.
This is a best-effort attempt to fix invalid comments, and should be verified accordingly.
"""
import copy
fixed_comments = []
for comment in invalid_comments:
try:
fixed_comment = copy.deepcopy(comment)  # avoid modifying the original comment dict for later logging
if "```suggestion" in comment["body"]:
fixed_comment["body"] = comment["body"].split("```suggestion")[0]
if "start_line" in comment:
fixed_comment["line"] = comment["start_line"]
del fixed_comment["start_line"]
if "start_side" in comment:
fixed_comment["side"] = comment["start_side"]
del fixed_comment["start_side"]
if fixed_comment != comment:
fixed_comments.append(fixed_comment)
except Exception as e:
if get_settings().config.verbosity_level >= 2:
get_logger().error(f"Failed to fix inline comment, error: {e}")
return fixed_comments
|
||||
|
||||
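# Illustrative example of the fallback above (the comment dict is hypothetical):
#   {"path": "cli.py", "start_line": 10, "line": 12, "body": "Use x\n```suggestion\n...\n```"}
# is reduced to a single-line comment without the suggestion block:
#   {"path": "cli.py", "line": 10, "body": "Use x\n"}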
def publish_code_suggestions(self, code_suggestions: list) -> bool:
|
||||
"""
|
||||
@ -344,13 +179,13 @@ class GithubProvider(GitProvider):
|
||||
|
||||
if not relevant_lines_start or relevant_lines_start == -1:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().exception(
|
||||
logging.exception(
|
||||
f"Failed to publish code suggestion, relevant_lines_start is {relevant_lines_start}")
|
||||
continue
|
||||
|
||||
if relevant_lines_end < relevant_lines_start:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().exception(f"Failed to publish code suggestion, "
|
||||
logging.exception(f"Failed to publish code suggestion, "
|
||||
f"relevant_lines_end is {relevant_lines_end} and "
|
||||
f"relevant_lines_start is {relevant_lines_start}")
|
||||
continue
|
||||
@ -373,26 +208,20 @@ class GithubProvider(GitProvider):
|
||||
post_parameters_list.append(post_parameters)
|
||||
|
||||
try:
|
||||
self.publish_inline_comments(post_parameters_list)
|
||||
self.pr.create_review(commit=self.last_commit_id, comments=post_parameters_list)
|
||||
return True
|
||||
except Exception as e:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().error(f"Failed to publish code suggestion, error: {e}")
|
||||
logging.error(f"Failed to publish code suggestion, error: {e}")
|
||||
return False
|
||||
|
||||
def remove_initial_comment(self):
|
||||
try:
|
||||
for comment in getattr(self.pr, 'comments_list', []):
|
||||
if comment.is_temporary:
|
||||
self.remove_comment(comment)
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Failed to remove initial comment, error: {e}")
|
||||
|
||||
def remove_comment(self, comment):
|
||||
try:
|
||||
comment.delete()
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Failed to remove comment, error: {e}")
|
||||
logging.exception(f"Failed to remove initial comment, error: {e}")
|
||||
|
||||
def get_title(self):
|
||||
return self.pr.title
|
||||
@ -410,10 +239,9 @@ class GithubProvider(GitProvider):
|
||||
def get_user_id(self):
|
||||
if not self.github_user_id:
|
||||
try:
|
||||
self.github_user_id = self.github_client.get_user().raw_data['login']
|
||||
self.github_user_id = self.github_client.get_user().login
|
||||
except Exception as e:
|
||||
self.github_user_id = ""
|
||||
# logging.exception(f"Failed to get user id, error: {e}")
|
||||
logging.exception(f"Failed to get user id, error: {e}")
|
||||
return self.github_user_id
|
||||
|
||||
def get_notifications(self, since: datetime):
|
||||
@ -430,10 +258,7 @@ class GithubProvider(GitProvider):
|
||||
|
||||
def get_repo_settings(self):
|
||||
try:
|
||||
# contents = self.repo_obj.get_contents(".pr_agent.toml", ref=self.pr.head.sha).decoded_content
|
||||
|
||||
# more logical to take 'pr_agent.toml' from the default branch
|
||||
contents = self.repo_obj.get_contents(".pr_agent.toml").decoded_content
|
||||
contents = self.repo_obj.get_contents(".pr_agent.toml", ref=self.pr.head.sha).decoded_content
|
||||
return contents
|
||||
except Exception:
|
||||
return ""
|
||||
@ -443,7 +268,7 @@ class GithubProvider(GitProvider):
|
||||
reaction = self.pr.get_issue_comment(issue_comment_id).create_reaction("eyes")
|
||||
return reaction.id
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Failed to add eyes reaction, error: {e}")
|
||||
logging.exception(f"Failed to add eyes reaction, error: {e}")
|
||||
return None
|
||||
|
||||
def remove_reaction(self, issue_comment_id: int, reaction_id: int) -> bool:
|
||||
@ -451,7 +276,7 @@ class GithubProvider(GitProvider):
|
||||
self.pr.get_issue_comment(issue_comment_id).delete_reaction(reaction_id)
|
||||
return True
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Failed to remove eyes reaction, error: {e}")
|
||||
logging.exception(f"Failed to remove eyes reaction, error: {e}")
|
||||
return False
|
||||
|
||||
|
||||
@ -484,35 +309,6 @@ class GithubProvider(GitProvider):
|
||||
|
||||
return repo_name, pr_number
|
||||
|
||||
@staticmethod
|
||||
def _parse_issue_url(issue_url: str) -> Tuple[str, int]:
|
||||
parsed_url = urlparse(issue_url)
|
||||
|
||||
if 'github.com' not in parsed_url.netloc:
|
||||
raise ValueError("The provided URL is not a valid GitHub URL")
|
||||
|
||||
path_parts = parsed_url.path.strip('/').split('/')
|
||||
if 'api.github.com' in parsed_url.netloc:
|
||||
if len(path_parts) < 5 or path_parts[3] != 'issues':
|
||||
raise ValueError("The provided URL does not appear to be a GitHub ISSUE URL")
|
||||
repo_name = '/'.join(path_parts[1:3])
|
||||
try:
|
||||
issue_number = int(path_parts[4])
|
||||
except ValueError as e:
|
||||
raise ValueError("Unable to convert issue number to integer") from e
|
||||
return repo_name, issue_number
|
||||
|
||||
if len(path_parts) < 4 or path_parts[2] != 'issues':
|
||||
raise ValueError("The provided URL does not appear to be a GitHub PR issue")
|
||||
|
||||
repo_name = '/'.join(path_parts[:2])
|
||||
try:
|
||||
issue_number = int(path_parts[3])
|
||||
except ValueError as e:
|
||||
raise ValueError("Unable to convert issue number to integer") from e
|
||||
|
||||
return repo_name, issue_number
|
||||
|
||||
def _get_github_client(self):
|
||||
deployment_type = get_settings().get("GITHUB.DEPLOYMENT_TYPE", "user")
|
||||
|
||||
@ -526,7 +322,7 @@ class GithubProvider(GitProvider):
|
||||
raise ValueError("GitHub app installation ID is required when using GitHub app deployment")
|
||||
auth = AppAuthentication(app_id=app_id, private_key=private_key,
|
||||
installation_id=self.installation_id)
|
||||
return Github(app_auth=auth, base_url=get_settings().github.base_url)
|
||||
return Github(app_auth=auth)
|
||||
|
||||
if deployment_type == 'user':
|
||||
try:
|
||||
@ -535,7 +331,7 @@ class GithubProvider(GitProvider):
|
||||
raise ValueError(
|
||||
"GitHub token is required when using user deployment. See: "
|
||||
"https://github.com/Codium-ai/pr-agent#method-2-run-from-source") from e
|
||||
return Github(auth=Auth.Token(token), base_url=get_settings().github.base_url)
|
||||
return Github(auth=Auth.Token(token))
|
||||
|
||||
def _get_repo(self):
|
||||
if hasattr(self, 'repo_obj') and \
|
||||
@ -560,7 +356,7 @@ class GithubProvider(GitProvider):
|
||||
def publish_labels(self, pr_types):
|
||||
try:
|
||||
label_color_map = {"Bug fix": "1d76db", "Tests": "e99695", "Bug fix with tests": "c5def5",
|
||||
"Enhancement": "bfd4f2", "Documentation": "d4c5f9",
|
||||
"Refactoring": "bfdadc", "Enhancement": "bfd4f2", "Documentation": "d4c5f9",
|
||||
"Other": "d1bcf9"}
|
||||
post_parameters = []
|
||||
for p in pr_types:
|
||||
@ -570,19 +366,15 @@ class GithubProvider(GitProvider):
|
||||
"PUT", f"{self.pr.issue_url}/labels", input=post_parameters
|
||||
)
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Failed to publish labels, error: {e}")
|
||||
logging.exception(f"Failed to publish labels, error: {e}")
|
||||
|
||||
def get_pr_labels(self):
|
||||
def get_labels(self):
|
||||
try:
|
||||
return [label.name for label in self.pr.labels]
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Failed to get labels, error: {e}")
|
||||
logging.exception(f"Failed to get labels, error: {e}")
|
||||
return []
|
||||
|
||||
def get_repo_labels(self):
|
||||
labels = self.repo_obj.get_labels()
|
||||
return [label for label in labels]
|
||||
|
||||
def get_commit_messages(self):
|
||||
"""
|
||||
Retrieves the commit messages of a pull request.
|
||||
@ -622,24 +414,6 @@ class GithubProvider(GitProvider):
|
||||
return link
|
||||
except Exception as e:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"Failed adding line link, error: {e}")
|
||||
logging.info(f"Failed adding line link, error: {e}")
|
||||
|
||||
return ""
|
||||
|
||||
def get_line_link(self, relevant_file: str, relevant_line_start: int, relevant_line_end: int = None) -> str:
|
||||
sha_file = hashlib.sha256(relevant_file.encode('utf-8')).hexdigest()
|
||||
if relevant_line_start == -1:
|
||||
link = f"https://github.com/{self.repo}/pull/{self.pr_num}/files#diff-{sha_file}"
|
||||
elif relevant_line_end:
|
||||
link = f"https://github.com/{self.repo}/pull/{self.pr_num}/files#diff-{sha_file}R{relevant_line_start}-R{relevant_line_end}"
|
||||
else:
|
||||
link = f"https://github.com/{self.repo}/pull/{self.pr_num}/files#diff-{sha_file}R{relevant_line_start}"
|
||||
return link
|
||||
|
||||
|
||||
def get_pr_id(self):
|
||||
try:
|
||||
pr_id = f"{self.repo}/{self.pr_num}"
|
||||
return pr_id
|
||||
except:
|
||||
return ""
|
||||
|
@ -1,4 +1,4 @@
|
||||
import hashlib
|
||||
import logging
|
||||
import re
|
||||
from typing import Optional, Tuple
|
||||
from urllib.parse import urlparse
|
||||
@ -7,12 +7,12 @@ import gitlab
|
||||
from gitlab import GitlabGetError
|
||||
|
||||
from ..algo.language_handler import is_valid_file
|
||||
from ..algo.pr_processing import find_line_number_of_relevant_line_in_file
|
||||
from ..algo.utils import load_large_diff, clip_tokens
|
||||
from ..algo.pr_processing import clip_tokens
|
||||
from ..algo.utils import load_large_diff
|
||||
from ..config_loader import get_settings
|
||||
from .git_provider import EDIT_TYPE, FilePatchInfo, GitProvider
|
||||
from ..log import get_logger
|
||||
|
||||
logger = logging.getLogger()
|
||||
|
||||
class DiffNotFoundError(Exception):
|
||||
"""Raised when the diff for a merge request cannot be found."""
|
||||
@ -37,14 +37,13 @@ class GitLabProvider(GitProvider):
|
||||
self.diff_files = None
|
||||
self.git_files = None
|
||||
self.temp_comments = []
|
||||
self.pr_url = merge_request_url
|
||||
self._set_merge_request(merge_request_url)
|
||||
self.RE_HUNK_HEADER = re.compile(
|
||||
r"^@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@[ ]?(.*)")
|
||||
self.incremental = incremental
|
||||
|
||||
def is_supported(self, capability: str) -> bool:
|
||||
if capability in ['get_issue_comments', 'create_inline_comment', 'publish_inline_comments']: # gfm_markdown is supported in gitlab !
|
||||
if capability in ['get_issue_comments', 'create_inline_comment', 'publish_inline_comments']:
|
||||
return False
|
||||
return True
|
||||
|
||||
@ -59,7 +58,7 @@ class GitLabProvider(GitProvider):
|
||||
try:
|
||||
self.last_diff = self.mr.diffs.list(get_all=True)[-1]
|
||||
except IndexError as e:
|
||||
get_logger().error(f"Could not get diff for merge request {self.id_mr}")
|
||||
logger.error(f"Could not get diff for merge request {self.id_mr}")
|
||||
raise DiffNotFoundError(f"Could not get diff for merge request {self.id_mr}") from e
|
||||
|
||||
|
||||
@ -99,7 +98,7 @@ class GitLabProvider(GitProvider):
|
||||
if isinstance(new_file_content_str, bytes):
|
||||
new_file_content_str = bytes.decode(new_file_content_str, 'utf-8')
|
||||
except UnicodeDecodeError:
|
||||
get_logger().warning(
|
||||
logging.warning(
|
||||
f"Cannot decode file {diff['old_path']} or {diff['new_path']} in merge request {self.id_mr}")
|
||||
|
||||
edit_type = EDIT_TYPE.MODIFIED
|
||||
@ -115,20 +114,12 @@ class GitLabProvider(GitProvider):
|
||||
if not patch:
|
||||
patch = load_large_diff(filename, new_file_content_str, original_file_content_str)
|
||||
|
||||
|
||||
# count number of lines added and removed
|
||||
patch_lines = patch.splitlines(keepends=True)
|
||||
num_plus_lines = len([line for line in patch_lines if line.startswith('+')])
|
||||
num_minus_lines = len([line for line in patch_lines if line.startswith('-')])
|
||||
diff_files.append(
|
||||
FilePatchInfo(original_file_content_str, new_file_content_str,
|
||||
patch=patch,
|
||||
filename=filename,
|
||||
edit_type=edit_type,
|
||||
old_filename=None if diff['old_path'] == diff['new_path'] else diff['old_path'],
|
||||
num_plus_lines=num_plus_lines,
|
||||
num_minus_lines=num_minus_lines, ))
|
||||
|
||||
old_filename=None if diff['old_path'] == diff['new_path'] else diff['old_path']))
|
||||
self.diff_files = diff_files
|
||||
return diff_files
|
||||
|
||||
@ -143,34 +134,7 @@ class GitLabProvider(GitProvider):
|
||||
self.mr.description = pr_body
|
||||
self.mr.save()
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Could not update merge request {self.id_mr} description: {e}")
|
||||
|
||||
def get_latest_commit_url(self):
|
||||
return self.mr.commits().next().web_url
|
||||
|
||||
def get_comment_url(self, comment):
|
||||
return f"{self.mr.web_url}#note_{comment.id}"
|
||||
|
||||
def publish_persistent_comment(self, pr_comment: str, initial_header: str, update_header: bool = True):
|
||||
try:
|
||||
for comment in self.mr.notes.list(get_all=True)[::-1]:
|
||||
if comment.body.startswith(initial_header):
|
||||
latest_commit_url = self.get_latest_commit_url()
|
||||
comment_url = self.get_comment_url(comment)
|
||||
if update_header:
|
||||
updated_header = f"{initial_header}\n\n### (review updated until commit {latest_commit_url})\n"
|
||||
pr_comment_updated = pr_comment.replace(initial_header, updated_header)
|
||||
else:
|
||||
pr_comment_updated = pr_comment
|
||||
get_logger().info(f"Persistent mode- updating comment {comment_url} to latest review message")
|
||||
response = self.mr.notes.update(comment.id, {'body': pr_comment_updated})
|
||||
self.publish_comment(
|
||||
f"**[Persistent review]({comment_url})** updated to latest commit {latest_commit_url}")
|
||||
return
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Failed to update persistent review, error: {e}")
|
||||
pass
|
||||
self.publish_comment(pr_comment)
|
||||
logging.exception(f"Could not update merge request {self.id_mr} description: {e}")
|
||||
|
||||
def publish_comment(self, mr_comment: str, is_temporary: bool = False):
|
||||
comment = self.mr.notes.create({'body': mr_comment})
|
||||
@ -183,7 +147,7 @@ class GitLabProvider(GitProvider):
|
||||
self.send_inline_comment(body, edit_type, found, relevant_file, relevant_line_in_file, source_line_no,
|
||||
target_file, target_line_no)
|
||||
|
||||
def create_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str, absolute_position: int = None):
|
||||
def create_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str):
|
||||
raise NotImplementedError("Gitlab provider does not support creating inline comments yet")
|
||||
|
||||
def create_inline_comments(self, comments: list[dict]):
|
||||
@ -192,12 +156,12 @@ class GitLabProvider(GitProvider):
|
||||
def send_inline_comment(self,body: str,edit_type: str,found: bool,relevant_file: str,relevant_line_in_file: int,
|
||||
source_line_no: int, target_file: str,target_line_no: int) -> None:
|
||||
if not found:
|
||||
get_logger().info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
|
||||
logging.info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
|
||||
else:
|
||||
# in order to have exact sha's we have to find correct diff for this change
|
||||
diff = self.get_relevant_diff(relevant_file, relevant_line_in_file)
|
||||
if diff is None:
|
||||
get_logger().error(f"Could not get diff for merge request {self.id_mr}")
|
||||
logger.error(f"Could not get diff for merge request {self.id_mr}")
|
||||
raise DiffNotFoundError(f"Could not get diff for merge request {self.id_mr}")
|
||||
pos_obj = {'position_type': 'text',
|
||||
'new_path': target_file.filename,
|
||||
@ -210,27 +174,24 @@ class GitLabProvider(GitProvider):
|
||||
else:
|
||||
pos_obj['new_line'] = target_line_no - 1
|
||||
pos_obj['old_line'] = source_line_no - 1
|
||||
get_logger().debug(f"Creating comment in {self.id_mr} with body {body} and position {pos_obj}")
|
||||
try:
|
||||
self.mr.discussions.create({'body': body, 'position': pos_obj})
|
||||
except Exception as e:
|
||||
get_logger().debug(
|
||||
f"Failed to create comment in {self.id_mr} with position {pos_obj} (probably not a '+' line)")
|
||||
logging.debug(f"Creating comment in {self.id_mr} with body {body} and position {pos_obj}")
|
||||
self.mr.discussions.create({'body': body,
|
||||
'position': pos_obj})
|
||||
|
||||
def get_relevant_diff(self, relevant_file: str, relevant_line_in_file: int) -> Optional[dict]:
|
||||
changes = self.mr.changes() # Retrieve the changes for the merge request once
|
||||
if not changes:
|
||||
get_logger().error('No changes found for the merge request.')
|
||||
logging.error('No changes found for the merge request.')
|
||||
return None
|
||||
all_diffs = self.mr.diffs.list(get_all=True)
|
||||
if not all_diffs:
|
||||
get_logger().error('No diffs found for the merge request.')
|
||||
logging.error('No diffs found for the merge request.')
|
||||
return None
|
||||
for diff in all_diffs:
|
||||
for change in changes['changes']:
|
||||
if change['new_path'] == relevant_file and relevant_line_in_file in change['diff']:
|
||||
return diff
|
||||
get_logger().debug(
|
||||
logging.debug(
|
||||
f'No relevant diff found for {relevant_file} {relevant_line_in_file}. Falling back to last diff.')
|
||||
return self.last_diff # fallback to last_diff if no relevant diff is found
|
||||
|
||||
@ -265,10 +226,7 @@ class GitLabProvider(GitProvider):
|
||||
self.send_inline_comment(body, edit_type, found, relevant_file, relevant_line_in_file, source_line_no,
|
||||
target_file, target_line_no)
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Could not publish code suggestion:\nsuggestion: {suggestion}\nerror: {e}")
|
||||
|
||||
# note that we publish suggestions one-by-one. so, if one fails, the rest will still be published
|
||||
return True
|
||||
logging.exception(f"Could not publish code suggestion:\nsuggestion: {suggestion}\nerror: {e}")
|
||||
|
||||
def search_line(self, relevant_file, relevant_line_in_file):
|
||||
target_file = None
|
||||
@ -327,15 +285,9 @@ class GitLabProvider(GitProvider):
|
||||
def remove_initial_comment(self):
|
||||
try:
|
||||
for comment in self.temp_comments:
|
||||
self.remove_comment(comment)
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Failed to remove temp comments, error: {e}")
|
||||
|
||||
def remove_comment(self, comment):
|
||||
try:
|
||||
comment.delete()
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Failed to remove comment, error: {e}")
|
||||
logging.exception(f"Failed to remove temp comments, error: {e}")
|
||||
|
||||
def get_title(self):
|
||||
return self.mr.title
|
||||
@ -355,7 +307,7 @@ class GitLabProvider(GitProvider):
|
||||
|
||||
def get_repo_settings(self):
|
||||
try:
|
||||
contents = self.gl.projects.get(self.id_project).files.get(file_path='.pr_agent.toml', ref=self.mr.target_branch).decode()
|
||||
contents = self.gl.projects.get(self.id_project).files.get(file_path='.pr_agent.toml', ref=self.mr.source_branch)
|
||||
return contents
|
||||
except Exception:
|
||||
return ""
|
||||
@ -403,17 +355,14 @@ class GitLabProvider(GitProvider):
|
||||
self.mr.labels = list(set(pr_types))
|
||||
self.mr.save()
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Failed to publish labels, error: {e}")
|
||||
logging.exception(f"Failed to publish labels, error: {e}")
|
||||
|
||||
def publish_inline_comments(self, comments: list[dict]):
|
||||
pass
|
||||
|
||||
def get_pr_labels(self):
|
||||
def get_labels(self):
|
||||
return self.mr.labels
|
||||
|
||||
def get_repo_labels(self):
|
||||
return self.gl.projects.get(self.id_project).labels.list()
|
||||
|
||||
def get_commit_messages(self):
|
||||
"""
|
||||
Retrieves the commit messages of a pull request.
|
||||
@ -430,44 +379,3 @@ class GitLabProvider(GitProvider):
|
||||
if max_tokens:
|
||||
commit_messages_str = clip_tokens(commit_messages_str, max_tokens)
|
||||
return commit_messages_str
|
||||
|
||||
def get_pr_id(self):
|
||||
try:
|
||||
pr_id = self.mr.web_url
|
||||
return pr_id
|
||||
except:
|
||||
return ""
|
||||
|
||||
def get_line_link(self, relevant_file: str, relevant_line_start: int, relevant_line_end: int = None) -> str:
|
||||
if relevant_line_start == -1:
|
||||
link = f"{self.gl.url}/{self.id_project}/-/blob/{self.mr.source_branch}/{relevant_file}?ref_type=heads"
|
||||
elif relevant_line_end:
|
||||
link = f"{self.gl.url}/{self.id_project}/-/blob/{self.mr.source_branch}/{relevant_file}?ref_type=heads#L{relevant_line_start}-L{relevant_line_end}"
|
||||
else:
|
||||
link = f"{self.gl.url}/{self.id_project}/-/blob/{self.mr.source_branch}/{relevant_file}?ref_type=heads#L{relevant_line_start}"
|
||||
return link
|
||||
|
||||
|
||||
def generate_link_to_relevant_line_number(self, suggestion) -> str:
|
||||
try:
|
||||
relevant_file = suggestion['relevant file'].strip('`').strip("'")
|
||||
relevant_line_str = suggestion['relevant line']
|
||||
if not relevant_line_str:
|
||||
return ""
|
||||
|
||||
position, absolute_position = find_line_number_of_relevant_line_in_file \
|
||||
(self.diff_files, relevant_file, relevant_line_str)
|
||||
|
||||
if absolute_position != -1:
|
||||
# link to right file only
|
||||
link = f"{self.gl.url}/{self.id_project}/-/blob/{self.mr.source_branch}/{relevant_file}?ref_type=heads#L{absolute_position}"
|
||||
|
||||
# # link to diff
|
||||
# sha_file = hashlib.sha1(relevant_file.encode('utf-8')).hexdigest()
|
||||
# link = f"{self.pr.web_url}/diffs#{sha_file}_{absolute_position}_{absolute_position}"
|
||||
return link
|
||||
except Exception as e:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"Failed adding line link, error: {e}")
|
||||
|
||||
return ""
|
||||
|
@ -1,3 +1,4 @@
|
||||
import logging
|
||||
from collections import Counter
|
||||
from pathlib import Path
|
||||
from typing import List
|
||||
@ -6,7 +7,6 @@ from git import Repo
|
||||
|
||||
from pr_agent.config_loader import _find_repository_root, get_settings
|
||||
from pr_agent.git_providers.git_provider import EDIT_TYPE, FilePatchInfo, GitProvider
|
||||
from pr_agent.log import get_logger
|
||||
|
||||
|
||||
class PullRequestMimic:
|
||||
@ -49,15 +49,14 @@ class LocalGitProvider(GitProvider):
|
||||
"""
|
||||
Prepare the repository for PR-mimic generation.
|
||||
"""
|
||||
get_logger().debug('Preparing repository for PR-mimic generation...')
|
||||
logging.debug('Preparing repository for PR-mimic generation...')
|
||||
if self.repo.is_dirty():
|
||||
raise ValueError('The repository is not in a clean state. Please commit or stash pending changes.')
|
||||
if self.target_branch_name not in self.repo.heads:
|
||||
raise KeyError(f'Branch: {self.target_branch_name} does not exist')
|
||||
|
||||
def is_supported(self, capability: str) -> bool:
|
||||
if capability in ['get_issue_comments', 'create_inline_comment', 'publish_inline_comments', 'get_labels',
|
||||
'gfm_markdown']:
|
||||
if capability in ['get_issue_comments', 'create_inline_comment', 'publish_inline_comments', 'get_labels']:
|
||||
return False
|
||||
return True
|
||||
|
||||
@ -121,6 +120,9 @@ class LocalGitProvider(GitProvider):
|
||||
def publish_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str):
|
||||
raise NotImplementedError('Publishing inline comments is not implemented for the local git provider')
|
||||
|
||||
def create_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str):
|
||||
raise NotImplementedError('Creating inline comments is not implemented for the local git provider')
|
||||
|
||||
def publish_inline_comments(self, comments: list[dict]):
|
||||
raise NotImplementedError('Publishing inline comments is not implemented for the local git provider')
|
||||
|
||||
@ -137,9 +139,6 @@ class LocalGitProvider(GitProvider):
|
||||
def remove_initial_comment(self):
|
||||
pass # Not applicable to the local git provider, but required by the interface
|
||||
|
||||
def remove_comment(self, comment):
|
||||
pass # Not applicable to the local git provider, but required by the interface
|
||||
|
||||
def get_languages(self):
|
||||
"""
|
||||
Calculate percentage of languages in repository. Used for hunk prioritisation.
|
||||
@ -175,5 +174,5 @@ class LocalGitProvider(GitProvider):
|
||||
def get_issue_comments(self):
|
||||
raise NotImplementedError('Getting issue comments is not implemented for the local git provider')
|
||||
|
||||
def get_pr_labels(self):
|
||||
def get_labels(self):
|
||||
raise NotImplementedError('Getting labels is not implemented for the local git provider')
|
||||
|
@ -1,37 +0,0 @@
|
||||
import copy
import os
import tempfile

from dynaconf import Dynaconf

from pr_agent.config_loader import get_settings
from pr_agent.git_providers import get_git_provider
from pr_agent.log import get_logger


def apply_repo_settings(pr_url):
    if get_settings().config.use_repo_settings_file:
        repo_settings_file = None
        try:
            git_provider = get_git_provider()(pr_url)
            repo_settings = git_provider.get_repo_settings()
            if repo_settings:
                repo_settings_file = None
                fd, repo_settings_file = tempfile.mkstemp(suffix='.toml')
                os.write(fd, repo_settings)
                new_settings = Dynaconf(settings_files=[repo_settings_file])
                for section, contents in new_settings.as_dict().items():
                    section_dict = copy.deepcopy(get_settings().as_dict().get(section, {}))
                    for key, value in contents.items():
                        section_dict[key] = value
                    get_settings().unset(section)
                    get_settings().set(section, section_dict, merge=False)
                    get_logger().info(f"Applying repo settings for section {section}, contents: {contents}")
        except Exception as e:
            get_logger().exception("Failed to apply repo settings", e)
        finally:
            if repo_settings_file:
                try:
                    os.remove(repo_settings_file)
                except Exception as e:
                    get_logger().error(f"Failed to remove temporary settings file {repo_settings_file}", e)
|
@ -1,40 +0,0 @@
|
||||
import json
import logging
import sys
from enum import Enum

from loguru import logger


class LoggingFormat(str, Enum):
    CONSOLE = "CONSOLE"
    JSON = "JSON"


def json_format(record: dict) -> str:
    return record["message"]


def setup_logger(level: str = "INFO", fmt: LoggingFormat = LoggingFormat.CONSOLE):
    level: int = logging.getLevelName(level.upper())
    if type(level) is not int:
        level = logging.INFO

    if fmt == LoggingFormat.JSON:
        logger.remove(None)
        logger.add(
            sys.stdout,
            level=level,
            format="{message}",
            colorize=False,
            serialize=True,
        )
    elif fmt == LoggingFormat.CONSOLE:
        logger.remove(None)
        logger.add(sys.stdout, level=level, colorize=True)

    return logger


def get_logger(*args, **kwargs):
    return logger
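A short sketch of how this logging module is wired up by the servers elsewhere in this compare. The `setup_logger` call at import time and the `contextualize` usage are taken from the webhook handlers below; the specific context values are placeholders:

```python
from pr_agent.log import LoggingFormat, get_logger, setup_logger

# Configure the shared loguru logger once at process start-up;
# JSON output is what the webhook servers use for log aggregation.
setup_logger(fmt=LoggingFormat.JSON)

# Anywhere else, fetch the same logger and attach request context.
log_context = {"server_type": "example_server", "event": "pull_request"}
with get_logger().contextualize(**log_context):
    get_logger().info("Handling request")
```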
|
@ -1,8 +1,9 @@
|
||||
import ujson
|
||||
|
||||
from google.cloud import storage
|
||||
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.log import get_logger
|
||||
from pr_agent.git_providers.gitlab_provider import logger
|
||||
from pr_agent.secret_providers.secret_provider import SecretProvider
|
||||
|
||||
|
||||
@ -14,7 +15,7 @@ class GoogleCloudStorageSecretProvider(SecretProvider):
|
||||
self.bucket_name = get_settings().google_cloud_storage.bucket_name
|
||||
self.bucket = self.client.bucket(self.bucket_name)
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed to initialize Google Cloud Storage Secret Provider: {e}")
|
||||
logger.error(f"Failed to initialize Google Cloud Storage Secret Provider: {e}")
|
||||
raise e
|
||||
|
||||
def get_secret(self, secret_name: str) -> str:
|
||||
@ -22,7 +23,7 @@ class GoogleCloudStorageSecretProvider(SecretProvider):
|
||||
blob = self.bucket.blob(secret_name)
|
||||
return blob.download_as_string()
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed to get secret {secret_name} from Google Cloud Storage: {e}")
|
||||
logger.error(f"Failed to get secret {secret_name} from Google Cloud Storage: {e}")
|
||||
return ""
|
||||
|
||||
def store_secret(self, secret_name: str, secret_value: str):
|
||||
@ -30,5 +31,5 @@ class GoogleCloudStorageSecretProvider(SecretProvider):
|
||||
blob = self.bucket.blob(secret_name)
|
||||
blob.upload_from_string(secret_value)
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed to store secret {secret_name} in Google Cloud Storage: {e}")
|
||||
logger.error(f"Failed to store secret {secret_name} in Google Cloud Storage: {e}")
|
||||
raise e
|
||||
|
@ -1,7 +1,9 @@
|
||||
import copy
|
||||
import hashlib
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import sys
|
||||
import time
|
||||
|
||||
import jwt
|
||||
@ -16,15 +18,9 @@ from starlette_context.middleware import RawContextMiddleware
|
||||
|
||||
from pr_agent.agent.pr_agent import PRAgent
|
||||
from pr_agent.config_loader import get_settings, global_settings
|
||||
from pr_agent.git_providers.utils import apply_repo_settings
|
||||
from pr_agent.log import LoggingFormat, get_logger, setup_logger
|
||||
from pr_agent.secret_providers import get_secret_provider
|
||||
from pr_agent.servers.github_action_runner import get_setting_or_env, is_true
|
||||
from pr_agent.tools.pr_code_suggestions import PRCodeSuggestions
|
||||
from pr_agent.tools.pr_description import PRDescription
|
||||
from pr_agent.tools.pr_reviewer import PRReviewer
|
||||
|
||||
setup_logger(fmt=LoggingFormat.JSON)
|
||||
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
|
||||
router = APIRouter()
|
||||
secret_provider = get_secret_provider()
|
||||
|
||||
@ -53,7 +49,7 @@ async def get_bearer_token(shared_secret: str, client_key: str):
|
||||
bearer_token = response.json()["access_token"]
|
||||
return bearer_token
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed to get bearer token: {e}")
|
||||
logging.error(f"Failed to get bearer token: {e}")
|
||||
raise e
|
||||
|
||||
@router.get("/")
|
||||
@ -64,23 +60,21 @@ async def handle_manifest(request: Request, response: Response):
|
||||
manifest = manifest.replace("app_key", get_settings().bitbucket.app_key)
|
||||
manifest = manifest.replace("base_url", get_settings().bitbucket.base_url)
|
||||
except:
|
||||
get_logger().error("Failed to replace api_key in Bitbucket manifest, trying to continue")
|
||||
logging.error("Failed to replace api_key in Bitbucket manifest, trying to continue")
|
||||
manifest_obj = json.loads(manifest)
|
||||
return JSONResponse(manifest_obj)
|
||||
|
||||
@router.post("/webhook")
|
||||
async def handle_github_webhooks(background_tasks: BackgroundTasks, request: Request):
|
||||
log_context = {"server_type": "bitbucket_app"}
|
||||
get_logger().debug(request.headers)
|
||||
print(request.headers)
|
||||
jwt_header = request.headers.get("authorization", None)
|
||||
if jwt_header:
|
||||
input_jwt = jwt_header.split(" ")[1]
|
||||
data = await request.json()
|
||||
get_logger().debug(data)
|
||||
print(data)
|
||||
async def inner():
|
||||
try:
|
||||
owner = data["data"]["repository"]["owner"]["username"]
|
||||
log_context["sender"] = owner
|
||||
secrets = json.loads(secret_provider.get_secret(owner))
|
||||
shared_secret = secrets["shared_secret"]
|
||||
client_key = secrets["client_key"]
|
||||
@ -92,31 +86,13 @@ async def handle_github_webhooks(background_tasks: BackgroundTasks, request: Req
|
||||
agent = PRAgent()
|
||||
if event == "pullrequest:created":
|
||||
pr_url = data["data"]["pullrequest"]["links"]["html"]["href"]
|
||||
log_context["api_url"] = pr_url
|
||||
log_context["event"] = "pull_request"
|
||||
if pr_url:
|
||||
with get_logger().contextualize(**log_context):
|
||||
apply_repo_settings(pr_url)
|
||||
auto_review = get_setting_or_env("BITBUCKET_APP.AUTO_REVIEW", None)
|
||||
if auto_review is None or is_true(auto_review): # by default, auto review is enabled
|
||||
await PRReviewer(pr_url).run()
|
||||
auto_improve = get_setting_or_env("BITBUCKET_APP.AUTO_IMPROVE", None)
|
||||
if is_true(auto_improve): # by default, auto improve is disabled
|
||||
await PRCodeSuggestions(pr_url).run()
|
||||
auto_describe = get_setting_or_env("BITBUCKET_APP.AUTO_DESCRIBE", None)
|
||||
if is_true(auto_describe): # by default, auto describe is disabled
|
||||
await PRDescription(pr_url).run()
|
||||
# with get_logger().contextualize(**log_context):
|
||||
# await agent.handle_request(pr_url, "review")
|
||||
await agent.handle_request(pr_url, "review")
|
||||
elif event == "pullrequest:comment_created":
|
||||
pr_url = data["data"]["pullrequest"]["links"]["html"]["href"]
|
||||
log_context["api_url"] = pr_url
|
||||
log_context["event"] = "comment"
|
||||
comment_body = data["data"]["comment"]["content"]["raw"]
|
||||
with get_logger().contextualize(**log_context):
|
||||
await agent.handle_request(pr_url, comment_body)
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed to handle webhook: {e}")
|
||||
logging.error(f"Failed to handle webhook: {e}")
|
||||
background_tasks.add_task(inner)
|
||||
return "OK"
|
||||
|
||||
@ -127,10 +103,9 @@ async def handle_github_webhooks(request: Request, response: Response):
|
||||
@router.post("/installed")
|
||||
async def handle_installed_webhooks(request: Request, response: Response):
|
||||
try:
|
||||
get_logger().info("handle_installed_webhooks")
|
||||
get_logger().info(request.headers)
|
||||
print(request.headers)
|
||||
data = await request.json()
|
||||
get_logger().info(data)
|
||||
print(data)
|
||||
shared_secret = data["sharedSecret"]
|
||||
client_key = data["clientKey"]
|
||||
username = data["principal"]["username"]
|
||||
@ -140,15 +115,13 @@ async def handle_installed_webhooks(request: Request, response: Response):
|
||||
}
|
||||
secret_provider.store_secret(username, json.dumps(secrets))
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed to register user: {e}")
|
||||
logging.error(f"Failed to register user: {e}")
|
||||
return JSONResponse({"error": "Unable to register user"}, status_code=500)
|
||||
|
||||
@router.post("/uninstalled")
|
||||
async def handle_uninstalled_webhooks(request: Request, response: Response):
|
||||
get_logger().info("handle_uninstalled_webhooks")
|
||||
|
||||
data = await request.json()
|
||||
get_logger().info(data)
|
||||
print(data)
|
||||
|
||||
|
||||
def start():
|
||||
|
@ -1,80 +0,0 @@
|
||||
import json
|
||||
import os
|
||||
|
||||
import uvicorn
|
||||
from fastapi import APIRouter, FastAPI
|
||||
from fastapi.encoders import jsonable_encoder
|
||||
from starlette import status
|
||||
from starlette.background import BackgroundTasks
|
||||
from starlette.middleware import Middleware
|
||||
from starlette.requests import Request
|
||||
from starlette.responses import JSONResponse
|
||||
from starlette_context.middleware import RawContextMiddleware
|
||||
|
||||
from pr_agent.agent.pr_agent import PRAgent
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.log import get_logger
|
||||
from pr_agent.servers.utils import verify_signature
|
||||
|
||||
router = APIRouter()
|
||||
|
||||
|
||||
def handle_request(
|
||||
background_tasks: BackgroundTasks, url: str, body: str, log_context: dict
|
||||
):
|
||||
log_context["action"] = body
|
||||
log_context["api_url"] = url
|
||||
with get_logger().contextualize(**log_context):
|
||||
background_tasks.add_task(PRAgent().handle_request, url, body)
|
||||
|
||||
|
||||
@router.post("/")
|
||||
async def handle_webhook(background_tasks: BackgroundTasks, request: Request):
|
||||
log_context = {"server_type": "bitbucket_server"}
|
||||
data = await request.json()
|
||||
get_logger().info(json.dumps(data))
|
||||
|
||||
webhook_secret = get_settings().get("BITBUCKET_SERVER.WEBHOOK_SECRET", None)
|
||||
if webhook_secret:
|
||||
body_bytes = await request.body()
|
||||
signature_header = request.headers.get("x-hub-signature", None)
|
||||
verify_signature(body_bytes, webhook_secret, signature_header)
|
||||
|
||||
pr_id = data["pullRequest"]["id"]
|
||||
repository_name = data["pullRequest"]["toRef"]["repository"]["slug"]
|
||||
project_name = data["pullRequest"]["toRef"]["repository"]["project"]["key"]
|
||||
bitbucket_server = get_settings().get("BITBUCKET_SERVER.URL")
|
||||
pr_url = f"{bitbucket_server}/projects/{project_name}/repos/{repository_name}/pull-requests/{pr_id}"
|
||||
|
||||
log_context["api_url"] = pr_url
|
||||
log_context["event"] = "pull_request"
|
||||
|
||||
if data["eventKey"] == "pr:opened":
|
||||
body = "review"
|
||||
elif data["eventKey"] == "pr:comment:added":
|
||||
body = data["comment"]["text"]
|
||||
else:
|
||||
return JSONResponse(
|
||||
status_code=status.HTTP_400_BAD_REQUEST,
|
||||
content=json.dumps({"message": "Unsupported event"}),
|
||||
)
|
||||
|
||||
handle_request(background_tasks, pr_url, body, log_context)
|
||||
return JSONResponse(
|
||||
status_code=status.HTTP_200_OK, content=jsonable_encoder({"message": "success"})
|
||||
)
|
||||
|
||||
|
||||
@router.get("/")
|
||||
async def root():
|
||||
return {"status": "ok"}
|
||||
|
||||
|
||||
def start():
|
||||
app = FastAPI(middleware=[Middleware(RawContextMiddleware)])
|
||||
app.include_router(router)
|
||||
uvicorn.run(app, host="0.0.0.0", port=int(os.environ.get("PORT", "3000")))
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
start()
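To sanity-check this Bitbucket Server webhook locally, one could post a minimal `pr:opened` payload to the root route. The sketch below is an assumption for local testing only: it includes just the fields the handler reads, uses the default port 3000 shown above, and assumes no `BITBUCKET_SERVER.WEBHOOK_SECRET` is configured; it is not a complete Bitbucket payload.

```python
import requests

payload = {
    "eventKey": "pr:opened",
    "pullRequest": {
        "id": 1,
        "toRef": {"repository": {"slug": "my-repo", "project": {"key": "PROJ"}}},
    },
}
# The handler maps this to <BITBUCKET_SERVER.URL>/projects/PROJ/repos/my-repo/pull-requests/1
# and queues a "/review" run in the background.
requests.post("http://localhost:3000/", json=payload)
```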
|
@ -1,4 +1,6 @@
|
||||
import copy
|
||||
import logging
|
||||
import sys
|
||||
from enum import Enum
|
||||
from json import JSONDecodeError
|
||||
|
||||
@ -10,10 +12,9 @@ from starlette_context import context
|
||||
from starlette_context.middleware import RawContextMiddleware
|
||||
|
||||
from pr_agent.agent.pr_agent import PRAgent
|
||||
from pr_agent.config_loader import get_settings, global_settings
|
||||
from pr_agent.log import get_logger, setup_logger
|
||||
from pr_agent.config_loader import global_settings, get_settings
|
||||
|
||||
setup_logger()
|
||||
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
|
||||
router = APIRouter()
|
||||
|
||||
|
||||
@ -34,7 +35,7 @@ class Item(BaseModel):
|
||||
|
||||
@router.post("/api/v1/gerrit/{action}")
|
||||
async def handle_gerrit_request(action: Action, item: Item):
|
||||
get_logger().debug("Received a Gerrit request")
|
||||
logging.debug("Received a Gerrit request")
|
||||
context["settings"] = copy.deepcopy(global_settings)
|
||||
|
||||
if action == Action.ask:
|
||||
@ -53,7 +54,7 @@ async def get_body(request):
|
||||
try:
|
||||
body = await request.json()
|
||||
except JSONDecodeError as e:
|
||||
get_logger().error("Error parsing request body", e)
|
||||
logging.error("Error parsing request body", e)
|
||||
return {}
|
||||
return body
|
||||
|
||||
|
@ -1,43 +1,23 @@
|
||||
import asyncio
import json
import os
from typing import Union

from pr_agent.agent.pr_agent import PRAgent
from pr_agent.config_loader import get_settings
from pr_agent.git_providers import get_git_provider
from pr_agent.git_providers.utils import apply_repo_settings
from pr_agent.log import get_logger
from pr_agent.tools.pr_code_suggestions import PRCodeSuggestions
from pr_agent.tools.pr_description import PRDescription
from pr_agent.tools.pr_reviewer import PRReviewer


def is_true(value: Union[str, bool]) -> bool:
    if isinstance(value, bool):
        return value
    if isinstance(value, str):
        return value.lower() == 'true'
    return False


def get_setting_or_env(key: str, default: Union[str, bool] = None) -> Union[str, bool]:
    try:
        value = get_settings().get(key, default)
    except AttributeError:  # TBD still need to debug why this happens on GitHub Actions
        value = os.getenv(key, None) or os.getenv(key.upper(), None) or os.getenv(key.lower(), None) or default
    return value
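A small sketch of how these two helpers resolve an automation flag. The lookup order (legacy `GITHUB_ACTION.*` keys first, then `GITHUB_ACTION_CONFIG.*`, with an unset review flag defaulting to enabled) is taken from the `run_action` code further down in this hunk; the wrapper function name is illustrative:

```python
def should_auto_review() -> bool:
    # Legacy GITHUB_ACTION.* keys are checked first, then GITHUB_ACTION_CONFIG.*.
    auto_review = get_setting_or_env("GITHUB_ACTION.AUTO_REVIEW", None)
    if auto_review is None:
        auto_review = get_setting_or_env("GITHUB_ACTION_CONFIG.AUTO_REVIEW", None)
    # An unset flag means the review tool runs by default.
    return auto_review is None or is_true(auto_review)
```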
|
||||
|
||||
|
||||
async def run_action():
|
||||
# Get environment variables
|
||||
GITHUB_EVENT_NAME = os.environ.get('GITHUB_EVENT_NAME')
|
||||
GITHUB_EVENT_PATH = os.environ.get('GITHUB_EVENT_PATH')
|
||||
OPENAI_KEY = os.environ.get('OPENAI_KEY') or os.environ.get('OPENAI.KEY')
|
||||
OPENAI_ORG = os.environ.get('OPENAI_ORG') or os.environ.get('OPENAI.ORG')
|
||||
OPENAI_KEY = os.environ.get('OPENAI_KEY')
|
||||
OPENAI_ORG = os.environ.get('OPENAI_ORG')
|
||||
GITHUB_TOKEN = os.environ.get('GITHUB_TOKEN')
|
||||
get_settings().set("CONFIG.PUBLISH_OUTPUT_PROGRESS", False)
|
||||
|
||||
|
||||
# Check if required environment variables are set
|
||||
if not GITHUB_EVENT_NAME:
|
||||
print("GITHUB_EVENT_NAME not set")
|
||||
@ -67,39 +47,13 @@ async def run_action():
|
||||
print(f"Failed to parse JSON: {e}")
|
||||
return
|
||||
|
||||
try:
|
||||
get_logger().info("Applying repo settings")
|
||||
pr_url = event_payload.get("pull_request", {}).get("html_url")
|
||||
if pr_url:
|
||||
apply_repo_settings(pr_url)
|
||||
get_logger().info(f"enable_custom_labels: {get_settings().config.enable_custom_labels}")
|
||||
except Exception as e:
|
||||
get_logger().info(f"github action: failed to apply repo settings: {e}")
|
||||
|
||||
# Handle pull request event
|
||||
if GITHUB_EVENT_NAME == "pull_request":
|
||||
action = event_payload.get("action")
|
||||
if action in ["opened", "reopened"]:
|
||||
pr_url = event_payload.get("pull_request", {}).get("url")
|
||||
if pr_url:
|
||||
# legacy - supporting both GITHUB_ACTION and GITHUB_ACTION_CONFIG
|
||||
auto_review = get_setting_or_env("GITHUB_ACTION.AUTO_REVIEW", None)
|
||||
if auto_review is None:
|
||||
auto_review = get_setting_or_env("GITHUB_ACTION_CONFIG.AUTO_REVIEW", None)
|
||||
auto_describe = get_setting_or_env("GITHUB_ACTION.AUTO_DESCRIBE", None)
|
||||
if auto_describe is None:
|
||||
auto_describe = get_setting_or_env("GITHUB_ACTION_CONFIG.AUTO_DESCRIBE", None)
|
||||
auto_improve = get_setting_or_env("GITHUB_ACTION.AUTO_IMPROVE", None)
|
||||
if auto_improve is None:
|
||||
auto_improve = get_setting_or_env("GITHUB_ACTION_CONFIG.AUTO_IMPROVE", None)
|
||||
|
||||
# invoke by default all three tools
|
||||
if auto_describe is None or is_true(auto_describe):
|
||||
await PRDescription(pr_url).run()
|
||||
if auto_review is None or is_true(auto_review):
|
||||
await PRReviewer(pr_url).run()
|
||||
if auto_improve is None or is_true(auto_improve):
|
||||
await PRCodeSuggestions(pr_url).run()
|
||||
|
||||
# Handle issue comment event
|
||||
elif GITHUB_EVENT_NAME == "issue_comment":
|
||||
@ -107,21 +61,12 @@ async def run_action():
|
||||
if action in ["created", "edited"]:
|
||||
comment_body = event_payload.get("comment", {}).get("body")
|
||||
if comment_body:
|
||||
is_pr = False
|
||||
# check if issue is pull request
|
||||
if event_payload.get("issue", {}).get("pull_request"):
|
||||
url = event_payload.get("issue", {}).get("pull_request", {}).get("url")
|
||||
is_pr = True
|
||||
else:
|
||||
url = event_payload.get("issue", {}).get("url")
|
||||
if url:
|
||||
pr_url = event_payload.get("issue", {}).get("pull_request", {}).get("url")
|
||||
if pr_url:
|
||||
body = comment_body.strip().lower()
|
||||
comment_id = event_payload.get("comment", {}).get("id")
|
||||
provider = get_git_provider()(pr_url=url)
|
||||
if is_pr:
|
||||
await PRAgent().handle_request(url, body, notify=lambda: provider.add_eyes_reaction(comment_id))
|
||||
else:
|
||||
await PRAgent().handle_request(url, body)
|
||||
provider = get_git_provider()(pr_url=pr_url)
|
||||
await PRAgent().handle_request(pr_url, body, notify=lambda: provider.add_eyes_reaction(comment_id))
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
|
@ -1,7 +1,9 @@
|
||||
import copy
|
||||
import logging
|
||||
import sys
|
||||
import os
|
||||
import asyncio.locks
|
||||
from typing import Any, Dict, List, Tuple
|
||||
import time
|
||||
from typing import Any, Dict
|
||||
|
||||
import uvicorn
|
||||
from fastapi import APIRouter, FastAPI, HTTPException, Request, Response
|
||||
@ -13,13 +15,9 @@ from pr_agent.agent.pr_agent import PRAgent
|
||||
from pr_agent.algo.utils import update_settings_from_args
|
||||
from pr_agent.config_loader import get_settings, global_settings
|
||||
from pr_agent.git_providers import get_git_provider
|
||||
from pr_agent.git_providers.utils import apply_repo_settings
|
||||
from pr_agent.git_providers.git_provider import IncrementalPR
|
||||
from pr_agent.log import LoggingFormat, get_logger, setup_logger
|
||||
from pr_agent.servers.utils import verify_signature, DefaultDictWithTimeout
|
||||
|
||||
setup_logger(fmt=LoggingFormat.JSON)
|
||||
from pr_agent.servers.utils import verify_signature
|
||||
|
||||
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
|
||||
router = APIRouter()
|
||||
|
||||
|
||||
@ -30,11 +28,11 @@ async def handle_github_webhooks(request: Request, response: Response):
|
||||
Verifies the request signature, parses the request body, and passes it to the handle_request function for further
|
||||
processing.
|
||||
"""
|
||||
get_logger().debug("Received a GitHub webhook")
|
||||
logging.debug("Received a GitHub webhook")
|
||||
|
||||
body = await get_body(request)
|
||||
|
||||
get_logger().debug(f'Request body:\n{body}')
|
||||
logging.debug(f'Request body:\n{body}')
|
||||
installation_id = body.get("installation", {}).get("id")
|
||||
context["installation_id"] = installation_id
|
||||
context["settings"] = copy.deepcopy(global_settings)
|
||||
@ -46,14 +44,13 @@ async def handle_github_webhooks(request: Request, response: Response):
|
||||
@router.post("/api/v1/marketplace_webhooks")
|
||||
async def handle_marketplace_webhooks(request: Request, response: Response):
|
||||
body = await get_body(request)
|
||||
get_logger().info(f'Request body:\n{body}')
|
||||
|
||||
logging.info(f'Request body:\n{body}')
|
||||
|
||||
async def get_body(request):
|
||||
try:
|
||||
body = await request.json()
|
||||
except Exception as e:
|
||||
get_logger().error("Error parsing request body", e)
|
||||
logging.error("Error parsing request body", e)
|
||||
raise HTTPException(status_code=400, detail="Error parsing request body") from e
|
||||
webhook_secret = getattr(get_settings().github, 'webhook_secret', None)
|
||||
if webhook_secret:
|
||||
@ -63,9 +60,7 @@ async def get_body(request):
|
||||
return body
|
||||
|
||||
|
||||
_duplicate_requests_cache = DefaultDictWithTimeout(ttl=get_settings().github_app.duplicate_requests_cache_ttl)
|
||||
_duplicate_push_triggers = DefaultDictWithTimeout(ttl=get_settings().github_app.push_trigger_pending_tasks_ttl)
|
||||
_pending_task_duplicate_push_conditions = DefaultDictWithTimeout(asyncio.locks.Condition, ttl=get_settings().github_app.push_trigger_pending_tasks_ttl)
|
||||
_duplicate_requests_cache = {}
|
||||
|
||||
|
||||
async def handle_request(body: Dict[str, Any], event: str):
|
||||
@ -81,8 +76,8 @@ async def handle_request(body: Dict[str, Any], event: str):
|
||||
return {}
|
||||
agent = PRAgent()
|
||||
bot_user = get_settings().github_app.bot_user
|
||||
sender = body.get("sender", {}).get("login")
|
||||
log_context = {"action": action, "event": event, "sender": sender, "server_type": "github_app"}
|
||||
logging.info(f"action: '{action}'")
|
||||
logging.info(f"event: '{event}'")
|
||||
|
||||
if get_settings().github_app.duplicate_requests_cache and _is_duplicate_request(body):
|
||||
return {}
|
||||
@ -92,135 +87,56 @@ async def handle_request(body: Dict[str, Any], event: str):
|
||||
if "comment" not in body:
|
||||
return {}
|
||||
comment_body = body.get("comment", {}).get("body")
|
||||
sender = body.get("sender", {}).get("login")
|
||||
if sender and bot_user in sender:
|
||||
get_logger().info(f"Ignoring comment from {bot_user} user")
|
||||
logging.info(f"Ignoring comment from {bot_user} user")
|
||||
return {}
|
||||
get_logger().info(f"Processing comment from {sender} user")
|
||||
logging.info(f"Processing comment from {sender} user")
|
||||
if "issue" in body and "pull_request" in body["issue"] and "url" in body["issue"]["pull_request"]:
|
||||
api_url = body["issue"]["pull_request"]["url"]
|
||||
elif "comment" in body and "pull_request_url" in body["comment"]:
|
||||
api_url = body["comment"]["pull_request_url"]
|
||||
else:
|
||||
return {}
|
||||
log_context["api_url"] = api_url
|
||||
get_logger().info(body)
|
||||
get_logger().info(f"Handling comment because of event={event} and action={action}")
|
||||
logging.info(body)
|
||||
logging.info(f"Handling comment because of event={event} and action={action}")
|
||||
comment_id = body.get("comment", {}).get("id")
|
||||
provider = get_git_provider()(pr_url=api_url)
|
||||
with get_logger().contextualize(**log_context):
|
||||
await agent.handle_request(api_url, comment_body, notify=lambda: provider.add_eyes_reaction(comment_id))
|
||||
|
||||
# handle pull_request event:
|
||||
# automatically review opened/reopened/ready_for_review PRs as long as they're not in draft,
|
||||
# as well as direct review requests from the bot
|
||||
elif event == 'pull_request' and action != 'synchronize':
|
||||
pull_request, api_url = _check_pull_request_event(action, body, log_context, bot_user)
|
||||
if not (pull_request and api_url):
|
||||
elif event == 'pull_request':
|
||||
pull_request = body.get("pull_request")
|
||||
if not pull_request:
|
||||
return {}
|
||||
api_url = pull_request.get("url")
|
||||
if not api_url:
|
||||
return {}
|
||||
if pull_request.get("draft", True) or pull_request.get("state") != "open" or pull_request.get("user", {}).get("login", "") == bot_user:
|
||||
return {}
|
||||
if action in get_settings().github_app.handle_pr_actions:
|
||||
if action == "review_requested":
|
||||
if body.get("requested_reviewer", {}).get("login", "") != bot_user:
|
||||
return {}
|
||||
get_logger().info(f"Performing review for {api_url=} because of {event=} and {action=}")
|
||||
await _perform_commands("pr_commands", agent, body, api_url, log_context)
|
||||
|
||||
# handle pull_request event with synchronize action - "push trigger" for new commits
|
||||
elif event == 'pull_request' and action == 'synchronize':
|
||||
pull_request, api_url = _check_pull_request_event(action, body, log_context, bot_user)
|
||||
if not (pull_request and api_url):
|
||||
return {}
|
||||
|
||||
apply_repo_settings(api_url)
|
||||
if not get_settings().github_app.handle_push_trigger:
|
||||
return {}
|
||||
|
||||
# TODO: do we still want to get the list of commits to filter bot/merge commits?
|
||||
before_sha = body.get("before")
|
||||
after_sha = body.get("after")
|
||||
merge_commit_sha = pull_request.get("merge_commit_sha")
|
||||
if before_sha == after_sha:
|
||||
return {}
|
||||
if get_settings().github_app.push_trigger_ignore_merge_commits and after_sha == merge_commit_sha:
|
||||
return {}
|
||||
if get_settings().github_app.push_trigger_ignore_bot_commits and body.get("sender", {}).get("login", "") == bot_user:
|
||||
return {}
|
||||
|
||||
# Prevent triggering multiple times for subsequent push triggers when one is enough:
|
||||
# The first push will trigger the processing, and if there's a second push in the meanwhile it will wait.
|
||||
# Any more events will be discarded, because they will all trigger the exact same processing on the PR.
|
||||
# We let the second event wait instead of discarding it because while the first event was being processed,
|
||||
# more commits may have been pushed that led to the subsequent events,
|
||||
# so we keep just one waiting as a delegate to trigger the processing for the new commits when done waiting.
|
||||
current_active_tasks = _duplicate_push_triggers.setdefault(api_url, 0)
|
||||
max_active_tasks = 2 if get_settings().github_app.push_trigger_pending_tasks_backlog else 1
|
||||
if current_active_tasks < max_active_tasks:
|
||||
# first task can enter, and second tasks too if backlog is enabled
|
||||
get_logger().info(
|
||||
f"Continue processing push trigger for {api_url=} because there are {current_active_tasks} active tasks"
|
||||
)
|
||||
_duplicate_push_triggers[api_url] += 1
|
||||
else:
|
||||
get_logger().info(
|
||||
f"Skipping push trigger for {api_url=} because another event already triggered the same processing"
|
||||
)
|
||||
return {}
|
||||
async with _pending_task_duplicate_push_conditions[api_url]:
|
||||
if current_active_tasks == 1:
|
||||
# second task waits
|
||||
get_logger().info(
|
||||
f"Waiting to process push trigger for {api_url=} because the first task is still in progress"
|
||||
)
|
||||
await _pending_task_duplicate_push_conditions[api_url].wait()
|
||||
get_logger().info(f"Finished waiting to process push trigger for {api_url=} - continue with flow")
|
||||
|
||||
try:
|
||||
if get_settings().github_app.push_trigger_wait_for_initial_review and not get_git_provider()(api_url, incremental=IncrementalPR(True)).previous_review:
|
||||
get_logger().info(f"Skipping incremental review because there was no initial review for {api_url=} yet")
|
||||
return {}
|
||||
get_logger().info(f"Performing incremental review for {api_url=} because of {event=} and {action=}")
|
||||
await _perform_commands("push_commands", agent, body, api_url, log_context)
|
||||
|
||||
finally:
|
||||
# release the waiting task block
|
||||
async with _pending_task_duplicate_push_conditions[api_url]:
|
||||
_pending_task_duplicate_push_conditions[api_url].notify(1)
|
||||
_duplicate_push_triggers[api_url] -= 1
|
||||
|
||||
get_logger().info("event or action does not require handling")
|
||||
return {}
|
||||
|
||||
|
||||
def _check_pull_request_event(action: str, body: dict, log_context: dict, bot_user: str) -> Tuple[Dict[str, Any], str]:
|
||||
invalid_result = {}, ""
|
||||
pull_request = body.get("pull_request")
|
||||
if not pull_request:
|
||||
return invalid_result
|
||||
api_url = pull_request.get("url")
|
||||
if not api_url:
|
||||
return invalid_result
|
||||
log_context["api_url"] = api_url
|
||||
if pull_request.get("draft", True) or pull_request.get("state") != "open" or pull_request.get("user", {}).get("login", "") == bot_user:
|
||||
return invalid_result
|
||||
if action in ("review_requested", "synchronize") and pull_request.get("created_at") == pull_request.get("updated_at"):
|
||||
if pull_request.get("created_at") == pull_request.get("updated_at"):
|
||||
# avoid double reviews when opening a PR for the first time
|
||||
return invalid_result
|
||||
return pull_request, api_url
|
||||
|
||||
|
||||
async def _perform_commands(commands_conf: str, agent: PRAgent, body: dict, api_url: str, log_context: dict):
|
||||
apply_repo_settings(api_url)
|
||||
commands = get_settings().get(f"github_app.{commands_conf}")
|
||||
for command in commands:
|
||||
return {}
|
||||
logging.info(f"Performing review because of event={event} and action={action}")
|
||||
for command in get_settings().github_app.pr_commands:
|
||||
split_command = command.split(" ")
|
||||
command = split_command[0]
|
||||
args = split_command[1:]
|
||||
other_args = update_settings_from_args(args)
|
||||
new_command = ' '.join([command] + other_args)
|
||||
get_logger().info(body)
|
||||
get_logger().info(f"Performing command: {new_command}")
|
||||
with get_logger().contextualize(**log_context):
|
||||
logging.info(body)
|
||||
logging.info(f"Performing command: {new_command}")
|
||||
await agent.handle_request(api_url, new_command)
|
||||
|
||||
logging.info("event or action does not require handling")
|
||||
return {}
|
||||
|
||||
|
||||
def _is_duplicate_request(body: Dict[str, Any]) -> bool:
|
||||
"""
|
||||
@ -228,11 +144,16 @@ def _is_duplicate_request(body: Dict[str, Any]) -> bool:
|
||||
This function checks if the request is duplicate and if so - ignores it.
|
||||
"""
|
||||
request_hash = hash(str(body))
|
||||
get_logger().info(f"request_hash: {request_hash}")
|
||||
is_duplicate = _duplicate_requests_cache.get(request_hash, False)
|
||||
_duplicate_requests_cache[request_hash] = True
|
||||
logging.info(f"request_hash: {request_hash}")
|
||||
request_time = time.monotonic()
|
||||
ttl = get_settings().github_app.duplicate_requests_cache_ttl # in seconds
|
||||
to_delete = [key for key, key_time in _duplicate_requests_cache.items() if request_time - key_time > ttl]
|
||||
for key in to_delete:
|
||||
del _duplicate_requests_cache[key]
|
||||
is_duplicate = request_hash in _duplicate_requests_cache
|
||||
_duplicate_requests_cache[request_hash] = request_time
|
||||
if is_duplicate:
|
||||
get_logger().info(f"Ignoring duplicate request {request_hash}")
|
||||
logging.info(f"Ignoring duplicate request {request_hash}")
|
||||
return is_duplicate
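The newer side of this hunk replaces the manual TTL pruning above with `DefaultDictWithTimeout` from `pr_agent.servers.utils`, whose implementation is not shown in this compare. Conceptually it behaves like a dict whose entries expire after a TTL; the class below is a rough, illustrative sketch of that idea, not the library's actual code:

```python
import time

class ExpiringDict(dict):
    """Illustrative only: entries are dropped once they are older than ttl seconds."""

    def __init__(self, ttl: float):
        super().__init__()
        self._ttl = ttl
        self._written_at = {}

    def __setitem__(self, key, value):
        self._written_at[key] = time.monotonic()
        super().__setitem__(key, value)

    def __contains__(self, key):
        written = self._written_at.get(key)
        if written is not None and time.monotonic() - written > self._ttl:
            # Lazily expire the entry on access.
            del self._written_at[key]
            super().__delitem__(key)
        return super().__contains__(key)
```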
|
||||
|
||||
|
||||
|
@ -1,4 +1,6 @@
|
||||
import asyncio
|
||||
import logging
|
||||
import sys
|
||||
from datetime import datetime, timezone
|
||||
|
||||
import aiohttp
|
||||
@ -6,9 +8,9 @@ import aiohttp
|
||||
from pr_agent.agent.pr_agent import PRAgent
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers import get_git_provider
|
||||
from pr_agent.log import LoggingFormat, get_logger, setup_logger
|
||||
from pr_agent.servers.help import bot_help_text
|
||||
|
||||
setup_logger(fmt=LoggingFormat.JSON)
|
||||
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
|
||||
NOTIFICATION_URL = "https://api.github.com/notifications"
|
||||
|
||||
|
||||
@ -92,7 +94,7 @@ async def polling_loop():
|
||||
comment_body = comment['body'] if 'body' in comment else ''
|
||||
commenter_github_user = comment['user']['login'] \
|
||||
if 'user' in comment else ''
|
||||
get_logger().info(f"Commenter: {commenter_github_user}\nComment: {comment_body}")
|
||||
logging.info(f"Commenter: {commenter_github_user}\nComment: {comment_body}")
|
||||
user_tag = "@" + user_id
|
||||
if user_tag not in comment_body:
|
||||
continue
|
||||
@ -103,12 +105,14 @@ async def polling_loop():
|
||||
notify=lambda: git_provider.add_eyes_reaction(comment_id)) # noqa E501
|
||||
if not success:
|
||||
git_provider.set_pr(pr_url)
|
||||
git_provider.publish_comment("### How to use PR-Agent\n" +
|
||||
bot_help_text(user_id))
|
||||
|
||||
elif response.status != 304:
|
||||
print(f"Failed to fetch notifications. Status code: {response.status}")
|
||||
|
||||
except Exception as e:
|
||||
get_logger().error(f"Exception during processing of a notification: {e}")
|
||||
logging.error(f"Exception during processing of a notification: {e}")
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
|
@ -1,5 +1,7 @@
|
||||
import copy
|
||||
import json
|
||||
import logging
|
||||
import sys
|
||||
|
||||
import uvicorn
|
||||
from fastapi import APIRouter, FastAPI, Request, status
|
||||
@ -12,37 +14,26 @@ from starlette_context.middleware import RawContextMiddleware
|
||||
|
||||
from pr_agent.agent.pr_agent import PRAgent
|
||||
from pr_agent.config_loader import get_settings, global_settings
|
||||
from pr_agent.log import LoggingFormat, get_logger, setup_logger
|
||||
from pr_agent.secret_providers import get_secret_provider
|
||||
|
||||
setup_logger(fmt=LoggingFormat.JSON)
|
||||
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
|
||||
router = APIRouter()
|
||||
|
||||
secret_provider = get_secret_provider() if get_settings().get("CONFIG.SECRET_PROVIDER") else None
|
||||
|
||||
|
||||
def handle_request(background_tasks: BackgroundTasks, url: str, body: str, log_context: dict):
|
||||
log_context["action"] = body
|
||||
log_context["event"] = "pull_request" if body == "/review" else "comment"
|
||||
log_context["api_url"] = url
|
||||
with get_logger().contextualize(**log_context):
|
||||
background_tasks.add_task(PRAgent().handle_request, url, body)
|
||||
|
||||
|
||||
@router.post("/webhook")
|
||||
async def gitlab_webhook(background_tasks: BackgroundTasks, request: Request):
|
||||
log_context = {"server_type": "gitlab_app"}
|
||||
if request.headers.get("X-Gitlab-Token") and secret_provider:
|
||||
request_token = request.headers.get("X-Gitlab-Token")
|
||||
secret = secret_provider.get_secret(request_token)
|
||||
try:
|
||||
secret_dict = json.loads(secret)
|
||||
gitlab_token = secret_dict["gitlab_token"]
|
||||
log_context["sender"] = secret_dict.get("token_name", secret_dict.get("id", "unknown"))
|
||||
context["settings"] = copy.deepcopy(global_settings)
|
||||
context["settings"].gitlab.personal_access_token = gitlab_token
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed to validate secret {request_token}: {e}")
|
||||
logging.error(f"Failed to validate secret {request_token}: {e}")
|
||||
return JSONResponse(status_code=status.HTTP_401_UNAUTHORIZED, content=jsonable_encoder({"message": "unauthorized"}))
|
||||
elif get_settings().get("GITLAB.SHARED_SECRET"):
|
||||
secret = get_settings().get("GITLAB.SHARED_SECRET")
|
||||
@ -54,17 +45,17 @@ async def gitlab_webhook(background_tasks: BackgroundTasks, request: Request):
|
||||
if not gitlab_token:
|
||||
return JSONResponse(status_code=status.HTTP_401_UNAUTHORIZED, content=jsonable_encoder({"message": "unauthorized"}))
|
||||
data = await request.json()
|
||||
get_logger().info(json.dumps(data))
|
||||
logging.info(json.dumps(data))
|
||||
if data.get('object_kind') == 'merge_request' and data['object_attributes'].get('action') in ['open', 'reopen']:
|
||||
get_logger().info(f"A merge request has been opened: {data['object_attributes'].get('title')}")
|
||||
logging.info(f"A merge request has been opened: {data['object_attributes'].get('title')}")
|
||||
url = data['object_attributes'].get('url')
|
||||
handle_request(background_tasks, url, "/review", log_context)
|
||||
background_tasks.add_task(PRAgent().handle_request, url, "/review")
|
||||
elif data.get('object_kind') == 'note' and data['event_type'] == 'note':
|
||||
if 'merge_request' in data:
|
||||
mr = data['merge_request']
|
||||
url = mr.get('url')
|
||||
body = data.get('object_attributes', {}).get('note')
|
||||
handle_request(background_tasks, url, body, log_context)
|
||||
background_tasks.add_task(PRAgent().handle_request, url, body)
|
||||
return JSONResponse(status_code=status.HTTP_200_OK, content=jsonable_encoder({"message": "success"}))
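For reference, a minimal sketch of the payload shape this GitLab webhook handler inspects for a newly opened merge request. Only the keys read above are shown, and all values are placeholders:

```python
sample_gitlab_event = {
    "object_kind": "merge_request",
    "object_attributes": {
        "action": "open",
        "title": "Example MR",
        "url": "https://gitlab.example.com/group/project/-/merge_requests/1",
    },
}
# A comment event instead arrives with object_kind == "note" and event_type == "note",
# a "merge_request" entry carrying the MR url, and the comment text under
# object_attributes["note"], which is forwarded to the agent as the command body.
```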
|
||||
|
||||
|
||||
|
@ -1,328 +1,17 @@
|
||||
class HelpMessage:
|
||||
@staticmethod
|
||||
def get_general_commands_text():
|
||||
commands_text = "> - **/review**: Request a review of your Pull Request. \n" \
|
||||
"> - **/describe**: Update the PR title and description based on the contents of the PR. \n" \
|
||||
"> - **/improve [--extended]**: Suggest code improvements. Extended mode provides a higher quality feedback. \n" \
|
||||
"> - **/ask \\<QUESTION\\>**: Ask a question about the PR. \n" \
|
||||
"> - **/update_changelog**: Update the changelog based on the PR's contents. \n" \
|
||||
"> - **/add_docs** 💎: Generate docstring for new components introduced in the PR. \n" \
|
||||
"> - **/generate_labels** 💎: Generate labels for the PR based on the PR's contents. \n" \
|
||||
"> - **/analyze** 💎: Automatically analyzes the PR, and presents changes walkthrough for each component. \n\n" \
|
||||
">See the [tools guide](https://github.com/Codium-ai/pr-agent/blob/main/docs/TOOLS_GUIDE.md) for more details.\n" \
|
||||
">To list the possible configuration parameters, add a **/config** comment. \n"
|
||||
return commands_text
|
||||
commands_text = "> **/review [-i]**: Request a review of your Pull Request. For an incremental review, which only " \
|
||||
"considers changes since the last review, include the '-i' option.\n" \
|
||||
"> **/describe**: Modify the PR title and description based on the contents of the PR.\n" \
|
||||
"> **/improve [--extended]**: Suggest improvements to the code in the PR. Extended mode employs several calls, and provides a more thorough feedback. \n" \
|
||||
"> **/ask \\<QUESTION\\>**: Pose a question about the PR.\n" \
|
||||
"> **/update_changelog**: Update the changelog based on the PR's contents.\n\n" \
|
||||
">To edit any configuration parameter from **configuration.toml**, add --config_path=new_value\n" \
|
||||
">For example: /review --pr_reviewer.extra_instructions=\"focus on the file: ...\" \n" \
|
||||
">To list the possible configuration parameters, use the **/config** command.\n" \
|
||||
|
||||
|
||||
@staticmethod
|
||||
def get_general_bot_help_text():
|
||||
output = f"> To invoke the PR-Agent, add a comment using one of the following commands: \n{HelpMessage.get_general_commands_text()} \n"
|
||||
return output
|
||||
|
||||
@staticmethod
|
||||
def get_review_usage_guide():
|
||||
output ="**Overview:**\n"
|
||||
output +="The `review` tool scans the PR code changes, and generates a PR review. The tool can be triggered [automatically](https://github.com/Codium-ai/pr-agent/blob/main/Usage.md#github-app-automatic-tools) every time a new PR is opened, or can be invoked manually by commenting on any PR.\n"
|
||||
output +="""\
|
||||
When commenting, to edit [configurations](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L19) related to the review tool (`pr_reviewer` section), use the following template:
|
||||
```
|
||||
/review --pr_reviewer.some_config1=... --pr_reviewer.some_config2=...
|
||||
```
|
||||
With a [configuration file](https://github.com/Codium-ai/pr-agent/blob/main/Usage.md#working-with-github-app), use the following template:
|
||||
```
|
||||
[pr_reviewer]
|
||||
some_config1=...
|
||||
some_config2=...
|
||||
```
|
||||
"""
|
||||
output +="\n\n<table>"
|
||||
|
||||
# extra instructions
|
||||
output += "<tr><td><details> <summary><strong> Utilizing extra instructions</strong></summary><hr>\n\n"
|
||||
output += '''\
|
||||
The `review` tool can be configured with extra instructions, which can be used to guide the model to a feedback tailored to the needs of your project.
|
||||
|
||||
Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Specify the relevant sub-tool, and the relevant aspects of the PR that you want to emphasize.
|
||||
|
||||
Examples for extra instructions:
|
||||
```
|
||||
[pr_reviewer] # /review #
|
||||
extra_instructions="""
|
||||
In the 'general suggestions' section, emphasize the following:
|
||||
- Does the code logic cover relevant edge cases?
|
||||
- Is the code logic clear and easy to understand?
|
||||
- Is the code logic efficient?
|
||||
...
|
||||
"""
|
||||
```
|
||||
Use triple quotes to write multi-line instructions. Use bullet points to make the instructions more readable.
|
||||
'''
|
||||
output += "\n\n</details></td></tr>\n\n"
|
||||
|
||||
# automation
|
||||
output += "<tr><td><details> <summary><strong> How to enable\\disable automation</strong></summary><hr>\n\n"
|
||||
output += """\
|
||||
- When you first install PR-Agent app, the [default mode](https://github.com/Codium-ai/pr-agent/blob/main/Usage.md#github-app-automatic-tools) for the `review` tool is:
|
||||
```
|
||||
pr_commands = ["/review", ...]
|
||||
```
|
||||
meaning the `review` tool will run automatically on every PR, with the default configuration.
|
||||
Edit this field to enable/disable the tool, or to change the used configurations
|
||||
"""
|
||||
output += "\n\n</details></td></tr>\n\n"
|
||||
|
||||
# # code feedback
|
||||
# output += "<tr><td><details> <summary><strong> About the 'Code feedback' section</strong></summary><hr>\n\n"
|
||||
# output+="""\
|
||||
# The `review` tool provides several type of feedbacks, one of them is code suggestions.
|
||||
# If you are interested **only** in the code suggestions, it is recommended to use the [`improve`](https://github.com/Codium-ai/pr-agent/blob/main/docs/IMPROVE.md) feature instead, since it dedicated only to code suggestions, and usually gives better results.
|
||||
# Use the `review` tool if you want to get a more comprehensive feedback, which includes code suggestions as well.
|
||||
# """
|
||||
# output += "\n\n</details></td></tr>\n\n"
|
||||
|
||||
# auto-labels
|
||||
output += "<tr><td><details> <summary><strong> Auto-labels</strong></summary><hr>\n\n"
|
||||
output+="""\
|
||||
The `review` tool can auto-generate two specific types of labels for a PR:
|
||||
- a `possible security issue` label, that detects possible [security issues](https://github.com/Codium-ai/pr-agent/blob/tr/user_description/pr_agent/settings/pr_reviewer_prompts.toml#L136) (`enable_review_labels_security` flag)
|
||||
- a `Review effort [1-5]: x` label, where x is the estimated effort to review the PR (`enable_review_labels_effort` flag)
|
||||
"""
|
||||
output += "\n\n</details></td></tr>\n\n"
|
||||
|
||||
# extra sub-tools
|
||||
output += "<tr><td><details> <summary><strong> Extra sub-tools</strong></summary><hr>\n\n"
|
||||
output += """\
|
||||
The `review` tool provides a collection of possible feedbacks about a PR.
|
||||
It is recommended to review the [possible options](https://github.com/Codium-ai/pr-agent/blob/main/docs/REVIEW.md#enabledisable-features), and choose the ones relevant for your use case.
|
||||
Some of the feature that are disabled by default are quite useful, and should be considered for enabling. For example:
|
||||
`require_score_review`, `require_soc2_ticket`, and more.
|
||||
"""
|
||||
output += "\n\n</details></td></tr>\n\n"
|
||||
|
||||
# general
|
||||
output += "\n\n<tr><td><details> <summary><strong> More PR-Agent commands</strong></summary><hr> \n\n"
|
||||
output += HelpMessage.get_general_bot_help_text()
|
||||
output += "\n\n</details></td></tr>\n\n"
|
||||
|
||||
output += "</table>"
|
||||
|
||||
output += f"\n\nSee the [review usage](https://github.com/Codium-ai/pr-agent/blob/main/docs/REVIEW.md) page for a comprehensive guide on using this tool.\n\n"
|
||||
|
||||
return output
|
||||
def bot_help_text(user: str):
|
||||
return f"> Tag me in a comment '@{user}' and add one of the following commands:\n" + commands_text
|
||||
|
||||
|
||||
|
||||
@staticmethod
|
||||
def get_describe_usage_guide():
|
||||
output = "**Overview:**\n"
|
||||
output += "The `describe` tool scans the PR code changes, and generates a description for the PR - title, type, summary, walkthrough and labels. "
|
||||
output += "The tool can be triggered [automatically](https://github.com/Codium-ai/pr-agent/blob/main/Usage.md#github-app-automatic-tools) every time a new PR is opened, or can be invoked manually by commenting on a PR.\n"
|
||||
output += """\
|
||||
|
||||
When commenting, to edit [configurations](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L46) related to the describe tool (`pr_description` section), use the following template:
|
||||
```
|
||||
/describe --pr_description.some_config1=... --pr_description.some_config2=...
|
||||
```
|
||||
With a [configuration file](https://github.com/Codium-ai/pr-agent/blob/main/Usage.md#working-with-github-app), use the following template:
|
||||
```
|
||||
[pr_description]
|
||||
some_config1=...
|
||||
some_config2=...
|
||||
```
|
||||
"""
|
||||
output += "\n\n<table>"
|
||||
|
||||
# automation
|
||||
output += "<tr><td><details> <summary><strong> Enabling\\disabling automation </strong></summary><hr>\n\n"
|
||||
output += """\
|
||||
- When you first install the app, the [default mode](https://github.com/Codium-ai/pr-agent/blob/main/Usage.md#github-app-automatic-tools) for the describe tool is:
|
||||
```
|
||||
pr_commands = ["/describe --pr_description.add_original_user_description=true"
|
||||
"--pr_description.keep_original_user_title=true", ...]
|
||||
```
|
||||
meaning the `describe` tool will run automatically on every PR, will keep the original title, and will add the original user description above the generated description.
|
||||
|
||||
- Markers are an alternative way to control the generated description, to give maximal control to the user. If you set:
|
||||
```
|
||||
pr_commands = ["/describe --pr_description.use_description_markers=true", ...]
|
||||
```
|
||||
the tool will replace every marker of the form `pr_agent:marker_name` in the PR description with the relevant content, where `marker_name` is one of the following:
|
||||
- `type`: the PR type.
|
||||
- `summary`: the PR summary.
|
||||
- `walkthrough`: the PR walkthrough.
|
||||
|
||||
Note that when markers are enabled, if the original PR description does not contain any markers, the tool will not alter the description at all.
|
||||
|
||||
"""
|
||||
output += "\n\n</details></td></tr>\n\n"
|
||||
|
||||
# custom labels
|
||||
output += "<tr><td><details> <summary><strong> Custom labels </strong></summary><hr>\n\n"
|
||||
output += """\
|
||||
The default labels of the `describe` tool are quite generic: [`Bug fix`, `Tests`, `Enhancement`, `Documentation`, `Other`].
|
||||
|
||||
If you specify [custom labels](https://github.com/Codium-ai/pr-agent/blob/main/docs/DESCRIBE.md#handle-custom-labels-from-the-repos-labels-page-gem) in the repo's labels page or via configuration file, you can get tailored labels for your use cases.
|
||||
Examples for custom labels:
|
||||
- `Main topic:performance` - pr_agent:The main topic of this PR is performance
|
||||
- `New endpoint` - pr_agent:A new endpoint was added in this PR
|
||||
- `SQL query` - pr_agent:A new SQL query was added in this PR
|
||||
- `Dockerfile changes` - pr_agent:The PR contains changes in the Dockerfile
|
||||
- ...
|
||||
|
||||
The list above is eclectic, and aims to give an idea of different possibilities. Define custom labels that are relevant for your repo and use cases.
|
||||
Note that labels are not mutually exclusive, so you can add multiple label categories.
|
||||
Make sure to provide a proper title, and a detailed and well-phrased description for each label, so the tool will know when to suggest it.
|
||||
"""
|
||||
output += "\n\n</details></td></tr>\n\n"
|
||||
|
||||
# Inline File Walkthrough
|
||||
output += "<tr><td><details> <summary><strong> Inline File Walkthrough 💎</strong></summary><hr>\n\n"
|
||||
output += """\
|
||||
For enhanced user experience, the `describe` tool can add file summaries directly to the "Files changed" tab in the PR page.
|
||||
This will enable you to quickly understand the changes in each file, while reviewing the code changes (diffs).
|
||||
|
||||
To enable inline file summary, set `pr_description.inline_file_summary` in the configuration file. Possible values are:
|
||||
- `'table'`: File changes walkthrough table will be displayed on the top of the "Files changed" tab, in addition to the "Conversation" tab.
|
||||
- `true`: A collapsible file comment with a changes title and a changes summary for each file in the PR.
|
||||
- `false` (default): File changes walkthrough will be added only to the "Conversation" tab.
|
||||
"""
|
||||
# extra instructions
|
||||
output += "<tr><td><details> <summary><strong> Utilizing extra instructions</strong></summary><hr>\n\n"
|
||||
output += '''\
|
||||
The `describe` tool can be configured with extra instructions, to guide the model to a feedback tailored to the needs of your project.
|
||||
|
||||
Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Notice that the general structure of the description is fixed, and cannot be changed. Extra instructions can change the content or style of each sub-section of the PR description.
|
||||
|
||||
Examples for extra instructions:
|
||||
```
|
||||
[pr_description]
|
||||
extra_instructions="""
|
||||
- The PR title should be in the format: '<PR type>: <title>'
|
||||
- The title should be short and concise (up to 10 words)
|
||||
- ...
|
||||
"""
|
||||
```
|
||||
Use triple quotes to write multi-line instructions. Use bullet points to make the instructions more readable.
|
||||
'''
|
||||
output += "\n\n</details></td></tr>\n\n"
|
||||
|
||||
|
||||
# general
|
||||
output += "\n\n<tr><td><details> <summary><strong> More PR-Agent commands</strong></summary><hr> \n\n"
|
||||
output += HelpMessage.get_general_bot_help_text()
|
||||
output += "\n\n</details></td></tr>\n\n"
|
||||
|
||||
output += "</table>"
|
||||
|
||||
output += f"\n\nSee the [describe usage](https://github.com/Codium-ai/pr-agent/blob/main/docs/DESCRIBE.md) page for a comprehensive guide on using this tool.\n\n"
|
||||
|
||||
return output
|
||||
|
||||
@staticmethod
|
||||
def get_ask_usage_guide():
|
||||
output = "**Overview:**\n"
|
||||
output += """\
|
||||
The `ask` tool answers questions about the PR, based on the PR code changes.
|
||||
It can be invoked manually by commenting on any PR:
|
||||
```
|
||||
/ask "..."
|
||||
```
|
||||
|
||||
Note that the tool does not have "memory" of previous questions, and answers each question independently.
|
||||
"""
|
||||
output += "\n\n<table>"
|
||||
|
||||
# general
|
||||
output += "\n\n<tr><td><details> <summary><strong> More PR-Agent commands</strong></summary><hr> \n\n"
|
||||
output += HelpMessage.get_general_bot_help_text()
|
||||
output += "\n\n</details></td></tr>\n\n"
|
||||
|
||||
output += "</table>"
|
||||
|
||||
output += f"\n\nSee the [ask usage](https://github.com/Codium-ai/pr-agent/blob/main/docs/ASK.md) page for a comprehensive guide on using this tool.\n\n"
|
||||
|
||||
return output
|
||||
|
||||
|
||||
@staticmethod
|
||||
def get_improve_usage_guide():
|
||||
output = "**Overview:**\n"
|
||||
output += "The `improve` tool scans the PR code changes, and automatically generates suggestions for improving the PR code. "
|
||||
output += "The tool can be triggered [automatically](https://github.com/Codium-ai/pr-agent/blob/main/Usage.md#github-app-automatic-tools) every time a new PR is opened, or can be invoked manually by commenting on a PR.\n"
|
||||
output += """\
|
||||
When commenting, to edit [configurations](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L69) related to the improve tool (`pr_code_suggestions` section), use the following template:
|
||||
|
||||
```
|
||||
/improve --pr_code_suggestions.some_config1=... --pr_code_suggestions.some_config2=...
|
||||
```
|
||||
|
||||
With a [configuration file](https://github.com/Codium-ai/pr-agent/blob/main/Usage.md#working-with-github-app), use the following template:
|
||||
|
||||
```
|
||||
[pr_code_suggestions]
|
||||
some_config1=...
|
||||
some_config2=...
|
||||
```
|
||||
|
||||
"""
|
||||
output += "\n\n<table>"
|
||||
|
||||
# automation
|
||||
output += "<tr><td><details> <summary><strong> Enabling\\disabling automation </strong></summary><hr>\n\n"
|
||||
output += """\
|
||||
When you first install the app, the [default mode](https://github.com/Codium-ai/pr-agent/blob/main/Usage.md#github-app-automatic-tools) for the improve tool is:
|
||||
|
||||
```
|
||||
pr_commands = ["/improve --pr_code_suggestions.summarize=true", ...]
|
||||
```
|
||||
|
||||
meaning the `improve` tool will run automatically on every PR, with summarization enabled. Delete this line to disable the tool from running automatically.
|
||||
"""
|
||||
output += "\n\n</details></td></tr>\n\n"
|
||||
|
||||
# extra instructions
|
||||
output += "<tr><td><details> <summary><strong> Utilizing extra instructions</strong></summary><hr>\n\n"
|
||||
output += '''\
|
||||
Extra instructions are very important for the `improve` tool, since they let you guide the model toward suggestions that are more relevant to the specific needs of the project.
|
||||
|
||||
Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Specify relevant aspects that you want the model to focus on.
|
||||
|
||||
Examples for extra instructions:
|
||||
|
||||
```
|
||||
[pr_code_suggestions] # /improve #
|
||||
extra_instructions="""
|
||||
Emphasize the following aspects:
|
||||
- Does the code logic cover relevant edge cases?
|
||||
- Is the code logic clear and easy to understand?
|
||||
- Is the code logic efficient?
|
||||
...
|
||||
"""
|
||||
```
|
||||
|
||||
Use triple quotes to write multi-line instructions. Use bullet points to make the instructions more readable.
|
||||
'''
|
||||
output += "\n\n</details></td></tr>\n\n"
|
||||
|
||||
# suggestions quality
|
||||
output += "\n\n<tr><td><details> <summary><strong> A note on code suggestions quality</strong></summary><hr> \n\n"
|
||||
output += """\
|
||||
- While the current AI models for code are getting better and better (GPT-4), they are not flawless. Not all suggestions will be perfect, and a user should not accept all of them automatically.
|
||||
- Suggestions are not meant to be simplistic. Instead, they aim to give deep feedback and raise questions, ideas and thoughts to the user, who can then use their judgment, experience, and understanding of the code base.
|
||||
- It is recommended to use the 'extra_instructions' field to guide the model toward suggestions that are more relevant to the specific needs of the project, or to use the [custom suggestions :gem:](https://github.com/Codium-ai/pr-agent/blob/main/docs/CUSTOM_SUGGESTIONS.md) tool
|
||||
- With large PRs, the best quality will be obtained by using the 'improve --extended' mode.
|
||||
|
||||
|
||||
"""
|
||||
output += "\n\n</details></td></tr>\n\n"\
|
||||
|
||||
# general
|
||||
output += "\n\n<tr><td><details> <summary><strong> More PR-Agent commands</strong></summary><hr> \n\n"
|
||||
output += HelpMessage.get_general_bot_help_text()
|
||||
output += "\n\n</details></td></tr>\n\n"
|
||||
|
||||
output += "</table>"
|
||||
|
||||
output += f"\n\nSee the [improve usage](https://github.com/Codium-ai/pr-agent/blob/main/docs/IMPROVE.md) page for a more comprehensive guide on using this tool.\n\n"
|
||||
|
||||
return output
|
||||
actions_help_text = "> To invoke the PR-Agent, add a comment using one of the following commands:\n" + \
|
||||
commands_text
|
||||
|
@ -1,13 +1,14 @@
|
||||
import logging
|
||||
|
||||
from fastapi import FastAPI
|
||||
from mangum import Mangum
|
||||
from starlette.middleware import Middleware
|
||||
from starlette_context.middleware import RawContextMiddleware
|
||||
|
||||
from pr_agent.servers.github_app import router
|
||||
|
||||
logger = logging.getLogger()
|
||||
logger.setLevel(logging.DEBUG)
|
||||
|
||||
middleware = [Middleware(RawContextMiddleware)]
|
||||
app = FastAPI(middleware=middleware)
|
||||
app = FastAPI()
|
||||
app.include_router(router)
|
||||
|
||||
handler = Mangum(app, lifespan="off")
|
||||
|
@ -1,8 +1,5 @@
|
||||
import hashlib
|
||||
import hmac
|
||||
import time
|
||||
from collections import defaultdict
|
||||
from typing import Callable, Any
|
||||
|
||||
from fastapi import HTTPException
|
||||
|
||||
@ -28,59 +25,3 @@ def verify_signature(payload_body, secret_token, signature_header):
|
||||
class RateLimitExceeded(Exception):
|
||||
"""Raised when the git provider API rate limit has been exceeded."""
|
||||
pass
|
||||
|
||||
|
||||
class DefaultDictWithTimeout(defaultdict):
|
||||
"""A defaultdict with a time-to-live (TTL)."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
default_factory: Callable[[], Any] = None,
|
||||
ttl: int = None,
|
||||
refresh_interval: int = 60,
|
||||
update_key_time_on_get: bool = True,
|
||||
*args,
|
||||
**kwargs,
|
||||
):
|
||||
"""
|
||||
Args:
|
||||
default_factory: The default factory to use for keys that are not in the dictionary.
|
||||
ttl: The time-to-live (TTL) in seconds.
|
||||
refresh_interval: How often to refresh the dict and delete items older than the TTL.
|
||||
update_key_time_on_get: Whether to update the access time of a key also on get (or only when set).
|
||||
"""
|
||||
super().__init__(default_factory, *args, **kwargs)
|
||||
self.__key_times = dict()
|
||||
self.__ttl = ttl
|
||||
self.__refresh_interval = refresh_interval
|
||||
self.__update_key_time_on_get = update_key_time_on_get
|
||||
self.__last_refresh = self.__time() - self.__refresh_interval
|
||||
|
||||
@staticmethod
|
||||
def __time():
|
||||
return time.monotonic()
|
||||
|
||||
def __refresh(self):
|
||||
if self.__ttl is None:
|
||||
return
|
||||
request_time = self.__time()
|
||||
if request_time - self.__last_refresh > self.__refresh_interval:
|
||||
return
|
||||
to_delete = [key for key, key_time in self.__key_times.items() if request_time - key_time > self.__ttl]
|
||||
for key in to_delete:
|
||||
del self[key]
|
||||
self.__last_refresh = request_time
|
||||
|
||||
def __getitem__(self, __key):
|
||||
if self.__update_key_time_on_get:
|
||||
self.__key_times[__key] = self.__time()
|
||||
self.__refresh()
|
||||
return super().__getitem__(__key)
|
||||
|
||||
def __setitem__(self, __key, __value):
|
||||
self.__key_times[__key] = self.__time()
|
||||
return super().__setitem__(__key, __value)
|
||||
|
||||
def __delitem__(self, __key):
|
||||
del self.__key_times[__key]
|
||||
return super().__delitem__(__key)
|
||||
|
@ -16,10 +16,6 @@ key = "" # Acquire through https://platform.openai.com
|
||||
#deployment_id = "" # The deployment name you chose when you deployed the engine
|
||||
#fallback_deployments = [] # For each fallback model specified in configuration.toml in the [config] section, specify the appropriate deployment_id
|
||||
|
||||
[pinecone]
|
||||
api_key = "..."
|
||||
environment = "gcp-starter"
|
||||
|
||||
[anthropic]
|
||||
key = "" # Optional, uncomment if you want to use Anthropic. Acquire through https://www.anthropic.com/
|
||||
|
||||
@ -28,21 +24,6 @@ key = "" # Optional, uncomment if you want to use Cohere. Acquire through https:
|
||||
|
||||
[replicate]
|
||||
key = "" # Optional, uncomment if you want to use Replicate. Acquire through https://replicate.com/
|
||||
|
||||
[huggingface]
|
||||
key = "" # Optional, uncomment if you want to use Huggingface Inference API. Acquire through https://huggingface.co/docs/api-inference/quicktour
|
||||
api_base = "" # the base url for your huggingface inference endpoint
|
||||
|
||||
[ollama]
|
||||
api_base = "" # the base url for your local Llama 2, Code Llama, and other models inference endpoint. Acquire through https://ollama.ai/
|
||||
|
||||
[vertexai]
|
||||
vertex_project = "" # the google cloud platform project name for your vertexai deployment
|
||||
vertex_location = "" # the google cloud platform location for your vertexai deployment
|
||||
|
||||
[aws]
|
||||
bedrock_region = "" # the AWS region to call Bedrock APIs
|
||||
|
||||
[github]
|
||||
# ---- Set the following only for deployment type == "user"
|
||||
user_token = "" # A GitHub personal access token with 'repo' scope.
|
||||
@ -62,17 +43,5 @@ webhook_secret = "<WEBHOOK SECRET>" # Optional, may be commented out.
|
||||
personal_access_token = ""
|
||||
|
||||
[bitbucket]
|
||||
# For Bitbucket personal/repository bearer token
|
||||
# Bitbucket personal bearer token
|
||||
bearer_token = ""
|
||||
|
||||
[bitbucket_server]
|
||||
# For Bitbucket Server bearer token
|
||||
auth_token = ""
|
||||
webhook_secret = ""
|
||||
|
||||
# For Bitbucket app
|
||||
app_key = ""
|
||||
base_url = ""
|
||||
|
||||
[litellm]
|
||||
LITELLM_TOKEN = "" # see https://docs.litellm.ai/docs/debugging/hosted_debugging for details and instructions on how to get a token
|
||||
|
@ -1,5 +1,5 @@
|
||||
[config]
|
||||
model="gpt-4" # "gpt-4-0125-preview"
|
||||
model="gpt-4"
|
||||
fallback_models=["gpt-3.5-turbo-16k"]
|
||||
git_provider="github"
|
||||
publish_output=true
|
||||
@ -7,112 +7,50 @@ publish_output_progress=true
|
||||
verbosity_level=0 # 0,1,2
|
||||
use_extra_bad_extensions=false
|
||||
use_repo_settings_file=true
|
||||
use_global_settings_file=true
|
||||
ai_timeout=180
|
||||
max_description_tokens = 500
|
||||
max_commits_tokens = 500
|
||||
max_model_tokens = 32000 # Limits the maximum number of tokens that can be used by any model, regardless of the model's default capabilities.
|
||||
patch_extra_lines = 3
|
||||
secret_provider="google_cloud_storage"
|
||||
cli_mode=false
|
||||
|
||||
[pr_reviewer] # /review #
|
||||
# enable/disable features
|
||||
require_focused_review=false
|
||||
require_score_review=false
|
||||
require_tests_review=true
|
||||
require_security_review=true
|
||||
require_estimate_effort_to_review=true
|
||||
# soc2
|
||||
require_soc2_ticket=false
|
||||
soc2_ticket_prompt="Does the PR description include a link to ticket in a project management system (e.g., Jira, Asana, Trello, etc.) ?"
|
||||
# general options
|
||||
num_code_suggestions=4
|
||||
inline_code_comments = false
|
||||
ask_and_reflect=false
|
||||
#automatic_review=true
|
||||
remove_previous_review_comment=false
|
||||
persistent_comment=true
|
||||
automatic_review=true
|
||||
extra_instructions = ""
|
||||
# review labels
|
||||
enable_review_labels_security=true
|
||||
enable_review_labels_effort=true
|
||||
# specific configurations for incremental review (/review -i)
|
||||
require_all_thresholds_for_incremental_review=false
|
||||
minimal_commits_for_incremental_review=0
|
||||
minimal_minutes_for_incremental_review=0
|
||||
enable_help_text=true # Determines whether to include help text in the PR review. Enabled by default.
|
||||
|
||||
[pr_description] # /describe #
|
||||
publish_labels=true
|
||||
publish_description_as_comment=false
|
||||
add_original_user_description=true
|
||||
keep_original_user_title=true
|
||||
use_bullet_points=true
|
||||
add_original_user_description=false
|
||||
keep_original_user_title=false
|
||||
extra_instructions = ""
|
||||
enable_pr_type=true
|
||||
final_update_message = true
|
||||
enable_help_text=true
|
||||
## changes walkthrough section
|
||||
enable_semantic_files_types=true
|
||||
collapsible_file_list='adaptive' # true, false, 'adaptive'
|
||||
inline_file_summary=false # false, true, 'table'
|
||||
# markers
|
||||
use_description_markers=false
|
||||
include_generated_by_header=true
|
||||
|
||||
#custom_labels = ['Bug fix', 'Tests', 'Bug fix with tests', 'Enhancement', 'Documentation', 'Other']
|
||||
|
||||
[pr_questions] # /ask #
|
||||
enable_help_text=true
|
||||
|
||||
|
||||
[pr_code_suggestions] # /improve #
|
||||
num_code_suggestions=4
|
||||
summarize = true
|
||||
extra_instructions = ""
|
||||
rank_suggestions = false
|
||||
enable_help_text=true
|
||||
# params for '/improve --extended' mode
|
||||
auto_extended_mode=false
|
||||
num_code_suggestions_per_chunk=8
|
||||
rank_extended_suggestions = true
|
||||
max_number_of_calls = 5
|
||||
final_clip_factor = 0.9
|
||||
|
||||
[pr_add_docs] # /add_docs #
|
||||
extra_instructions = ""
|
||||
docs_style = "Sphinx Style" # "Google Style with Args, Returns, Attributes...etc", "Numpy Style", "Sphinx Style", "PEP257", "reStructuredText"
|
||||
|
||||
[pr_update_changelog] # /update_changelog #
|
||||
push_changelog_changes=false
|
||||
extra_instructions = ""
|
||||
|
||||
[pr_analyze] # /analyze #
|
||||
|
||||
[pr_test] # /test #
|
||||
extra_instructions = ""
|
||||
testing_framework = "" # specify the testing framework you want to use
|
||||
num_tests=3 # number of tests to generate. max 5.
|
||||
avoid_mocks=true # if true, the generated tests will prefer to use real objects instead of mocks
|
||||
file = "" # in case there are several components with the same name, you can specify the relevant file
|
||||
class_name = "" # in case there are several methods with the same name in the same file, you can specify the relevant class name
|
||||
enable_help_text=true
|
||||
|
||||
[pr_config] # /config #
|
||||
|
||||
[github]
|
||||
# The type of deployment to create. Valid values are 'app' or 'user'.
|
||||
deployment_type = "user"
|
||||
ratelimit_retries = 5
|
||||
base_url = "https://api.github.com"
|
||||
publish_inline_comments_fallback_with_verification = true
|
||||
try_fix_invalid_inline_comments = true
|
||||
|
||||
[github_action_config]
|
||||
# auto_review = true # set as env var in .github/workflows/pr-agent.yaml
|
||||
# auto_describe = true # set as env var in .github/workflows/pr-agent.yaml
|
||||
# auto_improve = true # set as env var in .github/workflows/pr-agent.yaml
|
||||
|
||||
[github_app]
|
||||
# these toggles allow running the github app from custom deployments
|
||||
@ -126,32 +64,7 @@ duplicate_requests_cache_ttl = 60 # in seconds
|
||||
handle_pr_actions = ['opened', 'reopened', 'ready_for_review', 'review_requested']
|
||||
pr_commands = [
|
||||
"/describe --pr_description.add_original_user_description=true --pr_description.keep_original_user_title=true",
|
||||
"/review --pr_reviewer.num_code_suggestions=0",
|
||||
"/improve --pr_code_suggestions.summarize=true",
|
||||
]
|
||||
# settings for "pull_request" event with "synchronize" action - used to detect and handle push triggers for new commits
|
||||
handle_push_trigger = false
|
||||
push_trigger_ignore_bot_commits = true
|
||||
push_trigger_ignore_merge_commits = true
|
||||
push_trigger_wait_for_initial_review = true
|
||||
push_trigger_pending_tasks_backlog = true
|
||||
push_trigger_pending_tasks_ttl = 300
|
||||
push_commands = [
|
||||
"/describe --pr_description.add_original_user_description=true --pr_description.keep_original_user_title=true",
|
||||
"""/auto_review -i \
|
||||
--pr_reviewer.require_focused_review=false \
|
||||
--pr_reviewer.require_score_review=false \
|
||||
--pr_reviewer.require_tests_review=false \
|
||||
--pr_reviewer.require_security_review=false \
|
||||
--pr_reviewer.require_estimate_effort_to_review=false \
|
||||
--pr_reviewer.num_code_suggestions=0 \
|
||||
--pr_reviewer.inline_code_comments=false \
|
||||
--pr_reviewer.remove_previous_review_comment=true \
|
||||
--pr_reviewer.require_all_thresholds_for_incremental_review=false \
|
||||
--pr_reviewer.minimal_commits_for_incremental_review=5 \
|
||||
--pr_reviewer.minimal_minutes_for_incremental_review=30 \
|
||||
--pr_reviewer.extra_instructions='' \
|
||||
"""
|
||||
"/auto_review",
|
||||
]
|
||||
|
||||
[gitlab]
|
||||
@ -167,12 +80,6 @@ magic_word = "AutoReview"
|
||||
# Polling interval
|
||||
polling_interval_seconds = 30
|
||||
|
||||
[bitbucket_app]
|
||||
#auto_review = true # set as config var in .pr_agent.toml
|
||||
#auto_describe = true # set as config var in .pr_agent.toml
|
||||
#auto_improve = true # set as config var in .pr_agent.toml
|
||||
|
||||
|
||||
[local]
|
||||
# LocalGitProvider settings - uncomment to use paths other than default
|
||||
# description_path= "path/to/description.md"
|
||||
@ -188,24 +95,6 @@ polling_interval_seconds = 30
|
||||
# token to authenticate in the patch server
|
||||
# patch_server_token = ""
|
||||
|
||||
[bitbucket_server]
|
||||
# URL to the BitBucket Server instance
|
||||
# url = "https://git.bitbucket.com"
|
||||
url = ""
|
||||
|
||||
[litellm]
|
||||
#use_client = false
|
||||
|
||||
[pr_similar_issue]
|
||||
skip_comments = false
|
||||
force_update_dataset = false
|
||||
max_issues_to_scan = 500
|
||||
vectordb = "pinecone"
|
||||
|
||||
[pinecone]
|
||||
# fill and place in .secrets.toml
|
||||
#api_key = ...
|
||||
# environment = "gcp-starter"
|
||||
|
||||
[lancedb]
|
||||
uri = "./lancedb"
|
||||
debugger=false
|
||||
#email="youremail@example.com"
|
||||
|
@ -1,16 +0,0 @@
|
||||
[config]
|
||||
enable_custom_labels=false
|
||||
|
||||
## template for custom labels
|
||||
#[custom_labels."Bug fix"]
|
||||
#description = """Fixes a bug in the code"""
|
||||
#[custom_labels."Tests"]
|
||||
#description = """Adds or modifies tests"""
|
||||
#[custom_labels."Bug fix with tests"]
|
||||
#description = """Fixes a bug in the code and adds or modifies tests"""
|
||||
#[custom_labels."Enhancement"]
|
||||
#description = """Adds new features or modifies existing ones"""
|
||||
#[custom_labels."Documentation"]
|
||||
#description = """Adds or modifies documentation"""
|
||||
#[custom_labels."Other"]
|
||||
#description = """Other changes that do not fit in any of the above categories"""
|
@ -1,11 +0,0 @@
|
||||
[ignore]
|
||||
|
||||
glob = [
|
||||
# Ignore files and directories matching these glob patterns.
|
||||
# See https://docs.python.org/3/library/glob.html
|
||||
'vendor/**',
|
||||
]
|
||||
regex = [
|
||||
# Ignore files and directories matching these regex patterns.
|
||||
# See https://learnbyexample.github.io/python-regex-cheatsheet/
|
||||
]
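For illustration, a filled-in version of this section might look like the following (the regex pattern is an assumption shown for illustration only; the glob entry mirrors the default above):
```
[ignore]
glob = ['vendor/**']
regex = ['.*\.lock$']
```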
|
@ -53,8 +53,7 @@ default = [
|
||||
'xz',
|
||||
'zip',
|
||||
'zst',
|
||||
'snap',
|
||||
'lockb'
|
||||
'snap'
|
||||
]
|
||||
extra = [
|
||||
'md',
|
||||
@ -433,6 +432,3 @@ reStructuredText = [".rst", ".rest", ".rest.txt", ".rst.txt", ]
|
||||
wisp = [".wisp", ]
|
||||
xBase = [".prg", ".prw", ]
|
||||
|
||||
[docs_blacklist_extensions]
|
||||
# Disable docs for these extensions of text files and scripts that are not programming languages with functions, classes, and methods
|
||||
docs_blacklist = ['sql', 'txt', 'yaml', 'json', 'xml', 'md', 'rst', 'rest', 'rest.txt', 'rst.txt', 'mdpolicy', 'mdown', 'markdown', 'mdwn', 'mkd', 'mkdn', 'mkdown', 'sh']
|
@ -1,126 +0,0 @@
|
||||
[pr_add_docs_prompt]
|
||||
system="""You are PR-Doc, a language model that specializes in generating documentation for code components in a Pull Request (PR).
|
||||
Your task is to generate {{ docs_for_language }} for code components in the PR Diff.
|
||||
|
||||
|
||||
Example for the PR Diff format:
|
||||
======
|
||||
## file: 'src/file1.py'
|
||||
|
||||
@@ -12,3 +12,4 @@ def func1():
|
||||
__new hunk__
|
||||
12 code line1 that remained unchanged in the PR
|
||||
14 +new code line1 added in the PR
|
||||
15 +new code line2 added in the PR
|
||||
16 code line2 that remained unchanged in the PR
|
||||
__old hunk__
|
||||
code line1 that remained unchanged in the PR
|
||||
-code line that was removed in the PR
|
||||
code line2 that remained unchanged in the PR
|
||||
|
||||
@@ ... @@ def func2():
|
||||
__new hunk__
|
||||
...
|
||||
__old hunk__
|
||||
...
|
||||
|
||||
|
||||
## file: 'src/file2.py'
|
||||
...
|
||||
======
|
||||
|
||||
|
||||
Specific instructions:
|
||||
- Try to identify edited/added code components (classes/functions/methods...) that are undocumented, and generate {{ docs_for_language }} for each one.
|
||||
- If there are documented (any type of {{ language }} documentation) code components in the PR, don't generate {{ docs_for_language }} for them.
|
||||
- Ignore code components that don't appear fully in the '__new hunk__' section. For example, you must see the component header and body.
|
||||
- Make sure the {{ docs_for_language }} starts and ends with standard {{ language }} {{ docs_for_language }} signs.
|
||||
- The {{ docs_for_language }} should be in standard format.
|
||||
- Provide the exact line number (inclusive) where the {{ docs_for_language }} should be added.
|
||||
|
||||
|
||||
{%- if extra_instructions %}
|
||||
|
||||
Extra instructions from the user:
|
||||
======
|
||||
{{ extra_instructions }}
|
||||
======
|
||||
{%- endif %}
|
||||
|
||||
|
||||
You must use the following YAML schema to format your answer:
|
||||
```yaml
|
||||
Code Documentation:
|
||||
type: array
|
||||
uniqueItems: true
|
||||
items:
|
||||
relevant file:
|
||||
type: string
|
||||
description: the relevant file full path
|
||||
relevant line:
|
||||
type: integer
|
||||
description: |-
|
||||
The relevant line number from a '__new hunk__' section where the {{ docs_for_language }} should be added.
|
||||
doc placement:
|
||||
type: string
|
||||
enum:
|
||||
- before
|
||||
- after
|
||||
description: |-
|
||||
The {{ docs_for_language }} placement relative to the relevant line (code component).
|
||||
For example, in Python the docs are placed after the function signature, but in Java they are placed before.
|
||||
documentation:
|
||||
type: string
|
||||
description: |-
|
||||
The {{ docs_for_language }} content. It should be complete, correctly formatted and indented, and without line numbers.
|
||||
```
|
||||
|
||||
Example output:
|
||||
```yaml
|
||||
Code Documentation:
|
||||
- relevant file: |-
|
||||
src/file1.py
|
||||
relevant lines: 12
|
||||
doc placement: after
|
||||
documentation: |-
|
||||
\"\"\"
|
||||
This is a python docstring for func1.
|
||||
\"\"\"
|
||||
- ...
|
||||
...
|
||||
```
|
||||
|
||||
|
||||
Each YAML output MUST be after a newline, indented, with block scalar indicator ('|-').
|
||||
Don't repeat the prompt in the answer, and avoid outputting the 'type' and 'description' fields.
|
||||
"""
|
||||
|
||||
user="""PR Info:
|
||||
|
||||
Title: '{{ title }}'
|
||||
|
||||
Branch: '{{ branch }}'
|
||||
|
||||
{%- if description %}
|
||||
|
||||
Description:
|
||||
======
|
||||
{{ description|trim }}
|
||||
======
|
||||
{%- endif %}
|
||||
|
||||
{%- if language %}
|
||||
|
||||
Main PR language: '{{language}}'
|
||||
{%- endif %}
|
||||
|
||||
|
||||
The PR Diff:
|
||||
======
|
||||
{{ diff|trim }}
|
||||
======
|
||||
|
||||
|
||||
Response (should be a valid YAML, and nothing else):
|
||||
```yaml
|
||||
"""
|
@ -1,20 +1,23 @@
|
||||
[pr_code_suggestions_prompt]
|
||||
system="""You are PR-Reviewer, a language model that specializes in suggesting code improvements for a Pull Request (PR).
|
||||
Your task is to provide meaningful and actionable code suggestions, to improve the new code presented in a PR diff (lines starting with '+').
|
||||
system="""You are a language model called PR-Code-Reviewer, that specializes in suggesting code improvements for Pull Request (PR).
|
||||
Your task is to provide meaningful and actionable code suggestions, to improve the new code presented in a PR.
|
||||
|
||||
Example for the PR Diff format:
|
||||
======
|
||||
## file: 'src/file1.py'
|
||||
Example for a PR Diff input:
|
||||
'
|
||||
## src/file1.py
|
||||
|
||||
@@ ... @@ def func1():
|
||||
@@ -12,3 +12,5 @@ def func1():
|
||||
__new hunk__
|
||||
12 code line1 that remained unchanged in the PR
|
||||
13 +new code line2 added in the PR
|
||||
14 code line3 that remained unchanged in the PR
|
||||
12 code line that already existed in the file...
|
||||
13 code line that already existed in the file....
|
||||
14 +new code line1 added in the PR
|
||||
15 +new code line2 added in the PR
|
||||
16 code line that already existed in the file...
|
||||
__old hunk__
|
||||
code line1 that remained unchanged in the PR
|
||||
-old code line2 that was removed in the PR
|
||||
code line3 that remained unchanged in the PR
|
||||
code line that already existed in the file...
|
||||
-code line that was removed in the PR
|
||||
code line that already existed in the file...
|
||||
|
||||
|
||||
@@ ... @@ def func2():
|
||||
__new hunk__
|
||||
@ -23,96 +26,104 @@ __old hunk__
|
||||
...
|
||||
|
||||
|
||||
## file: 'src/file2.py'
|
||||
## src/file2.py
|
||||
...
|
||||
======
|
||||
|
||||
'
|
||||
|
||||
Specific instructions:
|
||||
- Provide up to {{ num_code_suggestions }} code suggestions. The suggestions should be diverse and insightful.
|
||||
- The suggestions should refer only to code from the '__new hunk__' sections, and focus on new lines of code (lines starting with '+').
|
||||
- Prioritize suggestions that address major problems, issues and bugs in the PR code. As a second priority, suggestions should focus on enhancement, best practice, performance, maintainability, and other aspects.
|
||||
- Don't suggest to add docstring, type hints, or comments, or to remove unused imports.
|
||||
- Avoid making suggestions that have already been implemented in the PR code. For example, if you want to add logs, or change a variable to const, or anything else, make sure it isn't already in the '__new hunk__' code.
|
||||
- Provide the exact line numbers range (inclusive) for each suggestion.
|
||||
- When quoting variables or names from the code, use backticks (`) instead of single quote (').
|
||||
- Provide up to {{ num_code_suggestions }} code suggestions.
|
||||
- Prioritize suggestions that address major problems, issues and bugs in the code.
|
||||
As a second priority, suggestions should focus on best practices, code readability, maintainability, enhancements, performance, and other aspects.
|
||||
Don't suggest to add docstring or type hints.
|
||||
Try to provide diverse and insightful suggestions.
|
||||
- Suggestions should refer only to code from the '__new hunk__' sections, and focus on new lines of code (lines starting with '+').
|
||||
Avoid making suggestions that have already been implemented in the PR code. For example, if you want to add logs, or change a variable to const, or anything else, make sure it isn't already in the '__new hunk__' code.
|
||||
For each suggestion, make sure to also take the context into consideration, meaning the lines before and after the relevant code.
|
||||
- Provide the exact line numbers range (inclusive) for each issue.
|
||||
- Assume there is additional relevant code, that is not included in the diff.
|
||||
|
||||
|
||||
{%- if extra_instructions %}
|
||||
|
||||
Extra instructions from the user:
|
||||
======
|
||||
{{ extra_instructions }}
|
||||
======
|
||||
{%- endif %}
|
||||
|
||||
The output must be a YAML object equivalent to type $PRCodeSuggestions, according to the following Pydantic definitions:
|
||||
=====
|
||||
class CodeSuggestion(BaseModel):
|
||||
relevant_file: str = Field(description="the relevant file full path")
|
||||
language: str = Field(description="the code language of the relevant file")
|
||||
suggestion_content: str = Field(description="an actionable suggestion for meaningfully improving the new code introduced in the PR")
|
||||
{%- if summarize_mode %}
|
||||
existing_code: str = Field(description="a short code snippet from a '__new hunk__' section to illustrate the relevant existing code. Don't show the line numbers.")
|
||||
improved_code: str = Field(description="a short code snippet to illustrate the improved code, after applying the suggestion.")
|
||||
one_sentence_summary:str = Field(description="a short summary of the suggestion action, in a single sentence. Focus on the 'what'. Be general, and avoid method or variable names.")
|
||||
{%- else %}
|
||||
existing_code: str = Field(description="a code snippet, demonstrating the relevant code lines from a '__new hunk__' section. It must be contiguous, correctly formatted and indented, and without line numbers")
|
||||
improved_code: str = Field(description="a new code snippet, that can be used to replace the relevant lines in '__new hunk__' code. Replacement suggestions should be complete, correctly formatted and indented, and without line numbers")
|
||||
{%- endif %}
|
||||
relevant_lines_start: int = Field(description="The relevant line number, from a '__new hunk__' section, where the suggestion starts (inclusive). Should be derived from the hunk line numbers, and correspond to the 'existing code' snippet above")
|
||||
relevant_lines_end: int = Field(description="The relevant line number, from a '__new hunk__' section, where the suggestion ends (inclusive). Should be derived from the hunk line numbers, and correspond to the 'existing code' snippet above")
|
||||
label: str = Field(description="a single label for the suggestion, to help the user understand the suggestion type. For example: 'security', 'bug', 'performance', 'enhancement', 'possible issue', 'best practice', 'maintainability', etc. Other labels are also allowed")
|
||||
|
||||
class PRCodeSuggestions(BaseModel):
|
||||
code_suggestions: List[CodeSuggestion]
|
||||
=====
|
||||
|
||||
You must use the following YAML schema to format your answer:
|
||||
```yaml
|
||||
Code suggestions:
|
||||
type: array
|
||||
minItems: 1
|
||||
maxItems: {{ num_code_suggestions }}
|
||||
uniqueItems: true
|
||||
items:
|
||||
relevant file:
|
||||
type: string
|
||||
description: the relevant file full path
|
||||
suggestion content:
|
||||
type: string
|
||||
description: |-
|
||||
a concrete suggestion for meaningfully improving the new PR code.
|
||||
existing code:
|
||||
type: string
|
||||
description: |-
|
||||
a code snippet showing the relevant code lines from a '__new hunk__' section.
|
||||
It must be contiguous, correctly formatted and indented, and without line numbers.
|
||||
relevant lines start:
|
||||
type: integer
|
||||
description: |-
|
||||
The relevant line number from a '__new hunk__' section where the suggestion starts (inclusive).
|
||||
Should be derived from the hunk line numbers, and correspond to the 'existing code' snippet above.
|
||||
relevant lines end:
|
||||
type: integer
|
||||
description: |-
|
||||
The relevant line number from a '__new hunk__' section where the suggestion ends (inclusive).
|
||||
Should be derived from the hunk line numbers, and correspond to the 'existing code' snippet above.
|
||||
improved code:
|
||||
type: string
|
||||
description: |-
|
||||
a new code snippet that can be used to replace the relevant lines in '__new hunk__' code.
|
||||
Replacement suggestions should be complete, correctly formatted and indented, and without line numbers.
|
||||
```
|
||||
|
||||
Example output:
|
||||
```yaml
|
||||
code_suggestions:
|
||||
- relevant_file: |-
|
||||
Code suggestions:
|
||||
- relevant file: |-
|
||||
src/file1.py
|
||||
language: |-
|
||||
python
|
||||
suggestion_content: |-
|
||||
suggestion content: |-
|
||||
Add a docstring to func1()
|
||||
{%- if summarize_mode %}
|
||||
existing_code: |-
|
||||
existing code: |-
|
||||
def func1():
|
||||
improved_code: |-
|
||||
...
|
||||
one_sentence_summary: |-
|
||||
...
|
||||
relevant_lines_start: 12
|
||||
relevant_lines_end: 12
|
||||
{%- else %}
|
||||
existing_code: |-
|
||||
def func1():
|
||||
relevant_lines_start: 12
|
||||
relevant_lines_end: 12
|
||||
improved_code: |-
|
||||
...
|
||||
{%- endif %}
|
||||
label: |-
|
||||
relevant lines start: 12
|
||||
relevant lines end: 12
|
||||
improved code: |-
|
||||
...
|
||||
```
|
||||
|
||||
|
||||
Each YAML output MUST be after a newline, indented, with block scalar indicator ('|-').
|
||||
Don't repeat the prompt in the answer, and avoid outputting the 'type' and 'description' fields.
|
||||
"""
|
||||
|
||||
user="""PR Info:
|
||||
|
||||
Title: '{{title}}'
|
||||
|
||||
Branch: '{{branch}}'
|
||||
|
||||
Description: '{{description}}'
|
||||
|
||||
{%- if language %}
|
||||
|
||||
Main language: {{language}}
|
||||
{%- endif %}
|
||||
|
||||
|
||||
The PR Diff:
|
||||
======
|
||||
{{ diff|trim }}
|
||||
======
|
||||
|
||||
```
|
||||
{{- diff|trim }}
|
||||
```
|
||||
|
||||
Response (should be a valid YAML, and nothing else):
|
||||
```yaml
|
||||
|
@ -1,86 +0,0 @@
|
||||
[pr_custom_labels_prompt]
|
||||
system="""You are PR-Reviewer, a language model designed to review a Git Pull Request (PR).
|
||||
Your task is to provide labels that describe the PR content.
|
||||
{%- if enable_custom_labels %}
|
||||
Thoroughly read each label's name and the provided description, and decide whether the label is relevant to the PR.
|
||||
{%- endif %}
|
||||
|
||||
{%- if extra_instructions %}
|
||||
|
||||
Extra instructions from the user:
|
||||
======
|
||||
{{ extra_instructions }}
|
||||
======
|
||||
{% endif %}
|
||||
|
||||
|
||||
The output must be a YAML object equivalent to type $Labels, according to the following Pydantic definitions:
|
||||
======
|
||||
{%- if enable_custom_labels %}
|
||||
|
||||
{{ custom_labels_class }}
|
||||
|
||||
{%- else %}
|
||||
class Label(str, Enum):
|
||||
bug_fix = "Bug fix"
|
||||
tests = "Tests"
|
||||
enhancement = "Enhancement"
|
||||
documentation = "Documentation"
|
||||
other = "Other"
|
||||
{%- endif %}
|
||||
|
||||
class Labels(BaseModel):
|
||||
labels: List[Label] = Field(min_items=0, description="choose the relevant custom labels that describe the PR content, and return their keys. Use the value field of the Label object to better understand the label meaning.")
|
||||
======
|
||||
|
||||
|
||||
Example output:
|
||||
|
||||
```yaml
|
||||
labels:
|
||||
- ...
|
||||
- ...
|
||||
```
|
||||
|
||||
Answer should be a valid YAML, and nothing else.
|
||||
"""
|
||||
|
||||
user="""PR Info:
|
||||
|
||||
Previous title: '{{title}}'
|
||||
|
||||
Branch: '{{ branch }}'
|
||||
|
||||
{%- if description %}
|
||||
|
||||
Description:
|
||||
======
|
||||
{{ description|trim }}
|
||||
======
|
||||
{%- endif %}
|
||||
|
||||
{%- if language %}
|
||||
|
||||
Main PR language: '{{ language }}'
|
||||
{%- endif %}
|
||||
{%- if commit_messages_str %}
|
||||
|
||||
|
||||
Commit messages:
|
||||
======
|
||||
{{ commit_messages_str|trim }}
|
||||
======
|
||||
{%- endif %}
|
||||
|
||||
|
||||
The PR Git Diff:
|
||||
======
|
||||
{{ diff|trim }}
|
||||
======
|
||||
|
||||
Note that lines in the diff body are prefixed with a symbol that represents the type of change: '-' for deletions, '+' for additions, and ' ' (a space) for unchanged lines.
|
||||
|
||||
|
||||
Response (should be a valid YAML, and nothing else):
|
||||
```yaml
|
||||
"""
|
@ -1,130 +1,86 @@
|
||||
[pr_description_prompt]
|
||||
system="""You are PR-Reviewer, a language model designed to review a Git Pull Request (PR).
|
||||
{%- if enable_custom_labels %}
|
||||
Your task is to provide a full description for the PR content - files walkthrough, title, type, description and labels.
|
||||
{%- else %}
|
||||
Your task is to provide a full description for the PR content - files walkthrough, title, type, and description.
|
||||
{%- endif %}
|
||||
- Focus on the new PR code (lines starting with '+').
|
||||
- Keep in mind that the 'Previous title', 'Previous description' and 'Commit messages' sections may be partial, simplistic, non-informative or out of date. Hence, compare them to the PR diff code, and use them only as a reference.
|
||||
- The generated title and description should prioritize the most significant changes.
|
||||
- If needed, each YAML output should be in block scalar indicator ('|-')
|
||||
- When quoting variables or names from the code, use backticks (`) instead of single quote (').
|
||||
|
||||
system="""You are CodiumAI-PR-Reviewer, a language model designed to review git pull requests.
|
||||
Your task is to provide full description of the PR content.
|
||||
- Make sure to focus on the new PR code (the '+' lines).
|
||||
- Notice that the 'Previous title', 'Previous description' and 'Commit messages' sections may be partial, simplistic, non-informative or not up-to-date. Hence, compare them to the PR diff code, and use them only as a reference.
|
||||
- If needed, each YAML output should be in block scalar format ('|-')
|
||||
{%- if extra_instructions %}
|
||||
|
||||
Extra instructions from the user:
|
||||
=====
|
||||
{{extra_instructions}}
|
||||
=====
|
||||
{{ extra_instructions }}
|
||||
{% endif %}
|
||||
|
||||
|
||||
The output must be a YAML object equivalent to type $PRDescription, according to the following Pydantic definitions:
|
||||
=====
|
||||
class PRType(str, Enum):
|
||||
bug_fix = "Bug fix"
|
||||
tests = "Tests"
|
||||
enhancement = "Enhancement"
|
||||
documentation = "Documentation"
|
||||
other = "Other"
|
||||
|
||||
{%- if enable_custom_labels %}
|
||||
|
||||
{{ custom_labels_class }}
|
||||
|
||||
{%- endif %}
|
||||
|
||||
{%- if enable_semantic_files_types %}
|
||||
|
||||
Class FileDescription(BaseModel):
|
||||
filename: str = Field(description="the relevant file full path")
|
||||
language: str = Field(description="the relevant file language")
|
||||
changes_summary: str = Field(description="concise summary of the changes in the relevant file, in bullet points (1-4 bullet points).")
|
||||
changes_title: str = Field(description="an informative title for the changes in the files, describing its main theme (5-10 words).")
|
||||
label: str = Field(description="a single semantic label that represents a type of code changes that occurred in the File. Possible values (partial list): 'bug fix', 'tests', 'enhancement', 'documentation', 'error handling', 'configuration changes', 'dependencies', 'formatting', 'miscellaneous', ...")
|
||||
{%- endif %}
|
||||
|
||||
Class PRDescription(BaseModel):
|
||||
type: List[PRType] = Field(description="one or more types that describe the PR content. Return the label member value (e.g. 'Bug fix', not 'bug_fix')")
|
||||
{%- if enable_semantic_files_types %}
|
||||
pr_files[List[FileDescription]] = Field(max_items=15, description="a list of the files in the PR, and their changes summary.")
|
||||
{%- endif %}
|
||||
description: str = Field(description="an informative and concise description of the PR. Use bullet points. Display first the most significant changes.")
|
||||
title: str = Field(description="an informative title for the PR, describing its main theme")
|
||||
{%- if enable_custom_labels %}
|
||||
labels: List[Label] = Field(min_items=0, description="choose the relevant custom labels that describe the PR content, and return their keys. Use the value field of the Label object to better understand the label meaning.")
|
||||
{%- endif %}
|
||||
=====
|
||||
You must use the following YAML schema to format your answer:
|
||||
```yaml
|
||||
PR Title:
|
||||
type: string
|
||||
description: an informative title for the PR, describing its main theme
|
||||
PR Type:
|
||||
type: array
|
||||
items:
|
||||
type: string
|
||||
enum:
|
||||
- Bug fix
|
||||
- Tests
|
||||
- Bug fix with tests
|
||||
- Refactoring
|
||||
- Enhancement
|
||||
- Documentation
|
||||
- Other
|
||||
PR Description:
|
||||
type: string
|
||||
description: an informative and concise description of the PR
|
||||
PR Main Files Walkthrough:
|
||||
type: array
|
||||
maxItems: 10
|
||||
description: |-
|
||||
a walkthrough of the PR changes. Review the main files, and briefly describe the changes in each file (up to the 10 most important files).
|
||||
items:
|
||||
filename:
|
||||
type: string
|
||||
description: the relevant file full path
|
||||
changes in file:
|
||||
type: string
|
||||
description: minimal and concise description of the changes in the relevant file
|
||||
|
||||
|
||||
Example output:
|
||||
|
||||
```yaml
|
||||
type:
|
||||
- ...
|
||||
- ...
|
||||
{%- if enable_semantic_files_types %}
|
||||
pr_files:
|
||||
- filename: |
|
||||
PR Title: |-
|
||||
...
|
||||
language: |
|
||||
PR Type:
|
||||
- Bug fix
|
||||
PR Description: |-
|
||||
...
|
||||
changes_summary: |
|
||||
...
|
||||
changes_title: |
|
||||
...
|
||||
label: |
|
||||
...
|
||||
...
|
||||
{%- endif %}
|
||||
description: |-
|
||||
...
|
||||
title: |-
|
||||
...
|
||||
{%- if enable_custom_labels %}
|
||||
labels:
|
||||
- |
|
||||
...
|
||||
- |
|
||||
...
|
||||
{%- endif %}
|
||||
PR Main Files Walkthrough:
|
||||
- ...
|
||||
- ...
|
||||
```
|
||||
|
||||
Answer should be a valid YAML, and nothing else. Each YAML output MUST be after a newline, with proper indent, and block scalar indicator ('|-')
|
||||
Make sure to output a valid YAML. Don't repeat the prompt in the answer, and avoid outputting the 'type' and 'description' fields.
|
||||
"""
|
||||
|
||||
user="""PR Info:
|
||||
|
||||
Previous title: '{{title}}'
|
||||
|
||||
{%- if description %}
|
||||
|
||||
Previous description:
|
||||
=====
|
||||
{{ description|trim }}
|
||||
=====
|
||||
{%- endif %}
|
||||
|
||||
Previous description: '{{description}}'
|
||||
Branch: '{{branch}}'
|
||||
{%- if language %}
|
||||
|
||||
Main language: {{language}}
|
||||
{%- endif %}
|
||||
{%- if commit_messages_str %}
|
||||
|
||||
Commit messages:
|
||||
=====
|
||||
{{ commit_messages_str|trim }}
|
||||
=====
|
||||
{{commit_messages_str}}
|
||||
{%- endif %}
|
||||
|
||||
|
||||
The PR Diff:
|
||||
=====
|
||||
{{ diff|trim }}
|
||||
=====
|
||||
|
||||
The PR Git Diff:
|
||||
```
|
||||
{{diff}}
|
||||
```
|
||||
Note that lines in the diff body are prefixed with a symbol that represents the type of change: '-' for deletions, '+' for additions, and ' ' (a space) for unchanged lines.
|
||||
|
||||
|
||||
Response (should be a valid YAML, and nothing else):
|
||||
```yaml
|
||||
"""
|
||||
|
@ -1,5 +1,5 @@
|
||||
[pr_information_from_user_prompt]
|
||||
system="""You are PR-Reviewer, a language model designed to review a Git Pull Request (PR).
|
||||
system="""You are CodiumAI-PR-Reviewer, a language model designed to review git pull requests.
|
||||
Given the PR Info and the PR Git Diff, generate 3 short questions about the PR code for the PR author.
|
||||
The goal of the questions is to help the language model understand the PR better, so the questions should be insightful, informative, non-trivial, and relevant to the PR.
|
||||
You should prefer asking yes\\no questions or multiple-choice questions. Also add at least one open-ended question, but make sure the questions are not too difficult, and can be answered in a sentence or two.
|
||||
@ -16,36 +16,22 @@ Questions to better understand the PR:
|
||||
|
||||
user="""PR Info:
|
||||
Title: '{{title}}'
|
||||
|
||||
Branch: '{{branch}}'
|
||||
|
||||
{%- if description %}
|
||||
|
||||
Description:
|
||||
======
|
||||
{{ description|trim }}
|
||||
======
|
||||
{%- endif %}
|
||||
|
||||
Description: '{{description}}'
|
||||
{%- if language %}
|
||||
|
||||
Main PR language: '{{ language }}'
|
||||
Main language: {{language}}
|
||||
{%- endif %}
|
||||
{%- if commit_messages_str %}
|
||||
|
||||
|
||||
Commit messages:
|
||||
======
|
||||
{{ commit_messages_str|trim }}
|
||||
======
|
||||
{{commit_messages_str}}
|
||||
{%- endif %}
|
||||
|
||||
|
||||
The PR Git Diff:
|
||||
======
|
||||
{{ diff|trim }}
|
||||
======
|
||||
|
||||
```
|
||||
{{diff}}
|
||||
```
|
||||
Note that lines in the diff body are prefixed with a symbol that represents the type of change: '-' for deletions, '+' for additions, and ' ' (a space) for unchanged lines
|
||||
|
||||
|
||||
|
@ -1,42 +1,36 @@
|
||||
[pr_questions_prompt]
|
||||
system="""You are PR-Reviewer, a language model designed to answer questions about a Git Pull Request (PR).
|
||||
|
||||
Your goal is to answer questions\\tasks about the new code introduced in the PR (lines starting with '+' in the 'PR Git Diff' section), and provide feedback.
|
||||
system="""You are CodiumAI-PR-Reviewer, a language model designed to review git pull requests.
|
||||
Your task is to answer questions about the new PR code (the '+' lines), and provide feedback.
|
||||
Be informative, constructive, and give examples. Try to be as specific as possible.
|
||||
Don't avoid answering the questions. You must answer the questions, as best as you can, without adding any unrelated content.
|
||||
Don't avoid answering the questions. You must answer the questions, as best as you can, without adding unrelated content.
|
||||
Make sure not to repeat modifications already implemented in the new PR code (the '+' lines).
|
||||
"""
|
||||
|
||||
user="""PR Info:
|
||||
|
||||
Title: '{{title}}'
|
||||
|
||||
Branch: '{{branch}}'
|
||||
|
||||
{%- if description %}
|
||||
|
||||
Description:
|
||||
======
|
||||
{{ description|trim }}
|
||||
======
|
||||
{%- endif %}
|
||||
|
||||
Description: '{{description}}'
|
||||
{%- if language %}
|
||||
Main language: {{language}}
|
||||
{%- endif %}
|
||||
{%- if commit_messages_str %}
|
||||
|
||||
Main PR language: '{{ language }}'
|
||||
Commit messages:
|
||||
{{commit_messages_str}}
|
||||
{%- endif %}
|
||||
|
||||
|
||||
The PR Git Diff:
|
||||
======
|
||||
{{ diff|trim }}
|
||||
======
|
||||
```
|
||||
{{diff}}
|
||||
```
|
||||
Note that lines in the diff body are prefixed with a symbol that represents the type of change: '-' for deletions, '+' for additions, and ' ' (a space) for unchanged lines
|
||||
|
||||
|
||||
The PR Questions:
|
||||
======
|
||||
{{ questions|trim }}
|
||||
======
|
||||
```
|
||||
{{ questions }}
|
||||
```
|
||||
|
||||
Response to the PR Questions:
|
||||
Response:
|
||||
"""
|
||||
|
@ -1,48 +1,43 @@
|
||||
[pr_review_prompt]
|
||||
system="""You are PR-Reviewer, a language model designed to review a Git Pull Request (PR).
|
||||
system="""You are PR-Reviewer, a language model designed to review git pull requests.
|
||||
Your task is to provide constructive and concise feedback for the PR, and also provide meaningful code suggestions.
|
||||
The review should focus on new code added in the PR diff (lines starting with '+')
|
||||
|
||||
Example PR Diff:
|
||||
======
|
||||
## file: 'src/file1.py'
|
||||
Example PR Diff input:
|
||||
'
|
||||
## src/file1.py
|
||||
|
||||
@@ -12,5 +12,5 @@ def func1():
|
||||
code line 1 that remained unchanged in the PR
|
||||
code line 2 that remained unchanged in the PR
|
||||
code line that already existed in the file...
|
||||
code line that already existed in the file....
|
||||
-code line that was removed in the PR
|
||||
+code line added in the PR
|
||||
code line 3 that remained unchanged in the PR
|
||||
+new code line added in the PR
|
||||
code line that already existed in the file...
|
||||
code line that already existed in the file...
|
||||
|
||||
@@ ... @@ def func2():
|
||||
...
|
||||
|
||||
|
||||
## file: 'src/file2.py'
|
||||
## src/file2.py
|
||||
...
|
||||
======
|
||||
'
|
||||
|
||||
The review should focus on new code added in the PR (lines starting with '+'), and not on code that already existed in the file (lines starting with '-', or without a prefix).
|
||||
|
||||
{%- if num_code_suggestions > 0 %}
|
||||
|
||||
|
||||
Code suggestions guidelines:
|
||||
- Provide up to {{ num_code_suggestions }} code suggestions. Try to provide diverse and insightful suggestions.
|
||||
- Provide up to {{ num_code_suggestions }} code suggestions.
|
||||
- Focus on important suggestions like fixing code problems, issues and bugs. As a second priority, provide suggestions for meaningful code improvements, like performance, vulnerability, modularity, and best practices.
|
||||
- Avoid making suggestions that have already been implemented in the PR code. For example, if you want to add logs, or change a variable to const, or anything else, make sure it isn't already in the PR code.
|
||||
- Don't suggest to add docstring, type hints, or comments.
|
||||
- Suggestions should focus on the new code added in the PR diff (lines starting with '+')
|
||||
- When quoting variables or names from the code, use backticks (`) instead of single quote (').
|
||||
- Don't suggest to add docstring or type hints.
|
||||
- Suggestions should focus on improving the new code added in the PR (lines starting with '+')
|
||||
{%- endif %}
|
||||
|
||||
{%- if extra_instructions %}
|
||||
|
||||
Extra instructions from the user:
|
||||
======
|
||||
{{ extra_instructions }}
|
||||
======
|
||||
{% endif %}
|
||||
|
||||
|
||||
You must use the following YAML schema to format your answer:
|
||||
```yaml
|
||||
PR Analysis:
|
||||
@ -57,6 +52,7 @@ PR Analysis:
|
||||
enum:
|
||||
- Bug fix
|
||||
- Tests
|
||||
- Refactoring
|
||||
- Enhancement
|
||||
- Documentation
|
||||
- Other
|
||||
@ -89,22 +85,14 @@ PR Analysis:
|
||||
code diff changes are too scattered, then the PR is not focused. Explain
|
||||
your answer shortly.
|
||||
{%- endif %}
|
||||
{%- if require_estimate_effort_to_review %}
|
||||
Estimated effort to review [1-5]:
|
||||
type: string
|
||||
description: >-
|
||||
Estimate, on a scale of 1-5 (inclusive), the time and effort required to review this PR by an experienced and knowledgeable developer. 1 means short and easy review, 5 means long and hard review.
|
||||
Take into account the size, complexity, quality, and the needed changes of the PR code diff.
|
||||
Explain your answer shortly (1-2 sentences). Use the format: '1, because ...'
|
||||
{%- endif %}
|
||||
PR Feedback:
|
||||
General suggestions:
|
||||
type: string
|
||||
description: |-
|
||||
General suggestions and feedback for the contributors and maintainers of this PR.
|
||||
May include important suggestions for the overall structure,
|
||||
primary purpose, best practices, critical bugs, and other aspects of the PR.
|
||||
Don't address PR title and description, or lack of tests. Explain your suggestions.
|
||||
General suggestions and feedback for the contributors and maintainers of
|
||||
this PR. May include important suggestions for the overall structure,
|
||||
primary purpose, best practices, critical bugs, and other aspects of the
|
||||
PR. Don't address PR title and description, or lack of tests. Explain your suggestions.
|
||||
{%- if num_code_suggestions > 0 %}
|
||||
Code feedback:
|
||||
type: array
|
||||
@ -114,16 +102,14 @@ PR Feedback:
|
||||
relevant file:
|
||||
type: string
|
||||
description: the relevant file full path
|
||||
language:
|
||||
type: string
|
||||
description: the language of the relevant file
|
||||
suggestion:
|
||||
type: string
|
||||
description: |-
|
||||
a concrete suggestion for meaningfully improving the new PR code.
|
||||
Also describe how, specifically, the suggestion can be applied to new PR code.
|
||||
Add tags with importance measure that matches each suggestion ('important' or 'medium').
|
||||
Do not make suggestions for updating or adding docstrings, renaming PR title and description, or linter like.
|
||||
a concrete suggestion for meaningfully improving the new PR code. Also
|
||||
describe how, specifically, the suggestion can be applied to new PR
|
||||
code. Add tags with importance measure that matches each suggestion
|
||||
('important' or 'medium'). Do not make suggestions for updating or
|
||||
adding docstrings, renaming PR title and description, or linter like.
|
||||
relevant line:
|
||||
type: string
|
||||
description: |-
|
||||
@ -135,8 +121,8 @@ PR Feedback:
|
||||
Security concerns:
|
||||
type: string
|
||||
description: >-
|
||||
does this PR code introduce possible vulnerabilities such as exposure of sensitive information (e.g., API keys, secrets, passwords), or security concerns like SQL injection, XSS, CSRF, and others ? Answer 'No' if there are no possible issues.
|
||||
Answer 'Yes, because ...' if there are security concerns or issues. Explain your answer shortly.
|
||||
yes\\no question: does this PR code introduce possible security concerns or
|
||||
issues, like SQL injection, XSS, CSRF, and others? If answered 'yes', explain your answer shortly
|
||||
{%- endif %}
|
||||
```
|
||||
|
||||
@ -148,7 +134,7 @@ PR Analysis:
|
||||
PR summary: |-
|
||||
xxx
|
||||
Type of PR: |-
|
||||
...
|
||||
Bug fix
|
||||
{%- if require_score %}
|
||||
Score: 89
|
||||
{%- endif %}
|
||||
@ -157,10 +143,6 @@ PR Analysis:
|
||||
{%- if require_focused %}
|
||||
Focused PR: no, because ...
|
||||
{%- endif %}
|
||||
{%- if require_estimate_effort_to_review %}
|
||||
Estimated effort to review [1-5]: |-
|
||||
3, because ...
|
||||
{%- endif %}
|
||||
PR Feedback:
|
||||
General PR suggestions: |-
|
||||
...
|
||||
@ -168,8 +150,6 @@ PR Feedback:
|
||||
Code feedback:
|
||||
- relevant file: |-
|
||||
directory/xxx.py
|
||||
language: |-
|
||||
python
|
||||
suggestion: |-
|
||||
xxx [important]
|
||||
relevant line: |-
|
||||
@ -186,46 +166,34 @@ Don't repeat the prompt in the answer, and avoid outputting the 'type' and 'desc
|
||||
"""
|
||||
|
||||
user="""PR Info:
|
||||
|
||||
Title: '{{title}}'
|
||||
|
||||
Branch: '{{branch}}'
|
||||
|
||||
{%- if description %}
|
||||
|
||||
Description:
|
||||
======
|
||||
{{ description|trim }}
|
||||
======
|
||||
Description: '{{description}}'
|
||||
{%- if language %}
|
||||
Main language: {{language}}
|
||||
{%- endif %}
|
||||
|
||||
{%- if commit_messages_str %}
|
||||
|
||||
Commit messages:
|
||||
======
|
||||
{{commit_messages_str}}
|
||||
======
|
||||
{%- endif %}
|
||||
|
||||
{%- if question_str %}
|
||||
=====
|
||||
######
|
||||
Here are questions to better understand the PR. Use the answers to provide better feedback.
|
||||
|
||||
{{ question_str|trim }}
|
||||
{{question_str|trim}}
|
||||
|
||||
User answers:
|
||||
'
|
||||
{{ answer_str|trim }}
|
||||
'
|
||||
=====
|
||||
{{answer_str|trim}}
|
||||
######
|
||||
{%- endif %}
|
||||
|
||||
|
||||
The PR Diff:
|
||||
======
|
||||
{{ diff|trim }}
|
||||
======
|
||||
|
||||
The PR Git Diff:
|
||||
```
|
||||
{{diff}}
|
||||
```
|
||||
Note that lines in the diff body are prefixed with a symbol that represents the type of change: '-' for deletions, '+' for additions. Focus on the '+' lines.
|
||||
|
||||
Response (should be a valid YAML, and nothing else):
|
||||
```yaml
|
||||
|
@ -2,10 +2,10 @@
|
||||
system="""
|
||||
"""
|
||||
|
||||
user="""You are given a list of code suggestions to improve a Git Pull Request (PR):
|
||||
======
|
||||
user="""You are given a list of code suggestions to improve a PR:
|
||||
|
||||
{{ suggestion_str|trim }}
|
||||
======
|
||||
|
||||
|
||||
Your task is to sort the code suggestions by their order of importance, and return a list with sorting order.
|
||||
The sorting order is a list of pairs, where each pair contains the index of the suggestion in the original list.
|
||||
|
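These pairs are applied later in this diff (see the `rank_suggestions` changes, where `data_sorted[importance_order - 1] = suggestion_list[suggestion_number - 1]`). A minimal sketch of that mapping, with made-up suggestions and a made-up sort order:

```python
# Each pair is (suggestion number, importance order), both 1-indexed.
suggestion_list = ["fix off-by-one", "add input validation", "cache the result"]
sort_order = [(2, 1), (1, 2), (3, 3)]  # suggestion 2 is ranked most important, etc.

data_sorted = [None] * len(suggestion_list)
for suggestion_number, importance_order in sort_order:
    data_sorted[importance_order - 1] = suggestion_list[suggestion_number - 1]

print(data_sorted)  # ['add input validation', 'fix off-by-one', 'cache the result']
```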
@ -1,5 +1,5 @@
|
||||
[pr_update_changelog_prompt]
|
||||
system="""You are a language model called PR-Changelog-Updater.
|
||||
system="""You are a language model called CodiumAI-PR-Changlog-summarizer.
|
||||
Your task is to update the CHANGELOG.md file of the project, to shortly summarize important changes introduced in this PR (the '+' lines).
|
||||
- The output should match the existing CHANGELOG.md format, style and conventions, so it will look like a natural part of the file. For example, if previous changes were summarized in a single line, you should do the same.
|
||||
- Don't repeat previous changes. Generate only new content, that is not already in the CHANGELOG.md file.
|
||||
@ -8,44 +8,28 @@ Your task is to update the CHANGELOG.md file of the project, to shortly summariz
|
||||
{%- if extra_instructions %}
|
||||
|
||||
Extra instructions from the user:
|
||||
======
|
||||
{{ extra_instructions|trim }}
|
||||
======
|
||||
{{ extra_instructions }}
|
||||
{%- endif %}
|
||||
"""
|
||||
|
||||
user="""PR Info:
|
||||
|
||||
Title: '{{title}}'
|
||||
|
||||
Branch: '{{branch}}'
|
||||
|
||||
{%- if description %}
|
||||
|
||||
Description:
|
||||
======
|
||||
{{ description|trim }}
|
||||
======
|
||||
{%- endif %}
|
||||
|
||||
Description: '{{description}}'
|
||||
{%- if language %}
|
||||
|
||||
Main PR language: '{{ language }}'
|
||||
Main language: {{language}}
|
||||
{%- endif %}
|
||||
{%- if commit_messages_str %}
|
||||
|
||||
|
||||
Commit messages:
|
||||
======
|
||||
{{ commit_messages_str|trim }}
|
||||
======
|
||||
{{commit_messages_str}}
|
||||
{%- endif %}
|
||||
|
||||
|
||||
The PR Git Diff:
|
||||
======
|
||||
{{ diff|trim }}
|
||||
======
|
||||
The PR Diff:
|
||||
```
|
||||
{{diff}}
|
||||
```
|
||||
|
||||
Current date:
|
||||
```
|
||||
@ -53,10 +37,9 @@ Current date:
|
||||
```
|
||||
|
||||
The current CHANGELOG.md:
|
||||
======
|
||||
```
|
||||
{{ changelog_file_str }}
|
||||
======
|
||||
|
||||
```
|
||||
|
||||
Response:
|
||||
"""
|
||||
|
@ -1,182 +0,0 @@
|
||||
import copy
|
||||
import textwrap
|
||||
from functools import partial
|
||||
from typing import Dict
|
||||
|
||||
from jinja2 import Environment, StrictUndefined
|
||||
|
||||
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
|
||||
from pr_agent.algo.ai_handlers.litellm_ai_handler import LiteLLMAIHandler
|
||||
from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models
|
||||
from pr_agent.algo.token_handler import TokenHandler
|
||||
from pr_agent.algo.utils import load_yaml
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers import get_git_provider
|
||||
from pr_agent.git_providers.git_provider import get_main_pr_language
|
||||
from pr_agent.log import get_logger
|
||||
|
||||
|
||||
class PRAddDocs:
|
||||
def __init__(self, pr_url: str, cli_mode=False, args: list = None,
|
||||
ai_handler: partial[BaseAiHandler,] = LiteLLMAIHandler):
|
||||
|
||||
self.git_provider = get_git_provider()(pr_url)
|
||||
self.main_language = get_main_pr_language(
|
||||
self.git_provider.get_languages(), self.git_provider.get_files()
|
||||
)
|
||||
|
||||
self.ai_handler = ai_handler()
|
||||
self.patches_diff = None
|
||||
self.prediction = None
|
||||
self.cli_mode = cli_mode
|
||||
self.vars = {
|
||||
"title": self.git_provider.pr.title,
|
||||
"branch": self.git_provider.get_pr_branch(),
|
||||
"description": self.git_provider.get_pr_description(),
|
||||
"language": self.main_language,
|
||||
"diff": "", # empty diff for initial calculation
|
||||
"extra_instructions": get_settings().pr_add_docs.extra_instructions,
|
||||
"commit_messages_str": self.git_provider.get_commit_messages(),
|
||||
'docs_for_language': get_docs_for_language(self.main_language,
|
||||
get_settings().pr_add_docs.docs_style),
|
||||
}
|
||||
self.token_handler = TokenHandler(self.git_provider.pr,
|
||||
self.vars,
|
||||
get_settings().pr_add_docs_prompt.system,
|
||||
get_settings().pr_add_docs_prompt.user)
|
||||
|
||||
async def run(self):
|
||||
try:
|
||||
get_logger().info('Generating code Docs for PR...')
|
||||
if get_settings().config.publish_output:
|
||||
self.git_provider.publish_comment("Generating Documentation...", is_temporary=True)
|
||||
|
||||
get_logger().info('Preparing PR documentation...')
|
||||
await retry_with_fallback_models(self._prepare_prediction)
|
||||
data = self._prepare_pr_code_docs()
|
||||
if (not data) or (not 'Code Documentation' in data):
|
||||
get_logger().info('No code documentation found for PR.')
|
||||
return
|
||||
|
||||
if get_settings().config.publish_output:
|
||||
get_logger().info('Pushing PR documentation...')
|
||||
self.git_provider.remove_initial_comment()
|
||||
get_logger().info('Pushing inline code documentation...')
|
||||
self.push_inline_docs(data)
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed to generate code documentation for PR, error: {e}")
|
||||
|
||||
async def _prepare_prediction(self, model: str):
|
||||
get_logger().info('Getting PR diff...')
|
||||
|
||||
# Disable adding docs to scripts and other non-relevant text files
|
||||
from pr_agent.algo.language_handler import bad_extensions
|
||||
bad_extensions += get_settings().docs_blacklist_extensions.docs_blacklist
|
||||
|
||||
self.patches_diff = get_pr_diff(self.git_provider,
|
||||
self.token_handler,
|
||||
model,
|
||||
add_line_numbers_to_hunks=True,
|
||||
disable_extra_lines=False)
|
||||
|
||||
get_logger().info('Getting AI prediction...')
|
||||
self.prediction = await self._get_prediction(model)
|
||||
|
||||
async def _get_prediction(self, model: str):
|
||||
variables = copy.deepcopy(self.vars)
|
||||
variables["diff"] = self.patches_diff # update diff
|
||||
environment = Environment(undefined=StrictUndefined)
|
||||
system_prompt = environment.from_string(get_settings().pr_add_docs_prompt.system).render(variables)
|
||||
user_prompt = environment.from_string(get_settings().pr_add_docs_prompt.user).render(variables)
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"\nSystem prompt:\n{system_prompt}")
|
||||
get_logger().info(f"\nUser prompt:\n{user_prompt}")
|
||||
response, finish_reason = await self.ai_handler.chat_completion(model=model, temperature=0.2,
|
||||
system=system_prompt, user=user_prompt)
|
||||
|
||||
return response
|
||||
|
||||
def _prepare_pr_code_docs(self) -> Dict:
|
||||
docs = self.prediction.strip()
|
||||
data = load_yaml(docs)
|
||||
if isinstance(data, list):
|
||||
data = {'Code Documentation': data}
|
||||
return data
|
||||
|
||||
def push_inline_docs(self, data):
|
||||
docs = []
|
||||
|
||||
if not data['Code Documentation']:
|
||||
return self.git_provider.publish_comment('No code documentation found to improve this PR.')
|
||||
|
||||
for d in data['Code Documentation']:
|
||||
try:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"add_docs: {d}")
|
||||
relevant_file = d['relevant file'].strip()
|
||||
relevant_line = int(d['relevant line']) # absolute position
|
||||
documentation = d['documentation']
|
||||
doc_placement = d['doc placement'].strip()
|
||||
if documentation:
|
||||
new_code_snippet = self.dedent_code(relevant_file, relevant_line, documentation, doc_placement,
|
||||
add_original_line=True)
|
||||
|
||||
body = f"**Suggestion:** Proposed documentation\n```suggestion\n" + new_code_snippet + "\n```"
|
||||
docs.append({'body': body, 'relevant_file': relevant_file,
|
||||
'relevant_lines_start': relevant_line,
|
||||
'relevant_lines_end': relevant_line})
|
||||
except Exception:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"Could not parse code docs: {d}")
|
||||
|
||||
is_successful = self.git_provider.publish_code_suggestions(docs)
|
||||
if not is_successful:
|
||||
get_logger().info("Failed to publish code docs, trying to publish each docs separately")
|
||||
for doc_suggestion in docs:
|
||||
self.git_provider.publish_code_suggestions([doc_suggestion])
|
||||
|
||||
def dedent_code(self, relevant_file, relevant_lines_start, new_code_snippet, doc_placement='after',
|
||||
add_original_line=False):
|
||||
try: # dedent code snippet
|
||||
self.diff_files = self.git_provider.diff_files if self.git_provider.diff_files \
|
||||
else self.git_provider.get_diff_files()
|
||||
original_initial_line = None
|
||||
for file in self.diff_files:
|
||||
if file.filename.strip() == relevant_file:
|
||||
original_initial_line = file.head_file.splitlines()[relevant_lines_start - 1]
|
||||
break
|
||||
if original_initial_line:
|
||||
if doc_placement == 'after':
|
||||
line = file.head_file.splitlines()[relevant_lines_start]
|
||||
else:
|
||||
line = original_initial_line
|
||||
suggested_initial_line = new_code_snippet.splitlines()[0]
|
||||
original_initial_spaces = len(line) - len(line.lstrip())
|
||||
suggested_initial_spaces = len(suggested_initial_line) - len(suggested_initial_line.lstrip())
|
||||
delta_spaces = original_initial_spaces - suggested_initial_spaces
|
||||
if delta_spaces > 0:
|
||||
new_code_snippet = textwrap.indent(new_code_snippet, delta_spaces * " ").rstrip('\n')
|
||||
if add_original_line:
|
||||
if doc_placement == 'after':
|
||||
new_code_snippet = original_initial_line + "\n" + new_code_snippet
|
||||
else:
|
||||
new_code_snippet = new_code_snippet.rstrip() + "\n" + original_initial_line
|
||||
except Exception as e:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"Could not dedent code snippet for file {relevant_file}, error: {e}")
|
||||
|
||||
return new_code_snippet
|
||||
|
||||
|
||||
def get_docs_for_language(language, style):
|
||||
language = language.lower()
|
||||
if language == 'java':
|
||||
return "Javadocs"
|
||||
elif language in ['python', 'lisp', 'clojure']:
|
||||
return f"Docstring ({style})"
|
||||
elif language in ['javascript', 'typescript']:
|
||||
return "JSdocs"
|
||||
elif language == 'c++':
|
||||
return "Doxygen"
|
||||
else:
|
||||
return "Docs"
|
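A quick illustration of the helper above (the import path is an assumption; in the repo the helper lives next to the add-docs tool):

```python
# Assumes get_docs_for_language is importable from the add-docs tool module.
from pr_agent.tools.pr_add_docs import get_docs_for_language

print(get_docs_for_language("Python", "google"))       # "Docstring (google)"
print(get_docs_for_language("Java", "any"))            # "Javadocs"
print(get_docs_for_language("TypeScript", "default"))  # "JSdocs"
```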
@ -1,25 +1,20 @@
|
||||
import copy
|
||||
import logging
|
||||
import textwrap
|
||||
from functools import partial
|
||||
from typing import Dict, List
|
||||
from typing import List, Dict
|
||||
from jinja2 import Environment, StrictUndefined
|
||||
|
||||
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
|
||||
from pr_agent.algo.ai_handlers.litellm_ai_handler import LiteLLMAIHandler
|
||||
from pr_agent.algo.pr_processing import get_pr_diff, get_pr_multi_diffs, retry_with_fallback_models
|
||||
from pr_agent.algo.ai_handler import AiHandler
|
||||
from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models, get_pr_multi_diffs
|
||||
from pr_agent.algo.token_handler import TokenHandler
|
||||
from pr_agent.algo.utils import load_yaml, replace_code_tags
|
||||
from pr_agent.algo.utils import load_yaml
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers import get_git_provider
|
||||
from pr_agent.git_providers import BitbucketProvider, get_git_provider
|
||||
from pr_agent.git_providers.git_provider import get_main_pr_language
|
||||
from pr_agent.log import get_logger
|
||||
from pr_agent.servers.help import HelpMessage
|
||||
from pr_agent.tools.pr_description import insert_br_after_x_chars
|
||||
import difflib
|
||||
|
||||
|
||||
class PRCodeSuggestions:
|
||||
def __init__(self, pr_url: str, cli_mode=False, args: list = None,
|
||||
ai_handler: partial[BaseAiHandler,] = LiteLLMAIHandler):
|
||||
def __init__(self, pr_url: str, cli_mode=False, args: list = None):
|
||||
|
||||
self.git_provider = get_git_provider()(pr_url)
|
||||
self.main_language = get_main_pr_language(
|
||||
@ -27,16 +22,13 @@ class PRCodeSuggestions:
|
||||
)
|
||||
|
||||
# extended mode
|
||||
try:
|
||||
self.is_extended = self._get_is_extended(args or [])
|
||||
except:
|
||||
self.is_extended = False
|
||||
self.is_extended = any(["extended" in arg for arg in args])
|
||||
if self.is_extended:
|
||||
num_code_suggestions = get_settings().pr_code_suggestions.num_code_suggestions_per_chunk
|
||||
else:
|
||||
num_code_suggestions = get_settings().pr_code_suggestions.num_code_suggestions
|
||||
|
||||
self.ai_handler = ai_handler()
|
||||
self.ai_handler = AiHandler()
|
||||
self.patches_diff = None
|
||||
self.prediction = None
|
||||
self.cli_mode = cli_mode
|
||||
@ -47,7 +39,6 @@ class PRCodeSuggestions:
|
||||
"language": self.main_language,
|
||||
"diff": "", # empty diff for initial calculation
|
||||
"num_code_suggestions": num_code_suggestions,
|
||||
"summarize_mode": get_settings().pr_code_suggestions.summarize,
|
||||
"extra_instructions": get_settings().pr_code_suggestions.extra_instructions,
|
||||
"commit_messages_str": self.git_provider.get_commit_messages(),
|
||||
}
|
||||
@ -57,59 +48,37 @@ class PRCodeSuggestions:
|
||||
get_settings().pr_code_suggestions_prompt.user)
|
||||
|
||||
async def run(self):
|
||||
try:
|
||||
get_logger().info('Generating code suggestions for PR...')
|
||||
logging.info('Generating code suggestions for PR...')
|
||||
if get_settings().config.publish_output:
|
||||
self.git_provider.publish_comment("Preparing suggestions...", is_temporary=True)
|
||||
self.git_provider.publish_comment("Preparing review...", is_temporary=True)
|
||||
|
||||
get_logger().info('Preparing PR code suggestions...')
|
||||
logging.info('Preparing PR review...')
|
||||
if not self.is_extended:
|
||||
await retry_with_fallback_models(self._prepare_prediction)
|
||||
data = self._prepare_pr_code_suggestions()
|
||||
else:
|
||||
data = await retry_with_fallback_models(self._prepare_prediction_extended)
|
||||
|
||||
|
||||
if (not data) or (not 'code_suggestions' in data):
|
||||
get_logger().info('No code suggestions found for PR.')
|
||||
return
|
||||
|
||||
if (not self.is_extended and get_settings().pr_code_suggestions.rank_suggestions) or \
|
||||
(self.is_extended and get_settings().pr_code_suggestions.rank_extended_suggestions):
|
||||
get_logger().info('Ranking Suggestions...')
|
||||
data['code_suggestions'] = await self.rank_suggestions(data['code_suggestions'])
|
||||
logging.info('Ranking Suggestions...')
|
||||
data['Code suggestions'] = await self.rank_suggestions(data['Code suggestions'])
|
||||
|
||||
if get_settings().config.publish_output:
|
||||
get_logger().info('Pushing PR code suggestions...')
|
||||
logging.info('Pushing PR review...')
|
||||
self.git_provider.remove_initial_comment()
|
||||
if get_settings().pr_code_suggestions.summarize and self.git_provider.is_supported("gfm_markdown"):
|
||||
get_logger().info('Pushing summarize code suggestions...')
|
||||
|
||||
# generate summarized suggestions
|
||||
pr_body = self.generate_summarized_suggestions(data)
|
||||
|
||||
# add usage guide
|
||||
if get_settings().pr_code_suggestions.enable_help_text:
|
||||
pr_body += "<hr>\n\n<details> <summary><strong>✨ Usage guide:</strong></summary><hr> \n\n"
|
||||
pr_body += HelpMessage.get_improve_usage_guide()
|
||||
pr_body += "\n</details>\n"
|
||||
|
||||
self.git_provider.publish_comment(pr_body)
|
||||
else:
|
||||
get_logger().info('Pushing inline code suggestions...')
|
||||
logging.info('Pushing inline code suggestions...')
|
||||
self.push_inline_code_suggestions(data)
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed to generate code suggestions for PR, error: {e}")
|
||||
|
||||
async def _prepare_prediction(self, model: str):
|
||||
get_logger().info('Getting PR diff...')
|
||||
logging.info('Getting PR diff...')
|
||||
self.patches_diff = get_pr_diff(self.git_provider,
|
||||
self.token_handler,
|
||||
model,
|
||||
add_line_numbers_to_hunks=True,
|
||||
disable_extra_lines=True)
|
||||
|
||||
get_logger().info('Getting AI prediction...')
|
||||
logging.info('Getting AI prediction...')
|
||||
self.prediction = await self._get_prediction(model)
|
||||
|
||||
async def _get_prediction(self, model: str):
|
||||
@ -119,67 +88,50 @@ class PRCodeSuggestions:
|
||||
system_prompt = environment.from_string(get_settings().pr_code_suggestions_prompt.system).render(variables)
|
||||
user_prompt = environment.from_string(get_settings().pr_code_suggestions_prompt.user).render(variables)
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"\nSystem prompt:\n{system_prompt}")
|
||||
get_logger().info(f"\nUser prompt:\n{user_prompt}")
|
||||
logging.info(f"\nSystem prompt:\n{system_prompt}")
|
||||
logging.info(f"\nUser prompt:\n{user_prompt}")
|
||||
response, finish_reason = await self.ai_handler.chat_completion(model=model, temperature=0.2,
|
||||
system=system_prompt, user=user_prompt)
|
||||
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"\nAI response:\n{response}")
|
||||
|
||||
return response
|
||||
|
||||
def _prepare_pr_code_suggestions(self) -> Dict:
|
||||
review = self.prediction.strip()
|
||||
data = load_yaml(review,
|
||||
keys_fix_yaml=["relevant_file", "suggestion_content", "existing_code", "improved_code"])
|
||||
data = load_yaml(review)
|
||||
if isinstance(data, list):
|
||||
data = {'code_suggestions': data}
|
||||
|
||||
# remove invalid suggestions
|
||||
suggestion_list = []
|
||||
for i, suggestion in enumerate(data['code_suggestions']):
|
||||
if suggestion['existing_code'] != suggestion['improved_code']:
|
||||
suggestion_list.append(suggestion)
|
||||
else:
|
||||
get_logger().debug(
|
||||
f"Skipping suggestion {i + 1}, because existing code is equal to improved code {suggestion['existing_code']}")
|
||||
data['code_suggestions'] = suggestion_list
|
||||
|
||||
data = {'Code suggestions': data}
|
||||
return data
|
||||
|
||||
def push_inline_code_suggestions(self, data):
|
||||
code_suggestions = []
|
||||
|
||||
if not data['code_suggestions']:
|
||||
get_logger().info('No suggestions found to improve this PR.')
|
||||
if not data['Code suggestions']:
|
||||
return self.git_provider.publish_comment('No suggestions found to improve this PR.')
|
||||
|
||||
for d in data['code_suggestions']:
|
||||
for d in data['Code suggestions']:
|
||||
try:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"suggestion: {d}")
|
||||
relevant_file = d['relevant_file'].strip()
|
||||
relevant_lines_start = int(d['relevant_lines_start']) # absolute position
|
||||
relevant_lines_end = int(d['relevant_lines_end'])
|
||||
content = d['suggestion_content'].rstrip()
|
||||
new_code_snippet = d['improved_code'].rstrip()
|
||||
label = d['label'].strip()
|
||||
logging.info(f"suggestion: {d}")
|
||||
relevant_file = d['relevant file'].strip()
|
||||
relevant_lines_start = int(d['relevant lines start']) # absolute position
|
||||
relevant_lines_end = int(d['relevant lines end'])
|
||||
content = d['suggestion content']
|
||||
new_code_snippet = d['improved code']
|
||||
|
||||
if new_code_snippet:
|
||||
new_code_snippet = self.dedent_code(relevant_file, relevant_lines_start, new_code_snippet)
|
||||
|
||||
body = f"**Suggestion:** {content} [{label}]\n```suggestion\n" + new_code_snippet + "\n```"
|
||||
body = f"**Suggestion:** {content}\n```suggestion\n" + new_code_snippet + "\n```"
|
||||
code_suggestions.append({'body': body, 'relevant_file': relevant_file,
|
||||
'relevant_lines_start': relevant_lines_start,
|
||||
'relevant_lines_end': relevant_lines_end})
|
||||
except Exception:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"Could not parse suggestion: {d}")
|
||||
logging.info(f"Could not parse suggestion: {d}")
|
||||
|
||||
is_successful = self.git_provider.publish_code_suggestions(code_suggestions)
|
||||
if not is_successful:
|
||||
get_logger().info("Failed to publish code suggestions, trying to publish each suggestion separately")
|
||||
logging.info("Failed to publish code suggestions, trying to publish each suggestion separately")
|
||||
for code_suggestion in code_suggestions:
|
||||
self.git_provider.publish_code_suggestions([code_suggestion])
|
||||
|
||||
@ -190,7 +142,6 @@ class PRCodeSuggestions:
|
||||
original_initial_line = None
|
||||
for file in self.diff_files:
|
||||
if file.filename.strip() == relevant_file:
|
||||
if file.head_file: # in bitbucket, head_file is empty. toDo: fix this
|
||||
original_initial_line = file.head_file.splitlines()[relevant_lines_start - 1]
|
||||
break
|
||||
if original_initial_line:
|
||||
@ -202,31 +153,21 @@ class PRCodeSuggestions:
|
||||
new_code_snippet = textwrap.indent(new_code_snippet, delta_spaces * " ").rstrip('\n')
|
||||
except Exception as e:
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"Could not dedent code snippet for file {relevant_file}, error: {e}")
|
||||
logging.info(f"Could not dedent code snippet for file {relevant_file}, error: {e}")
|
||||
|
||||
return new_code_snippet
|
||||
|
||||
def _get_is_extended(self, args: list[str]) -> bool:
|
||||
"""Check if extended mode should be enabled by the `--extended` flag or automatically according to the configuration"""
|
||||
if any(["extended" in arg for arg in args]):
|
||||
get_logger().info("Extended mode is enabled by the `--extended` flag")
|
||||
return True
|
||||
if get_settings().pr_code_suggestions.auto_extended_mode:
|
||||
get_logger().info("Extended mode is enabled automatically based on the configuration toggle")
|
||||
return True
|
||||
return False
|
||||
|
||||
async def _prepare_prediction_extended(self, model: str) -> dict:
|
||||
get_logger().info('Getting PR diff...')
|
||||
logging.info('Getting PR diff...')
|
||||
patches_diff_list = get_pr_multi_diffs(self.git_provider, self.token_handler, model,
|
||||
max_calls=get_settings().pr_code_suggestions.max_number_of_calls)
|
||||
|
||||
get_logger().info('Getting multi AI predictions...')
|
||||
logging.info('Getting multi AI predictions...')
|
||||
prediction_list = []
|
||||
for i, patches_diff in enumerate(patches_diff_list):
|
||||
get_logger().info(f"Processing chunk {i + 1} of {len(patches_diff_list)}")
|
||||
logging.info(f"Processing chunk {i + 1} of {len(patches_diff_list)}")
|
||||
self.patches_diff = patches_diff
|
||||
prediction = await self._get_prediction(model) # toDo: parallelize
|
||||
prediction = await self._get_prediction(model)
|
||||
prediction_list.append(prediction)
|
||||
self.prediction_list = prediction_list
|
||||
|
||||
@ -234,8 +175,8 @@ class PRCodeSuggestions:
|
||||
for prediction in prediction_list:
|
||||
self.prediction = prediction
|
||||
data_per_chunk = self._prepare_pr_code_suggestions()
|
||||
if "code_suggestions" in data:
|
||||
data["code_suggestions"].extend(data_per_chunk["code_suggestions"])
|
||||
if "Code suggestions" in data:
|
||||
data["Code suggestions"].extend(data_per_chunk["Code suggestions"])
|
||||
else:
|
||||
data.update(data_per_chunk)
|
||||
self.data = data
|
||||
@ -253,14 +194,12 @@ class PRCodeSuggestions:
|
||||
"""
|
||||
|
||||
suggestion_list = []
|
||||
if not data:
|
||||
return suggestion_list
|
||||
for suggestion in data:
|
||||
# remove invalid suggestions
|
||||
for i, suggestion in enumerate(data):
|
||||
if suggestion['existing code'] != suggestion['improved code']:
|
||||
suggestion_list.append(suggestion)
|
||||
data_sorted = [[]] * len(suggestion_list)
|
||||
|
||||
if len(suggestion_list) == 1:
|
||||
return suggestion_list
|
||||
data_sorted = [[]] * len(suggestion_list)
|
||||
|
||||
try:
|
||||
suggestion_str = ""
|
||||
@ -274,8 +213,8 @@ class PRCodeSuggestions:
|
||||
variables)
|
||||
user_prompt = environment.from_string(get_settings().pr_sort_code_suggestions_prompt.user).render(variables)
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"\nSystem prompt:\n{system_prompt}")
|
||||
get_logger().info(f"\nUser prompt:\n{user_prompt}")
|
||||
logging.info(f"\nSystem prompt:\n{system_prompt}")
|
||||
logging.info(f"\nUser prompt:\n{user_prompt}")
|
||||
response, finish_reason = await self.ai_handler.chat_completion(model=model, system=system_prompt,
|
||||
user=user_prompt)
|
||||
|
||||
@ -286,107 +225,13 @@ class PRCodeSuggestions:
|
||||
data_sorted[importance_order - 1] = suggestion_list[suggestion_number - 1]
|
||||
|
||||
if get_settings().pr_code_suggestions.final_clip_factor != 1:
|
||||
max_len = max(
|
||||
len(data_sorted),
|
||||
get_settings().pr_code_suggestions.num_code_suggestions,
|
||||
get_settings().pr_code_suggestions.num_code_suggestions_per_chunk,
|
||||
)
|
||||
new_len = int(0.5 + max_len * get_settings().pr_code_suggestions.final_clip_factor)
|
||||
if new_len < len(data_sorted):
|
||||
new_len = int(0.5 + len(data_sorted) * get_settings().pr_code_suggestions.final_clip_factor)
|
||||
data_sorted = data_sorted[:new_len]
|
||||
except Exception as e:
|
||||
if get_settings().config.verbosity_level >= 1:
|
||||
get_logger().info(f"Could not sort suggestions, error: {e}")
|
||||
logging.info(f"Could not sort suggestions, error: {e}")
|
||||
data_sorted = suggestion_list
|
||||
|
||||
return data_sorted
|
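A worked example of the `final_clip_factor` clipping above (this sketches the variant that takes the max of the list length and the configured suggestion counts; the configuration values are hard-coded here instead of read from `get_settings()`):

```python
# 10 ranked suggestions, clipped by a made-up factor of 0.5.
data_sorted = [f"suggestion {i}" for i in range(1, 11)]

final_clip_factor = 0.5
num_code_suggestions = 4
num_code_suggestions_per_chunk = 8

max_len = max(len(data_sorted), num_code_suggestions, num_code_suggestions_per_chunk)  # 10
new_len = int(0.5 + max_len * final_clip_factor)  # rounds 10 * 0.5 to 5
if new_len < len(data_sorted):
    data_sorted = data_sorted[:new_len]

print(len(data_sorted))  # 5
```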
||||
|
||||
def generate_summarized_suggestions(self, data: Dict) -> str:
|
||||
try:
|
||||
pr_body = "## PR Code Suggestions\n\n"
|
||||
|
||||
if len(data.get('code_suggestions', [])) == 0:
|
||||
pr_body += "No suggestions found to improve this PR."
|
||||
return pr_body
|
||||
|
||||
language_extension_map_org = get_settings().language_extension_map_org
|
||||
extension_to_language = {}
|
||||
for language, extensions in language_extension_map_org.items():
|
||||
for ext in extensions:
|
||||
extension_to_language[ext] = language
|
||||
|
||||
pr_body += "<table>"
|
||||
header = f"Suggestions"
|
||||
delta = 77
|
||||
header += " " * delta
|
||||
pr_body += f"""<thead><tr><th></th><th>{header}</th></tr></thead>"""
|
||||
pr_body += """<tbody>"""
|
||||
suggestions_labels = dict()
|
||||
# add all suggestions related to each label
|
||||
for suggestion in data['code_suggestions']:
|
||||
label = suggestion['label'].strip().strip("'").strip('"')
|
||||
if label not in suggestions_labels:
|
||||
suggestions_labels[label] = []
|
||||
suggestions_labels[label].append(suggestion)
|
||||
|
||||
for label, suggestions in suggestions_labels.items():
|
||||
pr_body += f"""<tr><td><strong>{label}</strong></td>"""
|
||||
pr_body += f"""<td>"""
|
||||
# pr_body += f"""<details><summary>{len(suggestions)} suggestions</summary>"""
|
||||
pr_body += f"""<table>"""
|
||||
for suggestion in suggestions:
|
||||
|
||||
relevant_file = suggestion['relevant_file'].strip()
|
||||
relevant_lines_start = int(suggestion['relevant_lines_start'])
|
||||
relevant_lines_end = int(suggestion['relevant_lines_end'])
|
||||
range_str = ""
|
||||
if relevant_lines_start == relevant_lines_end:
|
||||
range_str = f"[{relevant_lines_start}]"
|
||||
else:
|
||||
range_str = f"[{relevant_lines_start}-{relevant_lines_end}]"
|
||||
code_snippet_link = self.git_provider.get_line_link(relevant_file, relevant_lines_start,
|
||||
relevant_lines_end)
|
||||
# add html table for each suggestion
|
||||
|
||||
suggestion_content = suggestion['suggestion_content'].rstrip().rstrip()
|
||||
|
||||
suggestion_content = insert_br_after_x_chars(suggestion_content, 90)
|
||||
# pr_body += f"<tr><td><details><summary>{suggestion_content}</summary>"
|
||||
existing_code = suggestion['existing_code'].rstrip()+"\n"
|
||||
improved_code = suggestion['improved_code'].rstrip()+"\n"
|
||||
|
||||
diff = difflib.unified_diff(existing_code.split('\n'),
|
||||
improved_code.split('\n'), n=999)
|
||||
patch_orig = "\n".join(diff)
|
||||
patch = "\n".join(patch_orig.splitlines()[5:]).strip('\n')
|
||||
|
||||
example_code = ""
|
||||
example_code += f"```diff\n{patch}\n```\n"
|
||||
|
||||
pr_body += f"""<tr><td>"""
|
||||
suggestion_summary = suggestion['one_sentence_summary'].strip()
|
||||
if '`' in suggestion_summary:
|
||||
suggestion_summary = replace_code_tags(suggestion_summary)
|
||||
suggestion_summary = suggestion_summary + max((77-len(suggestion_summary)), 0)*" "
|
||||
pr_body += f"""\n\n<details><summary>{suggestion_summary}</summary>\n\n___\n\n"""
|
||||
|
||||
pr_body += f"""
|
||||
|
||||
|
||||
**{suggestion_content}**
|
||||
|
||||
[{relevant_file} {range_str}]({code_snippet_link})
|
||||
|
||||
{example_code}
|
||||
"""
|
||||
pr_body += f"</details>"
|
||||
pr_body += f"</td></tr>"
|
||||
|
||||
pr_body += """</table>"""
|
||||
# pr_body += "</details>"
|
||||
pr_body += """</td></tr>"""
|
||||
pr_body += """</tr></tbody></table>"""
|
||||
return pr_body
|
||||
except Exception as e:
|
||||
get_logger().info(f"Failed to publish summarized code suggestions, error: {e}")
|
||||
return ""
|
||||
|
@ -1,13 +1,14 @@
|
||||
import logging
|
||||
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers import get_git_provider
|
||||
from pr_agent.log import get_logger
|
||||
|
||||
|
||||
class PRConfig:
|
||||
"""
|
||||
The PRConfig class is responsible for listing all configuration options available for the user.
|
||||
"""
|
||||
def __init__(self, pr_url: str, args=None, ai_handler=None):
|
||||
def __init__(self, pr_url: str, args=None):
|
||||
"""
|
||||
Initialize the PRConfig object with the necessary attributes and objects to comment on a pull request.
|
||||
|
||||
@ -18,11 +19,11 @@ class PRConfig:
|
||||
self.git_provider = get_git_provider()(pr_url)
|
||||
|
||||
async def run(self):
|
||||
get_logger().info('Getting configuration settings...')
|
||||
get_logger().info('Preparing configs...')
|
||||
logging.info('Getting configuration settings...')
|
||||
logging.info('Preparing configs...')
|
||||
pr_comment = self._prepare_pr_configs()
|
||||
if get_settings().config.publish_output:
|
||||
get_logger().info('Pushing configs...')
|
||||
logging.info('Pushing configs...')
|
||||
self.git_provider.publish_comment(pr_comment)
|
||||
self.git_provider.remove_initial_comment()
|
||||
return ""
|
||||
@ -43,5 +44,5 @@ class PRConfig:
|
||||
comment_str += f"\n{header.lower()}.{key.lower()} = {repr(value) if isinstance(value, str) else value}"
|
||||
comment_str += " "
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"comment_str:\n{comment_str}")
|
||||
logging.info(f"comment_str:\n{comment_str}")
|
||||
return comment_str
|
||||
|
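For reference, the comment body assembled by `_prepare_pr_configs` above is just `section.key = value` lines; a sketch with made-up settings:

```python
# Made-up settings; mirrors the f-string used in _prepare_pr_configs above.
settings = {
    "config": {"verbosity_level": 2, "model": "gpt-4"},
    "pr_reviewer": {"num_code_suggestions": 4},
}

comment_str = ""
for header, section in settings.items():
    for key, value in section.items():
        comment_str += f"\n{header.lower()}.{key.lower()} = {repr(value) if isinstance(value, str) else value}"
        comment_str += "  "
print(comment_str)
# config.verbosity_level = 2
# config.model = 'gpt-4'
# pr_reviewer.num_code_suggestions = 4
```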
@ -1,25 +1,21 @@
|
||||
import copy
|
||||
import re
|
||||
from functools import partial
|
||||
import json
|
||||
import logging
|
||||
from typing import List, Tuple
|
||||
|
||||
from jinja2 import Environment, StrictUndefined
|
||||
|
||||
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
|
||||
from pr_agent.algo.ai_handlers.litellm_ai_handler import LiteLLMAIHandler
|
||||
from pr_agent.algo.ai_handler import AiHandler
|
||||
from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models
|
||||
from pr_agent.algo.token_handler import TokenHandler
|
||||
from pr_agent.algo.utils import load_yaml, set_custom_labels, get_user_labels
|
||||
from pr_agent.algo.utils import load_yaml
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers import get_git_provider
|
||||
from pr_agent.git_providers.git_provider import get_main_pr_language
|
||||
from pr_agent.log import get_logger
|
||||
from pr_agent.servers.help import HelpMessage
|
||||
|
||||
|
||||
class PRDescription:
|
||||
def __init__(self, pr_url: str, args: list = None,
|
||||
ai_handler: partial[BaseAiHandler,] = LiteLLMAIHandler):
|
||||
def __init__(self, pr_url: str, args: list = None):
|
||||
"""
|
||||
Initialize the PRDescription object with the necessary attributes and objects for generating a PR description
|
||||
using an AI model.
|
||||
@ -32,15 +28,9 @@ class PRDescription:
|
||||
self.main_pr_language = get_main_pr_language(
|
||||
self.git_provider.get_languages(), self.git_provider.get_files()
|
||||
)
|
||||
self.pr_id = self.git_provider.get_pr_id()
|
||||
|
||||
if get_settings().pr_description.enable_semantic_files_types and not self.git_provider.is_supported(
|
||||
"gfm_markdown"):
|
||||
get_logger().debug(f"Disabling semantic files types for {self.pr_id}")
|
||||
get_settings().pr_description.enable_semantic_files_types = False
|
||||
|
||||
# Initialize the AI handler
|
||||
self.ai_handler = ai_handler()
|
||||
self.ai_handler = AiHandler()
|
||||
|
||||
# Initialize the variables dictionary
|
||||
self.vars = {
|
||||
@ -50,10 +40,7 @@ class PRDescription:
|
||||
"language": self.main_pr_language,
|
||||
"diff": "", # empty diff for initial calculation
|
||||
"extra_instructions": get_settings().pr_description.extra_instructions,
|
||||
"commit_messages_str": self.git_provider.get_commit_messages(),
|
||||
"enable_custom_labels": get_settings().config.enable_custom_labels,
|
||||
"custom_labels_class": "", # will be filled if necessary in 'set_custom_labels' function
|
||||
"enable_semantic_files_types": get_settings().pr_description.enable_semantic_files_types,
|
||||
"commit_messages_str": self.git_provider.get_commit_messages()
|
||||
}
|
||||
|
||||
self.user_description = self.git_provider.get_user_description()
|
||||
@ -74,84 +61,48 @@ class PRDescription:
|
||||
"""
|
||||
Generates a PR description using an AI model and publishes it to the PR.
|
||||
"""
|
||||
|
||||
try:
|
||||
get_logger().info(f"Generating a PR description {self.pr_id}")
|
||||
logging.info('Generating a PR description...')
|
||||
if get_settings().config.publish_output:
|
||||
self.git_provider.publish_comment("Preparing PR description...", is_temporary=True)
|
||||
self.git_provider.publish_comment("Preparing pr description...", is_temporary=True)
|
||||
|
||||
await retry_with_fallback_models(self._prepare_prediction)
|
||||
|
||||
get_logger().info(f"Preparing answer {self.pr_id}")
|
||||
if self.prediction:
|
||||
self._prepare_data()
|
||||
else:
|
||||
self.git_provider.remove_initial_comment()
|
||||
return None
|
||||
|
||||
if get_settings().pr_description.enable_semantic_files_types:
|
||||
self._prepare_file_labels()
|
||||
|
||||
pr_labels = []
|
||||
if get_settings().pr_description.publish_labels:
|
||||
pr_labels = self._prepare_labels()
|
||||
|
||||
if get_settings().pr_description.use_description_markers:
|
||||
pr_title, pr_body = self._prepare_pr_answer_with_markers()
|
||||
else:
|
||||
pr_title, pr_body, = self._prepare_pr_answer()
|
||||
|
||||
# Add help text if gfm_markdown is supported
|
||||
if self.git_provider.is_supported("gfm_markdown") and get_settings().pr_description.enable_help_text:
|
||||
pr_body += "<hr>\n\n<details> <summary><strong>✨ Usage guide:</strong></summary><hr> \n\n"
|
||||
pr_body += HelpMessage.get_describe_usage_guide()
|
||||
pr_body += "\n</details>\n"
|
||||
|
||||
# final markdown description
|
||||
full_markdown_description = f"## Title\n\n{pr_title}\n\n___\n{pr_body}"
|
||||
get_logger().debug(f"full_markdown_description:\n{full_markdown_description}")
|
||||
logging.info('Preparing answer...')
|
||||
pr_title, pr_body, pr_types, markdown_text = self._prepare_pr_answer()
|
||||
|
||||
if get_settings().config.publish_output:
|
||||
get_logger().info(f"Pushing answer {self.pr_id}")
|
||||
|
||||
# publish labels
|
||||
if get_settings().pr_description.publish_labels and self.git_provider.is_supported("get_labels"):
|
||||
current_labels = self.git_provider.get_pr_labels()
|
||||
user_labels = get_user_labels(current_labels)
|
||||
self.git_provider.publish_labels(pr_labels + user_labels)
|
||||
|
||||
# publish description
|
||||
logging.info('Pushing answer...')
|
||||
if get_settings().pr_description.publish_description_as_comment:
|
||||
get_logger().info(f"Publishing answer as comment")
|
||||
self.git_provider.publish_comment(full_markdown_description)
|
||||
self.git_provider.publish_comment(markdown_text)
|
||||
else:
|
||||
self.git_provider.publish_description(pr_title, pr_body)
|
||||
|
||||
# publish final update message
|
||||
if (get_settings().pr_description.final_update_message and
|
||||
hasattr(self.git_provider, 'pr_url') and self.git_provider.pr_url):
|
||||
latest_commit_url = self.git_provider.get_latest_commit_url()
|
||||
if latest_commit_url:
|
||||
self.git_provider.publish_comment(
|
||||
f"**[PR Description]({self.git_provider.pr_url})** updated to latest commit ({latest_commit_url})")
|
||||
if self.git_provider.is_supported("get_labels"):
|
||||
current_labels = self.git_provider.get_labels()
|
||||
if current_labels is None:
|
||||
current_labels = []
|
||||
self.git_provider.publish_labels(pr_types + current_labels)
|
||||
self.git_provider.remove_initial_comment()
|
||||
except Exception as e:
|
||||
get_logger().error(f"Error generating PR description {self.pr_id}: {e}")
|
||||
|
||||
return ""
|
||||
|
||||
async def _prepare_prediction(self, model: str) -> None:
|
||||
if get_settings().pr_description.use_description_markers and 'pr_agent:' not in self.user_description:
|
||||
return None
|
||||
"""
|
||||
Prepare the AI prediction for the PR description based on the provided model.
|
||||
|
||||
get_logger().info(f"Getting PR diff {self.pr_id}")
|
||||
Args:
|
||||
model (str): The name of the model to be used for generating the prediction.
|
||||
|
||||
Returns:
|
||||
None
|
||||
|
||||
Raises:
|
||||
Any exceptions raised by the 'get_pr_diff' and '_get_prediction' functions.
|
||||
|
||||
"""
|
||||
logging.info('Getting PR diff...')
|
||||
self.patches_diff = get_pr_diff(self.git_provider, self.token_handler, model)
|
||||
if self.patches_diff:
|
||||
get_logger().info(f"Getting AI prediction {self.pr_id}")
|
||||
logging.info('Getting AI prediction...')
|
||||
self.prediction = await self._get_prediction(model)
|
||||
else:
|
||||
get_logger().error(f"Error getting PR diff {self.pr_id}")
|
||||
self.prediction = None
|
||||
|
||||
async def _get_prediction(self, model: str) -> str:
|
||||
"""
|
||||
@ -167,14 +118,12 @@ class PRDescription:
|
||||
variables["diff"] = self.patches_diff # update diff
|
||||
|
||||
environment = Environment(undefined=StrictUndefined)
|
||||
set_custom_labels(variables, self.git_provider)
|
||||
self.variables = variables
|
||||
system_prompt = environment.from_string(get_settings().pr_description_prompt.system).render(variables)
|
||||
user_prompt = environment.from_string(get_settings().pr_description_prompt.user).render(variables)
|
||||
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"\nSystem prompt:\n{system_prompt}")
|
||||
get_logger().info(f"\nUser prompt:\n{user_prompt}")
|
||||
logging.info(f"\nSystem prompt:\n{system_prompt}")
|
||||
logging.info(f"\nUser prompt:\n{user_prompt}")
|
||||
|
||||
response, finish_reason = await self.ai_handler.chat_completion(
|
||||
model=model,
|
||||
@ -183,114 +132,36 @@ class PRDescription:
|
||||
user=user_prompt
|
||||
)
|
||||
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"\nAI response:\n{response}")
|
||||
|
||||
return response
|
||||
|
||||
def _prepare_data(self):
|
||||
# Load the AI prediction data into a dictionary
|
||||
self.data = load_yaml(self.prediction.strip())
|
||||
|
||||
if get_settings().pr_description.add_original_user_description and self.user_description:
|
||||
self.data["User Description"] = self.user_description
|
||||
|
||||
# re-order keys
|
||||
if 'User Description' in self.data:
|
||||
self.data['User Description'] = self.data.pop('User Description')
|
||||
if 'title' in self.data:
|
||||
self.data['title'] = self.data.pop('title')
|
||||
if 'type' in self.data:
|
||||
self.data['type'] = self.data.pop('type')
|
||||
if 'labels' in self.data:
|
||||
self.data['labels'] = self.data.pop('labels')
|
||||
if 'description' in self.data:
|
||||
self.data['description'] = self.data.pop('description')
|
||||
if 'pr_files' in self.data:
|
||||
self.data['pr_files'] = self.data.pop('pr_files')
|
||||
|
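The pop-and-reassign block above relies on dict insertion order (guaranteed since Python 3.7): popping a key and assigning it back moves it to the end, so repeating this for each key in the desired order re-orders the whole mapping. A tiny sketch with made-up data:

```python
data = {"description": "Adds retries to the API client", "title": "Add retries", "type": "Enhancement"}
for key in ("title", "type", "description"):  # desired display order
    if key in data:
        data[key] = data.pop(key)  # move key to the end
print(list(data))  # ['title', 'type', 'description']
```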
||||
|
||||
|
||||
|
||||
def _prepare_labels(self) -> List[str]:
|
||||
pr_types = []
|
||||
|
||||
# If the 'PR Type' key is present in the dictionary, split its value by comma and assign it to 'pr_types'
|
||||
if 'labels' in self.data:
|
||||
if type(self.data['labels']) == list:
|
||||
pr_types = self.data['labels']
|
||||
elif type(self.data['labels']) == str:
|
||||
pr_types = self.data['labels'].split(',')
|
||||
elif 'type' in self.data:
|
||||
if type(self.data['type']) == list:
|
||||
pr_types = self.data['type']
|
||||
elif type(self.data['type']) == str:
|
||||
pr_types = self.data['type'].split(',')
|
||||
|
||||
# convert lowercase labels to original case
|
||||
try:
|
||||
if "labels_minimal_to_labels_dict" in self.variables:
|
||||
d: dict = self.variables["labels_minimal_to_labels_dict"]
|
||||
for i, label_i in enumerate(pr_types):
|
||||
if label_i in d:
|
||||
pr_types[i] = d[label_i]
|
||||
except Exception as e:
|
||||
get_logger().error(f"Error converting labels to original case {self.pr_id}: {e}")
|
||||
return pr_types
|
||||
|
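The lowercase-to-original-case conversion above uses a `labels_minimal_to_labels_dict` mapping (presumably populated by `set_custom_labels` at prompt-rendering time); its contents below are illustrative:

```python
# Illustrative mapping and labels; the real dict is filled elsewhere.
labels_minimal_to_labels_dict = {"bug fix": "Bug fix", "enhancement": "Enhancement"}
pr_types = ["bug fix", "tests"]

for i, label_i in enumerate(pr_types):
    if label_i in labels_minimal_to_labels_dict:
        pr_types[i] = labels_minimal_to_labels_dict[label_i]

print(pr_types)  # ['Bug fix', 'tests']
```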
||||
def _prepare_pr_answer_with_markers(self) -> Tuple[str, str]:
|
||||
get_logger().info(f"Using description marker replacements {self.pr_id}")
|
||||
title = self.vars["title"]
|
||||
body = self.user_description
|
||||
if get_settings().pr_description.include_generated_by_header:
|
||||
ai_header = f"### 🤖 Generated by PR Agent at {self.git_provider.last_commit_id.sha}\n\n"
|
||||
else:
|
||||
ai_header = ""
|
||||
|
||||
ai_type = self.data.get('type')
|
||||
if ai_type and not re.search(r'<!--\s*pr_agent:type\s*-->', body):
|
||||
pr_type = f"{ai_header}{ai_type}"
|
||||
body = body.replace('pr_agent:type', pr_type)
|
||||
|
||||
ai_summary = self.data.get('description')
|
||||
if ai_summary and not re.search(r'<!--\s*pr_agent:summary\s*-->', body):
|
||||
summary = f"{ai_header}{ai_summary}"
|
||||
body = body.replace('pr_agent:summary', summary)
|
||||
|
||||
ai_walkthrough = self.data.get('pr_files')
|
||||
if ai_walkthrough and not re.search(r'<!--\s*pr_agent:walkthrough\s*-->', body):
|
||||
try:
|
||||
walkthrough_gfm = ""
|
||||
walkthrough_gfm = self.process_pr_files_prediction(walkthrough_gfm, self.file_label_dict)
|
||||
body = body.replace('pr_agent:walkthrough', walkthrough_gfm)
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failing to process walkthrough {self.pr_id}: {e}")
|
||||
body = body.replace('pr_agent:walkthrough', "")
|
||||
|
||||
return title, body
|
||||
|
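To make the marker replacement above concrete, here is a made-up user description using the `pr_agent:*` placeholders (only the plain-string replacement is sketched; the regex guards for `<!-- pr_agent:... -->` comments are omitted):

```python
body = (
    "## Summary\n"
    "pr_agent:summary\n\n"
    "## Type\n"
    "pr_agent:type\n"
)
ai_type = "Bug fix"                                   # made-up AI output
ai_summary = "Fixes a crash when the diff is empty."  # made-up AI output

body = body.replace("pr_agent:type", ai_type)
body = body.replace("pr_agent:summary", ai_summary)
print(body)
```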
||||
def _prepare_pr_answer(self) -> Tuple[str, str]:
|
||||
def _prepare_pr_answer(self) -> Tuple[str, str, List[str], str]:
|
||||
"""
|
||||
Prepare the PR description based on the AI prediction data.
|
||||
|
||||
Returns:
|
||||
- title: a string containing the PR title.
|
||||
- pr_body: a string containing the PR description body in a markdown format.
|
||||
- pr_body: a string containing the PR body in a markdown format.
|
||||
- pr_types: a list of strings containing the PR types.
|
||||
- markdown_text: a string containing the AI prediction data in a markdown format. used for publishing a comment
|
||||
"""
|
||||
# Load the AI prediction data into a dictionary
|
||||
data = load_yaml(self.prediction.strip())
|
||||
|
||||
# Iterate over the dictionary items and append the key and value to 'markdown_text' in a markdown format
|
||||
markdown_text = ""
|
||||
# Don't display 'PR Labels'
|
||||
if 'labels' in self.data and self.git_provider.is_supported("get_labels"):
|
||||
self.data.pop('labels')
|
||||
if not get_settings().pr_description.enable_pr_type:
|
||||
self.data.pop('type')
|
||||
for key, value in self.data.items():
|
||||
markdown_text += f"## **{key}**\n\n"
|
||||
markdown_text += f"{value}\n\n"
|
||||
if get_settings().pr_description.add_original_user_description and self.user_description:
|
||||
data["User Description"] = self.user_description
|
||||
|
||||
# Initialization
|
||||
pr_types = []
|
||||
|
||||
# If the 'PR Type' key is present in the dictionary, split its value by comma and assign it to 'pr_types'
|
||||
if 'PR Type' in data:
|
||||
if type(data['PR Type']) == list:
|
||||
pr_types = data['PR Type']
|
||||
elif type(data['PR Type']) == str:
|
||||
pr_types = data['PR Type'].split(',')
|
||||
|
||||
# Remove the 'PR Title' key from the dictionary
|
||||
ai_title = self.data.pop('title', self.vars["title"])
|
||||
ai_title = data.pop('PR Title')
|
||||
if get_settings().pr_description.keep_original_user_title:
|
||||
# Assign the original PR title to the 'title' variable
|
||||
title = self.vars["title"]
|
||||
@ -301,175 +172,25 @@ class PRDescription:
|
||||
# Iterate over the remaining dictionary items and append the key and value to 'pr_body' in a markdown format,
|
||||
# except for the items containing the word 'walkthrough'
|
||||
pr_body = ""
|
||||
for idx, (key, value) in enumerate(self.data.items()):
|
||||
if key == 'pr_files':
|
||||
value = self.file_label_dict
|
||||
key_publish = "Changes walkthrough"
|
||||
else:
|
||||
key_publish = key.rstrip(':').replace("_", " ").capitalize()
|
||||
pr_body += f"## **{key_publish}**\n"
|
||||
for idx, (key, value) in enumerate(data.items()):
|
||||
pr_body += f"## {key}:\n"
|
||||
if 'walkthrough' in key.lower():
|
||||
if self.git_provider.is_supported("gfm_markdown"):
|
||||
pr_body += "<details> <summary>files:</summary>\n\n"
|
||||
# for filename, description in value.items():
|
||||
for file in value:
|
||||
filename = file['filename'].replace("'", "`")
|
||||
description = file['changes_in_file']
|
||||
pr_body += f'- `{filename}`: {description}\n'
|
||||
if self.git_provider.is_supported("gfm_markdown"):
|
||||
pr_body += "</details>\n"
|
||||
elif 'pr_files' in key.lower():
|
||||
pr_body = self.process_pr_files_prediction(pr_body, value)
|
||||
description = file['changes in file']
|
||||
pr_body += f'`{filename}`: {description}\n'
|
||||
else:
|
||||
# if the value is a list, join its items by comma
|
||||
if isinstance(value, list):
|
||||
if type(value) == list:
|
||||
value = ', '.join(v for v in value)
|
||||
pr_body += f"{value}\n"
|
||||
if idx < len(self.data) - 1:
|
||||
pr_body += "\n\n___\n\n"
|
||||
if idx < len(data) - 1:
|
||||
pr_body += "\n___\n"
|
||||
|
||||
markdown_text = f"## Title\n\n{title}\n\n___\n{pr_body}"
|
||||
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"title:\n{title}\n{pr_body}")
|
||||
logging.info(f"title:\n{title}\n{pr_body}")
|
||||
|
||||
return title, pr_body
|
||||
|
||||
def _prepare_file_labels(self):
|
||||
self.file_label_dict = {}
|
||||
for file in self.data['pr_files']:
|
||||
try:
|
||||
filename = file['filename'].replace("'", "`").replace('"', '`')
|
||||
changes_summary = file['changes_summary']
|
||||
changes_title = file['changes_title'].strip()
|
||||
label = file.get('label')
|
||||
if label not in self.file_label_dict:
|
||||
self.file_label_dict[label] = []
|
||||
self.file_label_dict[label].append((filename, changes_title, changes_summary))
|
||||
except Exception as e:
|
||||
get_logger().error(f"Error preparing file label dict {self.pr_id}: {e}")
|
||||
pass
|
||||
|
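The `file_label_dict` built above maps a semantic label to a list of `(filename, changes_title, changes_summary)` tuples; an illustrative instance (filenames and summaries are made up):

```python
file_label_dict = {
    "enhancement": [
        ("pr_agent/tools/pr_description.py",
         "Group files by semantic label",
         "Changed files are bucketed under an AI-assigned label."),
    ],
    "tests": [
        ("tests/test_description.py", "Cover label grouping", "Adds a unit test."),
    ],
}
for label, files in file_label_dict.items():
    print(label, [filename for filename, _, _ in files])
```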
||||
def process_pr_files_prediction(self, pr_body, value):
|
||||
# logic for using collapsible file list
|
||||
use_collapsible_file_list = get_settings().pr_description.collapsible_file_list
|
||||
num_files = 0
|
||||
if value:
|
||||
for semantic_label in value.keys():
|
||||
num_files += len(value[semantic_label])
|
||||
if use_collapsible_file_list == "adaptive":
|
||||
use_collapsible_file_list = num_files > 8
|
||||
|
||||
if not self.git_provider.is_supported("gfm_markdown"):
|
||||
get_logger().info(f"Disabling semantic files types for {self.pr_id} since gfm_markdown is not supported")
|
||||
return pr_body
|
||||
try:
|
||||
pr_body += "<table>"
|
||||
header = f"Relevant files"
|
||||
delta = 77
|
||||
# header += " " * delta
|
||||
pr_body += f"""<thead><tr><th></th><th align="left">{header}</th></tr></thead>"""
|
||||
pr_body += """<tbody>"""
|
||||
for semantic_label in value.keys():
|
||||
s_label = semantic_label.strip("'").strip('"')
|
||||
pr_body += f"""<tr><td><strong>{s_label.capitalize()}</strong></td>"""
|
||||
list_tuples = value[semantic_label]
|
||||
|
||||
if use_collapsible_file_list:
|
||||
pr_body += f"""<td><details><summary>{len(list_tuples)} files</summary><table>"""
|
||||
else:
|
||||
pr_body += f"""<td><table>"""
|
||||
for filename, file_changes_title, file_change_description in list_tuples:
|
||||
filename = filename.replace("'", "`")
|
||||
filename_publish = filename.split("/")[-1]
|
||||
file_changes_title_br = insert_br_after_x_chars(file_changes_title, x=(delta - 5),
|
||||
new_line_char="\n\n")
|
||||
file_changes_title_extended = file_changes_title_br.strip() + "</code>"
|
||||
if len(file_changes_title_extended) < (delta - 5):
|
||||
file_changes_title_extended += " " * ((delta - 5) - len(file_changes_title_extended))
|
||||
filename_publish = f"<strong>{filename_publish}</strong><dd><code>{file_changes_title_extended}</dd>"
|
||||
diff_plus_minus = ""
|
||||
delta_nbsp = ""
|
||||
diff_files = self.git_provider.diff_files
|
||||
for f in diff_files:
|
||||
if f.filename.lower() == filename.lower():
|
||||
num_plus_lines = f.num_plus_lines
|
||||
num_minus_lines = f.num_minus_lines
|
||||
diff_plus_minus += f"+{num_plus_lines}/-{num_minus_lines}"
|
||||
delta_nbsp = " " * max(0, (8 - len(diff_plus_minus)))
|
||||
break
|
||||
|
||||
# try to add line numbers link to code suggestions
|
||||
link = ""
|
||||
if hasattr(self.git_provider, 'get_line_link'):
|
||||
filename = filename.strip()
|
||||
link = self.git_provider.get_line_link(filename, relevant_line_start=-1)
|
||||
|
||||
file_change_description_br = insert_br_after_x_chars(file_change_description, x=(delta - 5))
|
||||
pr_body += f"""
|
||||
<tr>
|
||||
<td>
|
||||
<details>
|
||||
<summary>{filename_publish}</summary>
|
||||
<hr>
|
||||
|
||||
{filename}
|
||||
{file_change_description_br}
|
||||
</details>
|
||||
</td>
|
||||
<td><a href="{link}">{diff_plus_minus}</a>{delta_nbsp}</td>
|
||||
</tr>
|
||||
"""
|
||||
if use_collapsible_file_list:
|
||||
pr_body += """</table></details></td></tr>"""
|
||||
else:
|
||||
pr_body += """</table></td></tr>"""
|
||||
pr_body += """</tr></tbody></table>"""
|
||||
|
||||
except Exception as e:
|
||||
get_logger().error(f"Error processing pr files to markdown {self.pr_id}: {e}")
|
||||
pass
|
||||
return pr_body
|
||||
|
||||
def insert_br_after_x_chars(text, x=70, new_line_char="<br> "):
    """
    Insert <br> into a string after a word that increases its length above x characters.
    """
    if len(text) < x:
        return text

    lines = text.splitlines()
    words = []
    for i, line in enumerate(lines):
        words += line.split(' ')
        if i < len(lines) - 1:
            words[-1] += "\n"

    # words = text.split(' ')

    new_text = ""
    current_length = 0
    is_inside_code = False
    for word in words:
        # Check if adding this word exceeds x characters
        if current_length + len(word) > x:
            if not is_inside_code:
                new_text += f"{new_line_char} "  # Insert line break
                current_length = 0  # Reset counter
            else:
                new_text += f"`{new_line_char} `"
        # check if inside <code> tag
        if word.startswith("`") and not is_inside_code and not word.endswith("`"):
            is_inside_code = True
        if word.endswith("`"):
            is_inside_code = False

        # Add the word to the new text
        if word.endswith("\n"):
            new_text += word
        else:
            new_text += word + " "
        current_length += len(word) + 1  # Add 1 for the space

        if word.endswith("\n"):
            current_length = 0
    return new_text.strip()  # Remove trailing space
return title, pr_body, pr_types, markdown_text
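For reference, a minimal sketch of how the `insert_br_after_x_chars` helper above behaves; the import path and sample text are assumptions for illustration, not taken from the diff:

```python
# Illustrative sketch only: assumes the helper lives in pr_agent.tools.pr_description.
from pr_agent.tools.pr_description import insert_br_after_x_chars

summary = "refactor the PR description builder and wrap long file-change titles for the table"
print(insert_br_after_x_chars(summary, x=40))                        # wraps with the default "<br> "
print(insert_br_after_x_chars(summary, x=40, new_line_char="\n\n"))  # custom separator, as used above
```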
@ -1,184 +0,0 @@
|
||||
import copy
|
||||
import re
|
||||
from functools import partial
|
||||
from typing import List, Tuple
|
||||
|
||||
from jinja2 import Environment, StrictUndefined
|
||||
|
||||
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
|
||||
from pr_agent.algo.ai_handlers.litellm_ai_handler import LiteLLMAIHandler
|
||||
from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models
|
||||
from pr_agent.algo.token_handler import TokenHandler
|
||||
from pr_agent.algo.utils import load_yaml, set_custom_labels, get_user_labels
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers import get_git_provider
|
||||
from pr_agent.git_providers.git_provider import get_main_pr_language
|
||||
from pr_agent.log import get_logger
|
||||
|
||||
|
||||
class PRGenerateLabels:
|
||||
def __init__(self, pr_url: str, args: list = None,
|
||||
ai_handler: partial[BaseAiHandler,] = LiteLLMAIHandler):
|
||||
"""
|
||||
Initialize the PRGenerateLabels object with the necessary attributes and objects for generating labels
|
||||
corresponding to the PR using an AI model.
|
||||
Args:
|
||||
pr_url (str): The URL of the pull request.
|
||||
args (list, optional): List of arguments passed to the PRGenerateLabels class. Defaults to None.
|
||||
"""
|
||||
# Initialize the git provider and main PR language
|
||||
self.git_provider = get_git_provider()(pr_url)
|
||||
self.main_pr_language = get_main_pr_language(
|
||||
self.git_provider.get_languages(), self.git_provider.get_files()
|
||||
)
|
||||
self.pr_id = self.git_provider.get_pr_id()
|
||||
|
||||
# Initialize the AI handler
|
||||
self.ai_handler = ai_handler()
|
||||
|
||||
# Initialize the variables dictionary
|
||||
self.vars = {
|
||||
"title": self.git_provider.pr.title,
|
||||
"branch": self.git_provider.get_pr_branch(),
|
||||
"description": self.git_provider.get_pr_description(full=False),
|
||||
"language": self.main_pr_language,
|
||||
"diff": "", # empty diff for initial calculation
|
||||
"extra_instructions": get_settings().pr_description.extra_instructions,
|
||||
"commit_messages_str": self.git_provider.get_commit_messages(),
|
||||
"enable_custom_labels": get_settings().config.enable_custom_labels,
|
||||
"custom_labels_class": "", # will be filled if necessary in 'set_custom_labels' function
|
||||
}
|
||||
|
||||
# Initialize the token handler
|
||||
self.token_handler = TokenHandler(
|
||||
self.git_provider.pr,
|
||||
self.vars,
|
||||
get_settings().pr_custom_labels_prompt.system,
|
||||
get_settings().pr_custom_labels_prompt.user,
|
||||
)
|
||||
|
||||
# Initialize patches_diff and prediction attributes
|
||||
self.patches_diff = None
|
||||
self.prediction = None
|
||||
|
||||
async def run(self):
|
||||
"""
|
||||
Generates PR labels using an AI model and publishes them to the PR.
|
||||
"""
|
||||
|
||||
try:
|
||||
get_logger().info(f"Generating a PR labels {self.pr_id}")
|
||||
if get_settings().config.publish_output:
|
||||
self.git_provider.publish_comment("Preparing PR labels...", is_temporary=True)
|
||||
|
||||
await retry_with_fallback_models(self._prepare_prediction)
|
||||
|
||||
get_logger().info(f"Preparing answer {self.pr_id}")
|
||||
if self.prediction:
|
||||
self._prepare_data()
|
||||
else:
|
||||
return None
|
||||
|
||||
pr_labels = self._prepare_labels()
|
||||
|
||||
if get_settings().config.publish_output:
|
||||
get_logger().info(f"Pushing labels {self.pr_id}")
|
||||
|
||||
current_labels = self.git_provider.get_pr_labels()
|
||||
user_labels = get_user_labels(current_labels)
|
||||
pr_labels = pr_labels + user_labels
|
||||
|
||||
if self.git_provider.is_supported("get_labels"):
|
||||
self.git_provider.publish_labels(pr_labels)
|
||||
elif pr_labels:
|
||||
value = ', '.join(v for v in pr_labels)
|
||||
pr_labels_text = f"## PR Labels:\n{value}\n"
|
||||
self.git_provider.publish_comment(pr_labels_text, is_temporary=False)
|
||||
self.git_provider.remove_initial_comment()
|
||||
except Exception as e:
|
||||
get_logger().error(f"Error generating PR labels {self.pr_id}: {e}")
|
||||
|
||||
return ""
|
||||
|
||||
async def _prepare_prediction(self, model: str) -> None:
|
||||
"""
|
||||
Prepare the AI prediction for the PR labels based on the provided model.
|
||||
|
||||
Args:
|
||||
model (str): The name of the model to be used for generating the prediction.
|
||||
|
||||
Returns:
|
||||
None
|
||||
|
||||
Raises:
|
||||
Any exceptions raised by the 'get_pr_diff' and '_get_prediction' functions.
|
||||
|
||||
"""
|
||||
|
||||
get_logger().info(f"Getting PR diff {self.pr_id}")
|
||||
self.patches_diff = get_pr_diff(self.git_provider, self.token_handler, model)
|
||||
get_logger().info(f"Getting AI prediction {self.pr_id}")
|
||||
self.prediction = await self._get_prediction(model)
|
||||
|
||||
async def _get_prediction(self, model: str) -> str:
|
||||
"""
|
||||
Generate an AI prediction for the PR labels based on the provided model.
|
||||
|
||||
Args:
|
||||
model (str): The name of the model to be used for generating the prediction.
|
||||
|
||||
Returns:
|
||||
str: The generated AI prediction.
|
||||
"""
|
||||
variables = copy.deepcopy(self.vars)
|
||||
variables["diff"] = self.patches_diff # update diff
|
||||
|
||||
environment = Environment(undefined=StrictUndefined)
|
||||
set_custom_labels(variables, self.git_provider)
|
||||
self.variables = variables
|
||||
system_prompt = environment.from_string(get_settings().pr_custom_labels_prompt.system).render(variables)
|
||||
user_prompt = environment.from_string(get_settings().pr_custom_labels_prompt.user).render(variables)
|
||||
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"\nSystem prompt:\n{system_prompt}")
|
||||
get_logger().info(f"\nUser prompt:\n{user_prompt}")
|
||||
|
||||
response, finish_reason = await self.ai_handler.chat_completion(
|
||||
model=model,
|
||||
temperature=0.2,
|
||||
system=system_prompt,
|
||||
user=user_prompt
|
||||
)
|
||||
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"\nAI response:\n{response}")
|
||||
|
||||
return response
|
||||
|
||||
def _prepare_data(self):
|
||||
# Load the AI prediction data into a dictionary
|
||||
self.data = load_yaml(self.prediction.strip())
|
||||
|
||||
|
||||
|
||||
def _prepare_labels(self) -> List[str]:
|
||||
pr_types = []
|
||||
|
||||
# If the 'labels' key is present in the dictionary, split its value by comma and assign it to 'pr_types'
|
||||
if 'labels' in self.data:
|
||||
if type(self.data['labels']) == list:
|
||||
pr_types = self.data['labels']
|
||||
elif type(self.data['labels']) == str:
|
||||
pr_types = self.data['labels'].split(',')
|
||||
|
||||
# convert lowercase labels to original case
|
||||
try:
|
||||
if "labels_minimal_to_labels_dict" in self.variables:
|
||||
d: dict = self.variables["labels_minimal_to_labels_dict"]
|
||||
for i, label_i in enumerate(pr_types):
|
||||
if label_i in d:
|
||||
pr_types[i] = d[label_i]
|
||||
except Exception as e:
|
||||
get_logger().error(f"Error converting labels to original case {self.pr_id}: {e}")
|
||||
|
||||
return pr_types
|
@ -1,26 +1,23 @@
|
||||
import copy
|
||||
from functools import partial
|
||||
import logging
|
||||
|
||||
from jinja2 import Environment, StrictUndefined
|
||||
|
||||
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
|
||||
from pr_agent.algo.ai_handlers.litellm_ai_handler import LiteLLMAIHandler
|
||||
from pr_agent.algo.ai_handler import AiHandler
|
||||
from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models
|
||||
from pr_agent.algo.token_handler import TokenHandler
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers import get_git_provider
|
||||
from pr_agent.git_providers.git_provider import get_main_pr_language
|
||||
from pr_agent.log import get_logger
|
||||
|
||||
|
||||
class PRInformationFromUser:
|
||||
def __init__(self, pr_url: str, args: list = None,
|
||||
ai_handler: partial[BaseAiHandler,] = LiteLLMAIHandler):
|
||||
def __init__(self, pr_url: str, args: list = None):
|
||||
self.git_provider = get_git_provider()(pr_url)
|
||||
self.main_pr_language = get_main_pr_language(
|
||||
self.git_provider.get_languages(), self.git_provider.get_files()
|
||||
)
|
||||
self.ai_handler = ai_handler()
|
||||
self.ai_handler = AiHandler()
|
||||
self.vars = {
|
||||
"title": self.git_provider.pr.title,
|
||||
"branch": self.git_provider.get_pr_branch(),
|
||||
@ -37,22 +34,22 @@ class PRInformationFromUser:
|
||||
self.prediction = None
|
||||
|
||||
async def run(self):
|
||||
get_logger().info('Generating question to the user...')
|
||||
logging.info('Generating question to the user...')
|
||||
if get_settings().config.publish_output:
|
||||
self.git_provider.publish_comment("Preparing questions...", is_temporary=True)
|
||||
await retry_with_fallback_models(self._prepare_prediction)
|
||||
get_logger().info('Preparing questions...')
|
||||
logging.info('Preparing questions...')
|
||||
pr_comment = self._prepare_pr_answer()
|
||||
if get_settings().config.publish_output:
|
||||
get_logger().info('Pushing questions...')
|
||||
logging.info('Pushing questions...')
|
||||
self.git_provider.publish_comment(pr_comment)
|
||||
self.git_provider.remove_initial_comment()
|
||||
return ""
|
||||
|
||||
async def _prepare_prediction(self, model):
|
||||
get_logger().info('Getting PR diff...')
|
||||
logging.info('Getting PR diff...')
|
||||
self.patches_diff = get_pr_diff(self.git_provider, self.token_handler, model)
|
||||
get_logger().info('Getting AI prediction...')
|
||||
logging.info('Getting AI prediction...')
|
||||
self.prediction = await self._get_prediction(model)
|
||||
|
||||
async def _get_prediction(self, model: str):
|
||||
@ -62,8 +59,8 @@ class PRInformationFromUser:
|
||||
system_prompt = environment.from_string(get_settings().pr_information_from_user_prompt.system).render(variables)
|
||||
user_prompt = environment.from_string(get_settings().pr_information_from_user_prompt.user).render(variables)
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"\nSystem prompt:\n{system_prompt}")
|
||||
get_logger().info(f"\nUser prompt:\n{user_prompt}")
|
||||
logging.info(f"\nSystem prompt:\n{system_prompt}")
|
||||
logging.info(f"\nUser prompt:\n{user_prompt}")
|
||||
response, finish_reason = await self.ai_handler.chat_completion(model=model, temperature=0.2,
|
||||
system=system_prompt, user=user_prompt)
|
||||
return response
|
||||
@ -71,7 +68,7 @@ class PRInformationFromUser:
|
||||
def _prepare_pr_answer(self) -> str:
|
||||
model_output = self.prediction.strip()
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"answer_str:\n{model_output}")
|
||||
logging.info(f"answer_str:\n{model_output}")
|
||||
answer_str = f"{model_output}\n\n Please respond to the questions above in the following format:\n\n" +\
|
||||
"\n>/answer\n>1) ...\n>2) ...\n>...\n"
|
||||
return answer_str
|
||||
|
@ -1,27 +1,24 @@
|
||||
import copy
|
||||
from functools import partial
|
||||
import logging
|
||||
|
||||
from jinja2 import Environment, StrictUndefined
|
||||
|
||||
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
|
||||
from pr_agent.algo.ai_handlers.litellm_ai_handler import LiteLLMAIHandler
|
||||
from pr_agent.algo.ai_handler import AiHandler
|
||||
from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models
|
||||
from pr_agent.algo.token_handler import TokenHandler
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers import get_git_provider
|
||||
from pr_agent.git_providers.git_provider import get_main_pr_language
|
||||
from pr_agent.log import get_logger
|
||||
from pr_agent.servers.help import HelpMessage
|
||||
|
||||
|
||||
class PRQuestions:
|
||||
def __init__(self, pr_url: str, args=None, ai_handler: partial[BaseAiHandler,] = LiteLLMAIHandler):
|
||||
def __init__(self, pr_url: str, args=None):
|
||||
question_str = self.parse_args(args)
|
||||
self.git_provider = get_git_provider()(pr_url)
|
||||
self.main_pr_language = get_main_pr_language(
|
||||
self.git_provider.get_languages(), self.git_provider.get_files()
|
||||
)
|
||||
self.ai_handler = ai_handler()
|
||||
self.ai_handler = AiHandler()
|
||||
self.question_str = question_str
|
||||
self.vars = {
|
||||
"title": self.git_provider.pr.title,
|
||||
@ -47,27 +44,22 @@ class PRQuestions:
|
||||
return question_str
|
||||
|
||||
async def run(self):
|
||||
get_logger().info('Answering a PR question...')
|
||||
logging.info('Answering a PR question...')
|
||||
if get_settings().config.publish_output:
|
||||
self.git_provider.publish_comment("Preparing answer...", is_temporary=True)
|
||||
await retry_with_fallback_models(self._prepare_prediction)
|
||||
get_logger().info('Preparing answer...')
|
||||
logging.info('Preparing answer...')
|
||||
pr_comment = self._prepare_pr_answer()
|
||||
if self.git_provider.is_supported("gfm_markdown") and get_settings().pr_questions.enable_help_text:
|
||||
pr_comment += "<hr>\n\n<details> <summary><strong>✨ Usage guide:</strong></summary><hr> \n\n"
|
||||
pr_comment += HelpMessage.get_ask_usage_guide()
|
||||
pr_comment += "\n</details>\n"
|
||||
|
||||
if get_settings().config.publish_output:
|
||||
get_logger().info('Pushing answer...')
|
||||
logging.info('Pushing answer...')
|
||||
self.git_provider.publish_comment(pr_comment)
|
||||
self.git_provider.remove_initial_comment()
|
||||
return ""
|
||||
|
||||
async def _prepare_prediction(self, model: str):
|
||||
get_logger().info('Getting PR diff...')
|
||||
logging.info('Getting PR diff...')
|
||||
self.patches_diff = get_pr_diff(self.git_provider, self.token_handler, model)
|
||||
get_logger().info('Getting AI prediction...')
|
||||
logging.info('Getting AI prediction...')
|
||||
self.prediction = await self._get_prediction(model)
|
||||
|
||||
async def _get_prediction(self, model: str):
|
||||
@ -77,8 +69,8 @@ class PRQuestions:
|
||||
system_prompt = environment.from_string(get_settings().pr_questions_prompt.system).render(variables)
|
||||
user_prompt = environment.from_string(get_settings().pr_questions_prompt.user).render(variables)
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"\nSystem prompt:\n{system_prompt}")
|
||||
get_logger().info(f"\nUser prompt:\n{user_prompt}")
|
||||
logging.info(f"\nSystem prompt:\n{system_prompt}")
|
||||
logging.info(f"\nUser prompt:\n{user_prompt}")
|
||||
response, finish_reason = await self.ai_handler.chat_completion(model=model, temperature=0.2,
|
||||
system=system_prompt, user=user_prompt)
|
||||
return response
|
||||
@ -87,5 +79,5 @@ class PRQuestions:
|
||||
answer_str = f"Question: {self.question_str}\n\n"
|
||||
answer_str += f"Answer:\n{self.prediction.strip()}\n\n"
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"answer_str:\n{answer_str}")
|
||||
logging.info(f"answer_str:\n{answer_str}")
|
||||
return answer_str
|
||||
|
@ -1,39 +1,35 @@
|
||||
import copy
|
||||
import datetime
|
||||
import json
|
||||
import logging
|
||||
from collections import OrderedDict
|
||||
from functools import partial
|
||||
from typing import List, Tuple
|
||||
|
||||
import yaml
|
||||
from jinja2 import Environment, StrictUndefined
|
||||
from yaml import SafeLoader
|
||||
|
||||
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
|
||||
from pr_agent.algo.ai_handlers.litellm_ai_handler import LiteLLMAIHandler
|
||||
from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models
|
||||
from pr_agent.algo.ai_handler import AiHandler
|
||||
from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models, \
|
||||
find_line_number_of_relevant_line_in_file, clip_tokens
|
||||
from pr_agent.algo.token_handler import TokenHandler
|
||||
from pr_agent.algo.utils import convert_to_markdown, load_yaml, try_fix_yaml, set_custom_labels, get_user_labels
|
||||
from pr_agent.algo.utils import convert_to_markdown, try_fix_json, try_fix_yaml, load_yaml
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers import get_git_provider
|
||||
from pr_agent.git_providers.git_provider import IncrementalPR, get_main_pr_language
|
||||
from pr_agent.log import get_logger
|
||||
from pr_agent.servers.help import HelpMessage
|
||||
from pr_agent.servers.help import actions_help_text, bot_help_text
|
||||
|
||||
|
||||
class PRReviewer:
|
||||
"""
|
||||
The PRReviewer class is responsible for reviewing a pull request and generating feedback using an AI model.
|
||||
"""
|
||||
def __init__(self, pr_url: str, is_answer: bool = False, is_auto: bool = False, args: list = None,
|
||||
ai_handler: partial[BaseAiHandler,] = LiteLLMAIHandler):
|
||||
def __init__(self, pr_url: str, is_answer: bool = False, is_auto: bool = False, args: list = None):
|
||||
"""
|
||||
Initialize the PRReviewer object with the necessary attributes and objects to review a pull request.
|
||||
|
||||
Args:
|
||||
pr_url (str): The URL of the pull request to be reviewed.
|
||||
is_answer (bool, optional): Indicates whether the review is being done in answer mode. Defaults to False.
|
||||
is_auto (bool, optional): Indicates whether the review is being done in automatic mode. Defaults to False.
|
||||
ai_handler (BaseAiHandler): The AI handler to be used for the review. Defaults to None.
|
||||
args (list, optional): List of arguments passed to the PRReviewer class. Defaults to None.
|
||||
"""
|
||||
self.parse_args(args) # -i command
|
||||
@ -48,7 +44,7 @@ class PRReviewer:
|
||||
|
||||
if self.is_answer and not self.git_provider.is_supported("get_issue_comments"):
|
||||
raise Exception(f"Answer mode is not supported for {get_settings().config.git_provider} for now")
|
||||
self.ai_handler = ai_handler()
|
||||
self.ai_handler = AiHandler()
|
||||
self.patches_diff = None
|
||||
self.prediction = None
|
||||
|
||||
@ -63,14 +59,11 @@ class PRReviewer:
|
||||
"require_tests": get_settings().pr_reviewer.require_tests_review,
|
||||
"require_security": get_settings().pr_reviewer.require_security_review,
|
||||
"require_focused": get_settings().pr_reviewer.require_focused_review,
|
||||
"require_estimate_effort_to_review": get_settings().pr_reviewer.require_estimate_effort_to_review,
|
||||
'num_code_suggestions': get_settings().pr_reviewer.num_code_suggestions,
|
||||
'question_str': question_str,
|
||||
'answer_str': answer_str,
|
||||
"extra_instructions": get_settings().pr_reviewer.extra_instructions,
|
||||
"commit_messages_str": self.git_provider.get_commit_messages(),
|
||||
"custom_labels": "",
|
||||
"enable_custom_labels": get_settings().config.enable_custom_labels,
|
||||
}
|
||||
|
||||
self.token_handler = TokenHandler(
|
||||
@ -98,53 +91,46 @@ class PRReviewer:
|
||||
self.incremental = IncrementalPR(is_incremental)
|
||||
|
||||
async def run(self) -> None:
|
||||
try:
|
||||
if self.incremental.is_incremental and not self._can_run_incremental_review():
|
||||
"""
|
||||
Review the pull request and generate feedback.
|
||||
"""
|
||||
if self.is_auto and not get_settings().pr_reviewer.automatic_review:
|
||||
logging.info(f'Automatic review is disabled {self.pr_url}')
|
||||
return None
|
||||
|
||||
get_logger().info(f'Reviewing PR: {self.pr_url} ...')
|
||||
logging.info(f'Reviewing PR: {self.pr_url} ...')
|
||||
|
||||
if get_settings().config.publish_output:
|
||||
self.git_provider.publish_comment("Preparing review...", is_temporary=True)
|
||||
|
||||
await retry_with_fallback_models(self._prepare_prediction)
|
||||
if not self.prediction:
|
||||
self.git_provider.remove_initial_comment()
|
||||
return None
|
||||
|
||||
get_logger().info('Preparing PR review...')
|
||||
logging.info('Preparing PR review...')
|
||||
pr_comment = self._prepare_pr_review()
|
||||
|
||||
if get_settings().config.publish_output:
|
||||
get_logger().info('Pushing PR review...')
|
||||
previous_review_comment = self._get_previous_review_comment()
|
||||
|
||||
# publish the review
|
||||
if get_settings().pr_reviewer.persistent_comment and not self.incremental.is_incremental:
|
||||
self.git_provider.publish_persistent_comment(pr_comment,
|
||||
initial_header="## PR Analysis",
|
||||
update_header=True)
|
||||
else:
|
||||
logging.info('Pushing PR review...')
|
||||
self.git_provider.publish_comment(pr_comment)
|
||||
|
||||
self.git_provider.remove_initial_comment()
|
||||
if previous_review_comment:
|
||||
self._remove_previous_review_comment(previous_review_comment)
|
||||
|
||||
if get_settings().pr_reviewer.inline_code_comments:
|
||||
get_logger().info('Pushing inline code comments...')
|
||||
logging.info('Pushing inline code comments...')
|
||||
self._publish_inline_code_comments()
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed to review PR: {e}")
|
||||
|
||||
async def _prepare_prediction(self, model: str) -> None:
|
||||
get_logger().info('Getting PR diff...')
|
||||
"""
|
||||
Prepare the AI prediction for the pull request review.
|
||||
|
||||
Args:
|
||||
model: A string representing the AI model to be used for the prediction.
|
||||
|
||||
Returns:
|
||||
None
|
||||
"""
|
||||
logging.info('Getting PR diff...')
|
||||
self.patches_diff = get_pr_diff(self.git_provider, self.token_handler, model)
|
||||
if self.patches_diff:
|
||||
get_logger().info('Getting AI prediction...')
|
||||
logging.info('Getting AI prediction...')
|
||||
self.prediction = await self._get_prediction(model)
|
||||
else:
|
||||
get_logger().error(f"Error getting PR diff")
|
||||
self.prediction = None
|
||||
|
||||
async def _get_prediction(self, model: str) -> str:
|
||||
"""
|
||||
@ -164,8 +150,8 @@ class PRReviewer:
|
||||
user_prompt = environment.from_string(get_settings().pr_review_prompt.user).render(variables)
|
||||
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"\nSystem prompt:\n{system_prompt}")
|
||||
get_logger().info(f"\nUser prompt:\n{user_prompt}")
|
||||
logging.info(f"\nSystem prompt:\n{system_prompt}")
|
||||
logging.info(f"\nUser prompt:\n{user_prompt}")
|
||||
|
||||
response, finish_reason = await self.ai_handler.chat_completion(
|
||||
model=model,
|
||||
@ -174,9 +160,6 @@ class PRReviewer:
|
||||
user=user_prompt
|
||||
)
|
||||
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"\nAI response:\n{response}")
|
||||
|
||||
return response
|
||||
|
||||
def _prepare_pr_review(self) -> str:
|
||||
@ -221,38 +204,30 @@ class PRReviewer:
|
||||
link = self.git_provider.generate_link_to_relevant_line_number(suggestion)
|
||||
if link:
|
||||
suggestion['relevant line'] = f"[{suggestion['relevant line']}]({link})"
|
||||
else:
|
||||
pass
|
||||
|
||||
|
||||
# Add incremental review section
|
||||
if self.incremental.is_incremental:
|
||||
last_commit_url = f"{self.git_provider.get_pr_url()}/commits/" \
|
||||
f"{self.git_provider.incremental.first_new_commit_sha}"
|
||||
last_commit_msg = self.incremental.commits_range[0].commit.message if self.incremental.commits_range else ""
|
||||
incremental_review_markdown_text = f"Starting from commit {last_commit_url}"
|
||||
if last_commit_msg:
|
||||
replacement = last_commit_msg.splitlines(keepends=False)[0].replace('_', r'\_')
|
||||
incremental_review_markdown_text += f" \n_({replacement})_"
|
||||
data = OrderedDict(data)
|
||||
data.update({'Incremental PR Review': {
|
||||
"⏮️ Review for commits since previous PR-Agent review": incremental_review_markdown_text}})
|
||||
"⏮️ Review for commits since previous PR-Agent review": f"Starting from commit {last_commit_url}"}})
|
||||
data.move_to_end('Incremental PR Review', last=False)
|
||||
|
||||
markdown_text = convert_to_markdown(data, self.git_provider.is_supported("gfm_markdown"))
|
||||
markdown_text = convert_to_markdown(data)
|
||||
user = self.git_provider.get_user_id()
|
||||
|
||||
# Add help text if gfm_markdown is supported
|
||||
if self.git_provider.is_supported("gfm_markdown") and get_settings().pr_reviewer.enable_help_text:
|
||||
markdown_text += "<hr>\n\n<details> <summary><strong>✨ Usage guide:</strong></summary><hr> \n\n"
|
||||
markdown_text += HelpMessage.get_review_usage_guide()
|
||||
markdown_text += "\n</details>\n"
|
||||
|
||||
# Add custom labels from the review prediction (effort, security)
|
||||
self.set_review_labels(data)
|
||||
# Add help text if not in CLI mode
|
||||
if not get_settings().get("CONFIG.CLI_MODE", False):
|
||||
markdown_text += "\n### How to use\n"
|
||||
if user and '[bot]' not in user:
|
||||
markdown_text += bot_help_text(user)
|
||||
else:
|
||||
markdown_text += actions_help_text
|
||||
|
||||
# Log markdown response if verbosity level is high
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"Markdown response:\n{markdown_text}")
|
||||
logging.info(f"Markdown response:\n{markdown_text}")
|
||||
|
||||
if markdown_text == None or len(markdown_text) == 0:
|
||||
markdown_text = ""
|
||||
@ -266,14 +241,21 @@ class PRReviewer:
|
||||
if get_settings().pr_reviewer.num_code_suggestions == 0:
|
||||
return
|
||||
|
||||
data = load_yaml(self.prediction.strip())
|
||||
review_text = self.prediction.strip()
|
||||
review_text = review_text.removeprefix('```yaml').rstrip('`')
|
||||
try:
|
||||
data = yaml.load(review_text, Loader=SafeLoader)
|
||||
except Exception as e:
|
||||
logging.error(f"Failed to parse AI prediction: {e}")
|
||||
data = try_fix_yaml(review_text)
|
||||
|
||||
comments: List[str] = []
|
||||
for suggestion in data.get('PR Feedback', {}).get('Code feedback', []):
|
||||
relevant_file = suggestion.get('relevant file', '').strip()
|
||||
relevant_line_in_file = suggestion.get('relevant line', '').strip()
|
||||
content = suggestion.get('suggestion', '')
|
||||
if not relevant_file or not relevant_line_in_file or not content:
|
||||
get_logger().info("Skipping inline comment with missing file/line/content")
|
||||
logging.info("Skipping inline comment with missing file/line/content")
|
||||
continue
|
||||
|
||||
if self.git_provider.is_supported("create_inline_comment"):
|
||||
@ -309,86 +291,3 @@ class PRReviewer:
|
||||
break
|
||||
|
||||
return question_str, answer_str
|
||||
|
||||
def _get_previous_review_comment(self):
|
||||
"""
|
||||
Get the previous review comment if it exists.
|
||||
"""
|
||||
try:
|
||||
if get_settings().pr_reviewer.remove_previous_review_comment and hasattr(self.git_provider, "get_previous_review"):
|
||||
return self.git_provider.get_previous_review(
|
||||
full=not self.incremental.is_incremental,
|
||||
incremental=self.incremental.is_incremental,
|
||||
)
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Failed to get previous review comment, error: {e}")
|
||||
|
||||
def _remove_previous_review_comment(self, comment):
|
||||
"""
|
||||
Remove the previous review comment if it exists.
|
||||
"""
|
||||
try:
|
||||
if get_settings().pr_reviewer.remove_previous_review_comment and comment:
|
||||
self.git_provider.remove_comment(comment)
|
||||
except Exception as e:
|
||||
get_logger().exception(f"Failed to remove previous review comment, error: {e}")
|
||||
|
||||
def _can_run_incremental_review(self) -> bool:
|
||||
"""Checks if we can run incremental review according the various configurations and previous review"""
|
||||
# checking if running is auto mode but there are no new commits
|
||||
if self.is_auto and not self.incremental.first_new_commit_sha:
|
||||
get_logger().info(f"Incremental review is enabled for {self.pr_url} but there are no new commits")
|
||||
return False
|
||||
# checking if there are enough commits to start the review
|
||||
num_new_commits = len(self.incremental.commits_range)
|
||||
num_commits_threshold = get_settings().pr_reviewer.minimal_commits_for_incremental_review
|
||||
not_enough_commits = num_new_commits < num_commits_threshold
|
||||
# checking if the commits are not too recent to start the review
|
||||
recent_commits_threshold = datetime.datetime.now() - datetime.timedelta(
|
||||
minutes=get_settings().pr_reviewer.minimal_minutes_for_incremental_review
|
||||
)
|
||||
last_seen_commit_date = (
|
||||
self.incremental.last_seen_commit.commit.author.date if self.incremental.last_seen_commit else None
|
||||
)
|
||||
all_commits_too_recent = (
|
||||
last_seen_commit_date > recent_commits_threshold if self.incremental.last_seen_commit else False
|
||||
)
|
||||
# check all the thresholds or just one to start the review
|
||||
condition = any if get_settings().pr_reviewer.require_all_thresholds_for_incremental_review else all
|
||||
if condition((not_enough_commits, all_commits_too_recent)):
|
||||
get_logger().info(
|
||||
f"Incremental review is enabled for {self.pr_url} but didn't pass the threshold check to run:"
|
||||
f"\n* Number of new commits = {num_new_commits} (threshold is {num_commits_threshold})"
|
||||
f"\n* Last seen commit date = {last_seen_commit_date} (threshold is {recent_commits_threshold})"
|
||||
)
|
||||
return False
|
||||
return True
|
||||
|
||||
def set_review_labels(self, data):
|
||||
if (get_settings().pr_reviewer.enable_review_labels_security or
|
||||
get_settings().pr_reviewer.enable_review_labels_effort):
|
||||
try:
|
||||
review_labels = []
|
||||
if get_settings().pr_reviewer.enable_review_labels_effort:
|
||||
estimated_effort = data['PR Analysis']['Estimated effort to review [1-5]']
|
||||
estimated_effort_number = int(estimated_effort.split(',')[0])
|
||||
if 1 <= estimated_effort_number <= 5: # 1, because ...
|
||||
review_labels.append(f'Review effort [1-5]: {estimated_effort_number}')
|
||||
if get_settings().pr_reviewer.enable_review_labels_security:
|
||||
security_concerns = data['PR Analysis']['Security concerns'] # yes, because ...
|
||||
security_concerns_bool = 'yes' in security_concerns.lower() or 'true' in security_concerns.lower()
|
||||
if security_concerns_bool:
|
||||
review_labels.append('Possible security concern')
|
||||
|
||||
current_labels = self.git_provider.get_pr_labels()
|
||||
if current_labels:
|
||||
current_labels_filtered = [label for label in current_labels if
|
||||
not label.lower().startswith('review effort [1-5]:') and not label.lower().startswith(
|
||||
'possible security concern')]
|
||||
else:
|
||||
current_labels_filtered = []
|
||||
if current_labels or review_labels:
|
||||
get_logger().info(f"Setting review labels: {review_labels + current_labels_filtered}")
|
||||
self.git_provider.publish_labels(review_labels + current_labels_filtered)
|
||||
except Exception as e:
|
||||
get_logger().error(f"Failed to set review labels, error: {e}")
|
||||
|
@ -1,480 +0,0 @@
|
||||
import time
|
||||
from enum import Enum
|
||||
from typing import List
|
||||
|
||||
import openai
|
||||
import pandas as pd
|
||||
import pinecone
|
||||
from pinecone_datasets import Dataset, DatasetMetadata
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
from pr_agent.algo import MAX_TOKENS
|
||||
from pr_agent.algo.token_handler import TokenHandler
|
||||
from pr_agent.algo.utils import get_max_tokens
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers import get_git_provider
|
||||
from pr_agent.log import get_logger
|
||||
|
||||
MODEL = "text-embedding-ada-002"
|
||||
|
||||
|
||||
class PRSimilarIssue:
|
||||
def __init__(self, issue_url: str, ai_handler, args: list = None):
|
||||
if get_settings().config.git_provider != "github":
|
||||
raise Exception("Only github is supported for similar issue tool")
|
||||
|
||||
self.cli_mode = get_settings().CONFIG.CLI_MODE
|
||||
self.max_issues_to_scan = get_settings().pr_similar_issue.max_issues_to_scan
|
||||
self.issue_url = issue_url
|
||||
self.git_provider = get_git_provider()()
|
||||
repo_name, issue_number = self.git_provider._parse_issue_url(issue_url.split('=')[-1])
|
||||
self.git_provider.repo = repo_name
|
||||
self.git_provider.repo_obj = self.git_provider.github_client.get_repo(repo_name)
|
||||
self.token_handler = TokenHandler()
|
||||
repo_obj = self.git_provider.repo_obj
|
||||
repo_name_for_index = self.repo_name_for_index = repo_obj.full_name.lower().replace('/', '-').replace('_/', '-')
|
||||
index_name = self.index_name = "codium-ai-pr-agent-issues"
|
||||
|
||||
if get_settings().pr_similar_issue.vectordb == "pinecone":
|
||||
# assuming pinecone api key and environment are set in secrets file
|
||||
try:
|
||||
api_key = get_settings().pinecone.api_key
|
||||
environment = get_settings().pinecone.environment
|
||||
except Exception:
|
||||
if not self.cli_mode:
|
||||
repo_name, original_issue_number = self.git_provider._parse_issue_url(self.issue_url.split('=')[-1])
|
||||
issue_main = self.git_provider.repo_obj.get_issue(original_issue_number)
|
||||
issue_main.create_comment("Please set pinecone api key and environment in secrets file")
|
||||
raise Exception("Please set pinecone api key and environment in secrets file")
|
||||
|
||||
# check if index exists, and if repo is already indexed
|
||||
run_from_scratch = False
|
||||
if run_from_scratch: # for debugging
|
||||
pinecone.init(api_key=api_key, environment=environment)
|
||||
if index_name in pinecone.list_indexes():
|
||||
get_logger().info('Removing index...')
|
||||
pinecone.delete_index(index_name)
|
||||
get_logger().info('Done')
|
||||
|
||||
upsert = True
|
||||
pinecone.init(api_key=api_key, environment=environment)
|
||||
if not index_name in pinecone.list_indexes():
|
||||
run_from_scratch = True
|
||||
upsert = False
|
||||
else:
|
||||
if get_settings().pr_similar_issue.force_update_dataset:
|
||||
upsert = True
|
||||
else:
|
||||
pinecone_index = pinecone.Index(index_name=index_name)
|
||||
res = pinecone_index.fetch([f"example_issue_{repo_name_for_index}"]).to_dict()
|
||||
if res["vectors"]:
|
||||
upsert = False
|
||||
|
||||
if run_from_scratch or upsert: # index the entire repo
|
||||
get_logger().info('Indexing the entire repo...')
|
||||
|
||||
get_logger().info('Getting issues...')
|
||||
issues = list(repo_obj.get_issues(state='all'))
|
||||
get_logger().info('Done')
|
||||
self._update_index_with_issues(issues, repo_name_for_index, upsert=upsert)
|
||||
else: # update index if needed
|
||||
pinecone_index = pinecone.Index(index_name=index_name)
|
||||
issues_to_update = []
|
||||
issues_paginated_list = repo_obj.get_issues(state='all')
|
||||
counter = 1
|
||||
for issue in issues_paginated_list:
|
||||
if issue.pull_request:
|
||||
continue
|
||||
issue_str, comments, number = self._process_issue(issue)
|
||||
issue_key = f"issue_{number}"
|
||||
id = issue_key + "." + "issue"
|
||||
res = pinecone_index.fetch([id]).to_dict()
|
||||
is_new_issue = True
|
||||
for vector in res["vectors"].values():
|
||||
if vector['metadata']['repo'] == repo_name_for_index:
|
||||
is_new_issue = False
|
||||
break
|
||||
if is_new_issue:
|
||||
counter += 1
|
||||
issues_to_update.append(issue)
|
||||
else:
|
||||
break
|
||||
|
||||
if issues_to_update:
|
||||
get_logger().info(f'Updating index with {counter} new issues...')
|
||||
self._update_index_with_issues(issues_to_update, repo_name_for_index, upsert=True)
|
||||
else:
|
||||
get_logger().info('No new issues to update')
|
||||
|
||||
elif get_settings().pr_similar_issue.vectordb == "lancedb":
|
||||
import lancedb # import lancedb only if needed
|
||||
self.db = lancedb.connect(get_settings().lancedb.uri)
|
||||
self.table = None
|
||||
|
||||
run_from_scratch = False
|
||||
if run_from_scratch: # for debugging
|
||||
if index_name in self.db.table_names():
|
||||
get_logger().info('Removing Table...')
|
||||
self.db.drop_table(index_name)
|
||||
get_logger().info('Done')
|
||||
|
||||
ingest = True
|
||||
if index_name not in self.db.table_names():
|
||||
run_from_scratch = True
|
||||
ingest = False
|
||||
else:
|
||||
if get_settings().pr_similar_issue.force_update_dataset:
|
||||
ingest = True
|
||||
else:
|
||||
self.table = self.db[index_name]
|
||||
res = self.table.search().limit(len(self.table)).where(f"id='example_issue_{repo_name_for_index}'").to_list()
|
||||
get_logger().info("result: ", res)
|
||||
if res[0].get("vector"):
|
||||
ingest = False
|
||||
|
||||
if run_from_scratch or ingest: # indexing the entire repo
|
||||
get_logger().info('Indexing the entire repo...')
|
||||
|
||||
get_logger().info('Getting issues...')
|
||||
issues = list(repo_obj.get_issues(state='all'))
|
||||
get_logger().info('Done')
|
||||
|
||||
self._update_table_with_issues(issues, repo_name_for_index, ingest=ingest)
|
||||
else: # update table if needed
|
||||
issues_to_update = []
|
||||
issues_paginated_list = repo_obj.get_issues(state='all')
|
||||
counter = 1
|
||||
for issue in issues_paginated_list:
|
||||
if issue.pull_request:
|
||||
continue
|
||||
issue_str, comments, number = self._process_issue(issue)
|
||||
issue_key = f"issue_{number}"
|
||||
issue_id = issue_key + "." + "issue"
|
||||
res = self.table.search().limit(len(self.table)).where(f"id='{issue_id}'").to_list()
|
||||
is_new_issue = True
|
||||
for r in res:
|
||||
if r['metadata']['repo'] == repo_name_for_index:
|
||||
is_new_issue = False
|
||||
break
|
||||
if is_new_issue:
|
||||
counter += 1
|
||||
issues_to_update.append(issue)
|
||||
else:
|
||||
break
|
||||
|
||||
if issues_to_update:
|
||||
get_logger().info(f'Updating index with {counter} new issues...')
|
||||
self._update_table_with_issues(issues_to_update, repo_name_for_index, ingest=True)
|
||||
else:
|
||||
get_logger().info('No new issues to update')
|
||||
|
||||
|
||||
async def run(self):
|
||||
get_logger().info('Getting issue...')
|
||||
repo_name, original_issue_number = self.git_provider._parse_issue_url(self.issue_url.split('=')[-1])
|
||||
issue_main = self.git_provider.repo_obj.get_issue(original_issue_number)
|
||||
issue_str, comments, number = self._process_issue(issue_main)
|
||||
openai.api_key = get_settings().openai.key
|
||||
get_logger().info('Done')
|
||||
|
||||
get_logger().info('Querying...')
|
||||
res = openai.Embedding.create(input=[issue_str], engine=MODEL)
|
||||
embeds = [record['embedding'] for record in res['data']]
|
||||
|
||||
relevant_issues_number_list = []
|
||||
relevant_comment_number_list = []
|
||||
score_list = []
|
||||
|
||||
if get_settings().pr_similar_issue.vectordb == "pinecone":
|
||||
pinecone_index = pinecone.Index(index_name=self.index_name)
|
||||
res = pinecone_index.query(embeds[0],
|
||||
top_k=5,
|
||||
filter={"repo": self.repo_name_for_index},
|
||||
include_metadata=True).to_dict()
|
||||
|
||||
for r in res['matches']:
|
||||
# skip example issue
|
||||
if 'example_issue_' in r["id"]:
|
||||
continue
|
||||
|
||||
try:
|
||||
issue_number = int(r["id"].split('.')[0].split('_')[-1])
|
||||
except:
|
||||
get_logger().debug(f"Failed to parse issue number from {r['id']}")
|
||||
continue
|
||||
|
||||
if original_issue_number == issue_number:
|
||||
continue
|
||||
if issue_number not in relevant_issues_number_list:
|
||||
relevant_issues_number_list.append(issue_number)
|
||||
if 'comment' in r["id"]:
|
||||
relevant_comment_number_list.append(int(r["id"].split('.')[1].split('_')[-1]))
|
||||
else:
|
||||
relevant_comment_number_list.append(-1)
|
||||
score_list.append(str("{:.2f}".format(r['score'])))
|
||||
get_logger().info('Done')
|
||||
|
||||
elif get_settings().pr_similar_issue.vectordb == "lancedb":
|
||||
res = self.table.search(embeds[0]).where(f"metadata.repo='{self.repo_name_for_index}'", prefilter=True).to_list()
|
||||
|
||||
for r in res:
|
||||
# skip example issue
|
||||
if 'example_issue_' in r["id"]:
|
||||
continue
|
||||
|
||||
try:
|
||||
issue_number = int(r["id"].split('.')[0].split('_')[-1])
|
||||
except:
|
||||
get_logger().debug(f"Failed to parse issue number from {r['id']}")
|
||||
continue
|
||||
|
||||
if original_issue_number == issue_number:
|
||||
continue
|
||||
if issue_number not in relevant_issues_number_list:
|
||||
relevant_issues_number_list.append(issue_number)
|
||||
|
||||
if 'comment' in r["id"]:
|
||||
relevant_comment_number_list.append(int(r["id"].split('.')[1].split('_')[-1]))
|
||||
else:
|
||||
relevant_comment_number_list.append(-1)
|
||||
score_list.append(str("{:.2f}".format(1-r['_distance'])))
|
||||
get_logger().info('Done')
|
||||
|
||||
get_logger().info('Publishing response...')
|
||||
similar_issues_str = "### Similar Issues\n___\n\n"
|
||||
|
||||
for i, issue_number_similar in enumerate(relevant_issues_number_list):
|
||||
issue = self.git_provider.repo_obj.get_issue(issue_number_similar)
|
||||
title = issue.title
|
||||
url = issue.html_url
|
||||
if relevant_comment_number_list[i] != -1:
|
||||
url = list(issue.get_comments())[relevant_comment_number_list[i]].html_url
|
||||
similar_issues_str += f"{i + 1}. **[{title}]({url})** (score={score_list[i]})\n\n"
|
||||
if get_settings().config.publish_output:
|
||||
response = issue_main.create_comment(similar_issues_str)
|
||||
get_logger().info(similar_issues_str)
|
||||
get_logger().info('Done')
|
||||
|
||||
def _process_issue(self, issue):
|
||||
header = issue.title
|
||||
body = issue.body
|
||||
number = issue.number
|
||||
if get_settings().pr_similar_issue.skip_comments:
|
||||
comments = []
|
||||
else:
|
||||
comments = list(issue.get_comments())
|
||||
issue_str = f"Issue Header: \"{header}\"\n\nIssue Body:\n{body}"
|
||||
return issue_str, comments, number
|
||||
|
||||
def _update_index_with_issues(self, issues_list, repo_name_for_index, upsert=False):
|
||||
get_logger().info('Processing issues...')
|
||||
corpus = Corpus()
|
||||
example_issue_record = Record(
|
||||
id=f"example_issue_{repo_name_for_index}",
|
||||
text="example_issue",
|
||||
metadata=Metadata(repo=repo_name_for_index)
|
||||
)
|
||||
corpus.append(example_issue_record)
|
||||
|
||||
counter = 0
|
||||
for issue in issues_list:
|
||||
if issue.pull_request:
|
||||
continue
|
||||
|
||||
counter += 1
|
||||
if counter % 100 == 0:
|
||||
get_logger().info(f"Scanned {counter} issues")
|
||||
if counter >= self.max_issues_to_scan:
|
||||
get_logger().info(f"Scanned {self.max_issues_to_scan} issues, stopping")
|
||||
break
|
||||
|
||||
issue_str, comments, number = self._process_issue(issue)
|
||||
issue_key = f"issue_{number}"
|
||||
username = issue.user.login
|
||||
created_at = str(issue.created_at)
|
||||
if len(issue_str) < 8000 or \
|
||||
self.token_handler.count_tokens(issue_str) < get_max_tokens(MODEL): # fast reject first
|
||||
issue_record = Record(
|
||||
id=issue_key + "." + "issue",
|
||||
text=issue_str,
|
||||
metadata=Metadata(repo=repo_name_for_index,
|
||||
username=username,
|
||||
created_at=created_at,
|
||||
level=IssueLevel.ISSUE)
|
||||
)
|
||||
corpus.append(issue_record)
|
||||
if comments:
|
||||
for j, comment in enumerate(comments):
|
||||
comment_body = comment.body
|
||||
num_words_comment = len(comment_body.split())
|
||||
if num_words_comment < 10 or not isinstance(comment_body, str):
|
||||
continue
|
||||
|
||||
if len(comment_body) < 8000 or \
|
||||
self.token_handler.count_tokens(comment_body) < MAX_TOKENS[MODEL]:
|
||||
comment_record = Record(
|
||||
id=issue_key + ".comment_" + str(j + 1),
|
||||
text=comment_body,
|
||||
metadata=Metadata(repo=repo_name_for_index,
|
||||
username=username, # use issue username for all comments
|
||||
created_at=created_at,
|
||||
level=IssueLevel.COMMENT)
|
||||
)
|
||||
corpus.append(comment_record)
|
||||
df = pd.DataFrame(corpus.dict()["documents"])
|
||||
get_logger().info('Done')
|
||||
|
||||
get_logger().info('Embedding...')
|
||||
openai.api_key = get_settings().openai.key
|
||||
list_to_encode = list(df["text"].values)
|
||||
try:
|
||||
res = openai.Embedding.create(input=list_to_encode, engine=MODEL)
|
||||
embeds = [record['embedding'] for record in res['data']]
|
||||
except:
|
||||
embeds = []
|
||||
get_logger().error('Failed to embed entire list, embedding one by one...')
|
||||
for i, text in enumerate(list_to_encode):
|
||||
try:
|
||||
res = openai.Embedding.create(input=[text], engine=MODEL)
|
||||
embeds.append(res['data'][0]['embedding'])
|
||||
except:
|
||||
embeds.append([0] * 1536)
|
||||
df["values"] = embeds
|
||||
meta = DatasetMetadata.empty()
|
||||
meta.dense_model.dimension = len(embeds[0])
|
||||
ds = Dataset.from_pandas(df, meta)
|
||||
get_logger().info('Done')
|
||||
|
||||
api_key = get_settings().pinecone.api_key
|
||||
environment = get_settings().pinecone.environment
|
||||
if not upsert:
|
||||
get_logger().info('Creating index from scratch...')
|
||||
ds.to_pinecone_index(self.index_name, api_key=api_key, environment=environment)
|
||||
time.sleep(15) # wait for pinecone to finalize indexing before querying
|
||||
else:
|
||||
get_logger().info('Upserting index...')
|
||||
namespace = ""
|
||||
batch_size: int = 100
|
||||
concurrency: int = 10
|
||||
pinecone.init(api_key=api_key, environment=environment)
|
||||
ds._upsert_to_index(self.index_name, namespace, batch_size, concurrency)
|
||||
time.sleep(5) # wait for pinecone to finalize upserting before querying
|
||||
get_logger().info('Done')
|
||||
|
||||
def _update_table_with_issues(self, issues_list, repo_name_for_index, ingest=False):
|
||||
get_logger().info('Processing issues...')
|
||||
|
||||
corpus = Corpus()
|
||||
example_issue_record = Record(
|
||||
id=f"example_issue_{repo_name_for_index}",
|
||||
text="example_issue",
|
||||
metadata=Metadata(repo=repo_name_for_index)
|
||||
)
|
||||
corpus.append(example_issue_record)
|
||||
|
||||
counter = 0
|
||||
for issue in issues_list:
|
||||
if issue.pull_request:
|
||||
continue
|
||||
|
||||
counter += 1
|
||||
if counter % 100 == 0:
|
||||
get_logger().info(f"Scanned {counter} issues")
|
||||
if counter >= self.max_issues_to_scan:
|
||||
get_logger().info(f"Scanned {self.max_issues_to_scan} issues, stopping")
|
||||
break
|
||||
|
||||
issue_str, comments, number = self._process_issue(issue)
|
||||
issue_key = f"issue_{number}"
|
||||
username = issue.user.login
|
||||
created_at = str(issue.created_at)
|
||||
if len(issue_str) < 8000 or \
|
||||
self.token_handler.count_tokens(issue_str) < get_max_tokens(MODEL): # fast reject first
|
||||
issue_record = Record(
|
||||
id=issue_key + "." + "issue",
|
||||
text=issue_str,
|
||||
metadata=Metadata(repo=repo_name_for_index,
|
||||
username=username,
|
||||
created_at=created_at,
|
||||
level=IssueLevel.ISSUE)
|
||||
)
|
||||
corpus.append(issue_record)
|
||||
if comments:
|
||||
for j, comment in enumerate(comments):
|
||||
comment_body = comment.body
|
||||
num_words_comment = len(comment_body.split())
|
||||
if num_words_comment < 10 or not isinstance(comment_body, str):
|
||||
continue
|
||||
|
||||
if len(comment_body) < 8000 or \
|
||||
self.token_handler.count_tokens(comment_body) < MAX_TOKENS[MODEL]:
|
||||
comment_record = Record(
|
||||
id=issue_key + ".comment_" + str(j + 1),
|
||||
text=comment_body,
|
||||
metadata=Metadata(repo=repo_name_for_index,
|
||||
username=username, # use issue username for all comments
|
||||
created_at=created_at,
|
||||
level=IssueLevel.COMMENT)
|
||||
)
|
||||
corpus.append(comment_record)
|
||||
df = pd.DataFrame(corpus.dict()["documents"])
|
||||
get_logger().info('Done')
|
||||
|
||||
get_logger().info('Embedding...')
|
||||
openai.api_key = get_settings().openai.key
|
||||
list_to_encode = list(df["text"].values)
|
||||
try:
|
||||
res = openai.Embedding.create(input=list_to_encode, engine=MODEL)
|
||||
embeds = [record['embedding'] for record in res['data']]
|
||||
except:
|
||||
embeds = []
|
||||
get_logger().error('Failed to embed entire list, embedding one by one...')
|
||||
for i, text in enumerate(list_to_encode):
|
||||
try:
|
||||
res = openai.Embedding.create(input=[text], engine=MODEL)
|
||||
embeds.append(res['data'][0]['embedding'])
|
||||
except:
|
||||
embeds.append([0] * 1536)
|
||||
df["vector"] = embeds
|
||||
get_logger().info('Done')
|
||||
|
||||
if not ingest:
|
||||
get_logger().info('Creating table from scratch...')
|
||||
self.table = self.db.create_table(self.index_name, data=df, mode="overwrite")
|
||||
time.sleep(15)
|
||||
else:
|
||||
get_logger().info('Ingesting in Table...')
|
||||
if self.index_name not in self.db.table_names():
|
||||
self.table.add(df)
|
||||
else:
|
||||
get_logger().info(f"Table {self.index_name} doesn't exists!")
|
||||
time.sleep(5)
|
||||
get_logger().info('Done')
|
||||
|
||||
|
||||
class IssueLevel(str, Enum):
|
||||
ISSUE = "issue"
|
||||
COMMENT = "comment"
|
||||
|
||||
|
||||
class Metadata(BaseModel):
|
||||
repo: str
|
||||
username: str = Field(default="@codium")
|
||||
created_at: str = Field(default="01-01-1970 00:00:00.00000")
|
||||
level: IssueLevel = Field(default=IssueLevel.ISSUE)
|
||||
|
||||
class Config:
|
||||
use_enum_values = True
|
||||
|
||||
|
||||
class Record(BaseModel):
|
||||
id: str
|
||||
text: str
|
||||
metadata: Metadata
|
||||
|
||||
|
||||
class Corpus(BaseModel):
|
||||
documents: List[Record] = Field(default=[])
|
||||
|
||||
def append(self, r: Record):
|
||||
self.documents.append(r)
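To make the data model above concrete, here is a minimal, hypothetical sketch of assembling one record; the class names are the ones defined above, while the issue values are invented:

```python
# Hypothetical usage of the Metadata/Record/Corpus models defined above.
corpus = Corpus()
corpus.append(Record(
    id="issue_42.issue",  # follows the "<issue_key>.issue" id scheme used above
    text='Issue Header: "Example bug"\n\nIssue Body:\nSteps to reproduce...',
    metadata=Metadata(repo="my-org-my-repo",           # invented repo slug
                      username="octocat",              # invented user
                      created_at="2023-01-01 00:00:00.00000",
                      level=IssueLevel.ISSUE),
))
df = pd.DataFrame(corpus.dict()["documents"])           # same shape that is embedded and indexed above
```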
|
@ -1,25 +1,23 @@
|
||||
import copy
|
||||
import logging
|
||||
from datetime import date
|
||||
from functools import partial
|
||||
from time import sleep
|
||||
from typing import Tuple
|
||||
|
||||
from jinja2 import Environment, StrictUndefined
|
||||
|
||||
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
|
||||
from pr_agent.algo.ai_handlers.litellm_ai_handler import LiteLLMAIHandler
|
||||
from pr_agent.algo.ai_handler import AiHandler
|
||||
from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models
|
||||
from pr_agent.algo.token_handler import TokenHandler
|
||||
from pr_agent.config_loader import get_settings
|
||||
from pr_agent.git_providers import get_git_provider
|
||||
from pr_agent.git_providers import GithubProvider, get_git_provider
|
||||
from pr_agent.git_providers.git_provider import get_main_pr_language
|
||||
from pr_agent.log import get_logger
|
||||
|
||||
CHANGELOG_LINES = 50
|
||||
|
||||
|
||||
class PRUpdateChangelog:
|
||||
def __init__(self, pr_url: str, cli_mode=False, args=None, ai_handler: partial[BaseAiHandler,] = LiteLLMAIHandler):
|
||||
def __init__(self, pr_url: str, cli_mode=False, args=None):
|
||||
|
||||
self.git_provider = get_git_provider()(pr_url)
|
||||
self.main_language = get_main_pr_language(
|
||||
@ -27,7 +25,7 @@ class PRUpdateChangelog:
|
||||
)
|
||||
self.commit_changelog = get_settings().pr_update_changelog.push_changelog_changes
|
||||
self._get_changlog_file() # self.changelog_file_str
|
||||
self.ai_handler = ai_handler()
|
||||
self.ai_handler = AiHandler()
|
||||
self.patches_diff = None
|
||||
self.prediction = None
|
||||
self.cli_mode = cli_mode
|
||||
@ -48,28 +46,28 @@ class PRUpdateChangelog:
|
||||
get_settings().pr_update_changelog_prompt.user)
|
||||
|
||||
async def run(self):
|
||||
# assert type(self.git_provider) == GithubProvider, "Currently only Github is supported"
|
||||
assert type(self.git_provider) == GithubProvider, "Currently only Github is supported"
|
||||
|
||||
get_logger().info('Updating the changelog...')
|
||||
logging.info('Updating the changelog...')
|
||||
if get_settings().config.publish_output:
|
||||
self.git_provider.publish_comment("Preparing changelog updates...", is_temporary=True)
|
||||
await retry_with_fallback_models(self._prepare_prediction)
|
||||
get_logger().info('Preparing PR changelog updates...')
|
||||
logging.info('Preparing PR changelog updates...')
|
||||
new_file_content, answer = self._prepare_changelog_update()
|
||||
if get_settings().config.publish_output:
|
||||
self.git_provider.remove_initial_comment()
|
||||
get_logger().info('Publishing changelog updates...')
|
||||
logging.info('Publishing changelog updates...')
|
||||
if self.commit_changelog:
|
||||
get_logger().info('Pushing PR changelog updates to repo...')
|
||||
logging.info('Pushing PR changelog updates to repo...')
|
||||
self._push_changelog_update(new_file_content, answer)
|
||||
else:
|
||||
get_logger().info('Publishing PR changelog as comment...')
|
||||
logging.info('Publishing PR changelog as comment...')
|
||||
self.git_provider.publish_comment(f"**Changelog updates:**\n\n{answer}")
|
||||
|
||||
async def _prepare_prediction(self, model: str):
|
||||
get_logger().info('Getting PR diff...')
|
||||
logging.info('Getting PR diff...')
|
||||
self.patches_diff = get_pr_diff(self.git_provider, self.token_handler, model)
|
||||
get_logger().info('Getting AI prediction...')
|
||||
logging.info('Getting AI prediction...')
|
||||
self.prediction = await self._get_prediction(model)
|
||||
|
||||
async def _get_prediction(self, model: str):
|
||||
@ -79,8 +77,8 @@ class PRUpdateChangelog:
|
||||
system_prompt = environment.from_string(get_settings().pr_update_changelog_prompt.system).render(variables)
|
||||
user_prompt = environment.from_string(get_settings().pr_update_changelog_prompt.user).render(variables)
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"\nSystem prompt:\n{system_prompt}")
|
||||
get_logger().info(f"\nUser prompt:\n{user_prompt}")
|
||||
logging.info(f"\nSystem prompt:\n{system_prompt}")
|
||||
logging.info(f"\nUser prompt:\n{user_prompt}")
|
||||
response, finish_reason = await self.ai_handler.chat_completion(model=model, temperature=0.2,
|
||||
system=system_prompt, user=user_prompt)
|
||||
|
||||
@ -102,7 +100,7 @@ class PRUpdateChangelog:
|
||||
"\n>'/update_changelog --pr_update_changelog.push_changelog_changes=true'\n"
|
||||
|
||||
if get_settings().config.verbosity_level >= 2:
|
||||
get_logger().info(f"answer:\n{answer}")
|
||||
logging.info(f"answer:\n{answer}")
|
||||
|
||||
return new_file_content, answer
|
||||
|
||||
@ -151,7 +149,7 @@ Example:
|
||||
except Exception:
|
||||
self.changelog_file_str = ""
|
||||
if self.commit_changelog:
|
||||
get_logger().info("No CHANGELOG.md file found in the repository. Creating one...")
|
||||
logging.info("No CHANGELOG.md file found in the repository. Creating one...")
|
||||
changelog_file = self.git_provider.repo_obj.create_file(path="CHANGELOG.md",
|
||||
message='add CHANGELOG.md',
|
||||
content="",
|
||||
|
@ -17,7 +17,7 @@ maintainers = [
|
||||
]
description = "CodiumAI PR-Agent is an open-source tool to automatically analyze a pull request and provide several types of feedback"
readme = "README.md"
requires-python = ">=3.10"
requires-python = ">=3.9"
keywords = ["ai", "tool", "developer", "review", "agent"]
license = {file = "LICENSE", name = "Apache 2.0 License"}
classifiers = [
|
||||
|
@ -1,27 +1,21 @@
|
||||
aiohttp==3.9.1
atlassian-python-api==3.41.4
azure-devops==7.1.0b3
boto3==1.33.6
dynaconf==3.2.4
dynaconf==3.1.12
fastapi==0.99.0
GitPython==3.1.32
google-cloud-aiplatform==1.35.0
google-cloud-storage==2.10.0
Jinja2==3.1.2
litellm==0.12.5
loguru==0.7.2
msrest==0.7.1
openai==0.27.8
pinecone-client
pinecone-datasets @ git+https://github.com/mrT23/pinecone-datasets.git@main
lancedb==0.3.4
pytest==7.4.0
PyGithub==1.59.*
PyYAML==6.0.1
python-gitlab==3.15.0
retry==0.9.2
starlette-context==0.3.6
tiktoken==0.5.2
ujson==5.8.0
openai==0.27.8
Jinja2==3.1.2
tiktoken==0.4.0
uvicorn==0.22.0
# langchain==0.0.349 # uncomment this to support language LangChainOpenAIHandler
python-gitlab==3.15.0
pytest~=7.4.0
aiohttp~=3.8.4
atlassian-python-api==3.39.0
GitPython~=3.1.32
PyYAML==6.0
starlette-context==0.3.6
litellm~=0.1.504
boto3~=1.28.25
google-cloud-storage==2.10.0
ujson==5.8.0
azure-devops==7.1.0b3
msrest==0.7.1
|
@ -1,4 +1,3 @@
from pr_agent.git_providers import BitbucketServerProvider
from pr_agent.git_providers.bitbucket_provider import BitbucketProvider

@ -9,10 +8,3 @@ class TestBitbucketProvider:
assert workspace_slug == "WORKSPACE_XYZ"
assert repo_slug == "MY_TEST_REPO"
assert pr_number == 321

def test_bitbucket_server_pr_url(self):
url = "https://git.onpreminstance.com/projects/AAA/repos/my-repo/pull-requests/1"
workspace_slug, repo_slug, pr_number = BitbucketServerProvider._parse_pr_url(url)
assert workspace_slug == "AAA"
assert repo_slug == "my-repo"
assert pr_number == 1
@ -1,19 +0,0 @@

# Generated by CodiumAI

import pytest

from pr_agent.algo.utils import clip_tokens


class TestClipTokens:
def test_clip(self):
text = "line1\nline2\nline3\nline4\nline5\nline6"
max_tokens = 25
result = clip_tokens(text, max_tokens)
assert result == text

max_tokens = 10
result = clip_tokens(text, max_tokens)
expected_results = 'line1\nline2\nline3\nli...(truncated)'
assert result == expected_results
@ -110,7 +110,7 @@ class TestCodeCommitProvider:
# Mock the response from the AWS client for get_pull_request method
api.boto_client.get_pull_request.return_value = {
"pullRequest": {
"pullRequestId": "321",
"pullRequestId": "3",
"title": "My PR",
"description": "My PR description",
"pullRequestTargets": [
@ -1,8 +1,6 @@
import pytest
from unittest.mock import patch
from pr_agent.git_providers.codecommit_provider import CodeCommitFile
from pr_agent.git_providers.codecommit_provider import CodeCommitProvider
from pr_agent.git_providers.codecommit_provider import PullRequestCCMimic
from pr_agent.git_providers.git_provider import EDIT_TYPE


@ -27,21 +25,6 @@ class TestCodeCommitFile:

class TestCodeCommitProvider:
def test_get_title(self):
# Test that the get_title() function returns the PR title
with patch.object(CodeCommitProvider, "__init__", lambda x, y: None):
provider = CodeCommitProvider(None)
provider.pr = PullRequestCCMimic("My Test PR Title", [])
assert provider.get_title() == "My Test PR Title"

def test_get_pr_id(self):
# Test that the get_pr_id() function returns the correct ID
with patch.object(CodeCommitProvider, "__init__", lambda x, y: None):
provider = CodeCommitProvider(None)
provider.repo_name = "my_test_repo"
provider.pr_num = 321
assert provider.get_pr_id() == "my_test_repo/321"

def test_parse_pr_url(self):
# Test that the _parse_pr_url() function can extract the repo name and PR number from a CodeCommit URL
url = "https://us-east-1.console.aws.amazon.com/codesuite/codecommit/repositories/my_test_repo/pull-requests/321"
@ -71,7 +71,7 @@ class TestConvertToMarkdown:
- 📌 **Type of PR:** Test type\n\
- 🧪 **Relevant tests added:** no\n\
- ✨ **Focused PR:** Yes\n\
- **General PR suggestions:** general suggestion...\n\n\n<details><summary> <strong>🤖 Code feedback:</strong></summary> - **Code example:**\n - **Before:**\n ```\n Code before\n ```\n - **After:**\n ```\n Code after\n ```\n\n - **Code example:**\n - **Before:**\n ```\n Code before 2\n ```\n - **After:**\n ```\n Code after 2\n ```\n\n</details>\
- **General PR suggestions:** general suggestion...\n\n\n- **<details><summary> 🤖 Code feedback:**</summary>\n\n - **Code example:**\n - **Before:**\n ```\n Code before\n ```\n - **After:**\n ```\n Code after\n ```\n\n - **Code example:**\n - **Before:**\n ```\n Code before 2\n ```\n - **After:**\n ```\n Code after 2\n ```\n\n</details>\
"""
assert convert_to_markdown(input_data).strip() == expected_output.strip()
@ -1,80 +0,0 @@
import pytest
from pr_agent.algo.file_filter import filter_ignored
from pr_agent.config_loader import global_settings

class TestIgnoreFilter:
def test_no_ignores(self):
"""
Test no files are ignored when no patterns are specified.
"""
files = [
type('', (object,), {'filename': 'file1.py'})(),
type('', (object,), {'filename': 'file2.java'})(),
type('', (object,), {'filename': 'file3.cpp'})(),
type('', (object,), {'filename': 'file4.py'})(),
type('', (object,), {'filename': 'file5.py'})()
]
assert filter_ignored(files) == files, "Expected all files to be returned when no ignore patterns are given."

def test_glob_ignores(self, monkeypatch):
"""
Test files are ignored when glob patterns are specified.
"""
monkeypatch.setattr(global_settings.ignore, 'glob', ['*.py'])

files = [
type('', (object,), {'filename': 'file1.py'})(),
type('', (object,), {'filename': 'file2.java'})(),
type('', (object,), {'filename': 'file3.cpp'})(),
type('', (object,), {'filename': 'file4.py'})(),
type('', (object,), {'filename': 'file5.py'})()
]
expected = [
files[1],
files[2]
]

filtered_files = filter_ignored(files)
assert filtered_files == expected, f"Expected {[file.filename for file in expected]}, but got {[file.filename for file in filtered_files]}."

def test_regex_ignores(self, monkeypatch):
"""
Test files are ignored when regex patterns are specified.
"""
monkeypatch.setattr(global_settings.ignore, 'regex', ['^file[2-4]\..*$'])

files = [
type('', (object,), {'filename': 'file1.py'})(),
type('', (object,), {'filename': 'file2.java'})(),
type('', (object,), {'filename': 'file3.cpp'})(),
type('', (object,), {'filename': 'file4.py'})(),
type('', (object,), {'filename': 'file5.py'})()
]
expected = [
files[0],
files[4]
]

filtered_files = filter_ignored(files)
assert filtered_files == expected, f"Expected {[file.filename for file in expected]}, but got {[file.filename for file in filtered_files]}."

def test_invalid_regex(self, monkeypatch):
"""
Test invalid patterns are quietly ignored.
"""
monkeypatch.setattr(global_settings.ignore, 'regex', ['(((||', '^file[2-4]\..*$'])

files = [
type('', (object,), {'filename': 'file1.py'})(),
type('', (object,), {'filename': 'file2.java'})(),
type('', (object,), {'filename': 'file3.cpp'})(),
type('', (object,), {'filename': 'file4.py'})(),
type('', (object,), {'filename': 'file5.py'})()
]
expected = [
files[0],
files[4]
]

filtered_files = filter_ignored(files)
assert filtered_files == expected, f"Expected {[file.filename for file in expected]}, but got {[file.filename for file in filtered_files]}."
@ -43,6 +43,18 @@ class TestHandlePatchDeletions:
assert handle_patch_deletions(patch, original_file_content_str, new_file_content_str,
file_name) == patch.rstrip()

# Tests that handle_patch_deletions logs a message when verbosity_level is greater than 0
def test_handle_patch_deletions_happy_path_verbosity_level_greater_than_0(self, caplog):
patch = '--- a/file.py\n+++ b/file.py\n@@ -1,2 +1,2 @@\n-foo\n-bar\n+baz\n'
original_file_content_str = 'foo\nbar\n'
new_file_content_str = ''
file_name = 'file.py'
get_settings().config.verbosity_level = 1

with caplog.at_level(logging.INFO):
handle_patch_deletions(patch, original_file_content_str, new_file_content_str, file_name)
assert any("Processing file" in message for message in caplog.messages)

# Tests that handle_patch_deletions returns 'File was deleted' when new_file_content_str is empty
def test_handle_patch_deletions_edge_case_new_file_content_empty(self):
patch = '--- a/file.py\n+++ b/file.py\n@@ -1,2 +1,2 @@\n-foo\n-bar\n'
@ -61,7 +61,7 @@ class TestSortFilesByMainLanguages:
type('', (object,), {'filename': 'file1.py'})(),
type('', (object,), {'filename': 'file2.java'})()
]
expected_output = [{'language': 'Other', 'files': files}]
expected_output = [{'language': 'Other', 'files': []}]
assert sort_files_by_main_languages(languages, files) == expected_output

# Tests that function handles empty files list
@ -2,9 +2,6 @@
# Generated by CodiumAI

import pytest
import yaml
from yaml.scanner import ScannerError

from pr_agent.algo.utils import load_yaml


@ -15,7 +12,7 @@ class TestLoadYaml:
expected_output = {'name': 'John Smith', 'age': 35}
assert load_yaml(yaml_str) == expected_output

def test_load_invalid_yaml1(self):
def test_load_complicated_yaml(self):
yaml_str = \
'''\
PR Analysis:
@ -29,23 +26,7 @@ PR Feedback:
Code feedback:
- relevant file: pr_agent/settings/pr_description_prompts.toml
suggestion: Consider using a more descriptive variable name than 'user' for the command prompt. A more descriptive name would make the code more readable and maintainable. [medium]
relevant line: user="""PR Info: aaa
relevant line: 'user="""PR Info:'
Security concerns: No'''
with pytest.raises(ScannerError):
yaml.safe_load(yaml_str)

expected_output = {'PR Analysis': {'Main theme': 'Enhancing the `/describe` command prompt by adding title and description', 'Type of PR': 'Enhancement', 'Relevant tests added': False, 'Focused PR': 'Yes, the PR is focused on enhancing the `/describe` command prompt.'}, 'PR Feedback': {'General suggestions': 'The PR seems to be well-structured and focused on a specific enhancement. However, it would be beneficial to add tests to ensure the new feature works as expected.', 'Code feedback': [{'relevant file': 'pr_agent/settings/pr_description_prompts.toml', 'suggestion': "Consider using a more descriptive variable name than 'user' for the command prompt. A more descriptive name would make the code more readable and maintainable. [medium]", 'relevant line': 'user="""PR Info: aaa'}], 'Security concerns': False}}
expected_output = {'PR Analysis': {'Main theme': 'Enhancing the `/describe` command prompt by adding title and description', 'Type of PR': 'Enhancement', 'Relevant tests added': False, 'Focused PR': 'Yes, the PR is focused on enhancing the `/describe` command prompt.'}, 'PR Feedback': {'General suggestions': 'The PR seems to be well-structured and focused on a specific enhancement. However, it would be beneficial to add tests to ensure the new feature works as expected.', 'Code feedback': [{'relevant file': 'pr_agent/settings/pr_description_prompts.toml', 'suggestion': "Consider using a more descriptive variable name than 'user' for the command prompt. A more descriptive name would make the code more readable and maintainable. [medium]", 'relevant line': 'user="""PR Info:'}], 'Security concerns': False}}
assert load_yaml(yaml_str) == expected_output

def test_load_invalid_yaml2(self):
yaml_str = '''\
- relevant file: src/app.py:
suggestion content: The print statement is outside inside the if __name__ ==: \
'''
with pytest.raises(ScannerError):
yaml.safe_load(yaml_str)

expected_output =[{'relevant file': 'src/app.py:',
'suggestion content': 'The print statement is outside inside the if __name__ ==: '}]
assert load_yaml(yaml_str) == expected_output
@ -61,7 +61,7 @@ class TestParseCodeSuggestion:
'before': 'Before 1',
'after': 'After 1'
}
expected_output = ' **suggestion:** Suggestion 1 \n **description:** Description 1 \n **before:** Before 1 \n **after:** After 1 \n\n' # noqa: E501
expected_output = " **suggestion:** Suggestion 1\n **description:** Description 1\n **before:** Before 1\n **after:** After 1\n\n" # noqa: E501
assert parse_code_suggestion(code_suggestions) == expected_output

# Tests that function returns correct output when input dictionary has 'code example' key
@ -74,5 +74,5 @@ class TestParseCodeSuggestion:
'after': 'After 2'
}
}
expected_output = ' **suggestion:** Suggestion 2 \n **description:** Description 2 \n - **code example:**\n - **before:**\n ```\n Before 2\n ```\n - **after:**\n ```\n After 2\n ```\n\n' # noqa: E501
expected_output = " **suggestion:** Suggestion 2\n **description:** Description 2\n - **code example:**\n - **before:**\n ```\n Before 2\n ```\n - **after:**\n ```\n After 2\n ```\n\n" # noqa: E501
assert parse_code_suggestion(code_suggestions) == expected_output
Some files were not shown because too many files have changed in this diff