Mirror of https://github.com/qodo-ai/pr-agent.git (synced 2025-07-08 06:40:39 +08:00)
Compare commits
61 Commits
revert-374 ... ok/json_lo
SHA1 (author and date columns were lost in extraction):
9a585de364
c27dc436c4
7374243d0b
5c568bc0c5
22c196cb3b
d2cc856cfc
d772213cfc
638db96311
4dffabf397
6f2bbd3baa
9e41f3780c
f53ec1d0cc
f7666cb59a
a7cb59ca8b
ca0ea77415
0cf27e5fee
f3bdbfc103
20e3acdd86
f965b09571
b8583c998d
726594600b
c77cc1d6ed
b6c9e01a59
ec673214c8
16777a5334
1a89c7eadf
07617eab5a
f9e4c2b098
fa24413201
b6cabda586
abbce60f18
5daaaf2c1d
e8f207691e
b0dce4ceae
fc494296d7
67b4069540
e6defcc846
096fcbbc17
eb7add1c77
1b6fb3ea53
c57b70f1d4
a2c3db463a
193da1c356
5bc26880b3
21a1cc970e
954727ad67
1314898cbf
ff04d459d7
88ca501c0c
fe284a8f91
d41fe0cf79
3673924fe9
d5c098de73
9f5c0daa8e
bce2262d4e
e6f1e0520a
d8de89ae33
428c38e3d9
7ffdf8de37
83e670c5df
c324d88be3
Dockerfile.bitbucket_pipeline (deleted file)
@@ -1,18 +0,0 @@
FROM python:3.10 as base

ENV OPENAI_API_KEY=${OPENAI_API_KEY} \
    BITBUCKET_BEARER_TOKEN=${BITBUCKET_BEARER_TOKEN} \
    BITBUCKET_PR_ID=${BITBUCKET_PR_ID} \
    BITBUCKET_REPO_SLUG=${BITBUCKET_REPO_SLUG} \
    BITBUCKET_WORKSPACE=${BITBUCKET_WORKSPACE}

WORKDIR /app
ADD pyproject.toml .
ADD requirements.txt .
RUN pip install . && rm pyproject.toml requirements.txt
ENV PYTHONPATH=/app
ADD pr_agent pr_agent
ADD bitbucket_pipeline/entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
INSTALL.md (143 changed lines)
@@ -4,66 +4,69 @@

To get started with PR-Agent quickly, you first need to acquire two tokens:

1. An OpenAI key from [here](https://platform.openai.com/), with access to GPT-4.
2. A GitHub personal access token (classic) with the repo scope.
2. A GitHub/GitLab/BitBucket personal access token (classic) with the repo scope.

There are several ways to use PR-Agent:

- [Method 1: Use Docker image (no installation required)](INSTALL.md#method-1-use-docker-image-no-installation-required)
- [Method 2: Run from source](INSTALL.md#method-2-run-from-source)
- [Method 3: Run as a GitHub Action](INSTALL.md#method-3-run-as-a-github-action)
- [Method 4: Run as a polling server](INSTALL.md#method-4-run-as-a-polling-server)
- [Method 5: Run as a GitHub App](INSTALL.md#method-5-run-as-a-github-app)
- [Method 6: Deploy as a Lambda Function](INSTALL.md#method-6---deploy-as-a-lambda-function)
- [Method 7: AWS CodeCommit](INSTALL.md#method-7---aws-codecommit-setup)
- [Method 8: Run a GitLab webhook server](INSTALL.md#method-8---run-a-gitlab-webhook-server)
- [Method 9: Run as a Bitbucket Pipeline](INSTALL.md#method-9-run-as-a-bitbucket-pipeline)
**Locally**
- [Using Docker image (no installation required)](INSTALL.md#use-docker-image-no-installation-required)
- [Run from source](INSTALL.md#run-from-source)

**GitHub specific methods**
- [Run as a GitHub Action](INSTALL.md#run-as-a-github-action)
- [Run as a polling server](INSTALL.md#run-as-a-polling-server)
- [Run as a GitHub App](INSTALL.md#run-as-a-github-app)
- [Deploy as a Lambda Function](INSTALL.md#deploy-as-a-lambda-function)
- [AWS CodeCommit](INSTALL.md#aws-codecommit-setup)

**GitLab specific methods**
- [Run a GitLab webhook server](INSTALL.md#run-a-gitlab-webhook-server)

**BitBucket specific methods**
- [Run as a Bitbucket Pipeline](INSTALL.md#run-as-a-bitbucket-pipeline)
- [Run on a hosted app](INSTALL.md#run-on-a-hosted-bitbucket-app)

---
### Method 1: Use Docker image (no installation required)
### Use Docker image (no installation required)

To request a review for a PR, or ask a question about a PR, you can run directly from the Docker image. Here's how:

1. To request a review for a PR, run the following command:

For GitHub:
```
docker run --rm -it -e OPENAI.KEY=<your key> -e GITHUB.USER_TOKEN=<your token> codiumai/pr-agent --pr_url <pr_url> review
docker run --rm -it -e OPENAI.KEY=<your key> -e GITHUB.USER_TOKEN=<your token> codiumai/pr-agent:latest --pr_url <pr_url> review
```
For GitLab:
```
docker run --rm -it -e OPENAI.KEY=<your key> -e CONFIG.GIT_PROVIDER=gitlab -e GITLAB.PERSONAL_ACCESS_TOKEN=<your token> codiumai/pr-agent --pr_url <pr_url> review
docker run --rm -it -e OPENAI.KEY=<your key> -e CONFIG.GIT_PROVIDER=gitlab -e GITLAB.PERSONAL_ACCESS_TOKEN=<your token> codiumai/pr-agent:latest --pr_url <pr_url> review
```
For BitBucket:
```
docker run --rm -it -e CONFIG.GIT_PROVIDER=bitbucket -e OPENAI.KEY=$OPENAI_API_KEY -e BITBUCKET.BEARER_TOKEN=$BITBUCKET_BEARER_TOKEN codiumai/pr-agent:latest --pr_url=<pr_url> review
```

For other git providers, update CONFIG.GIT_PROVIDER accordingly, and check the `pr_agent/settings/.secrets_template.toml` file for the expected environment variable names and values.
2. To ask a question about a PR, run the following command:

Similarly, to ask a question about a PR, run the following command:
```
docker run --rm -it -e OPENAI.KEY=<your key> -e GITHUB.USER_TOKEN=<your token> codiumai/pr-agent --pr_url <pr_url> ask "<your question>"
```
Note: If you want to ensure you're running a specific version of the Docker image, consider using the image's digest.
The digest is a unique identifier for a specific version of an image. You can pull and run an image using its digest by referencing it like so: repository@sha256:digest. Always ensure you're using the correct and trusted digest for your operations.

1. To request a review for a PR using a specific digest, run the following command:

A list of the relevant tools can be found in the [tools guide](./docs/TOOLS_GUIDE.md).

Note: If you want to ensure you're running a specific version of the Docker image, consider using the image's digest:
```bash
docker run --rm -it -e OPENAI.KEY=<your key> -e GITHUB.USER_TOKEN=<your token> codiumai/pr-agent@sha256:71b5ee15df59c745d352d84752d01561ba64b6d51327f97d46152f0c58a5f678 --pr_url <pr_url> review
```

2. To ask a question about a PR using the same digest, run the following command:
```bash
docker run --rm -it -e OPENAI.KEY=<your key> -e GITHUB.USER_TOKEN=<your token> codiumai/pr-agent@sha256:71b5ee15df59c745d352d84752d01561ba64b6d51327f97d46152f0c58a5f678 --pr_url <pr_url> ask "<your question>"
```
In addition, you can run a [specific released version](./RELEASE_NOTES.md) of pr-agent, for example:
```
codiumai/pr-agent@v0.8
```

Possible questions you can ask include:

- What is the main theme of this PR?
- Is the PR ready for merge?
- What are the main changes in this PR?
- Should this PR be split into smaller parts?
- Can you compose a rhymed song about this PR?

---
### Method 2: Run from source
### Run from source

1. Clone this repository:

@@ -93,11 +96,14 @@ python3 -m pr_agent.cli --pr_url <pr_url> review
python3 -m pr_agent.cli --pr_url <pr_url> ask <your question>
python3 -m pr_agent.cli --pr_url <pr_url> describe
python3 -m pr_agent.cli --pr_url <pr_url> improve
python3 -m pr_agent.cli --pr_url <pr_url> add_docs
python3 -m pr_agent.cli --issue_url <issue_url> similar_issue
...
```
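Each command also accepts inline configuration overrides of the form `--section.key=value` (this is the convention shown in the CLI's own help text); a minimal sketch, with `<pr_url>` as a placeholder:
```
python3 -m pr_agent.cli --pr_url <pr_url> review --pr_reviewer.extra_instructions="focus on the tests"
```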

---

### Method 3: Run as a GitHub Action
### Run as a GitHub Action

You can use our pre-built GitHub Action Docker image to run PR-Agent as a GitHub Action.

@@ -167,10 +173,11 @@ When you open your next PR, you should see a comment from `github-actions` bot w

---

### Method 4: Run as a polling server
Request reviews by tagging your GitHub user on a PR
### Run as a polling server
Request reviews by tagging your GitHub user on a PR

Follow [steps 1-3](#run-as-a-github-action) of the GitHub Action setup.

Follow steps 1-3 of method 2.
Run the following command to start the server:

```
@@ -179,7 +186,7 @@ python pr_agent/servers/github_polling.py

---

### Method 5: Run as a GitHub App
### Run as a GitHub App
Allowing you to automate the review process on your private or public repositories.

1. Create a GitHub App from the [GitHub Developer Portal](https://docs.github.com/en/developers/apps/creating-a-github-app).

@@ -260,13 +267,13 @@ docker push codiumai/pr-agent:github_app # Push to your Docker repository

9. Install the app by navigating to the "Install App" tab and selecting your desired repositories.

> **Note:** When running PR-Agent from a GitHub App, the default configuration file (configuration.toml) will be loaded.<br>
> However, you can override the default tool parameters by uploading a local configuration file<br>
> For more information please check out [CONFIGURATION.md](Usage.md#working-from-github-app-pre-built-repo)
> However, you can override the default tool parameters by uploading a local configuration file `.pr_agent.toml`<br>
> For more information please check out the [USAGE GUIDE](./Usage.md#working-with-github-app)

---
### Method 6 - Deploy as a Lambda Function
### Deploy as a Lambda Function

1. Follow steps 1-5 of [Method 5](#method-5-run-as-a-github-app).
1. Follow steps 1-5 of [Method 5](#run-as-a-github-app).
2. Build a docker image that can be used as a lambda function
```shell
docker buildx build --platform=linux/amd64 . -t codiumai/pr-agent:serverless -f docker/Dockerfile.lambda
@@ -278,12 +285,12 @@ docker push codiumai/pr-agent:github_app # Push to your Docker repository
```
4. Create a lambda function that uses the uploaded image. Set the lambda timeout to be at least 3m.
5. Configure the lambda function to have a Function URL.
6. Go back to steps 8-9 of [Method 5](#method-5-run-as-a-github-app) with the function url as your Webhook URL.
6. Go back to steps 8-9 of [Method 5](#run-as-a-github-app) with the function url as your Webhook URL.
The Webhook URL would look like `https://<LAMBDA_FUNCTION_URL>/api/v1/github_webhooks`
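Steps 4-5 can also be scripted with the standard AWS CLI; a hedged sketch (the function name, role ARN, account ID, and region below are placeholders, not part of the original guide):
```shell
# Create the function from the container image, with a 3-minute timeout (sketch)
aws lambda create-function \
  --function-name pr-agent \
  --package-type Image \
  --code ImageUri=<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/codiumai/pr-agent:serverless \
  --role <LAMBDA_EXECUTION_ROLE_ARN> \
  --timeout 180 --memory-size 1024

# Expose it via a Function URL to use as the GitHub webhook endpoint
aws lambda create-function-url-config --function-name pr-agent --auth-type NONE
```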

---

### Method 7 - AWS CodeCommit Setup
### AWS CodeCommit Setup

Not all features have been added to CodeCommit yet. As of right now, CodeCommit has been implemented to run the pr-agent CLI on the command line, using AWS credentials stored in environment variables. (More features will be added in the future.) The following is a set of instructions to have pr-agent do a review of your CodeCommit pull request from the command line:

@@ -353,7 +360,7 @@ PYTHONPATH="/PATH/TO/PROJECTS/pr-agent" python pr_agent/cli.py \

---

### Method 8 - Run a GitLab webhook server
### Run a GitLab webhook server

1. From the GitLab workspace or group, create an access token. Enable the "api" scope only.
2. Generate a random secret for your app, and save it for later. For example, you can use:

@@ -372,62 +379,36 @@ In the "Trigger" section, check the 'comments' and 'merge request events' boxes.
### Method 9: Run as a Bitbucket Pipeline
### Run as a Bitbucket Pipeline

You can use our pre-built Bitbucket-Pipeline docker image to run as a Bitbucket Pipeline.
You can use the Bitbucket Pipeline system to run PR-Agent on every pull request open or update.

1. Add the following file to your repository: `bitbucket_pipelines.yml`

```yaml
pipelines:
  pull-requests:
    '**':
      - step:
          name: PR Agent Pipeline
          caches:
            - pip
          image: python:3.8
          name: PR Agent Review
          image: python:3.10
          services:
            - docker
          script:
            - git clone https://github.com/Codium-ai/pr-agent.git
            - cd pr-agent
            - docker build -t bitbucket_runner:latest -f Dockerfile.bitbucket_pipeline .
            - docker run -e OPENAI_API_KEY=$OPENAI_API_KEY -e BITBUCKET_BEARER_TOKEN=$BITBUCKET_BEARER_TOKEN -e BITBUCKET_PR_ID=$BITBUCKET_PR_ID -e BITBUCKET_REPO_SLUG=$BITBUCKET_REPO_SLUG -e BITBUCKET_WORKSPACE=$BITBUCKET_WORKSPACE bitbucket_runner:latest
            - docker run -e CONFIG.GIT_PROVIDER=bitbucket -e OPENAI.KEY=$OPENAI_API_KEY -e BITBUCKET.BEARER_TOKEN=$BITBUCKET_BEARER_TOKEN codiumai/pr-agent:latest --pr_url=https://bitbucket.org/$BITBUCKET_WORKSPACE/$BITBUCKET_REPO_SLUG/pull-requests/$BITBUCKET_PR_ID review
```

2. Add the following secret to your repository under Repository settings > Pipelines > Repository variables.
2. Add the following secure variables to your repository under Repository settings > Pipelines > Repository variables.

OPENAI_API_KEY: <your key>
BITBUCKET_BEARER_TOKEN: <your token>
3. To get BITBUCKET_BEARER_TOKEN, follow this step-by-step tutorial:

i) Insert your workspace name instead of {workspace_name} and go to the following link in order to create an OAuth consumer:

https://bitbucket.org/{workspace_name}/workspace/settings/api

Set the callback URL to http://localhost:8976 (it doesn't need to be a real server), and select permissions: repository -> read.

ii) Use the consumer's Key as {client_id} and open the following URL in the browser:

https://bitbucket.org/site/oauth2/authorize?client_id={client_id}&response_type=code

iii) After you press "Grant access" in the browser, it will redirect you to:

http://localhost:8976?code=<CODE>

iv) Use the code from the previous step, the consumer's Key as {client_id}, and the consumer's Secret as {client_secret}:

curl -X POST -u "{client_id}:{client_secret}" \
  https://bitbucket.org/site/oauth2/access_token \
  -d grant_type=authorization_code \
  -d code={code}
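The token endpoint replies with JSON; a hedged sketch of the typical OAuth2 response shape (all values are placeholders):

{"access_token": "<BITBUCKET_BEARER_TOKEN>", "token_type": "bearer", "expires_in": 7200, "refresh_token": "<...>"}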

You can get a Bitbucket token for your repository by following Repository Settings -> Security -> Access Tokens.

After completing these steps, you just need to place this access token in the repository variables.
### Run on a hosted Bitbucket app

Please contact <support@codium.ai> if you're interested in a hosted BitBucket app solution that provides full functionality including PR reviews and comment handling. It's based on the [bitbucket_app.py](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/git_providers/bitbucket_provider.py) implementation.
README.md (18 changed lines)
@@ -28,16 +28,16 @@ CodiumAI `PR-Agent` is an open-source tool aiming to help developers review pull
\
‣ **Update Changelog ([`/update_changelog`](./docs/UPDATE_CHANGELOG.md))**: Automatically updating the CHANGELOG.md file with the PR changes.
\
‣ **Find similar issue ([`/similar_issue`](./docs/SIMILAR_ISSUE.md))**: Automatically retrieves and presents similar issues
‣ **Find Similar Issue ([`/similar_issue`](./docs/SIMILAR_ISSUE.md))**: Automatically retrieves and presents similar issues
\
‣ **Add Documentation ([`/add_docs`](./docs/ADD_DOCUMENTATION.md))**: Automatically adds documentation to un-documented functions/classes in the PR.

See the [Usage Guide](./Usage.md) for instructions on how to run the different tools from _CLI_, _online usage_, Or by _automatically triggering_ them when a new PR is opened.
See the [Installation Guide](./INSTALL.md) for instructions on how to install and run the tool on different platforms.

See the [Usage Guide](./Usage.md) for instructions on how to run the different tools from _CLI_, _online usage_, or by _automatically triggering_ them when a new PR is opened.

See the [Tools Guide](./docs/TOOLS_GUIDE.md) for a detailed description of the different tools.

See the [Release notes](./RELEASE_NOTES.md) for updates on the latest changes.

<h3>Example results:</h3>
</div>
<h4><a href="https://github.com/Codium-ai/pr-agent/pull/229#issuecomment-1687561986">/describe:</a></h4>
@@ -204,6 +204,9 @@ Here are some advantages of PR-Agent:
- [x] Documentation (is the PR properly documented)
- [ ] ...

See the [Release notes](./RELEASE_NOTES.md) for updates on the latest changes.

## Similar Projects

- [CodiumAI - Meaningful tests for busy devs](https://github.com/Codium-ai/codiumai-vscode-release) (although various capabilities are much more advanced in the CodiumAI IDE plugins)
@@ -211,7 +214,12 @@ Here are some advantages of PR-Agent:
- [openai-pr-reviewer](https://github.com/coderabbitai/openai-pr-reviewer)
- [CodeReview BOT](https://github.com/anc95/ChatGPT-CodeReview)
- [AI-Maintainer](https://github.com/merwanehamadi/AI-Maintainer)

## Data Privacy

If you self-host PR-Agent, e.g. via CLI running on your computer, with your OpenAI API key, it is between you and OpenAI. You can read their API data privacy policy here:
https://openai.com/enterprise-privacy

## Links

[](https://discord.gg/kG35uSHDBc)
Usage.md (2 changed lines)
@@ -12,7 +12,7 @@

### Introduction

See the **[installation guide](/INSTALL.md)** for instructions on how to set up PR-Agent. After installation, there are three basic ways to invoke CodiumAI PR-Agent:
After [installation](/INSTALL.md), there are three basic ways to invoke CodiumAI PR-Agent:
1. Locally running a CLI command
2. Online usage - by [commenting](https://github.com/Codium-ai/pr-agent/pull/229#issuecomment-1695021901) on a PR
3. Enabling PR-Agent tools to run automatically when a new PR is opened
bitbucket_pipeline/entrypoint.sh (deleted file)
@@ -1,2 +0,0 @@
#!/bin/bash
python /app/pr_agent/servers/bitbucket_pipeline_runner.py
docs/DESCRIBE.md
@@ -27,18 +27,14 @@ Under the section 'pr_description', the [configuration file](./../pr_agent/setti

- `extra_instructions`: Optional extra instructions to the tool. For example: "focus on the changes in the file X. Ignore change in ...".

#### Markers template
### Markers template

Markers enable you to easily integrate the user's content with auto-generated content, using a template-like mechanism.

- `use_description_markers`: if set to true, the tool will use the markers template. It replaces every marker of the form `pr_agent:marker_name` with the relevant content. Default is false.

For example, if the PR's original description was:
```
User content...

## PR Type:
pr_agent:pr_type

## PR Description:
pr_agent:summary
@@ -46,6 +42,21 @@ pr_agent:summary
## PR Walkthrough:
pr_agent:walkthrough
```
The marker `pr_agent:pr_type` will be replaced with the PR type, `pr_agent:summary` will be replaced with the PR summary, and `pr_agent:walkthrough` will be replaced with the PR walkthrough.
The marker `pr_agent:summary` will be replaced with the PR summary, and `pr_agent:walkthrough` will be replaced with the PR walkthrough.

##### Example:
```
env:
  pr_description.use_description_markers: 'true'
```

<kbd><img src=./../pics/describe_markers_before.png width="768"></kbd>

==>

<kbd><img src=./../pics/describe_markers_after.png width="768"></kbd>

##### Configuration params (a CLI sketch follows the list):

- `use_description_markers`: if set to true, the tool will use the markers template. It replaces every marker of the form `pr_agent:marker_name` with the relevant content. Default is false.
- `include_generated_by_header`: if set to true, the tool will add a dedicated header: 'Generated by PR Agent at ...' to any automatic content. Default is true.
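These params can presumably also be set per-run from the CLI, using the inline `--section.key=value` override convention from the Usage guide; a hedged sketch, with `<pr_url>` as a placeholder:
```
python3 -m pr_agent.cli --pr_url <pr_url> describe --pr_description.use_description_markers=true
```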
docs/GENERATE_CUSTOM_LABELS.md (new file, 35 lines)
@@ -0,0 +1,35 @@
# Generate Custom Labels
The `generate_labels` tool scans the PR code changes, and given a list of labels and their descriptions, it automatically suggests labels that match the PR code changes.

It can be invoked manually by commenting on any PR:
```
/generate_labels
```
For example:

If we wish to detect changes to SQL queries in a given PR, we can add the following custom label along with its description:

<kbd><img src=./../pics/custom_labels_list.png width="768"></kbd>
When running the `generate_labels` tool on a PR that includes changes in SQL queries, it will automatically suggest the custom label:
<kbd><img src=./../pics/custom_label_published.png width="768"></kbd>

### Configuration options
To enable custom labels, you need to add the following configuration to the [custom_labels file](./../pr_agent/settings/custom_labels.toml):
- Change `enable_custom_labels` to True: This will turn off the default labels and enable the custom labels provided in the custom_labels.toml file.
- Add the custom labels to the custom_labels.toml file. It should be formatted as follows:
```
[custom_labels."Custom Label Name"]
description = "Description of when AI should suggest this label"
```
- You can modify the list to include all the custom labels you wish to use in your repository.

#### Github Action
To use the `generate_labels` tool with a Github Action:

- Add the following to the `env` section in `.github/workflows/pr_agent.yml` (a fuller workflow sketch follows this list)
- A comma-separated list of custom labels and their descriptions
- The number of labels and descriptions should be the same and in the same order (empty descriptions are allowed):
```
CUSTOM_LABELS: "label1, label2, ..."
CUSTOM_LABELS_DESCRIPTION: "label1 description, label2 description, ..."
```
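For context, a minimal workflow around that `env` section might look as follows. This is a hedged sketch, not the project's official action config; the `uses` ref, secret names, and label values are assumptions:
```yaml
# .github/workflows/pr_agent.yml (sketch)
on:
  pull_request:
  issue_comment:
jobs:
  pr_agent_job:
    runs-on: ubuntu-latest
    steps:
      - uses: Codium-ai/pr-agent@main   # assumed action ref
        env:
          OPENAI_KEY: ${{ secrets.OPENAI_KEY }}        # assumed secret name
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          CUSTOM_LABELS: "Bug fix, SQL query"
          CUSTOM_LABELS_DESCRIPTION: "Fixes a bug, Changes an SQL query"
```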

docs/IMPROVE.md
@@ -31,4 +31,15 @@ Under the section 'pr_code_suggestions', the [configuration file](./../pr_agent/
- `num_code_suggestions_per_chunk`: number of code suggestions provided by the 'improve' tool, per chunk. Default is 8.
- `rank_extended_suggestions`: if set to true, the tool will rank the suggestions, based on importance. Default is true.
- `max_number_of_calls`: maximum number of chunks. Default is 5.
- `final_clip_factor`: factor to remove suggestions with low confidence. Default is 0.9.

#### A note on code suggestions quality

- With the current level of AI for code (GPT-4), mistakes can happen. Not all the suggestions will be perfect, and a user should not accept all of them automatically.

- Suggestions are not meant to be [simplistic](./../pr_agent/settings/pr_code_suggestions_prompts.toml#L34). Instead, they aim to give deep feedback and raise questions, ideas and thoughts to the user, who can then use his judgment, experience, and understanding of the code base.

- It is recommended to use the 'extra_instructions' field to guide the model to suggestions that are more relevant to the specific needs of the project.

- Best quality will be obtained by using 'improve --extended' mode.
docs/REVIEW.md
@@ -43,4 +43,15 @@ The tool will first ask the author questions about the PR, and will guide the re

<kbd><img src=./../pics/reflection_questions.png width="768"></kbd>
<kbd><img src=./../pics/reflection_answers.png width="768"></kbd>
<kbd><img src=./../pics/reflection_insights.png width="768"></kbd>

#### A note on code suggestions quality

- With the current level of AI for code (GPT-4), mistakes can happen. Not all the suggestions will be perfect, and a user should not accept all of them automatically.

- Suggestions are not meant to be [simplistic](./../pr_agent/settings/pr_reviewer_prompts.toml#L29). Instead, they aim to give deep feedback and raise questions, ideas and thoughts to the user, who can then use his judgment, experience, and understanding of the code base.

- It is recommended to use the 'extra_instructions' field to guide the model to suggestions that are more relevant to the specific needs of the project.

- Unlike the 'review' feature, which does a lot of things, the ['improve --extended'](./IMPROVE.md) feature is dedicated only to suggestions, and usually gives better results.
BIN pics/custom_label_published.png (new file, 253 KiB; binary not shown)
BIN pics/custom_labels_list.png (new file, 84 KiB; binary not shown)
BIN pics/describe_markers_after.png (new file, 224 KiB; binary not shown)
BIN pics/describe_markers_before.png (new file, 30 KiB; binary not shown)
pr_agent/agent/pr_agent.py
@@ -1,20 +1,18 @@
import logging
import os
import shlex
import tempfile

from pr_agent.algo.utils import update_settings_from_args
from pr_agent.config_loader import get_settings
from pr_agent.git_providers import get_git_provider
from pr_agent.git_providers.utils import apply_repo_settings
from pr_agent.tools.pr_add_docs import PRAddDocs
from pr_agent.tools.pr_code_suggestions import PRCodeSuggestions
from pr_agent.tools.pr_config import PRConfig
from pr_agent.tools.pr_description import PRDescription
from pr_agent.tools.pr_generate_labels import PRGenerateLabels
from pr_agent.tools.pr_information_from_user import PRInformationFromUser
from pr_agent.tools.pr_similar_issue import PRSimilarIssue
from pr_agent.tools.pr_questions import PRQuestions
from pr_agent.tools.pr_reviewer import PRReviewer
from pr_agent.tools.pr_similar_issue import PRSimilarIssue
from pr_agent.tools.pr_update_changelog import PRUpdateChangelog
from pr_agent.tools.pr_config import PRConfig

command2class = {
    "auto_review": PRReviewer,
@@ -34,6 +32,7 @@ command2class = {
    "settings": PRConfig,
    "similar_issue": PRSimilarIssue,
    "add_docs": PRAddDocs,
    "generate_labels": PRGenerateLabels,
}

commands = list(command2class.keys())
@@ -44,22 +43,7 @@ class PRAgent:

    async def handle_request(self, pr_url, request, notify=None) -> bool:
        # First, apply repo specific settings if exists
        if get_settings().config.use_repo_settings_file:
            repo_settings_file = None
            try:
                git_provider = get_git_provider()(pr_url)
                repo_settings = git_provider.get_repo_settings()
                if repo_settings:
                    repo_settings_file = None
                    fd, repo_settings_file = tempfile.mkstemp(suffix='.toml')
                    os.write(fd, repo_settings)
                    get_settings().load_file(repo_settings_file)
            finally:
                if repo_settings_file:
                    try:
                        os.remove(repo_settings_file)
                    except Exception as e:
                        logging.error(f"Failed to remove temporary settings file {repo_settings_file}", e)
        apply_repo_settings(pr_url)
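The inline settings-loading block above is replaced by a single call; presumably the removed logic now lives in `pr_agent.git_providers.utils.apply_repo_settings`. A hedged sketch of what that helper likely contains, mirroring the deleted code rather than the verified implementation:
```python
# pr_agent/git_providers/utils.py (sketch, inferred from the removed block above)
import os
import tempfile

from pr_agent.config_loader import get_settings
from pr_agent.git_providers import get_git_provider
from pr_agent.log import get_logger


def apply_repo_settings(pr_url: str) -> None:
    """Load a repo-level .pr_agent.toml, if configured, into the global settings."""
    if not get_settings().config.use_repo_settings_file:
        return
    repo_settings_file = None
    try:
        repo_settings = get_git_provider()(pr_url).get_repo_settings()
        if repo_settings:
            # write the fetched TOML to a temp file so Dynaconf can load it
            fd, repo_settings_file = tempfile.mkstemp(suffix=".toml")
            os.write(fd, repo_settings)
            get_settings().load_file(repo_settings_file)
    finally:
        if repo_settings_file:
            try:
                os.remove(repo_settings_file)
            except Exception as e:
                get_logger().error(f"Failed to remove temporary settings file {repo_settings_file}", e)
```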

        # Then, apply user specific settings if exists
        request = request.replace("'", "\\'")
@@ -84,3 +68,4 @@ class PRAgent:
        else:
            return False
        return True
pr_agent/algo/ai_handler.py
@@ -1,4 +1,3 @@
import logging
import os

import litellm
@@ -7,6 +6,8 @@ from litellm import acompletion
from openai.error import APIError, RateLimitError, Timeout, TryAgain
from retry import retry
from pr_agent.config_loader import get_settings
from pr_agent.log import get_logger

OPENAI_RETRIES = 5
@@ -88,34 +89,34 @@ class AiHandler:
        try:
            deployment_id = self.deployment_id
            if get_settings().config.verbosity_level >= 2:
                logging.debug(
                get_logger().debug(
                    f"Generating completion with {model}"
                    f"{(' from deployment ' + deployment_id) if deployment_id else ''}"
                )
            if self.azure:
                model = 'azure/' + model
            messages = [{"role": "system", "content": system}, {"role": "user", "content": user}]
            response = await acompletion(
                model=model,
                deployment_id=deployment_id,
                messages=[
                    {"role": "system", "content": system},
                    {"role": "user", "content": user}
                ],
                messages=messages,
                temperature=temperature,
                force_timeout=get_settings().config.ai_timeout
            )
        except (APIError, Timeout, TryAgain) as e:
            logging.error("Error during OpenAI inference: ", e)
            get_logger().error("Error during OpenAI inference: ", e)
            raise
        except (RateLimitError) as e:
            logging.error("Rate limit error during OpenAI inference: ", e)
            get_logger().error("Rate limit error during OpenAI inference: ", e)
            raise
        except (Exception) as e:
            logging.error("Unknown error during OpenAI inference: ", e)
            get_logger().error("Unknown error during OpenAI inference: ", e)
            raise TryAgain from e
        if response is None or len(response["choices"]) == 0:
            raise TryAgain
        resp = response["choices"][0]['message']['content']
        finish_reason = response["choices"][0]["finish_reason"]
        print(resp, finish_reason)
        usage = response.get("usage")
        get_logger().info("AI response", response=resp, messages=messages, finish_reason=finish_reason,
                          model=model, usage=usage)
        return resp, finish_reason
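Given the `from retry import retry` import and `OPENAI_RETRIES = 5` above, this completion method is presumably wrapped with the `retry` decorator so transient API failures are retried with backoff. A hedged sketch; the exact exception tuple and backoff arguments are assumptions, not taken from the diff:
```python
# Sketch: retrying chat_completion on transient OpenAI errors (arguments assumed)
@retry(exceptions=(APIError, Timeout, TryAgain, RateLimitError),
       tries=OPENAI_RETRIES, delay=2, backoff=2, jitter=(1, 3))
async def chat_completion(self, model: str, temperature: float, system: str, user: str):
    ...
```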
pr_agent/algo/git_patch_processing.py
@@ -1,8 +1,9 @@
from __future__ import annotations
import logging

import re

from pr_agent.config_loader import get_settings
from pr_agent.log import get_logger


def extend_patch(original_file_str, patch_str, num_lines) -> str:
@@ -63,7 +64,7 @@ def extend_patch(original_file_str, patch_str, num_lines) -> str:
            extended_patch_lines.append(line)
    except Exception as e:
        if get_settings().config.verbosity_level >= 2:
            logging.error(f"Failed to extend patch: {e}")
            get_logger().error(f"Failed to extend patch: {e}")
        return patch_str

    # finish previous hunk
@@ -134,14 +135,14 @@ def handle_patch_deletions(patch: str, original_file_content_str: str,
    if not new_file_content_str:
        # logic for handling deleted files - don't show patch, just show that the file was deleted
        if get_settings().config.verbosity_level > 0:
            logging.info(f"Processing file: {file_name}, minimizing deletion file")
            get_logger().info(f"Processing file: {file_name}, minimizing deletion file")
        patch = None  # file was deleted
    else:
        patch_lines = patch.splitlines()
        patch_new = omit_deletion_hunks(patch_lines)
        if patch != patch_new:
            if get_settings().config.verbosity_level > 0:
                logging.info(f"Processing file: {file_name}, hunks were deleted")
                get_logger().info(f"Processing file: {file_name}, hunks were deleted")
            patch = patch_new
    return patch
pr_agent/algo/pr_processing.py
@@ -1,7 +1,6 @@
from __future__ import annotations

import difflib
import logging
import re
import traceback
from typing import Any, Callable, List, Tuple
@@ -15,6 +14,7 @@ from pr_agent.algo.file_filter import filter_ignored
from pr_agent.algo.token_handler import TokenHandler, get_token_encoder
from pr_agent.config_loader import get_settings
from pr_agent.git_providers.git_provider import FilePatchInfo, GitProvider
from pr_agent.log import get_logger

DELETED_FILES_ = "Deleted files:\n"

@@ -51,7 +51,7 @@ def get_pr_diff(git_provider: GitProvider, token_handler: TokenHandler, model: s
    try:
        diff_files = git_provider.get_diff_files()
    except RateLimitExceededException as e:
        logging.error(f"Rate limit exceeded for git provider API. original message {e}")
        get_logger().error(f"Rate limit exceeded for git provider API. original message {e}")
        raise

    diff_files = filter_ignored(diff_files)
@@ -180,7 +180,7 @@ def pr_generate_compressed_diff(top_langs: list, token_handler: TokenHandler, mo

        # Hard Stop, no more tokens
        if total_tokens > MAX_TOKENS[model] - OUTPUT_BUFFER_TOKENS_HARD_THRESHOLD:
            logging.warning(f"File was fully skipped, no more tokens: {file.filename}.")
            get_logger().warning(f"File was fully skipped, no more tokens: {file.filename}.")
            continue

        # If the patch is too large, just show the file name
@@ -189,7 +189,7 @@ def pr_generate_compressed_diff(top_langs: list, token_handler: TokenHandler, mo
            # TODO: Option for alternative logic to remove hunks from the patch to reduce the number of tokens
            # until we meet the requirements
            if get_settings().config.verbosity_level >= 2:
                logging.warning(f"Patch too large, minimizing it, {file.filename}")
                get_logger().warning(f"Patch too large, minimizing it, {file.filename}")
            if not modified_files_list:
                total_tokens += token_handler.count_tokens(MORE_MODIFIED_FILES_)
            modified_files_list.append(file.filename)
@@ -204,7 +204,7 @@ def pr_generate_compressed_diff(top_langs: list, token_handler: TokenHandler, mo
        patches.append(patch_final)
        total_tokens += token_handler.count_tokens(patch_final)
        if get_settings().config.verbosity_level >= 2:
            logging.info(f"Tokens: {total_tokens}, last filename: {file.filename}")
            get_logger().info(f"Tokens: {total_tokens}, last filename: {file.filename}")

    return patches, modified_files_list, deleted_files_list

@@ -218,7 +218,7 @@ async def retry_with_fallback_models(f: Callable):
            get_settings().set("openai.deployment_id", deployment_id)
            return await f(model)
        except Exception as e:
            logging.warning(
            get_logger().warning(
                f"Failed to generate prediction with {model}"
                f"{(' from deployment ' + deployment_id) if deployment_id else ''}: "
                f"{traceback.format_exc()}"
@@ -340,7 +340,7 @@ def clip_tokens(text: str, max_tokens: int) -> str:
        clipped_text = text[:num_output_chars]
        return clipped_text
    except Exception as e:
        logging.warning(f"Failed to clip tokens: {e}")
        get_logger().warning(f"Failed to clip tokens: {e}")
        return text


@@ -367,7 +367,7 @@ def get_pr_multi_diffs(git_provider: GitProvider,
    try:
        diff_files = git_provider.get_diff_files()
    except RateLimitExceededException as e:
        logging.error(f"Rate limit exceeded for git provider API. original message {e}")
        get_logger().error(f"Rate limit exceeded for git provider API. original message {e}")
        raise

    diff_files = filter_ignored(diff_files)
@@ -387,7 +387,7 @@ def get_pr_multi_diffs(git_provider: GitProvider,
    for file in sorted_files:
        if call_number > max_calls:
            if get_settings().config.verbosity_level >= 2:
                logging.info(f"Reached max calls ({max_calls})")
                get_logger().info(f"Reached max calls ({max_calls})")
            break

        original_file_content_str = file.base_file
@@ -410,13 +410,13 @@ def get_pr_multi_diffs(git_provider: GitProvider,
            total_tokens = token_handler.prompt_tokens
            call_number += 1
            if get_settings().config.verbosity_level >= 2:
                logging.info(f"Call number: {call_number}")
                get_logger().info(f"Call number: {call_number}")

        if patch:
            patches.append(patch)
            total_tokens += new_patch_tokens
            if get_settings().config.verbosity_level >= 2:
                logging.info(f"Tokens: {total_tokens}, last filename: {file.filename}")
                get_logger().info(f"Tokens: {total_tokens}, last filename: {file.filename}")

    # Add the last chunk
    if patches:
pr_agent/algo/utils.py
@@ -2,7 +2,6 @@ from __future__ import annotations

import difflib
import json
import logging
import re
import textwrap
from datetime import datetime
@@ -11,6 +10,7 @@ from typing import Any, List
import yaml
from starlette_context import context
from pr_agent.config_loader import get_settings, global_settings
from pr_agent.log import get_logger


def get_setting(key: str) -> Any:
@@ -159,7 +159,7 @@ def try_fix_json(review, max_iter=10, code_suggestions=False):
        iter_count += 1

    if not valid_json:
        logging.error("Unable to decode JSON response from AI")
        get_logger().error("Unable to decode JSON response from AI")
        data = {}

    return data
@@ -230,7 +230,7 @@ def load_large_diff(filename, new_file_content_str: str, original_file_content_s
        diff = difflib.unified_diff(original_file_content_str.splitlines(keepends=True),
                                    new_file_content_str.splitlines(keepends=True))
        if get_settings().config.verbosity_level >= 2:
            logging.warning(f"File was modified, but no patch was found. Manually creating patch: {filename}.")
            get_logger().warning(f"File was modified, but no patch was found. Manually creating patch: {filename}.")
        patch = ''.join(diff)
    except Exception:
        pass
@@ -262,12 +262,12 @@ def update_settings_from_args(args: List[str]) -> List[str]:
            vals = arg.split('=', 1)
            if len(vals) != 2:
            if len(vals) > 2:  # --extended is a valid argument
                logging.error(f'Invalid argument format: {arg}')
                get_logger().error(f'Invalid argument format: {arg}')
                other_args.append(arg)
                continue
            key, value = _fix_key_value(*vals)
            get_settings().set(key, value)
            logging.info(f'Updated setting {key} to: "{value}"')
            get_logger().info(f'Updated setting {key} to: "{value}"')
        else:
            other_args.append(arg)
    return other_args
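To illustrate the contract of `update_settings_from_args` (settings-style `key=value` tokens are applied to the global settings, everything else is passed through), a hedged usage sketch; the exact argument tokens are illustrative:
```python
# Sketch: config tokens update settings; flags like --extended pass through
args = ['--pr_reviewer.extra_instructions="focus on tests"', "--extended"]
remaining = update_settings_from_args(args)
# remaining would be ["--extended"], with pr_reviewer.extra_instructions now set (sketch)
```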

@@ -279,7 +279,7 @@ def _fix_key_value(key: str, value: str):
    try:
        value = yaml.safe_load(value)
    except Exception as e:
        logging.error(f"Failed to parse YAML for config override {key}={value}", exc_info=e)
        get_logger().error(f"Failed to parse YAML for config override {key}={value}", exc_info=e)
    return key, value


@@ -288,7 +288,7 @@ def load_yaml(review_text: str) -> dict:
    try:
        data = yaml.safe_load(review_text)
    except Exception as e:
        logging.error(f"Failed to parse AI prediction: {e}")
        get_logger().error(f"Failed to parse AI prediction: {e}")
        data = try_fix_yaml(review_text)
    return data

@@ -299,8 +299,24 @@ def try_fix_yaml(review_text: str) -> dict:
        review_text_lines_tmp = '\n'.join(review_text_lines[:-i])
        try:
            data = yaml.load(review_text_lines_tmp, Loader=yaml.SafeLoader)
            logging.info(f"Successfully parsed AI prediction after removing {i} lines")
            get_logger().info(f"Successfully parsed AI prediction after removing {i} lines")
            break
        except:
            pass
    return data
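The loop above trims lines from the end of the model output until the remainder parses as YAML; a hedged illustration of the effect (the input string is invented for the example):
```python
# Sketch: a trailing code fence from the model breaks yaml.safe_load;
# try_fix_yaml drops trailing lines until parsing succeeds
broken = "pr_type: Bug fix\npr_summary: Fixes a crash\n```"
data = try_fix_yaml(broken)
# data would be roughly {'pr_type': 'Bug fix', 'pr_summary': 'Fixes a crash'} (sketch)
```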


def set_custom_labels(variables):
    labels = get_settings().custom_labels
    if not labels:
        # set default labels
        labels = ['Bug fix', 'Tests', 'Bug fix with tests', 'Refactoring', 'Enhancement', 'Documentation', 'Other']
        labels_list = "\n - ".join(labels) if labels else ""
        labels_list = f" - {labels_list}" if labels_list else ""
        variables["custom_labels"] = labels_list
        return
    final_labels = ""
    for k, v in labels.items():
        final_labels += f" - {k} ({v['description']})\n"
    variables["custom_labels"] = final_labels
    variables["custom_labels_examples"] = f" - {list(labels.keys())[0]}"

pr_agent/cli.py
@@ -1,11 +1,12 @@
import argparse
import asyncio
import logging
import os

from pr_agent.agent.pr_agent import PRAgent, commands
from pr_agent.config_loader import get_settings
from pr_agent.log import setup_logger

setup_logger()

def run(inargs=None):
    parser = argparse.ArgumentParser(description='AI based pull request analyzer', usage=
@@ -47,7 +48,6 @@ For example: 'python cli.py --pr_url=... review --pr_reviewer.extra_instructions
        parser.print_help()
        return

    logging.basicConfig(level=os.environ.get("LOGLEVEL", "INFO"))
    command = args.command.lower()
    get_settings().set("CONFIG.CLI_MODE", True)
    if args.issue_url:
pr_agent/config_loader.py
@@ -23,8 +23,10 @@ global_settings = Dynaconf(
        "settings/pr_sort_code_suggestions_prompts.toml",
        "settings/pr_information_from_user_prompts.toml",
        "settings/pr_update_changelog_prompts.toml",
        "settings/pr_custom_labels.toml",
        "settings/pr_add_docs.toml",
        "settings_prod/.secrets.toml"
        "settings_prod/.secrets.toml",
        "settings/custom_labels.toml"
    ]]
)
pr_agent/git_providers/azuredevops_provider.py
@@ -1,10 +1,11 @@
import json
import logging
from typing import Optional, Tuple
from urllib.parse import urlparse

import os

from ..log import get_logger

AZURE_DEVOPS_AVAILABLE = True
try:
    from msrest.authentication import BasicAuthentication
@@ -55,7 +56,7 @@ class AzureDevopsProvider:
                                                          path=".pr_agent.toml")
            return contents
        except Exception as e:
            logging.exception("get repo settings error")
            get_logger().exception("get repo settings error")
            return ""

    def get_files(self):
@@ -110,7 +111,7 @@ class AzureDevopsProvider:

                new_file_content_str = new_file_content_str.content
            except Exception as error:
                logging.error("Failed to retrieve new file content of %s at version %s. Error: %s", file, version, str(error))
                get_logger().error("Failed to retrieve new file content of %s at version %s. Error: %s", file, version, str(error))
                new_file_content_str = ""

            edit_type = EDIT_TYPE.MODIFIED
@@ -131,7 +132,7 @@ class AzureDevopsProvider:
                                                               include_content=True)
                original_file_content_str = original_file_content_str.content
            except Exception as error:
                logging.error("Failed to retrieve original file content of %s at version %s. Error: %s", file, version, str(error))
                get_logger().error("Failed to retrieve original file content of %s at version %s. Error: %s", file, version, str(error))
                original_file_content_str = ""

            patch = load_large_diff(file, new_file_content_str, original_file_content_str)
@@ -166,7 +167,7 @@ class AzureDevopsProvider:
                                                    pull_request_id=self.pr_num,
                                                    git_pull_request_to_update=updated_pr)
        except Exception as e:
            logging.exception(f"Could not update pull request {self.pr_num} description: {e}")
            get_logger().exception(f"Could not update pull request {self.pr_num} description: {e}")

    def remove_initial_comment(self):
        return ""  # not implemented yet
@@ -235,9 +236,6 @@ class AzureDevopsProvider:
    def _parse_pr_url(pr_url: str) -> Tuple[str, int]:
        parsed_url = urlparse(pr_url)

        if 'azure.com' not in parsed_url.netloc:
            raise ValueError("The provided URL is not a valid Azure DevOps URL")

        path_parts = parsed_url.path.strip('/').split('/')

        if len(path_parts) < 6 or path_parts[4] != 'pullrequest':
pr_agent/git_providers/bitbucket_provider.py
@@ -1,5 +1,4 @@
import json
import logging
from typing import Optional, Tuple
from urllib.parse import urlparse

@@ -7,8 +6,9 @@ import requests
from atlassian.bitbucket import Cloud
from starlette_context import context

from ..algo.pr_processing import clip_tokens, find_line_number_of_relevant_line_in_file
from ..algo.pr_processing import find_line_number_of_relevant_line_in_file
from ..config_loader import get_settings
from ..log import get_logger
from .git_provider import FilePatchInfo, GitProvider


@@ -61,14 +61,14 @@ class BitbucketProvider(GitProvider):

            if not relevant_lines_start or relevant_lines_start == -1:
                if get_settings().config.verbosity_level >= 2:
                    logging.exception(
                    get_logger().exception(
                        f"Failed to publish code suggestion, relevant_lines_start is {relevant_lines_start}"
                    )
                continue

            if relevant_lines_end < relevant_lines_start:
                if get_settings().config.verbosity_level >= 2:
                    logging.exception(
                    get_logger().exception(
                        f"Failed to publish code suggestion, "
                        f"relevant_lines_end is {relevant_lines_end} and "
                        f"relevant_lines_start is {relevant_lines_start}"
@@ -97,7 +97,7 @@ class BitbucketProvider(GitProvider):
            return True
        except Exception as e:
            if get_settings().config.verbosity_level >= 2:
                logging.error(f"Failed to publish code suggestion, error: {e}")
                get_logger().error(f"Failed to publish code suggestion, error: {e}")
            return False

    def is_supported(self, capability: str) -> bool:
@@ -144,7 +144,7 @@ class BitbucketProvider(GitProvider):
            for comment in self.temp_comments:
                self.pr.delete(f"comments/{comment}")
        except Exception as e:
            logging.exception(f"Failed to remove temp comments, error: {e}")
            get_logger().exception(f"Failed to remove temp comments, error: {e}")


    # function to create_inline_comment
@@ -152,7 +152,7 @@ class BitbucketProvider(GitProvider):
        position, absolute_position = find_line_number_of_relevant_line_in_file(self.get_diff_files(), relevant_file.strip('`'), relevant_line_in_file)
        if position == -1:
            if get_settings().config.verbosity_level >= 2:
                logging.info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
                get_logger().info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
            subject_type = "FILE"
        else:
            subject_type = "LINE"
pr_agent/git_providers/codecommit_provider.py
@@ -1,17 +1,16 @@
import logging
import os
import re
from collections import Counter
from typing import List, Optional, Tuple
from urllib.parse import urlparse

from ..algo.language_handler import is_valid_file, language_extension_map
from ..algo.pr_processing import clip_tokens
from ..algo.utils import load_large_diff
from ..config_loader import get_settings
from .git_provider import EDIT_TYPE, FilePatchInfo, GitProvider, IncrementalPR
from pr_agent.git_providers.codecommit_client import CodeCommitClient

from ..algo.language_handler import is_valid_file, language_extension_map
from ..algo.utils import load_large_diff
from .git_provider import EDIT_TYPE, FilePatchInfo, GitProvider
from ..log import get_logger


class PullRequestCCMimic:
    """
@@ -166,7 +165,7 @@ class CodeCommitProvider(GitProvider):

    def publish_comment(self, pr_comment: str, is_temporary: bool = False):
        if is_temporary:
            logging.info(pr_comment)
            get_logger().info(pr_comment)
            return

        pr_comment = CodeCommitProvider._remove_markdown_html(pr_comment)
@@ -188,12 +187,12 @@ class CodeCommitProvider(GitProvider):
        for suggestion in code_suggestions:
            # Verify that each suggestion has the required keys
            if not all(key in suggestion for key in ["body", "relevant_file", "relevant_lines_start"]):
                logging.warning(f"Skipping code suggestion #{counter}: Each suggestion must have 'body', 'relevant_file', 'relevant_lines_start' keys")
                get_logger().warning(f"Skipping code suggestion #{counter}: Each suggestion must have 'body', 'relevant_file', 'relevant_lines_start' keys")
                continue

            # Publish the code suggestion to CodeCommit
            try:
                logging.debug(f"Code Suggestion #{counter} in file: {suggestion['relevant_file']}: {suggestion['relevant_lines_start']}")
                get_logger().debug(f"Code Suggestion #{counter} in file: {suggestion['relevant_file']}: {suggestion['relevant_lines_start']}")
                self.codecommit_client.publish_comment(
                    repo_name=self.repo_name,
                    pr_number=self.pr_num,
@@ -296,11 +295,11 @@ class CodeCommitProvider(GitProvider):
        return self.codecommit_client.get_file(self.repo_name, settings_filename, self.pr.source_commit, optional=True)

    def add_eyes_reaction(self, issue_comment_id: int) -> Optional[int]:
        logging.info("CodeCommit provider does not support eyes reaction yet")
        get_logger().info("CodeCommit provider does not support eyes reaction yet")
        return True

    def remove_reaction(self, issue_comment_id: int, reaction_id: int) -> bool:
        logging.info("CodeCommit provider does not support removing reactions yet")
        get_logger().info("CodeCommit provider does not support removing reactions yet")
        return True

    @staticmethod
@@ -366,7 +365,7 @@ class CodeCommitProvider(GitProvider):
        # TODO: implement support for multiple targets in one CodeCommit PR
        # for now, we are only using the first target in the PR
        if len(response.targets) > 1:
            logging.warning(
            get_logger().warning(
                "Multiple targets in one PR is not supported for CodeCommit yet. Continuing, using the first target only..."
            )
pr_agent/git_providers/gerrit_provider.py
@@ -1,5 +1,4 @@
import json
import logging
import os
import pathlib
import shutil
@@ -7,18 +6,16 @@ import subprocess
import uuid
from collections import Counter, namedtuple
from pathlib import Path
from tempfile import mkdtemp, NamedTemporaryFile
from tempfile import NamedTemporaryFile, mkdtemp

import requests
import urllib3.util
from git import Repo

from pr_agent.config_loader import get_settings
from pr_agent.git_providers.git_provider import GitProvider, FilePatchInfo, \
    EDIT_TYPE
from pr_agent.git_providers.git_provider import EDIT_TYPE, FilePatchInfo, GitProvider
from pr_agent.git_providers.local_git_provider import PullRequestMimic

logger = logging.getLogger(__name__)
from pr_agent.log import get_logger


def _call(*command, **kwargs) -> (int, str, str):
@@ -33,42 +30,42 @@ def _call(*command, **kwargs) -> (int, str, str):


def clone(url, directory):
    logger.info("Cloning %s to %s", url, directory)
    get_logger().info("Cloning %s to %s", url, directory)
    stdout = _call('git', 'clone', "--depth", "1", url, directory)
    logger.info(stdout)
    get_logger().info(stdout)


def fetch(url, refspec, cwd):
    logger.info("Fetching %s %s", url, refspec)
    get_logger().info("Fetching %s %s", url, refspec)
    stdout = _call(
        'git', 'fetch', '--depth', '2', url, refspec,
        cwd=cwd
    )
    logger.info(stdout)
    get_logger().info(stdout)


def checkout(cwd):
    logger.info("Checking out")
    get_logger().info("Checking out")
    stdout = _call('git', 'checkout', "FETCH_HEAD", cwd=cwd)
    logger.info(stdout)
    get_logger().info(stdout)


def show(*args, cwd=None):
    logger.info("Show")
    get_logger().info("Show")
    return _call('git', 'show', *args, cwd=cwd)


def diff(*args, cwd=None):
    logger.info("Diff")
    get_logger().info("Diff")
    patch = _call('git', 'diff', *args, cwd=cwd)
    if not patch:
        logger.warning("No changes found")
        get_logger().warning("No changes found")
        return
    return patch


def reset_local_changes(cwd):
    logger.info("Reset local changes")
    get_logger().info("Reset local changes")
    _call('git', 'checkout', "--force", cwd=cwd)
pr_agent/git_providers/git_provider.py
@@ -1,4 +1,3 @@
import logging
from abc import ABC, abstractmethod
from dataclasses import dataclass

@@ -6,6 +5,8 @@ from dataclasses import dataclass
from enum import Enum
from typing import Optional

from pr_agent.log import get_logger


class EDIT_TYPE(Enum):
    ADDED = 1
@@ -136,7 +137,7 @@ def get_main_pr_language(languages, files) -> str:
    """
    main_language_str = ""
    if not languages:
        logging.info("No languages detected")
        get_logger().info("No languages detected")
        return main_language_str

    try:
@@ -172,7 +173,7 @@ def get_main_pr_language(languages, files) -> str:
        main_language_str = top_language

    except Exception as e:
        logging.exception(e)
        get_logger().exception(e)
        pass

    return main_language_str
@@ -1,20 +1,19 @@
-import logging
 import hashlib

 from datetime import datetime
-from typing import Optional, Tuple, Any
+from typing import Optional, Tuple
 from urllib.parse import urlparse

-from github import AppAuthentication, Auth, Github, GithubException, Reaction
+from github import AppAuthentication, Auth, Github, GithubException
 from retry import retry
 from starlette_context import context

-from .git_provider import FilePatchInfo, GitProvider, IncrementalPR
 from ..algo.language_handler import is_valid_file
-from ..algo.pr_processing import clip_tokens, find_line_number_of_relevant_line_in_file
+from ..algo.pr_processing import find_line_number_of_relevant_line_in_file, clip_tokens
 from ..algo.utils import load_large_diff
 from ..config_loader import get_settings
+from ..log import get_logger
 from ..servers.utils import RateLimitExceeded
+from .git_provider import FilePatchInfo, GitProvider, IncrementalPR


 class GithubProvider(GitProvider):
@@ -58,7 +57,7 @@ class GithubProvider(GitProvider):
         self.file_set = dict()
         for commit in self.incremental.commits_range:
             if commit.commit.message.startswith(f"Merge branch '{self._get_repo().default_branch}'"):
-                logging.info(f"Skipping merge commit {commit.commit.message}")
+                get_logger().info(f"Skipping merge commit {commit.commit.message}")
                 continue
             self.file_set.update({file.filename: file for file in commit.files})
@@ -78,7 +77,7 @@ class GithubProvider(GitProvider):
         self.previous_review = None
         self.comments = list(self.pr.get_issue_comments())
         for index in range(len(self.comments) - 1, -1, -1):
-            if self.comments[index].body.startswith("## PR Analysis"):
+            if self.comments[index].body.startswith("## PR Analysis") or self.comments[index].body.startswith("## Incremental PR Review"):
                 self.previous_review = self.comments[index]
                 break
@@ -130,7 +129,7 @@ class GithubProvider(GitProvider):
             return diff_files

         except GithubException.RateLimitExceededException as e:
-            logging.error(f"Rate limit exceeded for GitHub API. Original message: {e}")
+            get_logger().error(f"Rate limit exceeded for GitHub API. Original message: {e}")
             raise RateLimitExceeded("Rate limit exceeded for GitHub API.") from e

     def publish_description(self, pr_title: str, pr_body: str):
@@ -138,7 +137,7 @@ class GithubProvider(GitProvider):

     def publish_comment(self, pr_comment: str, is_temporary: bool = False):
         if is_temporary and not get_settings().config.publish_output_progress:
-            logging.debug(f"Skipping publish_comment for temporary comment: {pr_comment}")
+            get_logger().debug(f"Skipping publish_comment for temporary comment: {pr_comment}")
             return
         response = self.pr.create_issue_comment(pr_comment)
         if hasattr(response, "user") and hasattr(response.user, "login"):
@@ -156,7 +155,7 @@ class GithubProvider(GitProvider):
         position, absolute_position = find_line_number_of_relevant_line_in_file(self.diff_files, relevant_file.strip('`'), relevant_line_in_file)
         if position == -1:
             if get_settings().config.verbosity_level >= 2:
-                logging.info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
+                get_logger().info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
             subject_type = "FILE"
         else:
             subject_type = "LINE"
@@ -179,13 +178,13 @@ class GithubProvider(GitProvider):

             if not relevant_lines_start or relevant_lines_start == -1:
                 if get_settings().config.verbosity_level >= 2:
-                    logging.exception(
+                    get_logger().exception(
                         f"Failed to publish code suggestion, relevant_lines_start is {relevant_lines_start}")
                 continue

             if relevant_lines_end < relevant_lines_start:
                 if get_settings().config.verbosity_level >= 2:
-                    logging.exception(f"Failed to publish code suggestion, "
+                    get_logger().exception(f"Failed to publish code suggestion, "
                                       f"relevant_lines_end is {relevant_lines_end} and "
                                       f"relevant_lines_start is {relevant_lines_start}")
                 continue
@@ -212,7 +211,7 @@ class GithubProvider(GitProvider):
             return True
         except Exception as e:
             if get_settings().config.verbosity_level >= 2:
-                logging.error(f"Failed to publish code suggestion, error: {e}")
+                get_logger().error(f"Failed to publish code suggestion, error: {e}")
             return False

     def remove_initial_comment(self):
@@ -221,7 +220,7 @@ class GithubProvider(GitProvider):
                 if comment.is_temporary:
                     comment.delete()
         except Exception as e:
-            logging.exception(f"Failed to remove initial comment, error: {e}")
+            get_logger().exception(f"Failed to remove initial comment, error: {e}")

     def get_title(self):
         return self.pr.title
@@ -259,7 +258,10 @@ class GithubProvider(GitProvider):

     def get_repo_settings(self):
         try:
-            contents = self.repo_obj.get_contents(".pr_agent.toml", ref=self.pr.head.sha).decoded_content
+            # contents = self.repo_obj.get_contents(".pr_agent.toml", ref=self.pr.head.sha).decoded_content
+
+            # more logical to take 'pr_agent.toml' from the default branch
+            contents = self.repo_obj.get_contents(".pr_agent.toml").decoded_content
             return contents
         except Exception:
             return ""
@@ -269,7 +271,7 @@ class GithubProvider(GitProvider):
             reaction = self.pr.get_issue_comment(issue_comment_id).create_reaction("eyes")
             return reaction.id
         except Exception as e:
-            logging.exception(f"Failed to add eyes reaction, error: {e}")
+            get_logger().exception(f"Failed to add eyes reaction, error: {e}")
             return None

     def remove_reaction(self, issue_comment_id: int, reaction_id: int) -> bool:
@@ -277,7 +279,7 @@ class GithubProvider(GitProvider):
             self.pr.get_issue_comment(issue_comment_id).delete_reaction(reaction_id)
             return True
         except Exception as e:
-            logging.exception(f"Failed to remove eyes reaction, error: {e}")
+            get_logger().exception(f"Failed to remove eyes reaction, error: {e}")
             return False

@@ -396,13 +398,13 @@ class GithubProvider(GitProvider):
                 "PUT", f"{self.pr.issue_url}/labels", input=post_parameters
             )
         except Exception as e:
-            logging.exception(f"Failed to publish labels, error: {e}")
+            get_logger().exception(f"Failed to publish labels, error: {e}")

     def get_labels(self):
         try:
             return [label.name for label in self.pr.labels]
         except Exception as e:
-            logging.exception(f"Failed to get labels, error: {e}")
+            get_logger().exception(f"Failed to get labels, error: {e}")
             return []

     def get_commit_messages(self):
@@ -444,7 +446,7 @@ class GithubProvider(GitProvider):
             return link
         except Exception as e:
             if get_settings().config.verbosity_level >= 2:
-                logging.info(f"Failed adding line link, error: {e}")
+                get_logger().info(f"Failed adding line link, error: {e}")

             return ""
@@ -1,5 +1,4 @@
 import hashlib
-import logging
 import re
 from typing import Optional, Tuple
 from urllib.parse import urlparse
@@ -12,8 +11,8 @@ from ..algo.pr_processing import clip_tokens, find_line_number_of_relevant_line_in_file
 from ..algo.utils import load_large_diff
 from ..config_loader import get_settings
 from .git_provider import EDIT_TYPE, FilePatchInfo, GitProvider
+from ..log import get_logger

-logger = logging.getLogger()

 class DiffNotFoundError(Exception):
     """Raised when the diff for a merge request cannot be found."""
@@ -59,7 +58,7 @@ class GitLabProvider(GitProvider):
         try:
             self.last_diff = self.mr.diffs.list(get_all=True)[-1]
         except IndexError as e:
-            logger.error(f"Could not get diff for merge request {self.id_mr}")
+            get_logger().error(f"Could not get diff for merge request {self.id_mr}")
             raise DiffNotFoundError(f"Could not get diff for merge request {self.id_mr}") from e

@@ -99,7 +98,7 @@ class GitLabProvider(GitProvider):
                 if isinstance(new_file_content_str, bytes):
                     new_file_content_str = bytes.decode(new_file_content_str, 'utf-8')
             except UnicodeDecodeError:
-                logging.warning(
+                get_logger().warning(
                     f"Cannot decode file {diff['old_path']} or {diff['new_path']} in merge request {self.id_mr}")

             edit_type = EDIT_TYPE.MODIFIED
@@ -135,7 +134,7 @@ class GitLabProvider(GitProvider):
             self.mr.description = pr_body
             self.mr.save()
         except Exception as e:
-            logging.exception(f"Could not update merge request {self.id_mr} description: {e}")
+            get_logger().exception(f"Could not update merge request {self.id_mr} description: {e}")

     def publish_comment(self, mr_comment: str, is_temporary: bool = False):
         comment = self.mr.notes.create({'body': mr_comment})
@@ -157,12 +156,12 @@ class GitLabProvider(GitProvider):
     def send_inline_comment(self,body: str,edit_type: str,found: bool,relevant_file: str,relevant_line_in_file: int,
                             source_line_no: int, target_file: str,target_line_no: int) -> None:
         if not found:
-            logging.info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
+            get_logger().info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
         else:
             # in order to have exact sha's we have to find correct diff for this change
             diff = self.get_relevant_diff(relevant_file, relevant_line_in_file)
             if diff is None:
-                logger.error(f"Could not get diff for merge request {self.id_mr}")
+                get_logger().error(f"Could not get diff for merge request {self.id_mr}")
                 raise DiffNotFoundError(f"Could not get diff for merge request {self.id_mr}")
             pos_obj = {'position_type': 'text',
                        'new_path': target_file.filename,
@@ -175,23 +174,23 @@ class GitLabProvider(GitProvider):
             else:
                 pos_obj['new_line'] = target_line_no - 1
                 pos_obj['old_line'] = source_line_no - 1
-            logging.debug(f"Creating comment in {self.id_mr} with body {body} and position {pos_obj}")
+            get_logger().debug(f"Creating comment in {self.id_mr} with body {body} and position {pos_obj}")
             self.mr.discussions.create({'body': body, 'position': pos_obj})

     def get_relevant_diff(self, relevant_file: str, relevant_line_in_file: int) -> Optional[dict]:
         changes = self.mr.changes()  # Retrieve the changes for the merge request once
         if not changes:
-            logging.error('No changes found for the merge request.')
+            get_logger().error('No changes found for the merge request.')
             return None
         all_diffs = self.mr.diffs.list(get_all=True)
         if not all_diffs:
-            logging.error('No diffs found for the merge request.')
+            get_logger().error('No diffs found for the merge request.')
             return None
         for diff in all_diffs:
             for change in changes['changes']:
                 if change['new_path'] == relevant_file and relevant_line_in_file in change['diff']:
                     return diff
-        logging.debug(
+        get_logger().debug(
             f'No relevant diff found for {relevant_file} {relevant_line_in_file}. Falling back to last diff.')
         return self.last_diff  # fallback to last_diff if no relevant diff is found
@@ -226,7 +225,7 @@ class GitLabProvider(GitProvider):
                 self.send_inline_comment(body, edit_type, found, relevant_file, relevant_line_in_file, source_line_no,
                                          target_file, target_line_no)
             except Exception as e:
-                logging.exception(f"Could not publish code suggestion:\nsuggestion: {suggestion}\nerror: {e}")
+                get_logger().exception(f"Could not publish code suggestion:\nsuggestion: {suggestion}\nerror: {e}")

         # note that we publish suggestions one-by-one. so, if one fails, the rest will still be published
         return True
@@ -290,7 +289,7 @@ class GitLabProvider(GitProvider):
             for comment in self.temp_comments:
                 comment.delete()
         except Exception as e:
-            logging.exception(f"Failed to remove temp comments, error: {e}")
+            get_logger().exception(f"Failed to remove temp comments, error: {e}")

     def get_title(self):
         return self.mr.title
@@ -358,7 +357,7 @@ class GitLabProvider(GitProvider):
             self.mr.labels = list(set(pr_types))
             self.mr.save()
         except Exception as e:
-            logging.exception(f"Failed to publish labels, error: {e}")
+            get_logger().exception(f"Failed to publish labels, error: {e}")

     def publish_inline_comments(self, comments: list[dict]):
         pass
@@ -410,6 +409,6 @@ class GitLabProvider(GitProvider):
             return link
         except Exception as e:
             if get_settings().config.verbosity_level >= 2:
-                logging.info(f"Failed adding line link, error: {e}")
+                get_logger().info(f"Failed adding line link, error: {e}")

             return ""
@@ -1,4 +1,3 @@
-import logging
 from collections import Counter
 from pathlib import Path
 from typing import List
@@ -7,6 +6,7 @@ from git import Repo

 from pr_agent.config_loader import _find_repository_root, get_settings
 from pr_agent.git_providers.git_provider import EDIT_TYPE, FilePatchInfo, GitProvider
+from pr_agent.log import get_logger


 class PullRequestMimic:
@@ -49,7 +49,7 @@ class LocalGitProvider(GitProvider):
         """
         Prepare the repository for PR-mimic generation.
         """
-        logging.debug('Preparing repository for PR-mimic generation...')
+        get_logger().debug('Preparing repository for PR-mimic generation...')
         if self.repo.is_dirty():
             raise ValueError('The repository is not in a clean state. Please commit or stash pending changes.')
         if self.target_branch_name not in self.repo.heads:
pr_agent/git_providers/utils.py (normal file, 36 lines added)
@@ -0,0 +1,36 @@
+import copy
+import os
+import tempfile
+
+from dynaconf import Dynaconf
+
+from pr_agent.config_loader import get_settings
+from pr_agent.git_providers import get_git_provider
+from pr_agent.log import get_logger
+
+
+def apply_repo_settings(pr_url):
+    if get_settings().config.use_repo_settings_file:
+        repo_settings_file = None
+        try:
+            git_provider = get_git_provider()(pr_url)
+            repo_settings = git_provider.get_repo_settings()
+            if repo_settings:
+                repo_settings_file = None
+                fd, repo_settings_file = tempfile.mkstemp(suffix='.toml')
+                os.write(fd, repo_settings)
+                new_settings = Dynaconf(settings_files=[repo_settings_file])
+                for section, contents in new_settings.as_dict().items():
+                    section_dict = copy.deepcopy(get_settings().as_dict().get(section, {}))
+                    for key, value in contents.items():
+                        section_dict[key] = value
+                    get_settings().unset(section)
+                    get_settings().set(section, section_dict, merge=False)
+                get_logger().info(f"Applying repo settings for section {section}, contents: {contents}")
+
+        finally:
+            if repo_settings_file:
+                try:
+                    os.remove(repo_settings_file)
+                except Exception as e:
+                    get_logger().error(f"Failed to remove temporary settings file {repo_settings_file}", e)
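The new helper above is invoked by the servers and runners later in this diff. As a minimal standalone sketch of how it might be driven, assuming `use_repo_settings_file` is enabled and provider credentials are already configured (the PR URL below is a hypothetical placeholder):

```python
# Minimal sketch, not part of the diff: driving apply_repo_settings directly.
# Assumes CONFIG.USE_REPO_SETTINGS_FILE is true and git provider credentials
# are already set; the PR URL is a placeholder, not a real pull request.
from pr_agent.config_loader import get_settings
from pr_agent.git_providers.utils import apply_repo_settings

get_settings().set("CONFIG.USE_REPO_SETTINGS_FILE", True)

# Fetches .pr_agent.toml from the repo behind the PR, writes it to a temp file,
# and merges each section into the live Dynaconf settings before a tool runs.
apply_repo_settings("https://github.com/owner/repo/pull/1")
```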
pr_agent/log/__init__.py (normal file, 40 lines added)
@@ -0,0 +1,40 @@
+import json
+import logging
+import sys
+from enum import Enum
+
+from loguru import logger
+
+
+class LoggingFormat(str, Enum):
+    CONSOLE = "CONSOLE"
+    JSON = "JSON"
+
+
+def json_format(record: dict) -> str:
+    return record["message"]
+
+
+def setup_logger(level: str = "INFO", fmt: LoggingFormat = LoggingFormat.CONSOLE):
+    level: int = logging.getLevelName(level.upper())
+    if type(level) is not int:
+        level = logging.INFO
+
+    if fmt == LoggingFormat.JSON:
+        logger.remove(None)
+        logger.add(
+            sys.stdout,
+            level=level,
+            format="{message}",
+            colorize=False,
+            serialize=True,
+        )
+    elif fmt == LoggingFormat.CONSOLE:
+        logger.remove(None)
+        logger.add(sys.stdout, level=level, colorize=True)
+
+    return logger
+
+
+def get_logger(*args, **kwargs):
+    return logger
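Every server below replaces `logging.basicConfig` with this module. A minimal sketch of the pattern they adopt, with one serialized JSON record per line on stdout and request metadata attached via loguru's `contextualize` (the context values here are illustrative):

```python
# Minimal sketch, not part of the diff: the logging pattern the servers adopt.
from pr_agent.log import LoggingFormat, get_logger, setup_logger

# JSON records on stdout, as in the webhook servers below.
setup_logger(level="INFO", fmt=LoggingFormat.JSON)

log_context = {"server_type": "example_app", "event": "pull_request"}
with get_logger().contextualize(**log_context):
    # Every record emitted inside this block carries the context fields.
    get_logger().info("handling request")
```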
@@ -1,9 +1,8 @@
 import ujson

 from google.cloud import storage

 from pr_agent.config_loader import get_settings
-from pr_agent.git_providers.gitlab_provider import logger
+from pr_agent.log import get_logger
 from pr_agent.secret_providers.secret_provider import SecretProvider

@@ -15,7 +14,7 @@ class GoogleCloudStorageSecretProvider(SecretProvider):
             self.bucket_name = get_settings().google_cloud_storage.bucket_name
             self.bucket = self.client.bucket(self.bucket_name)
         except Exception as e:
-            logger.error(f"Failed to initialize Google Cloud Storage Secret Provider: {e}")
+            get_logger().error(f"Failed to initialize Google Cloud Storage Secret Provider: {e}")
             raise e

     def get_secret(self, secret_name: str) -> str:
@@ -23,7 +22,7 @@ class GoogleCloudStorageSecretProvider(SecretProvider):
             blob = self.bucket.blob(secret_name)
             return blob.download_as_string()
         except Exception as e:
-            logger.error(f"Failed to get secret {secret_name} from Google Cloud Storage: {e}")
+            get_logger().error(f"Failed to get secret {secret_name} from Google Cloud Storage: {e}")
             return ""

     def store_secret(self, secret_name: str, secret_value: str):
@@ -31,5 +30,5 @@ class GoogleCloudStorageSecretProvider(SecretProvider):
             blob = self.bucket.blob(secret_name)
             blob.upload_from_string(secret_value)
         except Exception as e:
-            logger.error(f"Failed to store secret {secret_name} in Google Cloud Storage: {e}")
+            get_logger().error(f"Failed to store secret {secret_name} in Google Cloud Storage: {e}")
             raise e
@@ -1,9 +1,7 @@
 import copy
 import hashlib
 import json
-import logging
 import os
-import sys
 import time

 import jwt
@@ -18,9 +16,10 @@ from starlette_context.middleware import RawContextMiddleware

 from pr_agent.agent.pr_agent import PRAgent
 from pr_agent.config_loader import get_settings, global_settings
+from pr_agent.log import LoggingFormat, get_logger, setup_logger
 from pr_agent.secret_providers import get_secret_provider

-logging.basicConfig(stream=sys.stdout, level=logging.INFO)
+setup_logger(fmt=LoggingFormat.JSON)
 router = APIRouter()
 secret_provider = get_secret_provider()

@@ -49,7 +48,7 @@ async def get_bearer_token(shared_secret: str, client_key: str):
             bearer_token = response.json()["access_token"]
             return bearer_token
     except Exception as e:
-        logging.error(f"Failed to get bearer token: {e}")
+        get_logger().error(f"Failed to get bearer token: {e}")
         raise e

 @router.get("/")
@@ -60,21 +59,23 @@ async def handle_manifest(request: Request, response: Response):
         manifest = manifest.replace("app_key", get_settings().bitbucket.app_key)
         manifest = manifest.replace("base_url", get_settings().bitbucket.base_url)
     except:
-        logging.error("Failed to replace api_key in Bitbucket manifest, trying to continue")
+        get_logger().error("Failed to replace api_key in Bitbucket manifest, trying to continue")
     manifest_obj = json.loads(manifest)
     return JSONResponse(manifest_obj)

 @router.post("/webhook")
 async def handle_github_webhooks(background_tasks: BackgroundTasks, request: Request):
-    print(request.headers)
+    log_context = {"server_type": "bitbucket_app"}
+    get_logger().debug(request.headers)
     jwt_header = request.headers.get("authorization", None)
     if jwt_header:
         input_jwt = jwt_header.split(" ")[1]
     data = await request.json()
-    print(data)
+    get_logger().debug(data)
     async def inner():
         try:
             owner = data["data"]["repository"]["owner"]["username"]
+            log_context["sender"] = owner
             secrets = json.loads(secret_provider.get_secret(owner))
             shared_secret = secrets["shared_secret"]
             client_key = secrets["client_key"]
@@ -86,13 +87,19 @@ async def handle_github_webhooks(background_tasks: BackgroundTasks, request: Req
             agent = PRAgent()
             if event == "pullrequest:created":
                 pr_url = data["data"]["pullrequest"]["links"]["html"]["href"]
-                await agent.handle_request(pr_url, "review")
+                log_context["api_url"] = pr_url
+                log_context["event"] = "pull_request"
+                with get_logger().contextualize(**log_context):
+                    await agent.handle_request(pr_url, "review")
             elif event == "pullrequest:comment_created":
                 pr_url = data["data"]["pullrequest"]["links"]["html"]["href"]
+                log_context["api_url"] = pr_url
+                log_context["event"] = "comment"
                 comment_body = data["data"]["comment"]["content"]["raw"]
-                await agent.handle_request(pr_url, comment_body)
+                with get_logger().contextualize(**log_context):
+                    await agent.handle_request(pr_url, comment_body)
         except Exception as e:
-            logging.error(f"Failed to handle webhook: {e}")
+            get_logger().error(f"Failed to handle webhook: {e}")
     background_tasks.add_task(inner)
     return "OK"

@@ -103,9 +110,10 @@ async def handle_github_webhooks(request: Request, response: Response):
 @router.post("/installed")
 async def handle_installed_webhooks(request: Request, response: Response):
     try:
-        print(request.headers)
+        get_logger().info("handle_installed_webhooks")
+        get_logger().info(request.headers)
         data = await request.json()
-        print(data)
+        get_logger().info(data)
         shared_secret = data["sharedSecret"]
         client_key = data["clientKey"]
         username = data["principal"]["username"]
@@ -115,13 +123,15 @@ async def handle_installed_webhooks(request: Request, response: Response):
         }
         secret_provider.store_secret(username, json.dumps(secrets))
     except Exception as e:
-        logging.error(f"Failed to register user: {e}")
+        get_logger().error(f"Failed to register user: {e}")
         return JSONResponse({"error": "Unable to register user"}, status_code=500)

 @router.post("/uninstalled")
 async def handle_uninstalled_webhooks(request: Request, response: Response):
+    get_logger().info("handle_uninstalled_webhooks")
+
     data = await request.json()
-    print(data)
+    get_logger().info(data)


 def start():
@@ -1,34 +0,0 @@
-import os
-from pr_agent.agent.pr_agent import PRAgent
-from pr_agent.config_loader import get_settings
-from pr_agent.tools.pr_reviewer import PRReviewer
-import asyncio
-
-async def run_action():
-    try:
-        pull_request_id = os.environ.get("BITBUCKET_PR_ID", '')
-        slug = os.environ.get("BITBUCKET_REPO_SLUG", '')
-        workspace = os.environ.get("BITBUCKET_WORKSPACE", '')
-        bearer_token = os.environ.get('BITBUCKET_BEARER_TOKEN', None)
-        OPENAI_KEY = os.environ.get('OPENAI_API_KEY') or os.environ.get('OPENAI.KEY')
-        OPENAI_ORG = os.environ.get('OPENAI_ORG') or os.environ.get('OPENAI.ORG')
-        # Check if required environment variables are set
-        if not bearer_token:
-            print("BITBUCKET_BEARER_TOKEN not set")
-            return
-
-        if not OPENAI_KEY:
-            print("OPENAI_KEY not set")
-            return
-        # Set the environment variables in the settings
-        get_settings().set("BITBUCKET.BEARER_TOKEN", bearer_token)
-        get_settings().set("OPENAI.KEY", OPENAI_KEY)
-        if OPENAI_ORG:
-            get_settings().set("OPENAI.ORG", OPENAI_ORG)
-        if pull_request_id and slug and workspace:
-            pr_url = f"https://bitbucket.org/{workspace}/{slug}/pull-requests/{pull_request_id}"
-            await PRReviewer(pr_url).run()
-    except Exception as e:
-        print(f"An error occurred: {e}")
-if __name__ == "__main__":
-    asyncio.run(run_action())
@@ -1,6 +1,4 @@
 import copy
-import logging
-import sys
 from enum import Enum
 from json import JSONDecodeError

@@ -12,9 +10,10 @@ from starlette_context import context
 from starlette_context.middleware import RawContextMiddleware

 from pr_agent.agent.pr_agent import PRAgent
-from pr_agent.config_loader import global_settings, get_settings
+from pr_agent.config_loader import get_settings, global_settings
+from pr_agent.log import get_logger, setup_logger

-logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
+setup_logger()
 router = APIRouter()


@@ -35,7 +34,7 @@ class Item(BaseModel):

 @router.post("/api/v1/gerrit/{action}")
 async def handle_gerrit_request(action: Action, item: Item):
-    logging.debug("Received a Gerrit request")
+    get_logger().debug("Received a Gerrit request")
     context["settings"] = copy.deepcopy(global_settings)

     if action == Action.ask:
@@ -54,7 +53,7 @@ async def get_body(request):
     try:
         body = await request.json()
     except JSONDecodeError as e:
-        logging.error("Error parsing request body", e)
+        get_logger().error("Error parsing request body", e)
         return {}
     return body
@@ -5,6 +5,8 @@ import os

 from pr_agent.agent.pr_agent import PRAgent
 from pr_agent.config_loader import get_settings
 from pr_agent.git_providers import get_git_provider
+from pr_agent.git_providers.utils import apply_repo_settings
+from pr_agent.log import get_logger
 from pr_agent.tools.pr_code_suggestions import PRCodeSuggestions
 from pr_agent.tools.pr_description import PRDescription
 from pr_agent.tools.pr_reviewer import PRReviewer
@@ -17,8 +19,11 @@ async def run_action():
     OPENAI_KEY = os.environ.get('OPENAI_KEY') or os.environ.get('OPENAI.KEY')
     OPENAI_ORG = os.environ.get('OPENAI_ORG') or os.environ.get('OPENAI.ORG')
     GITHUB_TOKEN = os.environ.get('GITHUB_TOKEN')
-    get_settings().set("CONFIG.PUBLISH_OUTPUT_PROGRESS", False)
+    CUSTOM_LABELS = os.environ.get('CUSTOM_LABELS')
+    CUSTOM_LABELS_DESCRIPTIONS = os.environ.get('CUSTOM_LABELS_DESCRIPTIONS')
+    # CUSTOM_LABELS is a comma separated list of labels (string), convert to list and strip spaces
+
+    get_settings().set("CONFIG.PUBLISH_OUTPUT_PROGRESS", False)

     # Check if required environment variables are set
     if not GITHUB_EVENT_NAME:
@@ -33,6 +38,7 @@ async def run_action():
     if not GITHUB_TOKEN:
         print("GITHUB_TOKEN not set")
         return
+    # CUSTOM_LABELS_DICT = handle_custom_labels(CUSTOM_LABELS, CUSTOM_LABELS_DESCRIPTIONS)

     # Set the environment variables in the settings
     get_settings().set("OPENAI.KEY", OPENAI_KEY)
@@ -40,6 +46,7 @@ async def run_action():
         get_settings().set("OPENAI.ORG", OPENAI_ORG)
     get_settings().set("GITHUB.USER_TOKEN", GITHUB_TOKEN)
     get_settings().set("GITHUB.DEPLOYMENT_TYPE", "user")
+    # get_settings().set("CUSTOM_LABELS", CUSTOM_LABELS_DICT)

     # Load the event payload
     try:
@@ -49,6 +56,15 @@ async def run_action():
         print(f"Failed to parse JSON: {e}")
         return

+    try:
+        get_logger().info("Applying repo settings")
+        pr_url = event_payload.get("pull_request", {}).get("html_url")
+        if pr_url:
+            apply_repo_settings(pr_url)
+            get_logger().info(f"enable_custom_labels: {get_settings().config.enable_custom_labels}")
+    except Exception as e:
+        get_logger().info(f"github action: failed to apply repo settings: {e}")
+
     # Handle pull request event
     if GITHUB_EVENT_NAME == "pull_request":
         action = event_payload.get("action")
@@ -88,5 +104,31 @@ async def run_action():
         await PRAgent().handle_request(url, body)


+def handle_custom_labels(CUSTOM_LABELS, CUSTOM_LABELS_DESCRIPTIONS):
+    if CUSTOM_LABELS:
+        CUSTOM_LABELS = [x.strip() for x in CUSTOM_LABELS.split(',')]
+    else:
+        # Set default labels
+        CUSTOM_LABELS = ['Bug fix', 'Tests', 'Bug fix with tests', 'Refactoring', 'Enhancement', 'Documentation',
+                         'Other']
+        print(f"Using default labels: {CUSTOM_LABELS}")
+    if CUSTOM_LABELS_DESCRIPTIONS:
+        CUSTOM_LABELS_DESCRIPTIONS = [x.strip() for x in CUSTOM_LABELS_DESCRIPTIONS.split(',')]
+    else:
+        # Set default labels
+        CUSTOM_LABELS_DESCRIPTIONS = ['Fixes a bug in the code', 'Adds or modifies tests',
+                                      'Fixes a bug in the code and adds or modifies tests',
+                                      'Refactors the code without changing its functionality',
+                                      'Adds new features or functionality',
+                                      'Adds or modifies documentation',
+                                      'Other changes that do not fit in any of the above categories']
+        print(f"Using default labels: {CUSTOM_LABELS_DESCRIPTIONS}")
+    # create a dictionary of labels and descriptions
+    CUSTOM_LABELS_DICT = dict()
+    for i in range(len(CUSTOM_LABELS)):
+        CUSTOM_LABELS_DICT[CUSTOM_LABELS[i]] = {'description': CUSTOM_LABELS_DESCRIPTIONS[i]}
+    return CUSTOM_LABELS_DICT
+
+
 if __name__ == '__main__':
     asyncio.run(run_action())
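The new `handle_custom_labels` helper is only referenced from commented-out lines above; for illustration, this is the mapping it produces when called directly. The import path is an assumption based on the runner module this hunk modifies:

```python
# Illustrative only: calling the handle_custom_labels helper added above.
# The module path is an assumption about where this runner lives in the repo.
from pr_agent.servers.github_action_runner import handle_custom_labels

labels = handle_custom_labels(
    "Bug fix, Tests",
    "Fixes a bug in the code, Adds or modifies tests",
)
print(labels)
# {'Bug fix': {'description': 'Fixes a bug in the code'},
#  'Tests': {'description': 'Adds or modifies tests'}}
```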
@@ -1,6 +1,4 @@
 import copy
-import logging
-import sys
 import os
 import time
 from typing import Any, Dict
@@ -15,9 +13,12 @@ from pr_agent.agent.pr_agent import PRAgent
 from pr_agent.algo.utils import update_settings_from_args
 from pr_agent.config_loader import get_settings, global_settings
 from pr_agent.git_providers import get_git_provider
+from pr_agent.git_providers.utils import apply_repo_settings
+from pr_agent.log import LoggingFormat, get_logger, setup_logger
 from pr_agent.servers.utils import verify_signature

-logging.basicConfig(stream=sys.stdout, level=logging.INFO)
+setup_logger(fmt=LoggingFormat.JSON)

 router = APIRouter()

@@ -28,11 +29,11 @@ async def handle_github_webhooks(request: Request, response: Response):
     Verifies the request signature, parses the request body, and passes it to the handle_request function for further
     processing.
     """
-    logging.debug("Received a GitHub webhook")
+    get_logger().debug("Received a GitHub webhook")

     body = await get_body(request)

-    logging.debug(f'Request body:\n{body}')
+    get_logger().debug(f'Request body:\n{body}')
     installation_id = body.get("installation", {}).get("id")
     context["installation_id"] = installation_id
     context["settings"] = copy.deepcopy(global_settings)
@@ -44,13 +45,13 @@ async def handle_github_webhooks(request: Request, response: Response):
 @router.post("/api/v1/marketplace_webhooks")
 async def handle_marketplace_webhooks(request: Request, response: Response):
     body = await get_body(request)
-    logging.info(f'Request body:\n{body}')
+    get_logger().info(f'Request body:\n{body}')

 async def get_body(request):
     try:
         body = await request.json()
     except Exception as e:
-        logging.error("Error parsing request body", e)
+        get_logger().error("Error parsing request body", e)
         raise HTTPException(status_code=400, detail="Error parsing request body") from e
     webhook_secret = getattr(get_settings().github, 'webhook_secret', None)
     if webhook_secret:
@@ -76,8 +77,8 @@ async def handle_request(body: Dict[str, Any], event: str):
         return {}
     agent = PRAgent()
     bot_user = get_settings().github_app.bot_user
-    logging.info(f"action: '{action}'")
-    logging.info(f"event: '{event}'")
+    sender = body.get("sender", {}).get("login")
+    log_context = {"action": action, "event": event, "sender": sender, "server_type": "github_app"}

     if get_settings().github_app.duplicate_requests_cache and _is_duplicate_request(body):
         return {}
@@ -87,22 +88,23 @@ async def handle_request(body: Dict[str, Any], event: str):
         if "comment" not in body:
             return {}
         comment_body = body.get("comment", {}).get("body")
-        sender = body.get("sender", {}).get("login")
         if sender and bot_user in sender:
-            logging.info(f"Ignoring comment from {bot_user} user")
+            get_logger().info(f"Ignoring comment from {bot_user} user")
             return {}
-        logging.info(f"Processing comment from {sender} user")
+        get_logger().info(f"Processing comment from {sender} user")
         if "issue" in body and "pull_request" in body["issue"] and "url" in body["issue"]["pull_request"]:
             api_url = body["issue"]["pull_request"]["url"]
         elif "comment" in body and "pull_request_url" in body["comment"]:
             api_url = body["comment"]["pull_request_url"]
         else:
             return {}
-        logging.info(body)
-        logging.info(f"Handling comment because of event={event} and action={action}")
+        log_context["api_url"] = api_url
+        get_logger().info(body)
+        get_logger().info(f"Handling comment because of event={event} and action={action}")
         comment_id = body.get("comment", {}).get("id")
         provider = get_git_provider()(pr_url=api_url)
-        await agent.handle_request(api_url, comment_body, notify=lambda: provider.add_eyes_reaction(comment_id))
+        with get_logger().contextualize(**log_context):
+            await agent.handle_request(api_url, comment_body, notify=lambda: provider.add_eyes_reaction(comment_id))

     # handle pull_request event:
     # automatically review opened/reopened/ready_for_review PRs as long as they're not in draft,
@@ -114,6 +116,7 @@ async def handle_request(body: Dict[str, Any], event: str):
         api_url = pull_request.get("url")
         if not api_url:
             return {}
+        log_context["api_url"] = api_url
        if pull_request.get("draft", True) or pull_request.get("state") != "open" or pull_request.get("user", {}).get("login", "") == bot_user:
             return {}
         if action in get_settings().github_app.handle_pr_actions:
@@ -123,18 +126,20 @@ async def handle_request(body: Dict[str, Any], event: str):
             if pull_request.get("created_at") == pull_request.get("updated_at"):
                 # avoid double reviews when opening a PR for the first time
                 return {}
-            logging.info(f"Performing review because of event={event} and action={action}")
+            get_logger().info(f"Performing review because of event={event} and action={action}")
+            apply_repo_settings(api_url)
             for command in get_settings().github_app.pr_commands:
                 split_command = command.split(" ")
                 command = split_command[0]
                 args = split_command[1:]
                 other_args = update_settings_from_args(args)
                 new_command = ' '.join([command] + other_args)
-                logging.info(body)
-                logging.info(f"Performing command: {new_command}")
-                await agent.handle_request(api_url, new_command)
+                get_logger().info(body)
+                get_logger().info(f"Performing command: {new_command}")
+                with get_logger().contextualize(**log_context):
+                    await agent.handle_request(api_url, new_command)

-    logging.info("event or action does not require handling")
+    get_logger().info("event or action does not require handling")
     return {}
@@ -144,7 +149,7 @@ def _is_duplicate_request(body: Dict[str, Any]) -> bool:
     This function checks if the request is duplicate and if so - ignores it.
     """
     request_hash = hash(str(body))
-    logging.info(f"request_hash: {request_hash}")
+    get_logger().info(f"request_hash: {request_hash}")
     request_time = time.monotonic()
     ttl = get_settings().github_app.duplicate_requests_cache_ttl  # in seconds
     to_delete = [key for key, key_time in _duplicate_requests_cache.items() if request_time - key_time > ttl]
@@ -153,7 +158,7 @@ def _is_duplicate_request(body: Dict[str, Any]) -> bool:
     is_duplicate = request_hash in _duplicate_requests_cache
     _duplicate_requests_cache[request_hash] = request_time
     if is_duplicate:
-        logging.info(f"Ignoring duplicate request {request_hash}")
+        get_logger().info(f"Ignoring duplicate request {request_hash}")
     return is_duplicate
@@ -1,6 +1,4 @@
 import asyncio
-import logging
-import sys
 from datetime import datetime, timezone

 import aiohttp
@@ -8,9 +6,10 @@ import aiohttp
 from pr_agent.agent.pr_agent import PRAgent
 from pr_agent.config_loader import get_settings
 from pr_agent.git_providers import get_git_provider
+from pr_agent.log import LoggingFormat, get_logger, setup_logger
 from pr_agent.servers.help import bot_help_text

-logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
+setup_logger(fmt=LoggingFormat.JSON)
 NOTIFICATION_URL = "https://api.github.com/notifications"

@@ -94,7 +93,7 @@ async def polling_loop():
                             comment_body = comment['body'] if 'body' in comment else ''
                             commenter_github_user = comment['user']['login'] \
                                 if 'user' in comment else ''
-                            logging.info(f"Commenter: {commenter_github_user}\nComment: {comment_body}")
+                            get_logger().info(f"Commenter: {commenter_github_user}\nComment: {comment_body}")
                             user_tag = "@" + user_id
                             if user_tag not in comment_body:
                                 continue
@@ -112,7 +111,7 @@ async def polling_loop():
                     print(f"Failed to fetch notifications. Status code: {response.status}")

             except Exception as e:
-                logging.error(f"Exception during processing of a notification: {e}")
+                get_logger().error(f"Exception during processing of a notification: {e}")


 if __name__ == '__main__':
@@ -1,7 +1,5 @@
 import copy
 import json
-import logging
-import sys

 import uvicorn
 from fastapi import APIRouter, FastAPI, Request, status
@@ -14,26 +12,37 @@ from starlette_context.middleware import RawContextMiddleware

 from pr_agent.agent.pr_agent import PRAgent
 from pr_agent.config_loader import get_settings, global_settings
+from pr_agent.log import LoggingFormat, get_logger, setup_logger
 from pr_agent.secret_providers import get_secret_provider

-logging.basicConfig(stream=sys.stdout, level=logging.INFO)
+setup_logger(fmt=LoggingFormat.JSON)
 router = APIRouter()

 secret_provider = get_secret_provider() if get_settings().get("CONFIG.SECRET_PROVIDER") else None


+def handle_request(background_tasks: BackgroundTasks, url: str, body: str, log_context: dict):
+    log_context["action"] = body
+    log_context["event"] = "pull_request" if body == "/review" else "comment"
+    log_context["api_url"] = url
+    with get_logger().contextualize(**log_context):
+        background_tasks.add_task(PRAgent().handle_request, url, body)
+
+
 @router.post("/webhook")
 async def gitlab_webhook(background_tasks: BackgroundTasks, request: Request):
+    log_context = {"server_type": "gitlab_app"}
     if request.headers.get("X-Gitlab-Token") and secret_provider:
         request_token = request.headers.get("X-Gitlab-Token")
         secret = secret_provider.get_secret(request_token)
         try:
             secret_dict = json.loads(secret)
             gitlab_token = secret_dict["gitlab_token"]
+            log_context["sender"] = secret_dict["id"]
             context["settings"] = copy.deepcopy(global_settings)
             context["settings"].gitlab.personal_access_token = gitlab_token
         except Exception as e:
-            logging.error(f"Failed to validate secret {request_token}: {e}")
+            get_logger().error(f"Failed to validate secret {request_token}: {e}")
             return JSONResponse(status_code=status.HTTP_401_UNAUTHORIZED, content=jsonable_encoder({"message": "unauthorized"}))
     elif get_settings().get("GITLAB.SHARED_SECRET"):
         secret = get_settings().get("GITLAB.SHARED_SECRET")
@@ -45,17 +54,17 @@ async def gitlab_webhook(background_tasks: BackgroundTasks, request: Request):
     if not gitlab_token:
         return JSONResponse(status_code=status.HTTP_401_UNAUTHORIZED, content=jsonable_encoder({"message": "unauthorized"}))
     data = await request.json()
-    logging.info(json.dumps(data))
+    get_logger().info(json.dumps(data))
     if data.get('object_kind') == 'merge_request' and data['object_attributes'].get('action') in ['open', 'reopen']:
-        logging.info(f"A merge request has been opened: {data['object_attributes'].get('title')}")
+        get_logger().info(f"A merge request has been opened: {data['object_attributes'].get('title')}")
         url = data['object_attributes'].get('url')
-        background_tasks.add_task(PRAgent().handle_request, url, "/review")
+        handle_request(background_tasks, url, "/review")
     elif data.get('object_kind') == 'note' and data['event_type'] == 'note':
         if 'merge_request' in data:
             mr = data['merge_request']
             url = mr.get('url')
             body = data.get('object_attributes', {}).get('note')
-            background_tasks.add_task(PRAgent().handle_request, url, body)
+            handle_request(background_tasks, url, body)
     return JSONResponse(status_code=status.HTTP_200_OK, content=jsonable_encoder({"message": "success"}))
@@ -1,12 +1,10 @@
-import logging
-
 from fastapi import FastAPI
 from mangum import Mangum

+from pr_agent.log import setup_logger
 from pr_agent.servers.github_app import router

-logger = logging.getLogger()
-logger.setLevel(logging.DEBUG)
+setup_logger()

 app = FastAPI()
 app.include_router(router)
@@ -31,11 +31,15 @@ publish_labels=true
 publish_description_as_comment=false
 add_original_user_description=false
 keep_original_user_title=false
+use_bullet_points=true
 extra_instructions = ""

 # markers
 use_description_markers=false
 include_generated_by_header=true

+#custom_labels = ['Bug fix', 'Tests', 'Bug fix with tests', 'Refactoring', 'Enhancement', 'Documentation', 'Other']
+
 [pr_questions] # /ask #

 [pr_code_suggestions] # /improve #
@@ -122,4 +126,4 @@ max_issues_to_scan = 500
 [pinecone]
 # fill and place in .secrets.toml
 #api_key = ...
-# environment = "gcp-starter"
+# environment = "gcp-starter"
pr_agent/settings/custom_labels.toml (normal file, 18 lines added)
@@ -0,0 +1,18 @@
+[config]
+enable_custom_labels=false
+
+## template for custom labels
+#[custom_labels."Bug fix"]
+#description = "Fixes a bug in the code"
+#[custom_labels."Tests"]
+#description = "Adds or modifies tests"
+#[custom_labels."Bug fix with tests"]
+#description = "Fixes a bug in the code and adds or modifies tests"
+#[custom_labels."Refactoring"]
+#description = "Code refactoring without changing functionality"
+#[custom_labels."Enhancement"]
+#description = "Adds new features or functionality"
+#[custom_labels."Documentation"]
+#description = "Adds or modifies documentation"
+#[custom_labels."Other"]
+#description = "Other changes that do not fit in any of the above categories"
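With `enable_custom_labels` switched on and the label tables uncommented, the flag is read off the merged settings exactly as the GitHub Action runner above logs it; reading the label tables back via `get()` is an assumption about the Dynaconf layout:

```python
# Minimal sketch, not part of the diff: inspecting the new custom-labels settings.
# get_settings().config.enable_custom_labels mirrors the field logged by the
# GitHub Action runner; the .get("custom_labels", {}) access is an assumption.
from pr_agent.config_loader import get_settings

if get_settings().config.enable_custom_labels:
    for name, attrs in get_settings().get("custom_labels", {}).items():
        print(f"{name}: {attrs.get('description', '')}")
```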
@@ -433,3 +433,6 @@ reStructuredText = [".rst", ".rest", ".rest.txt", ".rst.txt", ]
 wisp = [".wisp", ]
 xBase = [".prg", ".prw", ]

+[docs_blacklist_extensions]
+# Disable docs for these extensions of text files and scripts that are not programming languages of function, classes and methods
+docs_blacklist = ['sql', 'txt', 'yaml', 'json', 'xml', 'md', 'rst', 'rest', 'rest.txt', 'rst.txt', 'mdpolicy', 'mdown', 'markdown', 'mdwn', 'mkd', 'mkdn', 'mkdown', 'sh']
pr_agent/settings/pr_custom_labels.toml (normal file, 72 lines added)
@@ -0,0 +1,72 @@
+[pr_custom_labels_prompt]
+system="""You are CodiumAI-PR-Reviewer, a language model designed to review git pull requests.
+Your task is to label the type of the PR content.
+- Make sure not to focus the new PR code (the '+' lines).
+- If needed, each YAML output should be in block scalar format ('|-')
+{%- if extra_instructions %}
+
+Extra instructions from the user:
+'
+{{ extra_instructions }}
+'
+{% endif %}
+
+You must use the following YAML schema to format your answer:
+```yaml
+PR Type:
+    type: array
+{%- if enable_custom_labels %}
+    description: One or more labels that describe the PR type. Don't output the description in the parentheses.
+{%- endif %}
+    items:
+        type: string
+        enum:
+{%- if enable_custom_labels %}
+{{ custom_labels }}
+{%- else %}
+            - Bug fix
+            - Tests
+            - Refactoring
+            - Enhancement
+            - Documentation
+            - Other
+{%- endif %}
+
+Example output:
+```yaml
+PR Type:
+{%- if enable_custom_labels %}
+{{ custom_labels_examples }}
+{%- else %}
+  - Bug fix
+  - Tests
+{%- endif %}
+```
+
+Make sure to output a valid YAML. Don't repeat the prompt in the answer, and avoid outputting the 'type' and 'description' fields.
+"""
+
+user="""PR Info:
+Previous title: '{{title}}'
+Previous description: '{{description}}'
+Branch: '{{branch}}'
+{%- if language %}
+
+Main language: {{language}}
+{%- endif %}
+{%- if commit_messages_str %}
+
+Commit messages:
+{{commit_messages_str}}
+{%- endif %}
+
+
+The PR Git Diff:
+```
+{{diff}}
+```
+Note that lines in the diff body are prefixed with a symbol that represents the type of change: '-' for deletions, '+' for additions, and ' ' (a space) for unchanged lines.
+
+Response (should be a valid YAML, and nothing else):
+```yaml
+"""
@@ -19,19 +19,26 @@ PR Title:
     description: an informative title for the PR, describing its main theme
 PR Type:
     type: array
+{%- if enable_custom_labels %}
+    description: One or more labels that describe the PR type. Don't output the description in the parentheses.
+{%- endif %}
     items:
         type: string
         enum:
+{%- if enable_custom_labels %}
+{{ custom_labels }}
+{%- else %}
             - Bug fix
             - Tests
             - Bug fix with tests
             - Refactoring
             - Enhancement
             - Documentation
             - Other
+{%- endif %}
 PR Description:
     type: string
-    description: an informative and concise description of the PR
+    description: an informative and concise description of the PR.
+{%- if use_bullet_points %} Use bullet points. {% endif %}
 PR Main Files Walkthrough:
     type: array
     maxItems: 10
@@ -51,7 +58,11 @@ Example output:
 PR Title: |-
     ...
 PR Type:
+{%- if enable_custom_labels %}
+{{ custom_labels_examples }}
+{%- else %}
 - Bug fix
+{%- endif %}
 PR Description: |-
     ...
 PR Main Files Walkthrough:
@@ -25,7 +25,7 @@ code line that already existed in the file....
 The review should focus on new code added in the PR (lines starting with '+'), and not on code that already existed in the file (lines starting with '-', or without prefix).

 {%- if num_code_suggestions > 0 %}
-- Provide up to {{ num_code_suggestions }} code suggestions.
+- Provide up to {{ num_code_suggestions }} code suggestions. Try to provide diverse and insightful suggestions.
 - Focus on important suggestions like fixing code problems, issues and bugs. As a second priority, provide suggestions for meaningful code improvements, like performance, vulnerability, modularity, and best practices.
 - Avoid making suggestions that have already been implemented in the PR code. For example, if you want to add logs, or change a variable to const, or anything else, make sure it isn't already in the PR code.
 - Don't suggest to add docstring, type hints, or comments.
@@ -51,13 +51,22 @@ PR Analysis:
     description: summary of the PR in 2-3 sentences.
   Type of PR:
     type: string
+{%- if enable_custom_labels %}
+    description: One or more labels that describe the PR type. Don't output the description in the parentheses.
+{%- endif %}
+    items:
+      type: string
     enum:
+{%- if enable_custom_labels %}
+{{ custom_labels }}
+{%- else %}
       - Bug fix
       - Tests
       - Refactoring
       - Enhancement
      - Documentation
       - Other
+{%- endif %}
 {%- if require_score %}
   Score:
     type: int
@@ -99,10 +108,10 @@ PR Feedback:
   General suggestions:
     type: string
     description: |-
-      General suggestions and feedback for the contributors and maintainers of
-      this PR. May include important suggestions for the overall structure,
-      primary purpose, best practices, critical bugs, and other aspects of the
-      PR. Don't address PR title and description, or lack of tests. Explain your suggestions.
+      General suggestions and feedback for the contributors and maintainers of this PR.
+      May include important suggestions for the overall structure,
+      primary purpose, best practices, critical bugs, and other aspects of the PR.
+      Don't address PR title and description, or lack of tests. Explain your suggestions.
 {%- if num_code_suggestions > 0 %}
   Code feedback:
     type: array
@@ -115,11 +124,10 @@ PR Feedback:
       suggestion:
         type: string
        description: |-
-          a concrete suggestion for meaningfully improving the new PR code. Also
-          describe how, specifically, the suggestion can be applied to new PR
-          code. Add tags with importance measure that matches each suggestion
-          ('important' or 'medium'). Do not make suggestions for updating or
-          adding docstrings, renaming PR title and description, or linter like.
+          a concrete suggestion for meaningfully improving the new PR code.
+          Also describe how, specifically, the suggestion can be applied to new PR code.
+          Add tags with importance measure that matches each suggestion ('important' or 'medium').
+          Do not make suggestions for updating or adding docstrings, renaming PR title and description, or linter like.
       relevant line:
         type: string
         description: |-
@@ -1,16 +1,17 @@
 import copy
-import logging
 import textwrap
-from typing import List, Dict
+from typing import Dict

 from jinja2 import Environment, StrictUndefined

 from pr_agent.algo.ai_handler import AiHandler
-from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models, get_pr_multi_diffs
+from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models
 from pr_agent.algo.token_handler import TokenHandler
 from pr_agent.algo.utils import load_yaml
 from pr_agent.config_loader import get_settings
-from pr_agent.git_providers import BitbucketProvider, get_git_provider
+from pr_agent.git_providers import get_git_provider
 from pr_agent.git_providers.git_provider import get_main_pr_language
+from pr_agent.log import get_logger


 class PRAddDocs:
@@ -43,34 +44,39 @@ class PRAddDocs:

     async def run(self):
         try:
-            logging.info('Generating code Docs for PR...')
+            get_logger().info('Generating code Docs for PR...')
             if get_settings().config.publish_output:
                 self.git_provider.publish_comment("Generating Documentation...", is_temporary=True)

-            logging.info('Preparing PR documentation...')
+            get_logger().info('Preparing PR documentation...')
             await retry_with_fallback_models(self._prepare_prediction)
             data = self._prepare_pr_code_docs()
             if (not data) or (not 'Code Documentation' in data):
-                logging.info('No code documentation found for PR.')
+                get_logger().info('No code documentation found for PR.')
                 return

             if get_settings().config.publish_output:
-                logging.info('Pushing PR documentation...')
+                get_logger().info('Pushing PR documentation...')
                 self.git_provider.remove_initial_comment()
-                logging.info('Pushing inline code documentation...')
+                get_logger().info('Pushing inline code documentation...')
                 self.push_inline_docs(data)
         except Exception as e:
-            logging.error(f"Failed to generate code documentation for PR, error: {e}")
+            get_logger().error(f"Failed to generate code documentation for PR, error: {e}")

     async def _prepare_prediction(self, model: str):
-        logging.info('Getting PR diff...')
+        get_logger().info('Getting PR diff...')
+
+        # Disable adding docs to scripts and other non-relevant text files
+        from pr_agent.algo.language_handler import bad_extensions
+        bad_extensions += get_settings().docs_blacklist_extensions.docs_blacklist
+
         self.patches_diff = get_pr_diff(self.git_provider,
                                         self.token_handler,
                                         model,
                                         add_line_numbers_to_hunks=True,
                                         disable_extra_lines=False)

-        logging.info('Getting AI prediction...')
+        get_logger().info('Getting AI prediction...')
         self.prediction = await self._get_prediction(model)

@@ -80,8 +86,8 @@ class PRAddDocs:
         system_prompt = environment.from_string(get_settings().pr_add_docs_prompt.system).render(variables)
         user_prompt = environment.from_string(get_settings().pr_add_docs_prompt.user).render(variables)
         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"\nSystem prompt:\n{system_prompt}")
-            logging.info(f"\nUser prompt:\n{user_prompt}")
+            get_logger().info(f"\nSystem prompt:\n{system_prompt}")
+            get_logger().info(f"\nUser prompt:\n{user_prompt}")
         response, finish_reason = await self.ai_handler.chat_completion(model=model, temperature=0.2,
                                                                         system=system_prompt, user=user_prompt)

@@ -103,7 +109,7 @@ class PRAddDocs:
         for d in data['Code Documentation']:
             try:
                 if get_settings().config.verbosity_level >= 2:
-                    logging.info(f"add_docs: {d}")
+                    get_logger().info(f"add_docs: {d}")
                 relevant_file = d['relevant file'].strip()
                 relevant_line = int(d['relevant line'])  # absolute position
                 documentation = d['documentation']
@@ -118,11 +124,11 @@ class PRAddDocs:
                                     'relevant_lines_end': relevant_line})
             except Exception:
                 if get_settings().config.verbosity_level >= 2:
-                    logging.info(f"Could not parse code docs: {d}")
+                    get_logger().info(f"Could not parse code docs: {d}")

         is_successful = self.git_provider.publish_code_suggestions(docs)
         if not is_successful:
-            logging.info("Failed to publish code docs, trying to publish each docs separately")
+            get_logger().info("Failed to publish code docs, trying to publish each docs separately")
             for doc_suggestion in docs:
                 self.git_provider.publish_code_suggestions([doc_suggestion])

@@ -154,7 +160,7 @@ class PRAddDocs:
             new_code_snippet = new_code_snippet.rstrip() + "\n" + original_initial_line
         except Exception as e:
             if get_settings().config.verbosity_level >= 2:
-                logging.info(f"Could not dedent code snippet for file {relevant_file}, error: {e}")
+                get_logger().info(f"Could not dedent code snippet for file {relevant_file}, error: {e}")

         return new_code_snippet
@@ -1,16 +1,17 @@
 import copy
-import logging
 import textwrap
-from typing import List, Dict
+from typing import Dict, List

 from jinja2 import Environment, StrictUndefined

 from pr_agent.algo.ai_handler import AiHandler
-from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models, get_pr_multi_diffs
+from pr_agent.algo.pr_processing import get_pr_diff, get_pr_multi_diffs, retry_with_fallback_models
 from pr_agent.algo.token_handler import TokenHandler
 from pr_agent.algo.utils import load_yaml
 from pr_agent.config_loader import get_settings
-from pr_agent.git_providers import BitbucketProvider, get_git_provider
+from pr_agent.git_providers import get_git_provider
 from pr_agent.git_providers.git_provider import get_main_pr_language
+from pr_agent.log import get_logger


 class PRCodeSuggestions:
@@ -52,42 +53,42 @@ class PRCodeSuggestions:

     async def run(self):
         try:
-            logging.info('Generating code suggestions for PR...')
+            get_logger().info('Generating code suggestions for PR...')
             if get_settings().config.publish_output:
                 self.git_provider.publish_comment("Preparing review...", is_temporary=True)

-            logging.info('Preparing PR review...')
+            get_logger().info('Preparing PR review...')
             if not self.is_extended:
                 await retry_with_fallback_models(self._prepare_prediction)
                 data = self._prepare_pr_code_suggestions()
             else:
                 data = await retry_with_fallback_models(self._prepare_prediction_extended)
             if (not data) or (not 'Code suggestions' in data):
-                logging.info('No code suggestions found for PR.')
+                get_logger().info('No code suggestions found for PR.')
                 return

             if (not self.is_extended and get_settings().pr_code_suggestions.rank_suggestions) or \
                     (self.is_extended and get_settings().pr_code_suggestions.rank_extended_suggestions):
-                logging.info('Ranking Suggestions...')
+                get_logger().info('Ranking Suggestions...')
                 data['Code suggestions'] = await self.rank_suggestions(data['Code suggestions'])

             if get_settings().config.publish_output:
-                logging.info('Pushing PR review...')
+                get_logger().info('Pushing PR review...')
                 self.git_provider.remove_initial_comment()
-                logging.info('Pushing inline code suggestions...')
+                get_logger().info('Pushing inline code suggestions...')
                 self.push_inline_code_suggestions(data)
         except Exception as e:
-            logging.error(f"Failed to generate code suggestions for PR, error: {e}")
+            get_logger().error(f"Failed to generate code suggestions for PR, error: {e}")

     async def _prepare_prediction(self, model: str):
-        logging.info('Getting PR diff...')
+        get_logger().info('Getting PR diff...')
         self.patches_diff = get_pr_diff(self.git_provider,
                                         self.token_handler,
                                         model,
                                         add_line_numbers_to_hunks=True,
                                         disable_extra_lines=True)

-        logging.info('Getting AI prediction...')
+        get_logger().info('Getting AI prediction...')
         self.prediction = await self._get_prediction(model)

     async def _get_prediction(self, model: str):
@@ -97,8 +98,8 @@ class PRCodeSuggestions:
         system_prompt = environment.from_string(get_settings().pr_code_suggestions_prompt.system).render(variables)
         user_prompt = environment.from_string(get_settings().pr_code_suggestions_prompt.user).render(variables)
         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"\nSystem prompt:\n{system_prompt}")
-            logging.info(f"\nUser prompt:\n{user_prompt}")
+            get_logger().info(f"\nSystem prompt:\n{system_prompt}")
+            get_logger().info(f"\nUser prompt:\n{user_prompt}")
         response, finish_reason = await self.ai_handler.chat_completion(model=model, temperature=0.2,
                                                                         system=system_prompt, user=user_prompt)

@@ -120,7 +121,7 @@ class PRCodeSuggestions:
         for d in data['Code suggestions']:
             try:
                 if get_settings().config.verbosity_level >= 2:
-                    logging.info(f"suggestion: {d}")
+                    get_logger().info(f"suggestion: {d}")
                 relevant_file = d['relevant file'].strip()
                 relevant_lines_start = int(d['relevant lines start'])  # absolute position
                 relevant_lines_end = int(d['relevant lines end'])
@@ -136,11 +137,11 @@ class PRCodeSuggestions:
                                          'relevant_lines_end': relevant_lines_end})
             except Exception:
                 if get_settings().config.verbosity_level >= 2:
-                    logging.info(f"Could not parse suggestion: {d}")
+                    get_logger().info(f"Could not parse suggestion: {d}")

         is_successful = self.git_provider.publish_code_suggestions(code_suggestions)
         if not is_successful:
-            logging.info("Failed to publish code suggestions, trying to publish each suggestion separately")
+            get_logger().info("Failed to publish code suggestions, trying to publish each suggestion separately")
             for code_suggestion in code_suggestions:
                 self.git_provider.publish_code_suggestions([code_suggestion])

@@ -162,19 +163,19 @@ class PRCodeSuggestions:
             new_code_snippet = textwrap.indent(new_code_snippet, delta_spaces * " ").rstrip('\n')
         except Exception as e:
             if get_settings().config.verbosity_level >= 2:
-                logging.info(f"Could not dedent code snippet for file {relevant_file}, error: {e}")
+                get_logger().info(f"Could not dedent code snippet for file {relevant_file}, error: {e}")

         return new_code_snippet

     async def _prepare_prediction_extended(self, model: str) -> dict:
-        logging.info('Getting PR diff...')
+        get_logger().info('Getting PR diff...')
         patches_diff_list = get_pr_multi_diffs(self.git_provider, self.token_handler, model,
                                                max_calls=get_settings().pr_code_suggestions.max_number_of_calls)

-        logging.info('Getting multi AI predictions...')
+        get_logger().info('Getting multi AI predictions...')
         prediction_list = []
         for i, patches_diff in enumerate(patches_diff_list):
-            logging.info(f"Processing chunk {i + 1} of {len(patches_diff_list)}")
+            get_logger().info(f"Processing chunk {i + 1} of {len(patches_diff_list)}")
             self.patches_diff = patches_diff
             prediction = await self._get_prediction(model)
             prediction_list.append(prediction)
@@ -222,8 +223,8 @@ class PRCodeSuggestions:
                                                                                                      variables)
         user_prompt = environment.from_string(get_settings().pr_sort_code_suggestions_prompt.user).render(variables)
         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"\nSystem prompt:\n{system_prompt}")
-            logging.info(f"\nUser prompt:\n{user_prompt}")
+            get_logger().info(f"\nSystem prompt:\n{system_prompt}")
+            get_logger().info(f"\nUser prompt:\n{user_prompt}")
         response, finish_reason = await self.ai_handler.chat_completion(model=model, system=system_prompt,
                                                                         user=user_prompt)

@@ -238,7 +239,7 @@ class PRCodeSuggestions:
             data_sorted = data_sorted[:new_len]
         except Exception as e:
             if get_settings().config.verbosity_level >= 1:
-                logging.info(f"Could not sort suggestions, error: {e}")
+                get_logger().info(f"Could not sort suggestions, error: {e}")
             data_sorted = suggestion_list

         return data_sorted
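The two-line prompt dump guarded by `verbosity_level >= 2` recurs in nearly every tool touched by this diff. As a hedged sketch only, the duplication could be factored into a shared helper like the one below; the helper name is hypothetical and is not part of the diff.

# Hypothetical helper: the repeated verbosity guard from the diff, factored out.
from pr_agent.config_loader import get_settings
from pr_agent.log import get_logger


def log_prompts(system_prompt: str, user_prompt: str) -> None:
    # Full prompts are only dumped when the user opted into verbose output.
    if get_settings().config.verbosity_level >= 2:
        get_logger().info(f"\nSystem prompt:\n{system_prompt}")
        get_logger().info(f"\nUser prompt:\n{user_prompt}")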
@@ -1,7 +1,6 @@
-import logging
-
 from pr_agent.config_loader import get_settings
 from pr_agent.git_providers import get_git_provider
+from pr_agent.log import get_logger


 class PRConfig:
@@ -19,11 +18,11 @@ class PRConfig:
         self.git_provider = get_git_provider()(pr_url)

     async def run(self):
-        logging.info('Getting configuration settings...')
-        logging.info('Preparing configs...')
+        get_logger().info('Getting configuration settings...')
+        get_logger().info('Preparing configs...')
         pr_comment = self._prepare_pr_configs()
         if get_settings().config.publish_output:
-            logging.info('Pushing configs...')
+            get_logger().info('Pushing configs...')
             self.git_provider.publish_comment(pr_comment)
             self.git_provider.remove_initial_comment()
         return ""
@@ -44,5 +43,5 @@ class PRConfig:
         comment_str += f"\n{header.lower()}.{key.lower()} = {repr(value) if isinstance(value, str) else value}"
         comment_str += " "
         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"comment_str:\n{comment_str}")
+            get_logger().info(f"comment_str:\n{comment_str}")
         return comment_str
@@ -1,7 +1,5 @@
 import copy
-import json
 import re
-import logging
 from typing import List, Tuple

 from jinja2 import Environment, StrictUndefined
@@ -9,10 +7,11 @@ from jinja2 import Environment, StrictUndefined
 from pr_agent.algo.ai_handler import AiHandler
 from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models
 from pr_agent.algo.token_handler import TokenHandler
-from pr_agent.algo.utils import load_yaml
+from pr_agent.algo.utils import load_yaml, set_custom_labels
 from pr_agent.config_loader import get_settings
 from pr_agent.git_providers import get_git_provider
 from pr_agent.git_providers.git_provider import get_main_pr_language
+from pr_agent.log import get_logger


 class PRDescription:
@@ -41,8 +40,12 @@ class PRDescription:
             "description": self.git_provider.get_pr_description(full=False),
             "language": self.main_pr_language,
             "diff": "",  # empty diff for initial calculation
             "use_bullet_points": get_settings().pr_description.use_bullet_points,
             "extra_instructions": get_settings().pr_description.extra_instructions,
-            "commit_messages_str": self.git_provider.get_commit_messages()
+            "commit_messages_str": self.git_provider.get_commit_messages(),
+            "enable_custom_labels": get_settings().config.enable_custom_labels,
+            "custom_labels": "",
+            "custom_labels_examples": "",
         }

         self.user_description = self.git_provider.get_user_description()
@@ -65,13 +68,13 @@ class PRDescription:
         """

         try:
-            logging.info(f"Generating a PR description {self.pr_id}")
+            get_logger().info(f"Generating a PR description {self.pr_id}")
             if get_settings().config.publish_output:
                 self.git_provider.publish_comment("Preparing PR description...", is_temporary=True)

             await retry_with_fallback_models(self._prepare_prediction)

-            logging.info(f"Preparing answer {self.pr_id}")
+            get_logger().info(f"Preparing answer {self.pr_id}")
             if self.prediction:
                 self._prepare_data()
             else:
@@ -88,7 +91,7 @@ class PRDescription:
             full_markdown_description = f"## Title\n\n{pr_title}\n\n___\n{pr_body}"

             if get_settings().config.publish_output:
-                logging.info(f"Pushing answer {self.pr_id}")
+                get_logger().info(f"Pushing answer {self.pr_id}")
                 if get_settings().pr_description.publish_description_as_comment:
                     self.git_provider.publish_comment(full_markdown_description)
                 else:
@@ -100,7 +103,7 @@ class PRDescription:
                     self.git_provider.publish_labels(pr_labels + current_labels)
                 self.git_provider.remove_initial_comment()
         except Exception as e:
-            logging.error(f"Error generating PR description {self.pr_id}: {e}")
+            get_logger().error(f"Error generating PR description {self.pr_id}: {e}")

         return ""

@@ -121,9 +124,9 @@ class PRDescription:
         if get_settings().pr_description.use_description_markers and 'pr_agent:' not in self.user_description:
             return None

-        logging.info(f"Getting PR diff {self.pr_id}")
+        get_logger().info(f"Getting PR diff {self.pr_id}")
         self.patches_diff = get_pr_diff(self.git_provider, self.token_handler, model)
-        logging.info(f"Getting AI prediction {self.pr_id}")
+        get_logger().info(f"Getting AI prediction {self.pr_id}")
         self.prediction = await self._get_prediction(model)

     async def _get_prediction(self, model: str) -> str:
@@ -140,12 +143,13 @@ class PRDescription:
         variables["diff"] = self.patches_diff  # update diff

         environment = Environment(undefined=StrictUndefined)
+        set_custom_labels(variables)
         system_prompt = environment.from_string(get_settings().pr_description_prompt.system).render(variables)
         user_prompt = environment.from_string(get_settings().pr_description_prompt.user).render(variables)

         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"\nSystem prompt:\n{system_prompt}")
-            logging.info(f"\nUser prompt:\n{user_prompt}")
+            get_logger().info(f"\nSystem prompt:\n{system_prompt}")
+            get_logger().info(f"\nUser prompt:\n{user_prompt}")

         response, finish_reason = await self.ai_handler.chat_completion(
             model=model,
@@ -156,7 +160,6 @@ class PRDescription:

         return response

-
     def _prepare_data(self):
         # Load the AI prediction data into a dictionary
         self.data = load_yaml(self.prediction.strip())
@@ -178,7 +181,7 @@ class PRDescription:
         return pr_types

     def _prepare_pr_answer_with_markers(self) -> Tuple[str, str]:
-        logging.info(f"Using description marker replacements {self.pr_id}")
+        get_logger().info(f"Using description marker replacements {self.pr_id}")
         title = self.vars["title"]
         body = self.user_description
         if get_settings().pr_description.include_generated_by_header:
@@ -186,6 +189,11 @@ class PRDescription:
         else:
             ai_header = ""

+        ai_type = self.data.get('PR Type')
+        if ai_type and not re.search(r'<!--\s*pr_agent:type\s*-->', body):
+            pr_type = f"{ai_header}{ai_type}"
+            body = body.replace('pr_agent:type', pr_type)
+
         ai_summary = self.data.get('PR Description')
         if ai_summary and not re.search(r'<!--\s*pr_agent:summary\s*-->', body):
             summary = f"{ai_header}{ai_summary}"
@@ -252,6 +260,6 @@ class PRDescription:
         pr_body += "\n___\n"

         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"title:\n{title}\n{pr_body}")
+            get_logger().info(f"title:\n{title}\n{pr_body}")

         return title, pr_body
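Both `PRDescription` above and the new `PRGenerateLabels` below call `set_custom_labels(variables)` before rendering their prompts. The implementation lives in `pr_agent.algo.utils` and is not part of this diff; the sketch below is one plausible shape, purely to illustrate what the call presumably does (populate the custom-label template variables from settings), and the real function may differ.

# Hypothetical sketch of set_custom_labels; the real function in
# pr_agent.algo.utils is not shown in this diff and may differ.
from pr_agent.config_loader import get_settings


def set_custom_labels(variables: dict) -> None:
    if not get_settings().config.enable_custom_labels:
        return
    labels = get_settings().get("custom_labels", {})
    # Fill the template variables that the prompt templates render.
    variables["custom_labels"] = "\n".join(f" - {name}" for name in labels)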
pr_agent/tools/pr_generate_labels.py (new file, 163 lines)
@@ -0,0 +1,163 @@
+import copy
+import re
+from typing import List, Tuple
+
+from jinja2 import Environment, StrictUndefined
+
+from pr_agent.algo.ai_handler import AiHandler
+from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models
+from pr_agent.algo.token_handler import TokenHandler
+from pr_agent.algo.utils import load_yaml, set_custom_labels
+from pr_agent.config_loader import get_settings
+from pr_agent.git_providers import get_git_provider
+from pr_agent.git_providers.git_provider import get_main_pr_language
+from pr_agent.log import get_logger
+
+
+class PRGenerateLabels:
+    def __init__(self, pr_url: str, args: list = None):
+        """
+        Initialize the PRGenerateLabels object with the necessary attributes and objects for generating labels
+        corresponding to the PR using an AI model.
+        Args:
+            pr_url (str): The URL of the pull request.
+            args (list, optional): List of arguments passed to the PRGenerateLabels class. Defaults to None.
+        """
+        # Initialize the git provider and main PR language
+        self.git_provider = get_git_provider()(pr_url)
+        self.main_pr_language = get_main_pr_language(
+            self.git_provider.get_languages(), self.git_provider.get_files()
+        )
+        self.pr_id = self.git_provider.get_pr_id()
+
+        # Initialize the AI handler
+        self.ai_handler = AiHandler()
+
+        # Initialize the variables dictionary
+        self.vars = {
+            "title": self.git_provider.pr.title,
+            "branch": self.git_provider.get_pr_branch(),
+            "description": self.git_provider.get_pr_description(full=False),
+            "language": self.main_pr_language,
+            "diff": "",  # empty diff for initial calculation
+            "use_bullet_points": get_settings().pr_description.use_bullet_points,
+            "extra_instructions": get_settings().pr_description.extra_instructions,
+            "commit_messages_str": self.git_provider.get_commit_messages(),
+            "custom_labels": "",
+            "custom_labels_examples": "",
+            "enable_custom_labels": get_settings().config.enable_custom_labels,
+        }
+
+        # Initialize the token handler
+        self.token_handler = TokenHandler(
+            self.git_provider.pr,
+            self.vars,
+            get_settings().pr_custom_labels_prompt.system,
+            get_settings().pr_custom_labels_prompt.user,
+        )
+
+        # Initialize patches_diff and prediction attributes
+        self.patches_diff = None
+        self.prediction = None
+
+    async def run(self):
+        """
+        Generates a PR labels using an AI model and publishes it to the PR.
+        """
+
+        try:
+            get_logger().info(f"Generating a PR labels {self.pr_id}")
+            if get_settings().config.publish_output:
+                self.git_provider.publish_comment("Preparing PR labels...", is_temporary=True)
+
+            await retry_with_fallback_models(self._prepare_prediction)
+
+            get_logger().info(f"Preparing answer {self.pr_id}")
+            if self.prediction:
+                self._prepare_data()
+            else:
+                return None
+
+            pr_labels = self._prepare_labels()
+
+            if get_settings().config.publish_output:
+                get_logger().info(f"Pushing labels {self.pr_id}")
+                if self.git_provider.is_supported("get_labels"):
+                    current_labels = self.git_provider.get_labels()
+                    if current_labels is None:
+                        current_labels = []
+                    self.git_provider.publish_labels(pr_labels + current_labels)
+                self.git_provider.remove_initial_comment()
+        except Exception as e:
+            get_logger().error(f"Error generating PR labels {self.pr_id}: {e}")
+
+        return ""
+
+    async def _prepare_prediction(self, model: str) -> None:
+        """
+        Prepare the AI prediction for the PR labels based on the provided model.
+
+        Args:
+            model (str): The name of the model to be used for generating the prediction.
+
+        Returns:
+            None
+
+        Raises:
+            Any exceptions raised by the 'get_pr_diff' and '_get_prediction' functions.
+        """
+
+        get_logger().info(f"Getting PR diff {self.pr_id}")
+        self.patches_diff = get_pr_diff(self.git_provider, self.token_handler, model)
+        get_logger().info(f"Getting AI prediction {self.pr_id}")
+        self.prediction = await self._get_prediction(model)
+
+    async def _get_prediction(self, model: str) -> str:
+        """
+        Generate an AI prediction for the PR labels based on the provided model.
+
+        Args:
+            model (str): The name of the model to be used for generating the prediction.
+
+        Returns:
+            str: The generated AI prediction.
+        """
+        variables = copy.deepcopy(self.vars)
+        variables["diff"] = self.patches_diff  # update diff
+
+        environment = Environment(undefined=StrictUndefined)
+        set_custom_labels(variables)
+        system_prompt = environment.from_string(get_settings().pr_custom_labels_prompt.system).render(variables)
+        user_prompt = environment.from_string(get_settings().pr_custom_labels_prompt.user).render(variables)
+
+        if get_settings().config.verbosity_level >= 2:
+            get_logger().info(f"\nSystem prompt:\n{system_prompt}")
+            get_logger().info(f"\nUser prompt:\n{user_prompt}")
+
+        response, finish_reason = await self.ai_handler.chat_completion(
+            model=model,
+            temperature=0.2,
+            system=system_prompt,
+            user=user_prompt
+        )
+
+        return response
+
+    def _prepare_data(self):
+        # Load the AI prediction data into a dictionary
+        self.data = load_yaml(self.prediction.strip())
+
+    def _prepare_labels(self) -> List[str]:
+        pr_types = []
+
+        # If the 'PR Type' key is present in the dictionary, split its value by comma and assign it to 'pr_types'
+        if 'PR Type' in self.data:
+            if type(self.data['PR Type']) == list:
+                pr_types = self.data['PR Type']
+            elif type(self.data['PR Type']) == str:
+                pr_types = self.data['PR Type'].split(',')
+
+        return pr_types
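A minimal usage sketch for the new tool, assuming the same entry pattern as the other tools in this directory; the PR URL is a placeholder and running this requires configured provider and model credentials.

# Hypothetical driver for the new PRGenerateLabels tool.
import asyncio

from pr_agent.tools.pr_generate_labels import PRGenerateLabels

# Placeholder URL; needs valid credentials for the configured git provider.
asyncio.run(PRGenerateLabels("https://github.com/org/repo/pull/1").run())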
@@ -1,5 +1,4 @@
 import copy
-import logging

 from jinja2 import Environment, StrictUndefined

@@ -9,6 +8,7 @@ from pr_agent.algo.token_handler import TokenHandler
 from pr_agent.config_loader import get_settings
 from pr_agent.git_providers import get_git_provider
 from pr_agent.git_providers.git_provider import get_main_pr_language
+from pr_agent.log import get_logger


 class PRInformationFromUser:
@@ -34,22 +34,22 @@ class PRInformationFromUser:
         self.prediction = None

     async def run(self):
-        logging.info('Generating question to the user...')
+        get_logger().info('Generating question to the user...')
         if get_settings().config.publish_output:
             self.git_provider.publish_comment("Preparing questions...", is_temporary=True)
         await retry_with_fallback_models(self._prepare_prediction)
-        logging.info('Preparing questions...')
+        get_logger().info('Preparing questions...')
         pr_comment = self._prepare_pr_answer()
         if get_settings().config.publish_output:
-            logging.info('Pushing questions...')
+            get_logger().info('Pushing questions...')
             self.git_provider.publish_comment(pr_comment)
             self.git_provider.remove_initial_comment()
         return ""

     async def _prepare_prediction(self, model):
-        logging.info('Getting PR diff...')
+        get_logger().info('Getting PR diff...')
         self.patches_diff = get_pr_diff(self.git_provider, self.token_handler, model)
-        logging.info('Getting AI prediction...')
+        get_logger().info('Getting AI prediction...')
         self.prediction = await self._get_prediction(model)

     async def _get_prediction(self, model: str):
@@ -59,8 +59,8 @@ class PRInformationFromUser:
         system_prompt = environment.from_string(get_settings().pr_information_from_user_prompt.system).render(variables)
         user_prompt = environment.from_string(get_settings().pr_information_from_user_prompt.user).render(variables)
         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"\nSystem prompt:\n{system_prompt}")
-            logging.info(f"\nUser prompt:\n{user_prompt}")
+            get_logger().info(f"\nSystem prompt:\n{system_prompt}")
+            get_logger().info(f"\nUser prompt:\n{user_prompt}")
         response, finish_reason = await self.ai_handler.chat_completion(model=model, temperature=0.2,
                                                                         system=system_prompt, user=user_prompt)
         return response
@@ -68,7 +68,7 @@ class PRInformationFromUser:
     def _prepare_pr_answer(self) -> str:
         model_output = self.prediction.strip()
         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"answer_str:\n{model_output}")
+            get_logger().info(f"answer_str:\n{model_output}")
         answer_str = f"{model_output}\n\n Please respond to the questions above in the following format:\n\n" +\
                      "\n>/answer\n>1) ...\n>2) ...\n>...\n"
         return answer_str
@@ -1,5 +1,4 @@
 import copy
-import logging

 from jinja2 import Environment, StrictUndefined

@@ -9,6 +8,7 @@ from pr_agent.algo.token_handler import TokenHandler
 from pr_agent.config_loader import get_settings
 from pr_agent.git_providers import get_git_provider
 from pr_agent.git_providers.git_provider import get_main_pr_language
+from pr_agent.log import get_logger


 class PRQuestions:
@@ -44,22 +44,22 @@ class PRQuestions:
         return question_str

     async def run(self):
-        logging.info('Answering a PR question...')
+        get_logger().info('Answering a PR question...')
         if get_settings().config.publish_output:
             self.git_provider.publish_comment("Preparing answer...", is_temporary=True)
         await retry_with_fallback_models(self._prepare_prediction)
-        logging.info('Preparing answer...')
+        get_logger().info('Preparing answer...')
         pr_comment = self._prepare_pr_answer()
         if get_settings().config.publish_output:
-            logging.info('Pushing answer...')
+            get_logger().info('Pushing answer...')
             self.git_provider.publish_comment(pr_comment)
             self.git_provider.remove_initial_comment()
         return ""

     async def _prepare_prediction(self, model: str):
-        logging.info('Getting PR diff...')
+        get_logger().info('Getting PR diff...')
         self.patches_diff = get_pr_diff(self.git_provider, self.token_handler, model)
-        logging.info('Getting AI prediction...')
+        get_logger().info('Getting AI prediction...')
         self.prediction = await self._get_prediction(model)

     async def _get_prediction(self, model: str):
@@ -69,8 +69,8 @@ class PRQuestions:
         system_prompt = environment.from_string(get_settings().pr_questions_prompt.system).render(variables)
         user_prompt = environment.from_string(get_settings().pr_questions_prompt.user).render(variables)
         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"\nSystem prompt:\n{system_prompt}")
-            logging.info(f"\nUser prompt:\n{user_prompt}")
+            get_logger().info(f"\nSystem prompt:\n{system_prompt}")
+            get_logger().info(f"\nUser prompt:\n{user_prompt}")
         response, finish_reason = await self.ai_handler.chat_completion(model=model, temperature=0.2,
                                                                         system=system_prompt, user=user_prompt)
         return response
@@ -79,5 +79,5 @@ class PRQuestions:
         answer_str = f"Question: {self.question_str}\n\n"
         answer_str += f"Answer:\n{self.prediction.strip()}\n\n"
         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"answer_str:\n{answer_str}")
+            get_logger().info(f"answer_str:\n{answer_str}")
         return answer_str
@@ -1,6 +1,4 @@
 import copy
-import json
-import logging
 from collections import OrderedDict
 from typing import List, Tuple

@@ -9,13 +7,13 @@ from jinja2 import Environment, StrictUndefined
 from yaml import SafeLoader

 from pr_agent.algo.ai_handler import AiHandler
-from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models, \
-    find_line_number_of_relevant_line_in_file, clip_tokens
+from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models
 from pr_agent.algo.token_handler import TokenHandler
-from pr_agent.algo.utils import convert_to_markdown, try_fix_json, try_fix_yaml, load_yaml
+from pr_agent.algo.utils import convert_to_markdown, load_yaml, try_fix_yaml, set_custom_labels
 from pr_agent.config_loader import get_settings
 from pr_agent.git_providers import get_git_provider
 from pr_agent.git_providers.git_provider import IncrementalPR, get_main_pr_language
+from pr_agent.log import get_logger
 from pr_agent.servers.help import actions_help_text, bot_help_text


@@ -65,6 +63,8 @@ class PRReviewer:
             'answer_str': answer_str,
             "extra_instructions": get_settings().pr_reviewer.extra_instructions,
             "commit_messages_str": self.git_provider.get_commit_messages(),
+            "custom_labels": "",
+            "enable_custom_labels": get_settings().config.enable_custom_labels,
         }

         self.token_handler = TokenHandler(
@@ -98,29 +98,29 @@ class PRReviewer:

         try:
             if self.is_auto and not get_settings().pr_reviewer.automatic_review:
-                logging.info(f'Automatic review is disabled {self.pr_url}')
+                get_logger().info(f'Automatic review is disabled {self.pr_url}')
                 return None

-            logging.info(f'Reviewing PR: {self.pr_url} ...')
+            get_logger().info(f'Reviewing PR: {self.pr_url} ...')

             if get_settings().config.publish_output:
                 self.git_provider.publish_comment("Preparing review...", is_temporary=True)

             await retry_with_fallback_models(self._prepare_prediction)

-            logging.info('Preparing PR review...')
+            get_logger().info('Preparing PR review...')
             pr_comment = self._prepare_pr_review()

             if get_settings().config.publish_output:
-                logging.info('Pushing PR review...')
+                get_logger().info('Pushing PR review...')
                 self.git_provider.publish_comment(pr_comment)
                 self.git_provider.remove_initial_comment()

                 if get_settings().pr_reviewer.inline_code_comments:
-                    logging.info('Pushing inline code comments...')
+                    get_logger().info('Pushing inline code comments...')
                     self._publish_inline_code_comments()
         except Exception as e:
-            logging.error(f"Failed to review PR: {e}")
+            get_logger().error(f"Failed to review PR: {e}")

     async def _prepare_prediction(self, model: str) -> None:
         """
@@ -132,9 +132,9 @@ class PRReviewer:
         Returns:
             None
         """
-        logging.info('Getting PR diff...')
+        get_logger().info('Getting PR diff...')
         self.patches_diff = get_pr_diff(self.git_provider, self.token_handler, model)
-        logging.info('Getting AI prediction...')
+        get_logger().info('Getting AI prediction...')
         self.prediction = await self._get_prediction(model)

     async def _get_prediction(self, model: str) -> str:
@@ -151,12 +151,13 @@ class PRReviewer:
         variables["diff"] = self.patches_diff  # update diff

         environment = Environment(undefined=StrictUndefined)
+        set_custom_labels(variables)
         system_prompt = environment.from_string(get_settings().pr_review_prompt.system).render(variables)
         user_prompt = environment.from_string(get_settings().pr_review_prompt.user).render(variables)

         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"\nSystem prompt:\n{system_prompt}")
-            logging.info(f"\nUser prompt:\n{user_prompt}")
+            get_logger().info(f"\nSystem prompt:\n{system_prompt}")
+            get_logger().info(f"\nUser prompt:\n{user_prompt}")

         response, finish_reason = await self.ai_handler.chat_completion(
             model=model,
@@ -249,7 +250,7 @@ class PRReviewer:

         # Log markdown response if verbosity level is high
         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"Markdown response:\n{markdown_text}")
+            get_logger().info(f"Markdown response:\n{markdown_text}")

         if markdown_text == None or len(markdown_text) == 0:
             markdown_text = ""
@@ -268,7 +269,7 @@ class PRReviewer:
         try:
             data = yaml.load(review_text, Loader=SafeLoader)
         except Exception as e:
-            logging.error(f"Failed to parse AI prediction: {e}")
+            get_logger().error(f"Failed to parse AI prediction: {e}")
             data = try_fix_yaml(review_text)

         comments: List[str] = []
@@ -277,7 +278,7 @@ class PRReviewer:
             relevant_line_in_file = suggestion.get('relevant line', '').strip()
             content = suggestion.get('suggestion', '')
             if not relevant_file or not relevant_line_in_file or not content:
-                logging.info("Skipping inline comment with missing file/line/content")
+                get_logger().info("Skipping inline comment with missing file/line/content")
                 continue

             if self.git_provider.is_supported("create_inline_comment"):
@@ -1,18 +1,18 @@
 import copy
-import json
-import logging
 import time
 from enum import Enum
-from typing import List, Tuple
-import pinecone
+from typing import List

 import openai
 import pandas as pd
+import pinecone
+from pinecone_datasets import Dataset, DatasetMetadata
 from pydantic import BaseModel, Field

 from pr_agent.algo import MAX_TOKENS
 from pr_agent.algo.token_handler import TokenHandler
 from pr_agent.config_loader import get_settings
 from pr_agent.git_providers import get_git_provider
-from pinecone_datasets import Dataset, DatasetMetadata
+from pr_agent.log import get_logger

 MODEL = "text-embedding-ada-002"

@@ -47,6 +47,13 @@ class PRSimilarIssue:

         # check if index exists, and if repo is already indexed
         run_from_scratch = False
+        if run_from_scratch:  # for debugging
+            pinecone.init(api_key=api_key, environment=environment)
+            if index_name in pinecone.list_indexes():
+                get_logger().info('Removing index...')
+                pinecone.delete_index(index_name)
+                get_logger().info('Done')
+
         upsert = True
         pinecone.init(api_key=api_key, environment=environment)
         if not index_name in pinecone.list_indexes():
@@ -62,11 +69,11 @@ class PRSimilarIssue:
             upsert = False

         if run_from_scratch or upsert:  # index the entire repo
-            logging.info('Indexing the entire repo...')
+            get_logger().info('Indexing the entire repo...')

-            logging.info('Getting issues...')
+            get_logger().info('Getting issues...')
             issues = list(repo_obj.get_issues(state='all'))
-            logging.info('Done')
+            get_logger().info('Done')
             self._update_index_with_issues(issues, repo_name_for_index, upsert=upsert)
         else:  # update index if needed
             pinecone_index = pinecone.Index(index_name=index_name)
@@ -92,20 +99,20 @@ class PRSimilarIssue:
                     break

             if issues_to_update:
-                logging.info(f'Updating index with {counter} new issues...')
+                get_logger().info(f'Updating index with {counter} new issues...')
                 self._update_index_with_issues(issues_to_update, repo_name_for_index, upsert=True)
             else:
-                logging.info('No new issues to update')
+                get_logger().info('No new issues to update')

     async def run(self):
-        logging.info('Getting issue...')
+        get_logger().info('Getting issue...')
         repo_name, original_issue_number = self.git_provider._parse_issue_url(self.issue_url.split('=')[-1])
         issue_main = self.git_provider.repo_obj.get_issue(original_issue_number)
         issue_str, comments, number = self._process_issue(issue_main)
         openai.api_key = get_settings().openai.key
-        logging.info('Done')
+        get_logger().info('Done')

-        logging.info('Querying...')
+        get_logger().info('Querying...')
         res = openai.Embedding.create(input=[issue_str], engine=MODEL)
         embeds = [record['embedding'] for record in res['data']]
         pinecone_index = pinecone.Index(index_name=self.index_name)
@@ -117,7 +124,16 @@ class PRSimilarIssue:
         relevant_comment_number_list = []
         score_list = []
         for r in res['matches']:
-            issue_number = int(r["id"].split('.')[0].split('_')[-1])
+            # skip example issue
+            if 'example_issue_' in r["id"]:
+                continue
+
+            try:
+                issue_number = int(r["id"].split('.')[0].split('_')[-1])
+            except:
+                get_logger().debug(f"Failed to parse issue number from {r['id']}")
+                continue
+
             if original_issue_number == issue_number:
                 continue
             if issue_number not in relevant_issues_number_list:
@@ -127,9 +143,9 @@ class PRSimilarIssue:
             else:
                 relevant_comment_number_list.append(-1)
             score_list.append(str("{:.2f}".format(r['score'])))
-        logging.info('Done')
+        get_logger().info('Done')

-        logging.info('Publishing response...')
+        get_logger().info('Publishing response...')
         similar_issues_str = "### Similar Issues\n___\n\n"
         for i, issue_number_similar in enumerate(relevant_issues_number_list):
             issue = self.git_provider.repo_obj.get_issue(issue_number_similar)
@@ -140,8 +156,8 @@ class PRSimilarIssue:
             similar_issues_str += f"{i + 1}. **[{title}]({url})** (score={score_list[i]})\n\n"
         if get_settings().config.publish_output:
             response = issue_main.create_comment(similar_issues_str)
-        logging.info(similar_issues_str)
-        logging.info('Done')
+        get_logger().info(similar_issues_str)
+        get_logger().info('Done')

     def _process_issue(self, issue):
         header = issue.title
@@ -155,7 +171,7 @@ class PRSimilarIssue:
         return issue_str, comments, number

     def _update_index_with_issues(self, issues_list, repo_name_for_index, upsert=False):
-        logging.info('Processing issues...')
+        get_logger().info('Processing issues...')
         corpus = Corpus()
         example_issue_record = Record(
             id=f"example_issue_{repo_name_for_index}",
@@ -171,9 +187,9 @@ class PRSimilarIssue:

             counter += 1
             if counter % 100 == 0:
-                logging.info(f"Scanned {counter} issues")
+                get_logger().info(f"Scanned {counter} issues")
             if counter >= self.max_issues_to_scan:
-                logging.info(f"Scanned {self.max_issues_to_scan} issues, stopping")
+                get_logger().info(f"Scanned {self.max_issues_to_scan} issues, stopping")
                 break

             issue_str, comments, number = self._process_issue(issue)
@@ -210,9 +226,9 @@ class PRSimilarIssue:
                 )
                 corpus.append(comment_record)
         df = pd.DataFrame(corpus.dict()["documents"])
-        logging.info('Done')
+        get_logger().info('Done')

-        logging.info('Embedding...')
+        get_logger().info('Embedding...')
         openai.api_key = get_settings().openai.key
         list_to_encode = list(df["text"].values)
         try:
@@ -220,7 +236,7 @@ class PRSimilarIssue:
             embeds = [record['embedding'] for record in res['data']]
         except:
             embeds = []
-            logging.error('Failed to embed entire list, embedding one by one...')
+            get_logger().error('Failed to embed entire list, embedding one by one...')
             for i, text in enumerate(list_to_encode):
                 try:
                     res = openai.Embedding.create(input=[text], engine=MODEL)
@@ -231,21 +247,23 @@ class PRSimilarIssue:
         meta = DatasetMetadata.empty()
         meta.dense_model.dimension = len(embeds[0])
         ds = Dataset.from_pandas(df, meta)
-        logging.info('Done')
+        get_logger().info('Done')

         api_key = get_settings().pinecone.api_key
         environment = get_settings().pinecone.environment
         if not upsert:
-            logging.info('Creating index from scratch...')
+            get_logger().info('Creating index from scratch...')
             ds.to_pinecone_index(self.index_name, api_key=api_key, environment=environment)
+            time.sleep(15)  # wait for pinecone to finalize indexing before querying
         else:
-            logging.info('Upserting index...')
+            get_logger().info('Upserting index...')
             namespace = ""
             batch_size: int = 100
             concurrency: int = 10
             pinecone.init(api_key=api_key, environment=environment)
             ds._upsert_to_index(self.index_name, namespace, batch_size, concurrency)
-            logging.info('Done')
+            time.sleep(5)  # wait for pinecone to finalize upserting before querying
+        get_logger().info('Done')


 class IssueLevel(str, Enum):
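The reworked loop in `PRSimilarIssue.run()` hardens the Pinecone match handling: the seeded `example_issue_<repo>` record is skipped, and ids without a numeric suffix no longer crash the query. A small standalone demo of that parsing rule, with ids invented purely for illustration:

# Demo of the id-parsing guard added above; the ids here are made up.
ids = ["example_issue_myrepo", "123.0", "456_2.1", "malformed_id"]
for rid in ids:
    if 'example_issue_' in rid:
        continue  # seeded example record, never a real issue
    try:
        issue_number = int(rid.split('.')[0].split('_')[-1])
    except ValueError:
        continue  # non-numeric suffix: skip instead of raising
    print(issue_number)  # prints 123, then 2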
@@ -1,5 +1,4 @@
 import copy
-import logging
 from datetime import date
 from time import sleep
 from typing import Tuple
@@ -10,8 +9,9 @@ from pr_agent.algo.ai_handler import AiHandler
 from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models
 from pr_agent.algo.token_handler import TokenHandler
 from pr_agent.config_loader import get_settings
-from pr_agent.git_providers import GithubProvider, get_git_provider
+from pr_agent.git_providers import get_git_provider
 from pr_agent.git_providers.git_provider import get_main_pr_language
+from pr_agent.log import get_logger

 CHANGELOG_LINES = 50

@@ -48,26 +48,26 @@ class PRUpdateChangelog:
     async def run(self):
         # assert type(self.git_provider) == GithubProvider, "Currently only Github is supported"

-        logging.info('Updating the changelog...')
+        get_logger().info('Updating the changelog...')
         if get_settings().config.publish_output:
             self.git_provider.publish_comment("Preparing changelog updates...", is_temporary=True)
         await retry_with_fallback_models(self._prepare_prediction)
-        logging.info('Preparing PR changelog updates...')
+        get_logger().info('Preparing PR changelog updates...')
         new_file_content, answer = self._prepare_changelog_update()
         if get_settings().config.publish_output:
             self.git_provider.remove_initial_comment()
-            logging.info('Publishing changelog updates...')
+            get_logger().info('Publishing changelog updates...')
             if self.commit_changelog:
-                logging.info('Pushing PR changelog updates to repo...')
+                get_logger().info('Pushing PR changelog updates to repo...')
                 self._push_changelog_update(new_file_content, answer)
             else:
-                logging.info('Publishing PR changelog as comment...')
+                get_logger().info('Publishing PR changelog as comment...')
                 self.git_provider.publish_comment(f"**Changelog updates:**\n\n{answer}")

     async def _prepare_prediction(self, model: str):
-        logging.info('Getting PR diff...')
+        get_logger().info('Getting PR diff...')
         self.patches_diff = get_pr_diff(self.git_provider, self.token_handler, model)
-        logging.info('Getting AI prediction...')
+        get_logger().info('Getting AI prediction...')
         self.prediction = await self._get_prediction(model)

     async def _get_prediction(self, model: str):
@@ -77,8 +77,8 @@ class PRUpdateChangelog:
         system_prompt = environment.from_string(get_settings().pr_update_changelog_prompt.system).render(variables)
         user_prompt = environment.from_string(get_settings().pr_update_changelog_prompt.user).render(variables)
         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"\nSystem prompt:\n{system_prompt}")
-            logging.info(f"\nUser prompt:\n{user_prompt}")
+            get_logger().info(f"\nSystem prompt:\n{system_prompt}")
+            get_logger().info(f"\nUser prompt:\n{user_prompt}")
         response, finish_reason = await self.ai_handler.chat_completion(model=model, temperature=0.2,
                                                                         system=system_prompt, user=user_prompt)

@@ -100,7 +100,7 @@ class PRUpdateChangelog:
                      "\n>'/update_changelog --pr_update_changelog.push_changelog_changes=true'\n"

         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"answer:\n{answer}")
+            get_logger().info(f"answer:\n{answer}")

         return new_file_content, answer

@@ -149,7 +149,7 @@ Example:
         except Exception:
             self.changelog_file_str = ""
             if self.commit_changelog:
-                logging.info("No CHANGELOG.md file found in the repository. Creating one...")
+                get_logger().info("No CHANGELOG.md file found in the repository. Creating one...")
                 changelog_file = self.git_provider.repo_obj.create_file(path="CHANGELOG.md",
                                                                         message='add CHANGELOG.md',
                                                                         content="",
@@ -20,4 +20,5 @@ ujson==5.8.0
 azure-devops==7.1.0b3
 msrest==0.7.1
 pinecone-client
-pinecone-datasets @ git+https://github.com/mrT23/pinecone-datasets.git@main
+pinecone-datasets @ git+https://github.com/mrT23/pinecone-datasets.git@main
+loguru==0.7.2
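The `loguru==0.7.2` pin is the new dependency behind `get_logger()`. For illustration only, and not something this diff configures, loguru can also emit structured JSON logs through its built-in serialize option:

# Illustration of loguru's JSON output; this diff does not configure it.
import sys

from loguru import logger

logger.remove()
logger.add(sys.stdout, serialize=True)  # each record is rendered as one JSON object
logger.bind(pr_url="https://example.com/pr/1").info("Reviewing PR")  # placeholder URL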
@@ -43,18 +43,6 @@ class TestHandlePatchDeletions:
         assert handle_patch_deletions(patch, original_file_content_str, new_file_content_str,
                                       file_name) == patch.rstrip()

-    # Tests that handle_patch_deletions logs a message when verbosity_level is greater than 0
-    def test_handle_patch_deletions_happy_path_verbosity_level_greater_than_0(self, caplog):
-        patch = '--- a/file.py\n+++ b/file.py\n@@ -1,2 +1,2 @@\n-foo\n-bar\n+baz\n'
-        original_file_content_str = 'foo\nbar\n'
-        new_file_content_str = ''
-        file_name = 'file.py'
-        get_settings().config.verbosity_level = 1
-
-        with caplog.at_level(logging.INFO):
-            handle_patch_deletions(patch, original_file_content_str, new_file_content_str, file_name)
-        assert any("Processing file" in message for message in caplog.messages)
-
     # Tests that handle_patch_deletions returns 'File was deleted' when new_file_content_str is empty
     def test_handle_patch_deletions_edge_case_new_file_content_empty(self):
         patch = '--- a/file.py\n+++ b/file.py\n@@ -1,2 +1,2 @@\n-foo\n-bar\n'
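The deleted test relied on pytest's `caplog`, which hooks the stdlib `logging` module; once the code logs through the loguru-backed `get_logger()`, `caplog` no longer sees those records, which presumably is why the test was dropped rather than ported. If an equivalent test were wanted, one option is to register a plain callable as a loguru sink; the sketch below is hedged and stands in for the real call under test.

# Sketch of a loguru-compatible replacement for the removed caplog test.
from loguru import logger


def test_logs_processing_message():
    messages = []
    handler_id = logger.add(messages.append, format="{message}")
    try:
        logger.info("Processing file file.py")  # stand-in for the real call under test
    finally:
        logger.remove(handler_id)
    assert any("Processing file" in m for m in messages)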