diff --git a/.github/workflows/build-and-test.yaml b/.github/workflows/build-and-test.yaml index 960da61b..114bbb7e 100644 --- a/.github/workflows/build-and-test.yaml +++ b/.github/workflows/build-and-test.yaml @@ -2,6 +2,8 @@ name: Build-and-test on: push: + pull_request: + types: [ opened, reopened ] jobs: build-and-test: diff --git a/CONFIGURATION.md b/CONFIGURATION.md deleted file mode 100644 index 9bbfd910..00000000 --- a/CONFIGURATION.md +++ /dev/null @@ -1,57 +0,0 @@ -## Configuration - -The different tools and sub-tools used by CodiumAI PR-Agent are adjustable via the **[configuration file](pr_agent/settings/configuration.toml)** - -### Working from CLI -When running from source (CLI), your local configuration file will be initially used. - -Example for invoking the 'review' tools via the CLI: - -``` -python cli.py --pr-url= review -``` -In addition to general configurations, the 'review' tool will use parameters from the `[pr_reviewer]` section (every tool has a dedicated section in the configuration file). - -Note that you can print results locally, without publishing them, by setting in `configuration.toml`: - -``` -[config] -publish_output=true -verbosity_level=2 -``` -This is useful for debugging or experimenting with the different tools. - -### Working from pre-built repo (GitHub Action/GitHub App/Docker) -When running PR-Agent from a pre-built repo, the default configuration file will be loaded. - -To edit the configuration, you have two options: -1. Place a local configuration file in the root of your local repo. The local file will be used instead of the default one. -2. For online usage, just add `--config_path=` to you command, to edit a specific configuration value. -For example if you want to edit `pr_reviewer` configurations, you can run: -``` -/review --pr_reviewer.extra_instructions="..." --pr_reviewer.require_score_review=false ... -``` - -Any configuration value in `configuration.toml` file can be similarly edited. 
- -### General configuration parameters - -#### Changing a model -See [here](pr_agent/algo/__init__.py) for the list of available models. - -To use Llama2 model, for example, set: -``` -[config] -model = "replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1" -[replicate] -key = ... -``` -(you can obtain a Llama2 key from [here](https://replicate.com/replicate/llama-2-70b-chat/api)) - -Also review the [AiHandler](pr_agent/algo/ai_handler.py) file for instruction how to set keys for other models. - -#### Extra instructions -All PR-Agent tools have a parameter called `extra_instructions`, that enables to add free-text extra instructions. Example usage: -``` -/update_changelog --pr_update_changelog.extra_instructions="Make sure to update also the version ..." -``` \ No newline at end of file diff --git a/Dockerfile.github_action b/Dockerfile.github_action index d6763f0a..dd1ae750 100644 --- a/Dockerfile.github_action +++ b/Dockerfile.github_action @@ -2,7 +2,8 @@ FROM python:3.10 as base WORKDIR /app ADD pyproject.toml . -RUN pip install . && rm pyproject.toml +ADD requirements.txt . +RUN pip install . && rm pyproject.toml requirements.txt ENV PYTHONPATH=/app ADD pr_agent pr_agent ADD github_action/entrypoint.sh / diff --git a/INSTALL.md b/INSTALL.md index 31952114..74368ac0 100644 --- a/INSTALL.md +++ b/INSTALL.md @@ -1,9 +1,23 @@ ## Installation +To get started with PR-Agent quickly, you first need to acquire two tokens: + +1. An OpenAI key from [here](https://platform.openai.com/), with access to GPT-4. +2. A GitHub personal access token (classic) with the repo scope. 
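Once you have both tokens, one optional convenience (used informally throughout this guide's Docker examples) is to keep them in shell variables rather than inlining them into commands. The variable names below are illustrative only; PR-Agent itself does not require them:

```shell
# Example only: stash the two tokens in shell variables so later docker
# commands can reference them instead of inlining secrets.
# Placeholder values -- substitute your real keys.
export OPENAI_KEY="sk-your-openai-key"
export GITHUB_USER_TOKEN="ghp_your-github-token"

# e.g. later:
#   docker run --rm -it -e OPENAI.KEY="$OPENAI_KEY" -e GITHUB.USER_TOKEN="$GITHUB_USER_TOKEN" ...
```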
+ +There are several ways to use PR-Agent: + +- [Method 1: Use Docker image (no installation required)](INSTALL.md#method-1-use-docker-image-no-installation-required) +- [Method 2: Run from source](INSTALL.md#method-2-run-from-source) +- [Method 3: Run as a GitHub Action](INSTALL.md#method-3-run-as-a-github-action) +- [Method 4: Run as a polling server](INSTALL.md#method-4-run-as-a-polling-server) +- [Method 5: Run as a GitHub App](INSTALL.md#method-5-run-as-a-github-app) +- [Method 6: Deploy as a Lambda Function](INSTALL.md#method-6---deploy-as-a-lambda-function) +- [Method 7: AWS CodeCommit](INSTALL.md#method-7---aws-codecommit-setup) --- -#### Method 1: Use Docker image (no installation required) +### Method 1: Use Docker image (no installation required) To request a review for a PR, or ask a question about a PR, you can run directly from the Docker image. Here's how: @@ -18,6 +32,18 @@ docker run --rm -it -e OPENAI.KEY= -e GITHUB.USER_TOKEN= c ``` docker run --rm -it -e OPENAI.KEY= -e GITHUB.USER_TOKEN= codiumai/pr-agent --pr_url ask "" ``` +Note: If you want to ensure you're running a specific version of the Docker image, consider using the image's digest. +The digest is a unique identifier for a specific version of an image. You can pull and run an image using its digest by referencing it like so: repository@sha256:digest. Always ensure you're using the correct and trusted digest for your operations. + +1. To request a review for a PR using a specific digest, run the following command: +```bash +docker run --rm -it -e OPENAI.KEY= -e GITHUB.USER_TOKEN= codiumai/pr-agent@sha256:71b5ee15df59c745d352d84752d01561ba64b6d51327f97d46152f0c58a5f678 --pr_url review +``` + +2. 
To ask a question about a PR using the same digest, run the following command: +```bash +docker run --rm -it -e OPENAI.KEY= -e GITHUB.USER_TOKEN= codiumai/pr-agent@sha256:71b5ee15df59c745d352d84752d01561ba64b6d51327f97d46152f0c58a5f678 --pr_url ask "" +``` Possible questions you can ask include: @@ -29,52 +55,7 @@ Possible questions you can ask include: --- -#### Method 2: Run as a GitHub Action - -You can use our pre-built Github Action Docker image to run PR-Agent as a Github Action. - -1. Add the following file to your repository under `.github/workflows/pr_agent.yml`: - -```yaml -on: - pull_request: - issue_comment: -jobs: - pr_agent_job: - runs-on: ubuntu-latest - name: Run pr agent on every pull request, respond to user comments - steps: - - name: PR Agent action step - id: pragent - uses: Codium-ai/pr-agent@main - env: - OPENAI_KEY: ${{ secrets.OPENAI_KEY }} - GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} -``` - -2. Add the following secret to your repository under `Settings > Secrets`: - -``` -OPENAI_KEY: -``` - -The GITHUB_TOKEN secret is automatically created by GitHub. - -3. Merge this change to your main branch. -When you open your next PR, you should see a comment from `github-actions` bot with a review of your PR, and instructions on how to use the rest of the tools. - -4. You may configure PR-Agent by adding environment variables under the env section corresponding to any configurable property in the [configuration](./CONFIGURATION.md) file. Some examples: -```yaml - env: - # ... previous environment values - OPENAI.ORG: "" - PR_REVIEWER.REQUIRE_TESTS_REVIEW: "false" # Disable tests review - PR_CODE_SUGGESTIONS.NUM_CODE_SUGGESTIONS: 6 # Increase number of code suggestions -``` - ---- - -#### Method 3: Run from source +### Method 2: Run from source 1. 
Clone this repository: @@ -100,15 +81,85 @@ chmod 600 pr_agent/settings/.secrets.toml ``` export PYTHONPATH=[$PYTHONPATH:] -python pr_agent/cli.py --pr_url review -python pr_agent/cli.py --pr_url ask -python pr_agent/cli.py --pr_url describe -python pr_agent/cli.py --pr_url improve +python pr_agent/cli.py --pr_url /review +python pr_agent/cli.py --pr_url /ask +python pr_agent/cli.py --pr_url /describe +python pr_agent/cli.py --pr_url /improve ``` --- -#### Method 4: Run as a polling server +### Method 3: Run as a GitHub Action + +You can use our pre-built Github Action Docker image to run PR-Agent as a Github Action. + +1. Add the following file to your repository under `.github/workflows/pr_agent.yml`: + +```yaml +on: + pull_request: + issue_comment: +jobs: + pr_agent_job: + runs-on: ubuntu-latest + permissions: + issues: write + pull-requests: write + contents: write + name: Run pr agent on every pull request, respond to user comments + steps: + - name: PR Agent action step + id: pragent + uses: Codium-ai/pr-agent@main + env: + OPENAI_KEY: ${{ secrets.OPENAI_KEY }} + GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} +``` +** if you want to pin your action to a specific commit for stability reasons +```yaml +on: + pull_request: + issue_comment: + +jobs: + pr_agent_job: + runs-on: ubuntu-latest + permissions: + issues: write + pull-requests: write + contents: write + name: Run pr agent on every pull request, respond to user comments + steps: + - name: PR Agent action step + id: pragent + uses: Codium-ai/pr-agent@ + env: + OPENAI_KEY: ${{ secrets.OPENAI_KEY }} + GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} +``` +2. Add the following secret to your repository under `Settings > Secrets`: + +``` +OPENAI_KEY: +``` + +The GITHUB_TOKEN secret is automatically created by GitHub. + +3. Merge this change to your main branch. +When you open your next PR, you should see a comment from `github-actions` bot with a review of your PR, and instructions on how to use the rest of the tools. 
+ +4. You may configure PR-Agent by adding environment variables under the env section corresponding to any configurable property in the [configuration](./Usage.md) file. Some examples: +```yaml + env: + # ... previous environment values + OPENAI.ORG: "" + PR_REVIEWER.REQUIRE_TESTS_REVIEW: "false" # Disable tests review + PR_CODE_SUGGESTIONS.NUM_CODE_SUGGESTIONS: 6 # Increase number of code suggestions +``` + +--- + +### Method 4: Run as a polling server Request reviews by tagging your Github user on a PR Follow steps 1-3 of method 2. @@ -120,7 +171,7 @@ python pr_agent/servers/github_polling.py --- -#### Method 5: Run as a GitHub App +### Method 5: Run as a GitHub App Allowing you to automate the review process on your private or public repositories. 1. Create a GitHub App from the [Github Developer Portal](https://docs.github.com/en/developers/apps/creating-a-github-app). @@ -200,9 +251,12 @@ docker push codiumai/pr-agent:github_app # Push to your Docker repository 9. Install the app by navigating to the "Install App" tab and selecting your desired repositories. +> **Note:** When running PR-Agent from GitHub App, the default configuration file (configuration.toml) will be loaded.
+> However, you can override the default tool parameters by uploading a local configuration file (`.pr_agent.toml`) to the root of your repo.

+> For more information please check out [CONFIGURATION.md](Usage.md#working-from-github-app-pre-built-repo) --- -#### Deploy as a Lambda Function +### Method 6 - Deploy as a Lambda Function 1. Follow steps 1-5 of [Method 5](#method-5-run-as-a-github-app). 2. Build a docker image that can be used as a lambda function @@ -218,3 +272,80 @@ docker push codiumai/pr-agent:github_app # Push to your Docker repository 5. Configure the lambda function to have a Function URL. 6. Go back to steps 8-9 of [Method 5](#method-5-run-as-a-github-app) with the function url as your Webhook URL. The Webhook URL would look like `https:///api/v1/github_webhooks` + +--- + +### Method 7 - AWS CodeCommit Setup + +Not all features have been added to CodeCommit yet. As of right now, CodeCommit has been implemented to run the pr-agent CLI on the command line, using AWS credentials stored in environment variables. (More features will be added in the future.) The following is a set of instructions to have pr-agent do a review of your CodeCommit pull request from the command line: + +1. Create an IAM user that you will use to read CodeCommit pull requests and post comments + * Note: That user should have CLI access only, not Console access +2. Add IAM permissions to that user, to allow access to CodeCommit (see IAM Role example below) +3. Generate an Access Key for your IAM user +4. Set the Access Key and Secret using environment variables (see Access Key example below) +5. Set the `git_provider` value to `codecommit` in the `pr_agent/settings/configuration.toml` settings file +6. 
Set the `PYTHONPATH` to include your `pr-agent` project directory
+    * Option A: Add `PYTHONPATH="/PATH/TO/PROJECTS/pr-agent"` to your `.env` file
+    * Option B: Set `PYTHONPATH` and run the CLI in one command, for example:
+        * `PYTHONPATH="/PATH/TO/PROJECTS/pr-agent" python pr_agent/cli.py [--ARGS]`
+
+##### AWS CodeCommit IAM Role Example
+
+Example IAM permissions for that user, allowing access to CodeCommit:
+
+* Note: The following is a working example of IAM permissions that has read access to the repositories and write access to allow posting comments
+* Note: If you only want pr-agent to review your pull requests, you can tighten the IAM permissions further; however, this IAM example will work, and allow pr-agent to post comments to the PR
+* Note: You may want to replace the `"Resource": "*"` with your list of repos, to limit access to only those repos
+
+```
+{
+    "Version": "2012-10-17",
+    "Statement": [
+        {
+            "Effect": "Allow",
+            "Action": [
+                "codecommit:BatchDescribe*",
+                "codecommit:BatchGet*",
+                "codecommit:Describe*",
+                "codecommit:EvaluatePullRequestApprovalRules",
+                "codecommit:Get*",
+                "codecommit:List*",
+                "codecommit:PostComment*",
+                "codecommit:PutCommentReaction",
+                "codecommit:UpdatePullRequestDescription",
+                "codecommit:UpdatePullRequestTitle"
+            ],
+            "Resource": "*"
+        }
+    ]
+}
+```
+
+##### AWS CodeCommit Access Key and Secret
+
+Example of setting the Access Key and Secret using environment variables:
+
+```sh
+export AWS_ACCESS_KEY_ID="XXXXXXXXXXXXXXXX"
+export AWS_SECRET_ACCESS_KEY="XXXXXXXXXXXXXXXX"
+export AWS_DEFAULT_REGION="us-east-1"
+```
+
+##### AWS CodeCommit CLI Example
+
+After you set up AWS CodeCommit using the instructions above, here is an example CLI run that tells pr-agent to **review** a given pull request.
+(Replace your specific PYTHONPATH and PR URL in the example) + +```sh +PYTHONPATH="/PATH/TO/PROJECTS/pr-agent" python pr_agent/cli.py \ + --pr_url https://us-east-1.console.aws.amazon.com/codesuite/codecommit/repositories/MY_REPO_NAME/pull-requests/321 \ + review +``` + +### Appendix - **Debugging LLM API Calls** +If you're testing your codium/pr-agent server, and need to see if calls were made successfully + the exact call logs, you can use the [LiteLLM Debugger tool](https://docs.litellm.ai/docs/debugging/hosted_debugging). + +You can do this by setting `litellm_debugger=true` in configuration.toml. Your Logs will be viewable in real-time @ `admin.litellm.ai/`. Set your email in the `.secrets.toml` under 'user_email'. + + \ No newline at end of file diff --git a/README.md b/README.md index d2c4e171..5bf7d52e 100644 --- a/README.md +++ b/README.md @@ -15,106 +15,123 @@ Making pull requests less painful with an AI agent
-CodiumAI `PR-Agent` is an open-source tool aiming to help developers review pull requests faster and more efficiently. It automatically analyzes the pull request and can provide several types of feedback: +CodiumAI `PR-Agent` is an open-source tool aiming to help developers review pull requests faster and more efficiently. It automatically analyzes the pull request and can provide several types of PR feedback: -**Auto-Description**: Automatically generating PR description - title, type, summary, code walkthrough and PR labels. +**Auto Description (/describe)**: Automatically generating [PR description](https://github.com/Codium-ai/pr-agent/pull/229#issue-1860711415) - title, type, summary, code walkthrough and labels. \ -**PR Review**: Adjustable feedback about the PR main theme, type, relevant tests, security issues, focus, score, and various suggestions for the PR content. +**Auto Review (/review)**: [Adjustable feedback](https://github.com/Codium-ai/pr-agent/pull/229#issuecomment-1695022908) about the PR main theme, type, relevant tests, security issues, score, and various suggestions for the PR content. \ -**Question Answering**: Answering free-text questions about the PR. +**Question Answering (/ask ...)**: Answering [free-text questions](https://github.com/Codium-ai/pr-agent/pull/229#issuecomment-1695021332) about the PR. \ -**Code Suggestions**: Committable code suggestions for improving the PR. +**Code Suggestions (/improve)**: [Committable code suggestions](https://github.com/Codium-ai/pr-agent/pull/229#discussion_r1306919276) for improving the PR. \ -**Update Changelog**: Automatically updating the CHANGELOG.md file with the PR changes. +**Update Changelog (/update_changelog)**: Automatically updating the CHANGELOG.md file with the [PR changes](https://github.com/Codium-ai/pr-agent/pull/168#discussion_r1282077645). -

Example results:

+ +See the [usage guide](./Usage.md) for instructions how to run the different tools from [CLI](./Usage.md#working-from-a-local-repo-cli), or by [online usage](./Usage.md#online-usage). + +

Example results:

-

/describe:

+

/describe:

-

/review:

+ +

/review:

-

/reflect_and_review:

-
-

- -

-
-

/ask:

-
-

- -

-
-

/improve:

-
-

- -

-
+ +[//]: # (

/reflect_and_review:

) + +[//]: # (
) + +[//]: # (

) + +[//]: # () + +[//]: # (

) + +[//]: # (
) + +[//]: # (

/ask:

) + +[//]: # (
) + +[//]: # (

) + +[//]: # () + +[//]: # (

) + +[//]: # (
) + +[//]: # (

/improve:

) + +[//]: # (
) + +[//]: # (

) + +[//]: # () + +[//]: # (

) + +[//]: # (
)
- +## Table of Contents - [Overview](#overview) - [Try it now](#try-it-now) - [Installation](#installation) -- [Configuration](./CONFIGURATION.md) +- [Usage guide](./Usage.md) - [How it works](#how-it-works) - [Why use PR-Agent](#why-use-pr-agent) - [Roadmap](#roadmap) -- [Similar projects](#similar-projects)
## Overview `PR-Agent` offers extensive pull request functionalities across various git providers: -| | | GitHub | Gitlab | Bitbucket | -|-------|---------------------------------------------|:------:|:------:|:---------:| -| TOOLS | Review | :white_check_mark: | :white_check_mark: | :white_check_mark: | -| | โฎ‘ Inline review | :white_check_mark: | :white_check_mark: | | -| | Ask | :white_check_mark: | :white_check_mark: | :white_check_mark: -| | Auto-Description | :white_check_mark: | :white_check_mark: | | -| | Improve Code | :white_check_mark: | :white_check_mark: | | -| | Reflect and Review | :white_check_mark: | | | -| | Update CHANGELOG.md | :white_check_mark: | | | -| | | | | | -| USAGE | CLI | :white_check_mark: | :white_check_mark: | :white_check_mark: | -| | App / webhook | :white_check_mark: | :white_check_mark: | | -| | Tagging bot | :white_check_mark: | | | -| | Actions | :white_check_mark: | | | -| | | | | | -| CORE | PR compression | :white_check_mark: | :white_check_mark: | :white_check_mark: | -| | Repo language prioritization | :white_check_mark: | :white_check_mark: | :white_check_mark: | -| | Adaptive and token-aware
file patch fitting | :white_check_mark: | :white_check_mark: | :white_check_mark: | -| | Multiple models support | :white_check_mark: | :white_check_mark: | :white_check_mark: | -| | Incremental PR Review | :white_check_mark: | | | +| | | GitHub | Gitlab | Bitbucket | CodeCommit | Azure DevOps | +|-------|---------------------------------------------|:------:|:------:|:---------:|:----------:|:----------:| +| TOOLS | Review | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| | Ask | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: +| | Auto-Description | :white_check_mark: | :white_check_mark: | | :white_check_mark: | :white_check_mark: | +| | Improve Code | :white_check_mark: | :white_check_mark: | | :white_check_mark: | | +| | โฎ‘ Extended | :white_check_mark: | :white_check_mark: | | :white_check_mark: | | +| | Reflect and Review | :white_check_mark: | | | | :white_check_mark: | +| | Update CHANGELOG.md | :white_check_mark: | | | | | +| | | | | | | | +| USAGE | CLI | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| | App / webhook | :white_check_mark: | :white_check_mark: | | | | +| | Tagging bot | :white_check_mark: | | | | | +| | Actions | :white_check_mark: | | | | | +| | | | | | | | +| CORE | PR compression | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| | Repo language prioritization | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| | Adaptive and token-aware
file patch fitting | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| | Multiple models support | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| | Incremental PR Review | :white_check_mark: | | | | | -Examples for invoking the different tools via the CLI: -- **Review**: python cli.py --pr_url= review -- **Describe**: python cli.py --pr_url= describe -- **Improve**: python cli.py --pr_url= improve -- **Ask**: python cli.py --pr_url= ask "Write me a poem about this PR" -- **Reflect**: python cli.py --pr_url= reflect -- **Update Changelog**: python cli.py --pr_url= update_changelog - -"" is the url of the relevant PR (for example: https://github.com/Codium-ai/pr-agent/pull/50). - -In the [configuration](./CONFIGURATION.md) file you can select your git provider (GitHub, Gitlab, Bitbucket), and further configure the different tools. +Review the **[usage guide](./Usage.md)** section for detailed instructions how to use the different tools, select the relevant git provider (GitHub, Gitlab, Bitbucket,...), and adjust the configuration file to your needs. ## Try it now -Try GPT-4 powered PR-Agent on your public GitHub repository for free. Just mention `@CodiumAI-Agent` and add the desired command in any PR comment! The agent will generate a response based on your command. +You can try GPT-4 powered PR-Agent, on your public GitHub repository, instantly. Just mention `@CodiumAI-Agent` and add the desired command in any PR comment. The agent will generate a response based on your command. 
+For example, add a comment to any pull request with the following text: +``` +@CodiumAI-Agent /review +``` +and the agent will respond with a review of your PR ![Review generation process](https://www.codium.ai/images/demo-2.gif) -To set up your own PR-Agent, see the [Installation](#installation) section + +To set up your own PR-Agent, see the [Installation](#installation) section below. --- @@ -128,13 +145,14 @@ To get started with PR-Agent quickly, you first need to acquire two tokens: There are several ways to use PR-Agent: - [Method 1: Use Docker image (no installation required)](INSTALL.md#method-1-use-docker-image-no-installation-required) -- [Method 2: Run as a GitHub Action](INSTALL.md#method-2-run-as-a-github-action) -- [Method 3: Run from source](INSTALL.md#method-3-run-from-source) +- [Method 2: Run from source](INSTALL.md#method-2-run-from-source) +- [Method 3: Run as a GitHub Action](INSTALL.md#method-3-run-as-a-github-action) - [Method 4: Run as a polling server](INSTALL.md#method-4-run-as-a-polling-server) - Request reviews by tagging your GitHub user on a PR - [Method 5: Run as a GitHub App](INSTALL.md#method-5-run-as-a-github-app) - Allowing you to automate the review process on your private or public repositories - +- [Method 6: Deploy as a Lambda Function](INSTALL.md#method-6---deploy-as-a-lambda-function) +- [Method 7: AWS CodeCommit](INSTALL.md#method-7---aws-codecommit-setup) ## How it works @@ -152,16 +170,17 @@ Here are some advantages of PR-Agent: - We emphasize **real-life practical usage**. Each tool (review, improve, ask, ...) has a single GPT-4 call, no more. We feel that this is critical for realistic team usage - obtaining an answer quickly (~30 seconds) and affordably. - Our [PR Compression strategy](./PR_COMPRESSION.md) is a core ability that enables to effectively tackle both short and long PRs. -- Our JSON prompting strategy enables to have **modular, customizable tools**. 
For example, the '/review' tool categories can be controlled via the [configuration](./CONFIGURATION.md) file. Adding additional categories is easy and accessible. -- We support **multiple git providers** (GitHub, Gitlab, Bitbucket), **multiple ways** to use the tool (CLI, GitHub Action, GitHub App, Docker, ...), and **multiple models** (GPT-4, GPT-3.5, Anthropic, Cohere, Llama2). +- Our JSON prompting strategy enables to have **modular, customizable tools**. For example, the '/review' tool categories can be controlled via the [configuration](pr_agent/settings/configuration.toml) file. Adding additional categories is easy and accessible. +- We support **multiple git providers** (GitHub, Gitlab, Bitbucket, CodeCommit), **multiple ways** to use the tool (CLI, GitHub Action, GitHub App, Docker, ...), and **multiple models** (GPT-4, GPT-3.5, Anthropic, Cohere, Llama2). - We are open-source, and welcome contributions from the community. ## Roadmap - [x] Support additional models, as a replacement for OpenAI (see [here](https://github.com/Codium-ai/pr-agent/pull/172)) -- [ ] Develop additional logic for handling large PRs +- [x] Develop additional logic for handling large PRs (see [here](https://github.com/Codium-ai/pr-agent/pull/229)) - [ ] Add additional context to the prompt. For example, repo (or relevant files) summarization, with tools such a [ctags](https://github.com/universal-ctags/ctags) +- [ ] PR-Agent for issues, and just for pull requests - [ ] Adding more tools. 
Possible directions: - [x] PR description - [x] Inline code suggestions @@ -174,8 +193,8 @@ Here are some advantages of PR-Agent: ## Similar Projects -- [CodiumAI - Meaningful tests for busy devs](https://github.com/Codium-ai/codiumai-vscode-release) +- [CodiumAI - Meaningful tests for busy devs](https://github.com/Codium-ai/codiumai-vscode-release) (although various capabilities are much more advanced in the CodiumAI IDE plugins) - [Aider - GPT powered coding in your terminal](https://github.com/paul-gauthier/aider) - [openai-pr-reviewer](https://github.com/coderabbitai/openai-pr-reviewer) - [CodeReview BOT](https://github.com/anc95/ChatGPT-CodeReview) -- [AI-Maintainer](https://github.com/merwanehamadi/AI-Maintainer) +- [AI-Maintainer](https://github.com/merwanehamadi/AI-Maintainer) \ No newline at end of file diff --git a/Usage.md b/Usage.md new file mode 100644 index 00000000..336de974 --- /dev/null +++ b/Usage.md @@ -0,0 +1,182 @@ +## Usage guide + +### Table of Contents +- [Introduction](#introduction) +- [Working from a local repo (CLI)](#working-from-a-local-repo-cli) +- [Online usage](#online-usage) +- [Working with GitHub App](#working-with-github-app) +- [Working with GitHub Action](#working-with-github-action) +- [Appendix - additional configurations walkthrough](#appendix---additional-configurations-walkthrough) + +### Introduction + +There are 3 basic ways to invoke CodiumAI PR-Agent: +1. Locally running a CLI command +2. Online usage - by [commenting](https://github.com/Codium-ai/pr-agent/pull/229#issuecomment-1695021901) on a PR +3. Enabling PR-Agent tools to run automatically when a new PR is opened + +See the [installation guide](/INSTALL.md) for instructions on how to setup your own PR-Agent. + +Specifically, CLI commands can be issued by invoking a pre-built [docker image](/INSTALL.md#running-from-source), or by invoking a [locally cloned repo](INSTALL.md#method-2-run-from-source). 
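Whichever invocation route is used, the commands share one shape: a tool name, optionally followed by `--section.key=value` overrides. A toy parser for that shape, for illustration only (this is not PR-Agent's actual argument handling, which lives in `pr_agent/cli.py`):

```python
import shlex

def parse_command(text: str):
    """Split a PR-Agent-style command string into (tool, options).

    Toy illustration of the '/tool --section.key=value' shape used in
    this guide; not PR-Agent's real parser.
    """
    parts = shlex.split(text)
    tool = parts[0].lstrip("/")
    opts = {}
    for token in parts[1:]:
        if token.startswith("--") and "=" in token:
            key, value = token[2:].split("=", 1)
            opts[key] = value
    return tool, opts

tool, opts = parse_command('/review --pr_reviewer.extra_instructions="focus on tests"')
print(tool)                                    # review
print(opts["pr_reviewer.extra_instructions"])  # focus on tests
```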
+
+For online usage, you will need to set up either a [GitHub App](INSTALL.md#method-5-run-as-a-github-app) or a [GitHub Action](INSTALL.md#method-3-run-as-a-github-action).
+A GitHub App or GitHub Action also enables running PR-Agent tools automatically when a new PR is opened.
+
+
+#### The configuration file
+The different tools and sub-tools used by CodiumAI PR-Agent are adjustable via the **[configuration file](pr_agent/settings/configuration.toml)**.
+In addition to general configuration options, each tool has its own configurations. For example, the `review` tool will use parameters from the [pr_reviewer](/pr_agent/settings/configuration.toml#L16) section in the configuration file.
+
+**git provider:**
+The [git_provider](pr_agent/settings/configuration.toml#L4) field in the configuration file determines the git provider that will be used by PR-Agent. Currently, the following providers are supported:
+`
+"github", "gitlab", "azure", "codecommit", "local"
+`
+
+[//]: # (** online usage:**)
+
+[//]: # (Options that are available in the configuration file can be specified at run time when calling actions. Two examples:)
+
+[//]: # (```)
+
+[//]: # (- /review --pr_reviewer.extra_instructions="focus on the file: ...")
+
+[//]: # (- /describe --pr_description.add_original_user_description=false -pr_description.extra_instructions="make sure to mention: ...")
+
+[//]: # (```)
+
+### Working from a local repo (CLI)
+When running from your local repo (CLI), your local configuration file will be used.
+
+Examples for invoking the different tools via the CLI:
+
+- **Review**: `python cli.py --pr_url=<pr_url> /review`
+- **Describe**: `python cli.py --pr_url=<pr_url> /describe`
+- **Improve**: `python cli.py --pr_url=<pr_url> /improve`
+- **Ask**: `python cli.py --pr_url=<pr_url> /ask "Write me a poem about this PR"`
+- **Reflect**: `python cli.py --pr_url=<pr_url> /reflect`
+- **Update Changelog**: `python cli.py --pr_url=<pr_url> /update_changelog`
+
+`<pr_url>` is the URL of the relevant PR (for example: https://github.com/Codium-ai/pr-agent/pull/50).
+
+**Notes:**
+
+(1) In addition to editing your local configuration file, you can also change any configuration value by adding it to the command line:
+```
+python cli.py --pr_url=<pr_url> /review --pr_reviewer.extra_instructions="focus on the file: ..."
+```
+
+(2) You can print results locally, without publishing them, by setting in `configuration.toml`:
+```
+[config]
+publish_output=false
+verbosity_level=2
+```
+This is useful for debugging or experimenting with the different tools.
+
+
+### Online usage
+
+Online usage means invoking PR-Agent tools by [comments](https://github.com/Codium-ai/pr-agent/pull/229#issuecomment-1695021901) on a PR.
+Commands for invoking the different tools via comments:
+
+- **Review**: `/review`
+- **Describe**: `/describe`
+- **Improve**: `/improve`
+- **Ask**: `/ask "..."`
+- **Reflect**: `/reflect`
+- **Update Changelog**: `/update_changelog`
+
+
+To edit a specific configuration value, just add `--config_path=` to any command.
+For example, if you want to edit the `review` tool configurations, you can run:
+```
+/review --pr_reviewer.extra_instructions="..." --pr_reviewer.require_score_review=false
+```
+Any configuration value in the [configuration file](pr_agent/settings/configuration.toml) can be similarly edited.
+
+
+### Working with GitHub App
+When running PR-Agent from a [GitHub App](INSTALL.md#method-5-run-as-a-github-app), the default configurations from a pre-built repo will be initially loaded.
+ +#### GitHub app automatic tools +The [github_app](pr_agent/settings/configuration.toml#L56) section defines GitHub app specific configurations. +An important parameter is `pr_commands`, which is a list of tools that will be **run automatically when a new PR is opened**: +``` +[github_app] +pr_commands = [ + "/describe --pr_description.add_original_user_description=true --pr_description.keep_original_user_title=true", + "/auto_review", +] +``` +This means that when a new PR is opened, PR-Agent will run the `describe` and `auto_review` tools. +For the describe tool, the `add_original_user_description` and `keep_original_user_title` parameters will be set to true. + +However, you can override the default tool parameters by uploading a local configuration file called `.pr_agent.toml` to the root of your repo. +For example, if your local `.pr_agent.toml` file contains: +``` +[pr_description] +add_original_user_description = false +keep_original_user_title = false +``` +When a new PR is opened, PR-Agent will run the `describe` tool with the above parameters. + +Note that a local `.pr_agent.toml` file enables you to edit and customize the default parameters of any tool, not just the ones that are run automatically. + +#### Editing the prompts +The prompts for the various PR-Agent tools are defined in the `pr_agent/settings` folder. + +In practice, the prompts are loaded and stored as a standard setting object. +Hence, editing them is similar to editing any other configuration value - just place the relevant key in `.pr_agent.toml`file, and override the default value. + +For example, if you want to edit the prompts of the [describe](./pr_agent/settings/pr_description_prompts.toml) tool, you can add the following to your `.pr_agent.toml` file: +``` +[pr_description_prompt] +system=""" +... +""" +user=""" +... +""" +``` +Note that the new prompt will need to generate an output compatible with the relevant [post-process function](./pr_agent/tools/pr_description.py#L137). 
+
+### Working with GitHub Action
+TBD
+
+### Appendix - additional configurations walkthrough
+
+#### Changing a model
+See [here](pr_agent/algo/__init__.py) for the list of available models.
+
+To use the Llama2 model, for example, set:
+```
+[config]
+model = "replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1"
+[replicate]
+key = ...
+```
+(you can obtain a Llama2 key from [here](https://replicate.com/replicate/llama-2-70b-chat/api))
+
+Also review the [AiHandler](pr_agent/algo/ai_handler.py) file for instructions on how to set keys for other models.
+
+#### Extra instructions
+All PR-Agent tools have a parameter called `extra_instructions`, which lets you add free-text extra instructions. Example usage:
+```
+/update_changelog --pr_update_changelog.extra_instructions="Make sure to update also the version ..."
+```
+
+#### Azure DevOps provider
+To use the Azure DevOps provider, use the following settings in `configuration.toml`:
+```
+[config]
+git_provider="azure"
+use_repo_settings_file=false
+```
+
+And use the following settings (replacing the placeholder values) in `.secrets.toml`:
+```
+[azure_devops]
+org = "https://dev.azure.com/YOUR_ORGANIZATION/"
+pat = "YOUR_PAT_TOKEN"
+```
\ No newline at end of file
diff --git a/docker/Dockerfile b/docker/Dockerfile
index 61ab74cf..4336cacc 100644
--- a/docker/Dockerfile
+++ b/docker/Dockerfile
@@ -2,13 +2,18 @@ FROM python:3.10 as base
 WORKDIR /app
 ADD pyproject.toml .
-RUN pip install . && rm pyproject.toml
+ADD requirements.txt .
+RUN pip install .
&& rm pyproject.toml requirements.txt ENV PYTHONPATH=/app FROM base as github_app ADD pr_agent pr_agent CMD ["python", "pr_agent/servers/github_app.py"] +FROM base as bitbucket_app +ADD pr_agent pr_agent +CMD ["python", "pr_agent/servers/bitbucket_app.py"] + FROM base as github_polling ADD pr_agent pr_agent CMD ["python", "pr_agent/servers/github_polling.py"] diff --git a/pics/debugger.png b/pics/debugger.png new file mode 100644 index 00000000..7d8f201f Binary files /dev/null and b/pics/debugger.png differ diff --git a/pr_agent/agent/pr_agent.py b/pr_agent/agent/pr_agent.py index c1ab4803..9f0886d8 100644 --- a/pr_agent/agent/pr_agent.py +++ b/pr_agent/agent/pr_agent.py @@ -16,6 +16,7 @@ from pr_agent.tools.pr_update_changelog import PRUpdateChangelog from pr_agent.tools.pr_config import PRConfig command2class = { + "auto_review": PRReviewer, "answer": PRReviewer, "review": PRReviewer, "review_pr": PRReviewer, @@ -72,6 +73,8 @@ class PRAgent: if notify: notify() await PRReviewer(pr_url, is_answer=True, args=args).run() + elif action == "auto_review": + await PRReviewer(pr_url, is_auto=True, args=args).run() elif action in command2class: if notify: notify() diff --git a/pr_agent/algo/ai_handler.py b/pr_agent/algo/ai_handler.py index fb5f64fe..fcc5f04c 100644 --- a/pr_agent/algo/ai_handler.py +++ b/pr_agent/algo/ai_handler.py @@ -26,6 +26,7 @@ class AiHandler: try: openai.api_key = get_settings().openai.key litellm.openai_key = get_settings().openai.key + litellm.debugger = get_settings().config.litellm_debugger self.azure = False if get_settings().get("OPENAI.ORG", None): litellm.organization = get_settings().openai.org @@ -43,6 +44,10 @@ class AiHandler: litellm.cohere_key = get_settings().cohere.key if get_settings().get("REPLICATE.KEY", None): litellm.replicate_key = get_settings().replicate.key + if get_settings().get("REPLICATE.KEY", None): + litellm.replicate_key = get_settings().replicate.key + if get_settings().get("HUGGINGFACE.KEY", None): + 
litellm.huggingface_key = get_settings().huggingface.key except AttributeError as e: raise ValueError("OpenAI key is required") from e @@ -55,7 +60,7 @@ class AiHandler: @retry(exceptions=(APIError, Timeout, TryAgain, AttributeError, RateLimitError), tries=OPENAI_RETRIES, delay=2, backoff=2, jitter=(1, 3)) - async def chat_completion(self, model: str, temperature: float, system: str, user: str): + async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2): """ Performs a chat completion using the OpenAI ChatCompletion API. Retries in case of API errors or timeouts. diff --git a/pr_agent/algo/git_patch_processing.py b/pr_agent/algo/git_patch_processing.py index 1aec0006..1a2bd22b 100644 --- a/pr_agent/algo/git_patch_processing.py +++ b/pr_agent/algo/git_patch_processing.py @@ -1,5 +1,4 @@ from __future__ import annotations - import logging import re @@ -157,7 +156,7 @@ def convert_to_hunks_with_lines_numbers(patch: str, file) -> str: example output: ## src/file.ts ---new hunk-- +__new hunk__ 881 line1 882 line2 883 line3 @@ -166,7 +165,7 @@ def convert_to_hunks_with_lines_numbers(patch: str, file) -> str: 889 line6 890 line7 ... ---old hunk-- +__old hunk__ line1 line2 - line3 @@ -176,8 +175,7 @@ def convert_to_hunks_with_lines_numbers(patch: str, file) -> str: ... """ - patch_with_lines_str = f"## {file.filename}\n" - import re + patch_with_lines_str = f"\n\n## {file.filename}\n" patch_lines = patch.splitlines() RE_HUNK_HEADER = re.compile( r"^@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? 
@@[ ]?(.*)") @@ -185,23 +183,30 @@ def convert_to_hunks_with_lines_numbers(patch: str, file) -> str: old_content_lines = [] match = None start1, size1, start2, size2 = -1, -1, -1, -1 + prev_header_line = [] + header_line =[] for line in patch_lines: if 'no newline at end of file' in line.lower(): continue if line.startswith('@@'): + header_line = line match = RE_HUNK_HEADER.match(line) if match and new_content_lines: # found a new hunk, split the previous lines if new_content_lines: - patch_with_lines_str += '\n--new hunk--\n' + if prev_header_line: + patch_with_lines_str += f'\n{prev_header_line}\n' + patch_with_lines_str += '__new hunk__\n' for i, line_new in enumerate(new_content_lines): patch_with_lines_str += f"{start2 + i} {line_new}\n" if old_content_lines: - patch_with_lines_str += '--old hunk--\n' + patch_with_lines_str += '__old hunk__\n' for line_old in old_content_lines: patch_with_lines_str += f"{line_old}\n" new_content_lines = [] old_content_lines = [] + if match: + prev_header_line = header_line try: start1, size1, start2, size2 = map(int, match.groups()[:4]) except: # '@@ -0,0 +1 @@' case @@ -219,12 +224,13 @@ def convert_to_hunks_with_lines_numbers(patch: str, file) -> str: # finishing last hunk if match and new_content_lines: if new_content_lines: - patch_with_lines_str += '\n--new hunk--\n' + patch_with_lines_str += f'\n{header_line}\n' + patch_with_lines_str += '\n__new hunk__\n' for i, line_new in enumerate(new_content_lines): patch_with_lines_str += f"{start2 + i} {line_new}\n" if old_content_lines: - patch_with_lines_str += '\n--old hunk--\n' + patch_with_lines_str += '\n__old hunk__\n' for line_old in old_content_lines: patch_with_lines_str += f"{line_old}\n" - return patch_with_lines_str.strip() + return patch_with_lines_str.rstrip() diff --git a/pr_agent/algo/pr_processing.py b/pr_agent/algo/pr_processing.py index adab9506..1c34e603 100644 --- a/pr_agent/algo/pr_processing.py +++ b/pr_agent/algo/pr_processing.py @@ -57,7 +57,7 @@ def 
get_pr_diff(git_provider: GitProvider, token_handler: TokenHandler, model: s pr_languages = sort_files_by_main_languages(git_provider.get_languages(), diff_files) # generate a standard diff string, with patch extension - patches_extended, total_tokens = pr_generate_extended_diff(pr_languages, token_handler, + patches_extended, total_tokens, patches_extended_tokens = pr_generate_extended_diff(pr_languages, token_handler, add_line_numbers_to_hunks) # if we are under the limit, return the full diff @@ -78,9 +78,9 @@ def get_pr_diff(git_provider: GitProvider, token_handler: TokenHandler, model: s return final_diff -def pr_generate_extended_diff(pr_languages: list, token_handler: TokenHandler, - add_line_numbers_to_hunks: bool) -> \ - Tuple[list, int]: +def pr_generate_extended_diff(pr_languages: list, + token_handler: TokenHandler, + add_line_numbers_to_hunks: bool) -> Tuple[list, int, list]: """ Generate a standard diff string with patch extension, while counting the number of tokens used and applying diff minimization techniques if needed. @@ -90,13 +90,10 @@ def pr_generate_extended_diff(pr_languages: list, token_handler: TokenHandler, files. - token_handler: An object of the TokenHandler class used for handling tokens in the context of the pull request. - add_line_numbers_to_hunks: A boolean indicating whether to add line numbers to the hunks in the diff. - - Returns: - - patches_extended: A list of extended patches for each file in the pull request. - - total_tokens: The total number of tokens used in the extended patches. 
""" total_tokens = token_handler.prompt_tokens # initial tokens patches_extended = [] + patches_extended_tokens = [] for lang in pr_languages: for file in lang['files']: original_file_content_str = file.base_file @@ -106,7 +103,7 @@ def pr_generate_extended_diff(pr_languages: list, token_handler: TokenHandler, # extend each patch with extra lines of context extended_patch = extend_patch(original_file_content_str, patch, num_lines=PATCH_EXTRA_LINES) - full_extended_patch = f"## {file.filename}\n\n{extended_patch}\n" + full_extended_patch = f"\n\n## {file.filename}\n\n{extended_patch}\n" if add_line_numbers_to_hunks: full_extended_patch = convert_to_hunks_with_lines_numbers(extended_patch, file) @@ -114,9 +111,10 @@ def pr_generate_extended_diff(pr_languages: list, token_handler: TokenHandler, patch_tokens = token_handler.count_tokens(full_extended_patch) file.tokens = patch_tokens total_tokens += patch_tokens + patches_extended_tokens.append(patch_tokens) patches_extended.append(full_extended_patch) - return patches_extended, total_tokens + return patches_extended, total_tokens, patches_extended_tokens def pr_generate_compressed_diff(top_langs: list, token_handler: TokenHandler, model: str, @@ -324,7 +322,9 @@ def clip_tokens(text: str, max_tokens: int) -> str: Returns: str: The clipped string. """ - # We'll estimate the number of tokens by hueristically assuming 2.5 tokens per word + if not text: + return text + try: encoder = get_token_encoder() num_input_tokens = len(encoder.encode(text)) @@ -337,4 +337,84 @@ def clip_tokens(text: str, max_tokens: int) -> str: return clipped_text except Exception as e: logging.warning(f"Failed to clip tokens: {e}") - return text \ No newline at end of file + return text + + +def get_pr_multi_diffs(git_provider: GitProvider, + token_handler: TokenHandler, + model: str, + max_calls: int = 5) -> List[str]: + """ + Retrieves the diff files from a Git provider, sorts them by main language, and generates patches for each file. 
+ The patches are split into multiple groups based on the maximum number of tokens allowed for the given model. + + Args: + git_provider (GitProvider): An object that provides access to Git provider APIs. + token_handler (TokenHandler): An object that handles tokens in the context of a pull request. + model (str): The name of the model. + max_calls (int, optional): The maximum number of calls to retrieve diff files. Defaults to 5. + + Returns: + List[str]: A list of final diff strings, split into multiple groups based on the maximum number of tokens allowed for the given model. + + Raises: + RateLimitExceededException: If the rate limit for the Git provider API is exceeded. + """ + try: + diff_files = git_provider.get_diff_files() + except RateLimitExceededException as e: + logging.error(f"Rate limit exceeded for git provider API. original message {e}") + raise + + # Sort files by main language + pr_languages = sort_files_by_main_languages(git_provider.get_languages(), diff_files) + + # Sort files within each language group by tokens in descending order + sorted_files = [] + for lang in pr_languages: + sorted_files.extend(sorted(lang['files'], key=lambda x: x.tokens, reverse=True)) + + patches = [] + final_diff_list = [] + total_tokens = token_handler.prompt_tokens + call_number = 1 + for file in sorted_files: + if call_number > max_calls: + if get_settings().config.verbosity_level >= 2: + logging.info(f"Reached max calls ({max_calls})") + break + + original_file_content_str = file.base_file + new_file_content_str = file.head_file + patch = file.patch + if not patch: + continue + + # Remove delete-only hunks + patch = handle_patch_deletions(patch, original_file_content_str, new_file_content_str, file.filename) + if patch is None: + continue + + patch = convert_to_hunks_with_lines_numbers(patch, file) + new_patch_tokens = token_handler.count_tokens(patch) + if patch and (total_tokens + new_patch_tokens > MAX_TOKENS[model] - OUTPUT_BUFFER_TOKENS_SOFT_THRESHOLD): + 
final_diff = "\n".join(patches) + final_diff_list.append(final_diff) + patches = [] + total_tokens = token_handler.prompt_tokens + call_number += 1 + if get_settings().config.verbosity_level >= 2: + logging.info(f"Call number: {call_number}") + + if patch: + patches.append(patch) + total_tokens += new_patch_tokens + if get_settings().config.verbosity_level >= 2: + logging.info(f"Tokens: {total_tokens}, last filename: {file.filename}") + + # Add the last chunk + if patches: + final_diff = "\n".join(patches) + final_diff_list.append(final_diff) + + return final_diff_list diff --git a/pr_agent/algo/utils.py b/pr_agent/algo/utils.py index 14fdda59..0124c3d6 100644 --- a/pr_agent/algo/utils.py +++ b/pr_agent/algo/utils.py @@ -250,9 +250,7 @@ def update_settings_from_args(args: List[str]) -> List[str]: logging.error(f'Invalid argument format: {arg}') other_args.append(arg) continue - key, value = vals - key = key.strip().upper() - value = value.strip() + key, value = _fix_key_value(*vals) if key in get_settings(): get_settings().set(key, value) logging.info(f'Updated setting {key} to: "{value}"') diff --git a/pr_agent/cli.py b/pr_agent/cli.py index 0f871041..01c1a7ec 100644 --- a/pr_agent/cli.py +++ b/pr_agent/cli.py @@ -19,13 +19,21 @@ For example: - cli.py --pr_url=... reflect Supported commands: -review / review_pr - Add a review that includes a summary of the PR and specific suggestions for improvement. -ask / ask_question [question] - Ask a question about the PR. -describe / describe_pr - Modify the PR title and description based on the PR's contents. -improve / improve_code - Suggest improvements to the code in the PR as pull request comments ready to commit. -reflect - Ask the PR author questions about the PR. -update_changelog - Update the changelog based on the PR's contents. +-review / review_pr - Add a review that includes a summary of the PR and specific suggestions for improvement. +-ask / ask_question [question] - Ask a question about the PR. 
+ +-describe / describe_pr - Modify the PR title and description based on the PR's contents. + +-improve / improve_code - Suggest improvements to the code in the PR as pull request comments ready to commit. +Extended mode ('improve --extended') employs several calls, and provides a more thorough feedback + +-reflect - Ask the PR author questions about the PR. + +-update_changelog - Update the changelog based on the PR's contents. + + +Configuration: To edit any configuration parameter from 'configuration.toml', just add -config_path=. For example: 'python cli.py --pr_url=... review --pr_reviewer.extra_instructions="focus on the file: ..."' """) diff --git a/pr_agent/config_loader.py b/pr_agent/config_loader.py index 3075e8dc..47edfd97 100644 --- a/pr_agent/config_loader.py +++ b/pr_agent/config_loader.py @@ -19,6 +19,7 @@ global_settings = Dynaconf( "settings/pr_questions_prompts.toml", "settings/pr_description_prompts.toml", "settings/pr_code_suggestions_prompts.toml", + "settings/pr_sort_code_suggestions_prompts.toml", "settings/pr_information_from_user_prompts.toml", "settings/pr_update_changelog_prompts.toml", "settings_prod/.secrets.toml" diff --git a/pr_agent/git_providers/__init__.py b/pr_agent/git_providers/__init__.py index e7c2aa0f..376d09f5 100644 --- a/pr_agent/git_providers/__init__.py +++ b/pr_agent/git_providers/__init__.py @@ -1,13 +1,17 @@ from pr_agent.config_loader import get_settings from pr_agent.git_providers.bitbucket_provider import BitbucketProvider +from pr_agent.git_providers.codecommit_provider import CodeCommitProvider from pr_agent.git_providers.github_provider import GithubProvider from pr_agent.git_providers.gitlab_provider import GitLabProvider from pr_agent.git_providers.local_git_provider import LocalGitProvider +from pr_agent.git_providers.azuredevops_provider import AzureDevopsProvider _GIT_PROVIDERS = { 'github': GithubProvider, 'gitlab': GitLabProvider, 'bitbucket': BitbucketProvider, + 'azure': AzureDevopsProvider, + 
'codecommit': CodeCommitProvider, 'local' : LocalGitProvider } diff --git a/pr_agent/git_providers/azuredevops_provider.py b/pr_agent/git_providers/azuredevops_provider.py new file mode 100644 index 00000000..71ae0947 --- /dev/null +++ b/pr_agent/git_providers/azuredevops_provider.py @@ -0,0 +1,269 @@ +import json +import logging +from typing import Optional, Tuple +from urllib.parse import urlparse + +import os + +AZURE_DEVOPS_AVAILABLE = True +try: + from msrest.authentication import BasicAuthentication + from azure.devops.connection import Connection + from azure.devops.v7_1.git.models import Comment, CommentThread, GitVersionDescriptor, GitPullRequest +except ImportError: + AZURE_DEVOPS_AVAILABLE = False + +from ..algo.pr_processing import clip_tokens +from ..config_loader import get_settings +from ..algo.utils import load_large_diff +from ..algo.language_handler import is_valid_file +from .git_provider import EDIT_TYPE, FilePatchInfo + + +class AzureDevopsProvider: + def __init__(self, pr_url: Optional[str] = None, incremental: Optional[bool] = False): + if not AZURE_DEVOPS_AVAILABLE: + raise ImportError("Azure DevOps provider is not available. 
Please install the required dependencies.") + + self.azure_devops_client = self._get_azure_devops_client() + + self.workspace_slug = None + self.repo_slug = None + self.repo = None + self.pr_num = None + self.pr = None + self.temp_comments = [] + self.incremental = incremental + if pr_url: + self.set_pr(pr_url) + + def is_supported(self, capability: str) -> bool: + if capability in ['get_issue_comments', 'create_inline_comment', 'publish_inline_comments', 'get_labels', 'remove_initial_comment']: + return False + return True + + def set_pr(self, pr_url: str): + self.workspace_slug, self.repo_slug, self.pr_num = self._parse_pr_url(pr_url) + self.pr = self._get_pr() + + def get_repo_settings(self): + try: + contents = self.azure_devops_client.get_item_content(repository_id=self.repo_slug, + project=self.workspace_slug, download=False, + include_content_metadata=False, include_content=True, + path=".pr_agent.toml") + return contents + except Exception as e: + logging.exception("get repo settings error") + return "" + + def get_files(self): + files = [] + for i in self.azure_devops_client.get_pull_request_commits(project=self.workspace_slug, + repository_id=self.repo_slug, + pull_request_id=self.pr_num): + + changes_obj = self.azure_devops_client.get_changes(project=self.workspace_slug, + repository_id=self.repo_slug, commit_id=i.commit_id) + + for c in changes_obj.changes: + files.append(c['item']['path']) + return list(set(files)) + + def get_diff_files(self) -> list[FilePatchInfo]: + try: + base_sha = self.pr.last_merge_target_commit + head_sha = self.pr.last_merge_source_commit + + commits = self.azure_devops_client.get_pull_request_commits(project=self.workspace_slug, + repository_id=self.repo_slug, + pull_request_id=self.pr_num) + + diff_files = [] + diffs = [] + diff_types = {} + + for c in commits: + changes_obj = self.azure_devops_client.get_changes(project=self.workspace_slug, + repository_id=self.repo_slug, commit_id=c.commit_id) + for i in 
changes_obj.changes: + diffs.append(i['item']['path']) + diff_types[i['item']['path']] = i['changeType'] + + diffs = list(set(diffs)) + + for file in diffs: + if not is_valid_file(file): + continue + + version = GitVersionDescriptor(version=head_sha.commit_id, version_type='commit') + new_file_content_str = self.azure_devops_client.get_item(repository_id=self.repo_slug, + path=file, + project=self.workspace_slug, + version_descriptor=version, + download=False, + include_content=True) + + new_file_content_str = new_file_content_str.content + + edit_type = EDIT_TYPE.MODIFIED + if diff_types[file] == 'add': + edit_type = EDIT_TYPE.ADDED + elif diff_types[file] == 'delete': + edit_type = EDIT_TYPE.DELETED + elif diff_types[file] == 'rename': + edit_type = EDIT_TYPE.RENAMED + + version = GitVersionDescriptor(version=base_sha.commit_id, version_type='commit') + original_file_content_str = self.azure_devops_client.get_item(repository_id=self.repo_slug, + path=file, + project=self.workspace_slug, + version_descriptor=version, + download=False, + include_content=True) + original_file_content_str = original_file_content_str.content + + patch = load_large_diff(file, new_file_content_str, original_file_content_str) + + diff_files.append(FilePatchInfo(original_file_content_str, new_file_content_str, + patch=patch, + filename=file, + edit_type=edit_type)) + + self.diff_files = diff_files + return diff_files + except Exception as e: + print(f"Error: {str(e)}") + return [] + + def publish_comment(self, pr_comment: str, is_temporary: bool = False): + comment = Comment(content=pr_comment) + thread = CommentThread(comments=[comment]) + thread_response = self.azure_devops_client.create_thread(comment_thread=thread, project=self.workspace_slug, + repository_id=self.repo_slug, + pull_request_id=self.pr_num) + if is_temporary: + self.temp_comments.append({'thread_id': thread_response.id, 'comment_id': comment.id}) + + def publish_description(self, pr_title: str, pr_body: str): + try: + 
updated_pr = GitPullRequest() + updated_pr.title = pr_title + updated_pr.description = pr_body + self.azure_devops_client.update_pull_request(project=self.workspace_slug, + repository_id=self.repo_slug, + pull_request_id=self.pr_num, + git_pull_request_to_update=updated_pr) + except Exception as e: + logging.exception(f"Could not update pull request {self.pr_num} description: {e}") + + def remove_initial_comment(self): + return "" # not implemented yet + + def publish_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str): + raise NotImplementedError("Azure DevOps provider does not support publishing inline comment yet") + + def create_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str): + raise NotImplementedError("Azure DevOps provider does not support creating inline comments yet") + + def publish_inline_comments(self, comments: list[dict]): + raise NotImplementedError("Azure DevOps provider does not support publishing inline comments yet") + + def get_title(self): + return self.pr.title + + def get_languages(self): + languages = [] + files = self.azure_devops_client.get_items(project=self.workspace_slug, repository_id=self.repo_slug, + recursion_level="Full", include_content_metadata=True, + include_links=False, download=False) + for f in files: + if f.git_object_type == 'blob': + file_name, file_extension = os.path.splitext(f.path) + languages.append(file_extension[1:]) + + extension_counts = {} + for ext in languages: + if ext != '': + extension_counts[ext] = extension_counts.get(ext, 0) + 1 + + total_extensions = sum(extension_counts.values()) + + extension_percentages = {ext: (count / total_extensions) * 100 for ext, count in extension_counts.items()} + + return extension_percentages + + def get_pr_branch(self): + pr_info = self.azure_devops_client.get_pull_request_by_id(project=self.workspace_slug, + pull_request_id=self.pr_num) + source_branch = pr_info.source_ref_name.split('/')[-1] + return 
source_branch + + def get_pr_description(self): + max_tokens = get_settings().get("CONFIG.MAX_DESCRIPTION_TOKENS", None) + if max_tokens: + return clip_tokens(self.pr.description, max_tokens) + return self.pr.description + + def get_user_id(self): + return 0 + + def get_issue_comments(self): + raise NotImplementedError("Azure DevOps provider does not support issue comments yet") + + def add_eyes_reaction(self, issue_comment_id: int) -> Optional[int]: + return True + + def remove_reaction(self, issue_comment_id: int, reaction_id: int) -> bool: + return True + + def get_issue_comments(self): + raise NotImplementedError("Azure DevOps provider does not support issue comments yet") + + @staticmethod + def _parse_pr_url(pr_url: str) -> Tuple[str, int]: + parsed_url = urlparse(pr_url) + + if 'azure.com' not in parsed_url.netloc: + raise ValueError("The provided URL is not a valid Azure DevOps URL") + + path_parts = parsed_url.path.strip('/').split('/') + + if len(path_parts) < 6 or path_parts[4] != 'pullrequest': + raise ValueError("The provided URL does not appear to be a Azure DevOps PR URL") + + workspace_slug = path_parts[1] + repo_slug = path_parts[3] + try: + pr_number = int(path_parts[5]) + except ValueError as e: + raise ValueError("Unable to convert PR number to integer") from e + + return workspace_slug, repo_slug, pr_number + + def _get_azure_devops_client(self): + try: + pat = get_settings().azure_devops.pat + org = get_settings().azure_devops.org + except AttributeError as e: + raise ValueError( + "Azure DevOps PAT token is required ") from e + + credentials = BasicAuthentication('', pat) + azure_devops_connection = Connection(base_url=org, creds=credentials) + azure_devops_client = azure_devops_connection.clients.get_git_client() + + return azure_devops_client + + def _get_repo(self): + if self.repo is None: + self.repo = self.azure_devops_client.get_repository(project=self.workspace_slug, + repository_id=self.repo_slug) + return self.repo + + def 
_get_pr(self): + self.pr = self.azure_devops_client.get_pull_request_by_id(pull_request_id=self.pr_num, project=self.workspace_slug) + return self.pr + + def get_commit_messages(self): + return "" # not implemented yet diff --git a/pr_agent/git_providers/bitbucket_provider.py b/pr_agent/git_providers/bitbucket_provider.py index ddb70666..0cd860fa 100644 --- a/pr_agent/git_providers/bitbucket_provider.py +++ b/pr_agent/git_providers/bitbucket_provider.py @@ -1,21 +1,31 @@ +import json import logging from typing import Optional, Tuple from urllib.parse import urlparse import requests from atlassian.bitbucket import Cloud +from starlette_context import context -from ..algo.pr_processing import clip_tokens from ..config_loader import get_settings -from .git_provider import FilePatchInfo +from .git_provider import FilePatchInfo, GitProvider -class BitbucketProvider: - def __init__(self, pr_url: Optional[str] = None, incremental: Optional[bool] = False): +class BitbucketProvider(GitProvider): + def __init__( + self, pr_url: Optional[str] = None, incremental: Optional[bool] = False + ): s = requests.Session() - s.headers['Authorization'] = f'Bearer {get_settings().get("BITBUCKET.BEARER_TOKEN", None)}' + try: + bearer = context.get("bitbucket_bearer_token", None) + s.headers["Authorization"] = f"Bearer {bearer}" + except Exception: + s.headers[ + "Authorization" + ] = f'Bearer {get_settings().get("BITBUCKET.BEARER_TOKEN", None)}' + s.headers["Content-Type"] = "application/json" + self.headers = s.headers self.bitbucket_client = Cloud(session=s) - self.workspace_slug = None self.repo_slug = None self.repo = None @@ -25,16 +35,78 @@ class BitbucketProvider: self.incremental = incremental if pr_url: self.set_pr(pr_url) + self.bitbucket_comment_api_url = self.pr._BitbucketBase__data["links"][ + "comments" + ]["href"] def get_repo_settings(self): try: - contents = self.repo_obj.get_contents(".pr_agent.toml", ref=self.pr.head.sha).decoded_content + contents = 
self.repo_obj.get_contents( + ".pr_agent.toml", ref=self.pr.head.sha + ).decoded_content return contents except Exception: return "" + def publish_code_suggestions(self, code_suggestions: list) -> bool: + """ + Publishes code suggestions as comments on the PR. + """ + post_parameters_list = [] + for suggestion in code_suggestions: + body = suggestion["body"] + relevant_file = suggestion["relevant_file"] + relevant_lines_start = suggestion["relevant_lines_start"] + relevant_lines_end = suggestion["relevant_lines_end"] + + if not relevant_lines_start or relevant_lines_start == -1: + if get_settings().config.verbosity_level >= 2: + logging.exception( + f"Failed to publish code suggestion, relevant_lines_start is {relevant_lines_start}" + ) + continue + + if relevant_lines_end < relevant_lines_start: + if get_settings().config.verbosity_level >= 2: + logging.exception( + f"Failed to publish code suggestion, " + f"relevant_lines_end is {relevant_lines_end} and " + f"relevant_lines_start is {relevant_lines_start}" + ) + continue + + if relevant_lines_end > relevant_lines_start: + post_parameters = { + "body": body, + "path": relevant_file, + "line": relevant_lines_end, + "start_line": relevant_lines_start, + "start_side": "RIGHT", + } + else: # API is different for single line comments + post_parameters = { + "body": body, + "path": relevant_file, + "line": relevant_lines_start, + "side": "RIGHT", + } + post_parameters_list.append(post_parameters) + + try: + self.publish_inline_comments(post_parameters_list) + return True + except Exception as e: + if get_settings().config.verbosity_level >= 2: + logging.error(f"Failed to publish code suggestion, error: {e}") + return False + def is_supported(self, capability: str) -> bool: - if capability in ['get_issue_comments', 'create_inline_comment', 'publish_inline_comments', 'get_labels']: + if capability in [ + "get_issue_comments", + "create_inline_comment", + "publish_inline_comments", + "get_labels", + ]: return False return 
True @@ -47,65 +119,80 @@ class BitbucketProvider: def get_diff_files(self) -> list[FilePatchInfo]: diffs = self.pr.diffstat() - diff_split = ['diff --git%s' % x for x in self.pr.diff().split('diff --git') if x.strip()] - + diff_split = [ + "diff --git%s" % x for x in self.pr.diff().split("diff --git") if x.strip() + ] + diff_files = [] for index, diff in enumerate(diffs): - original_file_content_str = self._get_pr_file_content(diff.old.get_data('links')) - new_file_content_str = self._get_pr_file_content(diff.new.get_data('links')) - diff_files.append(FilePatchInfo(original_file_content_str, new_file_content_str, - diff_split[index], diff.new.path)) + original_file_content_str = self._get_pr_file_content( + diff.old.get_data("links") + ) + new_file_content_str = self._get_pr_file_content(diff.new.get_data("links")) + diff_files.append( + FilePatchInfo( + original_file_content_str, + new_file_content_str, + diff_split[index], + diff.new.path, + ) + ) return diff_files def publish_comment(self, pr_comment: str, is_temporary: bool = False): comment = self.pr.comment(pr_comment) if is_temporary: - self.temp_comments.append(comment['id']) + self.temp_comments.append(comment["id"]) def remove_initial_comment(self): try: for comment in self.temp_comments: - self.pr.delete(f'comments/{comment}') + self.pr.delete(f"comments/{comment}") except Exception as e: logging.exception(f"Failed to remove temp comments, error: {e}") - def publish_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str): - pass - - def create_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str): - raise NotImplementedError("Bitbucket provider does not support creating inline comments yet") + def publish_inline_comment( + self, comment: str, from_line: int, to_line: int, file: str + ): + payload = json.dumps( + { + "content": { + "raw": comment, + }, + "inline": {"to": from_line, "path": file}, + } + ) + response = requests.request( + "POST", 
self.bitbucket_comment_api_url, data=payload, headers=self.headers + ) + return response def publish_inline_comments(self, comments: list[dict]): - raise NotImplementedError("Bitbucket provider does not support publishing inline comments yet") + for comment in comments: + self.publish_inline_comment( + comment["body"], comment["start_line"], comment["line"], comment["path"] + ) def get_title(self): return self.pr.title def get_languages(self): - languages = {self._get_repo().get_data('language'): 0} + languages = {self._get_repo().get_data("language"): 0} return languages def get_pr_branch(self): return self.pr.source_branch - def get_pr_description(self): - max_tokens = get_settings().get("CONFIG.MAX_DESCRIPTION_TOKENS", None) - if max_tokens: - return clip_tokens(self.pr.description, max_tokens) + def get_pr_description_full(self): return self.pr.description def get_user_id(self): return 0 def get_issue_comments(self): - raise NotImplementedError("Bitbucket provider does not support issue comments yet") - - def get_repo_settings(self): - try: - contents = self.repo_obj.get_contents(".pr_agent.toml", ref=self.pr.head.sha).decoded_content - return contents - except Exception: - return "" + raise NotImplementedError( + "Bitbucket provider does not support issue comments yet" + ) def add_eyes_reaction(self, issue_comment_id: int) -> Optional[int]: return True @@ -116,14 +203,16 @@ class BitbucketProvider: @staticmethod def _parse_pr_url(pr_url: str) -> Tuple[str, int]: parsed_url = urlparse(pr_url) - - if 'bitbucket.org' not in parsed_url.netloc: + + if "bitbucket.org" not in parsed_url.netloc: raise ValueError("The provided URL is not a valid Bitbucket URL") - path_parts = parsed_url.path.strip('/').split('/') - - if len(path_parts) < 4 or path_parts[2] != 'pull-requests': - raise ValueError("The provided URL does not appear to be a Bitbucket PR URL") + path_parts = parsed_url.path.strip("/").split("/") + + if len(path_parts) < 4 or path_parts[2] != "pull-requests": 
+ raise ValueError( + "The provided URL does not appear to be a Bitbucket PR URL" + ) workspace_slug = path_parts[0] repo_slug = path_parts[1] @@ -136,7 +225,9 @@ class BitbucketProvider: def _get_repo(self): if self.repo is None: - self.repo = self.bitbucket_client.workspaces.get(self.workspace_slug).repositories.get(self.repo_slug) + self.repo = self.bitbucket_client.workspaces.get( + self.workspace_slug + ).repositories.get(self.repo_slug) return self.repo def _get_pr(self): @@ -147,3 +238,16 @@ class BitbucketProvider: def get_commit_messages(self): return "" # not implemented yet + + def publish_description(self, pr_title: str, pr_body: str): + pass + def create_inline_comment( + self, body: str, relevant_file: str, relevant_line_in_file: str + ): + pass + + def publish_labels(self, labels): + pass + + def get_labels(self): + pass diff --git a/pr_agent/git_providers/codecommit_client.py b/pr_agent/git_providers/codecommit_client.py new file mode 100644 index 00000000..1112ee22 --- /dev/null +++ b/pr_agent/git_providers/codecommit_client.py @@ -0,0 +1,272 @@ +import boto3 +import botocore + + +class CodeCommitDifferencesResponse: + """ + CodeCommitDifferencesResponse is the response object returned from our get_differences() function. + It maps the JSON response to member variables of this class. + """ + + def __init__(self, json: dict): + before_blob = json.get("beforeBlob", {}) + after_blob = json.get("afterBlob", {}) + + self.before_blob_id = before_blob.get("blobId", "") + self.before_blob_path = before_blob.get("path", "") + self.after_blob_id = after_blob.get("blobId", "") + self.after_blob_path = after_blob.get("path", "") + self.change_type = json.get("changeType", "") + + +class CodeCommitPullRequestResponse: + """ + CodeCommitPullRequestResponse is the response object returned from our get_pr() function. + It maps the JSON response to member variables of this class. 
+    """
+
+    def __init__(self, json: dict):
+        self.title = json.get("title", "")
+        self.description = json.get("description", "")
+
+        self.targets = []
+        for target in json.get("pullRequestTargets", []):
+            self.targets.append(CodeCommitPullRequestResponse.CodeCommitPullRequestTarget(target))
+
+    class CodeCommitPullRequestTarget:
+        """
+        CodeCommitPullRequestTarget is a subclass of CodeCommitPullRequestResponse that
+        holds details about an individual target commit.
+        """
+
+        def __init__(self, json: dict):
+            self.source_commit = json.get("sourceCommit", "")
+            self.source_branch = json.get("sourceReference", "")
+            self.destination_commit = json.get("destinationCommit", "")
+            self.destination_branch = json.get("destinationReference", "")
+
+
+class CodeCommitClient:
+    """
+    CodeCommitClient is a wrapper around the AWS boto3 SDK for the CodeCommit client
+    """
+
+    def __init__(self):
+        self.boto_client = None
+
+    def _connect_boto_client(self):
+        try:
+            self.boto_client = boto3.client("codecommit")
+        except Exception as e:
+            raise ValueError(f"Failed to connect to AWS CodeCommit: {e}")
+
+    def get_differences(self, repo_name: str, destination_commit: str, source_commit: str):
+        """
+        Get the differences between two commits in CodeCommit.
+
+        Args:
+        - repo_name: Name of the repository
+        - destination_commit: Commit hash you want to merge into (the "before" hash) (usually on the main or master branch)
+        - source_commit: Commit hash of the code you are adding (the "after" branch)
+
+        Returns:
+        - List of CodeCommitDifferencesResponse objects
+
+        Boto3 Documentation:
+        - aws codecommit get-differences
+        - https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/codecommit/client/get_differences.html
+        """
+        if self.boto_client is None:
+            self._connect_boto_client()
+
+        # The differences response from AWS is paginated, so we need to iterate through the pages to get all the differences.
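The pagination noted in the comment above can be exercised in isolation. Below is a minimal sketch of the accumulation pattern, with stubbed page dicts standing in for the output of boto3's `get_differences` paginator; `collect_differences` is an illustrative helper, not part of this PR, and no AWS call is made.

```python
# Stubbed demonstration of the paginator-accumulation pattern:
# each "page" mimics one paginated get_differences response.

def collect_differences(pages: list[dict]) -> list[dict]:
    """Flatten the 'differences' entries from a sequence of paginated responses."""
    differences: list[dict] = []
    for page in pages:
        # Mirrors `differences.extend(page.get("differences", []))` in the client code
        differences.extend(page.get("differences", []))
    return differences

# Two stubbed pages, as the paginator might yield them:
pages = [
    {"differences": [{"changeType": "A"}, {"changeType": "M"}]},
    {"differences": [{"changeType": "D"}]},
]
assert len(collect_differences(pages)) == 3
```

In the real client the pages come from `self.boto_client.get_paginator("get_differences").paginate(...)`; only the accumulation step is shown here.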
+ differences = [] + try: + paginator = self.boto_client.get_paginator("get_differences") + for page in paginator.paginate( + repositoryName=repo_name, + beforeCommitSpecifier=destination_commit, + afterCommitSpecifier=source_commit, + ): + differences.extend(page.get("differences", [])) + except botocore.exceptions.ClientError as e: + if e.response["Error"]["Code"] == 'RepositoryDoesNotExistException': + raise ValueError(f"CodeCommit cannot retrieve differences: Repository does not exist: {repo_name}") from e + raise ValueError(f"CodeCommit cannot retrieve differences for {source_commit}..{destination_commit}") from e + except Exception as e: + raise ValueError(f"CodeCommit cannot retrieve differences for {source_commit}..{destination_commit}") from e + + output = [] + for json in differences: + output.append(CodeCommitDifferencesResponse(json)) + return output + + def get_file(self, repo_name: str, file_path: str, sha_hash: str, optional: bool = False): + """ + Retrieve a file from CodeCommit. 
+
+        Args:
+        - repo_name: Name of the repository
+        - file_path: Path to the file you are retrieving
+        - sha_hash: Commit hash of the file you are retrieving
+
+        Returns:
+        - File contents
+
+        Boto3 Documentation:
+        - aws codecommit get_file
+        - https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/codecommit/client/get_file.html
+        """
+        if not file_path:
+            return ""
+
+        if self.boto_client is None:
+            self._connect_boto_client()
+
+        try:
+            response = self.boto_client.get_file(repositoryName=repo_name, commitSpecifier=sha_hash, filePath=file_path)
+        except botocore.exceptions.ClientError as e:
+            if e.response["Error"]["Code"] == 'RepositoryDoesNotExistException':
+                raise ValueError(f"CodeCommit cannot retrieve PR: Repository does not exist: {repo_name}") from e
+            # if the file does not exist, but is flagged as optional, then return an empty string
+            if optional and e.response["Error"]["Code"] == 'FileDoesNotExistException':
+                return ""
+            raise ValueError(f"CodeCommit cannot retrieve file '{file_path}' from repository '{repo_name}'") from e
+        except Exception as e:
+            raise ValueError(f"CodeCommit cannot retrieve file '{file_path}' from repository '{repo_name}'") from e
+        if "fileContent" not in response:
+            raise ValueError(f"File content is empty for file: {file_path}")
+
+        return response.get("fileContent", "")
+
+    def get_pr(self, repo_name: str, pr_number: int):
+        """
+        Get information about a CodeCommit PR.
+
+        Args:
+        - repo_name: Name of the repository
+        - pr_number: The PR number you are requesting
+
+        Returns:
+        - CodeCommitPullRequestResponse object
+
+        Boto3 Documentation:
+        - aws codecommit get_pull_request
+        - https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/codecommit/client/get_pull_request.html
+        """
+        if self.boto_client is None:
+            self._connect_boto_client()
+
+        try:
+            response = self.boto_client.get_pull_request(pullRequestId=str(pr_number))
+        except botocore.exceptions.ClientError as e:
+            if e.response["Error"]["Code"] == 'PullRequestDoesNotExistException':
+                raise ValueError(f"CodeCommit cannot retrieve PR: PR number does not exist: {pr_number}") from e
+            if e.response["Error"]["Code"] == 'RepositoryDoesNotExistException':
+                raise ValueError(f"CodeCommit cannot retrieve PR: Repository does not exist: {repo_name}") from e
+            raise ValueError(f"CodeCommit cannot retrieve PR: {pr_number}: boto client error") from e
+        except Exception as e:
+            raise ValueError(f"CodeCommit cannot retrieve PR: {pr_number}") from e
+
+        if "pullRequest" not in response:
+            raise ValueError(f"CodeCommit PR number not found: {pr_number}")
+
+        return CodeCommitPullRequestResponse(response.get("pullRequest", {}))
+
+    def publish_description(self, pr_number: int, pr_title: str, pr_body: str):
+        """
+        Set the title and description on a pull request
+
+        Args:
+        - pr_number: the AWS CodeCommit pull request number
+        - pr_title: title of the pull request
+        - pr_body: body of the pull request
+
+        Returns:
+        - None
+
+        Boto3 Documentation:
+        - aws codecommit update_pull_request_title
+        - https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/codecommit/client/update_pull_request_title.html
+        - aws codecommit update_pull_request_description
+        - https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/codecommit/client/update_pull_request_description.html
+        """
+        if self.boto_client is None:
+            self._connect_boto_client()
+
+        try:
+ self.boto_client.update_pull_request_title(pullRequestId=str(pr_number), title=pr_title) + self.boto_client.update_pull_request_description(pullRequestId=str(pr_number), description=pr_body) + except botocore.exceptions.ClientError as e: + if e.response["Error"]["Code"] == 'PullRequestDoesNotExistException': + raise ValueError(f"PR number does not exist: {pr_number}") from e + if e.response["Error"]["Code"] == 'InvalidTitleException': + raise ValueError(f"Invalid title for PR number: {pr_number}") from e + if e.response["Error"]["Code"] == 'InvalidDescriptionException': + raise ValueError(f"Invalid description for PR number: {pr_number}") from e + if e.response["Error"]["Code"] == 'PullRequestAlreadyClosedException': + raise ValueError(f"PR is already closed: PR number: {pr_number}") from e + raise ValueError(f"Boto3 client error calling publish_description") from e + except Exception as e: + raise ValueError(f"Error calling publish_description") from e + + def publish_comment(self, repo_name: str, pr_number: int, destination_commit: str, source_commit: str, comment: str, annotation_file: str = None, annotation_line: int = None): + """ + Publish a comment to a pull request + + Args: + - repo_name: name of the repository + - pr_number: number of the pull request + - destination_commit: The commit hash you want to merge into (the "before" hash) (usually on the main or master branch) + - source_commit: The commit hash of the code you are adding (the "after" branch) + - comment: The comment you want to publish + - annotation_file: The file you want to annotate (optional) + - annotation_line: The line number you want to annotate (optional) + + Comment annotations for CodeCommit are different than GitHub. + CodeCommit only designates the starting line number for the comment. + It does not support the ending line number to highlight a range of lines. 
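The annotation behavior described in the docstring above comes down to conditionally adding a `location` dict to the `post_comment_for_pull_request` keyword arguments. A hedged sketch of that branching follows; `build_comment_kwargs` is an illustrative helper (not part of this PR), and the keyword names are the ones the boto3 call expects.

```python
# Illustrative: assemble the kwargs for post_comment_for_pull_request,
# attaching a "location" dict only when a file/line annotation is given.

def build_comment_kwargs(pr_number, repo_name, destination_commit, source_commit,
                         comment, annotation_file=None, annotation_line=None) -> dict:
    kwargs = {
        "pullRequestId": str(pr_number),
        "repositoryName": repo_name,
        "beforeCommitId": destination_commit,
        "afterCommitId": source_commit,
        "content": comment,
    }
    # CodeCommit annotations mark only a starting line, so one file/line pair suffices
    if annotation_file and annotation_line:
        kwargs["location"] = {
            "filePath": annotation_file,
            "filePosition": annotation_line,
            "relativeFileVersion": "AFTER",
        }
    return kwargs

assert "location" not in build_comment_kwargs(1, "repo", "abc", "def", "hi")
assert build_comment_kwargs(1, "repo", "abc", "def", "hi", "a.py", 3)["location"]["filePosition"] == 3
```

The real method passes these kwargs straight to the boto3 client; only the kwargs assembly is sketched here.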
+ + Returns: + - None + + Boto3 Documentation: + - aws codecommit post_comment_for_pull_request + - https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/codecommit/client/post_comment_for_pull_request.html + """ + if self.boto_client is None: + self._connect_boto_client() + + try: + # If the comment has code annotations, + # then set the file path and line number in the location dictionary + if annotation_file and annotation_line: + self.boto_client.post_comment_for_pull_request( + pullRequestId=str(pr_number), + repositoryName=repo_name, + beforeCommitId=destination_commit, + afterCommitId=source_commit, + content=comment, + location={ + "filePath": annotation_file, + "filePosition": annotation_line, + "relativeFileVersion": "AFTER", + }, + ) + else: + # The comment does not have code annotations + self.boto_client.post_comment_for_pull_request( + pullRequestId=str(pr_number), + repositoryName=repo_name, + beforeCommitId=destination_commit, + afterCommitId=source_commit, + content=comment, + ) + except botocore.exceptions.ClientError as e: + if e.response["Error"]["Code"] == 'RepositoryDoesNotExistException': + raise ValueError(f"Repository does not exist: {repo_name}") from e + if e.response["Error"]["Code"] == 'PullRequestDoesNotExistException': + raise ValueError(f"PR number does not exist: {pr_number}") from e + raise ValueError(f"Boto3 client error calling post_comment_for_pull_request") from e + except Exception as e: + raise ValueError(f"Error calling post_comment_for_pull_request") from e diff --git a/pr_agent/git_providers/codecommit_provider.py b/pr_agent/git_providers/codecommit_provider.py new file mode 100644 index 00000000..1f570a1a --- /dev/null +++ b/pr_agent/git_providers/codecommit_provider.py @@ -0,0 +1,480 @@ +import logging +import os +import re +from collections import Counter +from typing import List, Optional, Tuple +from urllib.parse import urlparse + +from ..algo.language_handler import is_valid_file, 
language_extension_map +from ..algo.pr_processing import clip_tokens +from ..algo.utils import load_large_diff +from ..config_loader import get_settings +from .git_provider import EDIT_TYPE, FilePatchInfo, GitProvider, IncrementalPR +from pr_agent.git_providers.codecommit_client import CodeCommitClient + + +class PullRequestCCMimic: + """ + This class mimics the PullRequest class from the PyGithub library for the CodeCommitProvider. + """ + + def __init__(self, title: str, diff_files: List[FilePatchInfo]): + self.title = title + self.diff_files = diff_files + self.description = None + self.source_commit = None + self.source_branch = None # the branch containing your new code changes + self.destination_commit = None + self.destination_branch = None # the branch you are going to merge into + + +class CodeCommitFile: + """ + This class represents a file in a pull request in CodeCommit. + """ + + def __init__( + self, + a_path: str, + a_blob_id: str, + b_path: str, + b_blob_id: str, + edit_type: EDIT_TYPE, + ): + self.a_path = a_path + self.a_blob_id = a_blob_id + self.b_path = b_path + self.b_blob_id = b_blob_id + self.edit_type: EDIT_TYPE = edit_type + self.filename = b_path if b_path else a_path + + +class CodeCommitProvider(GitProvider): + """ + This class implements the GitProvider interface for AWS CodeCommit repositories. 
+ """ + + def __init__(self, pr_url: Optional[str] = None, incremental: Optional[bool] = False): + self.codecommit_client = CodeCommitClient() + self.aws_client = None + self.repo_name = None + self.pr_num = None + self.pr = None + self.diff_files = None + self.git_files = None + if pr_url: + self.set_pr(pr_url) + + def provider_name(self): + return "CodeCommit" + + def is_supported(self, capability: str) -> bool: + if capability in [ + "get_issue_comments", + "create_inline_comment", + "publish_inline_comments", + "get_labels", + ]: + return False + return True + + def set_pr(self, pr_url: str): + self.repo_name, self.pr_num = self._parse_pr_url(pr_url) + self.pr = self._get_pr() + + def get_files(self) -> list[CodeCommitFile]: + # bring files from CodeCommit only once + if self.git_files: + return self.git_files + + self.git_files = [] + differences = self.codecommit_client.get_differences(self.repo_name, self.pr.destination_commit, self.pr.source_commit) + for item in differences: + self.git_files.append(CodeCommitFile(item.before_blob_path, + item.before_blob_id, + item.after_blob_path, + item.after_blob_id, + CodeCommitProvider._get_edit_type(item.change_type))) + return self.git_files + + def get_diff_files(self) -> list[FilePatchInfo]: + """ + Retrieves the list of files that have been modified, added, deleted, or renamed in a pull request in CodeCommit, + along with their content and patch information. + + Returns: + diff_files (List[FilePatchInfo]): List of FilePatchInfo objects representing the modified, added, deleted, + or renamed files in the merge request. 
+ """ + # bring files from CodeCommit only once + if self.diff_files: + return self.diff_files + + self.diff_files = [] + + files = self.get_files() + for diff_item in files: + patch_filename = "" + if diff_item.a_blob_id is not None: + patch_filename = diff_item.a_path + original_file_content_str = self.codecommit_client.get_file( + self.repo_name, diff_item.a_path, self.pr.destination_commit) + if isinstance(original_file_content_str, (bytes, bytearray)): + original_file_content_str = original_file_content_str.decode("utf-8") + else: + original_file_content_str = "" + + if diff_item.b_blob_id is not None: + patch_filename = diff_item.b_path + new_file_content_str = self.codecommit_client.get_file(self.repo_name, diff_item.b_path, self.pr.source_commit) + if isinstance(new_file_content_str, (bytes, bytearray)): + new_file_content_str = new_file_content_str.decode("utf-8") + else: + new_file_content_str = "" + + patch = load_large_diff(patch_filename, new_file_content_str, original_file_content_str) + + # Store the diffs as a list of FilePatchInfo objects + info = FilePatchInfo( + original_file_content_str, + new_file_content_str, + patch, + diff_item.b_path, + edit_type=diff_item.edit_type, + old_filename=None + if diff_item.a_path == diff_item.b_path + else diff_item.a_path, + ) + # Only add valid files to the diff list + # "bad extensions" are set in the language_extensions.toml file + # a "valid file" is one that is not in the "bad extensions" list + if is_valid_file(info.filename): + self.diff_files.append(info) + + return self.diff_files + + def publish_description(self, pr_title: str, pr_body: str): + try: + self.codecommit_client.publish_description( + pr_number=self.pr_num, + pr_title=pr_title, + pr_body=CodeCommitProvider._add_additional_newlines(pr_body), + ) + except Exception as e: + raise ValueError(f"CodeCommit Cannot publish description for PR: {self.pr_num}") from e + + def publish_comment(self, pr_comment: str, is_temporary: bool = False): + if 
is_temporary: + logging.info(pr_comment) + return + + pr_comment = CodeCommitProvider._remove_markdown_html(pr_comment) + pr_comment = CodeCommitProvider._add_additional_newlines(pr_comment) + + try: + self.codecommit_client.publish_comment( + repo_name=self.repo_name, + pr_number=self.pr_num, + destination_commit=self.pr.destination_commit, + source_commit=self.pr.source_commit, + comment=pr_comment, + ) + except Exception as e: + raise ValueError(f"CodeCommit Cannot publish comment for PR: {self.pr_num}") from e + + def publish_code_suggestions(self, code_suggestions: list) -> bool: + counter = 1 + for suggestion in code_suggestions: + # Verify that each suggestion has the required keys + if not all(key in suggestion for key in ["body", "relevant_file", "relevant_lines_start"]): + logging.warning(f"Skipping code suggestion #{counter}: Each suggestion must have 'body', 'relevant_file', 'relevant_lines_start' keys") + continue + + # Publish the code suggestion to CodeCommit + try: + logging.debug(f"Code Suggestion #{counter} in file: {suggestion['relevant_file']}: {suggestion['relevant_lines_start']}") + self.codecommit_client.publish_comment( + repo_name=self.repo_name, + pr_number=self.pr_num, + destination_commit=self.pr.destination_commit, + source_commit=self.pr.source_commit, + comment=suggestion["body"], + annotation_file=suggestion["relevant_file"], + annotation_line=suggestion["relevant_lines_start"], + ) + except Exception as e: + raise ValueError(f"CodeCommit Cannot publish code suggestions for PR: {self.pr_num}") from e + + counter += 1 + + # The calling function passes in a list of code suggestions, and this function publishes each suggestion one at a time. + # If we were to return False here, the calling function will attempt to publish the same list of code suggestions again, one at a time. + # Since this function publishes the suggestions one at a time anyway, we always return True here to avoid the retry. 
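The required-keys gate at the top of `publish_code_suggestions` can be sketched on its own. `valid_suggestions` below is an illustrative stand-in (not part of this PR) for the inline `all(key in suggestion for key in ...)` check: suggestions missing any required key are skipped rather than raising.

```python
# Illustrative: filter code suggestions down to those carrying every
# key the publisher needs, mirroring the validation in the diff above.

REQUIRED_KEYS = ("body", "relevant_file", "relevant_lines_start")

def valid_suggestions(code_suggestions: list[dict]) -> list[dict]:
    return [s for s in code_suggestions if all(key in s for key in REQUIRED_KEYS)]

suggestions = [
    {"body": "use f-string", "relevant_file": "a.py", "relevant_lines_start": 10},
    {"body": "missing the file key"},  # skipped by the gate
]
assert len(valid_suggestions(suggestions)) == 1
```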
+ return True + + def publish_labels(self, labels): + return [""] # not implemented yet + + def get_labels(self): + return [""] # not implemented yet + + def remove_initial_comment(self): + return "" # not implemented yet + + def publish_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str): + # https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/codecommit/client/post_comment_for_compared_commit.html + raise NotImplementedError("CodeCommit provider does not support publishing inline comments yet") + + def create_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str): + raise NotImplementedError("CodeCommit provider does not support creating inline comments yet") + + def publish_inline_comments(self, comments: list[dict]): + raise NotImplementedError("CodeCommit provider does not support publishing inline comments yet") + + def get_title(self): + return self.pr.get("title", "") + + def get_languages(self): + """ + Returns a dictionary of languages, containing the percentage of each language used in the PR. + + Returns: + - dict: A dictionary where each key is a language name and the corresponding value is the percentage of that language in the PR. + """ + commit_files = self.get_files() + filenames = [ item.filename for item in commit_files ] + extensions = CodeCommitProvider._get_file_extensions(filenames) + + # Calculate the percentage of each file extension in the PR + percentages = CodeCommitProvider._get_language_percentages(extensions) + + # The global language_extension_map is a dictionary of languages, + # where each dictionary item is a BoxList of extensions. + # We want a dictionary of extensions, + # where each dictionary item is a language name. + # We build that language->extension dictionary here in main_extensions_flat. 
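The inversion described in the comment block above, sketched with a toy map (the real `language_extension_map` comes from the settings and maps each language to a BoxList of extensions):

```python
# Illustrative: flip a language -> [extensions] map into an
# extension -> language lookup, as get_languages does.

language_extension_map = {
    "Python": [".py"],
    "C++": [".cpp", ".cc", ".hpp"],
}

main_extensions_flat = {}
for language, extensions in language_extension_map.items():
    for ext in extensions:
        main_extensions_flat[ext] = language

assert main_extensions_flat[".cc"] == "C++"
assert main_extensions_flat[".py"] == "Python"
```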
+ main_extensions_flat = {} + for language, extensions in language_extension_map.items(): + for ext in extensions: + main_extensions_flat[ext] = language + + # Map the file extension/languages to percentages + languages = {} + for ext, pct in percentages.items(): + languages[main_extensions_flat.get(ext, "")] = pct + + return languages + + def get_pr_branch(self): + return self.pr.source_branch + + def get_pr_description_full(self) -> str: + return self.pr.description + + def get_user_id(self): + return -1 # not implemented yet + + def get_issue_comments(self): + raise NotImplementedError("CodeCommit provider does not support issue comments yet") + + def get_repo_settings(self): + # a local ".pr_agent.toml" settings file is optional + settings_filename = ".pr_agent.toml" + return self.codecommit_client.get_file(self.repo_name, settings_filename, self.pr.source_commit, optional=True) + + def add_eyes_reaction(self, issue_comment_id: int) -> Optional[int]: + logging.info("CodeCommit provider does not support eyes reaction yet") + return True + + def remove_reaction(self, issue_comment_id: int, reaction_id: int) -> bool: + logging.info("CodeCommit provider does not support removing reactions yet") + return True + + @staticmethod + def _parse_pr_url(pr_url: str) -> Tuple[str, int]: + """ + Parse the CodeCommit PR URL and return the repository name and PR number. + + Args: + - pr_url: the full AWS CodeCommit pull request URL + + Returns: + - Tuple[str, int]: A tuple containing the repository name and PR number. 
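The URL parsing this method performs can be exercised standalone: split the console URL path and pick out the repository name and PR number. `parse_codecommit_pr_url` is an illustrative helper with the hostname check omitted, not part of this PR.

```python
# Illustrative: extract (repo_name, pr_number) from a CodeCommit console PR URL.
from urllib.parse import urlparse

def parse_codecommit_pr_url(pr_url: str) -> tuple[str, int]:
    path_parts = urlparse(pr_url).path.strip("/").split("/")
    # expected path: codesuite/codecommit/repositories/<repo>/pull-requests/<number>
    if len(path_parts) < 6 or path_parts[4] != "pull-requests":
        raise ValueError(f"Not a CodeCommit PR URL: {pr_url}")
    return path_parts[3], int(path_parts[5])

url = "https://us-east-1.console.aws.amazon.com/codesuite/codecommit/repositories/my-repo/pull-requests/123"
assert parse_codecommit_pr_url(url) == ("my-repo", 123)
```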
+ """ + # Example PR URL: + # https://us-east-1.console.aws.amazon.com/codesuite/codecommit/repositories/__MY_REPO__/pull-requests/123456" + parsed_url = urlparse(pr_url) + + if not CodeCommitProvider._is_valid_codecommit_hostname(parsed_url.netloc): + raise ValueError(f"The provided URL is not a valid CodeCommit URL: {pr_url}") + + path_parts = parsed_url.path.strip("/").split("/") + + if ( + len(path_parts) < 6 + or path_parts[0] != "codesuite" + or path_parts[1] != "codecommit" + or path_parts[2] != "repositories" + or path_parts[4] != "pull-requests" + ): + raise ValueError(f"The provided URL does not appear to be a CodeCommit PR URL: {pr_url}") + + repo_name = path_parts[3] + + try: + pr_number = int(path_parts[5]) + except ValueError as e: + raise ValueError(f"Unable to convert PR number to integer: '{path_parts[5]}'") from e + + return repo_name, pr_number + + @staticmethod + def _is_valid_codecommit_hostname(hostname: str) -> bool: + """ + Check if the provided hostname is a valid AWS CodeCommit hostname. + + This is not an exhaustive check of AWS region names, + but instead uses a regex to check for matching AWS region patterns. + + Args: + - hostname: the hostname to check + + Returns: + - bool: True if the hostname is valid, False otherwise. + """ + return re.match(r"^[a-z]{2}-(gov-)?[a-z]+-\d\.console\.aws\.amazon\.com$", hostname) is not None + + def _get_pr(self): + response = self.codecommit_client.get_pr(self.repo_name, self.pr_num) + + if len(response.targets) == 0: + raise ValueError(f"No files found in CodeCommit PR: {self.pr_num}") + + # TODO: implement support for multiple targets in one CodeCommit PR + # for now, we are only using the first target in the PR + if len(response.targets) > 1: + logging.warning( + "Multiple targets in one PR is not supported for CodeCommit yet. Continuing, using the first target only..." 
+            )
+
+        # Return our object that mimics PullRequest class from the PyGithub library
+        # (This strategy was copied from the LocalGitProvider)
+        mimic = PullRequestCCMimic(response.title, self.diff_files)
+        mimic.description = response.description
+        mimic.source_commit = response.targets[0].source_commit
+        mimic.source_branch = response.targets[0].source_branch
+        mimic.destination_commit = response.targets[0].destination_commit
+        mimic.destination_branch = response.targets[0].destination_branch
+
+        return mimic
+
+    def get_commit_messages(self):
+        return ""  # not implemented yet
+
+    @staticmethod
+    def _add_additional_newlines(body: str) -> str:
+        """
+        Replace single newlines in a PR body with double newlines.
+
+        CodeCommit Markdown does not seem to render as well as GitHub Markdown,
+        so we add additional newlines to the PR body to make it more readable in CodeCommit.
+
+        Args:
+        - body: the PR body
+
+        Returns:
+        - str: the PR body with the double newlines added
+        """
+        return re.sub(r'(?<!\n)\n(?!\n)', '\n\n', body)
+
+    @staticmethod
+    def _remove_markdown_html(comment: str) -> str:
+        """
+        Remove the HTML tags from a PR comment.
+
+        CodeCommit Markdown does not seem to render as well as GitHub Markdown,
+        so we remove the HTML tags from the PR comment to make it more readable in CodeCommit.
+
+        Args:
+        - comment: the PR comment
+
+        Returns:
+        - str: the PR comment with the HTML tags removed
+        """
+        comment = comment.replace("<details>", "")
+        comment = comment.replace("</details>", "")
+        comment = comment.replace("<summary>", "")
+        comment = comment.replace("</summary>", "")
+        return comment
+
+    @staticmethod
+    def _get_edit_type(codecommit_change_type: str):
+        """
+        Convert the CodeCommit change type string to the EDIT_TYPE enum.
+        The CodeCommit change type string is returned from the get_differences SDK method.
+
+        Args:
+        - codecommit_change_type: the CodeCommit change type string
+
+        Returns:
+        - An EDIT_TYPE enum representing the modified, added, deleted, or renamed file in the PR diff.
+        """
+        t = codecommit_change_type.upper()
+        edit_type = None
+        if t == "A":
+            edit_type = EDIT_TYPE.ADDED
+        elif t == "D":
+            edit_type = EDIT_TYPE.DELETED
+        elif t == "M":
+            edit_type = EDIT_TYPE.MODIFIED
+        elif t == "R":
+            edit_type = EDIT_TYPE.RENAMED
+        return edit_type
+
+    @staticmethod
+    def _get_file_extensions(filenames):
+        """
+        Return a list of file extensions from a list of filenames.
+        The returned extensions will include the dot "." prefix,
+        to accommodate for the dots in the existing language_extension_map settings.
+        Filenames with no extension will return an empty string for the extension.
+
+        Args:
+        - filenames: a list of filenames
+
+        Returns:
+        - list: A list of file extensions, including the dot "." prefix.
+        """
+        extensions = []
+        for filename in filenames:
+            filename, ext = os.path.splitext(filename)
+            if ext:
+                extensions.append(ext.lower())
+            else:
+                extensions.append("")
+        return extensions
+
+    @staticmethod
+    def _get_language_percentages(extensions):
+        """
+        Return a dictionary containing the programming language name (as the key),
+        and the percentage that language is used (as the value),
+        given a list of file extensions.
+
+        Args:
+        - extensions: a list of file extensions
+
+        Returns:
+        - dict: A dictionary where each key is a language name and the corresponding value is the percentage of that language in the PR.
+ """ + total_files = len(extensions) + if total_files == 0: + return {} + + # Identify language by file extension and count + lang_count = Counter(extensions) + # Convert counts to percentages + lang_percentage = { + lang: round(count / total_files * 100) for lang, count in lang_count.items() + } + return lang_percentage diff --git a/pr_agent/git_providers/git_provider.py b/pr_agent/git_providers/git_provider.py index 4d711a14..330590a1 100644 --- a/pr_agent/git_providers/git_provider.py +++ b/pr_agent/git_providers/git_provider.py @@ -1,3 +1,4 @@ +import logging from abc import ABC, abstractmethod from dataclasses import dataclass @@ -54,7 +55,7 @@ class GitProvider(ABC): pass @abstractmethod - def publish_code_suggestions(self, code_suggestions: list): + def publish_code_suggestions(self, code_suggestions: list) -> bool: pass @abstractmethod @@ -82,9 +83,30 @@ class GitProvider(ABC): pass @abstractmethod - def get_pr_description(self): + def get_pr_description_full(self) -> str: pass + def get_pr_description(self, *, full: bool = True) -> str: + from pr_agent.config_loader import get_settings + from pr_agent.algo.pr_processing import clip_tokens + max_tokens = get_settings().get("CONFIG.MAX_DESCRIPTION_TOKENS", None) + description = self.get_pr_description_full() if full else self.get_user_description() + if max_tokens: + return clip_tokens(description, max_tokens) + return description + + def get_user_description(self) -> str: + description = (self.get_pr_description_full() or "").strip() + # if the existing description wasn't generated by the pr-agent, just return it as-is + if not description.startswith("## PR Type"): + return description + # if the existing description was generated by the pr-agent, but it doesn't contain the user description, + # return nothing (empty string) because it means there is no user description + if "## User Description:" not in description: + return "" + # otherwise, extract the original user description from the existing 
pr-agent description and return it + return description.split("## User Description:", 1)[1].strip() + @abstractmethod def get_issue_comments(self): pass @@ -116,6 +138,8 @@ def get_main_pr_language(languages, files) -> str: # validate that the specific commit uses the main language extension_list = [] for file in files: + if isinstance(file, str): + file = FilePatchInfo(base_file=None, head_file=None, patch=None, filename=file) extension_list.append(file.filename.rsplit('.')[-1]) # get the most common extension @@ -137,10 +161,11 @@ def get_main_pr_language(languages, files) -> str: most_common_extension == 'scala' and top_language == 'scala' or \ most_common_extension == 'kt' and top_language == 'kotlin' or \ most_common_extension == 'pl' and top_language == 'perl' or \ - most_common_extension == 'swift' and top_language == 'swift': + most_common_extension == top_language: main_language_str = top_language - except Exception: + except Exception as e: + logging.exception(e) pass return main_language_str diff --git a/pr_agent/git_providers/github_provider.py b/pr_agent/git_providers/github_provider.py index c010158d..7e93d18c 100644 --- a/pr_agent/git_providers/github_provider.py +++ b/pr_agent/git_providers/github_provider.py @@ -166,7 +166,7 @@ class GithubProvider(GitProvider): def publish_inline_comments(self, comments: list[dict]): self.pr.create_review(commit=self.last_commit_id, comments=comments) - def publish_code_suggestions(self, code_suggestions: list): + def publish_code_suggestions(self, code_suggestions: list) -> bool: """ Publishes code suggestions as comments on the PR. 
""" @@ -233,10 +233,7 @@ class GithubProvider(GitProvider): def get_pr_branch(self): return self.pr.head.ref - def get_pr_description(self): - max_tokens = get_settings().get("CONFIG.MAX_DESCRIPTION_TOKENS", None) - if max_tokens: - return clip_tokens(self.pr.body, max_tokens) + def get_pr_description_full(self): return self.pr.body def get_user_id(self): diff --git a/pr_agent/git_providers/gitlab_provider.py b/pr_agent/git_providers/gitlab_provider.py index 73a3a2f9..2deae177 100644 --- a/pr_agent/git_providers/gitlab_provider.py +++ b/pr_agent/git_providers/gitlab_provider.py @@ -14,6 +14,9 @@ from .git_provider import EDIT_TYPE, FilePatchInfo, GitProvider logger = logging.getLogger() +class DiffNotFoundError(Exception): + """Raised when the diff for a merge request cannot be found.""" + pass class GitLabProvider(GitProvider): @@ -56,7 +59,7 @@ class GitLabProvider(GitProvider): self.last_diff = self.mr.diffs.list(get_all=True)[-1] except IndexError as e: logger.error(f"Could not get diff for merge request {self.id_mr}") - raise ValueError(f"Could not get diff for merge request {self.id_mr}") from e + raise DiffNotFoundError(f"Could not get diff for merge request {self.id_mr}") from e def _get_pr_file_content(self, file_path: str, branch: str) -> str: @@ -150,16 +153,20 @@ class GitLabProvider(GitProvider): def create_inline_comments(self, comments: list[dict]): raise NotImplementedError("Gitlab provider does not support publishing inline comments yet") - def send_inline_comment(self, body, edit_type, found, relevant_file, relevant_line_in_file, source_line_no, - target_file, target_line_no): + def send_inline_comment(self,body: str,edit_type: str,found: bool,relevant_file: str,relevant_line_in_file: int, + source_line_no: int, target_file: str,target_line_no: int) -> None: if not found: logging.info(f"Could not find position for {relevant_file} {relevant_line_in_file}") else: - d = self.last_diff + # in order to have exact sha's we have to find correct diff for 
this change + diff = self.get_relevant_diff(relevant_file, relevant_line_in_file) + if diff is None: + logger.error(f"Could not get diff for merge request {self.id_mr}") + raise DiffNotFoundError(f"Could not get diff for merge request {self.id_mr}") pos_obj = {'position_type': 'text', 'new_path': target_file.filename, 'old_path': target_file.old_filename if target_file.old_filename else target_file.filename, - 'base_sha': d.base_commit_sha, 'start_sha': d.start_commit_sha, 'head_sha': d.head_commit_sha} + 'base_sha': diff.base_commit_sha, 'start_sha': diff.start_commit_sha, 'head_sha': diff.head_commit_sha} if edit_type == 'deletion': pos_obj['old_line'] = source_line_no - 1 elif edit_type == 'addition': @@ -171,7 +178,24 @@ class GitLabProvider(GitProvider): self.mr.discussions.create({'body': body, 'position': pos_obj}) - def publish_code_suggestions(self, code_suggestions: list): + def get_relevant_diff(self, relevant_file: str, relevant_line_in_file: int) -> Optional[dict]: + changes = self.mr.changes() # Retrieve the changes for the merge request once + if not changes: + logging.error('No changes found for the merge request.') + return None + all_diffs = self.mr.diffs.list(get_all=True) + if not all_diffs: + logging.error('No diffs found for the merge request.') + return None + for diff in all_diffs: + for change in changes['changes']: + if change['new_path'] == relevant_file and relevant_line_in_file in change['diff']: + return diff + logging.debug( + f'No relevant diff found for {relevant_file} {relevant_line_in_file}. 
Falling back to last diff.') + return self.last_diff # fallback to last_diff if no relevant diff is found + + def publish_code_suggestions(self, code_suggestions: list) -> bool: for suggestion in code_suggestions: try: body = suggestion['body'] @@ -275,10 +299,7 @@ class GitLabProvider(GitProvider): def get_pr_branch(self): return self.mr.source_branch - def get_pr_description(self): - max_tokens = get_settings().get("CONFIG.MAX_DESCRIPTION_TOKENS", None) - if max_tokens: - return clip_tokens(self.mr.description, max_tokens) + def get_pr_description_full(self): return self.mr.description def get_issue_comments(self): diff --git a/pr_agent/git_providers/local_git_provider.py b/pr_agent/git_providers/local_git_provider.py index a4f21969..e6ee1456 100644 --- a/pr_agent/git_providers/local_git_provider.py +++ b/pr_agent/git_providers/local_git_provider.py @@ -130,7 +130,7 @@ class LocalGitProvider(GitProvider): relevant_lines_start: int, relevant_lines_end: int): raise NotImplementedError('Publishing code suggestions is not implemented for the local git provider') - def publish_code_suggestions(self, code_suggestions: list): + def publish_code_suggestions(self, code_suggestions: list) -> bool: raise NotImplementedError('Publishing code suggestions is not implemented for the local git provider') def publish_labels(self, labels): @@ -158,7 +158,7 @@ class LocalGitProvider(GitProvider): def get_user_id(self): return -1 # Not used anywhere for the local provider, but required by the interface - def get_pr_description(self): + def get_pr_description_full(self): commits_diff = list(self.repo.iter_commits(self.target_branch_name + '..HEAD')) # Get the commit messages and concatenate commit_messages = " ".join([commit.message for commit in commits_diff]) diff --git a/pr_agent/secret_providers/__init__.py b/pr_agent/secret_providers/__init__.py new file mode 100644 index 00000000..1cc3ea7b --- /dev/null +++ b/pr_agent/secret_providers/__init__.py @@ -0,0 +1,16 @@ +from 
pr_agent.config_loader import get_settings + + +def get_secret_provider(): + try: + provider_id = get_settings().config.secret_provider + except AttributeError as e: + raise ValueError("secret_provider is a required attribute in the configuration file") from e + try: + if provider_id == 'google_cloud_storage': + from pr_agent.secret_providers.google_cloud_storage_secret_provider import GoogleCloudStorageSecretProvider + return GoogleCloudStorageSecretProvider() + else: + raise ValueError(f"Unknown secret provider: {provider_id}") + except Exception as e: + raise ValueError(f"Failed to initialize secret provider {provider_id}") from e diff --git a/pr_agent/secret_providers/google_cloud_storage_secret_provider.py b/pr_agent/secret_providers/google_cloud_storage_secret_provider.py new file mode 100644 index 00000000..18db5c4b --- /dev/null +++ b/pr_agent/secret_providers/google_cloud_storage_secret_provider.py @@ -0,0 +1,35 @@ +import ujson + +from google.cloud import storage + +from pr_agent.config_loader import get_settings +from pr_agent.git_providers.gitlab_provider import logger +from pr_agent.secret_providers.secret_provider import SecretProvider + + +class GoogleCloudStorageSecretProvider(SecretProvider): + def __init__(self): + try: + self.client = storage.Client.from_service_account_info(ujson.loads(get_settings().google_cloud_storage. 
+ service_account)) + self.bucket_name = get_settings().google_cloud_storage.bucket_name + self.bucket = self.client.bucket(self.bucket_name) + except Exception as e: + logger.error(f"Failed to initialize Google Cloud Storage Secret Provider: {e}") + raise e + + def get_secret(self, secret_name: str) -> str: + try: + blob = self.bucket.blob(secret_name) + return blob.download_as_string() + except Exception as e: + logger.error(f"Failed to get secret {secret_name} from Google Cloud Storage: {e}") + return "" + + def store_secret(self, secret_name: str, secret_value: str): + try: + blob = self.bucket.blob(secret_name) + blob.upload_from_string(secret_value) + except Exception as e: + logger.error(f"Failed to store secret {secret_name} in Google Cloud Storage: {e}") + raise e diff --git a/pr_agent/secret_providers/secret_provider.py b/pr_agent/secret_providers/secret_provider.py new file mode 100644 index 00000000..df1e7780 --- /dev/null +++ b/pr_agent/secret_providers/secret_provider.py @@ -0,0 +1,12 @@ +from abc import ABC, abstractmethod + + +class SecretProvider(ABC): + + @abstractmethod + def get_secret(self, secret_name: str) -> str: + pass + + @abstractmethod + def store_secret(self, secret_name: str, secret_value: str): + pass diff --git a/pr_agent/servers/atlassian-connect.json b/pr_agent/servers/atlassian-connect.json new file mode 100644 index 00000000..1ff50865 --- /dev/null +++ b/pr_agent/servers/atlassian-connect.json @@ -0,0 +1,33 @@ +{ + "name": "CodiumAI PR-Agent", + "description": "CodiumAI PR-Agent", + "key": "app_key", + "vendor": { + "name": "CodiumAI", + "url": "https://codium.ai" + }, + "authentication": { + "type": "jwt" + }, + "baseUrl": "base_url", + "lifecycle": { + "installed": "/installed", + "uninstalled": "/uninstalled" + }, + "scopes": [ + "account", + "repository", + "pullrequest" + ], + "contexts": [ + "account" + ], + "modules": { + "webhooks": [ + { + "event": "*", + "url": "/webhook" + } + ] + } +} \ No newline at end of file diff 
--git a/pr_agent/servers/bitbucket_app.py b/pr_agent/servers/bitbucket_app.py new file mode 100644 index 00000000..cc6491d4 --- /dev/null +++ b/pr_agent/servers/bitbucket_app.py @@ -0,0 +1,139 @@ +import copy +import hashlib +import json +import logging +import os +import sys +import time + +import jwt +import requests +import uvicorn +from fastapi import APIRouter, FastAPI, Request, Response +from starlette.background import BackgroundTasks +from starlette.middleware import Middleware +from starlette.responses import JSONResponse +from starlette_context import context +from starlette_context.middleware import RawContextMiddleware + +from pr_agent.agent.pr_agent import PRAgent +from pr_agent.config_loader import get_settings, global_settings +from pr_agent.secret_providers import get_secret_provider + +logging.basicConfig(stream=sys.stdout, level=logging.INFO) +router = APIRouter() +secret_provider = get_secret_provider() + +async def get_bearer_token(shared_secret: str, client_key: str): + try: + now = int(time.time()) + url = "https://bitbucket.org/site/oauth2/access_token" + canonical_url = "GET&/site/oauth2/access_token&" + qsh = hashlib.sha256(canonical_url.encode("utf-8")).hexdigest() + app_key = get_settings().bitbucket.app_key + + payload = { + "iss": app_key, + "iat": now, + "exp": now + 240, + "qsh": qsh, + "sub": client_key, + } + token = jwt.encode(payload, shared_secret, algorithm="HS256") + payload = 'grant_type=urn%3Abitbucket%3Aoauth2%3Ajwt' + headers = { + 'Authorization': f'JWT {token}', + 'Content-Type': 'application/x-www-form-urlencoded' + } + response = requests.request("POST", url, headers=headers, data=payload) + bearer_token = response.json()["access_token"] + return bearer_token + except Exception as e: + logging.error(f"Failed to get bearer token: {e}") + raise e + +@router.get("/") +async def handle_manifest(request: Request, response: Response): + cur_dir = os.path.dirname(os.path.abspath(__file__)) + manifest = 
open(os.path.join(cur_dir, "atlassian-connect.json"), "rt").read() + try: + manifest = manifest.replace("app_key", get_settings().bitbucket.app_key) + manifest = manifest.replace("base_url", get_settings().bitbucket.base_url) + except Exception: + logging.error("Failed to replace app_key in Bitbucket manifest, trying to continue") + manifest_obj = json.loads(manifest) + return JSONResponse(manifest_obj) + +@router.post("/webhook") +async def handle_github_webhooks(background_tasks: BackgroundTasks, request: Request): + print(request.headers) + jwt_header = request.headers.get("authorization", None) + if jwt_header: + input_jwt = jwt_header.split(" ")[1] + data = await request.json() + print(data) + async def inner(): + try: + owner = data["data"]["repository"]["owner"]["username"] + secrets = json.loads(secret_provider.get_secret(owner)) + shared_secret = secrets["shared_secret"] + client_key = secrets["client_key"] + jwt.decode(input_jwt, shared_secret, audience=client_key, algorithms=["HS256"]) + bearer_token = await get_bearer_token(shared_secret, client_key) + context['bitbucket_bearer_token'] = bearer_token + context["settings"] = copy.deepcopy(global_settings) + event = data["event"] + agent = PRAgent() + if event == "pullrequest:created": + pr_url = data["data"]["pullrequest"]["links"]["html"]["href"] + await agent.handle_request(pr_url, "review") + elif event == "pullrequest:comment_created": + pr_url = data["data"]["pullrequest"]["links"]["html"]["href"] + comment_body = data["data"]["comment"]["content"]["raw"] + await agent.handle_request(pr_url, comment_body) + except Exception as e: + logging.error(f"Failed to handle webhook: {e}") + background_tasks.add_task(inner) + return "OK" + +@router.get("/webhook") +async def handle_github_webhooks(request: Request, response: Response): + return "Webhook server online!"
+ +@router.post("/installed") +async def handle_installed_webhooks(request: Request, response: Response): + try: + print(request.headers) + data = await request.json() + print(data) + shared_secret = data["sharedSecret"] + client_key = data["clientKey"] + username = data["principal"]["username"] + secrets = { + "shared_secret": shared_secret, + "client_key": client_key + } + secret_provider.store_secret(username, json.dumps(secrets)) + except Exception as e: + logging.error(f"Failed to register user: {e}") + return JSONResponse({"error": "Unable to register user"}, status_code=500) + +@router.post("/uninstalled") +async def handle_uninstalled_webhooks(request: Request, response: Response): + data = await request.json() + print(data) + + +def start(): + get_settings().set("CONFIG.PUBLISH_OUTPUT_PROGRESS", False) + get_settings().set("CONFIG.GIT_PROVIDER", "bitbucket") + get_settings().set("PR_DESCRIPTION.PUBLISH_DESCRIPTION_AS_COMMENT", True) + middleware = [Middleware(RawContextMiddleware)] + app = FastAPI(middleware=middleware) + app.include_router(router) + + uvicorn.run(app, host="0.0.0.0", port=int(os.getenv("PORT", "3000"))) + + +if __name__ == '__main__': + start() diff --git a/pr_agent/servers/github_app.py b/pr_agent/servers/github_app.py index 18943ae8..10584e54 100644 --- a/pr_agent/servers/github_app.py +++ b/pr_agent/servers/github_app.py @@ -1,6 +1,8 @@ import copy import logging import sys +import os +import time from typing import Any, Dict import uvicorn @@ -10,11 +12,12 @@ from starlette_context import context from starlette_context.middleware import RawContextMiddleware from pr_agent.agent.pr_agent import PRAgent +from pr_agent.algo.utils import update_settings_from_args from pr_agent.config_loader import get_settings, global_settings from pr_agent.git_providers import get_git_provider from pr_agent.servers.utils import verify_signature -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) +logging.basicConfig(stream=sys.stdout, 
level=logging.INFO) router = APIRouter() @@ -34,7 +37,8 @@ async def handle_github_webhooks(request: Request, response: Response): context["installation_id"] = installation_id context["settings"] = copy.deepcopy(global_settings) - return await handle_request(body) + response = await handle_request(body, event=request.headers.get("X-GitHub-Event", None)) + return response or {} @router.post("/api/v1/marketplace_webhooks") @@ -48,71 +52,125 @@ async def get_body(request): except Exception as e: logging.error("Error parsing request body", e) raise HTTPException(status_code=400, detail="Error parsing request body") from e - body_bytes = await request.body() - signature_header = request.headers.get('x-hub-signature-256', None) webhook_secret = getattr(get_settings().github, 'webhook_secret', None) if webhook_secret: + body_bytes = await request.body() + signature_header = request.headers.get('x-hub-signature-256', None) verify_signature(body_bytes, webhook_secret, signature_header) return body +_duplicate_requests_cache = {} -async def handle_request(body: Dict[str, Any]): +async def handle_request(body: Dict[str, Any], event: str): """ Handle incoming GitHub webhook requests. Args: body: The request body. + event: The GitHub event type. """ action = body.get("action") if not action: return {} agent = PRAgent() + bot_user = get_settings().github_app.bot_user + logging.info(f"action: '{action}'") + logging.info(f"event: '{event}'") + if get_settings().github_app.duplicate_requests_cache and _is_duplicate_request(body): + return {} + + # handle all sorts of comment events (e.g. 
issue_comment) if action == 'created': if "comment" not in body: return {} comment_body = body.get("comment", {}).get("body") sender = body.get("sender", {}).get("login") - if sender and 'bot' in sender: + if sender and bot_user in sender: + logging.info(f"Ignoring comment from {bot_user} user") return {} - if "issue" not in body or "pull_request" not in body["issue"]: + logging.info(f"Processing comment from {sender} user") + if "issue" in body and "pull_request" in body["issue"] and "url" in body["issue"]["pull_request"]: + api_url = body["issue"]["pull_request"]["url"] + elif "comment" in body and "pull_request_url" in body["comment"]: + api_url = body["comment"]["pull_request_url"] + else: return {} - pull_request = body["issue"]["pull_request"] - api_url = pull_request.get("url") + logging.info(f"Handling comment because of event={event} and action={action}") comment_id = body.get("comment", {}).get("id") provider = get_git_provider()(pr_url=api_url) await agent.handle_request(api_url, comment_body, notify=lambda: provider.add_eyes_reaction(comment_id)) - - elif action == "opened" or 'reopened' in action: + # handle pull_request event: + # automatically review opened/reopened/ready_for_review PRs as long as they're not in draft, + # as well as direct review requests from the bot + elif event == 'pull_request': pull_request = body.get("pull_request") if not pull_request: return {} api_url = pull_request.get("url") if not api_url: return {} - await agent.handle_request(api_url, "/review") + if pull_request.get("draft", True) or pull_request.get("state") != "open" or pull_request.get("user", {}).get("login", "") == bot_user: + return {} + if action in get_settings().github_app.handle_pr_actions: + if action == "review_requested": + if body.get("requested_reviewer", {}).get("login", "") != bot_user: + return {} + if pull_request.get("created_at") == pull_request.get("updated_at"): + # avoid double reviews when opening a PR for the first time + return {} + 
logging.info(f"Performing review because of event={event} and action={action}") + for command in get_settings().github_app.pr_commands: + split_command = command.split(" ") + command = split_command[0] + args = split_command[1:] + other_args = update_settings_from_args(args) + new_command = ' '.join([command] + other_args) + logging.info(f"Performing command: {new_command}") + await agent.handle_request(api_url, new_command) + logging.info("event or action does not require handling") return {} +def _is_duplicate_request(body: Dict[str, Any]) -> bool: + """ + In some deployments its possible to get duplicate requests if the handling is long, + This function checks if the request is duplicate and if so - ignores it. + """ + request_hash = hash(str(body)) + logging.info(f"request_hash: {request_hash}") + request_time = time.monotonic() + ttl = get_settings().github_app.duplicate_requests_cache_ttl # in seconds + to_delete = [key for key, key_time in _duplicate_requests_cache.items() if request_time - key_time > ttl] + for key in to_delete: + del _duplicate_requests_cache[key] + is_duplicate = request_hash in _duplicate_requests_cache + _duplicate_requests_cache[request_hash] = request_time + if is_duplicate: + logging.info(f"Ignoring duplicate request {request_hash}") + return is_duplicate + + @router.get("/") async def root(): return {"status": "ok"} def start(): - # Override the deployment type to app - get_settings().set("GITHUB.DEPLOYMENT_TYPE", "app") + if get_settings().github_app.override_deployment_type: + # Override the deployment type to app + get_settings().set("GITHUB.DEPLOYMENT_TYPE", "app") get_settings().set("CONFIG.PUBLISH_OUTPUT_PROGRESS", False) middleware = [Middleware(RawContextMiddleware)] app = FastAPI(middleware=middleware) app.include_router(router) - uvicorn.run(app, host="0.0.0.0", port=3000) + uvicorn.run(app, host="0.0.0.0", port=int(os.environ.get("PORT", "3000"))) if __name__ == '__main__': - start() \ No newline at end of file + start() 
diff --git a/pr_agent/servers/help.py b/pr_agent/servers/help.py index 1ee7fc4d..0f3f3caa 100644 --- a/pr_agent/servers/help.py +++ b/pr_agent/servers/help.py @@ -1,7 +1,7 @@ commands_text = "> **/review [-i]**: Request a review of your Pull Request. For an incremental review, which only " \ "considers changes since the last review, include the '-i' option.\n" \ "> **/describe**: Modify the PR title and description based on the contents of the PR.\n" \ - "> **/improve**: Suggest improvements to the code in the PR. \n" \ + "> **/improve [--extended]**: Suggest improvements to the code in the PR. Extended mode employs several calls, and provides more thorough feedback. \n" \ "> **/ask \\**: Pose a question about the PR.\n" \ "> **/update_changelog**: Update the changelog based on the PR's contents.\n\n" \ ">To edit any configuration parameter from **configuration.toml**, add --config_path=new_value\n" \ diff --git a/pr_agent/settings/configuration.toml b/pr_agent/settings/configuration.toml index 0c502df9..f8abd555 100644 --- a/pr_agent/settings/configuration.toml +++ b/pr_agent/settings/configuration.toml @@ -10,19 +10,24 @@ use_repo_settings_file=true ai_timeout=180 max_description_tokens = 500 max_commits_tokens = 500 +litellm_debugger=false +secret_provider="google_cloud_storage" [pr_reviewer] # /review # -require_focused_review=true +require_focused_review=false require_score_review=false require_tests_review=true require_security_review=true -num_code_suggestions=3 +num_code_suggestions=4 inline_code_comments = false ask_and_reflect=false +automatic_review=true extra_instructions = "" [pr_description] # /describe # publish_description_as_comment=false +add_original_user_description=false +keep_original_user_title=false extra_instructions = "" [pr_questions] # /ask # @@ -30,6 +35,12 @@ extra_instructions = "" [pr_code_suggestions] # /improve # num_code_suggestions=4 extra_instructions = "" +rank_suggestions = false +# params for '/improve --extended' mode
+num_code_suggestions_per_chunk=8 +rank_extended_suggestions = true +max_number_of_calls = 5 +final_clip_factor = 0.9 [pr_update_changelog] # /update_changelog # push_changelog_changes=false @@ -42,6 +53,21 @@ extra_instructions = "" deployment_type = "user" ratelimit_retries = 5 +[github_app] +# these toggles allow running the github app from custom deployments +bot_user = "github-actions[bot]" +override_deployment_type = true +# in some deployments it's possible to get duplicate requests if the handling is long, +# these settings are used to avoid handling duplicate requests. +duplicate_requests_cache = false +duplicate_requests_cache_ttl = 60 # in seconds +# settings for "pull_request" event +handle_pr_actions = ['opened', 'reopened', 'ready_for_review', 'review_requested'] +pr_commands = [ + "/describe --pr_description.add_original_user_description=true --pr_description.keep_original_user_title=true", + "/auto_review", +] + [gitlab] # URL to the gitlab service url = "https://gitlab.com" diff --git a/pr_agent/settings/pr_code_suggestions_prompts.toml b/pr_agent/settings/pr_code_suggestions_prompts.toml index 76a3cb6b..6867a883 100644 --- a/pr_agent/settings/pr_code_suggestions_prompts.toml +++ b/pr_agent/settings/pr_code_suggestions_prompts.toml @@ -1,90 +1,130 @@ [pr_code_suggestions_prompt] -system="""You are a language model called CodiumAI-PR-Code-Reviewer. -Your task is to provide meaningfull non-trivial code suggestions to improve the new code in a PR (the '+' lines). -- Try to give important suggestions like fixing code problems, issues and bugs. As a second priority, provide suggestions for meaningfull code improvements, like performance, vulnerability, modularity, and best practices. -- Suggestions should refer only to the 'new hunk' code, and focus on improving the new added code lines, with '+'. -- Provide the exact line number range (inclusive) for each issue. -- Assume there is additional code in the relevant file that is not included in the diff.
+system="""You are a language model called PR-Code-Reviewer, that specializes in suggesting code improvements for Pull Request (PR). +Your task is to provide meaningful and actionable code suggestions, to improve the new code presented in a PR. + +Example for a PR Diff input: +' +## src/file1.py + +@@ -12,3 +12,5 @@ def func1(): +__new hunk__ +12 code line that already existed in the file... +13 code line that already existed in the file.... +14 +new code line1 added in the PR +15 +new code line2 added in the PR +16 code line that already existed in the file... +__old hunk__ + code line that already existed in the file... +-code line that was removed in the PR + code line that already existed in the file... + + +@@ ... @@ def func2(): +__new hunk__ +... +__old hunk__ +... + + +## src/file2.py +... +' + +Specific instructions: - Provide up to {{ num_code_suggestions }} code suggestions. -- Make sure not to provide suggestions repeating modifications already implemented in the new PR code (the '+' lines). -- Don't output line numbers in the 'improved code' snippets. +- Prioritize suggestions that address major problems, issues and bugs in the code. + As a second priority, suggestions should focus on best practices, code readability, maintainability, enhancments, performance, and other aspects. + Don't suggest to add docstring or type hints. + Try to provide diverse and insightful suggestions. +- Suggestions should refer only to code from the '__new hunk__' sections, and focus on new lines of code (lines starting with '+'). + Avoid making suggestions that have already been implemented in the PR code. For example, if you want to add logs, or change a variable to const, or anything else, make sure it isn't already in the '__new hunk__' code. + For each suggestion, make sure to take into consideration also the context, meaning the lines before and after the relevant code. +- Provide the exact line numbers range (inclusive) for each issue. 
+- Assume there is additional relevant code, that is not included in the diff. + {%- if extra_instructions %} Extra instructions from the user: {{ extra_instructions }} -{% endif %} +{%- endif %} -You must use the following JSON schema to format your answer: -```json -{ - "Code suggestions": { - "type": "array", - "minItems": 1, - "maxItems": {{ num_code_suggestions }}, - "uniqueItems": "true", - "items": { - "relevant file": { - "type": "string", - "description": "the relevant file full path" - }, - "suggestion content": { - "type": "string", - "description": "a concrete suggestion for meaningfully improving the new PR code." - }, - "existing code": { - "type": "string", - "description": "a code snippet showing authentic relevant code lines from a 'new hunk' section. It must be continuous, correctly formatted and indented, and without line numbers." - }, - "relevant lines": { - "type": "string", - "description": "the relevant lines in the 'new hunk' sections, in the format of 'start_line-end_line'. For example: '10-15'. They should be derived from the hunk line numbers, and correspond to the 'existing code' snippet above." - }, - "improved code": { - "type": "string", - "description": "a new code snippet that can be used to replace the relevant lines in 'new hunk' code. Replacement suggestions should be complete, correctly formatted and indented, and without line numbers." - } - } - } -} +You must use the following YAML schema to format your answer: +```yaml +Code suggestions: + type: array + minItems: 1 + maxItems: {{ num_code_suggestions }} + uniqueItems: true + items: + relevant file: + type: string + description: the relevant file full path + suggestion content: + type: string + description: |- + a concrete suggestion for meaningfully improving the new PR code. + existing code: + type: string + description: |- + a code snippet showing the relevant code lines from a '__new hunk__' section. 
+ It must be contiguous, correctly formatted and indented, and without line numbers. + relevant lines start: + type: integer + description: |- + The relevant line number from a '__new hunk__' section where the suggestion starts (inclusive). + Should be derived from the hunk line numbers, and correspond to the 'existing code' snippet above. + relevant lines end: + type: integer + description: |- + The relevant line number from a '__new hunk__' section where the suggestion ends (inclusive). + Should be derived from the hunk line numbers, and correspond to the 'existing code' snippet above. + improved code: + type: string + description: |- + a new code snippet that can be used to replace the relevant lines in '__new hunk__' code. + Replacement suggestions should be complete, correctly formatted and indented, and without line numbers. ``` -Example input: -' -## src/file1.py ----new_hunk--- +Example output: +```yaml +Code suggestions: + - relevant file: |- + src/file1.py + suggestion content: |- + Add a docstring to func1() + existing code: |- + def func1(): + relevant lines start: 12 + relevant lines end: 12 + improved code: |- + ... ``` -[new hunk code, annotated with line numbers] -``` ----old_hunk--- -``` -[old hunk code] -``` -... -' + +Each YAML output MUST be after a newline, indented, with block scalar indicator ('|-'). Don't repeat the prompt in the answer, and avoid outputting the 'type' and 'description' fields. 
""" user="""PR Info: -Title: '{{title}}' -Branch: '{{branch}}' -Description: '{{description}}' -{%- if language %} -Main language: {{language}} -{%- endif %} -{%- if commit_messages_str %} -Commit messages: -{{commit_messages_str}} +Title: '{{title}}' + +Branch: '{{branch}}' + +Description: '{{description}}' + +{%- if language %} + +Main language: {{language}} {%- endif %} The PR Diff: ``` -{{diff}} +{{- diff|trim }} ``` -Response (should be a valid JSON, and nothing else): -```json +Response (should be a valid YAML, and nothing else): +```yaml """ diff --git a/pr_agent/settings/pr_reviewer_prompts.toml b/pr_agent/settings/pr_reviewer_prompts.toml index cdf7f731..7c21f433 100644 --- a/pr_agent/settings/pr_reviewer_prompts.toml +++ b/pr_agent/settings/pr_reviewer_prompts.toml @@ -1,13 +1,36 @@ [pr_review_prompt] -system="""You are CodiumAI-PR-Reviewer, a language model designed to review git pull requests. -Your task is to provide constructive and concise feedback for the PR, and also provide meaningfull code suggestions to improve the new PR code (the '+' lines). +system="""You are PR-Reviewer, a language model designed to review git pull requests. +Your task is to provide constructive and concise feedback for the PR, and also provide meaningful code suggestions. + +Example PR Diff input: +' +## src/file1.py + +@@ -12,5 +12,5 @@ def func1(): +code line that already existed in the file... +code line that already existed in the file.... +-code line that was removed in the PR ++new code line added in the PR + code line that already existed in the file... + code line that already existed in the file... + +@@ ... @@ def func2(): +... + + +## src/file2.py +... +' + +Thre review should focus on new code added in the PR (lines starting with '+'), and not on code that already existed in the file (lines starting with '-', or without prefix). + {%- if num_code_suggestions > 0 %} - Provide up to {{ num_code_suggestions }} code suggestions. 
-- Try to focus on the most important suggestions, like fixing code problems, issues and bugs. As a second priority, provide suggestions for meaningfull code improvements, like performance, vulnerability, modularity, and best practices. -- Suggestions should focus on improving the new added code lines. -- Make sure not to provide suggestions repeating modifications already implemented in the new PR code (the '+' lines). +- Focus on important suggestions like fixing code problems, issues and bugs. As a second priority, provide suggestions for meaningful code improvements, like performance, vulnerability, modularity, and best practices. +- Avoid making suggestions that have already been implemented in the PR code. For example, if you want to add logs, or change a variable to const, or anything else, make sure it isn't already in the PR code. +- Don't suggest to add docstring or type hints. +- Suggestions should focus on improving the new code added in the PR (lines starting with '+') {%- endif %} -- If needed, each YAML output should be in block scalar format ('|-') {%- if extra_instructions %} @@ -21,6 +44,9 @@ PR Analysis: Main theme: type: string description: a short explanation of the PR + PR summary: + type: string + description: summary of the PR in 2-3 sentences. 
Type of PR: type: string enum: @@ -33,7 +59,7 @@ PR Analysis: {%- if require_score %} Score: type: int - description: >- + description: |- Rate this PR on a scale of 0-100 (inclusive), where 0 means the worst possible PR code, and 100 means PR code of the highest quality, without any bugs or performance issues, that is ready to be merged immediately and @@ -47,13 +73,13 @@ PR Analysis: {%- if question_str %} Insights from user's answer: type: string - description: >- + description: |- shortly summarize the insights you gained from the user's answers to the questions {%- endif %} {%- if require_focused %} Focused PR: type: string - description: >- + description: |- Is this a focused PR, in the sense that all the PR code diff changes are united under a single focused theme ? If the theme is too broad, or the PR code diff changes are too scattered, then the PR is not focused. Explain @@ -62,12 +88,11 @@ PR Analysis: PR Feedback: General suggestions: type: string - description: >- + description: |- General suggestions and feedback for the contributors and maintainers of this PR. May include important suggestions for the overall structure, primary purpose, best practices, critical bugs, and other aspects of the - PR. Don't address PR title and description, or lack of tests. Explain your - suggestions. + PR. Don't address PR title and description, or lack of tests. Explain your suggestions. {%- if num_code_suggestions > 0 %} Code feedback: type: array @@ -79,7 +104,7 @@ PR Feedback: description: the relevant file full path suggestion: type: string - description: | + description: |- a concrete suggestion for meaningfully improving the new PR code. Also describe how, specifically, the suggestion can be applied to new PR code. Add tags with importance measure that matches each suggestion @@ -87,9 +112,9 @@ PR Feedback: adding docstrings, renaming PR title and description, or linter like. 
relevant line: type: string - description: | + description: |- a single code line taken from the relevant file, to which the suggestion applies. - The line should be a '+' line. + The code line should start with a '+'. Make sure to output the line exactly as it appears in the relevant file {%- endif %} {%- if require_security %} @@ -104,22 +129,29 @@ PR Feedback: Example output: ```yaml PR Analysis: - Main theme: xxx - Type of PR: Bug fix + Main theme: |- + xxx + PR summary: |- + xxx + Type of PR: |- + Bug fix {%- if require_score %} Score: 89 {%- endif %} - Relevant tests added: No + Relevant tests added: |- + No {%- if require_focused %} Focused PR: no, because ... {%- endif %} PR Feedback: - General PR suggestions: ... + General PR suggestions: |- + ... {%- if num_code_suggestions > 0 %} Code feedback: - relevant file: |- directory/xxx.py - suggestion: xxx [important] + suggestion: |- + xxx [important] relevant line: |- xxx ... @@ -129,7 +161,7 @@ PR Feedback: {%- endif %} ``` -Make sure to output a valid YAML. Use multi-line block scalar ('|') if needed. +Each YAML output MUST be after a newline, indented, with block scalar indicator ('|-'). Don't repeat the prompt in the answer, and avoid outputting the 'type' and 'description' fields. """ @@ -161,7 +193,7 @@ The PR Git Diff: ``` {{diff}} ``` -Note that lines in the diff body are prefixed with a symbol that represents the type of change: '-' for deletions, '+' for additions, and ' ' (a space) for unchanged lines. +Note that lines in the diff body are prefixed with a symbol that represents the type of change: '-' for deletions, '+' for additions. Focus on the '+' lines. 
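The prompt changes above systematically replace folded block scalars (`>-`) with literal ones (`|-`). The difference matters for model output parsing: folded scalars collapse newlines into spaces, literal scalars preserve them, and the strip indicator (`-`) drops the trailing newline in both cases. A minimal sketch of the distinction (assumes PyYAML, which this repo already pins in `requirements.txt`):

```python
import yaml  # PyYAML, an existing dependency of this repo

# Folded scalar ('>-'): newlines become spaces; strip chomping drops the final newline
folded = yaml.safe_load("description: >-\n  line one\n  line two\n")["description"]

# Literal scalar ('|-'): newlines are preserved; strip chomping drops the final newline
literal = yaml.safe_load("description: |-\n  line one\n  line two\n")["description"]

print(folded)   # line one line two
print(literal)  # line one (newline) line two
```

Because the downstream code feeds model output straight into a YAML loader, the literal style keeps multi-line suggestions intact instead of silently joining them.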
Response (should be a valid YAML, and nothing else): ```yaml diff --git a/pr_agent/settings/pr_sort_code_suggestions_prompts.toml b/pr_agent/settings/pr_sort_code_suggestions_prompts.toml new file mode 100644 index 00000000..16b6e861 --- /dev/null +++ b/pr_agent/settings/pr_sort_code_suggestions_prompts.toml @@ -0,0 +1,46 @@ +[pr_sort_code_suggestions_prompt] +system=""" +""" + +user="""You are given a list of code suggestions to improve a PR: + +{{ suggestion_str|trim }} + + +Your task is to sort the code suggestions by their order of importance, and return a list with sorting order. +The sorting order is a list of pairs, where each pair contains the index of the suggestion in the original list. +Rank the suggestions based on their importance to improving the PR, with critical issues first and minor issues last. + +You must use the following YAML schema to format your answer: +```yaml +Sort Order: + type: array + maxItems: {{ suggestion_list|length }} + uniqueItems: true + items: + suggestion number: + type: integer + minimum: 1 + maximum: {{ suggestion_list|length }} + importance order: + type: integer + minimum: 1 + maximum: {{ suggestion_list|length }} +``` + +Example output: +```yaml +Sort Order: + - suggestion number: 1 + importance order: 2 + - suggestion number: 2 + importance order: 3 + - suggestion number: 3 + importance order: 1 +``` + +Make sure to output a valid YAML. Use multi-line block scalar ('|') if needed. +Don't repeat the prompt in the answer, and avoid outputting the 'type' and 'description' fields. 
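The 'Sort Order' schema above pairs each suggestion's original index with its importance rank, so applying the model's answer is a 1-based index permutation. A sketch using the example output from the prompt (the suggestion texts are hypothetical):

```python
# Hypothetical suggestions, in their original order
suggestions = ["guard against None", "rename a variable", "remove dead code"]

# The example 'Sort Order' answer from the prompt above
sort_order = [
    {"suggestion number": 1, "importance order": 2},
    {"suggestion number": 2, "importance order": 3},
    {"suggestion number": 3, "importance order": 1},
]

# Place each suggestion at its importance rank (both sides are 1-based)
ranked = [None] * len(suggestions)
for entry in sort_order:
    ranked[entry["importance order"] - 1] = suggestions[entry["suggestion number"] - 1]

print(ranked)  # most important suggestion first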
+Response (should be a valid YAML, and nothing else): +```yaml +""" diff --git a/pr_agent/tools/pr_code_suggestions.py b/pr_agent/tools/pr_code_suggestions.py index e759d61d..ba45598e 100644 --- a/pr_agent/tools/pr_code_suggestions.py +++ b/pr_agent/tools/pr_code_suggestions.py @@ -1,14 +1,13 @@ import copy -import json import logging import textwrap - +from typing import List, Dict from jinja2 import Environment, StrictUndefined from pr_agent.algo.ai_handler import AiHandler -from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models +from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models, get_pr_multi_diffs from pr_agent.algo.token_handler import TokenHandler -from pr_agent.algo.utils import try_fix_json +from pr_agent.algo.utils import load_yaml from pr_agent.config_loader import get_settings from pr_agent.git_providers import BitbucketProvider, get_git_provider from pr_agent.git_providers.git_provider import get_main_pr_language @@ -22,6 +21,13 @@ class PRCodeSuggestions: self.git_provider.get_languages(), self.git_provider.get_files() ) + # extended mode + self.is_extended = any(["extended" in arg for arg in args]) + if self.is_extended: + num_code_suggestions = get_settings().pr_code_suggestions.num_code_suggestions_per_chunk + else: + num_code_suggestions = get_settings().pr_code_suggestions.num_code_suggestions + self.ai_handler = AiHandler() self.patches_diff = None self.prediction = None @@ -32,7 +38,7 @@ class PRCodeSuggestions: "description": self.git_provider.get_pr_description(), "language": self.main_language, "diff": "", # empty diff for initial calculation - "num_code_suggestions": get_settings().pr_code_suggestions.num_code_suggestions, + "num_code_suggestions": num_code_suggestions, "extra_instructions": get_settings().pr_code_suggestions.extra_instructions, "commit_messages_str": self.git_provider.get_commit_messages(), } @@ -42,18 +48,26 @@ class PRCodeSuggestions: 
get_settings().pr_code_suggestions_prompt.user) async def run(self): - assert type(self.git_provider) != BitbucketProvider, "Bitbucket is not supported for now" - logging.info('Generating code suggestions for PR...') if get_settings().config.publish_output: self.git_provider.publish_comment("Preparing review...", is_temporary=True) - await retry_with_fallback_models(self._prepare_prediction) + logging.info('Preparing PR review...') - data = self._prepare_pr_code_suggestions() + if not self.is_extended: + await retry_with_fallback_models(self._prepare_prediction) + data = self._prepare_pr_code_suggestions() + else: + data = await retry_with_fallback_models(self._prepare_prediction_extended) + + if (not self.is_extended and get_settings().pr_code_suggestions.rank_suggestions) or \ + (self.is_extended and get_settings().pr_code_suggestions.rank_extended_suggestions): + logging.info('Ranking Suggestions...') + data['Code suggestions'] = await self.rank_suggestions(data['Code suggestions']) + if get_settings().config.publish_output: logging.info('Pushing PR review...') self.git_provider.remove_initial_comment() - logging.info('Pushing inline code comments...') + logging.info('Pushing inline code suggestions...') self.push_inline_code_suggestions(data) async def _prepare_prediction(self, model: str): @@ -81,14 +95,11 @@ class PRCodeSuggestions: return response - def _prepare_pr_code_suggestions(self) -> str: + def _prepare_pr_code_suggestions(self) -> Dict: review = self.prediction.strip() - try: - data = json.loads(review) - except json.decoder.JSONDecodeError: - if get_settings().config.verbosity_level >= 2: - logging.info(f"Could not parse json response: {review}") - data = try_fix_json(review, code_suggestions=True) + data = load_yaml(review) + if isinstance(data, list): + data = {'Code suggestions': data} return data def push_inline_code_suggestions(self, data): @@ -102,11 +113,8 @@ class PRCodeSuggestions: if get_settings().config.verbosity_level >= 2: 
logging.info(f"suggestion: {d}") relevant_file = d['relevant file'].strip() - relevant_lines_str = d['relevant lines'].strip() - if ',' in relevant_lines_str: # handling 'relevant lines': '181, 190' or '178-184, 188-194' - relevant_lines_str = relevant_lines_str.split(',')[0] - relevant_lines_start = int(relevant_lines_str.split('-')[0]) # absolute position - relevant_lines_end = int(relevant_lines_str.split('-')[-1]) + relevant_lines_start = int(d['relevant lines start']) # absolute position + relevant_lines_end = int(d['relevant lines end']) content = d['suggestion content'] new_code_snippet = d['improved code'] @@ -121,7 +129,11 @@ class PRCodeSuggestions: if get_settings().config.verbosity_level >= 2: logging.info(f"Could not parse suggestion: {d}") - self.git_provider.publish_code_suggestions(code_suggestions) + is_successful = self.git_provider.publish_code_suggestions(code_suggestions) + if not is_successful: + logging.info("Failed to publish code suggestions, trying to publish each suggestion separately") + for code_suggestion in code_suggestions: + self.git_provider.publish_code_suggestions([code_suggestion]) def dedent_code(self, relevant_file, relevant_lines_start, new_code_snippet): try: # dedent code snippet @@ -145,3 +157,81 @@ class PRCodeSuggestions: return new_code_snippet + async def _prepare_prediction_extended(self, model: str) -> dict: + logging.info('Getting PR diff...') + patches_diff_list = get_pr_multi_diffs(self.git_provider, self.token_handler, model, + max_calls=get_settings().pr_code_suggestions.max_number_of_calls) + + logging.info('Getting multi AI predictions...') + prediction_list = [] + for i, patches_diff in enumerate(patches_diff_list): + logging.info(f"Processing chunk {i + 1} of {len(patches_diff_list)}") + self.patches_diff = patches_diff + prediction = await self._get_prediction(model) + prediction_list.append(prediction) + self.prediction_list = prediction_list + + data = {} + for prediction in prediction_list: + 
self.prediction = prediction + data_per_chunk = self._prepare_pr_code_suggestions() + if "Code suggestions" in data: + data["Code suggestions"].extend(data_per_chunk["Code suggestions"]) + else: + data.update(data_per_chunk) + self.data = data + return data + + async def rank_suggestions(self, data: List) -> List: + """ + Call a model to rank (sort) code suggestions based on their importance order. + + Args: + data (List): A list of code suggestions to be ranked. + + Returns: + List: The ranked list of code suggestions. + """ + + suggestion_list = [] + # remove invalid suggestions + for i, suggestion in enumerate(data): + if suggestion['existing code'] != suggestion['improved code']: + suggestion_list.append(suggestion) + + data_sorted = [[]] * len(suggestion_list) + + try: + suggestion_str = "" + for i, suggestion in enumerate(suggestion_list): + suggestion_str += f"suggestion {i + 1}: " + str(suggestion) + '\n\n' + + variables = {'suggestion_list': suggestion_list, 'suggestion_str': suggestion_str} + model = get_settings().config.model + environment = Environment(undefined=StrictUndefined) + system_prompt = environment.from_string(get_settings().pr_sort_code_suggestions_prompt.system).render( + variables) + user_prompt = environment.from_string(get_settings().pr_sort_code_suggestions_prompt.user).render(variables) + if get_settings().config.verbosity_level >= 2: + logging.info(f"\nSystem prompt:\n{system_prompt}") + logging.info(f"\nUser prompt:\n{user_prompt}") + response, finish_reason = await self.ai_handler.chat_completion(model=model, system=system_prompt, + user=user_prompt) + + sort_order = load_yaml(response) + for s in sort_order['Sort Order']: + suggestion_number = s['suggestion number'] + importance_order = s['importance order'] + data_sorted[importance_order - 1] = suggestion_list[suggestion_number - 1] + + if get_settings().pr_code_suggestions.final_clip_factor != 1: + new_len = int(0.5 + len(data_sorted) * 
get_settings().pr_code_suggestions.final_clip_factor) + data_sorted = data_sorted[:new_len] + except Exception as e: + if get_settings().config.verbosity_level >= 1: + logging.info(f"Could not sort suggestions, error: {e}") + data_sorted = suggestion_list + + return data_sorted + + diff --git a/pr_agent/tools/pr_description.py b/pr_agent/tools/pr_description.py index d55dd55a..c45917f4 100644 --- a/pr_agent/tools/pr_description.py +++ b/pr_agent/tools/pr_description.py @@ -36,12 +36,14 @@ class PRDescription: self.vars = { "title": self.git_provider.pr.title, "branch": self.git_provider.get_pr_branch(), - "description": self.git_provider.get_pr_description(), + "description": self.git_provider.get_pr_description(full=False), "language": self.main_pr_language, "diff": "", # empty diff for initial calculation "extra_instructions": get_settings().pr_description.extra_instructions, "commit_messages_str": self.git_provider.get_commit_messages() } + + self.user_description = self.git_provider.get_user_description() # Initialize the token handler self.token_handler = TokenHandler( @@ -145,15 +147,12 @@ class PRDescription: # Load the AI prediction data into a dictionary data = load_yaml(self.prediction.strip()) + if get_settings().pr_description.add_original_user_description and self.user_description: + data["User Description"] = self.user_description + # Initialization pr_types = [] - # Iterate over the dictionary items and append the key and value to 'markdown_text' in a markdown format - markdown_text = "" - for key, value in data.items(): - markdown_text += f"## {key}\n\n" - markdown_text += f"{value}\n\n" - # If the 'PR Type' key is present in the dictionary, split its value by comma and assign it to 'pr_types' if 'PR Type' in data: if type(data['PR Type']) == list: @@ -161,13 +160,19 @@ class PRDescription: elif type(data['PR Type']) == str: pr_types = data['PR Type'].split(',') - # Assign the value of the 'PR Title' key to 'title' variable and remove it from the 
dictionary - title = data.pop('PR Title') + # Remove the 'PR Title' key from the dictionary + ai_title = data.pop('PR Title') + if get_settings().pr_description.keep_original_user_title: + # Assign the original PR title to the 'title' variable + title = self.vars["title"] + else: + # Assign the value of the 'PR Title' key to 'title' variable + title = ai_title # Iterate over the remaining dictionary items and append the key and value to 'pr_body' in a markdown format, # except for the items containing the word 'walkthrough' pr_body = "" - for key, value in data.items(): + for idx, (key, value) in enumerate(data.items()): pr_body += f"## {key}:\n" if 'walkthrough' in key.lower(): # for filename, description in value.items(): @@ -179,7 +184,11 @@ class PRDescription: # if the value is a list, join its items by comma if type(value) == list: value = ', '.join(v for v in value) - pr_body += f"{value}\n\n___\n" + pr_body += f"{value}\n" + if idx < len(data) - 1: + pr_body += "\n___\n" + + markdown_text = f"## Title\n\n{title}\n\n___\n{pr_body}" if get_settings().config.verbosity_level >= 2: logging.info(f"title:\n{title}\n{pr_body}") diff --git a/pr_agent/tools/pr_reviewer.py b/pr_agent/tools/pr_reviewer.py index fd6479ae..a89c27a3 100644 --- a/pr_agent/tools/pr_reviewer.py +++ b/pr_agent/tools/pr_reviewer.py @@ -23,7 +23,7 @@ class PRReviewer: """ The PRReviewer class is responsible for reviewing a pull request and generating feedback using an AI model. """ - def __init__(self, pr_url: str, is_answer: bool = False, args: list = None): + def __init__(self, pr_url: str, is_answer: bool = False, is_auto: bool = False, args: list = None): """ Initialize the PRReviewer object with the necessary attributes and objects to review a pull request. 
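The description-builder change above stops emitting a trailing `___` rule: a separator is now added only *between* sections, never after the last one. The pattern in isolation (the section contents are hypothetical):

```python
# Hypothetical PR-description sections
data = {
    "PR Type": "Bug fix",
    "PR Description": "Fixes a crash when the diff is empty",
    "PR Main Files Walkthrough": "`foo.py`: guards against an empty diff",
}

pr_body = ""
for idx, (key, value) in enumerate(data.items()):
    pr_body += f"## {key}:\n{value}\n"
    if idx < len(data) - 1:  # separator between sections, not after the last
        pr_body += "\n___\n"

print(pr_body)
```

This avoids a dangling horizontal rule at the bottom of the rendered PR description.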
@@ -40,6 +40,7 @@ class PRReviewer: ) self.pr_url = pr_url self.is_answer = is_answer + self.is_auto = is_auto if self.is_answer and not self.git_provider.is_supported("get_issue_comments"): raise Exception(f"Answer mode is not supported for {get_settings().config.git_provider} for now") @@ -93,8 +94,12 @@ class PRReviewer: """ Review the pull request and generate feedback. """ - logging.info('Reviewing PR...') - + if self.is_auto and not get_settings().pr_reviewer.automatic_review: + logging.info(f'Automatic review is disabled {self.pr_url}') + return None + + logging.info(f'Reviewing PR: {self.pr_url} ...') + if get_settings().config.publish_output: self.git_provider.publish_comment("Preparing review...", is_temporary=True) diff --git a/pyproject.toml b/pyproject.toml index 2e8f2b5c..0e1289f4 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -26,25 +26,10 @@ classifiers = [ "Operating System :: Independent", "Programming Language :: Python :: 3", ] +dynamic = ["dependencies"] -dependencies = [ - "dynaconf==3.1.12", - "fastapi==0.99.0", - "PyGithub==1.59.*", - "retry==0.9.2", - "openai==0.27.8", - "Jinja2==3.1.2", - "tiktoken==0.4.0", - "uvicorn==0.22.0", - "python-gitlab==3.15.0", - "pytest~=7.4.0", - "aiohttp~=3.8.4", - "atlassian-python-api==3.39.0", - "GitPython~=3.1.32", - "starlette-context==0.3.6", - "litellm~=0.1.351", - "PyYAML==6.0" -] +[tool.setuptools.dynamic] +dependencies = {file = ["requirements.txt"]} [project.urls] "Homepage" = "https://github.com/Codium-ai/pr-agent" diff --git a/requirements.txt b/requirements.txt index ebea2b71..99efa846 100644 --- a/requirements.txt +++ b/requirements.txt @@ -11,7 +11,11 @@ pytest~=7.4.0 aiohttp~=3.8.4 atlassian-python-api==3.39.0 GitPython~=3.1.32 -litellm~=0.1.351 PyYAML==6.0 starlette-context==0.3.6 -litellm~=0.1.351 \ No newline at end of file +litellm~=0.1.504 +boto3~=1.28.25 +google-cloud-storage==2.10.0 +ujson==5.8.0 +azure-devops==7.1.0b3 +msrest==0.7.1 \ No newline at end of file diff --git 
a/tests/unittest/test_codecommit_client.py b/tests/unittest/test_codecommit_client.py new file mode 100644 index 00000000..0aa1ffa6 --- /dev/null +++ b/tests/unittest/test_codecommit_client.py @@ -0,0 +1,136 @@ +from unittest.mock import MagicMock +from pr_agent.git_providers.codecommit_client import CodeCommitClient + + +class TestCodeCommitProvider: + def test_get_differences(self): + # Create a mock CodeCommitClient instance and codecommit_client member + api = CodeCommitClient() + api.boto_client = MagicMock() + + # Mock the response from the AWS client for get_differences method + api.boto_client.get_paginator.return_value.paginate.return_value = [ + { + "differences": [ + { + "beforeBlob": { + "path": "file1.py", + "blobId": "291b15c3ab4219e43a5f4f9091e5a97ee9d7400b", + }, + "afterBlob": { + "path": "file1.py", + "blobId": "46ad86582da03cc34c804c24b17976571bca1eba", + }, + "changeType": "M", + }, + { + "beforeBlob": {"path": "", "blobId": ""}, + "afterBlob": { + "path": "file2.py", + "blobId": "2404c7874fcbd684d6779c1420072f088647fd79", + }, + "changeType": "A", + }, + { + "beforeBlob": { + "path": "file3.py", + "blobId": "9af7989045ce40e9478ebb8089dfbadac19a9cde", + }, + "afterBlob": {"path": "", "blobId": ""}, + "changeType": "D", + }, + { + "beforeBlob": { + "path": "file5.py", + "blobId": "738e36eec120ef9d6393a149252698f49156d5b4", + }, + "afterBlob": { + "path": "file6.py", + "blobId": "faecdb85f7ba199df927a783b261378a1baeca85", + }, + "changeType": "R", + }, + ] + } + ] + + diffs = api.get_differences("my_test_repo", "commit1", "commit2") + + assert len(diffs) == 4 + assert diffs[0].before_blob_path == "file1.py" + assert diffs[0].before_blob_id == "291b15c3ab4219e43a5f4f9091e5a97ee9d7400b" + assert diffs[0].after_blob_path == "file1.py" + assert diffs[0].after_blob_id == "46ad86582da03cc34c804c24b17976571bca1eba" + assert diffs[0].change_type == "M" + assert diffs[1].before_blob_path == "" + assert diffs[1].before_blob_id == "" + assert 
diffs[1].after_blob_path == "file2.py" + assert diffs[1].after_blob_id == "2404c7874fcbd684d6779c1420072f088647fd79" + assert diffs[1].change_type == "A" + assert diffs[2].before_blob_path == "file3.py" + assert diffs[2].before_blob_id == "9af7989045ce40e9478ebb8089dfbadac19a9cde" + assert diffs[2].after_blob_path == "" + assert diffs[2].after_blob_id == "" + assert diffs[2].change_type == "D" + assert diffs[3].before_blob_path == "file5.py" + assert diffs[3].before_blob_id == "738e36eec120ef9d6393a149252698f49156d5b4" + assert diffs[3].after_blob_path == "file6.py" + assert diffs[3].after_blob_id == "faecdb85f7ba199df927a783b261378a1baeca85" + assert diffs[3].change_type == "R" + + def test_get_file(self): + # Create a mock CodeCommitClient instance and codecommit_client member + api = CodeCommitClient() + api.boto_client = MagicMock() + + # Mock the response from the AWS client for get_pull_request method + # def get_file(self, repo_name: str, file_path: str, sha_hash: str): + api.boto_client.get_file.return_value = { + "commitId": "6335d6d4496e8d50af559560997604bb03abc122", + "blobId": "c172209495d7968a8fdad76469564fb708460bc1", + "filePath": "requirements.txt", + "fileSize": 65, + "fileContent": b"boto3==1.28.25\ndynaconf==3.1.12\nfastapi==0.99.0\nPyGithub==1.59.*\n", + } + + repo_name = "my_test_repo" + file_path = "requirements.txt" + sha_hash = "84114a356ece1e5b7637213c8e486fea7c254656" + content = api.get_file(repo_name, file_path, sha_hash) + + assert len(content) == 65 + assert content == b"boto3==1.28.25\ndynaconf==3.1.12\nfastapi==0.99.0\nPyGithub==1.59.*\n" + assert content.decode("utf-8") == "boto3==1.28.25\ndynaconf==3.1.12\nfastapi==0.99.0\nPyGithub==1.59.*\n" + + def test_get_pr(self): + # Create a mock CodeCommitClient instance and codecommit_client member + api = CodeCommitClient() + api.boto_client = MagicMock() + + # Mock the response from the AWS client for get_pull_request method + api.boto_client.get_pull_request.return_value = { + 
"pullRequest": { + "pullRequestId": "3", + "title": "My PR", + "description": "My PR description", + "pullRequestTargets": [ + { + "sourceCommit": "commit1", + "sourceReference": "branch1", + "destinationCommit": "commit2", + "destinationReference": "branch2", + "repositoryName": "my_test_repo", + } + ], + } + } + + pr = api.get_pr("my_test_repo", 321) + + assert pr.title == "My PR" + assert pr.description == "My PR description" + assert len(pr.targets) == 1 + assert pr.targets[0].source_commit == "commit1" + assert pr.targets[0].source_branch == "branch1" + assert pr.targets[0].destination_commit == "commit2" + assert pr.targets[0].destination_branch == "branch2" diff --git a/tests/unittest/test_codecommit_provider.py b/tests/unittest/test_codecommit_provider.py new file mode 100644 index 00000000..9de7c45c --- /dev/null +++ b/tests/unittest/test_codecommit_provider.py @@ -0,0 +1,172 @@ +import pytest +from pr_agent.git_providers.codecommit_provider import CodeCommitFile +from pr_agent.git_providers.codecommit_provider import CodeCommitProvider +from pr_agent.git_providers.git_provider import EDIT_TYPE + + +class TestCodeCommitFile: + # Test that a CodeCommitFile object is created successfully with valid parameters. 
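The CodeCommit client tests above rely on `MagicMock`'s attribute chaining to stub boto3's paginator without touching AWS: setting `get_paginator.return_value.paginate.return_value` makes the entire call chain yield canned pages. The pattern in isolation (the page contents here are hypothetical):

```python
from unittest.mock import MagicMock

client = MagicMock()
# Any client.get_paginator(...).paginate(...) call now yields these canned pages
client.get_paginator.return_value.paginate.return_value = [
    {"differences": [{"changeType": "M"}]},
    {"differences": [{"changeType": "A"}, {"changeType": "D"}]},
]

# Consume the pages exactly as production code would
change_types = [
    d["changeType"]
    for page in client.get_paginator("get_differences").paginate(repositoryName="repo")
    for d in page["differences"]
]
print(change_types)
```

Because `MagicMock` auto-creates attributes, the arguments passed to `get_paginator` and `paginate` are recorded but otherwise ignored, which keeps the fixture small.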
+ # Generated by CodiumAI + def test_valid_parameters(self): + a_path = "path/to/file_a" + a_blob_id = "12345" + b_path = "path/to/file_b" + b_blob_id = "67890" + edit_type = EDIT_TYPE.ADDED + + file = CodeCommitFile(a_path, a_blob_id, b_path, b_blob_id, edit_type) + + assert file.a_path == a_path + assert file.a_blob_id == a_blob_id + assert file.b_path == b_path + assert file.b_blob_id == b_blob_id + assert file.edit_type == edit_type + assert file.filename == b_path + + +class TestCodeCommitProvider: + def test_parse_pr_url(self): + # Test that the _parse_pr_url() function can extract the repo name and PR number from a CodeCommit URL + url = "https://us-east-1.console.aws.amazon.com/codesuite/codecommit/repositories/my_test_repo/pull-requests/321" + repo_name, pr_number = CodeCommitProvider._parse_pr_url(url) + assert repo_name == "my_test_repo" + assert pr_number == 321 + + def test_is_valid_codecommit_hostname(self): + # Test the various AWS regions + assert CodeCommitProvider._is_valid_codecommit_hostname("af-south-1.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("ap-east-1.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("ap-northeast-1.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("ap-northeast-2.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("ap-northeast-3.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("ap-south-1.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("ap-south-2.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("ap-southeast-1.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("ap-southeast-2.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("ap-southeast-3.console.aws.amazon.com") + assert 
CodeCommitProvider._is_valid_codecommit_hostname("ap-southeast-4.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("ca-central-1.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("eu-central-1.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("eu-central-2.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("eu-north-1.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("eu-south-1.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("eu-south-2.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("eu-west-1.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("eu-west-2.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("eu-west-3.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("il-central-1.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("me-central-1.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("me-south-1.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("sa-east-1.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("us-east-1.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("us-east-2.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("us-gov-east-1.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("us-gov-west-1.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("us-west-1.console.aws.amazon.com") + assert CodeCommitProvider._is_valid_codecommit_hostname("us-west-2.console.aws.amazon.com") + # Test non-AWS regions + assert not 
CodeCommitProvider._is_valid_codecommit_hostname("no-such-region.console.aws.amazon.com") + assert not CodeCommitProvider._is_valid_codecommit_hostname("console.aws.amazon.com") + + # Test that an error is raised when an invalid CodeCommit URL is provided to the set_pr() method of the CodeCommitProvider class. + # Generated by CodiumAI + def test_invalid_codecommit_url(self): + provider = CodeCommitProvider() + with pytest.raises(ValueError): + provider.set_pr("https://example.com/codecommit/repositories/my_test_repo/pull-requests/4321") + + def test_get_file_extensions(self): + filenames = [ + "app.py", + "cli.py", + "composer.json", + "composer.lock", + "hello.py", + "image1.jpg", + "image2.JPG", + "index.js", + "provider.py", + "README", + "test.py", + ] + expected_extensions = [ + ".py", + ".py", + ".json", + ".lock", + ".py", + ".jpg", + ".jpg", + ".js", + ".py", + "", + ".py", + ] + extensions = CodeCommitProvider._get_file_extensions(filenames) + assert extensions == expected_extensions + + def test_get_language_percentages(self): + extensions = [ + ".py", + ".py", + ".json", + ".lock", + ".py", + ".jpg", + ".jpg", + ".js", + ".py", + "", + ".py", + ] + percentages = CodeCommitProvider._get_language_percentages(extensions) + assert percentages[".py"] == 45 + assert percentages[".json"] == 9 + assert percentages[".lock"] == 9 + assert percentages[".jpg"] == 18 + assert percentages[".js"] == 9 + assert percentages[""] == 9 + + # The _get_file_extensions function needs the "." prefix on the extension, + # but the _get_language_percentages function will work with or without the "." 
prefix + extensions = [ + "txt", + "py", + "py", + ] + percentages = CodeCommitProvider._get_language_percentages(extensions) + assert percentages["py"] == 67 + assert percentages["txt"] == 33 + + # test an empty list + percentages = CodeCommitProvider._get_language_percentages([]) + assert percentages == {} + + def test_get_edit_type(self): + # Test that the _get_edit_type() function can convert a CodeCommit letter to an EDIT_TYPE enum + assert CodeCommitProvider._get_edit_type("A") == EDIT_TYPE.ADDED + assert CodeCommitProvider._get_edit_type("D") == EDIT_TYPE.DELETED + assert CodeCommitProvider._get_edit_type("M") == EDIT_TYPE.MODIFIED + assert CodeCommitProvider._get_edit_type("R") == EDIT_TYPE.RENAMED + + assert CodeCommitProvider._get_edit_type("a") == EDIT_TYPE.ADDED + assert CodeCommitProvider._get_edit_type("d") == EDIT_TYPE.DELETED + assert CodeCommitProvider._get_edit_type("m") == EDIT_TYPE.MODIFIED + assert CodeCommitProvider._get_edit_type("r") == EDIT_TYPE.RENAMED + + assert CodeCommitProvider._get_edit_type("X") is None + + def test_add_additional_newlines(self): + # a short string to test adding double newlines + input = "abc\ndef\n\n___\nghi\njkl\nmno\n\npqr\n" + expect = "abc\n\ndef\n\n___\n\nghi\n\njkl\n\nmno\n\npqr\n\n" + assert CodeCommitProvider._add_additional_newlines(input) == expect + # a test example from a real PR + input = "## PR Type:\nEnhancement\n\n___\n## PR Description:\nThis PR introduces a new feature to the script, allowing users to filter servers by name.\n\n___\n## PR Main Files Walkthrough:\n`foo`: The foo script has been updated to include a new command line option `-f` or `--filter`.\n`bar`: The bar script has been updated to list stopped servers.\n" + expect = "## PR Type:\n\nEnhancement\n\n___\n\n## PR Description:\n\nThis PR introduces a new feature to the script, allowing users to filter servers by name.\n\n___\n\n## PR Main Files Walkthrough:\n\n`foo`: The foo script has been updated to include a new command line 
option `-f` or `--filter`.\n\n`bar`: The bar script has been updated to list stopped servers.\n\n" + assert CodeCommitProvider._add_additional_newlines(input) == expect + + def test_remove_markdown_html(self): + input = "## PR Feedback\n
Code feedback:\nfile foo\n\n" + expect = "## PR Feedback\nCode feedback:\nfile foo\n\n" + assert CodeCommitProvider._remove_markdown_html(input) == expect \ No newline at end of file diff --git a/tests/unittest/test_convert_to_markdown.py b/tests/unittest/test_convert_to_markdown.py index 4463513f..bb6f2268 100644 --- a/tests/unittest/test_convert_to_markdown.py +++ b/tests/unittest/test_convert_to_markdown.py @@ -67,33 +67,11 @@ class TestConvertToMarkdown: ] } expected_output = """\ -- ๐ŸŽฏ **Main theme:** Test -- ๐Ÿ“Œ **Type of PR:** Test type -- ๐Ÿงช **Relevant tests added:** no -- โœจ **Focused PR:** Yes -- ๐Ÿ’ก **General PR suggestions:** general suggestion... - -- ๐Ÿค– **Code feedback:** - - - **Code example:** - - **Before:** - ``` - Code before - ``` - - **After:** - ``` - Code after - ``` - - - **Code example:** - - **Before:** - ``` - Code before 2 - ``` - - **After:** - ``` - Code after 2 - ``` +- ๐ŸŽฏ **Main theme:** Test\n\ +- ๐Ÿ“Œ **Type of PR:** Test type\n\ +- ๐Ÿงช **Relevant tests added:** no\n\ +- โœจ **Focused PR:** Yes\n\ +- **General PR suggestions:** general suggestion...\n\n\n- **
๐Ÿค– Code feedback:**\n\n - **Code example:**\n - **Before:**\n ```\n Code before\n ```\n - **After:**\n ```\n Code after\n ```\n\n - **Code example:**\n - **Before:**\n ```\n Code before 2\n ```\n - **After:**\n ```\n Code after 2\n ```\n\n
\ """ assert convert_to_markdown(input_data).strip() == expected_output.strip() @@ -113,5 +91,5 @@ class TestConvertToMarkdown: 'General PR suggestions': {}, 'Code suggestions': {} } - expected_output = "" + expected_output = '' assert convert_to_markdown(input_data).strip() == expected_output.strip()
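The `_get_language_percentages` expectations exercised in the provider tests above (e.g. two `py` files and one `txt` file yielding 67% and 33%) are consistent with a simple rounded frequency count. One plausible sketch, not the repo's actual implementation:

```python
from collections import Counter

def language_percentages(extensions):
    # Percentage of files per extension, rounded to the nearest integer
    if not extensions:
        return {}
    counts = Counter(extensions)
    return {ext: round(n / len(extensions) * 100) for ext, n in counts.items()}

print(language_percentages(["py", "py", "txt"]))
```

Note that the rounded percentages need not sum to exactly 100, which matches the 45/9/9/18/9/9 split asserted in the tests.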