Mirror of https://github.com/qodo-ai/pr-agent.git, synced 2025-07-07 14:20:37 +08:00

Compare commits: 25 commits (zmeir-publ ... ok/update_)

| SHA1 |
| --- |
| 3b27c834a4 |
| 5bc2ef1eff |
| 2f558006bf |
| 370520df51 |
| 2e832b8fb4 |
| a47fa342cb |
| bdf7eff7cd |
| dc67e6a66e |
| 77f243b7ab |
| c507785475 |
| 5c5015b267 |
| 3efe08d619 |
| 2e36fce4eb |
| d6d4427545 |
| 5d45632247 |
| 90c045e3d0 |
| 7f0a96d8f7 |
| 8fb9affef3 |
| 6c42a471e1 |
| f2b74b6970 |
| ffd11aeffc |
| e5a8ed205e |
| 90f97b0226 |
| 978348240b |
| 4d92e7d9c2 |

45 INSTALL.md
@@ -149,16 +149,35 @@ git clone https://github.com/Codium-ai/pr-agent.git
```

5. Copy the secrets template file and fill in the following:
```
cp pr_agent/settings/.secrets_template.toml pr_agent/settings/.secrets.toml
# Edit .secrets.toml file
```
- Your OpenAI key.
- Set deployment_type to 'app'
- Copy your app's private key to the private_key field.
- Copy your app's ID to the app_id field.
- Copy your app's webhook secret to the webhook_secret field.
- Set deployment_type to 'app' in [configuration.toml](./pr_agent/settings/configuration.toml)

> The .secrets.toml file is not copied to the Docker image by default, and is only used for local development.
> If you want to use the .secrets.toml file in your Docker image, you can remove it from the .dockerignore file.
> In most production environments, you would inject the secrets file as environment variables or as mounted volumes.
> For example, in order to inject a secrets file as a volume in a Kubernetes environment, you can update your pod spec to include the following,
> assuming you have a secret named `pr-agent-settings` with a key named `.secrets.toml`:
```
volumes:
 - name: settings-volume
   secret:
     secretName: pr-agent-settings
# ...
containers:
# ...
  volumeMounts:
  - mountPath: /app/pr_agent/settings_prod
    name: settings-volume
```
cp pr_agent/settings/.secrets_template.toml pr_agent/settings/.secrets.toml
# Edit .secrets.toml file
```
> Another option is to set the secrets as environment variables in your deployment environment, for example `OPENAI.KEY` and `GITHUB.USER_TOKEN`.
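
If you go the environment-variable route, a quick startup check can confirm the secrets actually reach the app. This is a minimal sketch, not part of the project; the key names mirror the examples in the note above and should be adjusted to whatever your deployment actually needs:

```python
# Minimal sanity-check sketch (not part of the project): verify that secrets
# injected via environment variables or a mounted .secrets.toml are visible
# to the app's settings object. The key names below are assumptions taken
# from the examples in this guide -- adjust them to your deployment.
from pr_agent.config_loader import settings

for key in ("OPENAI.KEY", "GITHUB.USER_TOKEN"):
    if not settings.get(key):
        raise RuntimeError(f"Missing required setting: {key}")
```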

6. Build a Docker image for the app and optionally push it to a Docker repository. We'll use Docker Hub as an example:

@@ -169,6 +188,7 @@ docker push codiumai/pr-agent:github_app # Push to your Docker repository

7. Host the app using a server, serverless function, or container environment. Alternatively, for development and
   debugging, you may use tools like smee.io to forward webhooks to your local machine.
   You can check the [Deploy as a Lambda Function](#deploy-as-a-lambda-function) section below.

8. Go back to your app's settings, and set the following:

@@ -178,3 +198,20 @@ docker push codiumai/pr-agent:github_app # Push to your Docker repository
9. Install the app by navigating to the "Install App" tab and selecting your desired repositories.

---

#### Deploy as a Lambda Function

1. Follow steps 1-5 of [Method 5](#method-5-run-as-a-github-app).
2. Build a Docker image that can be used as a Lambda function:
```shell
docker buildx build --platform=linux/amd64 . -t codiumai/pr-agent:serverless -f docker/Dockerfile.lambda
```
3. Push the image to ECR:
```shell
docker tag codiumai/pr-agent:serverless <AWS_ACCOUNT>.dkr.ecr.<AWS_REGION>.amazonaws.com/codiumai/pr-agent:serverless
docker push <AWS_ACCOUNT>.dkr.ecr.<AWS_REGION>.amazonaws.com/codiumai/pr-agent:serverless
```
4. Create a Lambda function that uses the uploaded image. Set the Lambda timeout to at least 3 minutes.
5. Configure the Lambda function to have a Function URL.
6. Go back to steps 8-9 of [Method 5](#method-5-run-as-a-github-app) with the Function URL as your Webhook URL.
The Webhook URL would look like `https://<LAMBDA_FUNCTION_URL>/api/v1/github_webhooks`
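
Once the Function URL is wired up, a rough way to confirm routing (a hypothetical check, not from the project docs) is to hit the webhook path and make sure you get an HTTP response back rather than a timeout; an unauthenticated request is expected to be rejected:

```python
# Hedged smoke test: replace the placeholder with your real Function URL.
# An unauthenticated POST should come back with an HTTP error response,
# which at least confirms the Function URL routes to the app.
import requests

resp = requests.post("https://<LAMBDA_FUNCTION_URL>/api/v1/github_webhooks", json={})
print(resp.status_code)
```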

12 README.md

@@ -24,25 +24,25 @@ CodiumAI `PR-Agent` is an open-source tool aiming to help developers review PRs

<h3>Example results:</h3>
</div>
<h4>Describe:</h4>
<h4>/describe:</h4>
<div align="center">
<p float="center">
<img src="https://codium.ai/images/describe.gif" width="800">
</p>
</div>
<h4>Review:</h4>
<h4>/review:</h4>
<div align="center">
<p float="center">
<img src="https://codium.ai/images/review.gif" width="800">
</p>
</div>
<h4>Ask:</h4>
<h4>/ask:</h4>
<div align="center">
<p float="center">
<img src="https://codium.ai/images/ask.gif" width="800">
</p>
</div>
<h4>Improve:</h4>
<h4>/improve:</h4>
<div align="center">
<p float="center">
<img src="https://codium.ai/images/improve.gif" width="800">
@@ -76,7 +76,7 @@ To set up your own PR-Agent, see the [Quickstart](#Quickstart) section
| TOOLS | Review | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| | ⮑ Inline review | :white_check_mark: | :white_check_mark: | |
| | Ask | :white_check_mark: | :white_check_mark: | |
| | Auto-Description | :white_check_mark: | | |
| | Auto-Description | :white_check_mark: | :white_check_mark: | |
| | Improve Code | :white_check_mark: | :white_check_mark: | |
| | Reflect and Review | :white_check_mark: | | |
| | | | | |
@@ -132,7 +132,7 @@ Here are several ways to install and run PR-Agent:

## How it works




Check out the [PR Compression strategy](./PR_COMPRESSION.md) page for more details on how we convert a code diff to a manageable LLM prompt.

12 docker/Dockerfile.lambda (new file)

@@ -0,0 +1,12 @@
FROM public.ecr.aws/lambda/python:3.10

RUN yum update -y && \
    yum install -y gcc python3-devel && \
    yum clean all

ADD requirements.txt .
RUN pip install -r requirements.txt && rm requirements.txt
RUN pip install mangum==0.16.0
COPY pr_agent/ ${LAMBDA_TASK_ROOT}/pr_agent/

CMD ["pr_agent.servers.serverless.serverless"]

@@ -1,11 +1,11 @@
import re

from pr_agent.config_loader import settings
from pr_agent.tools.pr_code_suggestions import PRCodeSuggestions
from pr_agent.tools.pr_description import PRDescription
from pr_agent.tools.pr_information_from_user import PRInformationFromUser
from pr_agent.tools.pr_questions import PRQuestions
from pr_agent.tools.pr_reviewer import PRReviewer
from pr_agent.config_loader import settings


class PRAgent:
@@ -158,7 +158,7 @@ def convert_to_hunks_with_lines_numbers(patch: str, file) -> str:
patch_with_lines_str += f"{start2 + i} {line_new}\n"
if old_content_lines:
patch_with_lines_str += '--old hunk--\n'
for i, line_old in enumerate(old_content_lines):
for line_old in old_content_lines:
patch_with_lines_str += f"{line_old}\n"
new_content_lines = []
old_content_lines = []
@@ -179,7 +179,7 @@ def convert_to_hunks_with_lines_numbers(patch: str, file) -> str:
patch_with_lines_str += f"{start2 + i} {line_new}\n"
if old_content_lines:
patch_with_lines_str += '\n--old hunk--\n'
for i, line_old in enumerate(old_content_lines):
for line_old in old_content_lines:
patch_with_lines_str += f"{line_old}\n"

return patch_with_lines_str.strip()
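
The two hunks above only drop an unused `enumerate` index; the generated prompt format is unchanged. As a self-contained illustration of that format (a sketch, not the project's function; the `--new hunk--` marker is assumed by symmetry with the `--old hunk--` marker shown):

```python
# Standalone sketch of the hunk format produced above: new lines are prefixed
# with absolute line numbers, old lines are listed without numbers.
def render_hunk(start2: int, new_content_lines: list[str], old_content_lines: list[str]) -> str:
    patch_with_lines_str = '--new hunk--\n'
    for i, line_new in enumerate(new_content_lines):
        patch_with_lines_str += f"{start2 + i} {line_new}\n"
    if old_content_lines:
        patch_with_lines_str += '--old hunk--\n'
        for line_old in old_content_lines:  # the index is not needed here
            patch_with_lines_str += f"{line_old}\n"
    return patch_with_lines_str.strip()

print(render_hunk(12, ["def foo():", "    return 1"], ["def foo():", "    return 0"]))
```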
@@ -64,7 +64,11 @@ bad_extensions = [


def filter_bad_extensions(files):
    return [f for f in files if f.filename.split('.')[-1] not in bad_extensions]
    return [f for f in files if is_valid_file(f.filename)]


def is_valid_file(filename):
    return filename.split('.')[-1] not in bad_extensions


def sort_files_by_main_languages(languages: Dict, files: list):
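
The refactor above extracts the extension check into `is_valid_file` so the git providers can reuse it (see the GithubProvider and GitLabProvider hunks later in this diff). A small usage sketch with a made-up `bad_extensions` list:

```python
# Illustration only: bad_extensions here is a made-up stand-in for the
# project's real list; the check simply looks at the final extension.
bad_extensions = ["png", "jpg", "zip"]

def is_valid_file(filename: str) -> bool:
    return filename.split('.')[-1] not in bad_extensions

print(is_valid_file("pr_agent/cli.py"))  # True  -> "py" is not a bad extension
print(is_valid_file("docs/logo.png"))    # False -> "png" is filtered out
```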
@@ -4,8 +4,7 @@ import difflib
import logging
from typing import Any, Tuple, Union

from pr_agent.algo.git_patch_processing import extend_patch, handle_patch_deletions, \
    convert_to_hunks_with_lines_numbers
from pr_agent.algo.git_patch_processing import convert_to_hunks_with_lines_numbers, extend_patch, handle_patch_deletions
from pr_agent.algo.language_handler import sort_files_by_main_languages
from pr_agent.algo.token_handler import TokenHandler
from pr_agent.config_loader import settings
@@ -30,10 +29,10 @@ def get_pr_diff(git_provider: Union[GithubProvider, Any], token_handler: TokenHa
global PATCH_EXTRA_LINES
PATCH_EXTRA_LINES = 0

git_provider.pr.diff_files = list(git_provider.get_diff_files())
diff_files = list(git_provider.get_diff_files())

# get pr languages
pr_languages = sort_files_by_main_languages(git_provider.get_languages(), git_provider.pr.diff_files)
pr_languages = sort_files_by_main_languages(git_provider.get_languages(), diff_files)

# generate a standard diff string, with patch extension
patches_extended, total_tokens = pr_generate_extended_diff(pr_languages, token_handler,
@@ -17,6 +17,7 @@ def convert_to_markdown(output_data: dict) -> str:
"Focused PR": "✨",
"Security concerns": "🔒",
"General PR suggestions": "💡",
"Insights from user's answers": "📝",
"Code suggestions": "🤖"
}

@@ -89,8 +90,8 @@ def try_fix_json(review, max_iter=10, code_suggestions=False):
data = {}
return data


def fix_json_escape_char(json_message=None):
result = None
try:
result = json.loads(json_message)
except Exception as e:
@@ -100,5 +101,5 @@ def fix_json_escape_char(json_message=None):
json_message = list(json_message)
json_message[idx_to_replace] = ' '
new_message = ''.join(json_message)
return fix_JSON(json_message=new_message)
return result
return fix_json_escape_char(json_message=new_message)
return result
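
The fix above makes the retry recursive: the call now targets `fix_json_escape_char` itself rather than `fix_JSON`. A self-contained sketch of the same blank-and-retry idea, assuming the offending index comes from the decoder's reported error position (the code that actually computes `idx_to_replace` is not shown in this hunk):

```python
import json

def fix_json_escape_char(json_message: str, max_depth: int = 50):
    """Sketch of the retry loop above: blank out the character at the reported
    parse-error position and try again. The depth guard is an addition that is
    not in the original code."""
    if max_depth == 0:
        return None
    try:
        return json.loads(json_message)
    except json.JSONDecodeError as e:
        idx_to_replace = e.pos  # assumed stand-in for how the original finds the index
        if idx_to_replace >= len(json_message):
            return None
        chars = list(json_message)
        chars[idx_to_replace] = ' '
        return fix_json_escape_char(''.join(chars), max_depth - 1)

print(fix_json_escape_char('{"a": 1}x'))  # trailing garbage is blanked out -> {'a': 1}
```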
@@ -10,7 +10,7 @@ from pr_agent.tools.pr_questions import PRQuestions
from pr_agent.tools.pr_reviewer import PRReviewer


def run():
def run(args=None):
parser = argparse.ArgumentParser(description='AI based pull request analyzer', usage="""\
Usage: cli.py --pr-url <URL on supported git hosting service> <command> [<args>].
For example:
@@ -35,7 +35,7 @@ reflect - Ask the PR author questions about the PR.
'reflect', 'review_after_reflect'],
default='review')
parser.add_argument('rest', nargs=argparse.REMAINDER, default=[])
args = parser.parse_args()
args = parser.parse_args(args)
logging.basicConfig(level=os.environ.get("LOGLEVEL", "INFO"))
command = args.command.lower()
if command in ['ask', 'ask_question']:
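
With `run(args=None)` and `parser.parse_args(args)`, the CLI can be invoked programmatically (for example from a test) instead of only via `sys.argv`. A hedged usage sketch, assuming the module path `pr_agent.cli`; the flag name follows the usage string shown above and the PR URL is a placeholder:

```python
# Programmatic invocation sketch: the PR URL is a placeholder, not a real PR,
# and the flag spelling mirrors the usage text in the hunk above.
from pr_agent.cli import run

run(["--pr-url", "https://github.com/Codium-ai/pr-agent/pull/1", "review"])
```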
@@ -1,7 +1,7 @@
from pr_agent.config_loader import settings
from pr_agent.git_providers.bitbucket_provider import BitbucketProvider
from pr_agent.git_providers.github_provider import GithubProvider
from pr_agent.git_providers.gitlab_provider import GitLabProvider
from pr_agent.git_providers.bitbucket_provider import BitbucketProvider

_GIT_PROVIDERS = {
'github': GithubProvider,
@@ -1,5 +1,4 @@
import logging
from datetime import datetime
from typing import Optional, Tuple
from urllib.parse import urlparse

@@ -10,6 +9,7 @@ from pr_agent.config_loader import settings

from .git_provider import FilePatchInfo


class BitbucketProvider:
def __init__(self, pr_url: Optional[str] = None):
s = requests.Session()
@@ -45,7 +45,8 @@ class BitbucketProvider:
for index, diff in enumerate(diffs):
original_file_content_str = self._get_pr_file_content(diff.old.get_data('links'))
new_file_content_str = self._get_pr_file_content(diff.new.get_data('links'))
diff_files.append(FilePatchInfo(original_file_content_str, new_file_content_str, diff_split[index], diff.new.path))
diff_files.append(FilePatchInfo(original_file_content_str, new_file_content_str,
diff_split[index], diff.new.path))
return diff_files

def publish_comment(self, pr_comment: str, is_temporary: bool = False):
@@ -3,12 +3,15 @@ from dataclasses import dataclass

# enum EDIT_TYPE (ADDED, DELETED, MODIFIED, RENAMED)
from enum import Enum


class EDIT_TYPE(Enum):
ADDED = 1
DELETED = 2
MODIFIED = 3
RENAMED = 4


@dataclass
class FilePatchInfo:
base_file: str
@@ -8,6 +8,7 @@ from github import AppAuthentication, Github, Auth
from pr_agent.config_loader import settings

from .git_provider import FilePatchInfo, GitProvider
from ..algo.language_handler import is_valid_file


class GithubProvider(GitProvider):
@@ -37,9 +38,10 @@ class GithubProvider(GitProvider):
files = self.pr.get_files()
diff_files = []
for file in files:
original_file_content_str = self._get_pr_file_content(file, self.pr.base.sha)
new_file_content_str = self._get_pr_file_content(file, self.pr.head.sha)
diff_files.append(FilePatchInfo(original_file_content_str, new_file_content_str, file.patch, file.filename))
if is_valid_file(file.filename):
original_file_content_str = self._get_pr_file_content(file, self.pr.base.sha)
new_file_content_str = self._get_pr_file_content(file, self.pr.head.sha)
diff_files.append(FilePatchInfo(original_file_content_str, new_file_content_str, file.patch, file.filename))
self.diff_files = diff_files
return diff_files

@@ -70,7 +72,7 @@ class GithubProvider(GitProvider):
if relevant_line_in_file in line:
position = i
break
elif relevant_line_in_file[0] == '+' and relevant_line_in_file[1:] in line:
elif relevant_line_in_file[0] == '+' and relevant_line_in_file[1:].lstrip() in line:
# The model often adds a '+' to the beginning of the relevant_line_in_file even if originally
# it's a context line
position = i
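
The added `.lstrip()` (mirrored in the GitLabProvider hunk later in this diff) makes the substring match tolerant of extra indentation after the leading '+'. A made-up illustration:

```python
# Made-up example of why the lstrip() matters: the model echoes the line with
# a '+' plus its own indentation, which otherwise breaks the substring match.
relevant_line_in_file = "+        return result"   # model output: '+' and extra indentation
line = "+    return result"                        # actual line in the patch

print(relevant_line_in_file[1:] in line)           # False: the whitespace differs
print(relevant_line_in_file[1:].lstrip() in line)  # True: match on the stripped text
```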
@@ -8,7 +8,8 @@ from gitlab import GitlabGetError

from pr_agent.config_loader import settings

from .git_provider import FilePatchInfo, GitProvider, EDIT_TYPE
from .git_provider import EDIT_TYPE, FilePatchInfo, GitProvider
from ..algo.language_handler import is_valid_file


class GitLabProvider(GitProvider):
@@ -51,34 +52,36 @@ class GitLabProvider(GitProvider):
try:
return self.gl.projects.get(self.id_project).files.get(file_path, branch).decode()
except GitlabGetError:
# In case of file creation the method returns GitlabGetError (404 file not found). In this case we return an empty string for the diff.
# In case of file creation the method returns GitlabGetError (404 file not found).
# In this case we return an empty string for the diff.
return ''

def get_diff_files(self) -> list[FilePatchInfo]:
diffs = self.mr.changes()['changes']
diff_files = []
for diff in diffs:
original_file_content_str = self._get_pr_file_content(diff['old_path'], self.mr.target_branch)
new_file_content_str = self._get_pr_file_content(diff['new_path'], self.mr.source_branch)
edit_type = EDIT_TYPE.MODIFIED
if diff['new_file']:
edit_type = EDIT_TYPE.ADDED
elif diff['deleted_file']:
edit_type = EDIT_TYPE.DELETED
elif diff['renamed_file']:
edit_type = EDIT_TYPE.RENAMED
try:
if isinstance(original_file_content_str, bytes):
original_file_content_str = bytes.decode(original_file_content_str, 'utf-8')
if isinstance(new_file_content_str, bytes):
new_file_content_str = bytes.decode(new_file_content_str, 'utf-8')
except UnicodeDecodeError:
logging.warning(
f"Cannot decode file {diff['old_path']} or {diff['new_path']} in merge request {self.id_mr}")
diff_files.append(
FilePatchInfo(original_file_content_str, new_file_content_str, diff['diff'], diff['new_path'],
edit_type=edit_type,
old_filename=None if diff['old_path'] == diff['new_path'] else diff['old_path']))
if is_valid_file(diff['new_path']):
original_file_content_str = self._get_pr_file_content(diff['old_path'], self.mr.target_branch)
new_file_content_str = self._get_pr_file_content(diff['new_path'], self.mr.source_branch)
edit_type = EDIT_TYPE.MODIFIED
if diff['new_file']:
edit_type = EDIT_TYPE.ADDED
elif diff['deleted_file']:
edit_type = EDIT_TYPE.DELETED
elif diff['renamed_file']:
edit_type = EDIT_TYPE.RENAMED
try:
if isinstance(original_file_content_str, bytes):
original_file_content_str = bytes.decode(original_file_content_str, 'utf-8')
if isinstance(new_file_content_str, bytes):
new_file_content_str = bytes.decode(new_file_content_str, 'utf-8')
except UnicodeDecodeError:
logging.warning(
f"Cannot decode file {diff['old_path']} or {diff['new_path']} in merge request {self.id_mr}")
diff_files.append(
FilePatchInfo(original_file_content_str, new_file_content_str, diff['diff'], diff['new_path'],
edit_type=edit_type,
old_filename=None if diff['old_path'] == diff['new_path'] else diff['old_path']))
self.diff_files = diff_files
return diff_files

@@ -86,8 +89,12 @@ class GitLabProvider(GitProvider):
return [change['new_path'] for change in self.mr.changes()['changes']]

def publish_description(self, pr_title: str, pr_body: str):
logging.exception("Not implemented yet")
pass
try:
self.mr.title = pr_title
self.mr.description = pr_body
self.mr.save()
except Exception as e:
logging.exception(f"Could not update merge request {self.id_mr} description: {e}")

def publish_comment(self, mr_comment: str, is_temporary: bool = False):
comment = self.mr.notes.create({'body': mr_comment})
@@ -143,7 +150,8 @@ class GitLabProvider(GitProvider):

lines = target_file.head_file.splitlines()
relevant_line_in_file = lines[relevant_lines_start - 1]
edit_type, found, source_line_no, target_file, target_line_no = self.find_in_file(target_file, relevant_line_in_file)
edit_type, found, source_line_no, target_file, target_line_no = self.find_in_file(target_file,
relevant_line_in_file)
self.send_inline_comment(body, edit_type, found, relevant_file, relevant_line_in_file, source_line_no,
target_file, target_line_no)

@@ -165,7 +173,7 @@ class GitLabProvider(GitProvider):
target_file = file
patch = file.patch
patch_lines = patch.splitlines()
for i, line in enumerate(patch_lines):
for line in patch_lines:
if line.startswith('@@'):
match = self.RE_HUNK_HEADER.match(line)
if not match:
@@ -185,7 +193,7 @@ class GitLabProvider(GitProvider):
found = True
edit_type = self.get_edit_type(line)
break
elif relevant_line_in_file[0] == '+' and relevant_line_in_file[1:] in line:
elif relevant_line_in_file[0] == '+' and relevant_line_in_file[1:].lstrip() in line:
# The model often adds a '+' to the beginning of the relevant_line_in_file even if originally
# it's a context line
found = True
@@ -1,14 +1,9 @@
import asyncio
import json
import os
import re

from pr_agent.agent.pr_agent import PRAgent
from pr_agent.config_loader import settings
from pr_agent.tools.pr_code_suggestions import PRCodeSuggestions
from pr_agent.tools.pr_description import PRDescription
from pr_agent.tools.pr_information_from_user import PRInformationFromUser
from pr_agent.tools.pr_questions import PRQuestions
from pr_agent.tools.pr_reviewer import PRReviewer


@@ -1,6 +1,5 @@
import asyncio
import logging
import re
import sys
from datetime import datetime, timezone

@@ -10,10 +9,6 @@ from pr_agent.agent.pr_agent import PRAgent
from pr_agent.config_loader import settings
from pr_agent.git_providers import get_git_provider
from pr_agent.servers.help import bot_help_text
from pr_agent.tools.pr_code_suggestions import PRCodeSuggestions
from pr_agent.tools.pr_description import PRDescription
from pr_agent.tools.pr_questions import PRQuestions
from pr_agent.tools.pr_reviewer import PRReviewer

logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
NOTIFICATION_URL = "https://api.github.com/notifications"
@@ -103,5 +98,6 @@ async def polling_loop():
except Exception as e:
logging.error(f"Exception during processing of a notification: {e}")


if __name__ == '__main__':
asyncio.run(polling_loop())

18 pr_agent/servers/serverless.py (new file)

@@ -0,0 +1,18 @@
import logging

from fastapi import FastAPI
from mangum import Mangum

from pr_agent.servers.github_app import router

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)

app = FastAPI()
app.include_router(router)

handler = Mangum(app, lifespan="off")


def serverless(event, context):
    return handler(event, context)
@@ -8,9 +8,9 @@ from jinja2 import Environment, StrictUndefined
from pr_agent.algo.ai_handler import AiHandler
from pr_agent.algo.pr_processing import get_pr_diff
from pr_agent.algo.token_handler import TokenHandler
from pr_agent.algo.utils import convert_to_markdown, try_fix_json
from pr_agent.algo.utils import try_fix_json
from pr_agent.config_loader import settings
from pr_agent.git_providers import get_git_provider, BitbucketProvider
from pr_agent.git_providers import BitbucketProvider, get_git_provider
from pr_agent.git_providers.git_provider import get_main_pr_language


@@ -62,7 +62,6 @@ class PRCodeSuggestions:
logging.info('Pushing inline code comments...')
self.push_inline_code_suggestions(data)


async def _get_prediction(self):
variables = copy.deepcopy(self.vars)
variables["diff"] = self.patches_diff  # update diff
@@ -98,12 +97,12 @@ class PRCodeSuggestions:
relevant_lines_start = int(relevant_lines_str.split('-')[0]) # absolute position
relevant_lines_end = int(relevant_lines_str.split('-')[-1])
content = d['suggestion content']
existing_code_snippet = d['existing code']
new_code_snippet = d['improved code']

if new_code_snippet:
try: # dedent code snippet
self.diff_files = self.git_provider.diff_files if self.git_provider.diff_files else self.git_provider.get_diff_files()
self.diff_files = self.git_provider.diff_files if self.git_provider.diff_files \
else self.git_provider.get_diff_files()
original_initial_line = None
for file in self.diff_files:
if file.filename.strip() == relevant_file:
@@ -121,7 +120,7 @@ class PRCodeSuggestions:
logging.info(f"Could not dedent code snippet for file {relevant_file}, error: {e}")

body = f"**Suggestion:** {content}\n```suggestion\n" + new_code_snippet + "\n```"
success = self.git_provider.publish_code_suggestion(body=body,
self.git_provider.publish_code_suggestion(body=body,
relevant_file=relevant_file,
relevant_lines_start=relevant_lines_start,
relevant_lines_end=relevant_lines_end)
@@ -7,7 +7,6 @@ from jinja2 import Environment, StrictUndefined
from pr_agent.algo.ai_handler import AiHandler
from pr_agent.algo.pr_processing import get_pr_diff
from pr_agent.algo.token_handler import TokenHandler
from pr_agent.algo.utils import convert_to_markdown
from pr_agent.config_loader import settings
from pr_agent.git_providers import get_git_provider
from pr_agent.git_providers.git_provider import get_main_pr_language
@@ -67,5 +67,5 @@ class PRInformationFromUser:
if settings.config.verbosity_level >= 2:
logging.info(f"answer_str:\n{model_output}")
answer_str = f"{model_output}\n\n Please respond to the questions above in the following format:\n\n" +\
f"\n>/answer\n>1) ...\n>2) ...\n>...\n"
"\n>/answer\n>1) ...\n>2) ...\n>...\n"
return answer_str
@@ -24,7 +24,7 @@ class PRReviewer:
self.is_answer = is_answer
if self.is_answer and not self.git_provider.is_supported("get_issue_comments"):
raise Exception(f"Answer mode is not supported for {settings.config.git_provider} for now")
answer_str = question_str = self._get_user_answers()
answer_str, question_str = self._get_user_answers()
self.ai_handler = AiHandler()
self.patches_diff = None
self.prediction = None
@@ -96,7 +96,7 @@ class PRReviewer:
del data['PR Feedback']['Security concerns']
data['PR Analysis']['Security concerns'] = val

if settings.config.git_provider == 'github' and \
if settings.config.git_provider != 'bitbucket' and \
settings.pr_reviewer.inline_code_comments and \
'Code suggestions' in data['PR Feedback']:
# keeping only code suggestions that can't be submitted as inline comments
@@ -1,13 +1,12 @@
# Generated by CodiumAI

from pr_agent.algo.utils import try_fix_json


import pytest

class TestTryFixJson:
# Tests that JSON with complete 'Code suggestions' section returns expected output
def test_incomplete_code_suggestions(self):
review = '{"PR Analysis": {"Main theme": "xxx", "Type of PR": "Bug fix"}, "PR Feedback": {"General PR suggestions": "..., `xxx`...", "Code suggestions": [{"relevant file": "xxx.py", "suggestion content": "xxx [important]"}, {"suggestion number": 2, "relevant file": "yyy.py", "suggestion content": "yyy [incomp...'
review = '{"PR Analysis": {"Main theme": "xxx", "Type of PR": "Bug fix"}, "PR Feedback": {"General PR suggestions": "..., `xxx`...", "Code suggestions": [{"relevant file": "xxx.py", "suggestion content": "xxx [important]"}, {"suggestion number": 2, "relevant file": "yyy.py", "suggestion content": "yyy [incomp...' # noqa: E501
expected_output = {
'PR Analysis': {
'Main theme': 'xxx',
@@ -26,7 +25,7 @@ class TestTryFixJson:
assert try_fix_json(review) == expected_output

def test_incomplete_code_suggestions_new_line(self):
review = '{"PR Analysis": {"Main theme": "xxx", "Type of PR": "Bug fix"}, "PR Feedback": {"General PR suggestions": "..., `xxx`...", "Code suggestions": [{"relevant file": "xxx.py", "suggestion content": "xxx [important]"} \n\t, {"suggestion number": 2, "relevant file": "yyy.py", "suggestion content": "yyy [incomp...'
review = '{"PR Analysis": {"Main theme": "xxx", "Type of PR": "Bug fix"}, "PR Feedback": {"General PR suggestions": "..., `xxx`...", "Code suggestions": [{"relevant file": "xxx.py", "suggestion content": "xxx [important]"} \n\t, {"suggestion number": 2, "relevant file": "yyy.py", "suggestion content": "yyy [incomp...' # noqa: E501
expected_output = {
'PR Analysis': {
'Main theme': 'xxx',
@@ -45,7 +44,7 @@ class TestTryFixJson:
assert try_fix_json(review) == expected_output

def test_incomplete_code_suggestions_many_close_brackets(self):
review = '{"PR Analysis": {"Main theme": "xxx", "Type of PR": "Bug fix"}, "PR Feedback": {"General PR suggestions": "..., `xxx`...", "Code suggestions": [{"relevant file": "xxx.py", "suggestion content": "xxx [important]"} \n, {"suggestion number": 2, "relevant file": "yyy.py", "suggestion content": "yyy }, [}\n ,incomp.} ,..'
review = '{"PR Analysis": {"Main theme": "xxx", "Type of PR": "Bug fix"}, "PR Feedback": {"General PR suggestions": "..., `xxx`...", "Code suggestions": [{"relevant file": "xxx.py", "suggestion content": "xxx [important]"} \n, {"suggestion number": 2, "relevant file": "yyy.py", "suggestion content": "yyy }, [}\n ,incomp.} ,..' # noqa: E501
expected_output = {
'PR Analysis': {
'Main theme': 'xxx',
@@ -64,7 +63,7 @@ class TestTryFixJson:
assert try_fix_json(review) == expected_output

def test_incomplete_code_suggestions_relevant_file(self):
review = '{"PR Analysis": {"Main theme": "xxx", "Type of PR": "Bug fix"}, "PR Feedback": {"General PR suggestions": "..., `xxx`...", "Code suggestions": [{"relevant file": "xxx.py", "suggestion content": "xxx [important]"}, {"suggestion number": 2, "relevant file": "yyy.p'
review = '{"PR Analysis": {"Main theme": "xxx", "Type of PR": "Bug fix"}, "PR Feedback": {"General PR suggestions": "..., `xxx`...", "Code suggestions": [{"relevant file": "xxx.py", "suggestion content": "xxx [important]"}, {"suggestion number": 2, "relevant file": "yyy.p' # noqa: E501
expected_output = {
'PR Analysis': {
'Main theme': 'xxx',