Mirror of https://github.com/qodo-ai/pr-agent.git (synced 2025-07-06 22:00:40 +08:00)

Compare commits (68 commits)
Commits in this compare:
9770f4709a, 35afe758e9, 50125ae57f, 6595c3e0c9, fdd16f6c75, 7b7e913195, 5477469a91, dee1f168f8, bb18e32c56, 70286e9574, 3f60d12a9a, 164b340c29, 4bb035ec0f, 23a79bc8fe, 1db53ae1ad, cca951d787, 230d684cd3, 0a02fa8597, f82b9620af, ce29d9eb49, b7b650eb05, 6ca0655517, edcf89a456, 7762a67250, 7049c73790, cc7be0811a, d3a5aea89e, dd87df49f5, e85bcf3a17, abb754b16b, bb5878c99a, 273a9e35d9, fcc208d09f, 20bbdac135, ceedf2bf83, 2d6b947292, 2e13b12fe6, 2d56c88291, cf9c6a872d, 0bb8ab70a4, 4a47b78a90, 3e542cd88b, 17ed050ca7, e24c5e3501, b206b1c5ff, 0270306d3c, 3e09b9ac37, 725ac9e85d, e00500b90c, f1f271fa00, d38c5236dd, 49a3a1e511, 1b0b90e51d, 64481e2d84, e0f295659d, fe75e3f2ec, e3274af831, 95b6abef09, 7f1849a867, 7760f37dee, ebbe655c40, 164ed77d72, b1148e5f7a, 2012e25596, a75253097b, 079d62af56, 6c4a5bae52, 886139c6b5
.github/workflows/build-and-test.yaml (new file, vendored, 36 lines added)

@@ -0,0 +1,36 @@
+name: Build-and-test
+
+on:
+  push:
+
+jobs:
+  build-and-test:
+    runs-on: ubuntu-latest
+
+    steps:
+    - id: checkout
+      uses: actions/checkout@v2
+
+    - id: dockerx
+      name: Setup Docker Buildx
+      uses: docker/setup-buildx-action@v2
+
+    - id: build
+      name: Build dev docker
+      uses: docker/build-push-action@v2
+      with:
+        context: .
+        file: ./docker/Dockerfile
+        push: false
+        load: true
+        tags: codiumai/pr-agent:test
+        cache-from: type=gha,scope=dev
+        cache-to: type=gha,mode=max,scope=dev
+        target: test
+
+    - id: test
+      name: Test dev docker
+      run: |
+        docker run --rm codiumai/pr-agent:test pytest -v
@@ -1,6 +1,17 @@
+# This workflow enables developers to call PR-Agents `/[actions]` in PR's comments and upon PR creation.
+# Learn more at https://www.codium.ai/pr-agent/
+# This is v0.2 of this workflow file
+
+name: PR-Agent
+
 on:
   pull_request:
   issue_comment:
+
+permissions:
+  issues: write
+  pull-requests: write
+
 jobs:
   pr_agent_job:
     runs-on: ubuntu-latest
@@ -92,6 +92,7 @@ pip install -r requirements.txt
 
 ```
 cp pr_agent/settings/.secrets_template.toml pr_agent/settings/.secrets.toml
+chmod 600 pr_agent/settings/.secrets.toml
 # Edit .secrets.toml file
 ```
 

@@ -128,6 +129,7 @@ Allowing you to automate the review process on your private or public repositories.
 - Pull requests: Read & write
 - Issue comment: Read & write
 - Metadata: Read-only
+- Contents: Read-only
 - Set the following events:
   - Issue comment
   - Pull request
README.md (14 lines changed)

@@ -79,7 +79,7 @@ CodiumAI `PR-Agent` is an open-source tool aiming to help developers review pull
 |-------|---------------------------------------------|:------:|:------:|:---------:|
 | TOOLS | Review | :white_check_mark: | :white_check_mark: | :white_check_mark: |
 | | ⮑ Inline review | :white_check_mark: | :white_check_mark: | |
-| | Ask | :white_check_mark: | :white_check_mark: | |
+| | Ask | :white_check_mark: | :white_check_mark: | :white_check_mark: |
 | | Auto-Description | :white_check_mark: | :white_check_mark: | |
 | | Improve Code | :white_check_mark: | :white_check_mark: | |
 | | Reflect and Review | :white_check_mark: | | |

@@ -97,12 +97,12 @@ CodiumAI `PR-Agent` is an open-source tool aiming to help developers review pull
 | | Incremental PR Review | :white_check_mark: | | |
 
 Examples for invoking the different tools via the CLI:
-- **Review**: python cli.py --pr-url=<pr_url> review
-- **Describe**: python cli.py --pr-url=<pr_url> describe
-- **Improve**: python cli.py --pr-url=<pr_url> improve
-- **Ask**: python cli.py --pr-url=<pr_url> ask "Write me a poem about this PR"
-- **Reflect**: python cli.py --pr-url=<pr_url> reflect
-- **Update Changelog**: python cli.py --pr-url=<pr_url> update_changelog
+- **Review**: python cli.py --pr_url=<pr_url> review
+- **Describe**: python cli.py --pr_url=<pr_url> describe
+- **Improve**: python cli.py --pr_url=<pr_url> improve
+- **Ask**: python cli.py --pr_url=<pr_url> ask "Write me a poem about this PR"
+- **Reflect**: python cli.py --pr_url=<pr_url> reflect
+- **Update Changelog**: python cli.py --pr_url=<pr_url> update_changelog
 
 "<pr_url>" is the url of the relevant PR (for example: https://github.com/Codium-ai/pr-agent/pull/50).
@@ -4,17 +4,21 @@ WORKDIR /app
 ADD pyproject.toml .
 RUN pip install . && rm pyproject.toml
 ENV PYTHONPATH=/app
-ADD pr_agent pr_agent
 
 FROM base as github_app
+ADD pr_agent pr_agent
 CMD ["python", "pr_agent/servers/github_app.py"]
 
 FROM base as github_polling
+ADD pr_agent pr_agent
 CMD ["python", "pr_agent/servers/github_polling.py"]
 
 FROM base as test
 ADD requirements-dev.txt .
 RUN pip install -r requirements-dev.txt && rm requirements-dev.txt
+ADD pr_agent pr_agent
+ADD tests tests
 
 FROM base as cli
+ADD pr_agent pr_agent
 ENTRYPOINT ["python", "pr_agent/cli.py"]
@@ -37,7 +37,7 @@ class PRAgent:
     def __init__(self):
         pass
 
-    async def handle_request(self, pr_url, request) -> bool:
+    async def handle_request(self, pr_url, request, notify=None) -> bool:
         # First, apply repo specific settings if exists
         if get_settings().config.use_repo_settings_file:
             repo_settings_file = None

@@ -67,8 +67,12 @@ class PRAgent:
         if action == "reflect_and_review" and not get_settings().pr_reviewer.ask_and_reflect:
             action = "review"
         if action == "answer":
+            if notify:
+                notify()
             await PRReviewer(pr_url, is_answer=True, args=args).run()
         elif action in command2class:
+            if notify:
+                notify()
             await command2class[action](pr_url, args=args).run()
         else:
             return False
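The `notify` parameter added above lets the caller hook a side effect (for example, posting an acknowledgement reaction) just before a command handler runs. A minimal stand-alone sketch of that optional-callback dispatch pattern; the command table and handlers here are hypothetical stand-ins, not the real agent classes:

```python
# Sketch of the optional-notify dispatch pattern: call notify() only when
# a handler is actually about to run, then dispatch from a command table.

def handle_request(action, command2class, notify=None):
    """Dispatch `action`; invoke `notify()` first when a handler will run."""
    if action in command2class:
        if notify:
            notify()  # e.g. add an "eyes" reaction to the triggering comment
        return command2class[action]()
    return False

calls = []
commands = {"review": lambda: "review-done"}
result = handle_request("review", commands, notify=lambda: calls.append("notified"))
```

Note that an unknown action returns `False` without ever calling `notify`, matching the structure of the diff above.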
@@ -29,7 +29,6 @@ class AiHandler:
         self.azure = False
         if get_settings().get("OPENAI.ORG", None):
             litellm.organization = get_settings().openai.org
-        self.deployment_id = get_settings().get("OPENAI.DEPLOYMENT_ID", None)
         if get_settings().get("OPENAI.API_TYPE", None):
             if get_settings().openai.api_type == "azure":
                 self.azure = True

@@ -47,6 +46,13 @@ class AiHandler:
         except AttributeError as e:
             raise ValueError("OpenAI key is required") from e
 
+    @property
+    def deployment_id(self):
+        """
+        Returns the deployment ID for the OpenAI API.
+        """
+        return get_settings().get("OPENAI.DEPLOYMENT_ID", None)
+
     @retry(exceptions=(APIError, Timeout, TryAgain, AttributeError, RateLimitError),
            tries=OPENAI_RETRIES, delay=2, backoff=2, jitter=(1, 3))
     async def chat_completion(self, model: str, temperature: float, system: str, user: str):

@@ -70,9 +76,15 @@ class AiHandler:
             TryAgain: If there is an attribute error during OpenAI inference.
         """
         try:
+            deployment_id = self.deployment_id
+            if get_settings().config.verbosity_level >= 2:
+                logging.debug(
+                    f"Generating completion with {model}"
+                    f"{(' from deployment ' + deployment_id) if deployment_id else ''}"
+                )
             response = await acompletion(
                 model=model,
-                deployment_id=self.deployment_id,
+                deployment_id=deployment_id,
                 messages=[
                     {"role": "system", "content": system},
                     {"role": "user", "content": user}
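The change above turns `deployment_id` from a value cached in `__init__` into a property that re-reads the settings on every access, so a deployment set later (for example by a fallback loop) is picked up. A small stand-alone illustration of the difference, with a plain dict standing in for the settings object:

```python
# Sketch: a property re-reads shared settings on each access, while an
# attribute assigned in __init__ freezes the value at construction time.

settings = {"OPENAI.DEPLOYMENT_ID": "gpt4-east"}  # hypothetical settings store

class CachedHandler:
    def __init__(self):
        # Snapshot taken once; later settings changes are invisible here.
        self.deployment_id = settings.get("OPENAI.DEPLOYMENT_ID")

class LiveHandler:
    @property
    def deployment_id(self):
        # Re-read on every access; always reflects the current settings.
        return settings.get("OPENAI.DEPLOYMENT_ID")

cached, live = CachedHandler(), LiveHandler()
settings["OPENAI.DEPLOYMENT_ID"] = "gpt4-west"  # e.g. updated by a fallback loop
```

After the update, only the property-based handler sees the new deployment ID.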
@@ -11,7 +11,7 @@ from github import RateLimitExceededException
 from pr_agent.algo import MAX_TOKENS
 from pr_agent.algo.git_patch_processing import convert_to_hunks_with_lines_numbers, extend_patch, handle_patch_deletions
 from pr_agent.algo.language_handler import sort_files_by_main_languages
-from pr_agent.algo.token_handler import TokenHandler
+from pr_agent.algo.token_handler import TokenHandler, get_token_encoder
 from pr_agent.config_loader import get_settings
 from pr_agent.git_providers.git_provider import FilePatchInfo, GitProvider

@@ -208,18 +208,45 @@ def pr_generate_compressed_diff(top_langs: list, token_handler: TokenHandler, mo
 
 
 async def retry_with_fallback_models(f: Callable):
+    all_models = _get_all_models()
+    all_deployments = _get_all_deployments(all_models)
+    # try each (model, deployment_id) pair until one is successful, otherwise raise exception
+    for i, (model, deployment_id) in enumerate(zip(all_models, all_deployments)):
+        try:
+            get_settings().set("openai.deployment_id", deployment_id)
+            return await f(model)
+        except Exception as e:
+            logging.warning(
+                f"Failed to generate prediction with {model}"
+                f"{(' from deployment ' + deployment_id) if deployment_id else ''}: "
+                f"{traceback.format_exc()}"
+            )
+            if i == len(all_models) - 1:  # If it's the last iteration
+                raise  # Re-raise the last exception
+
+
+def _get_all_models() -> List[str]:
     model = get_settings().config.model
     fallback_models = get_settings().config.fallback_models
     if not isinstance(fallback_models, list):
-        fallback_models = [fallback_models]
+        fallback_models = [m.strip() for m in fallback_models.split(",")]
     all_models = [model] + fallback_models
-    for i, model in enumerate(all_models):
-        try:
-            return await f(model)
-        except Exception as e:
-            logging.warning(f"Failed to generate prediction with {model}: {traceback.format_exc()}")
-            if i == len(all_models) - 1:  # If it's the last iteration
-                raise  # Re-raise the last exception
+    return all_models
+
+
+def _get_all_deployments(all_models: List[str]) -> List[str]:
+    deployment_id = get_settings().get("openai.deployment_id", None)
+    fallback_deployments = get_settings().get("openai.fallback_deployments", [])
+    if not isinstance(fallback_deployments, list) and fallback_deployments:
+        fallback_deployments = [d.strip() for d in fallback_deployments.split(",")]
+    if fallback_deployments:
+        all_deployments = [deployment_id] + fallback_deployments
+        if len(all_deployments) < len(all_models):
+            raise ValueError(f"The number of deployments ({len(all_deployments)}) "
+                             f"is less than the number of models ({len(all_models)})")
+    else:
+        all_deployments = [deployment_id] * len(all_models)
+    return all_deployments
 
 
 def find_line_number_of_relevant_line_in_file(diff_files: List[FilePatchInfo],
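The fallback logic above pairs each model with a deployment ID and tries the pairs in order until one succeeds. A self-contained sketch of that pairing and retry loop; it is synchronous for brevity, and the function and model names are stand-ins rather than the real implementation:

```python
def pair_models_with_deployments(models, deployments):
    """Pair each model with a deployment; with no deployments, pair with None."""
    if deployments:
        if len(deployments) < len(models):
            raise ValueError("fewer deployments than models")
        return list(zip(models, deployments))
    return [(m, None) for m in models]

def run_with_fallback(f, pairs):
    """Try each (model, deployment) pair in order; re-raise the last failure."""
    last_exc = None
    for model, deployment in pairs:
        try:
            return f(model, deployment)
        except Exception as exc:
            last_exc = exc
    raise last_exc

pairs = pair_models_with_deployments(["gpt-4", "gpt-3.5-turbo"], [])

def flaky(model, deployment):
    if model == "gpt-4":
        raise RuntimeError("rate limited")  # first model fails, fallback runs
    return f"ok:{model}"

result = run_with_fallback(flaky, pairs)
```

The real code additionally broadcasts a single deployment ID across all models when no fallback deployments are configured.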
@@ -284,3 +311,30 @@ def find_line_number_of_relevant_line_in_file(diff_files: List[FilePatchInfo],
                 absolute_position = start2 + delta - 1
                 break
     return position, absolute_position
+
+
+def clip_tokens(text: str, max_tokens: int) -> str:
+    """
+    Clip the number of tokens in a string to a maximum number of tokens.
+
+    Args:
+        text (str): The string to clip.
+        max_tokens (int): The maximum number of tokens allowed in the string.
+
+    Returns:
+        str: The clipped string.
+    """
+    # Estimate the clip point from the measured characters-per-token ratio
+    try:
+        encoder = get_token_encoder()
+        num_input_tokens = len(encoder.encode(text))
+        if num_input_tokens <= max_tokens:
+            return text
+        num_chars = len(text)
+        chars_per_token = num_chars / num_input_tokens
+        num_output_chars = int(chars_per_token * max_tokens)
+        clipped_text = text[:num_output_chars]
+        return clipped_text
+    except Exception as e:
+        logging.warning(f"Failed to clip tokens: {e}")
+        return text
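`clip_tokens` above scales the character count by the measured chars-per-token ratio rather than decoding a truncated token list. The same idea with a stub tokenizer standing in for tiktoken, so the sketch runs anywhere:

```python
def clip_by_token_ratio(text, max_tokens, count_tokens):
    """Clip `text` to roughly max_tokens, assuming a uniform chars-per-token ratio."""
    num_input_tokens = count_tokens(text)
    if num_input_tokens <= max_tokens:
        return text  # already within budget
    chars_per_token = len(text) / num_input_tokens
    return text[: int(chars_per_token * max_tokens)]

# Stub tokenizer: one token per whitespace-separated word.
count_words = lambda s: len(s.split())

text = "one two three four five six seven eight"
clipped = clip_by_token_ratio(text, 4, count_words)
```

With 8 word-tokens over 39 characters, the ratio is 4.875 chars/token, so a 4-token budget clips to the first 19 characters.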
@@ -4,6 +4,10 @@ from tiktoken import encoding_for_model, get_encoding
 from pr_agent.config_loader import get_settings
 
 
+def get_token_encoder():
+    return encoding_for_model(get_settings().config.model) if "gpt" in get_settings().config.model else get_encoding(
+        "cl100k_base")
+
 class TokenHandler:
     """
     A class for handling tokens in the context of a pull request.

@@ -27,7 +31,7 @@ class TokenHandler:
         - system: The system string.
         - user: The user string.
         """
-        self.encoder = encoding_for_model(get_settings().config.model) if "gpt" in get_settings().config.model else get_encoding("cl100k_base")
+        self.encoder = get_token_encoder()
         self.prompt_tokens = self._get_system_user_tokens(pr, self.encoder, vars, system, user)
 
     def _get_system_user_tokens(self, pr, encoder, vars: dict, system, user):
@@ -8,8 +8,8 @@ import textwrap
 from datetime import datetime
 from typing import Any, List
 
+import yaml
 from starlette_context import context
 
 from pr_agent.config_loader import get_settings, global_settings

@@ -258,3 +258,26 @@ def update_settings_from_args(args: List[str]) -> List[str]:
         else:
             other_args.append(arg)
     return other_args
+
+
+def load_yaml(review_text: str) -> dict:
+    review_text = review_text.removeprefix('```yaml').rstrip('`')
+    try:
+        data = yaml.load(review_text, Loader=yaml.SafeLoader)
+    except Exception as e:
+        logging.error(f"Failed to parse AI prediction: {e}")
+        data = try_fix_yaml(review_text)
+    return data
+
+
+def try_fix_yaml(review_text: str) -> dict:
+    review_text_lines = review_text.split('\n')
+    data = {}
+    for i in range(1, len(review_text_lines)):
+        review_text_lines_tmp = '\n'.join(review_text_lines[:-i])
+        try:
+            data = yaml.load(review_text_lines_tmp, Loader=yaml.SafeLoader)
+            logging.info(f"Successfully parsed AI prediction after removing {i} lines")
+            break
+        except:
+            pass
+    return data
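`try_fix_yaml` above salvages a truncated AI reply by dropping trailing lines one at a time until the remainder parses. The same repair loop sketched with the stdlib `json` parser standing in for PyYAML, so it runs without third-party dependencies:

```python
import json

def try_fix_truncated(text, parse=json.loads):
    """Drop trailing lines one at a time until `parse` succeeds."""
    lines = text.split("\n")
    data = {}
    for i in range(1, len(lines)):
        candidate = "\n".join(lines[:-i])
        try:
            data = parse(candidate)
            break  # first successful parse wins
        except Exception:
            pass
    return data

# A reply with unparseable trailing junk after a complete object.
broken = '{"score": 8,\n "summary": "ok"}\ntrailing junk'
fixed = try_fix_truncated(broken)
```

Dropping the last line leaves a complete JSON object, so the loop recovers the structured data on its first iteration.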
@@ -10,13 +10,13 @@ from pr_agent.config_loader import get_settings
 def run(inargs=None):
     parser = argparse.ArgumentParser(description='AI based pull request analyzer', usage=
 """\
-Usage: cli.py --pr-url <URL on supported git hosting service> <command> [<args>].
+Usage: cli.py --pr-url=<URL on supported git hosting service> <command> [<args>].
 For example:
-- cli.py --pr-url=... review
-- cli.py --pr-url=... describe
-- cli.py --pr-url=... improve
-- cli.py --pr-url=... ask "write me a poem about this PR"
-- cli.py --pr-url=... reflect
+- cli.py --pr_url=... review
+- cli.py --pr_url=... describe
+- cli.py --pr_url=... improve
+- cli.py --pr_url=... ask "write me a poem about this PR"
+- cli.py --pr_url=... reflect
 
 Supported commands:
 review / review_pr - Add a review that includes a summary of the PR and specific suggestions for improvement.

@@ -27,7 +27,7 @@ reflect - Ask the PR author questions about the PR.
 update_changelog - Update the changelog based on the PR's contents.
 
 To edit any configuration parameter from 'configuration.toml', just add -config_path=<value>.
-For example: '- cli.py --pr-url=... review --pr_reviewer.extra_instructions="focus on the file: ..."'
+For example: 'python cli.py --pr_url=... review --pr_reviewer.extra_instructions="focus on the file: ..."'
 """)
     parser.add_argument('--pr_url', type=str, help='The URL of the PR to review', required=True)
     parser.add_argument('command', type=str, help='The', choices=commands, default='review')
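The diff above normalizes the usage examples from `--pr-url` to `--pr_url`, which is the flag the parser actually declares. A minimal stand-alone sketch of such a parser (the command list is abbreviated and the extra dotted option is hypothetical); unknown options like configuration overrides pass through `parse_known_args` untouched:

```python
import argparse

def build_parser(commands=("review", "describe", "improve", "ask", "reflect", "update_changelog")):
    parser = argparse.ArgumentParser(description="AI based pull request analyzer")
    # The declared flag uses an underscore, so invocations must too.
    parser.add_argument("--pr_url", type=str, required=True,
                        help="The URL of the PR to review")
    parser.add_argument("command", type=str, choices=commands, default="review")
    return parser

args, rest = build_parser().parse_known_args(
    ["--pr_url=https://github.com/Codium-ai/pr-agent/pull/50", "review",
     "--pr_reviewer.extra_instructions=focus on tests"])
```

`parse_known_args` returns the recognized arguments plus the leftover list, which a settings layer can consume separately.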
@@ -5,6 +5,7 @@ from urllib.parse import urlparse
 import requests
 from atlassian.bitbucket import Cloud
 
+from ..algo.pr_processing import clip_tokens
 from ..config_loader import get_settings
 from .git_provider import FilePatchInfo

@@ -25,6 +26,13 @@ class BitbucketProvider:
         if pr_url:
             self.set_pr(pr_url)
 
+    def get_repo_settings(self):
+        try:
+            contents = self.repo_obj.get_contents(".pr_agent.toml", ref=self.pr.head.sha).decoded_content
+            return contents
+        except Exception:
+            return ""
+
     def is_supported(self, capability: str) -> bool:
         if capability in ['get_issue_comments', 'create_inline_comment', 'publish_inline_comments', 'get_labels']:
             return False

@@ -81,6 +89,9 @@ class BitbucketProvider:
         return self.pr.source_branch
 
     def get_pr_description(self):
+        max_tokens = get_settings().get("CONFIG.MAX_DESCRIPTION_TOKENS", None)
+        if max_tokens:
+            return clip_tokens(self.pr.description, max_tokens)
         return self.pr.description
 
     def get_user_id(self):

@@ -89,12 +100,25 @@ class BitbucketProvider:
     def get_issue_comments(self):
         raise NotImplementedError("Bitbucket provider does not support issue comments yet")
 
+    def get_repo_settings(self):
+        try:
+            contents = self.repo_obj.get_contents(".pr_agent.toml", ref=self.pr.head.sha).decoded_content
+            return contents
+        except Exception:
+            return ""
+
+    def add_eyes_reaction(self, issue_comment_id: int) -> Optional[int]:
+        return True
+
+    def remove_reaction(self, issue_comment_id: int, reaction_id: int) -> bool:
+        return True
+
     @staticmethod
     def _parse_pr_url(pr_url: str) -> Tuple[str, int]:
         parsed_url = urlparse(pr_url)
 
         if 'bitbucket.org' not in parsed_url.netloc:
-            raise ValueError("The provided URL is not a valid GitHub URL")
+            raise ValueError("The provided URL is not a valid Bitbucket URL")
 
         path_parts = parsed_url.path.strip('/').split('/')
@@ -3,6 +3,7 @@ from dataclasses import dataclass
 
 # enum EDIT_TYPE (ADDED, DELETED, MODIFIED, RENAMED)
 from enum import Enum
+from typing import Optional
 
 
 class EDIT_TYPE(Enum):

@@ -88,6 +89,21 @@ class GitProvider(ABC):
     def get_issue_comments(self):
         pass
 
+    @abstractmethod
+    def get_repo_settings(self):
+        pass
+
+    @abstractmethod
+    def add_eyes_reaction(self, issue_comment_id: int) -> Optional[int]:
+        pass
+
+    @abstractmethod
+    def remove_reaction(self, issue_comment_id: int, reaction_id: int) -> bool:
+        pass
+
+    @abstractmethod
+    def get_commit_messages(self):
+        pass
+
 
 def get_main_pr_language(languages, files) -> str:
     """
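Marking `get_repo_settings`, the reaction helpers, and `get_commit_messages` as `@abstractmethod` forces every provider to implement them before it can be instantiated. A compact stand-alone illustration with a hypothetical mini base class, not the real `GitProvider`:

```python
from abc import ABC, abstractmethod

class GitProviderSketch(ABC):
    """Hypothetical mini-version of a provider base class."""

    @abstractmethod
    def get_repo_settings(self): ...

    @abstractmethod
    def get_commit_messages(self): ...

class GoodProvider(GitProviderSketch):
    def get_repo_settings(self):
        return ""
    def get_commit_messages(self):
        return "1. initial commit"

class IncompleteProvider(GitProviderSketch):
    # Forgets get_commit_messages, so instantiation must fail.
    def get_repo_settings(self):
        return ""

ok = GoodProvider().get_commit_messages()
try:
    IncompleteProvider()
    instantiable = True
except TypeError:
    instantiable = False
```

The `TypeError` at construction time is what turns a silently missing method into an immediate, explicit failure.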
@ -2,17 +2,17 @@ import logging
|
|||||||
import hashlib
|
import hashlib
|
||||||
|
|
||||||
from datetime import datetime
|
from datetime import datetime
|
||||||
from typing import Optional, Tuple
|
from typing import Optional, Tuple, Any
|
||||||
from urllib.parse import urlparse
|
from urllib.parse import urlparse
|
||||||
|
|
||||||
from github import AppAuthentication, Auth, Github, GithubException
|
from github import AppAuthentication, Auth, Github, GithubException, Reaction
|
||||||
from retry import retry
|
from retry import retry
|
||||||
from starlette_context import context
|
from starlette_context import context
|
||||||
|
|
||||||
from .git_provider import FilePatchInfo, GitProvider, IncrementalPR
|
from .git_provider import FilePatchInfo, GitProvider, IncrementalPR
|
||||||
from ..algo.language_handler import is_valid_file
|
from ..algo.language_handler import is_valid_file
|
||||||
from ..algo.utils import load_large_diff
|
from ..algo.utils import load_large_diff
|
||||||
from ..algo.pr_processing import find_line_number_of_relevant_line_in_file
|
from ..algo.pr_processing import find_line_number_of_relevant_line_in_file, clip_tokens
|
||||||
from ..config_loader import get_settings
|
from ..config_loader import get_settings
|
||||||
from ..servers.utils import RateLimitExceeded
|
from ..servers.utils import RateLimitExceeded
|
||||||
|
|
||||||
@ -153,7 +153,7 @@ class GithubProvider(GitProvider):
|
|||||||
|
|
||||||
|
|
||||||
def create_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str):
|
def create_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str):
|
||||||
position = find_line_number_of_relevant_line_in_file(self.diff_files, relevant_file.strip('`'), relevant_line_in_file)
|
position, absolute_position = find_line_number_of_relevant_line_in_file(self.diff_files, relevant_file.strip('`'), relevant_line_in_file)
|
||||||
if position == -1:
|
if position == -1:
|
||||||
if get_settings().config.verbosity_level >= 2:
|
if get_settings().config.verbosity_level >= 2:
|
||||||
logging.info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
|
logging.info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
|
||||||
@ -234,6 +234,9 @@ class GithubProvider(GitProvider):
|
|||||||
return self.pr.head.ref
|
return self.pr.head.ref
|
||||||
|
|
||||||
def get_pr_description(self):
|
def get_pr_description(self):
|
||||||
|
max_tokens = get_settings().get("CONFIG.MAX_DESCRIPTION_TOKENS", None)
|
||||||
|
if max_tokens:
|
||||||
|
return clip_tokens(self.pr.body, max_tokens)
|
||||||
return self.pr.body
|
return self.pr.body
|
||||||
|
|
||||||
def get_user_id(self):
|
def get_user_id(self):
|
||||||
@ -263,6 +266,23 @@ class GithubProvider(GitProvider):
|
|||||||
except Exception:
|
except Exception:
|
||||||
return ""
|
return ""
|
||||||
|
|
||||||
|
def add_eyes_reaction(self, issue_comment_id: int) -> Optional[int]:
|
||||||
|
try:
|
||||||
|
reaction = self.pr.get_issue_comment(issue_comment_id).create_reaction("eyes")
|
||||||
|
return reaction.id
|
||||||
|
except Exception as e:
|
||||||
|
logging.exception(f"Failed to add eyes reaction, error: {e}")
|
||||||
|
return None
|
||||||
|
|
||||||
|
def remove_reaction(self, issue_comment_id: int, reaction_id: int) -> bool:
|
||||||
|
try:
|
||||||
|
self.pr.get_issue_comment(issue_comment_id).delete_reaction(reaction_id)
|
||||||
|
return True
|
||||||
|
except Exception as e:
|
||||||
|
logging.exception(f"Failed to remove eyes reaction, error: {e}")
|
||||||
|
return False
|
||||||
|
|
||||||
|
|
||||||
@staticmethod
|
@staticmethod
|
||||||
     def _parse_pr_url(pr_url: str) -> Tuple[str, int]:
         parsed_url = urlparse(pr_url)
@@ -358,27 +378,33 @@ class GithubProvider(GitProvider):
             logging.exception(f"Failed to get labels, error: {e}")
             return []
 
-    def get_commit_messages(self) -> str:
+    def get_commit_messages(self):
         """
         Retrieves the commit messages of a pull request.
 
         Returns:
             str: A string containing the commit messages of the pull request.
         """
+        max_tokens = get_settings().get("CONFIG.MAX_COMMITS_TOKENS", None)
         try:
             commit_list = self.pr.get_commits()
             commit_messages = [commit.commit.message for commit in commit_list]
             commit_messages_str = "\n".join([f"{i + 1}. {message}" for i, message in enumerate(commit_messages)])
-        except:
+        except Exception:
             commit_messages_str = ""
+        if max_tokens:
+            commit_messages_str = clip_tokens(commit_messages_str, max_tokens)
         return commit_messages_str
 
     def generate_link_to_relevant_line_number(self, suggestion) -> str:
         try:
-            relevant_file = suggestion['relevant file']
+            relevant_file = suggestion['relevant file'].strip('`').strip("'")
             relevant_line_str = suggestion['relevant line']
+            if not relevant_line_str:
+                return ""
 
             position, absolute_position = find_line_number_of_relevant_line_in_file \
-                (self.diff_files, relevant_file.strip('`'), relevant_line_str)
+                (self.diff_files, relevant_file, relevant_line_str)
 
             if absolute_position != -1:
                 # # link to right file only
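The hunk above caps the commit-message string with `clip_tokens` before it reaches the model. A minimal sketch of that pattern, where a crude ~4-characters-per-token heuristic stands in for the tokenizer-based `clip_tokens` in the real codebase (the heuristic and the truncation marker here are illustrative assumptions, not the library's implementation):

```python
def clip_tokens(text: str, max_tokens: int) -> str:
    # Sketch only: the real clip_tokens uses the model tokenizer; here a
    # crude ~4-characters-per-token estimate stands in for it.
    if not text:
        return text
    max_chars = max_tokens * 4
    if len(text) <= max_chars:
        return text
    return text[:max_chars] + "...(truncated)"


def get_commit_messages(messages, max_tokens=None):
    # Number the messages the way the provider does, then optionally clip.
    commit_messages_str = "\n".join(f"{i + 1}. {m}" for i, m in enumerate(messages))
    if max_tokens:
        commit_messages_str = clip_tokens(commit_messages_str, max_tokens)
    return commit_messages_str
```

With `max_tokens` unset the joined string passes through untouched, which matches the `if max_tokens:` guard in the patch.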
@@ -7,12 +7,16 @@ import gitlab
 from gitlab import GitlabGetError
 
 from ..algo.language_handler import is_valid_file
+from ..algo.pr_processing import clip_tokens
 from ..algo.utils import load_large_diff
 from ..config_loader import get_settings
 from .git_provider import EDIT_TYPE, FilePatchInfo, GitProvider
 
 logger = logging.getLogger()
 
+class DiffNotFoundError(Exception):
+    """Raised when the diff for a merge request cannot be found."""
+    pass
 
 class GitLabProvider(GitProvider):
 
@@ -55,7 +59,7 @@ class GitLabProvider(GitProvider):
             self.last_diff = self.mr.diffs.list(get_all=True)[-1]
         except IndexError as e:
             logger.error(f"Could not get diff for merge request {self.id_mr}")
-            raise ValueError(f"Could not get diff for merge request {self.id_mr}") from e
+            raise DiffNotFoundError(f"Could not get diff for merge request {self.id_mr}") from e
 
 
     def _get_pr_file_content(self, file_path: str, branch: str) -> str:
@@ -149,16 +153,20 @@ class GitLabProvider(GitProvider):
     def create_inline_comments(self, comments: list[dict]):
         raise NotImplementedError("Gitlab provider does not support publishing inline comments yet")
 
-    def send_inline_comment(self, body, edit_type, found, relevant_file, relevant_line_in_file, source_line_no,
-                            target_file, target_line_no):
+    def send_inline_comment(self,body: str,edit_type: str,found: bool,relevant_file: str,relevant_line_in_file: int,
+                            source_line_no: int, target_file: str,target_line_no: int) -> None:
         if not found:
             logging.info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
         else:
-            d = self.last_diff
+            # in order to have exact sha's we have to find correct diff for this change
+            diff = self.get_relevant_diff(relevant_file, relevant_line_in_file)
+            if diff is None:
+                logger.error(f"Could not get diff for merge request {self.id_mr}")
+                raise DiffNotFoundError(f"Could not get diff for merge request {self.id_mr}")
             pos_obj = {'position_type': 'text',
                        'new_path': target_file.filename,
                        'old_path': target_file.old_filename if target_file.old_filename else target_file.filename,
-                       'base_sha': d.base_commit_sha, 'start_sha': d.start_commit_sha, 'head_sha': d.head_commit_sha}
+                       'base_sha': diff.base_commit_sha, 'start_sha': diff.start_commit_sha, 'head_sha': diff.head_commit_sha}
             if edit_type == 'deletion':
                 pos_obj['old_line'] = source_line_no - 1
             elif edit_type == 'addition':
@@ -170,6 +178,23 @@ class GitLabProvider(GitProvider):
             self.mr.discussions.create({'body': body,
                                         'position': pos_obj})
+
+    def get_relevant_diff(self, relevant_file: str, relevant_line_in_file: int) -> Optional[dict]:
+        changes = self.mr.changes()  # Retrieve the changes for the merge request once
+        if not changes:
+            logging.error('No changes found for the merge request.')
+            return None
+        all_diffs = self.mr.diffs.list(get_all=True)
+        if not all_diffs:
+            logging.error('No diffs found for the merge request.')
+            return None
+        for diff in all_diffs:
+            for change in changes['changes']:
+                if change['new_path'] == relevant_file and relevant_line_in_file in change['diff']:
+                    return diff
+        logging.debug(
+            f'No relevant diff found for {relevant_file} {relevant_line_in_file}. Falling back to last diff.')
+        return self.last_diff  # fallback to last_diff if no relevant diff is found
 
     def publish_code_suggestions(self, code_suggestions: list):
         for suggestion in code_suggestions:
             try:
@@ -275,6 +300,9 @@ class GitLabProvider(GitProvider):
         return self.mr.source_branch
 
     def get_pr_description(self):
+        max_tokens = get_settings().get("CONFIG.MAX_DESCRIPTION_TOKENS", None)
+        if max_tokens:
+            return clip_tokens(self.mr.description, max_tokens)
         return self.mr.description
 
     def get_issue_comments(self):
@@ -287,6 +315,12 @@ class GitLabProvider(GitProvider):
         except Exception:
             return ""
 
+    def add_eyes_reaction(self, issue_comment_id: int) -> Optional[int]:
+        return True
+
+    def remove_reaction(self, issue_comment_id: int, reaction_id: int) -> bool:
+        return True
+
     def _parse_merge_request_url(self, merge_request_url: str) -> Tuple[str, int]:
         parsed_url = urlparse(merge_request_url)
 
@@ -332,16 +366,19 @@ class GitLabProvider(GitProvider):
     def get_labels(self):
         return self.mr.labels
 
-    def get_commit_messages(self) -> str:
+    def get_commit_messages(self):
         """
         Retrieves the commit messages of a pull request.
 
         Returns:
             str: A string containing the commit messages of the pull request.
         """
+        max_tokens = get_settings().get("CONFIG.MAX_COMMITS_TOKENS", None)
         try:
             commit_messages_list = [commit['message'] for commit in self.mr.commits()._list]
             commit_messages_str = "\n".join([f"{i + 1}. {message}" for i, message in enumerate(commit_messages_list)])
-        except:
+        except Exception:
             commit_messages_str = ""
+        if max_tokens:
+            commit_messages_str = clip_tokens(commit_messages_str, max_tokens)
         return commit_messages_str
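The new `get_relevant_diff` above walks every diff version of the merge request looking for the one whose changes contain the target line, and falls back to the last diff when nothing matches. A self-contained sketch of that lookup, using plain dicts in place of the python-gitlab objects (the dict shapes are assumptions for illustration):

```python
import logging


def get_relevant_diff(changes, all_diffs, last_diff, relevant_file, relevant_line):
    # Pick the diff version that actually contains the changed line, mirroring
    # the provider method above; plain dicts stand in for python-gitlab objects.
    if not changes or not all_diffs:
        logging.error('No changes or diffs found for the merge request.')
        return None
    for diff in all_diffs:
        for change in changes['changes']:
            if change['new_path'] == relevant_file and relevant_line in change['diff']:
                return diff
    # Fall back to the most recent diff when no version matches.
    logging.debug('No relevant diff found; falling back to last diff.')
    return last_diff
```

The fallback keeps inline commenting working even when the substring match fails, at the cost of possibly using stale SHAs, which is exactly the trade-off the patch comments on ("in order to have exact sha's...").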
@@ -4,6 +4,7 @@ import os
 
 from pr_agent.agent.pr_agent import PRAgent
 from pr_agent.config_loader import get_settings
+from pr_agent.git_providers import get_git_provider
 from pr_agent.tools.pr_reviewer import PRReviewer
 
 
@@ -14,6 +15,8 @@ async def run_action():
     OPENAI_KEY = os.environ.get('OPENAI_KEY')
     OPENAI_ORG = os.environ.get('OPENAI_ORG')
     GITHUB_TOKEN = os.environ.get('GITHUB_TOKEN')
+    get_settings().set("CONFIG.PUBLISH_OUTPUT_PROGRESS", False)
+
 
     # Check if required environment variables are set
     if not GITHUB_EVENT_NAME:
@@ -61,7 +64,9 @@ async def run_action():
         pr_url = event_payload.get("issue", {}).get("pull_request", {}).get("url")
         if pr_url:
             body = comment_body.strip().lower()
-            await PRAgent().handle_request(pr_url, body)
+            comment_id = event_payload.get("comment", {}).get("id")
+            provider = get_git_provider()(pr_url=pr_url)
+            await PRAgent().handle_request(pr_url, body, notify=lambda: provider.add_eyes_reaction(comment_id))
 
 
 if __name__ == '__main__':
@@ -11,6 +11,7 @@ from starlette_context.middleware import RawContextMiddleware
 
 from pr_agent.agent.pr_agent import PRAgent
 from pr_agent.config_loader import get_settings, global_settings
+from pr_agent.git_providers import get_git_provider
 from pr_agent.servers.utils import verify_signature
 
 logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
@@ -80,7 +81,10 @@ async def handle_request(body: Dict[str, Any]):
             return {}
         pull_request = body["issue"]["pull_request"]
         api_url = pull_request.get("url")
-        await agent.handle_request(api_url, comment_body)
+        comment_id = body.get("comment", {}).get("id")
+        provider = get_git_provider()(pr_url=api_url)
+        await agent.handle_request(api_url, comment_body, notify=lambda: provider.add_eyes_reaction(comment_id))
+
 
     elif action == "opened" or 'reopened' in action:
         pull_request = body.get("pull_request")
@@ -102,6 +106,7 @@ async def root():
 def start():
     # Override the deployment type to app
     get_settings().set("GITHUB.DEPLOYMENT_TYPE", "app")
+    get_settings().set("CONFIG.PUBLISH_OUTPUT_PROGRESS", False)
     middleware = [Middleware(RawContextMiddleware)]
     app = FastAPI(middleware=middleware)
     app.include_router(router)
@@ -36,6 +36,7 @@ async def polling_loop():
     git_provider = get_git_provider()()
     user_id = git_provider.get_user_id()
     agent = PRAgent()
+    get_settings().set("CONFIG.PUBLISH_OUTPUT_PROGRESS", False)
 
     try:
         deployment_type = get_settings().github.deployment_type
@@ -98,8 +99,10 @@ async def polling_loop():
                             if user_tag not in comment_body:
                                 continue
                             rest_of_comment = comment_body.split(user_tag)[1].strip()
+                            comment_id = comment['id']
-                            success = await agent.handle_request(pr_url, rest_of_comment)
+                            git_provider.set_pr(pr_url)
+                            success = await agent.handle_request(pr_url, rest_of_comment,
+                                                                 notify=lambda: git_provider.add_eyes_reaction(comment_id))  # noqa E501
                             if not success:
                                 git_provider.set_pr(pr_url)
                                 git_provider.publish_comment("### How to use PR-Agent\n" +
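All three server entry points above (Action runner, GitHub app, polling loop) now pass a `notify` callback into `handle_request`, so an "eyes" reaction lands on the triggering comment before the slow review work starts. A minimal sketch of the callback pattern, with stand-in names (`handle_request` here is a toy, not the PRAgent method):

```python
import asyncio


async def handle_request(pr_url, command, notify=None):
    # Sketch of the notify-callback pattern: the caller passes a lambda that
    # posts an 'eyes' reaction before the command is actually processed.
    if notify:
        notify()  # e.g. lambda: provider.add_eyes_reaction(comment_id)
    return f"handled {command} for {pr_url}"


reactions = []
result = asyncio.run(handle_request("https://example.com/pr/1", "/review",
                                    notify=lambda: reactions.append("eyes")))
```

Passing a closure instead of the provider itself keeps `PRAgent.handle_request` decoupled from any particular git provider's reaction API.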
@@ -2,8 +2,9 @@ commands_text = "> **/review [-i]**: Request a review of your Pull Request. For
                 "considers changes since the last review, include the '-i' option.\n" \
                 "> **/describe**: Modify the PR title and description based on the contents of the PR.\n" \
                 "> **/improve**: Suggest improvements to the code in the PR. \n" \
-                "> **/ask \\<QUESTION\\>**: Pose a question about the PR.\n\n" \
-                ">To edit any configuration parameter from 'configuration.toml', add --config_path=new_value\n" \
+                "> **/ask \\<QUESTION\\>**: Pose a question about the PR.\n" \
+                "> **/update_changelog**: Update the changelog based on the PR's contents.\n\n" \
+                ">To edit any configuration parameter from **configuration.toml**, add --config_path=new_value\n" \
                 ">For example: /review --pr_reviewer.extra_instructions=\"focus on the file: ...\" \n" \
                 ">To list the possible configuration parameters, use the **/config** command.\n" \
@@ -14,6 +14,7 @@ key = ""  # Acquire through https://platform.openai.com
 #api_version = '2023-05-15' # Check Azure documentation for the current API version
 #api_base = ""  # The base URL for your Azure OpenAI resource. e.g. "https://<your resource name>.openai.azure.com"
 #deployment_id = ""  # The deployment name you chose when you deployed the engine
+#fallback_deployments = []  # For each fallback model specified in configuration.toml in the [config] section, specify the appropriate deployment_id
 
 [anthropic]
 key = ""  # Optional, uncomment if you want to use Anthropic. Acquire through https://www.anthropic.com/
@@ -8,6 +8,8 @@ verbosity_level=0 # 0,1,2
 use_extra_bad_extensions=false
 use_repo_settings_file=true
 ai_timeout=180
+max_description_tokens = 500
+max_commits_tokens = 500
 
 [pr_reviewer] # /review #
 require_focused_review=true
@@ -2,38 +2,67 @@
 system="""You are CodiumAI-PR-Reviewer, a language model designed to review git pull requests.
 Your task is to provide full description of the PR content.
 - Make sure not to focus the new PR code (the '+' lines).
+- Notice that the 'Previous title', 'Previous description' and 'Commit messages' sections may be partial, simplistic, non-informative or not up-to-date. Hence, compare them to the PR diff code, and use them only as a reference.
+- If needed, each YAML output should be in block scalar format ('|-')
 {%- if extra_instructions %}
 
 Extra instructions from the user:
 {{ extra_instructions }}
 {% endif %}
 
-You must use the following JSON schema to format your answer:
-```json
-{
-    "PR Title": {
-        "type": "string",
-        "description": "an informative title for the PR, describing its main theme"
-    },
-    "PR Type": {
-        "type": "string",
-        "description": possible values are: ["Bug fix", "Tests", "Bug fix with tests", "Refactoring", "Enhancement", "Documentation", "Other"]
-    },
-    "PR Description": {
-        "type": "string",
-        "description": "an informative and concise description of the PR"
-    },
-    "PR Main Files Walkthrough": {
-        "type": "string",
-        "description": "a walkthrough of the PR changes. Review main files, in bullet points, and shortly describe the changes in each file (up to 10 most important files). Format: -`filename`: description of changes\n..."
-    }
-}
+You must use the following YAML schema to format your answer:
+```yaml
+PR Title:
+    type: string
+    description: an informative title for the PR, describing its main theme
+PR Type:
+    type: array
+    items:
+        type: string
+        enum:
+            - Bug fix
+            - Tests
+            - Bug fix with tests
+            - Refactoring
+            - Enhancement
+            - Documentation
+            - Other
+PR Description:
+    type: string
+    description: an informative and concise description of the PR
+PR Main Files Walkthrough:
+    type: array
+    maxItems: 10
+    description: |-
+        a walkthrough of the PR changes. Review main files, and shortly describe the changes in each file (up to 10 most important files).
+    items:
+        filename:
+            type: string
+            description: the relevant file full path
+        changes in file:
+            type: string
+            description: minimal and concise description of the changes in the relevant file
 ```
 
-Don't repeat the prompt in the answer, and avoid outputting the 'type' and 'description' fields.
+Example output:
+```yaml
+PR Title: |-
+    ...
+PR Type:
+    - Bug fix
+PR Description: |-
+    ...
+PR Main Files Walkthrough:
+    - ...
+    - ...
+```
+
+Make sure to output a valid YAML. Don't repeat the prompt in the answer, and avoid outputting the 'type' and 'description' fields.
 """
 
 user="""PR Info:
+Previous title: '{{title}}'
+Previous description: '{{description}}'
 Branch: '{{branch}}'
 {%- if language %}
 
@@ -52,6 +81,6 @@ The PR Git Diff:
 ```
 Note that lines in the diff body are prefixed with a symbol that represents the type of change: '-' for deletions, '+' for additions, and ' ' (a space) for unchanged lines.
 
-Response (should be a valid JSON, and nothing else):
-```json
+Response (should be a valid YAML, and nothing else):
+```yaml
 """
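The hunk above swaps the description prompt's answer format from JSON to YAML and asks the model for block scalars (`'|-'`). The point of the block scalar is that free-form model text cannot break the structure: colons, quotes, or `#` inside the value stay literal. A small demonstration, assuming PyYAML is available:

```python
import yaml  # PyYAML, assumed installed

# Block scalars ('|-') keep free-form model text from breaking the YAML
# structure: colons, quotes, or '#' inside the value stay literal.
raw = """\
PR Title: |-
  Fix crash when the label list is empty: guard get_labels()
PR Type:
  - Bug fix
PR Description: |-
  Return [] instead of raising, and log the error.
"""
data = yaml.safe_load(raw)
```

Written as a plain scalar (`PR Title: Fix crash when ...: guard ...`), the second colon would make the line invalid YAML; inside `|-` it parses as an ordinary string with the trailing newline chomped off.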
@@ -7,6 +7,7 @@ Your task is to provide constructive and concise feedback for the PR, and also p
 - Suggestions should focus on improving the new added code lines.
 - Make sure not to provide suggestions repeating modifications already implemented in the new PR code (the '+' lines).
 {%- endif %}
+- If needed, each YAML output should be in block scalar format ('|-')
 
 {%- if extra_instructions %}
 
@@ -14,117 +15,121 @@ Extra instructions from the user:
 {{ extra_instructions }}
 {% endif %}
 
-You must use the following JSON schema to format your answer:
-```json
-{
-    "PR Analysis": {
-        "Main theme": {
-            "type": "string",
-            "description": "a short explanation of the PR"
-        },
-        "Type of PR": {
-            "type": "string",
-            "enum": ["Bug fix", "Tests", "Refactoring", "Enhancement", "Documentation", "Other"]
-        },
+You must use the following YAML schema to format your answer:
+```yaml
+PR Analysis:
+  Main theme:
+    type: string
+    description: a short explanation of the PR
+  Type of PR:
+    type: string
+    enum:
+      - Bug fix
+      - Tests
+      - Refactoring
+      - Enhancement
+      - Documentation
+      - Other
 {%- if require_score %}
-        "Score": {
-            "type": "int",
-            "description": "Rate this PR on a scale of 0-100 (inclusive), where 0 means the worst possible PR code, and 100 means PR code of the highest quality, without any bugs or performance issues, that is ready to be merged immediately and run in production at scale."
-        },
+  Score:
+    type: int
+    description: >-
+      Rate this PR on a scale of 0-100 (inclusive), where 0 means the worst
+      possible PR code, and 100 means PR code of the highest quality, without
+      any bugs or performance issues, that is ready to be merged immediately and
+      run in production at scale.
 {%- endif %}
 {%- if require_tests %}
-        "Relevant tests added": {
-            "type": "string",
-            "description": "yes\\no question: does this PR have relevant tests ?"
-        },
+  Relevant tests added:
+    type: string
+    description: yes\\no question: does this PR have relevant tests ?
 {%- endif %}
 {%- if question_str %}
-        "Insights from user's answer": {
-            "type": "string",
-            "description": "shortly summarize the insights you gained from the user's answers to the questions"
-        },
+  Insights from user's answer:
+    type: string
+    description: >-
+      shortly summarize the insights you gained from the user's answers to the questions
 {%- endif %}
 {%- if require_focused %}
-        "Focused PR": {
-            "type": "string",
-            "description": "Is this a focused PR, in the sense that all the PR code diff changes are united under a single focused theme ? If the theme is too broad, or the PR code diff changes are too scattered, then the PR is not focused. Explain your answer shortly."
-        }
-    },
+  Focused PR:
+    type: string
+    description: >-
+      Is this a focused PR, in the sense that all the PR code diff changes are
+      united under a single focused theme ? If the theme is too broad, or the PR
+      code diff changes are too scattered, then the PR is not focused. Explain
+      your answer shortly.
 {%- endif %}
-    "PR Feedback": {
-        "General suggestions": {
-            "type": "string",
-            "description": "General suggestions and feedback for the contributors and maintainers of this PR. May include important suggestions for the overall structure, primary purpose, best practices, critical bugs, and other aspects of the PR. Don't address PR title and description, or lack of tests. Explain your suggestions."
-        },
+PR Feedback:
+  General suggestions:
+    type: string
+    description: >-
+      General suggestions and feedback for the contributors and maintainers of
+      this PR. May include important suggestions for the overall structure,
+      primary purpose, best practices, critical bugs, and other aspects of the
+      PR. Don't address PR title and description, or lack of tests. Explain your
+      suggestions.
 {%- if num_code_suggestions > 0 %}
-        "Code feedback": {
-            "type": "array",
-            "maxItems": {{ num_code_suggestions }},
-            "uniqueItems": true,
-            "items": {
-                "relevant file": {
-                    "type": "string",
-                    "description": "the relevant file full path"
-                },
-                "suggestion": {
-                    "type": "string",
-                    "description": "a concrete suggestion for meaningfully improving the new PR code. Also describe how, specifically, the suggestion can be applied to new PR code. Add tags with importance measure that matches each suggestion ('important' or 'medium'). Do not make suggestions for updating or adding docstrings, renaming PR title and description, or linter like.
-                },
-                "relevant line": {
-                    "type": "string",
-                    "description": "a single code line taken from the relevant file, to which the suggestion applies. The line should be a '+' line. Make sure to output the line exactly as it appears in the relevant file"
-                }
-            }
-        },
+  Code feedback:
+    type: array
+    maxItems: {{ num_code_suggestions }}
+    uniqueItems: true
+    items:
+      relevant file:
+        type: string
+        description: the relevant file full path
+      suggestion:
+        type: string
+        description: |
+          a concrete suggestion for meaningfully improving the new PR code. Also
+          describe how, specifically, the suggestion can be applied to new PR
+          code. Add tags with importance measure that matches each suggestion
+          ('important' or 'medium'). Do not make suggestions for updating or
+          adding docstrings, renaming PR title and description, or linter like.
+      relevant line:
+        type: string
+        description: |
+          a single code line taken from the relevant file, to which the suggestion applies.
+          The line should be a '+' line.
+          Make sure to output the line exactly as it appears in the relevant file
 {%- endif %}
 {%- if require_security %}
-        "Security concerns": {
-            "type": "string",
-            "description": "yes\\no question: does this PR code introduce possible security concerns or issues, like SQL injection, XSS, CSRF, and others ? If answered 'yes', explain your answer shortly"
-        }
+  Security concerns:
+    type: string
+    description: >-
+      yes\\no question: does this PR code introduce possible security concerns or
+      issues, like SQL injection, XSS, CSRF, and others ? If answered 'yes',explain your answer shortly
 {%- endif %}
-    }
-}
 ```
 
 Example output:
-'
-{
-    "PR Analysis":
-    {
-        "Main theme": "xxx",
-        "Type of PR": "Bug fix",
+```yaml
+PR Analysis:
+  Main theme: xxx
+  Type of PR: Bug fix
 {%- if require_score %}
-        "Score": 89,
-{%- endif %}
-{%- if require_tests %}
-        "Relevant tests added": "No",
+  Score: 89
 {%- endif %}
+  Relevant tests added: No
 {%- if require_focused %}
-        "Focused PR": "yes\\no, because ..."
+  Focused PR: no, because ...
 {%- endif %}
-    },
-    "PR Feedback":
-    {
-        "General PR suggestions": "..., `xxx`...",
+PR Feedback:
+  General PR suggestions: ...
 {%- if num_code_suggestions > 0 %}
-        "Code feedback": [
-            {
-                "relevant file": "directory/xxx.py",
-                "suggestion": "xxx [important]",
-                "relevant line": "xxx",
-            },
-            ...
-        ]
+  Code feedback:
+    - relevant file: |-
+        directory/xxx.py
+      suggestion: xxx [important]
+      relevant line: |-
+        xxx
+    ...
 {%- endif %}
 {%- if require_security %}
-        "Security concerns": "No, because ..."
+  Security concerns: No
 {%- endif %}
-}
-'
+```
+
+Make sure to output a valid YAML. Use multi-line block scalar ('|') if needed.
 Don't repeat the prompt in the answer, and avoid outputting the 'type' and 'description' fields.
 """
 
@@ -158,6 +163,6 @@ The PR Git Diff:
 ```
 Note that lines in the diff body are prefixed with a symbol that represents the type of change: '-' for deletions, '+' for additions, and ' ' (a space) for unchanged lines.
 
-Response (should be a valid JSON, and nothing else):
-```json
+Response (should be a valid YAML, and nothing else):
+```yaml
 """
@@ -93,6 +93,10 @@ class PRCodeSuggestions:
 
     def push_inline_code_suggestions(self, data):
        code_suggestions = []
+
+        if not data['Code suggestions']:
+            return self.git_provider.publish_comment('No suggestions found to improve this PR.')
+
         for d in data['Code suggestions']:
             try:
                 if get_settings().config.verbosity_level >= 2:
@ -8,6 +8,7 @@ from jinja2 import Environment, StrictUndefined
|
|||||||
from pr_agent.algo.ai_handler import AiHandler
|
from pr_agent.algo.ai_handler import AiHandler
|
||||||
from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models
|
from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models
|
||||||
from pr_agent.algo.token_handler import TokenHandler
|
from pr_agent.algo.token_handler import TokenHandler
|
||||||
|
from pr_agent.algo.utils import load_yaml
|
||||||
from pr_agent.config_loader import get_settings
|
from pr_agent.config_loader import get_settings
|
||||||
from pr_agent.git_providers import get_git_provider
|
from pr_agent.git_providers import get_git_provider
|
||||||
from pr_agent.git_providers.git_provider import get_main_pr_language
|
from pr_agent.git_providers.git_provider import get_main_pr_language
|
||||||
@@ -139,34 +140,45 @@ class PRDescription:
         - title: a string containing the PR title.
         - pr_body: a string containing the PR body in a markdown format.
         - pr_types: a list of strings containing the PR types.
-        - markdown_text: a string containing the AI prediction data in a markdown format.
+        - markdown_text: a string containing the AI prediction data in a markdown format. used for publishing a comment
         """
         # Load the AI prediction data into a dictionary
-        data = json.loads(self.prediction)
+        data = load_yaml(self.prediction.strip())

         # Initialization
-        markdown_text = pr_body = ""
         pr_types = []

         # Iterate over the dictionary items and append the key and value to 'markdown_text' in a markdown format
+        markdown_text = ""
         for key, value in data.items():
             markdown_text += f"## {key}\n\n"
             markdown_text += f"{value}\n\n"

         # If the 'PR Type' key is present in the dictionary, split its value by comma and assign it to 'pr_types'
         if 'PR Type' in data:
-            pr_types = data['PR Type'].split(',')
+            if type(data['PR Type']) == list:
+                pr_types = data['PR Type']
+            elif type(data['PR Type']) == str:
+                pr_types = data['PR Type'].split(',')

         # Assign the value of the 'PR Title' key to 'title' variable and remove it from the dictionary
         title = data.pop('PR Title')

         # Iterate over the remaining dictionary items and append the key and value to 'pr_body' in a markdown format,
         # except for the items containing the word 'walkthrough'
+        pr_body = ""
         for key, value in data.items():
             pr_body += f"## {key}:\n"
             if 'walkthrough' in key.lower():
-                pr_body += f"{value}\n"
+                # for filename, description in value.items():
+                for file in value:
+                    filename = file['filename'].replace("'", "`")
+                    description = file['changes in file']
+                    pr_body += f'`{filename}`: {description}\n'
             else:
+                # if the value is a list, join its items by comma
+                if type(value) == list:
+                    value = ', '.join(v for v in value)
                 pr_body += f"{value}\n\n___\n"

         if get_settings().config.verbosity_level >= 2:

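The 'PR Type' branch above exists because a YAML response may carry the types either as a proper list or as one comma-separated string. Extracted as a standalone sketch (hypothetical helper name; whitespace stripping added for readability, which the diff itself does not do):

```python
def normalize_pr_types(value):
    # The model may emit either a YAML list or a comma-separated string;
    # normalize both shapes to a list of type labels.
    if isinstance(value, list):
        return value
    if isinstance(value, str):
        return [t.strip() for t in value.split(',')]
    return []

print(normalize_pr_types("Bug fix, Enhancement"))  # ['Bug fix', 'Enhancement']
print(normalize_pr_types(["Tests"]))               # ['Tests']
```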
@@ -4,13 +4,15 @@ import logging
 from collections import OrderedDict
 from typing import List, Tuple

+import yaml
 from jinja2 import Environment, StrictUndefined
+from yaml import SafeLoader

 from pr_agent.algo.ai_handler import AiHandler
 from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models, \
-    find_line_number_of_relevant_line_in_file
+    find_line_number_of_relevant_line_in_file, clip_tokens
 from pr_agent.algo.token_handler import TokenHandler
-from pr_agent.algo.utils import convert_to_markdown, try_fix_json
+from pr_agent.algo.utils import convert_to_markdown, try_fix_json, try_fix_yaml, load_yaml
 from pr_agent.config_loader import get_settings
 from pr_agent.git_providers import get_git_provider
 from pr_agent.git_providers.git_provider import IncrementalPR, get_main_pr_language
@@ -160,19 +162,17 @@ class PRReviewer:
         Prepare the PR review by processing the AI prediction and generating a markdown-formatted text that summarizes
         the feedback.
         """
-        review = self.prediction.strip()
-
-        try:
-            data = json.loads(review)
-        except json.decoder.JSONDecodeError:
-            data = try_fix_json(review)
+        data = load_yaml(self.prediction.strip())

         # Move 'Security concerns' key to 'PR Analysis' section for better display
         pr_feedback = data.get('PR Feedback', {})
         security_concerns = pr_feedback.get('Security concerns')
-        if security_concerns:
+        if security_concerns is not None:
             del pr_feedback['Security concerns']
-            data.setdefault('PR Analysis', {})['Security concerns'] = security_concerns
+            if type(security_concerns) == bool and security_concerns == False:
+                data.setdefault('PR Analysis', {})['Security concerns'] = 'No security concerns found'
+            else:
+                data.setdefault('PR Analysis', {})['Security concerns'] = security_concerns

         #
         if 'Code feedback' in pr_feedback:

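The `is not None` change above is subtle: with YAML, `Security concerns: No` parses to boolean `False`, so the old truthiness check (`if security_concerns:`) would silently drop the key instead of relocating it. A standalone sketch of the relocation (hypothetical function name, same logic):

```python
def move_security_concerns(data):
    # Move 'Security concerns' from 'PR Feedback' to 'PR Analysis'.
    # YAML parses "No" as False, so check for None rather than truthiness.
    pr_feedback = data.get('PR Feedback', {})
    security_concerns = pr_feedback.get('Security concerns')
    if security_concerns is not None:
        del pr_feedback['Security concerns']
        if security_concerns is False:
            data.setdefault('PR Analysis', {})['Security concerns'] = 'No security concerns found'
        else:
            data.setdefault('PR Analysis', {})['Security concerns'] = security_concerns
    return data

print(move_security_concerns({'PR Feedback': {'Security concerns': False}}))
# {'PR Feedback': {}, 'PR Analysis': {'Security concerns': 'No security concerns found'}}
```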
@@ -183,6 +183,12 @@ class PRReviewer:
                 del pr_feedback['Code feedback']
             else:
                 for suggestion in code_feedback:
+                    if ('relevant file' in suggestion) and (not suggestion['relevant file'].startswith('``')):
+                        suggestion['relevant file'] = f"``{suggestion['relevant file']}``"
+
+                    if 'relevant line' not in suggestion:
+                        suggestion['relevant line'] = ''
+
                     relevant_line_str = suggestion['relevant line'].split('\n')[0]

                     # removing '+'

@@ -219,7 +225,7 @@ class PRReviewer:
             logging.info(f"Markdown response:\n{markdown_text}")

         if markdown_text == None or len(markdown_text) == 0:
-            markdown_text = review
+            markdown_text = ""

         return markdown_text

@@ -230,11 +236,13 @@ class PRReviewer:
         if get_settings().pr_reviewer.num_code_suggestions == 0:
             return

-        review = self.prediction.strip()
+        review_text = self.prediction.strip()
+        review_text = review_text.removeprefix('```yaml').rstrip('`')
         try:
-            data = json.loads(review)
-        except json.decoder.JSONDecodeError:
-            data = try_fix_json(review)
+            data = yaml.load(review_text, Loader=SafeLoader)
+        except Exception as e:
+            logging.error(f"Failed to parse AI prediction: {e}")
+            data = try_fix_yaml(review_text)

         comments: List[str] = []
         for suggestion in data.get('PR Feedback', {}).get('Code feedback', []):

@@ -42,7 +42,8 @@ dependencies = [
     "atlassian-python-api==3.39.0",
     "GitPython~=3.1.32",
     "starlette-context==0.3.6",
-    "litellm~=0.1.351"
+    "litellm~=0.1.351",
+    "PyYAML==6.0"
 ]

 [project.urls]

|
@ -12,3 +12,6 @@ aiohttp~=3.8.4
|
|||||||
atlassian-python-api==3.39.0
|
atlassian-python-api==3.39.0
|
||||||
GitPython~=3.1.32
|
GitPython~=3.1.32
|
||||||
litellm~=0.1.351
|
litellm~=0.1.351
|
||||||
|
PyYAML==6.0
|
||||||
|
starlette-context==0.3.6
|
||||||
|
litellm~=0.1.351
|
tests/unittest/test_bitbucket_provider.py (new file, 10 lines)
@@ -0,0 +1,10 @@
+from pr_agent.git_providers.bitbucket_provider import BitbucketProvider
+
+
+class TestBitbucketProvider:
+    def test_parse_pr_url(self):
+        url = "https://bitbucket.org/WORKSPACE_XYZ/MY_TEST_REPO/pull-requests/321"
+        workspace_slug, repo_slug, pr_number = BitbucketProvider._parse_pr_url(url)
+        assert workspace_slug == "WORKSPACE_XYZ"
+        assert repo_slug == "MY_TEST_REPO"
+        assert pr_number == 321
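The test above pins down the URL contract: workspace slug, repo slug, and an integer PR number from a `.../pull-requests/<n>` path. A parser satisfying it could look like this hypothetical standard-library sketch (the provider's actual `_parse_pr_url` may differ):

```python
from urllib.parse import urlparse

def parse_pr_url(pr_url: str):
    # Expects a path of the form /{workspace}/{repo}/pull-requests/{number}
    path_parts = urlparse(pr_url).path.strip('/').split('/')
    if len(path_parts) != 4 or path_parts[2] != 'pull-requests':
        raise ValueError(f"Unrecognized Bitbucket PR URL: {pr_url}")
    workspace_slug, repo_slug = path_parts[0], path_parts[1]
    try:
        pr_number = int(path_parts[3])
    except ValueError as e:
        raise ValueError("Unable to convert PR number to integer") from e
    return workspace_slug, repo_slug, pr_number

print(parse_pr_url("https://bitbucket.org/WORKSPACE_XYZ/MY_TEST_REPO/pull-requests/321"))
# ('WORKSPACE_XYZ', 'MY_TEST_REPO', 321)
```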
tests/unittest/test_load_yaml.py (new file, 32 lines)
@@ -0,0 +1,32 @@
+
+# Generated by CodiumAI
+
+import pytest
+from pr_agent.algo.utils import load_yaml
+
+
+class TestLoadYaml:
+    # Tests that load_yaml loads a valid YAML string
+    def test_load_valid_yaml(self):
+        yaml_str = 'name: John Smith\nage: 35'
+        expected_output = {'name': 'John Smith', 'age': 35}
+        assert load_yaml(yaml_str) == expected_output
+
+    def test_load_complicated_yaml(self):
+        yaml_str = \
+'''\
+PR Analysis:
+  Main theme: Enhancing the `/describe` command prompt by adding title and description
+  Type of PR: Enhancement
+  Relevant tests added: No
+  Focused PR: Yes, the PR is focused on enhancing the `/describe` command prompt.
+
+PR Feedback:
+  General suggestions: The PR seems to be well-structured and focused on a specific enhancement. However, it would be beneficial to add tests to ensure the new feature works as expected.
+  Code feedback:
+    - relevant file: pr_agent/settings/pr_description_prompts.toml
+      suggestion: Consider using a more descriptive variable name than 'user' for the command prompt. A more descriptive name would make the code more readable and maintainable. [medium]
+      relevant line: 'user="""PR Info:'
+  Security concerns: No'''
+        expected_output = {'PR Analysis': {'Main theme': 'Enhancing the `/describe` command prompt by adding title and description', 'Type of PR': 'Enhancement', 'Relevant tests added': False, 'Focused PR': 'Yes, the PR is focused on enhancing the `/describe` command prompt.'}, 'PR Feedback': {'General suggestions': 'The PR seems to be well-structured and focused on a specific enhancement. However, it would be beneficial to add tests to ensure the new feature works as expected.', 'Code feedback': [{'relevant file': 'pr_agent/settings/pr_description_prompts.toml', 'suggestion': "Consider using a more descriptive variable name than 'user' for the command prompt. A more descriptive name would make the code more readable and maintainable. [medium]", 'relevant line': 'user="""PR Info:'}], 'Security concerns': False}}
+        assert load_yaml(yaml_str) == expected_output
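The tests above imply that `load_yaml` wraps `yaml.safe_load` with fence stripping and a fix-up fallback. A hedged sketch of what such a helper might look like, consistent with the `try_fix_yaml` import seen in the reviewer diff (the real implementation lives in `pr_agent/algo/utils.py` and may differ, e.g. in its fix-up heuristic):

```python
import logging
import yaml

def try_fix_yaml(response_text: str):
    # Heuristic recovery: drop trailing lines one at a time until the
    # remainder parses, assuming the breakage is at the end of the output.
    lines = response_text.rstrip().splitlines()
    for cut in range(len(lines)):
        try:
            return yaml.safe_load('\n'.join(lines[: len(lines) - cut]))
        except Exception:
            continue
    return None

def load_yaml(response_text: str):
    # Strip a leading ```yaml fence and trailing backticks, then safe-load.
    response_text = response_text.strip().removeprefix('```yaml').rstrip('`')
    try:
        return yaml.safe_load(response_text)
    except Exception as e:
        logging.error(f"Failed to parse AI prediction: {e}")
        return try_fix_yaml(response_text)

print(load_yaml('name: John Smith\nage: 35'))  # {'name': 'John Smith', 'age': 35}
```

Note how `Relevant tests added: No` and `Security concerns: No` parse to boolean `False` in the test's expected output; that YAML quirk is exactly what the reviewer's `is not None` security-concerns check guards against.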