Compare commits


37 Commits

Author SHA1 Message Date
e6bea76eee Typo 2023-10-26 17:07:16 +03:00
414f2b6767 Fix incremental review if there are no new commits (would have performed a full review instead) 2023-10-26 16:49:55 +03:00
6541575a0e Refactor to use pull_request synchronize event 2023-10-26 16:49:54 +03:00
02570ea797 Remove previous review comment on push event 2023-10-26 16:46:54 +03:00
65bb70a1dd Added support for automatic review on push event
The new feature can be enabled via the new configuration `github_app.handle_push_event`. To avoid any unwanted side-effects, the current default of this configuration is set to `false`.

The high level flow (assuming the configuration is enabled):
1. receive push event from GitHub
2. extract branch and commits from event
3. find the PR URL for the branch (currently does not support PRs from forks)
4. perform configured commands (e.g. `/describe`, `/review -i`)

The push event flow is guarded by a backlog queue so that multiple push events on the same branch won't trigger duplicate runs of the PR-Agent commands (a sketch of this deduplication logic is shown below this commit entry).
Example timeline:
1. push 1 - start handling event
2. push 2 - waiting to be handled while push 1 event is still running
3. push 3 - event is dropped, since handling it would be identical to handling push 2, so it is redundant
4. push 1 finished being handled
5. push 2 awakens from wait and continues handling (potentially reviewing the commits of both push 2 and push 3)

All of these options are configurable and can be enabled or disabled as desired.

Additional minor changes in this PR:
1. Created `DefaultDictWithTimeout` utility class to avoid boilerplate code when managing caches for outdated triggers.
2. Guard against running an incremental review when there are no new commits.
3. Minor styling changes for the incremental review text.
2023-10-25 11:15:23 +03:00
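
To illustrate the backlog described in the commit message above, here is a minimal sketch of a per-branch deduplicating queue. The names and structure are illustrative only, not the actual PR-Agent implementation:

```python
import asyncio
from collections import defaultdict

# One lock and one "pending" flag per branch: while an event runs, at most one
# newer event waits behind it, and anything arriving after that is dropped.
_branch_locks = defaultdict(asyncio.Lock)
_pending_branches = set()

async def handle_push(branch: str, run_commands) -> None:
    if branch in _pending_branches:
        # A newer event is already queued behind the running one; handling this
        # one too would do identical work (push 3 in the timeline above).
        return
    _pending_branches.add(branch)
    async with _branch_locks[branch]:  # push 2 waits here while push 1 runs
        _pending_branches.discard(branch)
        await run_commands(branch)  # e.g. /describe, /review -i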
b6cabda586 quick fix 2023-10-19 17:24:37 +03:00
abbce60f18 Merge remote-tracking branch 'origin/main' 2023-10-19 17:10:30 +03:00
5daaaf2c1d quick fix 2023-10-19 17:10:21 +03:00
e8f207691e Merge pull request #391 from Codium-ai/tr/readme
Update and Enhance DESCRIBE.md Documentation
2023-10-19 02:03:50 -07:00
b0dce4ceae describe 2023-10-19 12:02:12 +03:00
fc494296d7 Merge pull request #387 from Codium-ai/ok/json_logging_in_bitbucket
Enhancing Logging in Bitbucket, GitLab, and Google Cloud Storage Secret Provider
2023-10-19 11:59:26 +03:00
67b4069540 describe 2023-10-19 11:45:41 +03:00
e6defcc846 describe 2023-10-19 11:43:18 +03:00
096fcbbc17 describe 2023-10-19 11:40:01 +03:00
eb7add1c77 describe 2023-10-19 11:38:21 +03:00
1b6fb3ea53 Merge pull request #385 from Codium-ai/hl/fix_add_docs_in_scripts
Add Blacklist for Non-Editable File Extensions in Documentation
2023-10-19 11:21:36 +03:00
c57b70f1d4 Merge pull request #390 from Codium-ai/tr/readme
Enhancing Documentation and Updating Configuration for PR Descriptions
2023-10-19 01:04:24 -07:00
a2c3db463a use_bullet_points 2023-10-19 10:45:42 +03:00
193da1c356 update readme 2023-10-19 09:22:26 +03:00
5bc26880b3 update readme 2023-10-19 09:20:36 +03:00
21a1cc970e - update readme
- minor prompts change
2023-10-19 09:16:20 +03:00
954727ad67 Merge pull request #386 from Codium-ai/ok/fix_bitbucket_pipeline
Refactor Bitbucket Pipeline Integration and Update Documentation
2023-10-18 16:45:26 +03:00
1314898cbf Enhance logging in bitbucket_app, gitlab_webhook, and google_cloud_storage_secret_provider with JSON format and additional context 2023-10-18 16:44:03 +03:00
ff04d459d7 Update Bitbucket Pipeline instructions in INSTALL.md, remove redundant functionality 2023-10-18 15:46:43 +03:00
88ca501c0c Merge pull request #377 from zmeir/zmeir-review_incremental_detect_header
Get previous incremental review
2023-10-18 00:30:42 +03:00
fe284a8f91 Merge pull request #382 from Codium-ai/tr/similar_issue_fix
Enhancements and Error Handling in Similar Issue Tool
2023-10-17 09:49:35 -07:00
d41fe0cf79 comment 2023-10-17 19:45:04 +03:00
3673924fe9 Add docs editable blacklist of file extensions like sql, yaml... 2023-10-17 18:50:39 +03:00
d5c098de73 another protection 2023-10-17 10:21:05 +03:00
9f5c0daa8e protection 2023-10-17 09:43:48 +03:00
bce2262d4e Merge pull request #381 from moccajoghurt/feature-allow-custom-urls
Support Custom Domain URLs for Azure DevOps Integration
2023-10-16 22:38:27 -07:00
e6f1e0520a remove azure.com url restriction 2023-10-16 20:38:14 +02:00
d8de89ae33 Get previous incremental review
When getting the last commit in `/review -i`, also consider the last __incremental__ review, not just the last __full__ review

Full disclosure, I'm not really sure the `/review -i` feature works very well - I might be wrong, but it seemed like the actual review in fact addressed all the changes in the PR, and not just the ones from the last review (even though it adds a link to the commit of the last review).
I think the commit list gathered in `/review -i` doesn't propagate to the actual list the reviewer uses. Again, I might be wrong; I only took a brief glance at it.
2023-10-16 16:37:10 +03:00
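
A condensed sketch of the new lookup, mirroring the `get_previous_review` change in the GitHub provider diff further down (names simplified; not the verbatim implementation):

```python
# Sketch: scan the PR's issue comments newest-first and accept either a full
# review header or an incremental review header as the previous-review anchor.
def find_previous_review(comments, include_incremental: bool = True):
    prefixes = ["## PR Analysis"]  # header of a full review comment
    if include_incremental:
        prefixes.append("## Incremental PR Review")
    for comment in reversed(comments):
        if any(comment.body.startswith(p) for p in prefixes):
            return comment
    return None
```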
428c38e3d9 Merge pull request #376 from Codium-ai/feature/better_logger
Refactor logging system to use custom logger across the codebase
2023-10-16 16:32:27 +03:00
7ffdf8de37 Remove verbosity level check in handle_patch_deletions test 2023-10-16 16:25:57 +03:00
83e670c5df Enhance logging context in github_app server with server type 2023-10-16 16:13:09 +03:00
c324d88be3 Refactor logging system to use custom logger across the codebase 2023-10-16 14:56:00 +03:00
48 changed files with 765 additions and 508 deletions

View File

@@ -1,18 +0,0 @@
-FROM python:3.10 as base
-ENV OPENAI_API_KEY=${OPENAI_API_KEY} \
-    BITBUCKET_BEARER_TOKEN=${BITBUCKET_BEARER_TOKEN} \
-    BITBUCKET_PR_ID=${BITBUCKET_PR_ID} \
-    BITBUCKET_REPO_SLUG=${BITBUCKET_REPO_SLUG} \
-    BITBUCKET_WORKSPACE=${BITBUCKET_WORKSPACE}
-WORKDIR /app
-ADD pyproject.toml .
-ADD requirements.txt .
-RUN pip install . && rm pyproject.toml requirements.txt
-ENV PYTHONPATH=/app
-ADD pr_agent pr_agent
-ADD bitbucket_pipeline/entrypoint.sh /
-RUN chmod +x /entrypoint.sh
-ENTRYPOINT ["/entrypoint.sh"]

View File

@@ -375,59 +375,28 @@ In the "Trigger" section, check the comments and merge request events
 ### Method 9: Run as a Bitbucket Pipeline
-You can use our pre-build Bitbucket-Pipeline docker image to run as Bitbucket-Pipeline.
+You can use the Bitbucket Pipeline system to run PR-Agent on every pull request open or update.
 1. Add the following file in your repository bitbucket_pipelines.yml
 ```yaml
 pipelines:
   pull-requests:
     '**':
       - step:
-          name: PR Agent Pipeline
-          caches:
-            - pip
-          image: python:3.8
+          name: PR Agent Review
+          image: python:3.10
           services:
             - docker
           script:
-            - git clone https://github.com/Codium-ai/pr-agent.git
-            - cd pr-agent
-            - docker build -t bitbucket_runner:latest -f Dockerfile.bitbucket_pipeline .
-            - docker run -e OPENAI_API_KEY=$OPENAI_API_KEY -e BITBUCKET_BEARER_TOKEN=$BITBUCKET_BEARER_TOKEN -e BITBUCKET_PR_ID=$BITBUCKET_PR_ID -e BITBUCKET_REPO_SLUG=$BITBUCKET_REPO_SLUG -e BITBUCKET_WORKSPACE=$BITBUCKET_WORKSPACE bitbucket_runner:latest
+            - docker run -e CONFIG.GIT_PROVIDER=bitbucket -e OPENAI.KEY=$OPENAI_API_KEY -e BITBUCKET.BEARER_TOKEN=$BITBUCKET_BEARER_TOKEN codiumai/pr-agent:latest --pr_url=https://bitbucket.org/$BITBUCKET_WORKSPACE/$BITBUCKET_REPO_SLUG/pull-requests/$BITBUCKET_PR_ID review
 ```
-2. Add the following secret to your repository under Repository settings > Pipelines > Repository variables.
+2. Add the following secure variables to your repository under Repository settings > Pipelines > Repository variables.
 OPENAI_API_KEY: <your key>
 BITBUCKET_BEARER_TOKEN: <your token>
-3. To get BITBUCKET_BEARER_TOKEN follow these steps
-So here is my step by step tutorial
-i) Insert your workspace name instead of {workspace_name} and go to the following link in order to create an OAuth consumer.
-https://bitbucket.org/{workspace_name}/workspace/settings/api
-set callback URL to http://localhost:8976 (doesn't need to be a real server there)
-select permissions: repository -> read
-ii) use consumer's Key as a {client_id} and open the following URL in the browser
-https://bitbucket.org/site/oauth2/authorize?client_id={client_id}&response_type=code
-iii)
-after you press "Grant access" in the browser it will redirect you to
-http://localhost:8976?code=<CODE>
-iv) use the code from the previous step and consumer's Key as a {client_id}, and consumer's Secret as {client_secret}
-curl -X POST -u "{client_id}:{client_secret}" \
-https://bitbucket.org/site/oauth2/access_token \
--d grant_type=authorization_code \
--d code={code} \
-After completing this steps, you just to place this access token in the repository varibles.
+You can get a Bitbucket token for your repository by following Repository Settings -> Security -> Access Tokens
 =======

View File

@@ -1,2 +0,0 @@
-#!/bin/bash
-python /app/pr_agent/servers/bitbucket_pipeline_runner.py

View File

@@ -27,18 +27,14 @@ Under the section 'pr_description', the [configuration file](./../pr_agent/setti
 - `extra_instructions`: Optional extra instructions to the tool. For example: "focus on the changes in the file X. Ignore change in ...".
-#### Markers template
+### Markers template
 markers enable to easily integrate user's content and auto-generated content, with a template-like mechanism.
-- `use_description_markers`: if set to true, the tool will use markers template. It replaces every marker of the form `pr_agent:marker_name` with the relevant content. Default is false.
 For example, if the PR original description was:
 ```
 User content...
-## PR Type:
-pr_agent:pr_type
 ## PR Description:
 pr_agent:summary
@@ -46,6 +42,21 @@ pr_agent:summary
 ## PR Walkthrough:
 pr_agent:walkthrough
 ```
-The marker `pr_agent:pr_type` will be replaced with the PR type, `pr_agent:summary` will be replaced with the PR summary, and `pr_agent:walkthrough` will be replaced with the PR walkthrough.
+The marker `pr_agent:summary` will be replaced with the PR summary, and `pr_agent:walkthrough` will be replaced with the PR walkthrough.
+##### Example:
+```
+env:
+  pr_description.use_description_markers: 'true'
+```
+<kbd><img src=./../pics/describe_markers_before.png width="768"></kbd>
+==>
+<kbd><img src=./../pics/describe_markers_after.png width="768"></kbd>
+##### Configuration params:
+- `use_description_markers`: if set to true, the tool will use markers template. It replaces every marker of the form `pr_agent:marker_name` with the relevant content. Default is false.
 - `include_generated_by_header`: if set to true, the tool will add a dedicated header: 'Generated by PR Agent at ...' to any automatic content. Default is true.

View File

@@ -31,4 +31,15 @@ Under the section 'pr_code_suggestions', the [configuration file](./../pr_agent/
 - `num_code_suggestions_per_chunk`: number of code suggestions provided by the 'improve' tool, per chunk. Default is 8.
 - `rank_extended_suggestions`: if set to true, the tool will rank the suggestions, based on importance. Default is true.
 - `max_number_of_calls`: maximum number of chunks. Default is 5.
 - `final_clip_factor`: factor to remove suggestions with low confidence. Default is 0.9.
+#### A note on code suggestions quality
+- With current level of AI for code (GPT-4), mistakes can happen. Not all the suggestions will be perfect, and a user should not accept all of them automatically.
+- Suggestions are not meant to be [simplistic](./../pr_agent/settings/pr_code_suggestions_prompts.toml#L34). Instead, they aim to give deep feedback and raise questions, ideas and thoughts to the user, who can then use his judgment, experience, and understanding of the code base.
+- Recommended to use the 'extra_instructions' field to guide the model to suggestions that are more relevant to the specific needs of the project.
+- Best quality will be obtained by using 'improve --extended' mode.

View File

@@ -43,4 +43,15 @@ The tool will first ask the author questions about the PR, and will guide the re
 <kbd><img src=./../pics/reflection_questions.png width="768"></kbd>
 <kbd><img src=./../pics/reflection_answers.png width="768"></kbd>
 <kbd><img src=./../pics/reflection_insights.png width="768"></kbd>
+#### A note on code suggestions quality
+- With current level of AI for code (GPT-4), mistakes can happen. Not all the suggestions will be perfect, and a user should not accept all of them automatically.
+- Suggestions are not meant to be [simplistic](./../pr_agent/settings/pr_reviewer_prompts.toml#L29). Instead, they aim to give deep feedback and raise questions, ideas and thoughts to the user, who can then use his judgment, experience, and understanding of the code base.
+- Recommended to use the 'extra_instructions' field to guide the model to suggestions that are more relevant to the specific needs of the project.
+- Unlike the 'review' feature, which does a lot of things, the ['improve --extended'](./IMPROVE.md) feature is dedicated only to suggestions, and usually gives better results.

Binary file not shown (new image, 224 KiB).

Binary file not shown (new image, 30 KiB).

View File

@@ -1,20 +1,17 @@
-import logging
-import os
 import shlex
-import tempfile

 from pr_agent.algo.utils import update_settings_from_args
 from pr_agent.config_loader import get_settings
-from pr_agent.git_providers import get_git_provider
+from pr_agent.git_providers.utils import apply_repo_settings
 from pr_agent.tools.pr_add_docs import PRAddDocs
 from pr_agent.tools.pr_code_suggestions import PRCodeSuggestions
+from pr_agent.tools.pr_config import PRConfig
 from pr_agent.tools.pr_description import PRDescription
 from pr_agent.tools.pr_information_from_user import PRInformationFromUser
-from pr_agent.tools.pr_similar_issue import PRSimilarIssue
 from pr_agent.tools.pr_questions import PRQuestions
 from pr_agent.tools.pr_reviewer import PRReviewer
+from pr_agent.tools.pr_similar_issue import PRSimilarIssue
 from pr_agent.tools.pr_update_changelog import PRUpdateChangelog
-from pr_agent.tools.pr_config import PRConfig

 command2class = {
     "auto_review": PRReviewer,
@@ -44,22 +41,7 @@ class PRAgent:
     async def handle_request(self, pr_url, request, notify=None) -> bool:
         # First, apply repo specific settings if exists
-        if get_settings().config.use_repo_settings_file:
-            repo_settings_file = None
-            try:
-                git_provider = get_git_provider()(pr_url)
-                repo_settings = git_provider.get_repo_settings()
-                if repo_settings:
-                    repo_settings_file = None
-                    fd, repo_settings_file = tempfile.mkstemp(suffix='.toml')
-                    os.write(fd, repo_settings)
-                    get_settings().load_file(repo_settings_file)
-            finally:
-                if repo_settings_file:
-                    try:
-                        os.remove(repo_settings_file)
-                    except Exception as e:
-                        logging.error(f"Failed to remove temporary settings file {repo_settings_file}", e)
+        apply_repo_settings(pr_url)

         # Then, apply user specific settings if exists
         request = request.replace("'", "\\'")
@@ -84,3 +66,4 @@ class PRAgent:
         else:
             return False
         return True
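
The inline block removed above was extracted into `apply_repo_settings`. Based on the deleted lines, the new helper plausibly looks like this (a sketch reconstructed from the removed code, not the verbatim new module):

```python
import os
import tempfile

from pr_agent.config_loader import get_settings
from pr_agent.git_providers import get_git_provider
from pr_agent.log import get_logger

def apply_repo_settings(pr_url: str) -> None:
    # Load an optional .pr_agent.toml from the repo into the global settings,
    # mirroring the block deleted from PRAgent.handle_request above.
    if not get_settings().config.use_repo_settings_file:
        return
    repo_settings_file = None
    try:
        repo_settings = get_git_provider()(pr_url).get_repo_settings()
        if repo_settings:
            fd, repo_settings_file = tempfile.mkstemp(suffix=".toml")
            os.write(fd, repo_settings)
            get_settings().load_file(repo_settings_file)
    finally:
        if repo_settings_file:
            try:
                os.remove(repo_settings_file)
            except Exception as e:
                get_logger().error(f"Failed to remove temporary settings file {repo_settings_file}", e)
```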

View File

@@ -1,4 +1,3 @@
-import logging
 import os

 import litellm
@@ -7,6 +6,8 @@ from litellm import acompletion
 from openai.error import APIError, RateLimitError, Timeout, TryAgain
 from retry import retry
 from pr_agent.config_loader import get_settings
+from pr_agent.log import get_logger

 OPENAI_RETRIES = 5
@@ -88,34 +89,34 @@ class AiHandler:
         try:
             deployment_id = self.deployment_id
             if get_settings().config.verbosity_level >= 2:
-                logging.debug(
+                get_logger().debug(
                     f"Generating completion with {model}"
                     f"{(' from deployment ' + deployment_id) if deployment_id else ''}"
                 )
             if self.azure:
                 model = 'azure/' + model
+            messages = [{"role": "system", "content": system}, {"role": "user", "content": user}]
             response = await acompletion(
                 model=model,
                 deployment_id=deployment_id,
-                messages=[
-                    {"role": "system", "content": system},
-                    {"role": "user", "content": user}
-                ],
+                messages=messages,
                 temperature=temperature,
                 force_timeout=get_settings().config.ai_timeout
             )
         except (APIError, Timeout, TryAgain) as e:
-            logging.error("Error during OpenAI inference: ", e)
+            get_logger().error("Error during OpenAI inference: ", e)
             raise
         except (RateLimitError) as e:
-            logging.error("Rate limit error during OpenAI inference: ", e)
+            get_logger().error("Rate limit error during OpenAI inference: ", e)
             raise
         except (Exception) as e:
-            logging.error("Unknown error during OpenAI inference: ", e)
+            get_logger().error("Unknown error during OpenAI inference: ", e)
             raise TryAgain from e
         if response is None or len(response["choices"]) == 0:
             raise TryAgain
         resp = response["choices"][0]['message']['content']
         finish_reason = response["choices"][0]["finish_reason"]
-        print(resp, finish_reason)
+        usage = response.get("usage")
+        get_logger().info("AI response", response=resp, messages=messages, finish_reason=finish_reason,
+                          model=model, usage=usage)
         return resp, finish_reason

View File

@@ -1,8 +1,9 @@
 from __future__ import annotations

-import logging
 import re

 from pr_agent.config_loader import get_settings
+from pr_agent.log import get_logger

 def extend_patch(original_file_str, patch_str, num_lines) -> str:
@@ -63,7 +64,7 @@ def extend_patch(original_file_str, patch_str, num_lines) -> str:
             extended_patch_lines.append(line)
     except Exception as e:
         if get_settings().config.verbosity_level >= 2:
-            logging.error(f"Failed to extend patch: {e}")
+            get_logger().error(f"Failed to extend patch: {e}")
         return patch_str

     # finish previous hunk
@@ -134,14 +135,14 @@ def handle_patch_deletions(patch: str, original_file_content_str: str,
     if not new_file_content_str:
         # logic for handling deleted files - don't show patch, just show that the file was deleted
         if get_settings().config.verbosity_level > 0:
-            logging.info(f"Processing file: {file_name}, minimizing deletion file")
+            get_logger().info(f"Processing file: {file_name}, minimizing deletion file")
         patch = None  # file was deleted
     else:
         patch_lines = patch.splitlines()
         patch_new = omit_deletion_hunks(patch_lines)
         if patch != patch_new:
             if get_settings().config.verbosity_level > 0:
-                logging.info(f"Processing file: {file_name}, hunks were deleted")
+                get_logger().info(f"Processing file: {file_name}, hunks were deleted")
             patch = patch_new
     return patch

View File

@@ -1,7 +1,6 @@
 from __future__ import annotations

 import difflib
-import logging
 import re
 import traceback
 from typing import Any, Callable, List, Tuple
@@ -15,6 +14,7 @@ from pr_agent.algo.file_filter import filter_ignored
 from pr_agent.algo.token_handler import TokenHandler, get_token_encoder
 from pr_agent.config_loader import get_settings
 from pr_agent.git_providers.git_provider import FilePatchInfo, GitProvider
+from pr_agent.log import get_logger

 DELETED_FILES_ = "Deleted files:\n"
@@ -51,7 +51,7 @@ def get_pr_diff(git_provider: GitProvider, token_handler: TokenHandler, model: s
     try:
         diff_files = git_provider.get_diff_files()
     except RateLimitExceededException as e:
-        logging.error(f"Rate limit exceeded for git provider API. original message {e}")
+        get_logger().error(f"Rate limit exceeded for git provider API. original message {e}")
         raise

     diff_files = filter_ignored(diff_files)
@@ -180,7 +180,7 @@ def pr_generate_compressed_diff(top_langs: list, token_handler: TokenHandler, mo
         # Hard Stop, no more tokens
         if total_tokens > MAX_TOKENS[model] - OUTPUT_BUFFER_TOKENS_HARD_THRESHOLD:
-            logging.warning(f"File was fully skipped, no more tokens: {file.filename}.")
+            get_logger().warning(f"File was fully skipped, no more tokens: {file.filename}.")
             continue

         # If the patch is too large, just show the file name
@@ -189,7 +189,7 @@ def pr_generate_compressed_diff(top_langs: list, token_handler: TokenHandler, mo
             # TODO: Option for alternative logic to remove hunks from the patch to reduce the number of tokens
             # until we meet the requirements
             if get_settings().config.verbosity_level >= 2:
-                logging.warning(f"Patch too large, minimizing it, {file.filename}")
+                get_logger().warning(f"Patch too large, minimizing it, {file.filename}")
             if not modified_files_list:
                 total_tokens += token_handler.count_tokens(MORE_MODIFIED_FILES_)
             modified_files_list.append(file.filename)
@@ -204,7 +204,7 @@ def pr_generate_compressed_diff(top_langs: list, token_handler: TokenHandler, mo
             patches.append(patch_final)
             total_tokens += token_handler.count_tokens(patch_final)
             if get_settings().config.verbosity_level >= 2:
-                logging.info(f"Tokens: {total_tokens}, last filename: {file.filename}")
+                get_logger().info(f"Tokens: {total_tokens}, last filename: {file.filename}")

     return patches, modified_files_list, deleted_files_list
@@ -218,7 +218,7 @@ async def retry_with_fallback_models(f: Callable):
             get_settings().set("openai.deployment_id", deployment_id)
             return await f(model)
         except Exception as e:
-            logging.warning(
+            get_logger().warning(
                 f"Failed to generate prediction with {model}"
                 f"{(' from deployment ' + deployment_id) if deployment_id else ''}: "
                 f"{traceback.format_exc()}"
@@ -340,7 +340,7 @@ def clip_tokens(text: str, max_tokens: int) -> str:
         clipped_text = text[:num_output_chars]
         return clipped_text
     except Exception as e:
-        logging.warning(f"Failed to clip tokens: {e}")
+        get_logger().warning(f"Failed to clip tokens: {e}")
         return text
@@ -367,7 +367,7 @@ def get_pr_multi_diffs(git_provider: GitProvider,
     try:
         diff_files = git_provider.get_diff_files()
     except RateLimitExceededException as e:
-        logging.error(f"Rate limit exceeded for git provider API. original message {e}")
+        get_logger().error(f"Rate limit exceeded for git provider API. original message {e}")
         raise

     diff_files = filter_ignored(diff_files)
@@ -387,7 +387,7 @@ def get_pr_multi_diffs(git_provider: GitProvider,
     for file in sorted_files:
         if call_number > max_calls:
             if get_settings().config.verbosity_level >= 2:
-                logging.info(f"Reached max calls ({max_calls})")
+                get_logger().info(f"Reached max calls ({max_calls})")
             break

         original_file_content_str = file.base_file
@@ -410,13 +410,13 @@ def get_pr_multi_diffs(git_provider: GitProvider,
             total_tokens = token_handler.prompt_tokens
             call_number += 1
             if get_settings().config.verbosity_level >= 2:
-                logging.info(f"Call number: {call_number}")
+                get_logger().info(f"Call number: {call_number}")

         if patch:
             patches.append(patch)
             total_tokens += new_patch_tokens
             if get_settings().config.verbosity_level >= 2:
-                logging.info(f"Tokens: {total_tokens}, last filename: {file.filename}")
+                get_logger().info(f"Tokens: {total_tokens}, last filename: {file.filename}")

     # Add the last chunk
     if patches:

View File

@@ -2,7 +2,6 @@ from __future__ import annotations

 import difflib
 import json
-import logging
 import re
 import textwrap
 from datetime import datetime
@@ -11,6 +10,7 @@ from typing import Any, List
 import yaml
 from starlette_context import context
 from pr_agent.config_loader import get_settings, global_settings
+from pr_agent.log import get_logger

 def get_setting(key: str) -> Any:
@@ -159,7 +159,7 @@ def try_fix_json(review, max_iter=10, code_suggestions=False):
             iter_count += 1

     if not valid_json:
-        logging.error("Unable to decode JSON response from AI")
+        get_logger().error("Unable to decode JSON response from AI")
         data = {}

     return data
@@ -230,7 +230,7 @@ def load_large_diff(filename, new_file_content_str: str, original_file_content_s
         diff = difflib.unified_diff(original_file_content_str.splitlines(keepends=True),
                                     new_file_content_str.splitlines(keepends=True))
         if get_settings().config.verbosity_level >= 2:
-            logging.warning(f"File was modified, but no patch was found. Manually creating patch: {filename}.")
+            get_logger().warning(f"File was modified, but no patch was found. Manually creating patch: {filename}.")
         patch = ''.join(diff)
     except Exception:
         pass
@@ -262,12 +262,12 @@ def update_settings_from_args(args: List[str]) -> List[str]:
             vals = arg.split('=', 1)
             if len(vals) != 2:
                 if len(vals) > 2:  # --extended is a valid argument
-                    logging.error(f'Invalid argument format: {arg}')
+                    get_logger().error(f'Invalid argument format: {arg}')
                 other_args.append(arg)
                 continue
             key, value = _fix_key_value(*vals)
             get_settings().set(key, value)
-            logging.info(f'Updated setting {key} to: "{value}"')
+            get_logger().info(f'Updated setting {key} to: "{value}"')
         else:
             other_args.append(arg)
     return other_args
@@ -279,7 +279,7 @@ def _fix_key_value(key: str, value: str):
     try:
         value = yaml.safe_load(value)
     except Exception as e:
-        logging.error(f"Failed to parse YAML for config override {key}={value}", exc_info=e)
+        get_logger().error(f"Failed to parse YAML for config override {key}={value}", exc_info=e)
     return key, value
@@ -288,7 +288,7 @@ def load_yaml(review_text: str) -> dict:
     try:
         data = yaml.safe_load(review_text)
     except Exception as e:
-        logging.error(f"Failed to parse AI prediction: {e}")
+        get_logger().error(f"Failed to parse AI prediction: {e}")
         data = try_fix_yaml(review_text)
     return data
@@ -299,7 +299,7 @@ def try_fix_yaml(review_text: str) -> dict:
         review_text_lines_tmp = '\n'.join(review_text_lines[:-i])
         try:
             data = yaml.load(review_text_lines_tmp, Loader=yaml.SafeLoader)
-            logging.info(f"Successfully parsed AI prediction after removing {i} lines")
+            get_logger().info(f"Successfully parsed AI prediction after removing {i} lines")
             break
         except:
             pass

View File

@@ -1,11 +1,12 @@
 import argparse
 import asyncio
-import logging
 import os

 from pr_agent.agent.pr_agent import PRAgent, commands
 from pr_agent.config_loader import get_settings
+from pr_agent.log import setup_logger

+setup_logger()

 def run(inargs=None):
     parser = argparse.ArgumentParser(description='AI based pull request analyzer', usage=
@@ -47,7 +48,6 @@ For example: 'python cli.py --pr_url=... review --pr_reviewer.extra_instructions
         parser.print_help()
         return

-    logging.basicConfig(level=os.environ.get("LOGLEVEL", "INFO"))
     command = args.command.lower()
     get_settings().set("CONFIG.CLI_MODE", True)
     if args.issue_url:
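
The `setup_logger`/`get_logger` pair comes from the new `pr_agent/log` module, whose contents are not shown in this compare view. A minimal sketch of what such a module might look like, assuming it wraps loguru with optional JSON output (the structured kwargs passed to `get_logger().info("AI response", ...)` in the AiHandler diff above rely on this kind of serialization):

```python
# pr_agent/log/__init__.py -- illustrative sketch only; the real module may differ.
import logging
import sys

from loguru import logger

def setup_logger(level: str = "INFO", fmt: str = "CONSOLE"):
    numeric_level = logging.getLevelName(level.upper())
    if not isinstance(numeric_level, int):  # unknown names come back as strings
        numeric_level = logging.INFO
    logger.remove()  # drop loguru's default stderr handler
    if fmt == "JSON":
        # serialize=True emits one JSON object per record, including any
        # extra keyword arguments bound to the log call.
        logger.add(sys.stdout, level=numeric_level, format="{message}", serialize=True)
    else:
        logger.add(sys.stdout, level=numeric_level, colorize=True)
    return logger

def get_logger(*args, **kwargs):
    return logger
```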

View File

@@ -1,10 +1,11 @@
 import json
-import logging
 from typing import Optional, Tuple
 from urllib.parse import urlparse

 import os

+from ..log import get_logger

 AZURE_DEVOPS_AVAILABLE = True
 try:
     from msrest.authentication import BasicAuthentication
@@ -55,7 +56,7 @@ class AzureDevopsProvider:
                                                                 path=".pr_agent.toml")
             return contents
         except Exception as e:
-            logging.exception("get repo settings error")
+            get_logger().exception("get repo settings error")
             return ""

     def get_files(self):
@@ -110,7 +111,7 @@ class AzureDevopsProvider:
                     new_file_content_str = new_file_content_str.content
                 except Exception as error:
-                    logging.error("Failed to retrieve new file content of %s at version %s. Error: %s", file, version, str(error))
+                    get_logger().error("Failed to retrieve new file content of %s at version %s. Error: %s", file, version, str(error))
                     new_file_content_str = ""

                 edit_type = EDIT_TYPE.MODIFIED
@@ -131,7 +132,7 @@ class AzureDevopsProvider:
                                                                          include_content=True)
                     original_file_content_str = original_file_content_str.content
                 except Exception as error:
-                    logging.error("Failed to retrieve original file content of %s at version %s. Error: %s", file, version, str(error))
+                    get_logger().error("Failed to retrieve original file content of %s at version %s. Error: %s", file, version, str(error))
                     original_file_content_str = ""

                 patch = load_large_diff(file, new_file_content_str, original_file_content_str)
@@ -166,7 +167,7 @@ class AzureDevopsProvider:
                                                         pull_request_id=self.pr_num,
                                                         git_pull_request_to_update=updated_pr)
         except Exception as e:
-            logging.exception(f"Could not update pull request {self.pr_num} description: {e}")
+            get_logger().exception(f"Could not update pull request {self.pr_num} description: {e}")

     def remove_initial_comment(self):
         return ""  # not implemented yet
@@ -235,9 +236,6 @@ class AzureDevopsProvider:
     def _parse_pr_url(pr_url: str) -> Tuple[str, int]:
         parsed_url = urlparse(pr_url)

-        if 'azure.com' not in parsed_url.netloc:
-            raise ValueError("The provided URL is not a valid Azure DevOps URL")
-
         path_parts = parsed_url.path.strip('/').split('/')

         if len(path_parts) < 6 or path_parts[4] != 'pullrequest':

View File

@@ -1,5 +1,4 @@
 import json
-import logging
 from typing import Optional, Tuple
 from urllib.parse import urlparse
@@ -7,8 +6,9 @@ import requests
 from atlassian.bitbucket import Cloud
 from starlette_context import context

-from ..algo.pr_processing import clip_tokens, find_line_number_of_relevant_line_in_file
+from ..algo.pr_processing import find_line_number_of_relevant_line_in_file
 from ..config_loader import get_settings
+from ..log import get_logger
 from .git_provider import FilePatchInfo, GitProvider
@@ -61,14 +61,14 @@ class BitbucketProvider(GitProvider):
             if not relevant_lines_start or relevant_lines_start == -1:
                 if get_settings().config.verbosity_level >= 2:
-                    logging.exception(
+                    get_logger().exception(
                         f"Failed to publish code suggestion, relevant_lines_start is {relevant_lines_start}"
                     )
                 continue

             if relevant_lines_end < relevant_lines_start:
                 if get_settings().config.verbosity_level >= 2:
-                    logging.exception(
+                    get_logger().exception(
                         f"Failed to publish code suggestion, "
                         f"relevant_lines_end is {relevant_lines_end} and "
                         f"relevant_lines_start is {relevant_lines_start}"
@@ -97,7 +97,7 @@ class BitbucketProvider(GitProvider):
             return True
         except Exception as e:
             if get_settings().config.verbosity_level >= 2:
-                logging.error(f"Failed to publish code suggestion, error: {e}")
+                get_logger().error(f"Failed to publish code suggestion, error: {e}")
             return False

     def is_supported(self, capability: str) -> bool:
@@ -142,17 +142,22 @@ class BitbucketProvider(GitProvider):
     def remove_initial_comment(self):
         try:
             for comment in self.temp_comments:
-                self.pr.delete(f"comments/{comment}")
+                self.remove_comment(comment)
         except Exception as e:
-            logging.exception(f"Failed to remove temp comments, error: {e}")
+            get_logger().exception(f"Failed to remove temp comments, error: {e}")

+    def remove_comment(self, comment):
+        try:
+            self.pr.delete(f"comments/{comment}")
+        except Exception as e:
+            get_logger().exception(f"Failed to remove comment, error: {e}")

     # funtion to create_inline_comment
     def create_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str):
         position, absolute_position = find_line_number_of_relevant_line_in_file(self.get_diff_files(), relevant_file.strip('`'), relevant_line_in_file)
         if position == -1:
             if get_settings().config.verbosity_level >= 2:
-                logging.info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
+                get_logger().info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
             subject_type = "FILE"
         else:
             subject_type = "LINE"

View File

@@ -1,17 +1,16 @@
-import logging
 import os
 import re
 from collections import Counter
 from typing import List, Optional, Tuple
 from urllib.parse import urlparse

-from ..algo.language_handler import is_valid_file, language_extension_map
-from ..algo.pr_processing import clip_tokens
-from ..algo.utils import load_large_diff
-from ..config_loader import get_settings
-from .git_provider import EDIT_TYPE, FilePatchInfo, GitProvider, IncrementalPR
 from pr_agent.git_providers.codecommit_client import CodeCommitClient
+from ..algo.language_handler import is_valid_file, language_extension_map
+from ..algo.utils import load_large_diff
+from .git_provider import EDIT_TYPE, FilePatchInfo, GitProvider
+from ..log import get_logger

 class PullRequestCCMimic:
     """
@@ -166,7 +165,7 @@ class CodeCommitProvider(GitProvider):
     def publish_comment(self, pr_comment: str, is_temporary: bool = False):
         if is_temporary:
-            logging.info(pr_comment)
+            get_logger().info(pr_comment)
             return

         pr_comment = CodeCommitProvider._remove_markdown_html(pr_comment)
@@ -188,12 +187,12 @@ class CodeCommitProvider(GitProvider):
         for suggestion in code_suggestions:
             # Verify that each suggestion has the required keys
             if not all(key in suggestion for key in ["body", "relevant_file", "relevant_lines_start"]):
-                logging.warning(f"Skipping code suggestion #{counter}: Each suggestion must have 'body', 'relevant_file', 'relevant_lines_start' keys")
+                get_logger().warning(f"Skipping code suggestion #{counter}: Each suggestion must have 'body', 'relevant_file', 'relevant_lines_start' keys")
                 continue

             # Publish the code suggestion to CodeCommit
             try:
-                logging.debug(f"Code Suggestion #{counter} in file: {suggestion['relevant_file']}: {suggestion['relevant_lines_start']}")
+                get_logger().debug(f"Code Suggestion #{counter} in file: {suggestion['relevant_file']}: {suggestion['relevant_lines_start']}")
                 self.codecommit_client.publish_comment(
                     repo_name=self.repo_name,
                     pr_number=self.pr_num,
@@ -222,6 +221,9 @@ class CodeCommitProvider(GitProvider):
     def remove_initial_comment(self):
         return ""  # not implemented yet

+    def remove_comment(self, comment):
+        return ""  # not implemented yet

     def publish_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str):
         # https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/codecommit/client/post_comment_for_compared_commit.html
         raise NotImplementedError("CodeCommit provider does not support publishing inline comments yet")
@@ -296,11 +298,11 @@ class CodeCommitProvider(GitProvider):
         return self.codecommit_client.get_file(self.repo_name, settings_filename, self.pr.source_commit, optional=True)

     def add_eyes_reaction(self, issue_comment_id: int) -> Optional[int]:
-        logging.info("CodeCommit provider does not support eyes reaction yet")
+        get_logger().info("CodeCommit provider does not support eyes reaction yet")
         return True

     def remove_reaction(self, issue_comment_id: int, reaction_id: int) -> bool:
-        logging.info("CodeCommit provider does not support removing reactions yet")
+        get_logger().info("CodeCommit provider does not support removing reactions yet")
         return True

     @staticmethod
@@ -366,7 +368,7 @@ class CodeCommitProvider(GitProvider):
         # TODO: implement support for multiple targets in one CodeCommit PR
         # for now, we are only using the first target in the PR
         if len(response.targets) > 1:
-            logging.warning(
+            get_logger().warning(
                 "Multiple targets in one PR is not supported for CodeCommit yet. Continuing, using the first target only..."
             )

View File

@@ -1,5 +1,4 @@
 import json
-import logging
 import os
 import pathlib
 import shutil
@@ -7,18 +6,16 @@ import subprocess
 import uuid
 from collections import Counter, namedtuple
 from pathlib import Path
-from tempfile import mkdtemp, NamedTemporaryFile
+from tempfile import NamedTemporaryFile, mkdtemp

 import requests
 import urllib3.util
 from git import Repo

 from pr_agent.config_loader import get_settings
-from pr_agent.git_providers.git_provider import GitProvider, FilePatchInfo, \
-    EDIT_TYPE
+from pr_agent.git_providers.git_provider import EDIT_TYPE, FilePatchInfo, GitProvider
 from pr_agent.git_providers.local_git_provider import PullRequestMimic
+from pr_agent.log import get_logger

-logger = logging.getLogger(__name__)

 def _call(*command, **kwargs) -> (int, str, str):
@@ -33,42 +30,42 @@ def _call(*command, **kwargs) -> (int, str, str):

 def clone(url, directory):
-    logger.info("Cloning %s to %s", url, directory)
+    get_logger().info("Cloning %s to %s", url, directory)
     stdout = _call('git', 'clone', "--depth", "1", url, directory)
-    logger.info(stdout)
+    get_logger().info(stdout)

 def fetch(url, refspec, cwd):
-    logger.info("Fetching %s %s", url, refspec)
+    get_logger().info("Fetching %s %s", url, refspec)
     stdout = _call(
         'git', 'fetch', '--depth', '2', url, refspec,
         cwd=cwd
     )
-    logger.info(stdout)
+    get_logger().info(stdout)

 def checkout(cwd):
-    logger.info("Checking out")
+    get_logger().info("Checking out")
     stdout = _call('git', 'checkout', "FETCH_HEAD", cwd=cwd)
-    logger.info(stdout)
+    get_logger().info(stdout)

 def show(*args, cwd=None):
-    logger.info("Show")
+    get_logger().info("Show")
     return _call('git', 'show', *args, cwd=cwd)

 def diff(*args, cwd=None):
-    logger.info("Diff")
+    get_logger().info("Diff")
     patch = _call('git', 'diff', *args, cwd=cwd)
     if not patch:
-        logger.warning("No changes found")
+        get_logger().warning("No changes found")
         return
     return patch

 def reset_local_changes(cwd):
-    logger.info("Reset local changes")
+    get_logger().info("Reset local changes")
     _call('git', 'checkout', "--force", cwd=cwd)
@@ -399,5 +396,8 @@ class GerritProvider(GitProvider):
         # shutil.rmtree(self.repo_path)
         pass

+    def remove_comment(self, comment):
+        pass

     def get_pr_branch(self):
         return self.repo.head

View File

@@ -1,4 +1,3 @@
-import logging
 from abc import ABC, abstractmethod
 from dataclasses import dataclass
@@ -6,6 +5,8 @@ from dataclasses import dataclass
 from enum import Enum
 from typing import Optional

+from pr_agent.log import get_logger

 class EDIT_TYPE(Enum):
     ADDED = 1
@@ -70,6 +71,10 @@ class GitProvider(ABC):
     def remove_initial_comment(self):
         pass

+    @abstractmethod
+    def remove_comment(self, comment):
+        pass

     @abstractmethod
     def get_languages(self):
         pass
@@ -136,7 +141,7 @@ def get_main_pr_language(languages, files) -> str:
     """
     main_language_str = ""
     if not languages:
-        logging.info("No languages detected")
+        get_logger().info("No languages detected")
         return main_language_str

     try:
@@ -172,7 +177,7 @@ def get_main_pr_language(languages, files) -> str:
         main_language_str = top_language
     except Exception as e:
-        logging.exception(e)
+        get_logger().exception(e)
         pass

     return main_language_str

View File

@@ -1,20 +1,19 @@
-import logging
 import hashlib

 from datetime import datetime
-from typing import Optional, Tuple, Any
+from typing import Optional, Tuple
 from urllib.parse import urlparse

-from github import AppAuthentication, Auth, Github, GithubException, Reaction
+from github import AppAuthentication, Auth, Github, GithubException
 from retry import retry
 from starlette_context import context

-from .git_provider import FilePatchInfo, GitProvider, IncrementalPR
 from ..algo.language_handler import is_valid_file
-from ..algo.pr_processing import clip_tokens, find_line_number_of_relevant_line_in_file
 from ..algo.utils import load_large_diff
+from ..algo.pr_processing import find_line_number_of_relevant_line_in_file, clip_tokens
 from ..config_loader import get_settings
+from ..log import get_logger
 from ..servers.utils import RateLimitExceeded
+from .git_provider import FilePatchInfo, GitProvider, IncrementalPR

 class GithubProvider(GitProvider):
@@ -51,20 +50,20 @@ class GithubProvider(GitProvider):
     def get_incremental_commits(self):
         self.commits = list(self.pr.get_commits())

-        self.get_previous_review()
+        self.previous_review = self.get_previous_review(full=True, incremental=True)
         if self.previous_review:
             self.incremental.commits_range = self.get_commit_range()
             # Get all files changed during the commit range
             self.file_set = dict()
             for commit in self.incremental.commits_range:
                 if commit.commit.message.startswith(f"Merge branch '{self._get_repo().default_branch}'"):
-                    logging.info(f"Skipping merge commit {commit.commit.message}")
+                    get_logger().info(f"Skipping merge commit {commit.commit.message}")
                     continue
                 self.file_set.update({file.filename: file for file in commit.files})

     def get_commit_range(self):
         last_review_time = self.previous_review.created_at
-        first_new_commit_index = 0
+        first_new_commit_index = None
         for index in range(len(self.commits) - 1, -1, -1):
             if self.commits[index].commit.author.date > last_review_time:
                 self.incremental.first_new_commit_sha = self.commits[index].sha
@@ -72,15 +71,21 @@ class GithubProvider(GitProvider):
             else:
                 self.incremental.last_seen_commit_sha = self.commits[index].sha
                 break
-        return self.commits[first_new_commit_index:]
+        return self.commits[first_new_commit_index:] if first_new_commit_index is not None else []

-    def get_previous_review(self):
-        self.previous_review = None
-        self.comments = list(self.pr.get_issue_comments())
+    def get_previous_review(self, *, full: bool, incremental: bool):
+        if not (full or incremental):
+            raise ValueError("At least one of full or incremental must be True")
+        if not getattr(self, "comments", None):
+            self.comments = list(self.pr.get_issue_comments())
+        prefixes = []
+        if full:
+            prefixes.append("## PR Analysis")
+        if incremental:
+            prefixes.append("## Incremental PR Review")
         for index in range(len(self.comments) - 1, -1, -1):
-            if self.comments[index].body.startswith("## PR Analysis"):
-                self.previous_review = self.comments[index]
-                break
+            if any(self.comments[index].body.startswith(prefix) for prefix in prefixes):
+                return self.comments[index]

     def get_files(self):
         if self.incremental.is_incremental and self.file_set:
@@ -130,7 +135,7 @@ class GithubProvider(GitProvider):
             return diff_files

         except GithubException.RateLimitExceededException as e:
-            logging.error(f"Rate limit exceeded for GitHub API. Original message: {e}")
+            get_logger().error(f"Rate limit exceeded for GitHub API. Original message: {e}")
             raise RateLimitExceeded("Rate limit exceeded for GitHub API.") from e

     def publish_description(self, pr_title: str, pr_body: str):
@@ -138,7 +143,7 @@ class GithubProvider(GitProvider):
     def publish_comment(self, pr_comment: str, is_temporary: bool = False):
         if is_temporary and not get_settings().config.publish_output_progress:
-            logging.debug(f"Skipping publish_comment for temporary comment: {pr_comment}")
+            get_logger().debug(f"Skipping publish_comment for temporary comment: {pr_comment}")
             return
         response = self.pr.create_issue_comment(pr_comment)
         if hasattr(response, "user") and hasattr(response.user, "login"):
@@ -156,7 +161,7 @@ class GithubProvider(GitProvider):
         position, absolute_position = find_line_number_of_relevant_line_in_file(self.diff_files, relevant_file.strip('`'), relevant_line_in_file)
         if position == -1:
             if get_settings().config.verbosity_level >= 2:
-                logging.info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
+                get_logger().info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
             subject_type = "FILE"
         else:
             subject_type = "LINE"
@@ -179,13 +184,13 @@ class GithubProvider(GitProvider):
             if not relevant_lines_start or relevant_lines_start == -1:
                 if get_settings().config.verbosity_level >= 2:
-                    logging.exception(
+                    get_logger().exception(
                         f"Failed to publish code suggestion, relevant_lines_start is {relevant_lines_start}")
                 continue

             if relevant_lines_end < relevant_lines_start:
                 if get_settings().config.verbosity_level >= 2:
-                    logging.exception(f"Failed to publish code suggestion, "
-                                      f"relevant_lines_end is {relevant_lines_end} and "
-                                      f"relevant_lines_start is {relevant_lines_start}")
+                    get_logger().exception(f"Failed to publish code suggestion, "
+                                           f"relevant_lines_end is {relevant_lines_end} and "
+                                           f"relevant_lines_start is {relevant_lines_start}")
                 continue
@@ -212,16 +217,22 @@ class GithubProvider(GitProvider):
             return True
         except Exception as e:
             if get_settings().config.verbosity_level >= 2:
-                logging.error(f"Failed to publish code suggestion, error: {e}")
+                get_logger().error(f"Failed to publish code suggestion, error: {e}")
             return False

     def remove_initial_comment(self):
         try:
             for comment in getattr(self.pr, 'comments_list', []):
                 if comment.is_temporary:
-                    comment.delete()
+                    self.remove_comment(comment)
         except Exception as e:
-            logging.exception(f"Failed to remove initial comment, error: {e}")
+            get_logger().exception(f"Failed to remove initial comment, error: {e}")

+    def remove_comment(self, comment):
+        try:
+            comment.delete()
+        except Exception as e:
+            get_logger().exception(f"Failed to remove comment, error: {e}")

     def get_title(self):
         return self.pr.title
@@ -269,7 +280,7 @@ class GithubProvider(GitProvider):
             reaction = self.pr.get_issue_comment(issue_comment_id).create_reaction("eyes")
             return reaction.id
         except Exception as e:
-            logging.exception(f"Failed to add eyes reaction, error: {e}")
+            get_logger().exception(f"Failed to add eyes reaction, error: {e}")
             return None

     def remove_reaction(self, issue_comment_id: int, reaction_id: int) -> bool:
@@ -277,7 +288,7 @@ class GithubProvider(GitProvider):
             self.pr.get_issue_comment(issue_comment_id).delete_reaction(reaction_id)
             return True
         except Exception as e:
-            logging.exception(f"Failed to remove eyes reaction, error: {e}")
+            get_logger().exception(f"Failed to remove eyes reaction, error: {e}")
             return False
@@ -396,13 +407,13 @@ class GithubProvider(GitProvider):
                 "PUT", f"{self.pr.issue_url}/labels", input=post_parameters
             )
         except Exception as e:
-            logging.exception(f"Failed to publish labels, error: {e}")
+            get_logger().exception(f"Failed to publish labels, error: {e}")

     def get_labels(self):
         try:
             return [label.name for label in self.pr.labels]
         except Exception as e:
-            logging.exception(f"Failed to get labels, error: {e}")
+            get_logger().exception(f"Failed to get labels, error: {e}")
             return []

     def get_commit_messages(self):
@@ -444,7 +455,7 @@ class GithubProvider(GitProvider):
             return link
         except Exception as e:
             if get_settings().config.verbosity_level >= 2:
-                logging.info(f"Failed adding line link, error: {e}")
+                get_logger().info(f"Failed adding line link, error: {e}")

         return ""

@@ -1,5 +1,4 @@
 import hashlib
-import logging
 import re
 from typing import Optional, Tuple
 from urllib.parse import urlparse
@@ -12,8 +11,8 @@ from ..algo.pr_processing import clip_tokens, find_line_number_of_relevant_line_
 from ..algo.utils import load_large_diff
 from ..config_loader import get_settings
 from .git_provider import EDIT_TYPE, FilePatchInfo, GitProvider
+from ..log import get_logger

-logger = logging.getLogger()

 class DiffNotFoundError(Exception):
     """Raised when the diff for a merge request cannot be found."""
@@ -59,7 +58,7 @@ class GitLabProvider(GitProvider):
         try:
             self.last_diff = self.mr.diffs.list(get_all=True)[-1]
         except IndexError as e:
-            logger.error(f"Could not get diff for merge request {self.id_mr}")
+            get_logger().error(f"Could not get diff for merge request {self.id_mr}")
             raise DiffNotFoundError(f"Could not get diff for merge request {self.id_mr}") from e
@@ -99,7 +98,7 @@ class GitLabProvider(GitProvider):
                 if isinstance(new_file_content_str, bytes):
                     new_file_content_str = bytes.decode(new_file_content_str, 'utf-8')
             except UnicodeDecodeError:
-                logging.warning(
+                get_logger().warning(
                     f"Cannot decode file {diff['old_path']} or {diff['new_path']} in merge request {self.id_mr}")

             edit_type = EDIT_TYPE.MODIFIED
@@ -135,7 +134,7 @@ class GitLabProvider(GitProvider):
             self.mr.description = pr_body
             self.mr.save()
         except Exception as e:
-            logging.exception(f"Could not update merge request {self.id_mr} description: {e}")
+            get_logger().exception(f"Could not update merge request {self.id_mr} description: {e}")

     def publish_comment(self, mr_comment: str, is_temporary: bool = False):
         comment = self.mr.notes.create({'body': mr_comment})
@@ -157,12 +156,12 @@ class GitLabProvider(GitProvider):
     def send_inline_comment(self,body: str,edit_type: str,found: bool,relevant_file: str,relevant_line_in_file: int,
                             source_line_no: int, target_file: str,target_line_no: int) -> None:
         if not found:
-            logging.info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
+            get_logger().info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
         else:
             # in order to have exact sha's we have to find correct diff for this change
             diff = self.get_relevant_diff(relevant_file, relevant_line_in_file)
             if diff is None:
-                logger.error(f"Could not get diff for merge request {self.id_mr}")
+                get_logger().error(f"Could not get diff for merge request {self.id_mr}")
                 raise DiffNotFoundError(f"Could not get diff for merge request {self.id_mr}")
             pos_obj = {'position_type': 'text',
                        'new_path': target_file.filename,
@@ -175,23 +174,23 @@ class GitLabProvider(GitProvider):
             else:
                 pos_obj['new_line'] = target_line_no - 1
                 pos_obj['old_line'] = source_line_no - 1
-            logging.debug(f"Creating comment in {self.id_mr} with body {body} and position {pos_obj}")
+            get_logger().debug(f"Creating comment in {self.id_mr} with body {body} and position {pos_obj}")
             self.mr.discussions.create({'body': body, 'position': pos_obj})

     def get_relevant_diff(self, relevant_file: str, relevant_line_in_file: int) -> Optional[dict]:
         changes = self.mr.changes()  # Retrieve the changes for the merge request once
         if not changes:
-            logging.error('No changes found for the merge request.')
+            get_logger().error('No changes found for the merge request.')
             return None
         all_diffs = self.mr.diffs.list(get_all=True)
         if not all_diffs:
-            logging.error('No diffs found for the merge request.')
+            get_logger().error('No diffs found for the merge request.')
             return None
         for diff in all_diffs:
             for change in changes['changes']:
                 if change['new_path'] == relevant_file and relevant_line_in_file in change['diff']:
                     return diff
-        logging.debug(
+        get_logger().debug(
             f'No relevant diff found for {relevant_file} {relevant_line_in_file}. Falling back to last diff.')
         return self.last_diff  # fallback to last_diff if no relevant diff is found
@@ -226,7 +225,7 @@ class GitLabProvider(GitProvider):
                 self.send_inline_comment(body, edit_type, found, relevant_file, relevant_line_in_file, source_line_no,
                                          target_file, target_line_no)
             except Exception as e:
-                logging.exception(f"Could not publish code suggestion:\nsuggestion: {suggestion}\nerror: {e}")
+                get_logger().exception(f"Could not publish code suggestion:\nsuggestion: {suggestion}\nerror: {e}")

         # note that we publish suggestions one-by-one. so, if one fails, the rest will still be published
         return True
@@ -288,9 +287,15 @@ class GitLabProvider(GitProvider):
     def remove_initial_comment(self):
         try:
             for comment in self.temp_comments:
-                comment.delete()
+                self.remove_comment(comment)
         except Exception as e:
-            logging.exception(f"Failed to remove temp comments, error: {e}")
+            get_logger().exception(f"Failed to remove temp comments, error: {e}")
+
+    def remove_comment(self, comment):
+        try:
+            comment.delete()
+        except Exception as e:
+            get_logger().exception(f"Failed to remove comment, error: {e}")

     def get_title(self):
         return self.mr.title
@@ -358,7 +363,7 @@ class GitLabProvider(GitProvider):
             self.mr.labels = list(set(pr_types))
             self.mr.save()
         except Exception as e:
-            logging.exception(f"Failed to publish labels, error: {e}")
+            get_logger().exception(f"Failed to publish labels, error: {e}")

     def publish_inline_comments(self, comments: list[dict]):
         pass
@@ -410,6 +415,6 @@ class GitLabProvider(GitProvider):
             return link
         except Exception as e:
             if get_settings().config.verbosity_level >= 2:
-                logging.info(f"Failed adding line link, error: {e}")
+                get_logger().info(f"Failed adding line link, error: {e}")
         return ""

@@ -1,4 +1,3 @@
-import logging
 from collections import Counter
 from pathlib import Path
 from typing import List
@@ -7,6 +6,7 @@ from git import Repo

 from pr_agent.config_loader import _find_repository_root, get_settings
 from pr_agent.git_providers.git_provider import EDIT_TYPE, FilePatchInfo, GitProvider
+from pr_agent.log import get_logger


 class PullRequestMimic:
@@ -49,7 +49,7 @@ class LocalGitProvider(GitProvider):
         """
         Prepare the repository for PR-mimic generation.
         """
-        logging.debug('Preparing repository for PR-mimic generation...')
+        get_logger().debug('Preparing repository for PR-mimic generation...')
         if self.repo.is_dirty():
             raise ValueError('The repository is not in a clean state. Please commit or stash pending changes.')
         if self.target_branch_name not in self.repo.heads:
@@ -140,6 +140,9 @@ class LocalGitProvider(GitProvider):
     def remove_initial_comment(self):
         pass  # Not applicable to the local git provider, but required by the interface

+    def remove_comment(self, comment):
+        pass  # Not applicable to the local git provider, but required by the interface
+
     def get_languages(self):
         """
         Calculate percentage of languages in repository. Used for hunk prioritisation.

@@ -0,0 +1,35 @@
+import copy
+import os
+import tempfile
+
+from dynaconf import Dynaconf
+
+from pr_agent.config_loader import get_settings
+from pr_agent.git_providers import get_git_provider
+from pr_agent.log import get_logger
+
+
+def apply_repo_settings(pr_url):
+    if get_settings().config.use_repo_settings_file:
+        repo_settings_file = None
+        try:
+            git_provider = get_git_provider()(pr_url)
+            repo_settings = git_provider.get_repo_settings()
+            if repo_settings:
+                repo_settings_file = None
+                fd, repo_settings_file = tempfile.mkstemp(suffix='.toml')
+                os.write(fd, repo_settings)
+                new_settings = Dynaconf(settings_files=[repo_settings_file])
+                for section, contents in new_settings.as_dict().items():
+                    section_dict = copy.deepcopy(get_settings().as_dict().get(section, {}))
+                    for key, value in contents.items():
+                        section_dict[key] = value
+                    get_settings().unset(section)
+                    get_settings().set(section, section_dict, merge=False)
+        finally:
+            if repo_settings_file:
+                try:
+                    os.remove(repo_settings_file)
+                except Exception as e:
+                    get_logger().error(f"Failed to remove temporary settings file {repo_settings_file}", e)

pr_agent/log/__init__.py (new file)

@@ -0,0 +1,40 @@
+import json
+import logging
+import sys
+from enum import Enum
+
+from loguru import logger
+
+
+class LoggingFormat(str, Enum):
+    CONSOLE = "CONSOLE"
+    JSON = "JSON"
+
+
+def json_format(record: dict) -> str:
+    return record["message"]
+
+
+def setup_logger(level: str = "INFO", fmt: LoggingFormat = LoggingFormat.CONSOLE):
+    level: int = logging.getLevelName(level.upper())
+    if type(level) is not int:
+        level = logging.INFO
+
+    if fmt == LoggingFormat.JSON:
+        logger.remove(None)
+        logger.add(
+            sys.stdout,
+            level=level,
+            format="{message}",
+            colorize=False,
+            serialize=True,
+        )
+    elif fmt == LoggingFormat.CONSOLE:
+        logger.remove(None)
+        logger.add(sys.stdout, level=level, colorize=True)
+
+    return logger
+
+
+def get_logger(*args, **kwargs):
+    return logger
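
For orientation, a minimal sketch of how this new module is used across the servers in this changeset (the log_context fields are illustrative):

from pr_agent.log import LoggingFormat, get_logger, setup_logger

setup_logger(fmt=LoggingFormat.JSON)  # one serialized JSON record per line on stdout

log_context = {"server_type": "example_app", "event": "comment"}  # illustrative fields
with get_logger().contextualize(**log_context):
    # records emitted inside this block carry the extra fields in their payload
    get_logger().info("Handling request")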

@@ -1,9 +1,8 @@
 import ujson
 from google.cloud import storage

 from pr_agent.config_loader import get_settings
-from pr_agent.git_providers.gitlab_provider import logger
+from pr_agent.log import get_logger
 from pr_agent.secret_providers.secret_provider import SecretProvider
@@ -15,7 +14,7 @@ class GoogleCloudStorageSecretProvider(SecretProvider):
             self.bucket_name = get_settings().google_cloud_storage.bucket_name
             self.bucket = self.client.bucket(self.bucket_name)
         except Exception as e:
-            logger.error(f"Failed to initialize Google Cloud Storage Secret Provider: {e}")
+            get_logger().error(f"Failed to initialize Google Cloud Storage Secret Provider: {e}")
             raise e

     def get_secret(self, secret_name: str) -> str:
@@ -23,7 +22,7 @@ class GoogleCloudStorageSecretProvider(SecretProvider):
             blob = self.bucket.blob(secret_name)
             return blob.download_as_string()
         except Exception as e:
-            logger.error(f"Failed to get secret {secret_name} from Google Cloud Storage: {e}")
+            get_logger().error(f"Failed to get secret {secret_name} from Google Cloud Storage: {e}")
             return ""

     def store_secret(self, secret_name: str, secret_value: str):
@@ -31,5 +30,5 @@ class GoogleCloudStorageSecretProvider(SecretProvider):
             blob = self.bucket.blob(secret_name)
             blob.upload_from_string(secret_value)
         except Exception as e:
-            logger.error(f"Failed to store secret {secret_name} in Google Cloud Storage: {e}")
+            get_logger().error(f"Failed to store secret {secret_name} in Google Cloud Storage: {e}")
             raise e

@@ -1,9 +1,7 @@
 import copy
 import hashlib
 import json
-import logging
 import os
-import sys
 import time

 import jwt
@@ -18,9 +16,10 @@ from starlette_context.middleware import RawContextMiddleware

 from pr_agent.agent.pr_agent import PRAgent
 from pr_agent.config_loader import get_settings, global_settings
+from pr_agent.log import LoggingFormat, get_logger, setup_logger
 from pr_agent.secret_providers import get_secret_provider

-logging.basicConfig(stream=sys.stdout, level=logging.INFO)
+setup_logger(fmt=LoggingFormat.JSON)
 router = APIRouter()
 secret_provider = get_secret_provider()
@@ -49,7 +48,7 @@ async def get_bearer_token(shared_secret: str, client_key: str):
             bearer_token = response.json()["access_token"]
             return bearer_token
     except Exception as e:
-        logging.error(f"Failed to get bearer token: {e}")
+        get_logger().error(f"Failed to get bearer token: {e}")
         raise e

 @router.get("/")
@@ -60,21 +59,23 @@ async def handle_manifest(request: Request, response: Response):
         manifest = manifest.replace("app_key", get_settings().bitbucket.app_key)
         manifest = manifest.replace("base_url", get_settings().bitbucket.base_url)
     except:
-        logging.error("Failed to replace api_key in Bitbucket manifest, trying to continue")
+        get_logger().error("Failed to replace api_key in Bitbucket manifest, trying to continue")
     manifest_obj = json.loads(manifest)
     return JSONResponse(manifest_obj)

 @router.post("/webhook")
 async def handle_github_webhooks(background_tasks: BackgroundTasks, request: Request):
-    print(request.headers)
+    log_context = {"server_type": "bitbucket_app"}
+    get_logger().debug(request.headers)
     jwt_header = request.headers.get("authorization", None)
     if jwt_header:
         input_jwt = jwt_header.split(" ")[1]
     data = await request.json()
-    print(data)
+    get_logger().debug(data)

     async def inner():
         try:
             owner = data["data"]["repository"]["owner"]["username"]
+            log_context["sender"] = owner
             secrets = json.loads(secret_provider.get_secret(owner))
             shared_secret = secrets["shared_secret"]
             client_key = secrets["client_key"]
@@ -86,13 +87,19 @@ async def handle_github_webhooks(background_tasks: BackgroundTasks, request: Req
             agent = PRAgent()
             if event == "pullrequest:created":
                 pr_url = data["data"]["pullrequest"]["links"]["html"]["href"]
-                await agent.handle_request(pr_url, "review")
+                log_context["api_url"] = pr_url
+                log_context["event"] = "pull_request"
+                with get_logger().contextualize(**log_context):
+                    await agent.handle_request(pr_url, "review")
             elif event == "pullrequest:comment_created":
                 pr_url = data["data"]["pullrequest"]["links"]["html"]["href"]
+                log_context["api_url"] = pr_url
+                log_context["event"] = "comment"
                 comment_body = data["data"]["comment"]["content"]["raw"]
-                await agent.handle_request(pr_url, comment_body)
+                with get_logger().contextualize(**log_context):
+                    await agent.handle_request(pr_url, comment_body)
         except Exception as e:
-            logging.error(f"Failed to handle webhook: {e}")
+            get_logger().error(f"Failed to handle webhook: {e}")
     background_tasks.add_task(inner)
     return "OK"
@@ -103,9 +110,10 @@ async def handle_github_webhooks(request: Request, response: Response):
 @router.post("/installed")
 async def handle_installed_webhooks(request: Request, response: Response):
     try:
-        print(request.headers)
+        get_logger().info("handle_installed_webhooks")
+        get_logger().info(request.headers)
         data = await request.json()
-        print(data)
+        get_logger().info(data)
         shared_secret = data["sharedSecret"]
         client_key = data["clientKey"]
         username = data["principal"]["username"]
@@ -115,13 +123,15 @@ async def handle_installed_webhooks(request: Request, response: Response):
         }
         secret_provider.store_secret(username, json.dumps(secrets))
     except Exception as e:
-        logging.error(f"Failed to register user: {e}")
+        get_logger().error(f"Failed to register user: {e}")
         return JSONResponse({"error": "Unable to register user"}, status_code=500)

 @router.post("/uninstalled")
 async def handle_uninstalled_webhooks(request: Request, response: Response):
+    get_logger().info("handle_uninstalled_webhooks")
     data = await request.json()
-    print(data)
+    get_logger().info(data)

 def start():

@@ -1,34 +0,0 @@
-import os
-
-from pr_agent.agent.pr_agent import PRAgent
-from pr_agent.config_loader import get_settings
-from pr_agent.tools.pr_reviewer import PRReviewer
-import asyncio
-
-
-async def run_action():
-    try:
-        pull_request_id = os.environ.get("BITBUCKET_PR_ID", '')
-        slug = os.environ.get("BITBUCKET_REPO_SLUG", '')
-        workspace = os.environ.get("BITBUCKET_WORKSPACE", '')
-        bearer_token = os.environ.get('BITBUCKET_BEARER_TOKEN', None)
-        OPENAI_KEY = os.environ.get('OPENAI_API_KEY') or os.environ.get('OPENAI.KEY')
-        OPENAI_ORG = os.environ.get('OPENAI_ORG') or os.environ.get('OPENAI.ORG')
-        # Check if required environment variables are set
-        if not bearer_token:
-            print("BITBUCKET_BEARER_TOKEN not set")
-            return
-        if not OPENAI_KEY:
-            print("OPENAI_KEY not set")
-            return
-        # Set the environment variables in the settings
-        get_settings().set("BITBUCKET.BEARER_TOKEN", bearer_token)
-        get_settings().set("OPENAI.KEY", OPENAI_KEY)
-        if OPENAI_ORG:
-            get_settings().set("OPENAI.ORG", OPENAI_ORG)
-        if pull_request_id and slug and workspace:
-            pr_url = f"https://bitbucket.org/{workspace}/{slug}/pull-requests/{pull_request_id}"
-            await PRReviewer(pr_url).run()
-    except Exception as e:
-        print(f"An error occurred: {e}")
-
-if __name__ == "__main__":
-    asyncio.run(run_action())

@@ -1,6 +1,4 @@
 import copy
-import logging
-import sys
 from enum import Enum
 from json import JSONDecodeError
@@ -12,9 +10,10 @@ from starlette_context import context
 from starlette_context.middleware import RawContextMiddleware

 from pr_agent.agent.pr_agent import PRAgent
-from pr_agent.config_loader import global_settings, get_settings
+from pr_agent.config_loader import get_settings, global_settings
+from pr_agent.log import get_logger, setup_logger

-logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
+setup_logger()
 router = APIRouter()
@@ -35,7 +34,7 @@ class Item(BaseModel):

 @router.post("/api/v1/gerrit/{action}")
 async def handle_gerrit_request(action: Action, item: Item):
-    logging.debug("Received a Gerrit request")
+    get_logger().debug("Received a Gerrit request")
     context["settings"] = copy.deepcopy(global_settings)

     if action == Action.ask:
@@ -54,7 +53,7 @@ async def get_body(request):
     try:
         body = await request.json()
     except JSONDecodeError as e:
-        logging.error("Error parsing request body", e)
+        get_logger().error("Error parsing request body", e)
         return {}
     return body

@@ -1,9 +1,7 @@
 import copy
-import logging
-import sys
 import os
-import time
-from typing import Any, Dict
+import asyncio.locks
+from typing import Any, Dict, List, Tuple

 import uvicorn
 from fastapi import APIRouter, FastAPI, HTTPException, Request, Response
@@ -15,9 +13,13 @@ from pr_agent.agent.pr_agent import PRAgent
 from pr_agent.algo.utils import update_settings_from_args
 from pr_agent.config_loader import get_settings, global_settings
 from pr_agent.git_providers import get_git_provider
-from pr_agent.servers.utils import verify_signature
+from pr_agent.git_providers.utils import apply_repo_settings
+from pr_agent.git_providers.git_provider import IncrementalPR
+from pr_agent.log import LoggingFormat, get_logger, setup_logger
+from pr_agent.servers.utils import verify_signature, DefaultDictWithTimeout

-logging.basicConfig(stream=sys.stdout, level=logging.INFO)
+setup_logger(fmt=LoggingFormat.JSON)

 router = APIRouter()
@@ -28,11 +30,11 @@ async def handle_github_webhooks(request: Request, response: Response):
     Verifies the request signature, parses the request body, and passes it to the handle_request function for further
     processing.
     """
-    logging.debug("Received a GitHub webhook")
+    get_logger().debug("Received a GitHub webhook")

     body = await get_body(request)

-    logging.debug(f'Request body:\n{body}')
+    get_logger().debug(f'Request body:\n{body}')
     installation_id = body.get("installation", {}).get("id")
     context["installation_id"] = installation_id
     context["settings"] = copy.deepcopy(global_settings)
@@ -44,13 +46,14 @@ async def handle_github_webhooks(request: Request, response: Response):
 @router.post("/api/v1/marketplace_webhooks")
 async def handle_marketplace_webhooks(request: Request, response: Response):
     body = await get_body(request)
-    logging.info(f'Request body:\n{body}')
+    get_logger().info(f'Request body:\n{body}')

 async def get_body(request):
     try:
         body = await request.json()
     except Exception as e:
-        logging.error("Error parsing request body", e)
+        get_logger().error("Error parsing request body", e)
         raise HTTPException(status_code=400, detail="Error parsing request body") from e
     webhook_secret = getattr(get_settings().github, 'webhook_secret', None)
     if webhook_secret:
@@ -60,7 +63,9 @@ async def get_body(request):
     return body

-_duplicate_requests_cache = {}
+_duplicate_requests_cache = DefaultDictWithTimeout(ttl=get_settings().github_app.duplicate_requests_cache_ttl)
+_duplicate_push_triggers = DefaultDictWithTimeout(ttl=get_settings().github_app.push_trigger_pending_tasks_ttl)
+_pending_task_duplicate_push_conditions = DefaultDictWithTimeout(asyncio.locks.Condition, ttl=get_settings().github_app.push_trigger_pending_tasks_ttl)

 async def handle_request(body: Dict[str, Any], event: str):
@@ -76,8 +81,8 @@ async def handle_request(body: Dict[str, Any], event: str):
         return {}
     agent = PRAgent()
     bot_user = get_settings().github_app.bot_user
-    logging.info(f"action: '{action}'")
-    logging.info(f"event: '{event}'")
+    sender = body.get("sender", {}).get("login")
+    log_context = {"action": action, "event": event, "sender": sender, "server_type": "github_app"}

     if get_settings().github_app.duplicate_requests_cache and _is_duplicate_request(body):
         return {}
@@ -87,73 +92,142 @@ async def handle_request(body: Dict[str, Any], event: str):
         if "comment" not in body:
             return {}
         comment_body = body.get("comment", {}).get("body")
-        sender = body.get("sender", {}).get("login")
         if sender and bot_user in sender:
-            logging.info(f"Ignoring comment from {bot_user} user")
+            get_logger().info(f"Ignoring comment from {bot_user} user")
             return {}
-        logging.info(f"Processing comment from {sender} user")
+        get_logger().info(f"Processing comment from {sender} user")
         if "issue" in body and "pull_request" in body["issue"] and "url" in body["issue"]["pull_request"]:
             api_url = body["issue"]["pull_request"]["url"]
         elif "comment" in body and "pull_request_url" in body["comment"]:
             api_url = body["comment"]["pull_request_url"]
         else:
             return {}
-        logging.info(body)
-        logging.info(f"Handling comment because of event={event} and action={action}")
+        log_context["api_url"] = api_url
+        get_logger().info(body)
+        get_logger().info(f"Handling comment because of event={event} and action={action}")
         comment_id = body.get("comment", {}).get("id")
         provider = get_git_provider()(pr_url=api_url)
-        await agent.handle_request(api_url, comment_body, notify=lambda: provider.add_eyes_reaction(comment_id))
+        with get_logger().contextualize(**log_context):
+            await agent.handle_request(api_url, comment_body, notify=lambda: provider.add_eyes_reaction(comment_id))

     # handle pull_request event:
     #   automatically review opened/reopened/ready_for_review PRs as long as they're not in draft,
     #   as well as direct review requests from the bot
-    elif event == 'pull_request':
-        pull_request = body.get("pull_request")
-        if not pull_request:
-            return {}
-        api_url = pull_request.get("url")
-        if not api_url:
-            return {}
-        if pull_request.get("draft", True) or pull_request.get("state") != "open" or pull_request.get("user", {}).get("login", "") == bot_user:
+    elif event == 'pull_request' and action != 'synchronize':
+        pull_request, api_url = _check_pull_request_event(action, body, log_context, bot_user)
+        if not (pull_request and api_url):
             return {}
         if action in get_settings().github_app.handle_pr_actions:
             if action == "review_requested":
                 if body.get("requested_reviewer", {}).get("login", "") != bot_user:
                     return {}
-            if pull_request.get("created_at") == pull_request.get("updated_at"):
-                # avoid double reviews when opening a PR for the first time
-                return {}
-            logging.info(f"Performing review because of event={event} and action={action}")
-            for command in get_settings().github_app.pr_commands:
-                split_command = command.split(" ")
-                command = split_command[0]
-                args = split_command[1:]
-                other_args = update_settings_from_args(args)
-                new_command = ' '.join([command] + other_args)
-                logging.info(body)
-                logging.info(f"Performing command: {new_command}")
-                await agent.handle_request(api_url, new_command)
+            get_logger().info(f"Performing review for {api_url=} because of {event=} and {action=}")
+            await _perform_commands(get_settings().github_app.pr_commands, agent, body, api_url, log_context)

-    logging.info("event or action does not require handling")
+    # handle pull_request event with synchronize action - "push trigger" for new commits
+    elif event == 'pull_request' and action == 'synchronize' and get_settings().github_app.handle_push_trigger:
+        pull_request, api_url = _check_pull_request_event(action, body, log_context, bot_user)
+        if not (pull_request and api_url):
+            return {}
+
+        # TODO: do we still want to get the list of commits to filter bot/merge commits?
+        before_sha = body.get("before")
+        after_sha = body.get("after")
+        merge_commit_sha = pull_request.get("merge_commit_sha")
+        if before_sha == after_sha:
+            return {}
+        if get_settings().github_app.push_trigger_ignore_merge_commits and after_sha == merge_commit_sha:
+            return {}
+        if get_settings().github_app.push_trigger_ignore_bot_commits and body.get("sender", {}).get("login", "") == bot_user:
+            return {}
+
+        # Prevent triggering multiple times for subsequent push triggers when one is enough:
+        # The first push will trigger the processing, and if there's a second push in the meanwhile it will wait.
+        # Any more events will be discarded, because they will all trigger the exact same processing on the PR.
+        # We let the second event wait instead of discarding it because while the first event was being processed,
+        # more commits may have been pushed that led to the subsequent events,
+        # so we keep just one waiting as a delegate to trigger the processing for the new commits when done waiting.
+        current_active_tasks = _duplicate_push_triggers.setdefault(api_url, 0)
+        max_active_tasks = 2 if get_settings().github_app.push_trigger_pending_tasks_backlog else 1
+        if current_active_tasks < max_active_tasks:
+            # first task can enter, and second tasks too if backlog is enabled
+            get_logger().info(
+                f"Continue processing push trigger for {api_url=} because there are {current_active_tasks} active tasks"
+            )
+            _duplicate_push_triggers[api_url] += 1
+        else:
+            get_logger().info(
+                f"Skipping push trigger for {api_url=} because another event already triggered the same processing"
+            )
+            return {}
+        async with _pending_task_duplicate_push_conditions[api_url]:
+            if current_active_tasks == 1:
+                # second task waits
+                get_logger().info(
+                    f"Waiting to process push trigger for {api_url=} because the first task is still in progress"
+                )
+                await _pending_task_duplicate_push_conditions[api_url].wait()
+                get_logger().info(f"Finished waiting to process push trigger for {api_url=} - continue with flow")
+
+        try:
+            if get_settings().github_app.push_trigger_wait_for_initial_review and not get_git_provider()(api_url, incremental=IncrementalPR(True)).previous_review:
+                get_logger().info(f"Skipping incremental review because there was no initial review for {api_url=} yet")
+                return {}
+            get_logger().info(f"Performing incremental review for {api_url=} because of {event=} and {action=}")
+            await _perform_commands(get_settings().github_app.push_commands, agent, body, api_url, log_context)
+        finally:
+            # release the waiting task block
+            async with _pending_task_duplicate_push_conditions[api_url]:
+                _pending_task_duplicate_push_conditions[api_url].notify(1)
+                _duplicate_push_triggers[api_url] -= 1
+
+    get_logger().info("event or action does not require handling")
     return {}

+def _check_pull_request_event(action: str, body: dict, log_context: dict, bot_user: str) -> Tuple[Dict[str, Any], str]:
+    invalid_result = {}, ""
+    pull_request = body.get("pull_request")
+    if not pull_request:
+        return invalid_result
+    api_url = pull_request.get("url")
+    if not api_url:
+        return invalid_result
+    log_context["api_url"] = api_url
+    if pull_request.get("draft", True) or pull_request.get("state") != "open" or pull_request.get("user", {}).get("login", "") == bot_user:
+        return invalid_result
+    if action in ("review_requested", "synchronize") and pull_request.get("created_at") == pull_request.get("updated_at"):
+        # avoid double reviews when opening a PR for the first time
+        return invalid_result
+    return pull_request, api_url

+async def _perform_commands(commands: List[str], agent: PRAgent, body: dict, api_url: str, log_context: dict):
+    apply_repo_settings(api_url)
+    for command in commands:
+        split_command = command.split(" ")
+        command = split_command[0]
+        args = split_command[1:]
+        other_args = update_settings_from_args(args)
+        new_command = ' '.join([command] + other_args)
+        get_logger().info(body)
+        get_logger().info(f"Performing command: {new_command}")
+        with get_logger().contextualize(**log_context):
+            await agent.handle_request(api_url, new_command)

 def _is_duplicate_request(body: Dict[str, Any]) -> bool:
     """
     In some deployments its possible to get duplicate requests if the handling is long,
     This function checks if the request is duplicate and if so - ignores it.
     """
     request_hash = hash(str(body))
-    logging.info(f"request_hash: {request_hash}")
-    request_time = time.monotonic()
-    ttl = get_settings().github_app.duplicate_requests_cache_ttl  # in seconds
-    to_delete = [key for key, key_time in _duplicate_requests_cache.items() if request_time - key_time > ttl]
-    for key in to_delete:
-        del _duplicate_requests_cache[key]
-    is_duplicate = request_hash in _duplicate_requests_cache
-    _duplicate_requests_cache[request_hash] = request_time
+    get_logger().info(f"request_hash: {request_hash}")
+    is_duplicate = _duplicate_requests_cache.get(request_hash, False)
+    _duplicate_requests_cache[request_hash] = True
     if is_duplicate:
-        logging.info(f"Ignoring duplicate request {request_hash}")
+        get_logger().info(f"Ignoring duplicate request {request_hash}")
     return is_duplicate
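
The backlog guard in handle_request above is the subtlest part of this change. A distilled sketch of the same pattern in isolation, assuming one counter and one asyncio.Condition per PR URL; handle_push and process are illustrative names, not the module's API:

import asyncio
from collections import defaultdict

_active = defaultdict(int)                    # in-flight tasks per key
_conditions = defaultdict(asyncio.Condition)  # one condition variable per key

async def process(key: str):
    ...  # placeholder for the actual review commands

async def handle_push(key: str):
    if _active[key] >= 2:        # one runner plus one waiter is enough
        return                   # further events would repeat the same work
    was_first = _active[key] == 0
    _active[key] += 1
    async with _conditions[key]:
        if not was_first:        # the second task waits for the runner to finish
            await _conditions[key].wait()
    try:
        await process(key)       # may now cover commits from several pushes
    finally:
        async with _conditions[key]:
            _conditions[key].notify(1)  # wake the waiter, if any
        _active[key] -= 1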

@@ -1,6 +1,4 @@
 import asyncio
-import logging
-import sys
 from datetime import datetime, timezone

 import aiohttp
@@ -8,9 +6,10 @@ import aiohttp
 from pr_agent.agent.pr_agent import PRAgent
 from pr_agent.config_loader import get_settings
 from pr_agent.git_providers import get_git_provider
+from pr_agent.log import LoggingFormat, get_logger, setup_logger
 from pr_agent.servers.help import bot_help_text

-logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
+setup_logger(fmt=LoggingFormat.JSON)

 NOTIFICATION_URL = "https://api.github.com/notifications"
@@ -94,7 +93,7 @@ async def polling_loop():
                             comment_body = comment['body'] if 'body' in comment else ''
                             commenter_github_user = comment['user']['login'] \
                                 if 'user' in comment else ''
-                            logging.info(f"Commenter: {commenter_github_user}\nComment: {comment_body}")
+                            get_logger().info(f"Commenter: {commenter_github_user}\nComment: {comment_body}")
                             user_tag = "@" + user_id
                             if user_tag not in comment_body:
                                 continue
@@ -112,7 +111,7 @@ async def polling_loop():
                     print(f"Failed to fetch notifications. Status code: {response.status}")
         except Exception as e:
-            logging.error(f"Exception during processing of a notification: {e}")
+            get_logger().error(f"Exception during processing of a notification: {e}")

 if __name__ == '__main__':

@@ -1,7 +1,5 @@
 import copy
 import json
-import logging
-import sys

 import uvicorn
 from fastapi import APIRouter, FastAPI, Request, status
@@ -14,26 +12,37 @@ from starlette_context.middleware import RawContextMiddleware

 from pr_agent.agent.pr_agent import PRAgent
 from pr_agent.config_loader import get_settings, global_settings
+from pr_agent.log import LoggingFormat, get_logger, setup_logger
 from pr_agent.secret_providers import get_secret_provider

-logging.basicConfig(stream=sys.stdout, level=logging.INFO)
+setup_logger(fmt=LoggingFormat.JSON)
 router = APIRouter()
 secret_provider = get_secret_provider() if get_settings().get("CONFIG.SECRET_PROVIDER") else None

+def handle_request(background_tasks: BackgroundTasks, url: str, body: str, log_context: dict):
+    log_context["action"] = body
+    log_context["event"] = "pull_request" if body == "/review" else "comment"
+    log_context["api_url"] = url
+    with get_logger().contextualize(**log_context):
+        background_tasks.add_task(PRAgent().handle_request, url, body)

 @router.post("/webhook")
 async def gitlab_webhook(background_tasks: BackgroundTasks, request: Request):
+    log_context = {"server_type": "gitlab_app"}
     if request.headers.get("X-Gitlab-Token") and secret_provider:
         request_token = request.headers.get("X-Gitlab-Token")
         secret = secret_provider.get_secret(request_token)
         try:
             secret_dict = json.loads(secret)
             gitlab_token = secret_dict["gitlab_token"]
+            log_context["sender"] = secret_dict["id"]
             context["settings"] = copy.deepcopy(global_settings)
             context["settings"].gitlab.personal_access_token = gitlab_token
         except Exception as e:
-            logging.error(f"Failed to validate secret {request_token}: {e}")
+            get_logger().error(f"Failed to validate secret {request_token}: {e}")
             return JSONResponse(status_code=status.HTTP_401_UNAUTHORIZED, content=jsonable_encoder({"message": "unauthorized"}))
     elif get_settings().get("GITLAB.SHARED_SECRET"):
         secret = get_settings().get("GITLAB.SHARED_SECRET")
@@ -45,17 +54,17 @@ async def gitlab_webhook(background_tasks: BackgroundTasks, request: Request):
     if not gitlab_token:
         return JSONResponse(status_code=status.HTTP_401_UNAUTHORIZED, content=jsonable_encoder({"message": "unauthorized"}))
     data = await request.json()
-    logging.info(json.dumps(data))
+    get_logger().info(json.dumps(data))
     if data.get('object_kind') == 'merge_request' and data['object_attributes'].get('action') in ['open', 'reopen']:
-        logging.info(f"A merge request has been opened: {data['object_attributes'].get('title')}")
+        get_logger().info(f"A merge request has been opened: {data['object_attributes'].get('title')}")
         url = data['object_attributes'].get('url')
-        background_tasks.add_task(PRAgent().handle_request, url, "/review")
+        handle_request(background_tasks, url, "/review")
     elif data.get('object_kind') == 'note' and data['event_type'] == 'note':
         if 'merge_request' in data:
             mr = data['merge_request']
             url = mr.get('url')
             body = data.get('object_attributes', {}).get('note')
-            background_tasks.add_task(PRAgent().handle_request, url, body)
+            handle_request(background_tasks, url, body)
     return JSONResponse(status_code=status.HTTP_200_OK, content=jsonable_encoder({"message": "success"}))

@@ -1,12 +1,10 @@
-import logging
-
 from fastapi import FastAPI
 from mangum import Mangum

+from pr_agent.log import setup_logger
 from pr_agent.servers.github_app import router

-logger = logging.getLogger()
-logger.setLevel(logging.DEBUG)
+setup_logger()

 app = FastAPI()
 app.include_router(router)

@@ -1,5 +1,8 @@
 import hashlib
 import hmac
+import time
+from collections import defaultdict
+from typing import Callable, Any

 from fastapi import HTTPException
@@ -25,3 +28,59 @@ def verify_signature(payload_body, secret_token, signature_header):

 class RateLimitExceeded(Exception):
     """Raised when the git provider API rate limit has been exceeded."""
     pass

+class DefaultDictWithTimeout(defaultdict):
+    """A defaultdict with a time-to-live (TTL)."""
+
+    def __init__(
+        self,
+        default_factory: Callable[[], Any] = None,
+        ttl: int = None,
+        refresh_interval: int = 60,
+        update_key_time_on_get: bool = True,
+        *args,
+        **kwargs,
+    ):
+        """
+        Args:
+            default_factory: The default factory to use for keys that are not in the dictionary.
+            ttl: The time-to-live (TTL) in seconds.
+            refresh_interval: How often to refresh the dict and delete items older than the TTL.
+            update_key_time_on_get: Whether to update the access time of a key also on get (or only when set).
+        """
+        super().__init__(default_factory, *args, **kwargs)
+        self.__key_times = dict()
+        self.__ttl = ttl
+        self.__refresh_interval = refresh_interval
+        self.__update_key_time_on_get = update_key_time_on_get
+        self.__last_refresh = self.__time() - self.__refresh_interval
+
+    @staticmethod
+    def __time():
+        return time.monotonic()
+
+    def __refresh(self):
+        if self.__ttl is None:
+            return
+        request_time = self.__time()
+        # skip the purge if it already ran within the last refresh interval
+        if request_time - self.__last_refresh < self.__refresh_interval:
+            return
+        to_delete = [key for key, key_time in self.__key_times.items() if request_time - key_time > self.__ttl]
+        for key in to_delete:
+            del self[key]
+        self.__last_refresh = request_time
+
+    def __getitem__(self, __key):
+        if self.__update_key_time_on_get:
+            self.__key_times[__key] = self.__time()
+        self.__refresh()
+        return super().__getitem__(__key)
+
+    def __setitem__(self, __key, __value):
+        self.__key_times[__key] = self.__time()
+        return super().__setitem__(__key, __value)
+
+    def __delitem__(self, __key):
+        del self.__key_times[__key]
+        return super().__delitem__(__key)
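
A small usage sketch of the class above; the key names and timings are illustrative:

import time

cache = DefaultDictWithTimeout(ttl=2, refresh_interval=1)
cache["abc123"] = True                      # remember that this request was seen
time.sleep(3)                               # let the entry outlive its TTL
cache["other"] = True
_ = cache["other"]                          # item reads trigger the periodic purge
assert cache.get("abc123", False) is False  # the expired entry is gone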

@@ -24,6 +24,7 @@ num_code_suggestions=4
 inline_code_comments = false
 ask_and_reflect=false
 automatic_review=true
+remove_previous_review_comment=false
 extra_instructions = ""

 [pr_description] # /describe #
@@ -31,6 +32,7 @@ publish_labels=true
 publish_description_as_comment=false
 add_original_user_description=false
 keep_original_user_title=false
+use_bullet_points=true
 extra_instructions = ""

 # markers
 use_description_markers=false
@@ -82,6 +84,27 @@ pr_commands = [
     "/describe --pr_description.add_original_user_description=true --pr_description.keep_original_user_title=true",
     "/auto_review",
 ]
+# settings for "pull_request" event with "synchronize" action - used to detect and handle push triggers for new commits
+handle_push_trigger = false
+push_trigger_ignore_bot_commits = true
+push_trigger_ignore_merge_commits = true
+push_trigger_wait_for_initial_review = true
+push_trigger_pending_tasks_backlog = true
+push_trigger_pending_tasks_ttl = 300
+push_commands = [
+    "/describe --pr_description.add_original_user_description=true --pr_description.keep_original_user_title=true",
+    """/auto_review -i \
+    --pr_reviewer.require_focused_review=false \
+    --pr_reviewer.require_score_review=false \
+    --pr_reviewer.require_tests_review=false \
+    --pr_reviewer.require_security_review=false \
+    --pr_reviewer.require_estimate_effort_to_review=false \
+    --pr_reviewer.num_code_suggestions=0 \
+    --pr_reviewer.inline_code_comments=false \
+    --pr_reviewer.remove_previous_review_comment=true \
+    --pr_reviewer.extra_instructions='' \
+    """
+]

 [gitlab]
 # URL to the gitlab service
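
These keys are read through the same Dynaconf-backed accessor the servers use; a minimal sketch (the loop body is illustrative, not the app's full flow):

from pr_agent.config_loader import get_settings

settings = get_settings()
if settings.github_app.handle_push_trigger:  # off by default, as configured above
    ttl = settings.github_app.push_trigger_pending_tasks_ttl  # 300 seconds here
    for command in settings.github_app.push_commands:
        ...  # each entry is dispatched like a chat command, e.g. "/describe ..."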

@@ -433,3 +433,6 @@ reStructuredText = [".rst", ".rest", ".rest.txt", ".rst.txt", ]
 wisp = [".wisp", ]
 xBase = [".prg", ".prw", ]
+
+[docs_blacklist_extensions]
+# Disable docs for these extensions of text files and scripts that are not programming languages with functions, classes, and methods
+docs_blacklist = ['sql', 'txt', 'yaml', 'json', 'xml', 'md', 'rst', 'rest', 'rest.txt', 'rst.txt', 'mdpolicy', 'mdown', 'markdown', 'mdwn', 'mkd', 'mkdn', 'mkdown', 'sh']

@@ -31,7 +31,8 @@ PR Type:
     - Other
 PR Description:
     type: string
-    description: an informative and concise description of the PR
+    description: an informative and concise description of the PR.
+      {%- if use_bullet_points %} Use bullet points. {% endif %}
 PR Main Files Walkthrough:
     type: array
     maxItems: 10
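
The use_bullet_points flag lands in the template as a plain Jinja2 conditional. A minimal rendering sketch, using the same Environment/StrictUndefined pattern that appears in pr_add_docs.py later in this diff:

from jinja2 import Environment, StrictUndefined

template = ("description: an informative and concise description of the PR."
            "{%- if use_bullet_points %} Use bullet points. {% endif %}")
environment = Environment(undefined=StrictUndefined)
print(environment.from_string(template).render(use_bullet_points=True))
# prints roughly: description: an informative and concise description of the PR. Use bullet points.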

@@ -25,7 +25,7 @@ code line that already existed in the file....
 The review should focus on new code added in the PR (lines starting with '+'), and not on code that already existed in the file (lines starting with '-', or without prefix).
 {%- if num_code_suggestions > 0 %}
-- Provide up to {{ num_code_suggestions }} code suggestions.
+- Provide up to {{ num_code_suggestions }} code suggestions. Try to provide diverse and insightful suggestions.
 - Focus on important suggestions like fixing code problems, issues and bugs. As a second priority, provide suggestions for meaningful code improvements, like performance, vulnerability, modularity, and best practices.
 - Avoid making suggestions that have already been implemented in the PR code. For example, if you want to add logs, or change a variable to const, or anything else, make sure it isn't already in the PR code.
 - Don't suggest to add docstring, type hints, or comments.
@@ -99,10 +99,10 @@ PR Feedback:
     General suggestions:
         type: string
         description: |-
-            General suggestions and feedback for the contributors and maintainers of
-            this PR. May include important suggestions for the overall structure,
-            primary purpose, best practices, critical bugs, and other aspects of the
-            PR. Don't address PR title and description, or lack of tests. Explain your suggestions.
+            General suggestions and feedback for the contributors and maintainers of this PR.
+            May include important suggestions for the overall structure,
+            primary purpose, best practices, critical bugs, and other aspects of the PR.
+            Don't address PR title and description, or lack of tests. Explain your suggestions.
 {%- if num_code_suggestions > 0 %}
     Code feedback:
         type: array
@@ -115,11 +115,10 @@ PR Feedback:
             suggestion:
                 type: string
                 description: |-
-                    a concrete suggestion for meaningfully improving the new PR code. Also
-                    describe how, specifically, the suggestion can be applied to new PR
-                    code. Add tags with importance measure that matches each suggestion
-                    ('important' or 'medium'). Do not make suggestions for updating or
-                    adding docstrings, renaming PR title and description, or linter like.
+                    a concrete suggestion for meaningfully improving the new PR code.
+                    Also describe how, specifically, the suggestion can be applied to new PR code.
+                    Add tags with importance measure that matches each suggestion ('important' or 'medium').
+                    Do not make suggestions for updating or adding docstrings, renaming PR title and description, or linter like.
             relevant line:
                 type: string
                 description: |-

@@ -1,16 +1,17 @@
 import copy
-import logging
 import textwrap
-from typing import List, Dict
+from typing import Dict

 from jinja2 import Environment, StrictUndefined

 from pr_agent.algo.ai_handler import AiHandler
-from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models, get_pr_multi_diffs
+from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models
 from pr_agent.algo.token_handler import TokenHandler
 from pr_agent.algo.utils import load_yaml
 from pr_agent.config_loader import get_settings
-from pr_agent.git_providers import BitbucketProvider, get_git_provider
+from pr_agent.git_providers import get_git_provider
 from pr_agent.git_providers.git_provider import get_main_pr_language
+from pr_agent.log import get_logger

 class PRAddDocs:
@@ -43,34 +44,39 @@ class PRAddDocs:

     async def run(self):
         try:
-            logging.info('Generating code Docs for PR...')
+            get_logger().info('Generating code Docs for PR...')
             if get_settings().config.publish_output:
                 self.git_provider.publish_comment("Generating Documentation...", is_temporary=True)

-            logging.info('Preparing PR documentation...')
+            get_logger().info('Preparing PR documentation...')
             await retry_with_fallback_models(self._prepare_prediction)
             data = self._prepare_pr_code_docs()
             if (not data) or (not 'Code Documentation' in data):
-                logging.info('No code documentation found for PR.')
+                get_logger().info('No code documentation found for PR.')
                 return

             if get_settings().config.publish_output:
-                logging.info('Pushing PR documentation...')
+                get_logger().info('Pushing PR documentation...')
                 self.git_provider.remove_initial_comment()
-                logging.info('Pushing inline code documentation...')
+                get_logger().info('Pushing inline code documentation...')
                 self.push_inline_docs(data)
         except Exception as e:
-            logging.error(f"Failed to generate code documentation for PR, error: {e}")
+            get_logger().error(f"Failed to generate code documentation for PR, error: {e}")

     async def _prepare_prediction(self, model: str):
-        logging.info('Getting PR diff...')
+        get_logger().info('Getting PR diff...')
+        # Disable adding docs to scripts and other non-relevant text files
+        from pr_agent.algo.language_handler import bad_extensions
+        bad_extensions += get_settings().docs_blacklist_extensions.docs_blacklist
+
         self.patches_diff = get_pr_diff(self.git_provider,
                                         self.token_handler,
                                         model,
                                         add_line_numbers_to_hunks=True,
                                         disable_extra_lines=False)

-        logging.info('Getting AI prediction...')
+        get_logger().info('Getting AI prediction...')
         self.prediction = await self._get_prediction(model)

     async def _get_prediction(self, model: str):
@@ -80,8 +86,8 @@ class PRAddDocs:
         system_prompt = environment.from_string(get_settings().pr_add_docs_prompt.system).render(variables)
         user_prompt = environment.from_string(get_settings().pr_add_docs_prompt.user).render(variables)
         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"\nSystem prompt:\n{system_prompt}")
-            logging.info(f"\nUser prompt:\n{user_prompt}")
+            get_logger().info(f"\nSystem prompt:\n{system_prompt}")
+            get_logger().info(f"\nUser prompt:\n{user_prompt}")
         response, finish_reason = await self.ai_handler.chat_completion(model=model, temperature=0.2,
                                                                         system=system_prompt, user=user_prompt)

@@ -103,7 +109,7 @@ class PRAddDocs:
         for d in data['Code Documentation']:
             try:
                 if get_settings().config.verbosity_level >= 2:
-                    logging.info(f"add_docs: {d}")
+                    get_logger().info(f"add_docs: {d}")
                 relevant_file = d['relevant file'].strip()
                 relevant_line = int(d['relevant line'])  # absolute position
                 documentation = d['documentation']
@@ -118,11 +124,11 @@ class PRAddDocs:
                                  'relevant_lines_end': relevant_line})
             except Exception:
                 if get_settings().config.verbosity_level >= 2:
-                    logging.info(f"Could not parse code docs: {d}")
+                    get_logger().info(f"Could not parse code docs: {d}")

         is_successful = self.git_provider.publish_code_suggestions(docs)
         if not is_successful:
-            logging.info("Failed to publish code docs, trying to publish each docs separately")
+            get_logger().info("Failed to publish code docs, trying to publish each docs separately")
             for doc_suggestion in docs:
                 self.git_provider.publish_code_suggestions([doc_suggestion])

@@ -154,7 +160,7 @@ class PRAddDocs:
             new_code_snippet = new_code_snippet.rstrip() + "\n" + original_initial_line
         except Exception as e:
             if get_settings().config.verbosity_level >= 2:
-                logging.info(f"Could not dedent code snippet for file {relevant_file}, error: {e}")
+                get_logger().info(f"Could not dedent code snippet for file {relevant_file}, error: {e}")

         return new_code_snippet

View File

@@ -1,16 +1,17 @@
 import copy
-import logging
 import textwrap
-from typing import List, Dict
+from typing import Dict, List

 from jinja2 import Environment, StrictUndefined

 from pr_agent.algo.ai_handler import AiHandler
-from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models, get_pr_multi_diffs
+from pr_agent.algo.pr_processing import get_pr_diff, get_pr_multi_diffs, retry_with_fallback_models
 from pr_agent.algo.token_handler import TokenHandler
 from pr_agent.algo.utils import load_yaml
 from pr_agent.config_loader import get_settings
-from pr_agent.git_providers import BitbucketProvider, get_git_provider
+from pr_agent.git_providers import get_git_provider
 from pr_agent.git_providers.git_provider import get_main_pr_language
+from pr_agent.log import get_logger

 class PRCodeSuggestions:
@@ -52,42 +53,42 @@ class PRCodeSuggestions:

     async def run(self):
        try:
-            logging.info('Generating code suggestions for PR...')
+            get_logger().info('Generating code suggestions for PR...')
             if get_settings().config.publish_output:
                 self.git_provider.publish_comment("Preparing review...", is_temporary=True)

-            logging.info('Preparing PR review...')
+            get_logger().info('Preparing PR review...')
             if not self.is_extended:
                 await retry_with_fallback_models(self._prepare_prediction)
                 data = self._prepare_pr_code_suggestions()
             else:
                 data = await retry_with_fallback_models(self._prepare_prediction_extended)
             if (not data) or (not 'Code suggestions' in data):
-                logging.info('No code suggestions found for PR.')
+                get_logger().info('No code suggestions found for PR.')
                 return

             if (not self.is_extended and get_settings().pr_code_suggestions.rank_suggestions) or \
                     (self.is_extended and get_settings().pr_code_suggestions.rank_extended_suggestions):
-                logging.info('Ranking Suggestions...')
+                get_logger().info('Ranking Suggestions...')
                 data['Code suggestions'] = await self.rank_suggestions(data['Code suggestions'])

             if get_settings().config.publish_output:
-                logging.info('Pushing PR review...')
+                get_logger().info('Pushing PR review...')
                 self.git_provider.remove_initial_comment()
-                logging.info('Pushing inline code suggestions...')
+                get_logger().info('Pushing inline code suggestions...')
                 self.push_inline_code_suggestions(data)
         except Exception as e:
-            logging.error(f"Failed to generate code suggestions for PR, error: {e}")
+            get_logger().error(f"Failed to generate code suggestions for PR, error: {e}")

     async def _prepare_prediction(self, model: str):
-        logging.info('Getting PR diff...')
+        get_logger().info('Getting PR diff...')
         self.patches_diff = get_pr_diff(self.git_provider,
                                         self.token_handler,
                                         model,
                                         add_line_numbers_to_hunks=True,
                                         disable_extra_lines=True)

-        logging.info('Getting AI prediction...')
+        get_logger().info('Getting AI prediction...')
         self.prediction = await self._get_prediction(model)

     async def _get_prediction(self, model: str):
@@ -97,8 +98,8 @@ class PRCodeSuggestions:
         system_prompt = environment.from_string(get_settings().pr_code_suggestions_prompt.system).render(variables)
         user_prompt = environment.from_string(get_settings().pr_code_suggestions_prompt.user).render(variables)
         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"\nSystem prompt:\n{system_prompt}")
-            logging.info(f"\nUser prompt:\n{user_prompt}")
+            get_logger().info(f"\nSystem prompt:\n{system_prompt}")
+            get_logger().info(f"\nUser prompt:\n{user_prompt}")
         response, finish_reason = await self.ai_handler.chat_completion(model=model, temperature=0.2,
                                                                         system=system_prompt, user=user_prompt)

@@ -120,7 +121,7 @@ class PRCodeSuggestions:
         for d in data['Code suggestions']:
             try:
                 if get_settings().config.verbosity_level >= 2:
-                    logging.info(f"suggestion: {d}")
+                    get_logger().info(f"suggestion: {d}")
                 relevant_file = d['relevant file'].strip()
                 relevant_lines_start = int(d['relevant lines start'])  # absolute position
                 relevant_lines_end = int(d['relevant lines end'])
@@ -136,11 +137,11 @@ class PRCodeSuggestions:
                                          'relevant_lines_end': relevant_lines_end})
             except Exception:
                 if get_settings().config.verbosity_level >= 2:
-                    logging.info(f"Could not parse suggestion: {d}")
+                    get_logger().info(f"Could not parse suggestion: {d}")

         is_successful = self.git_provider.publish_code_suggestions(code_suggestions)
         if not is_successful:
-            logging.info("Failed to publish code suggestions, trying to publish each suggestion separately")
+            get_logger().info("Failed to publish code suggestions, trying to publish each suggestion separately")
             for code_suggestion in code_suggestions:
                 self.git_provider.publish_code_suggestions([code_suggestion])

@@ -162,19 +163,19 @@ class PRCodeSuggestions:
             new_code_snippet = textwrap.indent(new_code_snippet, delta_spaces * " ").rstrip('\n')
         except Exception as e:
             if get_settings().config.verbosity_level >= 2:
-                logging.info(f"Could not dedent code snippet for file {relevant_file}, error: {e}")
+                get_logger().info(f"Could not dedent code snippet for file {relevant_file}, error: {e}")

         return new_code_snippet

     async def _prepare_prediction_extended(self, model: str) -> dict:
-        logging.info('Getting PR diff...')
+        get_logger().info('Getting PR diff...')
         patches_diff_list = get_pr_multi_diffs(self.git_provider, self.token_handler, model,
                                                max_calls=get_settings().pr_code_suggestions.max_number_of_calls)

-        logging.info('Getting multi AI predictions...')
+        get_logger().info('Getting multi AI predictions...')
         prediction_list = []
         for i, patches_diff in enumerate(patches_diff_list):
-            logging.info(f"Processing chunk {i + 1} of {len(patches_diff_list)}")
+            get_logger().info(f"Processing chunk {i + 1} of {len(patches_diff_list)}")
             self.patches_diff = patches_diff
             prediction = await self._get_prediction(model)
             prediction_list.append(prediction)

@@ -222,8 +223,8 @@ class PRCodeSuggestions:
                                                                                                variables)
         user_prompt = environment.from_string(get_settings().pr_sort_code_suggestions_prompt.user).render(variables)
         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"\nSystem prompt:\n{system_prompt}")
-            logging.info(f"\nUser prompt:\n{user_prompt}")
+            get_logger().info(f"\nSystem prompt:\n{system_prompt}")
+            get_logger().info(f"\nUser prompt:\n{user_prompt}")

         response, finish_reason = await self.ai_handler.chat_completion(model=model, system=system_prompt,
                                                                         user=user_prompt)

@@ -238,7 +239,7 @@ class PRCodeSuggestions:
             data_sorted = data_sorted[:new_len]
         except Exception as e:
             if get_settings().config.verbosity_level >= 1:
-                logging.info(f"Could not sort suggestions, error: {e}")
+                get_logger().info(f"Could not sort suggestions, error: {e}")
             data_sorted = suggestion_list

         return data_sorted

View File

@@ -1,7 +1,6 @@
-import logging
-
 from pr_agent.config_loader import get_settings
 from pr_agent.git_providers import get_git_provider
+from pr_agent.log import get_logger

 class PRConfig:
@@ -19,11 +18,11 @@ class PRConfig:
         self.git_provider = get_git_provider()(pr_url)

     async def run(self):
-        logging.info('Getting configuration settings...')
-        logging.info('Preparing configs...')
+        get_logger().info('Getting configuration settings...')
+        get_logger().info('Preparing configs...')
         pr_comment = self._prepare_pr_configs()
         if get_settings().config.publish_output:
-            logging.info('Pushing configs...')
+            get_logger().info('Pushing configs...')
             self.git_provider.publish_comment(pr_comment)
             self.git_provider.remove_initial_comment()
         return ""
@@ -44,5 +43,5 @@ class PRConfig:
             comment_str += f"\n{header.lower()}.{key.lower()} = {repr(value) if isinstance(value, str) else value}"
         comment_str += " "
         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"comment_str:\n{comment_str}")
+            get_logger().info(f"comment_str:\n{comment_str}")
         return comment_str

View File

@@ -1,7 +1,5 @@
 import copy
-import json
 import re
-import logging
 from typing import List, Tuple

 from jinja2 import Environment, StrictUndefined
@@ -13,6 +11,7 @@ from pr_agent.algo.utils import load_yaml
 from pr_agent.config_loader import get_settings
 from pr_agent.git_providers import get_git_provider
 from pr_agent.git_providers.git_provider import get_main_pr_language
+from pr_agent.log import get_logger

 class PRDescription:
@@ -41,6 +40,7 @@ class PRDescription:
             "description": self.git_provider.get_pr_description(full=False),
             "language": self.main_pr_language,
             "diff": "",  # empty diff for initial calculation
+            "use_bullet_points": get_settings().pr_description.use_bullet_points,
             "extra_instructions": get_settings().pr_description.extra_instructions,
             "commit_messages_str": self.git_provider.get_commit_messages()
         }
@@ -65,13 +65,13 @@ class PRDescription:
         """
         try:
-            logging.info(f"Generating a PR description {self.pr_id}")
+            get_logger().info(f"Generating a PR description {self.pr_id}")
             if get_settings().config.publish_output:
                 self.git_provider.publish_comment("Preparing PR description...", is_temporary=True)

             await retry_with_fallback_models(self._prepare_prediction)

-            logging.info(f"Preparing answer {self.pr_id}")
+            get_logger().info(f"Preparing answer {self.pr_id}")
             if self.prediction:
                 self._prepare_data()
             else:
@@ -88,7 +88,7 @@ class PRDescription:
             full_markdown_description = f"## Title\n\n{pr_title}\n\n___\n{pr_body}"

             if get_settings().config.publish_output:
-                logging.info(f"Pushing answer {self.pr_id}")
+                get_logger().info(f"Pushing answer {self.pr_id}")
                 if get_settings().pr_description.publish_description_as_comment:
                     self.git_provider.publish_comment(full_markdown_description)
                 else:
@@ -100,7 +100,7 @@ class PRDescription:
                     self.git_provider.publish_labels(pr_labels + current_labels)
                 self.git_provider.remove_initial_comment()
         except Exception as e:
-            logging.error(f"Error generating PR description {self.pr_id}: {e}")
+            get_logger().error(f"Error generating PR description {self.pr_id}: {e}")

         return ""
@@ -121,9 +121,9 @@ class PRDescription:
         if get_settings().pr_description.use_description_markers and 'pr_agent:' not in self.user_description:
             return None

-        logging.info(f"Getting PR diff {self.pr_id}")
+        get_logger().info(f"Getting PR diff {self.pr_id}")
         self.patches_diff = get_pr_diff(self.git_provider, self.token_handler, model)
-        logging.info(f"Getting AI prediction {self.pr_id}")
+        get_logger().info(f"Getting AI prediction {self.pr_id}")
         self.prediction = await self._get_prediction(model)

     async def _get_prediction(self, model: str) -> str:
@@ -144,8 +144,8 @@ class PRDescription:
         user_prompt = environment.from_string(get_settings().pr_description_prompt.user).render(variables)

         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"\nSystem prompt:\n{system_prompt}")
-            logging.info(f"\nUser prompt:\n{user_prompt}")
+            get_logger().info(f"\nSystem prompt:\n{system_prompt}")
+            get_logger().info(f"\nUser prompt:\n{user_prompt}")

         response, finish_reason = await self.ai_handler.chat_completion(
             model=model,
@@ -178,7 +178,7 @@ class PRDescription:
         return pr_types

     def _prepare_pr_answer_with_markers(self) -> Tuple[str, str]:
-        logging.info(f"Using description marker replacements {self.pr_id}")
+        get_logger().info(f"Using description marker replacements {self.pr_id}")
         title = self.vars["title"]
         body = self.user_description
         if get_settings().pr_description.include_generated_by_header:
@@ -186,6 +186,11 @@ class PRDescription:
         else:
             ai_header = ""

+        ai_type = self.data.get('PR Type')
+        if ai_type and not re.search(r'<!--\s*pr_agent:type\s*-->', body):
+            pr_type = f"{ai_header}{ai_type}"
+            body = body.replace('pr_agent:type', pr_type)
+
         ai_summary = self.data.get('PR Description')
         if ai_summary and not re.search(r'<!--\s*pr_agent:summary\s*-->', body):
             summary = f"{ai_header}{ai_summary}"
@@ -252,6 +257,6 @@ class PRDescription:
             pr_body += "\n___\n"

         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"title:\n{title}\n{pr_body}")
+            get_logger().info(f"title:\n{title}\n{pr_body}")

         return title, pr_body
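
The new `pr_agent:type` branch mirrors the existing `pr_agent:summary` logic: the AI-generated value is substituted for the literal marker unless an HTML comment such as `<!-- pr_agent:type -->` shows the marker was already replaced. A small self-contained illustration of the mechanism (the body and data values here are invented for the example):

import re

body = "## Type\npr_agent:type\n\n## Summary\npr_agent:summary\n"  # invented example body
data = {'PR Type': 'Enhancement', 'PR Description': 'Handles push events automatically.'}
ai_header = ""

ai_type = data.get('PR Type')
if ai_type and not re.search(r'<!--\s*pr_agent:type\s*-->', body):
    body = body.replace('pr_agent:type', f"{ai_header}{ai_type}")

ai_summary = data.get('PR Description')
if ai_summary and not re.search(r'<!--\s*pr_agent:summary\s*-->', body):
    body = body.replace('pr_agent:summary', f"{ai_header}{ai_summary}")

print(body)  # both markers are now replaced with the generated values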

View File

@@ -1,5 +1,4 @@
 import copy
-import logging

 from jinja2 import Environment, StrictUndefined
@@ -9,6 +8,7 @@ from pr_agent.algo.token_handler import TokenHandler
 from pr_agent.config_loader import get_settings
 from pr_agent.git_providers import get_git_provider
 from pr_agent.git_providers.git_provider import get_main_pr_language
+from pr_agent.log import get_logger

 class PRInformationFromUser:
@@ -34,22 +34,22 @@ class PRInformationFromUser:
         self.prediction = None

     async def run(self):
-        logging.info('Generating question to the user...')
+        get_logger().info('Generating question to the user...')
         if get_settings().config.publish_output:
             self.git_provider.publish_comment("Preparing questions...", is_temporary=True)
         await retry_with_fallback_models(self._prepare_prediction)
-        logging.info('Preparing questions...')
+        get_logger().info('Preparing questions...')
         pr_comment = self._prepare_pr_answer()
         if get_settings().config.publish_output:
-            logging.info('Pushing questions...')
+            get_logger().info('Pushing questions...')
            self.git_provider.publish_comment(pr_comment)
             self.git_provider.remove_initial_comment()
         return ""

     async def _prepare_prediction(self, model):
-        logging.info('Getting PR diff...')
+        get_logger().info('Getting PR diff...')
         self.patches_diff = get_pr_diff(self.git_provider, self.token_handler, model)
-        logging.info('Getting AI prediction...')
+        get_logger().info('Getting AI prediction...')
         self.prediction = await self._get_prediction(model)

     async def _get_prediction(self, model: str):
@@ -59,8 +59,8 @@ class PRInformationFromUser:
         system_prompt = environment.from_string(get_settings().pr_information_from_user_prompt.system).render(variables)
         user_prompt = environment.from_string(get_settings().pr_information_from_user_prompt.user).render(variables)
         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"\nSystem prompt:\n{system_prompt}")
-            logging.info(f"\nUser prompt:\n{user_prompt}")
+            get_logger().info(f"\nSystem prompt:\n{system_prompt}")
+            get_logger().info(f"\nUser prompt:\n{user_prompt}")
         response, finish_reason = await self.ai_handler.chat_completion(model=model, temperature=0.2,
                                                                         system=system_prompt, user=user_prompt)
         return response
@@ -68,7 +68,7 @@ class PRInformationFromUser:
     def _prepare_pr_answer(self) -> str:
         model_output = self.prediction.strip()
         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"answer_str:\n{model_output}")
+            get_logger().info(f"answer_str:\n{model_output}")
         answer_str = f"{model_output}\n\n Please respond to the questions above in the following format:\n\n" +\
                      "\n>/answer\n>1) ...\n>2) ...\n>...\n"
         return answer_str

View File

@@ -1,5 +1,4 @@
 import copy
-import logging

 from jinja2 import Environment, StrictUndefined
@@ -9,6 +8,7 @@ from pr_agent.algo.token_handler import TokenHandler
 from pr_agent.config_loader import get_settings
 from pr_agent.git_providers import get_git_provider
 from pr_agent.git_providers.git_provider import get_main_pr_language
+from pr_agent.log import get_logger

 class PRQuestions:
@@ -44,22 +44,22 @@ class PRQuestions:
         return question_str

     async def run(self):
-        logging.info('Answering a PR question...')
+        get_logger().info('Answering a PR question...')
         if get_settings().config.publish_output:
             self.git_provider.publish_comment("Preparing answer...", is_temporary=True)
         await retry_with_fallback_models(self._prepare_prediction)
-        logging.info('Preparing answer...')
+        get_logger().info('Preparing answer...')
         pr_comment = self._prepare_pr_answer()
         if get_settings().config.publish_output:
-            logging.info('Pushing answer...')
+            get_logger().info('Pushing answer...')
             self.git_provider.publish_comment(pr_comment)
             self.git_provider.remove_initial_comment()
         return ""

     async def _prepare_prediction(self, model: str):
-        logging.info('Getting PR diff...')
+        get_logger().info('Getting PR diff...')
         self.patches_diff = get_pr_diff(self.git_provider, self.token_handler, model)
-        logging.info('Getting AI prediction...')
+        get_logger().info('Getting AI prediction...')
         self.prediction = await self._get_prediction(model)

     async def _get_prediction(self, model: str):
@@ -69,8 +69,8 @@ class PRQuestions:
         system_prompt = environment.from_string(get_settings().pr_questions_prompt.system).render(variables)
         user_prompt = environment.from_string(get_settings().pr_questions_prompt.user).render(variables)
         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"\nSystem prompt:\n{system_prompt}")
-            logging.info(f"\nUser prompt:\n{user_prompt}")
+            get_logger().info(f"\nSystem prompt:\n{system_prompt}")
+            get_logger().info(f"\nUser prompt:\n{user_prompt}")
         response, finish_reason = await self.ai_handler.chat_completion(model=model, temperature=0.2,
                                                                         system=system_prompt, user=user_prompt)
         return response
@@ -79,5 +79,5 @@ class PRQuestions:
         answer_str = f"Question: {self.question_str}\n\n"
         answer_str += f"Answer:\n{self.prediction.strip()}\n\n"
         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"answer_str:\n{answer_str}")
+            get_logger().info(f"answer_str:\n{answer_str}")
         return answer_str

View File

@@ -1,6 +1,4 @@
 import copy
-import json
-import logging
 from collections import OrderedDict
 from typing import List, Tuple

@@ -9,13 +7,13 @@ from jinja2 import Environment, StrictUndefined
 from yaml import SafeLoader

 from pr_agent.algo.ai_handler import AiHandler
-from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models, \
-    find_line_number_of_relevant_line_in_file, clip_tokens
+from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models
 from pr_agent.algo.token_handler import TokenHandler
-from pr_agent.algo.utils import convert_to_markdown, try_fix_json, try_fix_yaml, load_yaml
+from pr_agent.algo.utils import convert_to_markdown, load_yaml, try_fix_yaml
 from pr_agent.config_loader import get_settings
 from pr_agent.git_providers import get_git_provider
 from pr_agent.git_providers.git_provider import IncrementalPR, get_main_pr_language
+from pr_agent.log import get_logger
 from pr_agent.servers.help import actions_help_text, bot_help_text

@@ -98,29 +96,33 @@ class PRReviewer:
         try:
             if self.is_auto and not get_settings().pr_reviewer.automatic_review:
-                logging.info(f'Automatic review is disabled {self.pr_url}')
+                get_logger().info(f'Automatic review is disabled {self.pr_url}')
                 return None
+            if self.is_auto and self.incremental.is_incremental and not self.incremental.first_new_commit_sha:
+                get_logger().info(f"Incremental review is enabled for {self.pr_url} but there are no new commits")
+                return None

-            logging.info(f'Reviewing PR: {self.pr_url} ...')
+            get_logger().info(f'Reviewing PR: {self.pr_url} ...')

             if get_settings().config.publish_output:
                 self.git_provider.publish_comment("Preparing review...", is_temporary=True)

             await retry_with_fallback_models(self._prepare_prediction)

-            logging.info('Preparing PR review...')
+            get_logger().info('Preparing PR review...')
             pr_comment = self._prepare_pr_review()

             if get_settings().config.publish_output:
-                logging.info('Pushing PR review...')
+                get_logger().info('Pushing PR review...')
+                previous_review_comment = self._get_previous_review_comment()
                 self.git_provider.publish_comment(pr_comment)
                 self.git_provider.remove_initial_comment()
+                self._remove_previous_review_comment(previous_review_comment)
                 if get_settings().pr_reviewer.inline_code_comments:
-                    logging.info('Pushing inline code comments...')
+                    get_logger().info('Pushing inline code comments...')
                     self._publish_inline_code_comments()
         except Exception as e:
-            logging.error(f"Failed to review PR: {e}")
+            get_logger().error(f"Failed to review PR: {e}")

     async def _prepare_prediction(self, model: str) -> None:
         """
@@ -132,9 +134,9 @@ class PRReviewer:
         Returns:
             None
         """
-        logging.info('Getting PR diff...')
+        get_logger().info('Getting PR diff...')
         self.patches_diff = get_pr_diff(self.git_provider, self.token_handler, model)
-        logging.info('Getting AI prediction...')
+        get_logger().info('Getting AI prediction...')
         self.prediction = await self._get_prediction(model)

     async def _get_prediction(self, model: str) -> str:
@@ -155,8 +157,8 @@ class PRReviewer:
         user_prompt = environment.from_string(get_settings().pr_review_prompt.user).render(variables)

         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"\nSystem prompt:\n{system_prompt}")
-            logging.info(f"\nUser prompt:\n{user_prompt}")
+            get_logger().info(f"\nSystem prompt:\n{system_prompt}")
+            get_logger().info(f"\nUser prompt:\n{user_prompt}")

         response, finish_reason = await self.ai_handler.chat_completion(
             model=model,
@@ -230,9 +232,13 @@ class PRReviewer:
         if self.incremental.is_incremental:
             last_commit_url = f"{self.git_provider.get_pr_url()}/commits/" \
                               f"{self.git_provider.incremental.first_new_commit_sha}"
+            last_commit_msg = self.incremental.commits_range[0].commit.message if self.incremental.commits_range else ""
+            incremental_review_markdown_text = f"Starting from commit {last_commit_url}"
+            if last_commit_msg:
+                incremental_review_markdown_text += f" \n_({last_commit_msg.splitlines(keepends=False)[0]})_"
             data = OrderedDict(data)
             data.update({'Incremental PR Review': {
-                "⏮️ Review for commits since previous PR-Agent review": f"Starting from commit {last_commit_url}"}})
+                "⏮️ Review for commits since previous PR-Agent review": incremental_review_markdown_text}})
             data.move_to_end('Incremental PR Review', last=False)

         markdown_text = convert_to_markdown(data, self.git_provider.is_supported("gfm_markdown"))
@@ -249,7 +255,7 @@ class PRReviewer:
         # Log markdown response if verbosity level is high
         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"Markdown response:\n{markdown_text}")
+            get_logger().info(f"Markdown response:\n{markdown_text}")

         if markdown_text == None or len(markdown_text) == 0:
             markdown_text = ""
@@ -268,7 +274,7 @@ class PRReviewer:
         try:
             data = yaml.load(review_text, Loader=SafeLoader)
         except Exception as e:
-            logging.error(f"Failed to parse AI prediction: {e}")
+            get_logger().error(f"Failed to parse AI prediction: {e}")
             data = try_fix_yaml(review_text)

         comments: List[str] = []
@@ -277,7 +283,7 @@ class PRReviewer:
             relevant_line_in_file = suggestion.get('relevant line', '').strip()
             content = suggestion.get('suggestion', '')
             if not relevant_file or not relevant_line_in_file or not content:
-                logging.info("Skipping inline comment with missing file/line/content")
+                get_logger().info("Skipping inline comment with missing file/line/content")
                 continue

             if self.git_provider.is_supported("create_inline_comment"):
@@ -313,3 +319,26 @@ class PRReviewer:
                 break

         return question_str, answer_str
+
+    def _get_previous_review_comment(self):
+        """
+        Get the previous review comment if it exists.
+        """
+        try:
+            if get_settings().pr_reviewer.remove_previous_review_comment and hasattr(self.git_provider, "get_previous_review"):
+                return self.git_provider.get_previous_review(
+                    full=not self.incremental.is_incremental,
+                    incremental=self.incremental.is_incremental,
+                )
+        except Exception as e:
+            get_logger().exception(f"Failed to get previous review comment, error: {e}")
+
+    def _remove_previous_review_comment(self, comment):
+        """
+        Remove the previous review comment if it exists.
+        """
+        try:
+            if get_settings().pr_reviewer.remove_previous_review_comment and comment:
+                self.git_provider.remove_comment(comment)
+        except Exception as e:
+            get_logger().exception(f"Failed to remove previous review comment, error: {e}")
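
Both helpers are gated on the `pr_reviewer.remove_previous_review_comment` setting, and `_get_previous_review_comment` additionally checks `hasattr(self.git_provider, "get_previous_review")`, so providers that never implement the lookup degrade gracefully to the old keep-all-comments behavior. The provider-side `get_previous_review` is not part of this compare view; a hedged sketch of what an implementation might look like, matching only the keyword signature used above (the marker strings are assumptions, not the repository's actual headers):

# Hypothetical provider method -- the real implementation is not shown in this diff.
def get_previous_review(self, *, full: bool, incremental: bool):
    """Return the newest matching PR-Agent review comment, or None."""
    comments = sorted(self.pr.get_issue_comments(), key=lambda c: c.created_at, reverse=True)
    for comment in comments:
        body = comment.body or ""
        # Assumed markers: full and incremental reviews start with distinct
        # headers that the reviewer itself produced when publishing.
        if incremental and body.startswith("## Incremental PR Review"):
            return comment
        if full and body.startswith("## PR Analysis"):
            return comment
    return None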

View File

@@ -1,18 +1,18 @@
-import copy
-import json
-import logging
+import time
 from enum import Enum
-from typing import List, Tuple
+from typing import List

-import pinecone
 import openai
 import pandas as pd
+import pinecone
+from pinecone_datasets import Dataset, DatasetMetadata
 from pydantic import BaseModel, Field

 from pr_agent.algo import MAX_TOKENS
 from pr_agent.algo.token_handler import TokenHandler
 from pr_agent.config_loader import get_settings
 from pr_agent.git_providers import get_git_provider
-from pinecone_datasets import Dataset, DatasetMetadata
+from pr_agent.log import get_logger

 MODEL = "text-embedding-ada-002"
@@ -47,6 +47,13 @@ class PRSimilarIssue:
         # check if index exists, and if repo is already indexed
         run_from_scratch = False
+        if run_from_scratch:  # for debugging
+            pinecone.init(api_key=api_key, environment=environment)
+            if index_name in pinecone.list_indexes():
+                get_logger().info('Removing index...')
+                pinecone.delete_index(index_name)
+                get_logger().info('Done')
+
         upsert = True
         pinecone.init(api_key=api_key, environment=environment)
         if not index_name in pinecone.list_indexes():
@@ -62,11 +69,11 @@ class PRSimilarIssue:
             upsert = False

         if run_from_scratch or upsert:  # index the entire repo
-            logging.info('Indexing the entire repo...')
+            get_logger().info('Indexing the entire repo...')

-            logging.info('Getting issues...')
+            get_logger().info('Getting issues...')
             issues = list(repo_obj.get_issues(state='all'))
-            logging.info('Done')
+            get_logger().info('Done')

             self._update_index_with_issues(issues, repo_name_for_index, upsert=upsert)
         else:  # update index if needed
             pinecone_index = pinecone.Index(index_name=index_name)
@@ -92,20 +99,20 @@ class PRSimilarIssue:
                     break

             if issues_to_update:
-                logging.info(f'Updating index with {counter} new issues...')
+                get_logger().info(f'Updating index with {counter} new issues...')
                 self._update_index_with_issues(issues_to_update, repo_name_for_index, upsert=True)
             else:
-                logging.info('No new issues to update')
+                get_logger().info('No new issues to update')

     async def run(self):
-        logging.info('Getting issue...')
+        get_logger().info('Getting issue...')
         repo_name, original_issue_number = self.git_provider._parse_issue_url(self.issue_url.split('=')[-1])
         issue_main = self.git_provider.repo_obj.get_issue(original_issue_number)
         issue_str, comments, number = self._process_issue(issue_main)
         openai.api_key = get_settings().openai.key
-        logging.info('Done')
+        get_logger().info('Done')

-        logging.info('Querying...')
+        get_logger().info('Querying...')
         res = openai.Embedding.create(input=[issue_str], engine=MODEL)
         embeds = [record['embedding'] for record in res['data']]
         pinecone_index = pinecone.Index(index_name=self.index_name)
@@ -117,7 +124,16 @@ class PRSimilarIssue:
         relevant_comment_number_list = []
         score_list = []
         for r in res['matches']:
-            issue_number = int(r["id"].split('.')[0].split('_')[-1])
+            # skip example issue
+            if 'example_issue_' in r["id"]:
+                continue
+
+            try:
+                issue_number = int(r["id"].split('.')[0].split('_')[-1])
+            except:
+                get_logger().debug(f"Failed to parse issue number from {r['id']}")
+                continue
+
             if original_issue_number == issue_number:
                 continue
             if issue_number not in relevant_issues_number_list:
@@ -127,9 +143,9 @@ class PRSimilarIssue:
             else:
                 relevant_comment_number_list.append(-1)
             score_list.append(str("{:.2f}".format(r['score'])))
-        logging.info('Done')
+        get_logger().info('Done')

-        logging.info('Publishing response...')
+        get_logger().info('Publishing response...')
         similar_issues_str = "### Similar Issues\n___\n\n"
         for i, issue_number_similar in enumerate(relevant_issues_number_list):
             issue = self.git_provider.repo_obj.get_issue(issue_number_similar)
@@ -140,8 +156,8 @@ class PRSimilarIssue:
             similar_issues_str += f"{i + 1}. **[{title}]({url})** (score={score_list[i]})\n\n"
         if get_settings().config.publish_output:
             response = issue_main.create_comment(similar_issues_str)
-        logging.info(similar_issues_str)
-        logging.info('Done')
+        get_logger().info(similar_issues_str)
+        get_logger().info('Done')

     def _process_issue(self, issue):
         header = issue.title
@@ -155,7 +171,7 @@ class PRSimilarIssue:
         return issue_str, comments, number

     def _update_index_with_issues(self, issues_list, repo_name_for_index, upsert=False):
-        logging.info('Processing issues...')
+        get_logger().info('Processing issues...')
         corpus = Corpus()
         example_issue_record = Record(
             id=f"example_issue_{repo_name_for_index}",
@@ -171,9 +187,9 @@ class PRSimilarIssue:
             counter += 1
             if counter % 100 == 0:
-                logging.info(f"Scanned {counter} issues")
+                get_logger().info(f"Scanned {counter} issues")
             if counter >= self.max_issues_to_scan:
-                logging.info(f"Scanned {self.max_issues_to_scan} issues, stopping")
+                get_logger().info(f"Scanned {self.max_issues_to_scan} issues, stopping")
                 break

             issue_str, comments, number = self._process_issue(issue)
@@ -210,9 +226,9 @@ class PRSimilarIssue:
                 )
                 corpus.append(comment_record)
         df = pd.DataFrame(corpus.dict()["documents"])
-        logging.info('Done')
+        get_logger().info('Done')

-        logging.info('Embedding...')
+        get_logger().info('Embedding...')
         openai.api_key = get_settings().openai.key
         list_to_encode = list(df["text"].values)
         try:
@@ -220,7 +236,7 @@ class PRSimilarIssue:
             embeds = [record['embedding'] for record in res['data']]
         except:
             embeds = []
-            logging.error('Failed to embed entire list, embedding one by one...')
+            get_logger().error('Failed to embed entire list, embedding one by one...')
             for i, text in enumerate(list_to_encode):
                 try:
                     res = openai.Embedding.create(input=[text], engine=MODEL)
@@ -231,21 +247,23 @@ class PRSimilarIssue:
         meta = DatasetMetadata.empty()
         meta.dense_model.dimension = len(embeds[0])
         ds = Dataset.from_pandas(df, meta)
-        logging.info('Done')
+        get_logger().info('Done')

         api_key = get_settings().pinecone.api_key
         environment = get_settings().pinecone.environment
         if not upsert:
-            logging.info('Creating index from scratch...')
+            get_logger().info('Creating index from scratch...')
             ds.to_pinecone_index(self.index_name, api_key=api_key, environment=environment)
+            time.sleep(15)  # wait for pinecone to finalize indexing before querying
         else:
-            logging.info('Upserting index...')
+            get_logger().info('Upserting index...')
             namespace = ""
             batch_size: int = 100
             concurrency: int = 10
             pinecone.init(api_key=api_key, environment=environment)
             ds._upsert_to_index(self.index_name, namespace, batch_size, concurrency)
-        logging.info('Done')
+            time.sleep(5)  # wait for pinecone to finalize upserting before querying
+
+        get_logger().info('Done')

 class IssueLevel(str, Enum):
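
The fixed `time.sleep(15)` / `time.sleep(5)` calls account for Pinecone indexing being eventually consistent: querying immediately after creating or upserting an index can return empty or stale matches. Polling readiness would be a more robust alternative; the sketch below is an assumption about the v2 `pinecone-client` used in this file (where `describe_index()` reports a status with a `ready` flag), not code from this PR:

import time

import pinecone


def wait_for_index_ready(index_name: str, timeout: int = 60) -> None:
    # Poll readiness instead of sleeping a fixed amount (assumes pinecone-client v2's
    # describe_index(), whose status exposes a 'ready' flag).
    deadline = time.time() + timeout
    while time.time() < deadline:
        if pinecone.describe_index(index_name).status.get('ready'):
            return
        time.sleep(2)
    raise TimeoutError(f"Pinecone index {index_name} not ready after {timeout}s")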

View File

@@ -1,5 +1,4 @@
 import copy
-import logging
 from datetime import date
 from time import sleep
 from typing import Tuple
@@ -10,8 +9,9 @@ from pr_agent.algo.ai_handler import AiHandler
 from pr_agent.algo.pr_processing import get_pr_diff, retry_with_fallback_models
 from pr_agent.algo.token_handler import TokenHandler
 from pr_agent.config_loader import get_settings
-from pr_agent.git_providers import GithubProvider, get_git_provider
+from pr_agent.git_providers import get_git_provider
 from pr_agent.git_providers.git_provider import get_main_pr_language
+from pr_agent.log import get_logger

 CHANGELOG_LINES = 50
@@ -48,26 +48,26 @@ class PRUpdateChangelog:

     async def run(self):
         # assert type(self.git_provider) == GithubProvider, "Currently only Github is supported"

-        logging.info('Updating the changelog...')
+        get_logger().info('Updating the changelog...')
         if get_settings().config.publish_output:
             self.git_provider.publish_comment("Preparing changelog updates...", is_temporary=True)
         await retry_with_fallback_models(self._prepare_prediction)
-        logging.info('Preparing PR changelog updates...')
+        get_logger().info('Preparing PR changelog updates...')
         new_file_content, answer = self._prepare_changelog_update()
         if get_settings().config.publish_output:
             self.git_provider.remove_initial_comment()
-            logging.info('Publishing changelog updates...')
+            get_logger().info('Publishing changelog updates...')
             if self.commit_changelog:
-                logging.info('Pushing PR changelog updates to repo...')
+                get_logger().info('Pushing PR changelog updates to repo...')
                 self._push_changelog_update(new_file_content, answer)
             else:
-                logging.info('Publishing PR changelog as comment...')
+                get_logger().info('Publishing PR changelog as comment...')
                 self.git_provider.publish_comment(f"**Changelog updates:**\n\n{answer}")

     async def _prepare_prediction(self, model: str):
-        logging.info('Getting PR diff...')
+        get_logger().info('Getting PR diff...')
         self.patches_diff = get_pr_diff(self.git_provider, self.token_handler, model)
-        logging.info('Getting AI prediction...')
+        get_logger().info('Getting AI prediction...')
         self.prediction = await self._get_prediction(model)

     async def _get_prediction(self, model: str):
@@ -77,8 +77,8 @@ class PRUpdateChangelog:
         system_prompt = environment.from_string(get_settings().pr_update_changelog_prompt.system).render(variables)
         user_prompt = environment.from_string(get_settings().pr_update_changelog_prompt.user).render(variables)
         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"\nSystem prompt:\n{system_prompt}")
-            logging.info(f"\nUser prompt:\n{user_prompt}")
+            get_logger().info(f"\nSystem prompt:\n{system_prompt}")
+            get_logger().info(f"\nUser prompt:\n{user_prompt}")
         response, finish_reason = await self.ai_handler.chat_completion(model=model, temperature=0.2,
                                                                         system=system_prompt, user=user_prompt)

@@ -100,7 +100,7 @@ class PRUpdateChangelog:
             "\n>'/update_changelog --pr_update_changelog.push_changelog_changes=true'\n"

         if get_settings().config.verbosity_level >= 2:
-            logging.info(f"answer:\n{answer}")
+            get_logger().info(f"answer:\n{answer}")

         return new_file_content, answer
@@ -149,7 +149,7 @@ Example:
         except Exception:
             self.changelog_file_str = ""
             if self.commit_changelog:
-                logging.info("No CHANGELOG.md file found in the repository. Creating one...")
+                get_logger().info("No CHANGELOG.md file found in the repository. Creating one...")
                 changelog_file = self.git_provider.repo_obj.create_file(path="CHANGELOG.md",
                                                                         message='add CHANGELOG.md',
                                                                         content="",

View File

@@ -20,4 +20,5 @@ ujson==5.8.0
 azure-devops==7.1.0b3
 msrest==0.7.1
 pinecone-client
 pinecone-datasets @ git+https://github.com/mrT23/pinecone-datasets.git@main
+loguru==0.7.2

View File

@@ -43,18 +43,6 @@ class TestHandlePatchDeletions:
         assert handle_patch_deletions(patch, original_file_content_str, new_file_content_str,
                                       file_name) == patch.rstrip()

-    # Tests that handle_patch_deletions logs a message when verbosity_level is greater than 0
-    def test_handle_patch_deletions_happy_path_verbosity_level_greater_than_0(self, caplog):
-        patch = '--- a/file.py\n+++ b/file.py\n@@ -1,2 +1,2 @@\n-foo\n-bar\n+baz\n'
-        original_file_content_str = 'foo\nbar\n'
-        new_file_content_str = ''
-        file_name = 'file.py'
-        get_settings().config.verbosity_level = 1
-        with caplog.at_level(logging.INFO):
-            handle_patch_deletions(patch, original_file_content_str, new_file_content_str, file_name)
-        assert any("Processing file" in message for message in caplog.messages)
-
     # Tests that handle_patch_deletions returns 'File was deleted' when new_file_content_str is empty
     def test_handle_patch_deletions_edge_case_new_file_content_empty(self):
         patch = '--- a/file.py\n+++ b/file.py\n@@ -1,2 +1,2 @@\n-foo\n-bar\n'
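
The deleted test depended on pytest's `caplog` fixture, which hooks the stdlib `logging` module; once `handle_patch_deletions` logs through loguru via `get_logger()`, `caplog` captures nothing and the assertion can never pass, so the test was removed rather than ported. If equivalent coverage were wanted, loguru accepts a plain callable as a sink; a sketch of such a test (not part of this diff, with the import path an assumption):

from loguru import logger

from pr_agent.algo.git_patch_processing import handle_patch_deletions  # assumed import path
from pr_agent.config_loader import get_settings


def test_handle_patch_deletions_logs_processing_message():
    patch = '--- a/file.py\n+++ b/file.py\n@@ -1,2 +1,2 @@\n-foo\n-bar\n+baz\n'
    original_file_content_str = 'foo\nbar\n'
    new_file_content_str = ''
    file_name = 'file.py'
    get_settings().config.verbosity_level = 1

    messages = []
    sink_id = logger.add(messages.append)  # any callable works as a loguru sink
    try:
        handle_patch_deletions(patch, original_file_content_str, new_file_content_str, file_name)
    finally:
        logger.remove(sink_id)
    assert any("Processing file" in str(m) for m in messages)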