diff --git a/README.md b/README.md
index 21c2309a..304297b0 100644
--- a/README.md
+++ b/README.md
@@ -29,17 +29,40 @@ PR-Agent aims to help efficiently review and handle pull requests, by providing
## Table of Contents
+- [Getting Started](#getting-started)
- [News and Updates](#news-and-updates)
- [Overview](#overview)
-- [Example results](#example-results)
-- [Try it now](#try-it-now)
-- [Qodo Merge](https://qodo-merge-docs.qodo.ai/overview/pr_agent_pro/)
-- [How it works](#how-it-works)
-- [Why use PR-Agent?](#why-use-pr-agent)
-- [Data privacy](#data-privacy)
+- [See It in Action](#see-it-in-action)
+- [Try It Now](#try-it-now)
+- [Qodo Merge 💎](#qodo-merge-)
+- [How It Works](#how-it-works)
+- [Why Use PR-Agent?](#why-use-pr-agent)
+- [Data Privacy](#data-privacy)
- [Contributing](#contributing)
- [Links](#links)
+## Getting Started
+
+### Try it Instantly
+Test PR-Agent on any public GitHub repository by commenting `@CodiumAI-Agent /improve`
+
+### GitHub Action
+Add automated PR reviews to your repository with a simple workflow file, following the [GitHub Action setup guide](https://qodo-merge-docs.qodo.ai/installation/github/#run-as-a-github-action).
+
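+For orientation, here is a minimal sketch of the kind of workflow file the guide walks through (the action reference and secret names follow the linked guide; treat this as a starting point rather than the authoritative configuration):
+
+```yaml
+on:
+  pull_request:
+    types: [opened, reopened, ready_for_review]
+  issue_comment:
+jobs:
+  pr_agent_job:
+    if: ${{ github.event.sender.type != 'Bot' }}
+    runs-on: ubuntu-latest
+    permissions:
+      issues: write
+      pull-requests: write
+      contents: write
+    steps:
+      - name: Run PR-Agent on the pull request and respond to user comments
+        uses: qodo-ai/pr-agent@main
+        env:
+          OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+```
+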
+#### Other Platforms
+- [GitLab webhook setup](https://qodo-merge-docs.qodo.ai/installation/gitlab/)
+- [BitBucket app installation](https://qodo-merge-docs.qodo.ai/installation/bitbucket/)
+- [Azure DevOps setup](https://qodo-merge-docs.qodo.ai/installation/azure/)
+
+### CLI Usage
+Run PR-Agent locally on your repository via command line: [Local CLI setup guide](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#local-repo-cli)
+
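+Once your API key and git-provider token are configured as described in the guide, a typical invocation is a single command. A minimal sketch (the `<pr_url>` placeholder and the `review` subcommand are illustrative; any supported tool name works):
+
+```bash
+python -m pr_agent.cli --pr_url <pr_url> review
+```
+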
+### Discover Qodo Merge 💎
+Zero-setup hosted solution with advanced features and priority support
+- [Intro and Installation guide](https://qodo-merge-docs.qodo.ai/installation/qodo_merge/)
+- [Plans & Pricing](https://www.qodo.ai/pricing/)
+
+
## News and Updates
## May 17, 2025
@@ -70,84 +93,58 @@ Read more about it [here](https://qodo-merge-docs.qodo.ai/tools/scan_repo_discus
Supported commands per platform:
-| | | GitHub | GitLab | Bitbucket | Azure DevOps |
-| ----- |---------------------------------------------------------------------------------------------------------|:------:|:------:|:---------:|:------------:|
-| TOOLS | [Review](https://qodo-merge-docs.qodo.ai/tools/review/) | ✅ | ✅ | ✅ | ✅ |
-| | [Describe](https://qodo-merge-docs.qodo.ai/tools/describe/) | ✅ | ✅ | ✅ | ✅ |
-| | [Improve](https://qodo-merge-docs.qodo.ai/tools/improve/) | ✅ | ✅ | ✅ | ✅ |
-| | [Ask](https://qodo-merge-docs.qodo.ai/tools/ask/) | ✅ | ✅ | ✅ | ✅ |
-| | ⮑ [Ask on code lines](https://qodo-merge-docs.qodo.ai/tools/ask/#ask-lines) | ✅ | ✅ | | |
-| | [Update CHANGELOG](https://qodo-merge-docs.qodo.ai/tools/update_changelog/) | ✅ | ✅ | ✅ | ✅ |
-| | [Help Docs](https://qodo-merge-docs.qodo.ai/tools/help_docs/?h=auto#auto-approval) | ✅ | ✅ | ✅ | |
-| | [Ticket Context](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/) 💎 | ✅ | ✅ | ✅ | |
-| | [Utilizing Best Practices](https://qodo-merge-docs.qodo.ai/tools/improve/#best-practices) 💎 | ✅ | ✅ | ✅ | |
-| | [PR Chat](https://qodo-merge-docs.qodo.ai/chrome-extension/features/#pr-chat) 💎 | ✅ | | | |
-| | [Suggestion Tracking](https://qodo-merge-docs.qodo.ai/tools/improve/#suggestion-tracking) 💎 | ✅ | ✅ | | |
-| | [CI Feedback](https://qodo-merge-docs.qodo.ai/tools/ci_feedback/) 💎 | ✅ | | | |
-| | [PR Documentation](https://qodo-merge-docs.qodo.ai/tools/documentation/) 💎 | ✅ | ✅ | | |
-| | [Custom Labels](https://qodo-merge-docs.qodo.ai/tools/custom_labels/) 💎 | ✅ | ✅ | | |
-| | [Analyze](https://qodo-merge-docs.qodo.ai/tools/analyze/) 💎 | ✅ | ✅ | | |
-| | [Similar Code](https://qodo-merge-docs.qodo.ai/tools/similar_code/) 💎 | ✅ | | | |
-| | [Custom Prompt](https://qodo-merge-docs.qodo.ai/tools/custom_prompt/) 💎 | ✅ | ✅ | ✅ | |
-| | [Test](https://qodo-merge-docs.qodo.ai/tools/test/) 💎 | ✅ | ✅ | | |
-| | [Implement](https://qodo-merge-docs.qodo.ai/tools/implement/) 💎 | ✅ | ✅ | ✅ | |
-| | [Scan Repo Discussions](https://qodo-merge-docs.qodo.ai/tools/scan_repo_discussions/) 💎 | ✅ | | | |
-| | [Auto-Approve](https://qodo-merge-docs.qodo.ai/tools/improve/?h=auto#auto-approval) 💎 | ✅ | ✅ | ✅ | |
-| | | | | | |
-| USAGE | [CLI](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#local-repo-cli) | ✅ | ✅ | ✅ | ✅ |
-| | [App / webhook](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#github-app) | ✅ | ✅ | ✅ | ✅ |
-| | [Tagging bot](https://github.com/Codium-ai/pr-agent#try-it-now) | ✅ | | | |
-| | [Actions](https://qodo-merge-docs.qodo.ai/installation/github/#run-as-a-github-action) | ✅ | ✅ | ✅ | ✅ |
-| | | | | | |
-| CORE | [PR compression](https://qodo-merge-docs.qodo.ai/core-abilities/compression_strategy/) | ✅ | ✅ | ✅ | ✅ |
-| | Adaptive and token-aware file patch fitting | ✅ | ✅ | ✅ | ✅ |
-| | [Multiple models support](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/) | ✅ | ✅ | ✅ | ✅ |
-| | [Local and global metadata](https://qodo-merge-docs.qodo.ai/core-abilities/metadata/) | ✅ | ✅ | ✅ | ✅ |
-| | [Dynamic context](https://qodo-merge-docs.qodo.ai/core-abilities/dynamic_context/) | ✅ | ✅ | ✅ | ✅ |
-| | [Self reflection](https://qodo-merge-docs.qodo.ai/core-abilities/self_reflection/) | ✅ | ✅ | ✅ | ✅ |
-| | [Static code analysis](https://qodo-merge-docs.qodo.ai/core-abilities/static_code_analysis/) 💎 | ✅ | ✅ | | |
-| | [Global and wiki configurations](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/) 💎 | ✅ | ✅ | ✅ | |
-| | [PR interactive actions](https://www.qodo.ai/images/pr_agent/pr-actions.mp4) 💎 | ✅ | ✅ | | |
-| | [Impact Evaluation](https://qodo-merge-docs.qodo.ai/core-abilities/impact_evaluation/) 💎 | ✅ | ✅ | | |
-| | [Code Validation 💎](https://qodo-merge-docs.qodo.ai/core-abilities/code_validation/) | ✅ | ✅ | ✅ | ✅ |
-| | [Auto Best Practices 💎](https://qodo-merge-docs.qodo.ai/core-abilities/auto_best_practices/) | ✅ | | | |
+| | | GitHub | GitLab | Bitbucket | Azure DevOps | Gitea |
+|---------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------|:------:|:------:|:---------:|:------------:|:-----:|
+| [TOOLS](https://qodo-merge-docs.qodo.ai/tools/) | [Describe](https://qodo-merge-docs.qodo.ai/tools/describe/) | ✅ | ✅ | ✅ | ✅ | ✅ |
+| | [Review](https://qodo-merge-docs.qodo.ai/tools/review/) | ✅ | ✅ | ✅ | ✅ | ✅ |
+| | [Improve](https://qodo-merge-docs.qodo.ai/tools/improve/) | ✅ | ✅ | ✅ | ✅ | ✅ |
+| | [Ask](https://qodo-merge-docs.qodo.ai/tools/ask/) | ✅ | ✅ | ✅ | ✅ | |
+| | ⮑ [Ask on code lines](https://qodo-merge-docs.qodo.ai/tools/ask/#ask-lines) | ✅ | ✅ | | | |
+| | [Help Docs](https://qodo-merge-docs.qodo.ai/tools/help_docs/?h=auto#auto-approval) | ✅ | ✅ | ✅ | | |
+| | [Update CHANGELOG](https://qodo-merge-docs.qodo.ai/tools/update_changelog/) | ✅ | ✅ | ✅ | ✅ | |
+| | [PR Documentation](https://qodo-merge-docs.qodo.ai/tools/documentation/) 💎 | ✅ | ✅ | | | |
+| | [Analyze](https://qodo-merge-docs.qodo.ai/tools/analyze/) 💎 | ✅ | ✅ | | | |
+| | [Auto-Approve](https://qodo-merge-docs.qodo.ai/tools/improve/?h=auto#auto-approval) 💎 | ✅ | ✅ | ✅ | | |
+| | [CI Feedback](https://qodo-merge-docs.qodo.ai/tools/ci_feedback/) 💎 | ✅ | | | | |
+| | [Custom Prompt](https://qodo-merge-docs.qodo.ai/tools/custom_prompt/) 💎 | ✅ | ✅ | ✅ | | |
+| | [Generate Custom Labels](https://qodo-merge-docs.qodo.ai/tools/custom_labels/) 💎 | ✅ | ✅ | | | |
+| | [Generate Tests](https://qodo-merge-docs.qodo.ai/tools/test/) 💎 | ✅ | ✅ | | | |
+| | [Implement](https://qodo-merge-docs.qodo.ai/tools/implement/) 💎 | ✅ | ✅ | ✅ | | |
+| | [Scan Repo Discussions](https://qodo-merge-docs.qodo.ai/tools/scan_repo_discussions/) 💎 | ✅ | | | | |
+| | [Similar Code](https://qodo-merge-docs.qodo.ai/tools/similar_code/) 💎 | ✅ | | | | |
+| | [Ticket Context](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/) 💎 | ✅ | ✅ | ✅ | | |
+| | [Utilizing Best Practices](https://qodo-merge-docs.qodo.ai/tools/improve/#best-practices) 💎 | ✅ | ✅ | ✅ | | |
+| | [PR Chat](https://qodo-merge-docs.qodo.ai/chrome-extension/features/#pr-chat) 💎 | ✅ | | | | |
+| | [Suggestion Tracking](https://qodo-merge-docs.qodo.ai/tools/improve/#suggestion-tracking) 💎 | ✅ | ✅ | | | |
+| | | | | | | |
+| [USAGE](https://qodo-merge-docs.qodo.ai/usage-guide/) | [CLI](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#local-repo-cli) | ✅ | ✅ | ✅ | ✅ | ✅ |
+| | [App / webhook](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#github-app) | ✅ | ✅ | ✅ | ✅ | ✅ |
+| | [Tagging bot](https://github.com/Codium-ai/pr-agent#try-it-now) | ✅ | | | | |
+| | [Actions](https://qodo-merge-docs.qodo.ai/installation/github/#run-as-a-github-action) | ✅ | ✅ | ✅ | ✅ | |
+| | | | | | | |
+| [CORE](https://qodo-merge-docs.qodo.ai/core-abilities/) | [Adaptive and token-aware file patch fitting](https://qodo-merge-docs.qodo.ai/core-abilities/compression_strategy/) | ✅ | ✅ | ✅ | ✅ | |
+| | [Auto Best Practices 💎](https://qodo-merge-docs.qodo.ai/core-abilities/auto_best_practices/) | ✅ | | | | |
+| | [Chat on code suggestions](https://qodo-merge-docs.qodo.ai/core-abilities/chat_on_code_suggestions/) | ✅ | ✅ | | | |
+| | [Code Validation 💎](https://qodo-merge-docs.qodo.ai/core-abilities/code_validation/) | ✅ | ✅ | ✅ | ✅ | |
+| | [Dynamic context](https://qodo-merge-docs.qodo.ai/core-abilities/dynamic_context/) | ✅ | ✅ | ✅ | ✅ | |
+| | [Fetching ticket context](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/) | ✅ | ✅ | ✅ | | |
+| | [Global and wiki configurations](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/) 💎 | ✅ | ✅ | ✅ | | |
+| | [Impact Evaluation](https://qodo-merge-docs.qodo.ai/core-abilities/impact_evaluation/) 💎 | ✅ | ✅ | | | |
+| | [Incremental Update](https://qodo-merge-docs.qodo.ai/core-abilities/incremental_update/) | ✅ | | | | |
+| | [Interactivity](https://qodo-merge-docs.qodo.ai/core-abilities/interactivity/) | ✅ | ✅ | | | |
+| | [Local and global metadata](https://qodo-merge-docs.qodo.ai/core-abilities/metadata/) | ✅ | ✅ | ✅ | ✅ | |
+| | [Multiple models support](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/) | ✅ | ✅ | ✅ | ✅ | |
+| | [PR compression](https://qodo-merge-docs.qodo.ai/core-abilities/compression_strategy/) | ✅ | ✅ | ✅ | ✅ | |
+| | [PR interactive actions](https://www.qodo.ai/images/pr_agent/pr-actions.mp4) 💎 | ✅ | ✅ | | | |
+| | [RAG context enrichment](https://qodo-merge-docs.qodo.ai/core-abilities/rag_context_enrichment/) | ✅ | | ✅ | | |
+| | [Self reflection](https://qodo-merge-docs.qodo.ai/core-abilities/self_reflection/) | ✅ | ✅ | ✅ | ✅ | |
+| | [Static code analysis](https://qodo-merge-docs.qodo.ai/core-abilities/static_code_analysis/) 💎 | ✅ | ✅ | | | |
- 💎 means this feature is available only in [Qodo Merge](https://www.qodo.ai/pricing/)
[//]: # (- Support for additional git providers is described in [here](./docs/Full_environments.md))
___
-‣ **Auto Description ([`/describe`](https://qodo-merge-docs.qodo.ai/tools/describe/))**: Automatically generating PR description - title, type, summary, code walkthrough and labels.
-\
-‣ **Auto Review ([`/review`](https://qodo-merge-docs.qodo.ai/tools/review/))**: Adjustable feedback about the PR, possible issues, security concerns, review effort and more.
-\
-‣ **Code Suggestions ([`/improve`](https://qodo-merge-docs.qodo.ai/tools/improve/))**: Code suggestions for improving the PR.
-\
-‣ **Question Answering ([`/ask ...`](https://qodo-merge-docs.qodo.ai/tools/ask/))**: Answering free-text questions about the PR.
-\
-‣ **Update Changelog ([`/update_changelog`](https://qodo-merge-docs.qodo.ai/tools/update_changelog/))**: Automatically updating the CHANGELOG.md file with the PR changes.
-\
-‣ **Help Docs ([`/help_docs`](https://qodo-merge-docs.qodo.ai/tools/help_docs/))**: Answers a question on any repository by utilizing given documentation.
-\
-‣ **Add Documentation 💎 ([`/add_docs`](https://qodo-merge-docs.qodo.ai/tools/documentation/))**: Generates documentation to methods/functions/classes that changed in the PR.
-\
-‣ **Generate Custom Labels 💎 ([`/generate_labels`](https://qodo-merge-docs.qodo.ai/tools/custom_labels/))**: Generates custom labels for the PR, based on specific guidelines defined by the user.
-\
-‣ **Analyze 💎 ([`/analyze`](https://qodo-merge-docs.qodo.ai/tools/analyze/))**: Identify code components that changed in the PR, and enables to interactively generate tests, docs, and code suggestions for each component.
-\
-‣ **Test 💎 ([`/test`](https://qodo-merge-docs.qodo.ai/tools/test/))**: Generate tests for a selected component, based on the PR code changes.
-\
-‣ **Custom Prompt 💎 ([`/custom_prompt`](https://qodo-merge-docs.qodo.ai/tools/custom_prompt/))**: Automatically generates custom suggestions for improving the PR code, based on specific guidelines defined by the user.
-\
-‣ **Generate Tests 💎 ([`/test component_name`](https://qodo-merge-docs.qodo.ai/tools/test/))**: Generates unit tests for a selected component, based on the PR code changes.
-\
-‣ **CI Feedback 💎 ([`/checks ci_job`](https://qodo-merge-docs.qodo.ai/tools/ci_feedback/))**: Automatically generates feedback and analysis for a failed CI job.
-\
-‣ **Similar Code 💎 ([`/find_similar_component`](https://qodo-merge-docs.qodo.ai/tools/similar_code/))**: Retrieves the most similar code components from inside the organization's codebase, or from open-source code.
-\
-‣ **Implement 💎 ([`/implement`](https://qodo-merge-docs.qodo.ai/tools/implement/))**: Generates implementation code from review suggestions.
-___
-
-## Example results
+## See It in Action
@@ -182,7 +179,7 @@ ___
-## Try it now
+## Try It Now
Try the Claude Sonnet powered PR-Agent instantly on _your public GitHub repository_. Just mention `@CodiumAI-Agent` and add the desired command in any PR comment. The agent will generate a response based on your command.
For example, add a comment to any pull request with the following text:
@@ -208,7 +205,7 @@ It does not have 'edit' access to your repo, for example, so it cannot update th
4. **Extra features** - In addition to the benefits listed above, Qodo Merge will emphasize more customization, and the usage of static code analysis, in addition to LLM logic, to improve results.
See [here](https://qodo-merge-docs.qodo.ai/overview/pr_agent_pro/) for a list of features available in Qodo Merge.
-## How it works
+## How It Works
The following diagram illustrates PR-Agent tools and their flow:
@@ -216,7 +213,7 @@ The following diagram illustrates PR-Agent tools and their flow:
Check out the [PR Compression strategy](https://qodo-merge-docs.qodo.ai/core-abilities/#pr-compression-strategy) page for more details on how we convert a code diff to a manageable LLM prompt
-## Why use PR-Agent?
+## Why Use PR-Agent?
A reasonable question that can be asked is: `"Why use PR-Agent? What makes it stand out from existing tools?"`
@@ -224,10 +221,10 @@ Here are some advantages of PR-Agent:
- We emphasize **real-life practical usage**. Each tool (review, improve, ask, ...) has a single LLM call, no more. We feel that this is critical for realistic team usage - obtaining an answer quickly (~30 seconds) and affordably.
- Our [PR Compression strategy](https://qodo-merge-docs.qodo.ai/core-abilities/#pr-compression-strategy) is a core ability that enables to effectively tackle both short and long PRs.
-- Our JSON prompting strategy enables to have **modular, customizable tools**. For example, the '/review' tool categories can be controlled via the [configuration](pr_agent/settings/configuration.toml) file. Adding additional categories is easy and accessible.
+- Our JSON prompting strategy enables us to have **modular, customizable tools**. For example, the '/review' tool categories can be controlled via the [configuration](pr_agent/settings/configuration.toml) file. Adding additional categories is easy and accessible.
- We support **multiple git providers** (GitHub, GitLab, BitBucket), **multiple ways** to use the tool (CLI, GitHub Action, GitHub App, Docker, ...), and **multiple models** (GPT, Claude, Deepseek, ...)
-## Data privacy
+## Data Privacy
### Self-hosted PR-Agent
@@ -252,7 +249,7 @@ To contribute to the project, get started by reading our [Contributing Guide](ht
## Links
-- Discord community: https://discord.gg/kG35uSHDBc
+- Discord community: https://discord.com/invite/SgSxuQ65GF
- Qodo site: https://www.qodo.ai/
- Blog: https://www.qodo.ai/blog/
- Troubleshooting: https://www.qodo.ai/blog/technical-faq-and-troubleshooting/
diff --git a/docker/Dockerfile b/docker/Dockerfile
index 9e83e37b..ce609e48 100644
--- a/docker/Dockerfile
+++ b/docker/Dockerfile
@@ -33,6 +33,11 @@ FROM base AS azure_devops_webhook
ADD pr_agent pr_agent
CMD ["python", "pr_agent/servers/azuredevops_server_webhook.py"]
+FROM base AS gitea_app
+ADD pr_agent pr_agent
+CMD ["python", "-m", "gunicorn", "-k", "uvicorn.workers.UvicornWorker", "-c", "pr_agent/servers/gunicorn_config.py","pr_agent.servers.gitea_app:app"]
+
+
FROM base AS test
ADD requirements-dev.txt .
RUN pip install --no-cache-dir -r requirements-dev.txt && rm requirements-dev.txt
diff --git a/docs/docs/core-abilities/chat_on_code_suggestions.md b/docs/docs/core-abilities/chat_on_code_suggestions.md
new file mode 100644
index 00000000..086c70ee
--- /dev/null
+++ b/docs/docs/core-abilities/chat_on_code_suggestions.md
@@ -0,0 +1,55 @@
+# Chat on code suggestions 💎
+
+`Supported Git Platforms: GitHub, GitLab`
+
+## Overview
+
+Qodo Merge implements an orchestrator agent that enables interactive code discussions, listening and responding to comments without requiring explicit tool calls.
+The orchestrator intelligently analyzes your responses to determine if you want to implement a suggestion, ask a question, or request help, then delegates to the appropriate specialized tool.
+
+To minimize unnecessary notifications and maintain focused discussions, the orchestrator agent will only respond to comments made directly within the inline code suggestion discussions it has created (`/improve`) or within discussions initiated by the `/implement` command.
+
+## Getting Started
+
+### Setup
+
+Enable interactive code discussions by adding the following to your configuration file (default is `True`):
+
+```toml
+[pr_code_suggestions]
+enable_chat_in_code_suggestions = true
+```
+
+
+### Activation
+
+#### `/improve`
+
+To obtain dynamic responses, the following steps are required:
+
+1. Run the `/improve` command (mostly automatic)
+2. Check the `/improve` recommendation checkboxes (_Apply this suggestion_) to have Qodo Merge generate a new inline code suggestion discussion
+3. The orchestrator agent will then automatically listen to and reply to comments within the discussion without requiring additional commands
+
+#### `/implement`
+
+To obtain dynamic responses, the following steps are required:
+
+1. Select code lines in the PR diff and run the `/implement` command
+2. Wait for Qodo Merge to generate a new inline code suggestion
+3. The orchestrator agent will then automatically listen to and reply to comments within the discussion without requiring additional commands
+
+
+## Explore the available interaction patterns
+
+!!! tip "Tip: Direct the agent with keywords"
+ Use "implement" or "apply" for code generation. Use "explain", "why", or "how" for information and help.
+
+=== "Asking for Details"
+ {width=512}
+
+=== "Implementing Suggestions"
+ {width=512}
+
+=== "Providing Additional Help"
+ {width=512}
diff --git a/docs/docs/core-abilities/fetching_ticket_context.md b/docs/docs/core-abilities/fetching_ticket_context.md
index c6424ad1..73096040 100644
--- a/docs/docs/core-abilities/fetching_ticket_context.md
+++ b/docs/docs/core-abilities/fetching_ticket_context.md
@@ -9,8 +9,9 @@ This integration enriches the review process by automatically surfacing relevant
**Ticket systems supported**:
-- GitHub
-- Jira (💎)
+- [GitHub](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/#github-issues-integration)
+- [Jira (💎)](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/#jira-integration)
+- [Linear (💎)](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/#linear-integration)
**Ticket data fetched:**
@@ -75,13 +76,17 @@ The recommended way to authenticate with Jira Cloud is to install the Qodo Merge
Installation steps:
-1. Click [here](https://auth.atlassian.com/authorize?audience=api.atlassian.com&client_id=8krKmA4gMD8mM8z24aRCgPCSepZNP1xf&scope=read%3Ajira-work%20offline_access&redirect_uri=https%3A%2F%2Fregister.jira.pr-agent.codium.ai&state=qodomerge&response_type=code&prompt=consent) to install the Qodo Merge app in your Jira Cloud instance, click the `accept` button.
+1. Go to the [Qodo Merge integrations page](https://app.qodo.ai/qodo-merge/integrations)
+
+2. Click on the Connect **Jira Cloud** button to connect the Jira Cloud app
+
+3. Click the `accept` button.
{width=384}
-2. After installing the app, you will be redirected to the Qodo Merge registration page. and you will see a success message.
+4. After installing the app, you will be redirected to the Qodo Merge registration page, where you will see a success message.
{width=384}
-3. Now Qodo Merge will be able to fetch Jira ticket context for your PRs.
+5. Now Qodo Merge will be able to fetch Jira ticket context for your PRs.
**2) Email/Token Authentication**
@@ -300,3 +305,46 @@ Name your branch with the ticket ID as a prefix (e.g., `ISSUE-123-feature-descri
[jira]
jira_base_url = "https://.atlassian.net"
```
+
+## Linear Integration 💎
+
+### Linear App Authentication
+
+The recommended way to authenticate with Linear is to connect the Linear app through the Qodo Merge portal.
+
+Installation steps:
+
+1. Go to the [Qodo Merge integrations page](https://app.qodo.ai/qodo-merge/integrations)
+
+2. Navigate to the **Integrations** tab
+
+3. Click on the **Linear** button to connect the Linear app
+
+4. Follow the authentication flow to authorize Qodo Merge to access your Linear workspace
+
+5. Once connected, Qodo Merge will be able to fetch Linear ticket context for your PRs
+
+### How to link a PR to a Linear ticket
+
+Qodo Merge will automatically detect Linear tickets using either of these methods:
+
+**Method 1: Description Reference:**
+
+Include a ticket reference in your PR description using either:
+- The complete Linear ticket URL: `https://linear.app/[ORG_ID]/issue/[TICKET_ID]`
+- The shortened ticket ID: `[TICKET_ID]` (e.g., `ABC-123`) - requires linear_base_url configuration (see below).
+
+**Method 2: Branch Name Detection:**
+
+Name your branch with the ticket ID as a prefix (e.g., `ABC-123-feature-description` or `feature/ABC-123/feature-description`).
+
+!!! note "Linear Base URL"
+
+ For shortened ticket IDs or branch detection (method 2), you must configure the Linear base URL in your configuration file under the [linear] section:
+
+ ```toml
+ [linear]
+ linear_base_url = "https://linear.app/[ORG_ID]"
+ ```
+
+ Replace `[ORG_ID]` with your Linear organization identifier.
\ No newline at end of file
diff --git a/docs/docs/core-abilities/index.md b/docs/docs/core-abilities/index.md
index b97260ee..8e07f24f 100644
--- a/docs/docs/core-abilities/index.md
+++ b/docs/docs/core-abilities/index.md
@@ -3,6 +3,7 @@
Qodo Merge utilizes a variety of core abilities to provide a comprehensive and efficient code review experience. These abilities include:
- [Auto best practices](https://qodo-merge-docs.qodo.ai/core-abilities/auto_best_practices/)
+- [Chat on code suggestions](https://qodo-merge-docs.qodo.ai/core-abilities/chat_on_code_suggestions/)
- [Code validation](https://qodo-merge-docs.qodo.ai/core-abilities/code_validation/)
- [Compression strategy](https://qodo-merge-docs.qodo.ai/core-abilities/compression_strategy/)
- [Dynamic context](https://qodo-merge-docs.qodo.ai/core-abilities/dynamic_context/)
diff --git a/docs/docs/installation/gitea.md b/docs/docs/installation/gitea.md
new file mode 100644
index 00000000..476497f7
--- /dev/null
+++ b/docs/docs/installation/gitea.md
@@ -0,0 +1,46 @@
+## Run a Gitea webhook server
+
+1. In Gitea, create a new user and give it the "Reporter" role (or "Developer" if using the Pro version of the agent) for the intended group or project.
+
+2. For the user from step 1, generate a `personal_access_token` with `api` access.
+
+3. Generate a random secret for your app, and save it for later (`webhook_secret`). For example, you can use:
+
+```bash
+WEBHOOK_SECRET=$(python -c "import secrets; print(secrets.token_hex(10))")
+```
+
+4. Clone this repository:
+
+```bash
+git clone https://github.com/qodo-ai/pr-agent.git
+```
+
+5. Prepare variables and secrets. Skip this step if you plan on setting these as environment variables when running the agent:
+    1. In the configuration file/variables:
+       - Set `config.git_provider` to "gitea"
+
+    2. In the secrets file/variables:
+       - Set your AI model key in the respective section
+       - In the [Gitea] section, set `personal_access_token` (with token from step 2) and `webhook_secret` (with secret from step 3)
+
+6. Build a Docker image for the app and optionally push it to a Docker repository. We'll use Dockerhub as an example:
+
+```bash
+docker build -f docker/Dockerfile -t codiumai/pr-agent:gitea_app --target gitea_app .
+docker push codiumai/pr-agent:gitea_app # Push to your Docker repository
+```
+
+7. Set the environment variables; the method depends on your Docker runtime. Skip this step if you included your secrets/configuration directly in the Docker image.
+
+```bash
+CONFIG__GIT_PROVIDER=gitea
+GITEA__PERSONAL_ACCESS_TOKEN=
+GITEA__WEBHOOK_SECRET=
+GITEA__URL=https://gitea.com # Or self host
+OPENAI__KEY=
+```
+
+8. Create a webhook in your Gitea project. Set the URL to `http[s]:///api/v1/gitea_webhooks`, the secret token to the generated secret from step 3, and enable the triggers `push`, `comments` and `merge request events`.
+
+9. Test your installation by opening a merge request or commenting on a merge request using one of PR Agent's commands.
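+
+For reference, a minimal sketch of the configuration and secrets entries described in step 5 (key names mirror the environment variables in step 7; check `pr_agent/settings/.secrets_template.toml` for the authoritative layout):
+
+```toml
+[config]
+git_provider = "gitea"
+
+[gitea]
+personal_access_token = ""  # token generated in step 2
+webhook_secret = ""         # secret generated in step 3
+url = "https://gitea.com"   # or your self-hosted Gitea instance
+```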
diff --git a/docs/docs/installation/github.md b/docs/docs/installation/github.md
index 3eeace4f..69b34b8a 100644
--- a/docs/docs/installation/github.md
+++ b/docs/docs/installation/github.md
@@ -203,6 +203,28 @@ For example: `GITHUB.WEBHOOK_SECRET` --> `GITHUB__WEBHOOK_SECRET`
7. Go back to steps 8-9 of [Method 5](#run-as-a-github-app) with the function url as your Webhook URL.
The Webhook URL would look like `https:///api/v1/github_webhooks`
+### Using AWS Secrets Manager
+
+For production Lambda deployments, use AWS Secrets Manager instead of environment variables:
+
+1. Create a secret in AWS Secrets Manager with JSON format like this:
+
+```json
+{
+ "openai.key": "sk-proj-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
+ "github.webhook_secret": "your-webhook-secret-from-step-2",
+ "github.private_key": "-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEA...\n-----END RSA PRIVATE KEY-----"
+}
+```
+
+2. Add IAM permission `secretsmanager:GetSecretValue` to your Lambda execution role
+3. Set these environment variables in your Lambda:
+
+```bash
+AWS_SECRETS_MANAGER__SECRET_ARN=arn:aws:secretsmanager:us-east-1:123456789012:secret:pr-agent-secrets-AbCdEf
+CONFIG__SECRET_PROVIDER=aws_secrets_manager
+```
+
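+For reference, such a secret can be created with the standard AWS CLI (a sketch; the secret name, region, and file path are illustrative):
+
+```bash
+aws secretsmanager create-secret \
+  --name pr-agent-secrets \
+  --region us-east-1 \
+  --secret-string file://pr-agent-secrets.json
+```
+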
---
## AWS CodeCommit Setup
diff --git a/docs/docs/installation/index.md b/docs/docs/installation/index.md
index 9831078d..cc593deb 100644
--- a/docs/docs/installation/index.md
+++ b/docs/docs/installation/index.md
@@ -9,6 +9,7 @@ There are several ways to use self-hosted PR-Agent:
- [GitLab integration](./gitlab.md)
- [BitBucket integration](./bitbucket.md)
- [Azure DevOps integration](./azure.md)
+- [Gitea integration](./gitea.md)
## Qodo Merge 💎
diff --git a/docs/docs/installation/locally.md b/docs/docs/installation/locally.md
index cd981f96..9ceb077b 100644
--- a/docs/docs/installation/locally.md
+++ b/docs/docs/installation/locally.md
@@ -1,7 +1,7 @@
To run PR-Agent locally, you first need to acquire two keys:
1. An OpenAI key from [here](https://platform.openai.com/api-keys){:target="_blank"}, with access to GPT-4 and o4-mini (or a key for other [language models](https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/), if you prefer).
-2. A personal access token from your Git platform (GitHub, GitLab, BitBucket) with repo scope. GitHub token, for example, can be issued from [here](https://github.com/settings/tokens){:target="_blank"}
+2. A personal access token from your Git platform (GitHub, GitLab, BitBucket, Gitea) with repo scope. A GitHub token, for example, can be issued from [here](https://github.com/settings/tokens){:target="_blank"}
## Using Docker image
@@ -40,6 +40,19 @@ To invoke a tool (for example `review`), you can run PR-Agent directly from the
docker run --rm -it -e CONFIG.GIT_PROVIDER=bitbucket -e OPENAI.KEY=$OPENAI_API_KEY -e BITBUCKET.BEARER_TOKEN=$BITBUCKET_BEARER_TOKEN codiumai/pr-agent:latest --pr_url= review
```
+- For Gitea:
+
+ ```bash
+ docker run --rm -it -e OPENAI.KEY= -e CONFIG.GIT_PROVIDER=gitea -e GITEA.PERSONAL_ACCESS_TOKEN= codiumai/pr-agent:latest --pr_url review
+ ```
+
+  If you have a dedicated Gitea instance, you need to specify its custom URL as a variable:
+
+ ```bash
+ -e GITEA.URL=
+ ```
+
+
For other git providers, update `CONFIG.GIT_PROVIDER` accordingly and check the [`pr_agent/settings/.secrets_template.toml`](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/.secrets_template.toml) file for environment variables expected names and values.
### Utilizing environment variables
diff --git a/docs/docs/recent_updates/index.md b/docs/docs/recent_updates/index.md
index 158fb5ca..84b362f0 100644
--- a/docs/docs/recent_updates/index.md
+++ b/docs/docs/recent_updates/index.md
@@ -1,21 +1,23 @@
# Recent Updates and Future Roadmap
-`Page last updated: 2025-05-11`
+`Page last updated: 2025-06-01`
This page summarizes recent enhancements to Qodo Merge (last three months).
It also outlines our development roadmap for the upcoming three months. Please note that the roadmap is subject to change, and features may be adjusted, added, or reprioritized.
=== "Recent Updates"
    + - **CLI Endpoint**: A new Qodo Merge endpoint that accepts a list of before/after code changes, executes Qodo Merge commands, and returns the results. Currently available for enterprise customers. Contact [Qodo](https://www.qodo.ai/contact/) for more information.
    + - **Linear tickets support**: Qodo Merge now supports Linear tickets. ([Learn more](https://qodo-merge-docs.qodo.ai/core-abilities/fetching_ticket_context/#linear-integration))
    + - **Smart Update**: Upon PR updates, Qodo Merge will offer tailored code suggestions, addressing both the entire PR and the specific incremental changes since the last feedback ([Learn more](https://qodo-merge-docs.qodo.ai/core-abilities/incremental_update/))
- **Qodo Merge Pull Request Benchmark** - evaluating the performance of LLMs in analyzing pull request code ([Learn more](https://qodo-merge-docs.qodo.ai/pr_benchmark/))
- - **Chat on Suggestions**: Users can now chat with Qodo Merge code suggestions ([Learn more](https://qodo-merge-docs.qodo.ai/tools/improve/#chat-on-code-suggestions))
+ - **Chat on Suggestions**: Users can now chat with code suggestions ([Learn more](https://qodo-merge-docs.qodo.ai/tools/improve/#chat-on-code-suggestions))
- **Scan Repo Discussions Tool**: A new tool that analyzes past code discussions to generate a `best_practices.md` file, distilling key insights and recommendations. ([Learn more](https://qodo-merge-docs.qodo.ai/tools/scan_repo_discussions/))
- - **Enhanced Models**: Qodo Merge now defaults to a combination of top models (Claude Sonnet 3.7 and Gemini 2.5 Pro) and incorporates dedicated code validation logic for improved results. ([Details 1](https://qodo-merge-docs.qodo.ai/usage-guide/qodo_merge_models/), [Details 2](https://qodo-merge-docs.qodo.ai/core-abilities/code_validation/))
- - **Chrome Extension Update**: Qodo Merge Chrome extension now supports single-tenant users. ([Learn more](https://qodo-merge-docs.qodo.ai/chrome-extension/options/#configuration-options/))
+
=== "Future Roadmap"
- - **Smart Update**: Upon PR updates, Qodo Merge will offer tailored code suggestions, addressing both the entire PR and the specific incremental changes since the last feedback.
- - **CLI Endpoint**: A new Qodo Merge endpoint will accept lists of before/after code changes, execute Qodo Merge commands, and return the results.
- **Simplified Free Tier**: We plan to transition from a two-week free trial to a free tier offering a limited number of suggestions per month per organization.
- **Best Practices Hierarchy**: Introducing support for structured best practices, such as for folders in monorepos or a unified best practice file for a group of repositories.
- - **Installation Metrics**: Upon installation, Qodo Merge will analyze past PRs for key metrics (e.g., time to merge, time to first reviewer feedback), enabling pre/post-installation comparison to calculate ROI.
\ No newline at end of file
+    - **Enhanced `review` tool**: Enhancing the `review` tool to validate compliance across multiple categories, including security, tickets, and custom best practices.
+ - **Smarter context retrieval**: Leverage AST and LSP analysis to gather relevant context from across the entire repository.
+ - **Enhanced portal experience**: Improved user experience in the Qodo Merge portal with new options and capabilities.
diff --git a/docs/docs/tools/describe.md b/docs/docs/tools/describe.md
index 1114ffdc..143fd2d6 100644
--- a/docs/docs/tools/describe.md
+++ b/docs/docs/tools/describe.md
@@ -125,8 +125,8 @@ enable_pr_diagram = true
If set to true, the tool will display a help text in the comment. Default is false. |
- add_diagram |
- If set to true, the tool will generate a Mermaid sequence diagram (in code block format) describing component interactions based on the code changes. Default is false. |
+ enable_pr_diagram |
+ If set to true, the tool will generate a horizontal Mermaid flowchart summarizing the main pull request changes. This field remains empty if not applicable. Default is false. |
diff --git a/docs/docs/tools/implement.md b/docs/docs/tools/implement.md
index 93401425..83ccd101 100644
--- a/docs/docs/tools/implement.md
+++ b/docs/docs/tools/implement.md
@@ -7,50 +7,50 @@ It leverages LLM technology to transform PR comments and review suggestions into
## Usage Scenarios
-### For Reviewers
+=== "For Reviewers"
-Reviewers can request code changes by:
+ Reviewers can request code changes by:
-1. Selecting the code block to be modified.
-2. Adding a comment with the syntax:
+ 1. Selecting the code block to be modified.
+ 2. Adding a comment with the syntax:
-```
-/implement
-```
+ ```
+ /implement
+ ```
-{width=640}
+ {width=640}
-### For PR Authors
+=== "For PR Authors"
-PR authors can implement suggested changes by replying to a review comment using either:
+ PR authors can implement suggested changes by replying to a review comment using either:
-1. Add specific implementation details as described above
+ 1. Add specific implementation details as described above
-```
-/implement
-```
+ ```
+ /implement
+ ```
-2. Use the original review comment as instructions
+ 2. Use the original review comment as instructions
-```
-/implement
-```
+ ```
+ /implement
+ ```
-{width=640}
+ {width=640}
-### For Referencing Comments
+=== "For Referencing Comments"
-You can reference and implement changes from any comment by:
+ You can reference and implement changes from any comment by:
-```
-/implement
-```
+ ```
+ /implement
+ ```
-{width=640}
+ {width=640}
-Note that the implementation will occur within the review discussion thread.
+ Note that the implementation will occur within the review discussion thread.
-**Configuration options**
+## Configuration options
- Use `/implement` to implement code change within and based on the review discussion.
- Use `/implement ` inside a review discussion to implement specific instructions.
diff --git a/docs/docs/tools/improve.md b/docs/docs/tools/improve.md
index 54ece175..2ca0c74c 100644
--- a/docs/docs/tools/improve.md
+++ b/docs/docs/tools/improve.md
@@ -288,45 +288,6 @@ We advise users to apply critical analysis and judgment when implementing the pr
In addition to mistakes (which may happen, but are rare), sometimes the presented code modification may serve more as an _illustrative example_ than a directly applicable solution.
In such cases, we recommend prioritizing the suggestion's detailed description, using the diff snippet primarily as a supporting reference.
-
-### Chat on code suggestions
-
-> `💎 feature` Platforms supported: GitHub, GitLab
-
-Qodo Merge implements an orchestrator agent that enables interactive code discussions, listening and responding to comments without requiring explicit tool calls.
-The orchestrator intelligently analyzes your responses to determine if you want to implement a suggestion, ask a question, or request help, then delegates to the appropriate specialized tool.
-
-#### Setup and Activation
-
-Enable interactive code discussions by adding the following to your configuration file (default is `True`):
-
-```toml
-[pr_code_suggestions]
-enable_chat_in_code_suggestions = true
-```
-
-!!! info "Activating Dynamic Responses"
- To obtain dynamic responses, the following steps are required:
-
- 1. Run the `/improve` command (mostly automatic)
- 2. Tick the `/improve` recommendation checkboxes (_Apply this suggestion_) to have Qodo Merge generate a new inline code suggestion discussion
- 3. The orchestrator agent will then automatically listen and reply to comments within the discussion without requiring additional commands
-
-#### Explore the available interaction patterns:
-
-!!! tip "Tip: Direct the agent with keywords"
- Use "implement" or "apply" for code generation. Use "explain", "why", or "how" for information and help.
-
-=== "Asking for Details"
- {width=512}
-
-=== "Implementing Suggestions"
- {width=512}
-
-=== "Providing Additional Help"
- {width=512}
-
-
### Dual publishing mode
Our recommended approach for presenting code suggestions is through a [table](https://qodo-merge-docs.qodo.ai/tools/improve/#overview) (`--pr_code_suggestions.commitable_code_suggestions=false`).
diff --git a/docs/docs/tools/index.md b/docs/docs/tools/index.md
index e422b856..e50e0785 100644
--- a/docs/docs/tools/index.md
+++ b/docs/docs/tools/index.md
@@ -3,22 +3,23 @@
Here is a list of Qodo Merge tools, each with a dedicated page that explains how to use it:
| Tool | Description |
-| ---------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------- |
+|------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------|
| **[PR Description (`/describe`](./describe.md))** | Automatically generating PR description - title, type, summary, code walkthrough and labels |
| **[PR Review (`/review`](./review.md))** | Adjustable feedback about the PR, possible issues, security concerns, review effort and more |
| **[Code Suggestions (`/improve`](./improve.md))** | Code suggestions for improving the PR |
| **[Question Answering (`/ask ...`](./ask.md))** | Answering free-text questions about the PR, or on specific code lines |
-| **[Update Changelog (`/update_changelog`](./update_changelog.md))** | Automatically updating the CHANGELOG.md file with the PR changes |
| **[Help (`/help`](./help.md))** | Provides a list of all the available tools. Also enables to trigger them interactively (💎) |
+| **[Help Docs (`/help_docs`](./help_docs.md))** | Answer a free-text question based on a git documentation folder. |
+| **[Update Changelog (`/update_changelog`](./update_changelog.md))** | Automatically updating the CHANGELOG.md file with the PR changes |
| **💎 [Add Documentation (`/add_docs`](./documentation.md))** | Generates documentation to methods/functions/classes that changed in the PR |
-| **💎 [Generate Custom Labels (`/generate_labels`](./custom_labels.md))** | Generates custom labels for the PR, based on specific guidelines defined by the user |
| **💎 [Analyze (`/analyze`](./analyze.md))** | Identify code components that changed in the PR, and enables to interactively generate tests, docs, and code suggestions for each component |
-| **💎 [Test (`/test`](./test.md))** | generate tests for a selected component, based on the PR code changes |
-| **💎 [Custom Prompt (`/custom_prompt`](./custom_prompt.md))** | Automatically generates custom suggestions for improving the PR code, based on specific guidelines defined by the user |
-| **💎 [Generate Tests (`/test component_name`](./test.md))** | Automatically generates unit tests for a selected component, based on the PR code changes |
-| **💎 [Improve Component (`/improve_component component_name`](./improve_component.md))** | Generates code suggestions for a specific code component that changed in the PR |
| **💎 [CI Feedback (`/checks ci_job`](./ci_feedback.md))** | Automatically generates feedback and analysis for a failed CI job |
+| **💎 [Custom Prompt (`/custom_prompt`](./custom_prompt.md))** | Automatically generates custom suggestions for improving the PR code, based on specific guidelines defined by the user |
+| **💎 [Generate Custom Labels (`/generate_labels`](./custom_labels.md))** | Generates custom labels for the PR, based on specific guidelines defined by the user |
+| **💎 [Generate Tests (`/test`](./test.md))** | Automatically generates unit tests for a selected component, based on the PR code changes |
| **💎 [Implement (`/implement`](./implement.md))** | Generates implementation code from review suggestions |
+| **💎 [Improve Component (`/improve_component component_name`](./improve_component.md))** | Generates code suggestions for a specific code component that changed in the PR |
| **💎 [Scan Repo Discussions (`/scan_repo_discussions`](./scan_repo_discussions.md))** | Generates `best_practices.md` file based on previous discussions in the repository |
+| **💎 [Similar Code (`/similar_code`](./similar_code.md))** | Retrieves the most similar code components from inside the organization's codebase, or from open-source code. |
-Note that the tools marked with 💎 are available only for Qodo Merge users.
+Note that the tools marked with 💎 are available only for Qodo Merge users.
\ No newline at end of file
diff --git a/docs/docs/tools/review.md b/docs/docs/tools/review.md
index 899b58a3..b94d6394 100644
--- a/docs/docs/tools/review.md
+++ b/docs/docs/tools/review.md
@@ -144,16 +144,26 @@ extra_instructions = "..."
Meaning the `review` tool will run automatically on every PR, without any additional configurations.
Edit this field to enable/disable the tool, or to change the configurations used.
-### Auto-generated PR labels from the Review Tool
+### Auto-generated PR labels by the Review Tool
!!! tip ""
- The `review` tool automatically adds two specific labels to your Pull Requests:
+    The `review` tool can automatically add labels to your Pull Requests:
- - **`possible security issue`**: This label is applied if the tool detects a potential [security vulnerability](hhttps://github.com/qodo-ai/pr-agent/blob/main/pr_agent/settings/pr_reviewer_prompts.toml#L103) in the PR's code. This feedback is controlled by the 'enable_review_labels_security' flag.
- - **`review effort [x/5]`**: This label estimates the [effort](https://github.com/qodo-ai/pr-agent/blob/main/pr_agent/settings/pr_reviewer_prompts.toml#L90) required to review the PR on a relative scale of 1 to 5, where 'x' represents the assessed effort. This feedback is controlled by the 'enable_review_labels_effort' flag.
+ - **`possible security issue`**: This label is applied if the tool detects a potential [security vulnerability](https://github.com/qodo-ai/pr-agent/blob/main/pr_agent/settings/pr_reviewer_prompts.toml#L103) in the PR's code. This feedback is controlled by the 'enable_review_labels_security' flag (default is true).
+ - **`review effort [x/5]`**: This label estimates the [effort](https://github.com/qodo-ai/pr-agent/blob/main/pr_agent/settings/pr_reviewer_prompts.toml#L90) required to review the PR on a relative scale of 1 to 5, where 'x' represents the assessed effort. This feedback is controlled by the 'enable_review_labels_effort' flag (default is true).
+    - **`ticket compliance`**: Adds a label indicating the code's compliance level ("Fully compliant" | "PR Code Verified" | "Partially compliant" | "Not compliant") with any GitHub/Jira/Linear ticket linked in the PR. Controlled by the 'require_ticket_labels' flag (default: false). If 'require_no_ticket_labels' is also enabled, PRs without ticket links will receive a "No ticket found" label.
- Note: The `possible security issue` label highlights potential security risks. You can configure a GitHub Action to [prevent merging](https://medium.com/sequra-tech/quick-tip-block-pull-request-merge-using-labels-6cc326936221) PRs that have this label.
+
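+For reference, a minimal sketch of the corresponding configuration (assuming these flags sit under the `[pr_reviewer]` section, alongside the other `review` settings):
+
+```toml
+[pr_reviewer]
+enable_review_labels_security = true
+enable_review_labels_effort = true
+require_ticket_labels = false
+require_no_ticket_labels = false
+```
+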
+### Blocking PRs from merging based on the generated labels
+
+!!! tip ""
+
+ You can configure a CI/CD Action to prevent merging PRs with specific labels. For example, implement a dedicated [GitHub Action](https://medium.com/sequra-tech/quick-tip-block-pull-request-merge-using-labels-6cc326936221).
+
+ This approach helps ensure PRs with potential security issues or ticket compliance problems will not be merged without further review.
+
+ Since AI may make mistakes or lack complete context, use this feature judiciously. For flexibility, users with appropriate permissions can remove generated labels when necessary. When a label is removed, this action will be automatically documented in the PR discussion, clearly indicating it was a deliberate override by an authorized user to allow the merge.
### Extra instructions
diff --git a/docs/docs/usage-guide/additional_configurations.md b/docs/docs/usage-guide/additional_configurations.md
index 9f9202f6..8d205865 100644
--- a/docs/docs/usage-guide/additional_configurations.md
+++ b/docs/docs/usage-guide/additional_configurations.md
@@ -249,4 +249,4 @@ ignore_pr_authors = ["my-special-bot-user", ...]
Where the `ignore_pr_authors` is a list of usernames that you want to ignore.
!!! note
- There is one specific case where bots will receive an automatic response - when they generated a PR with a _failed test_. In that case, the [`ci_feedback`](https://qodo-merge-docs.qodo.ai/tools/ci_feedback/) tool will be invoked.
\ No newline at end of file
+ There is one specific case where bots will receive an automatic response - when they generated a PR with a _failed test_. In that case, the [`ci_feedback`](https://qodo-merge-docs.qodo.ai/tools/ci_feedback/) tool will be invoked.
diff --git a/docs/docs/usage-guide/automations_and_usage.md b/docs/docs/usage-guide/automations_and_usage.md
index 9c3e29fd..0a634e77 100644
--- a/docs/docs/usage-guide/automations_and_usage.md
+++ b/docs/docs/usage-guide/automations_and_usage.md
@@ -30,7 +30,7 @@ verbosity_level=2
This is useful for debugging or experimenting with different tools.
3. **git provider**: The [git_provider](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/configuration.toml#L5) field in a configuration file determines the GIT provider that will be used by Qodo Merge. Currently, the following providers are supported:
-`github` **(default)**, `gitlab`, `bitbucket`, `azure`, `codecommit`, `local`, and `gerrit`.
+`github` **(default)**, `gitlab`, `bitbucket`, `azure`, `codecommit`, `local`, `gitea`, and `gerrit`.
### CLI Health Check
@@ -312,3 +312,16 @@ pr_commands = [
"/improve",
]
```
+
+### Gitea Webhook
+
+After setting up a Gitea webhook, to control which commands will run automatically when a new PR is opened, you can set the `pr_commands` parameter in the configuration file, similar to the GitHub App:
+
+```toml
+[gitea]
+pr_commands = [
+ "/describe",
+ "/review",
+ "/improve",
+]
+```
diff --git a/docs/docs/usage-guide/index.md b/docs/docs/usage-guide/index.md
index dba5a569..79df0be6 100644
--- a/docs/docs/usage-guide/index.md
+++ b/docs/docs/usage-guide/index.md
@@ -12,6 +12,7 @@ It includes information on how to adjust Qodo Merge configurations, define which
- [GitHub App](./automations_and_usage.md#github-app)
- [GitHub Action](./automations_and_usage.md#github-action)
- [GitLab Webhook](./automations_and_usage.md#gitlab-webhook)
+ - [Gitea Webhook](./automations_and_usage.md#gitea-webhook)
- [BitBucket App](./automations_and_usage.md#bitbucket-app)
- [Azure DevOps Provider](./automations_and_usage.md#azure-devops-provider)
- [Managing Mail Notifications](./mail_notifications.md)
diff --git a/docs/mkdocs.yml b/docs/mkdocs.yml
index a25c081b..74f98fb8 100644
--- a/docs/mkdocs.yml
+++ b/docs/mkdocs.yml
@@ -16,6 +16,7 @@ nav:
- Introduction: 'usage-guide/introduction.md'
- Enabling a Wiki: 'usage-guide/enabling_a_wiki.md'
- Configuration File: 'usage-guide/configuration_options.md'
+ - AWS Secrets Manager: 'usage-guide/aws_secrets_manager.md'
- Usage and Automation: 'usage-guide/automations_and_usage.md'
- Managing Mail Notifications: 'usage-guide/mail_notifications.md'
- Changing a Model: 'usage-guide/changing_a_model.md'
@@ -23,27 +24,28 @@ nav:
- Frequently Asked Questions: 'faq/index.md'
  - 💎 Qodo Merge Models: 'usage-guide/qodo_merge_models.md'
- Tools:
- - 'tools/index.md'
- - Describe: 'tools/describe.md'
- - Review: 'tools/review.md'
- - Improve: 'tools/improve.md'
- - Ask: 'tools/ask.md'
- - Update Changelog: 'tools/update_changelog.md'
- - Help Docs: 'tools/help_docs.md'
- - Help: 'tools/help.md'
-    - 💎 Analyze: 'tools/analyze.md'
-    - 💎 Test: 'tools/test.md'
-    - 💎 Improve Component: 'tools/improve_component.md'
-    - 💎 Documentation: 'tools/documentation.md'
-    - 💎 Custom Labels: 'tools/custom_labels.md'
-    - 💎 Custom Prompt: 'tools/custom_prompt.md'
-    - 💎 CI Feedback: 'tools/ci_feedback.md'
-    - 💎 Similar Code: 'tools/similar_code.md'
-    - 💎 Implement: 'tools/implement.md'
-    - 💎 Scan Repo Discussions: 'tools/scan_repo_discussions.md'
+ - 'tools/index.md'
+ - Describe: 'tools/describe.md'
+ - Review: 'tools/review.md'
+ - Improve: 'tools/improve.md'
+ - Ask: 'tools/ask.md'
+ - Help: 'tools/help.md'
+ - Help Docs: 'tools/help_docs.md'
+ - Update Changelog: 'tools/update_changelog.md'
+    - 💎 Add Documentation: 'tools/documentation.md'
+    - 💎 Analyze: 'tools/analyze.md'
+    - 💎 CI Feedback: 'tools/ci_feedback.md'
+    - 💎 Custom Prompt: 'tools/custom_prompt.md'
+    - 💎 Generate Labels: 'tools/custom_labels.md'
+    - 💎 Generate Tests: 'tools/test.md'
+    - 💎 Implement: 'tools/implement.md'
+    - 💎 Improve Components: 'tools/improve_component.md'
+    - 💎 Scan Repo Discussions: 'tools/scan_repo_discussions.md'
+    - 💎 Similar Code: 'tools/similar_code.md'
- Core Abilities:
- 'core-abilities/index.md'
- Auto best practices: 'core-abilities/auto_best_practices.md'
+ - Chat on code suggestions: 'core-abilities/chat_on_code_suggestions.md'
- Code validation: 'core-abilities/code_validation.md'
- Compression strategy: 'core-abilities/compression_strategy.md'
- Dynamic context: 'core-abilities/dynamic_context.md'
diff --git a/pr_agent/algo/ai_handlers/langchain_ai_handler.py b/pr_agent/algo/ai_handlers/langchain_ai_handler.py
index 4d708fcb..2d4fa08b 100644
--- a/pr_agent/algo/ai_handlers/langchain_ai_handler.py
+++ b/pr_agent/algo/ai_handlers/langchain_ai_handler.py
@@ -1,6 +1,9 @@
+_LANGCHAIN_INSTALLED = False
+
try:
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import AzureChatOpenAI, ChatOpenAI
+ _LANGCHAIN_INSTALLED = True
except: # we don't enforce langchain as a dependency, so if it's not installed, just move on
pass
@@ -8,6 +11,7 @@ import functools
import openai
from tenacity import retry, retry_if_exception_type, retry_if_not_exception_type, stop_after_attempt
+from langchain_core.runnables import Runnable
from pr_agent.algo.ai_handlers.base_ai_handler import BaseAiHandler
from pr_agent.config_loader import get_settings
@@ -18,17 +22,14 @@ OPENAI_RETRIES = 5
class LangChainOpenAIHandler(BaseAiHandler):
def __init__(self):
- # Initialize OpenAIHandler specific attributes here
+ if not _LANGCHAIN_INSTALLED:
+ error_msg = "LangChain is not installed. Please install it with `pip install langchain`."
+ get_logger().error(error_msg)
+ raise ImportError(error_msg)
+
super().__init__()
self.azure = get_settings().get("OPENAI.API_TYPE", "").lower() == "azure"
- # Create a default unused chat object to trigger early validation
- self._create_chat(self.deployment_id)
-
- def chat(self, messages: list, model: str, temperature: float):
- chat = self._create_chat(self.deployment_id)
- return chat.invoke(input=messages, model=model, temperature=temperature)
-
@property
def deployment_id(self):
"""
@@ -36,16 +37,66 @@ class LangChainOpenAIHandler(BaseAiHandler):
"""
return get_settings().get("OPENAI.DEPLOYMENT_ID", None)
+ async def _create_chat_async(self, deployment_id=None):
+ try:
+ if self.azure:
+ # Using Azure OpenAI service
+ return AzureChatOpenAI(
+ openai_api_key=get_settings().openai.key,
+ openai_api_version=get_settings().openai.api_version,
+ azure_deployment=deployment_id,
+ azure_endpoint=get_settings().openai.api_base,
+ )
+ else:
+ # Using standard OpenAI or other LLM services
+ openai_api_base = get_settings().get("OPENAI.API_BASE", None)
+ if openai_api_base is None or len(openai_api_base) == 0:
+ return ChatOpenAI(openai_api_key=get_settings().openai.key)
+ else:
+ return ChatOpenAI(
+ openai_api_key=get_settings().openai.key,
+ openai_api_base=openai_api_base
+ )
+ except AttributeError as e:
+ # Handle configuration errors
+            error_msg = f"OpenAI {e.name} is required" if getattr(e, "name", None) else str(e)
+ get_logger().error(error_msg)
+ raise ValueError(error_msg) from e
+
@retry(
retry=retry_if_exception_type(openai.APIError) & retry_if_not_exception_type(openai.RateLimitError),
stop=stop_after_attempt(OPENAI_RETRIES),
)
- async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2):
+ async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2, img_path: str = None):
+ if img_path:
+ get_logger().warning(f"Image path is not supported for LangChainOpenAIHandler. Ignoring image path: {img_path}")
try:
messages = [SystemMessage(content=system), HumanMessage(content=user)]
+ llm = await self._create_chat_async(deployment_id=self.deployment_id)
+
+ if not isinstance(llm, Runnable):
+ error_message = (
+ f"The Langchain LLM object ({type(llm)}) does not implement the Runnable interface. "
+ f"Please update your Langchain library to the latest version or "
+ f"check your LLM configuration to support async calls. "
+ f"PR-Agent is designed to utilize Langchain's async capabilities."
+ )
+ get_logger().error(error_message)
+ raise NotImplementedError(error_message)
+
+ # Handle parameters based on LLM type
+ if isinstance(llm, (ChatOpenAI, AzureChatOpenAI)):
+ # OpenAI models support all parameters
+ resp = await llm.ainvoke(
+ input=messages,
+ model=model,
+ temperature=temperature
+ )
+ else:
+ # Other LLMs (like Gemini) only support input parameter
+ get_logger().info(f"Using simplified ainvoke for {type(llm)}")
+ resp = await llm.ainvoke(input=messages)
- # get a chat completion from the formatted messages
- resp = self.chat(messages, model=model, temperature=temperature)
finish_reason = "completed"
return resp.content, finish_reason
@@ -58,27 +109,3 @@ class LangChainOpenAIHandler(BaseAiHandler):
except Exception as e:
get_logger().warning(f"Unknown error during LLM inference: {e}")
raise openai.APIError from e
-
- def _create_chat(self, deployment_id=None):
- try:
- if self.azure:
- # using a partial function so we can set the deployment_id later to support fallback_deployments
- # but still need to access the other settings now so we can raise a proper exception if they're missing
- return AzureChatOpenAI(
- openai_api_key=get_settings().openai.key,
- openai_api_version=get_settings().openai.api_version,
- azure_deployment=deployment_id,
- azure_endpoint=get_settings().openai.api_base,
- )
- else:
- # for llms that compatible with openai, should use custom api base
- openai_api_base = get_settings().get("OPENAI.API_BASE", None)
- if openai_api_base is None or len(openai_api_base) == 0:
- return ChatOpenAI(openai_api_key=get_settings().openai.key)
- else:
- return ChatOpenAI(openai_api_key=get_settings().openai.key, openai_api_base=openai_api_base)
- except AttributeError as e:
- if getattr(e, "name"):
- raise ValueError(f"OpenAI {e.name} is required") from e
- else:
- raise e
diff --git a/pr_agent/algo/ai_handlers/openai_ai_handler.py b/pr_agent/algo/ai_handlers/openai_ai_handler.py
index 253282b0..f5fb99f6 100644
--- a/pr_agent/algo/ai_handlers/openai_ai_handler.py
+++ b/pr_agent/algo/ai_handlers/openai_ai_handler.py
@@ -42,8 +42,10 @@ class OpenAIHandler(BaseAiHandler):
retry=retry_if_exception_type(openai.APIError) & retry_if_not_exception_type(openai.RateLimitError),
stop=stop_after_attempt(OPENAI_RETRIES),
)
- async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2):
+ async def chat_completion(self, model: str, system: str, user: str, temperature: float = 0.2, img_path: str = None):
try:
+ if img_path:
+ get_logger().warning(f"Image path is not supported for OpenAIHandler. Ignoring image path: {img_path}")
get_logger().info("System: ", system)
get_logger().info("User: ", user)
messages = [{"role": "system", "content": system}, {"role": "user", "content": user}]
diff --git a/pr_agent/algo/file_filter.py b/pr_agent/algo/file_filter.py
index 5c575eef..79bb4d8e 100644
--- a/pr_agent/algo/file_filter.py
+++ b/pr_agent/algo/file_filter.py
@@ -58,6 +58,9 @@ def filter_ignored(files, platform = 'github'):
files = files_o
elif platform == 'azure':
files = [f for f in files if not r.match(f)]
+ elif platform == 'gitea':
+ files = [f for f in files if not r.match(f.get("filename", ""))]
+
except Exception as e:
print(f"Could not filter file list: {e}")
diff --git a/pr_agent/config_loader.py b/pr_agent/config_loader.py
index 7a62adec..f525d893 100644
--- a/pr_agent/config_loader.py
+++ b/pr_agent/config_loader.py
@@ -81,3 +81,62 @@ def _find_pyproject() -> Optional[Path]:
pyproject_path = _find_pyproject()
if pyproject_path is not None:
get_settings().load_file(pyproject_path, env=f'tool.{PR_AGENT_TOML_KEY}')
+
+
+def apply_secrets_manager_config():
+ """
+ Retrieve configuration from AWS Secrets Manager and override existing settings
+ """
+ try:
+ # Dynamic imports to avoid circular dependency (secret_providers imports config_loader)
+ from pr_agent.secret_providers import get_secret_provider
+ from pr_agent.log import get_logger
+
+ secret_provider = get_secret_provider()
+ if not secret_provider:
+ return
+
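+ # Only the AWS Secrets Manager provider implements get_all_secrets; other providers are skipped here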
+ if (hasattr(secret_provider, 'get_all_secrets') and
+ get_settings().get("CONFIG.SECRET_PROVIDER") == 'aws_secrets_manager'):
+ try:
+ secrets = secret_provider.get_all_secrets()
+ if secrets:
+ apply_secrets_to_config(secrets)
+ get_logger().info("Applied AWS Secrets Manager configuration")
+ except Exception as e:
+ get_logger().error(f"Failed to apply AWS Secrets Manager config: {e}")
+ except Exception as e:
+ try:
+ from pr_agent.log import get_logger
+ get_logger().debug(f"Secret provider not configured: {e}")
+ except Exception:
+ # Fail completely silently if log module is not available
+ pass
+
+
+def apply_secrets_to_config(secrets: dict):
+ """
+ Apply secret dictionary to configuration
+ """
+ try:
+ # Dynamic import to avoid potential circular dependency
+ from pr_agent.log import get_logger
+ except Exception:
+ def get_logger():
+ class DummyLogger:
+ def debug(self, msg): pass
+ return DummyLogger()
+
+ for key, value in secrets.items():
+ if '.' in key: # nested key like "openai.key"
+ parts = key.split('.')
+ if len(parts) == 2:
+ section, setting = parts
+ section_upper = section.upper()
+ setting_upper = setting.upper()
+
+ # Only set when no value exists yet, so environment variables take precedence
+ current_value = get_settings().get(f"{section_upper}.{setting_upper}")
+ if current_value is None or current_value == "":
+ get_settings().set(f"{section_upper}.{setting_upper}", value)
+ get_logger().debug(f"Set {section}.{setting} from AWS Secrets Manager")
diff --git a/pr_agent/git_providers/__init__.py b/pr_agent/git_providers/__init__.py
index 51c6f624..055cdbf1 100644
--- a/pr_agent/git_providers/__init__.py
+++ b/pr_agent/git_providers/__init__.py
@@ -8,6 +8,7 @@ from pr_agent.git_providers.bitbucket_server_provider import \
from pr_agent.git_providers.codecommit_provider import CodeCommitProvider
from pr_agent.git_providers.gerrit_provider import GerritProvider
from pr_agent.git_providers.git_provider import GitProvider
+from pr_agent.git_providers.gitea_provider import GiteaProvider
from pr_agent.git_providers.github_provider import GithubProvider
from pr_agent.git_providers.gitlab_provider import GitLabProvider
from pr_agent.git_providers.local_git_provider import LocalGitProvider
@@ -22,7 +23,7 @@ _GIT_PROVIDERS = {
'codecommit': CodeCommitProvider,
'local': LocalGitProvider,
'gerrit': GerritProvider,
- 'gitea': GiteaProvider,
+ 'gitea': GiteaProvider
}
diff --git a/pr_agent/git_providers/gitea_provider.py b/pr_agent/git_providers/gitea_provider.py
index 1d671558..8805d8f4 100644
--- a/pr_agent/git_providers/gitea_provider.py
+++ b/pr_agent/git_providers/gitea_provider.py
@@ -1,258 +1,992 @@
-from typing import Optional, Tuple, List, Dict
+import hashlib
+import json
+from typing import Any, Dict, List, Optional, Set, Tuple
from urllib.parse import urlparse
-import requests
-from pr_agent.git_providers.git_provider import GitProvider
+
+import giteapy
+from giteapy.rest import ApiException
+
+from pr_agent.algo.file_filter import filter_ignored
+from pr_agent.algo.language_handler import is_valid_file
+from pr_agent.algo.types import EDIT_TYPE
+from pr_agent.algo.utils import (clip_tokens,
+ find_line_number_of_relevant_line_in_file)
from pr_agent.config_loader import get_settings
+from pr_agent.git_providers.git_provider import (MAX_FILES_ALLOWED_FULL,
+ FilePatchInfo, GitProvider,
+ IncrementalPR)
from pr_agent.log import get_logger
-from pr_agent.algo.types import EDIT_TYPE, FilePatchInfo
class GiteaProvider(GitProvider):
- """
- Implements GitProvider for Gitea/Forgejo API v1.
- """
+ def __init__(self, url: Optional[str] = None):
+ super().__init__()
+ self.logger = get_logger()
- def __init__(self, pr_url: Optional[str] = None, incremental: Optional[bool] = False):
- self.gitea_url = get_settings().get("GITEA.URL", None)
- self.gitea_token = get_settings().get("GITEA.TOKEN", None)
- if not self.gitea_url:
- raise ValueError("GITEA.URL is not set in the config file")
- if not self.gitea_token:
- raise ValueError("GITEA.TOKEN is not set in the config file")
- self.headers = {
- 'Authorization': f'token {self.gitea_token}',
- 'Content-Type': 'application/json',
- 'Accept': 'application/json'
- }
+ if not url:
+ self.logger.error("PR URL not provided.")
+ raise ValueError("PR URL not provided.")
+
+ self.base_url = get_settings().get("GITEA.URL", "https://gitea.com").rstrip("/")
+ self.pr_url = ""
+ self.issue_url = ""
+
+ gitea_access_token = get_settings().get("GITEA.PERSONAL_ACCESS_TOKEN", None)
+ if not gitea_access_token:
+ self.logger.error("Gitea access token not found in settings.")
+ raise ValueError("Gitea access token not found in settings.")
+
+ self.repo_settings = get_settings().get("GITEA.REPO_SETTING", None)
+ configuration = giteapy.Configuration()
+ configuration.host = "{}/api/v1".format(self.base_url)
+ configuration.api_key['Authorization'] = f'token {gitea_access_token}'
+
+ client = giteapy.ApiClient(configuration)
+ self.repo_api = RepoApi(client)
self.owner = None
self.repo = None
- self.pr_num = None
+ self.pr_number = None
+ self.issue_number = None
+ self.max_comment_chars = 65000
+ self.enabled_pr = False
+ self.enabled_issue = False
+ self.temp_comments = []
self.pr = None
- self.pr_url = pr_url
- self.incremental = incremental
- if pr_url:
- self.set_pr(pr_url)
+ self.git_files = []
+ self.file_contents = {}
+ self.file_diffs = {}
+ self.sha = None
+ self.diff_files = []
+ self.incremental = IncrementalPR(False)
+ self.comments_list = []
+ self.unreviewed_files_set = dict()
- @staticmethod
- def _parse_pr_url(pr_url: str) -> Tuple[str, str, str]:
- """
- Parse Gitea PR URL to (owner, repo, pr_number)
- """
+ if "pulls" in url:
+ self.pr_url = url
+ self.__set_repo_and_owner_from_pr()
+ self.enabled_pr = True
+ self.pr = self.repo_api.get_pull_request(
+ owner=self.owner,
+ repo=self.repo,
+ pr_number=self.pr_number
+ )
+ self.git_files = self.repo_api.get_change_file_pull_request(
+ owner=self.owner,
+ repo=self.repo,
+ pr_number=self.pr_number
+ )
+ # Optionally filter out files matching user-defined ignore patterns
+ self.git_files = filter_ignored(self.git_files, platform="gitea")
+
+ self.sha = self.pr.head.sha if self.pr.head.sha else ""
+ self.__add_file_content()
+ self.__add_file_diff()
+ self.pr_commits = self.repo_api.list_all_commits(
+ owner=self.owner,
+ repo=self.repo
+ )
+ self.last_commit = self.pr_commits[-1]
+ self.base_sha = self.pr.base.sha if self.pr.base.sha else ""
+ self.base_ref = self.pr.base.ref if self.pr.base.ref else ""
+ elif "issues" in url:
+ self.issue_url = url
+ self.__set_repo_and_owner_from_issue()
+ self.enabled_issue = True
+ else:
+ self.pr_commits = None
+
+ def __add_file_content(self):
+ for file in self.git_files:
+ file_path = file.get("filename")
+ # Skip files excluded by the default file-type filter
+ if not is_valid_file(file_path):
+ continue
+
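+ # Cache head-revision file contents keyed by path for later use in get_diff_files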
+ if file_path and self.sha:
+ try:
+ content = self.repo_api.get_file_content(
+ owner=self.owner,
+ repo=self.repo,
+ commit_sha=self.sha,
+ filepath=file_path
+ )
+ self.file_contents[file_path] = content
+ except ApiException as e:
+ self.logger.error(f"Error getting file content for {file_path}: {str(e)}")
+ self.file_contents[file_path] = ""
+
+ def __add_file_diff(self):
+ try:
+ diff_contents = self.repo_api.get_pull_request_diff(
+ owner=self.owner,
+ repo=self.repo,
+ pr_number=self.pr_number
+ )
+
+ lines = diff_contents.splitlines()
+ current_file = None
+ current_patch = []
+ file_patches = {}
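+ # Walk the unified diff line by line: "diff --git" starts a new file section and "@@" starts a new hunk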
+ for line in lines:
+ if line.startswith('diff --git'):
+ if current_file and current_patch:
+ file_patches[current_file] = '\n'.join(current_patch)
+ current_patch = []
+ current_file = line.split(' b/')[-1]
+ elif line.startswith('@@'):
+ current_patch = [line]
+ elif current_patch:
+ current_patch.append(line)
+
+ if current_file and current_patch:
+ file_patches[current_file] = '\n'.join(current_patch)
+
+ self.file_diffs = file_patches
+ except Exception as e:
+ self.logger.error(f"Error getting diff content: {str(e)}")
+
+ def _parse_pr_url(self, pr_url: str) -> Tuple[str, str, int]:
parsed_url = urlparse(pr_url)
+
+ if parsed_url.path.startswith('/api/v1'):
+ parsed_url = urlparse(pr_url.replace("/api/v1", ""))
+
path_parts = parsed_url.path.strip('/').split('/')
if len(path_parts) < 4 or path_parts[2] != 'pulls':
- raise ValueError(f"Invalid PR URL format: {pr_url}")
- return path_parts[0], path_parts[1], path_parts[3]
+ raise ValueError("The provided URL does not appear to be a Gitea PR URL")
- def set_pr(self, pr_url: str):
- self.owner, self.repo, self.pr_num = self._parse_pr_url(pr_url)
- self.pr = self._get_pr()
+ try:
+ pr_number = int(path_parts[3])
+ except ValueError as e:
+ raise ValueError("Unable to convert PR number to integer") from e
- def _get_pr(self):
- url = f"{self.gitea_url}/api/v1/repos/{self.owner}/{self.repo}/pulls/{self.pr_num}"
- response = requests.get(url, headers=self.headers)
- response.raise_for_status()
- return response.json()
+ owner = path_parts[0]
+ repo = path_parts[1]
- def is_supported(self, capability: str) -> bool:
- # Gitea/Forgejo supports most capabilities
- return True
+ return owner, repo, pr_number
- def get_files(self) -> List[str]:
- url = f"{self.gitea_url}/api/v1/repos/{self.owner}/{self.repo}/pulls/{self.pr_num}/files"
- response = requests.get(url, headers=self.headers)
- response.raise_for_status()
- return [file['filename'] for file in response.json()]
+ def _parse_issue_url(self, issue_url: str) -> Tuple[str, str, int]:
+ parsed_url = urlparse(issue_url)
- def get_diff_files(self) -> List[FilePatchInfo]:
- url = f"{self.gitea_url}/api/v1/repos/{self.owner}/{self.repo}/pulls/{self.pr_num}/files"
- response = requests.get(url, headers=self.headers)
- response.raise_for_status()
+ if parsed_url.path.startswith('/api/v1'):
+ parsed_url = urlparse(issue_url.replace("/api/v1", ""))
- diff_files = []
- for file in response.json():
- edit_type = EDIT_TYPE.MODIFIED
- if file.get('status') == 'added':
- edit_type = EDIT_TYPE.ADDED
- elif file.get('status') == 'deleted':
- edit_type = EDIT_TYPE.DELETED
- elif file.get('status') == 'renamed':
- edit_type = EDIT_TYPE.RENAMED
+ path_parts = parsed_url.path.strip('/').split('/')
+ if len(path_parts) < 4 or path_parts[2] != 'issues':
+ raise ValueError("The provided URL does not appear to be a Gitea issue URL")
- diff_files.append(
- FilePatchInfo(
- file.get('previous_filename', ''),
- file.get('filename', ''),
- file.get('patch', ''),
- file['filename'],
- edit_type=edit_type,
- old_filename=file.get('previous_filename')
- )
- )
- return diff_files
+ try:
+ issue_number = int(path_parts[3])
+ except ValueError as e:
+ raise ValueError("Unable to convert issue number to integer") from e
- def publish_description(self, pr_title: str, pr_body: str):
- url = f"{self.gitea_url}/api/v1/repos/{self.owner}/{self.repo}/pulls/{self.pr_num}"
- data = {'title': pr_title, 'body': pr_body}
- response = requests.patch(url, headers=self.headers, json=data)
- response.raise_for_status()
+ owner = path_parts[0]
+ repo = path_parts[1]
- def publish_comment(self, pr_comment: str, is_temporary: bool = False):
- url = f"{self.gitea_url}/api/v1/repos/{self.owner}/{self.repo}/issues/{self.pr_num}/comments"
- data = {'body': pr_comment}
- response = requests.post(url, headers=self.headers, json=data)
- response.raise_for_status()
+ return owner, repo, issue_number
- def publish_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str,
- original_suggestion=None):
- url = f"{self.gitea_url}/api/v1/repos/{self.owner}/{self.repo}/pulls/{self.pr_num}/reviews"
+ def __set_repo_and_owner_from_pr(self):
+ """Extract owner and repo from the PR URL"""
+ try:
+ owner, repo, pr_number = self._parse_pr_url(self.pr_url)
+ self.owner = owner
+ self.repo = repo
+ self.pr_number = pr_number
+ self.logger.info(f"Owner: {self.owner}, Repo: {self.repo}, PR Number: {self.pr_number}")
+ except ValueError as e:
+ self.logger.error(f"Error parsing PR URL: {str(e)}")
+ except Exception as e:
+ self.logger.error(f"Unexpected error: {str(e)}")
- data = {
- 'event': 'COMMENT',
- 'body': original_suggestion or '',
- 'commit_id': self.pr.get('head', {}).get('sha', ''),
- 'comments': [{
- 'body': body,
- 'path': relevant_file,
- 'line': int(relevant_line_in_file)
- }]
+ def __set_repo_and_owner_from_issue(self):
+ """Extract owner and repo from the issue URL"""
+ try:
+ owner, repo, issue_number = self._parse_issue_url(self.issue_url)
+ self.owner = owner
+ self.repo = repo
+ self.issue_number = issue_number
+ self.logger.info(f"Owner: {self.owner}, Repo: {self.repo}, Issue Number: {self.issue_number}")
+ except ValueError as e:
+ self.logger.error(f"Error parsing issue URL: {str(e)}")
+ except Exception as e:
+ self.logger.error(f"Unexpected error: {str(e)}")
+
+ def get_pr_url(self) -> str:
+ return self.pr_url
+
+ def get_issue_url(self) -> str:
+ return self.issue_url
+
+ def publish_comment(self, comment: str, is_temporary: bool = False) -> Optional[dict]:
+ """Publish a comment to the pull request"""
+ if is_temporary and not get_settings().config.publish_output_progress:
+ get_logger().debug(f"Skipping publish_comment for temporary comment")
+ return None
+
+ if self.enabled_issue:
+ index = self.issue_number
+ elif self.enabled_pr:
+ index = self.pr_number
+ else:
+ self.logger.error("Neither PR nor issue URL provided.")
+ return None
+
+ comment = self.limit_output_characters(comment, self.max_comment_chars)
+ response = self.repo_api.create_comment(
+ owner=self.owner,
+ repo=self.repo,
+ index=index,
+ comment=comment
+ )
+
+ if not response:
+ self.logger.error("Failed to publish comment")
+ return None
+
+ if is_temporary:
+ self.temp_comments.append(comment)
+
+ comment_obj = {
+ "is_temporary": is_temporary,
+ "comment": comment,
+ "comment_id": response.id if isinstance(response, tuple) else response.id
}
- response = requests.post(url, headers=self.headers, json=data)
- response.raise_for_status()
+ self.comments_list.append(comment_obj)
+ self.logger.info("Comment published")
+ return comment_obj
- def publish_inline_comments(self, comments: list[dict]):
- for comment in comments:
- try:
- self.publish_inline_comment(
- comment['body'],
- comment['relevant_file'],
- comment['relevant_line_in_file'],
- comment.get('original_suggestion')
- )
- except Exception as e:
- get_logger().error(f"Failed to publish inline comment on {comment.get('relevant_file')}: {e}")
+ def edit_comment(self, comment, body: str):
+ body = self.limit_output_characters(body, self.max_comment_chars)
+ try:
+ self.repo_api.edit_comment(
+ owner=self.owner,
+ repo=self.repo,
+ comment_id=comment.get("comment_id") if isinstance(comment, dict) else comment.id,
+ comment=body
+ )
+ except ApiException as e:
+ self.logger.error(f"Error editing comment: {e}")
+ return None
+ except Exception as e:
+ self.logger.error(f"Unexpected error: {e}")
+ return None
- def publish_code_suggestions(self, code_suggestions: list) -> bool:
- overall_success = True
- for suggestion in code_suggestions:
- try:
- self.publish_inline_comment(
- suggestion['body'],
- suggestion['relevant_file'],
- suggestion['relevant_line_in_file'],
- suggestion.get('original_suggestion')
- )
- except Exception as e:
- overall_success = False
- get_logger().error(
- f"Failed to publish code suggestion on {suggestion.get('relevant_file')}: {e}")
- return overall_success
- def publish_labels(self, labels):
- url = f"{self.gitea_url}/api/v1/repos/{self.owner}/{self.repo}/issues/{self.pr_num}/labels"
- data = {'labels': labels}
- response = requests.post(url, headers=self.headers, json=data)
- response.raise_for_status()
+ def publish_inline_comment(self, body: str, relevant_file: str, relevant_line_in_file: str, original_suggestion=None):
+ """Publish an inline comment on a specific line"""
+ body = self.limit_output_characters(body, self.max_comment_chars)
+ position, absolute_position = find_line_number_of_relevant_line_in_file(self.diff_files,
+ relevant_file.strip('`'),
+ relevant_line_in_file,
+ )
+ if position == -1:
+ get_logger().info(f"Could not find position for {relevant_file} {relevant_line_in_file}")
+ subject_type = "FILE"
+ else:
+ subject_type = "LINE"
- def get_pr_labels(self, update=False):
- url = f"{self.gitea_url}/api/v1/repos/{self.owner}/{self.repo}/issues/{self.pr_num}/labels"
- response = requests.get(url, headers=self.headers)
- response.raise_for_status()
- return [label['name'] for label in response.json()]
+ path = relevant_file.strip()
+ payload = dict(body=body, path=path, old_position=position, new_position=absolute_position) if subject_type == "LINE" else {}
+ self.publish_inline_comments([payload])
- def get_issue_comments(self):
- url = f"{self.gitea_url}/api/v1/repos/{self.owner}/{self.repo}/issues/{self.pr_num}/comments"
- response = requests.get(url, headers=self.headers)
- response.raise_for_status()
- return response.json()
- def remove_initial_comment(self):
- # Implementation depends on how you track the initial comment
- pass
+ def publish_inline_comments(self, comments: List[Dict[str, Any]], body: str = "Inline comment") -> None:
+ response = self.repo_api.create_inline_comment(
+ owner=self.owner,
+ repo=self.repo,
+ pr_number=self.pr_number if self.enabled_pr else self.issue_number,
+ body=body,
+ commit_id=self.last_commit.sha if self.last_commit else "",
+ comments=comments
+ )
- def remove_comment(self, comment):
- url = f"{self.gitea_url}/api/v1/repos/{self.owner}/{self.repo}/issues/comments/{comment['id']}"
- response = requests.delete(url, headers=self.headers)
- response.raise_for_status()
+ if not response:
+ self.logger.error("Failed to publish inline comment")
+ return None
+
+ self.logger.info("Inline comment published")
+
+ def publish_code_suggestions(self, suggestions: List[Dict[str, Any]]):
+ """Publish code suggestions"""
+ for suggestion in suggestions:
+ body = suggestion.get("body","")
+ if not body:
+ self.logger.error("No body provided for the suggestion")
+ continue
+
+ path = suggestion.get("relevant_file", "")
+ new_position = suggestion.get("relevant_lines_start", 0)
+ old_position = suggestion.get("relevant_lines_start", 0) if "original_suggestion" not in suggestion else suggestion["original_suggestion"].get("relevant_lines_start", 0)
+ title_body = suggestion["original_suggestion"].get("suggestion_content", "") if "original_suggestion" in suggestion else ""
+ payload = dict(body=body, path=path, old_position=old_position, new_position=new_position)
+ if title_body:
+ title_body = f"**Suggestion:** {title_body}"
+ self.publish_inline_comments([payload],title_body)
+ else:
+ self.publish_inline_comments([payload])
def add_eyes_reaction(self, issue_comment_id: int, disable_eyes: bool = False) -> Optional[int]:
- if disable_eyes:
- return None
- url = f"{self.gitea_url}/api/v1/repos/{self.owner}/{self.repo}/issues/comments/{issue_comment_id}/reactions"
- data = {'content': 'eyes'}
- response = requests.post(url, headers=self.headers, json=data)
- response.raise_for_status()
- return response.json()['id']
-
- def remove_reaction(self, issue_comment_id: int, reaction_id: int) -> bool:
- url = f"{self.gitea_url}/api/v1/repos/{self.owner}/{self.repo}/issues/comments/{issue_comment_id}/reactions/{reaction_id}"
- response = requests.delete(url, headers=self.headers)
- return response.status_code == 204
-
- def get_commit_messages(self):
- url = f"{self.gitea_url}/api/v1/repos/{self.owner}/{self.repo}/pulls/{self.pr_num}/commits"
- response = requests.get(url, headers=self.headers)
- response.raise_for_status()
- return [commit['commit']['message'] for commit in response.json()]
-
- def get_pr_branch(self):
- return self.pr['head']['ref']
-
- def get_user_id(self):
- return self.pr['user']['id']
-
- def get_pr_description_full(self) -> str:
- return self.pr['body'] or ''
-
- def get_git_repo_url(self, issues_or_pr_url: str) -> str:
+ """Add eyes reaction to a comment"""
try:
- parsed_url = urlparse(issues_or_pr_url)
- path_parts = parsed_url.path.strip('/').split('/')
- if len(path_parts) < 2:
- raise ValueError(f"Invalid URL format: {issues_or_pr_url}")
- return f"{parsed_url.scheme}://{parsed_url.netloc}/{path_parts[0]}/{path_parts[1]}.git"
+ if disable_eyes:
+ return None
+
+ comments = self.repo_api.list_all_comments(
+ owner=self.owner,
+ repo=self.repo,
+ index=self.pr_number if self.enabled_pr else self.issue_number
+ )
+
+ comment_ids = [comment.id for comment in comments]
+ if issue_comment_id not in comment_ids:
+ self.logger.error(f"Comment ID {issue_comment_id} not found. Available IDs: {comment_ids}")
+ return None
+
+ response = self.repo_api.add_reaction_comment(
+ owner=self.owner,
+ repo=self.repo,
+ comment_id=issue_comment_id,
+ reaction="eyes"
+ )
+
+ if not response:
+ self.logger.error("Failed to add eyes reaction")
+ return None
+
+ return response[0].id if isinstance(response, tuple) else response.id
+
+ except ApiException as e:
+ self.logger.error(f"Error adding eyes reaction: {e}")
+ return None
except Exception as e:
- get_logger().exception(f"Failed to get git repo URL from: {issues_or_pr_url}")
+ self.logger.error(f"Unexpected error: {e}")
+ return None
+
+ def remove_reaction(self, comment_id: int) -> None:
+ """Remove reaction from a comment"""
+ try:
+ response = self.repo_api.remove_reaction_comment(
+ owner=self.owner,
+ repo=self.repo,
+ comment_id=comment_id
+ )
+ if not response:
+ self.logger.error("Failed to remove reaction")
+ except ApiException as e:
+ self.logger.error(f"Error removing reaction: {e}")
+ except Exception as e:
+ self.logger.error(f"Unexpected error: {e}")
+
+ def get_commit_messages(self) -> str:
+ """Get commit messages for the PR"""
+ max_tokens = get_settings().get("CONFIG.MAX_COMMITS_TOKENS", None)
+ pr_commits = self.repo_api.get_pr_commits(
+ owner=self.owner,
+ repo=self.repo,
+ pr_number=self.pr_number
+ )
+
+ if not pr_commits:
+ self.logger.error("Failed to get commit messages")
return ""
- def get_canonical_url_parts(self, repo_git_url: str, desired_branch: str) -> Tuple[str, str]:
try:
- parsed_url = urlparse(repo_git_url)
- path_parts = parsed_url.path.strip('/').split('/')
- if len(path_parts) < 2:
- raise ValueError(f"Invalid git repo URL format: {repo_git_url}")
+ commit_messages = [commit["commit"]["message"] for commit in pr_commits if commit]
- repo_name = path_parts[1]
- if repo_name.endswith('.git'):
- repo_name = repo_name[:-4]
+ if not commit_messages:
+ self.logger.error("No commit messages found")
+ return ""
- prefix = f"{parsed_url.scheme}://{parsed_url.netloc}/{path_parts[0]}/{repo_name}/src/branch/{desired_branch}"
- suffix = ""
- return prefix, suffix
+ commit_message = "".join(commit_messages)
+ if max_tokens:
+ commit_message = clip_tokens(commit_message, max_tokens)
+
+ return commit_message
except Exception as e:
- get_logger().exception(f"Failed to get canonical URL parts from: {repo_git_url}")
- return ("", "")
+ self.logger.error(f"Error processing commit messages: {str(e)}")
+ return ""
- def get_languages(self) -> Dict[str, float]:
- """
- Get the languages used in the repository and their percentages.
- Returns a dictionary mapping language names to their percentage of use.
- """
- if not self.owner or not self.repo:
- return {}
- url = f"{self.gitea_url}/api/v1/repos/{self.owner}/{self.repo}/languages"
- response = requests.get(url, headers=self.headers)
- response.raise_for_status()
- return response.json()
+ def _get_file_content_from_base(self, filename: str) -> str:
+ return self.repo_api.get_file_content(
+ owner=self.owner,
+ repo=self.repo,
+ commit_sha=self.base_sha,
+ filepath=filename
+ )
+
+ def _get_file_content_from_latest_commit(self, filename: str) -> str:
+ return self.repo_api.get_file_content(
+ owner=self.owner,
+ repo=self.repo,
+ commit_sha=self.last_commit.sha,
+ filepath=filename
+ )
+
+ def get_diff_files(self) -> List[FilePatchInfo]:
+ """Get files that were modified in the PR"""
+ if self.diff_files:
+ return self.diff_files
+
+ invalid_files_names = []
+ counter_valid = 0
+ diff_files = []
+ for file in self.git_files:
+ filename = file.get("filename")
+ if not filename:
+ continue
+
+ if not is_valid_file(filename):
+ invalid_files_names.append(filename)
+ continue
+
+ counter_valid += 1
+ avoid_load = False
+ patch = self.file_diffs.get(filename,"")
+ head_file = ""
+ base_file = ""
+
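+ # Past MAX_FILES_ALLOWED_FULL valid files, avoid loading full file contents to limit context size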
+ if counter_valid >= MAX_FILES_ALLOWED_FULL and patch and not self.incremental.is_incremental:
+ avoid_load = True
+ if counter_valid == MAX_FILES_ALLOWED_FULL:
+ self.logger.info("Too many files in PR, will avoid loading full content for rest of files")
+
+ if avoid_load:
+ head_file = ""
+ else:
+ # Get the file content from the PR head
+ head_file = self.file_contents.get(filename,"")
+
+ if self.incremental.is_incremental and self.unreviewed_files_set:
+ base_file = self._get_file_content_from_latest_commit(filename)
+ self.unreviewed_files_set[filename] = patch
+ else:
+ if avoid_load:
+ base_file = ""
+ else:
+ base_file = self._get_file_content_from_base(filename)
+
+ num_plus_lines = file.get("additions",0)
+ num_minus_lines = file.get("deletions",0)
+ status = file.get("status","")
+
+ if status == 'added':
+ edit_type = EDIT_TYPE.ADDED
+ elif status == 'removed':
+ edit_type = EDIT_TYPE.DELETED
+ elif status == 'renamed':
+ edit_type = EDIT_TYPE.RENAMED
+ elif status == 'modified':
+ edit_type = EDIT_TYPE.MODIFIED
+ else:
+ self.logger.error(f"Unknown edit type: {status}")
+ edit_type = EDIT_TYPE.UNKNOWN
+
+ file_patch_info = FilePatchInfo(
+ base_file=base_file,
+ head_file=head_file,
+ patch=patch,
+ filename=filename,
+ num_minus_lines=num_minus_lines,
+ num_plus_lines=num_plus_lines,
+ edit_type=edit_type
+ )
+ diff_files.append(file_patch_info)
+
+ if invalid_files_names:
+ self.logger.info(f"Filtered out files with invalid extensions: {invalid_files_names}")
+
+ self.diff_files = diff_files
+ return diff_files
+
+ def get_line_link(self, relevant_file, relevant_line_start, relevant_line_end = None) -> str:
+ if relevant_line_start == -1:
+ link = f"{self.base_url}/{self.owner}/{self.repo}/src/branch/{self.get_pr_branch()}/{relevant_file}"
+ elif relevant_line_end:
+ link = f"{self.base_url}/{self.owner}/{self.repo}/src/branch/{self.get_pr_branch()}/{relevant_file}#L{relevant_line_start}-L{relevant_line_end}"
+ else:
+ link = f"{self.base_url}/{self.owner}/{self.repo}/src/branch/{self.get_pr_branch()}/{relevant_file}#L{relevant_line_start}"
+
+ self.logger.info(f"Generated link: {link}")
+ return link
+
+ def get_files(self) -> List[str]:
+ """Get the filenames of all files changed in the PR"""
+ return [file.get("filename","") for file in self.git_files]
+
+ def get_num_of_files(self) -> int:
+ """Get number of files changed in the PR"""
+ return len(self.git_files)
+
+ def get_issue_comments(self) -> List[Dict[str, Any]]:
+ """Get all comments in the PR"""
+ index = self.issue_number if self.enabled_issue else self.pr_number
+ comments = self.repo_api.list_all_comments(
+ owner=self.owner,
+ repo=self.repo,
+ index=index
+ )
+ if not comments:
+ self.logger.error("Failed to get comments")
+ return []
+
+ return comments
+
+ def get_languages(self) -> Dict[str, int]:
+ """Get programming languages used in the repository"""
+ languages = self.repo_api.get_languages(
+ owner=self.owner,
+ repo=self.repo
+ )
+
+ return languages
+
+ def get_pr_branch(self) -> str:
+ """Get the branch name of the PR"""
+ if not self.pr:
+ self.logger.error("Failed to get PR branch")
+ return ""
+
+ if not self.pr.head:
+ self.logger.error("PR head not found")
+ return ""
+
+ return self.pr.head.ref if self.pr.head.ref else ""
+
+ def get_pr_description_full(self) -> str:
+ """Get full PR description with metadata"""
+ if not self.pr:
+ self.logger.error("Failed to get PR description")
+ return ""
+
+ return self.pr.body if self.pr.body else ""
+
+ def get_pr_labels(self, update=False) -> List[str]:
+ """Get labels assigned to the PR"""
+ if not update:
+ if not self.pr.labels:
+ self.logger.error("Failed to get PR labels")
+ return []
+ return [label.name for label in self.pr.labels]
+
+ labels = self.repo_api.get_issue_labels(
+ owner=self.owner,
+ repo=self.repo,
+ issue_number=self.pr_number
+ )
+ if not labels:
+ self.logger.error("Failed to get PR labels")
+ return []
+
+ return [label.name for label in labels]
+
+ def get_repo_settings(self) -> str:
+ """Get repository settings"""
+ if not self.repo_settings:
+ self.logger.error("Repository settings not found")
+ return ""
+
+ response = self.repo_api.get_file_content(
+ owner=self.owner,
+ repo=self.repo,
+ commit_sha=self.sha,
+ filepath=self.repo_settings
+ )
+ if not response:
+ self.logger.error("Failed to get repository settings")
+ return ""
+
+ return response
+
+ def get_user_id(self) -> str:
+ """Get the ID of the authenticated user"""
+ return f"{self.pr.user.id}" if self.pr else ""
+
+ def is_supported(self, capability) -> bool:
+ """Check if the provider is supported"""
+ return True
+
+ def publish_description(self, pr_title: str, pr_body: str) -> None:
+ """Publish PR description"""
+ response = self.repo_api.edit_pull_request(
+ owner=self.owner,
+ repo=self.repo,
+ pr_number=self.pr_number if self.enabled_pr else self.issue_number,
+ title=pr_title,
+ body=pr_body
+ )
+
+ if not response:
+ self.logger.error("Failed to publish PR description")
+ return None
+
+ self.logger.info("PR description published successfully")
+ if self.enabled_pr:
+ self.pr = self.repo_api.get_pull_request(
+ owner=self.owner,
+ repo=self.repo,
+ pr_number=self.pr_number
+ )
+
+ def publish_labels(self, labels: List[int]) -> None:
+ """Publish labels to the PR"""
+ if not labels:
+ self.logger.error("No labels provided to publish")
+ return None
+
+ response = self.repo_api.add_labels(
+ owner=self.owner,
+ repo=self.repo,
+ issue_number=self.pr_number if self.enabled_pr else self.issue_number,
+ labels=labels
+ )
+
+ if response:
+ self.logger.info("Labels added successfully")
+
+ def remove_comment(self, comment) -> None:
+ """Remove a specific comment"""
+ if not comment:
+ return
+
+ try:
+ comment_id = comment.get("comment_id") if isinstance(comment, dict) else comment.id
+ if not comment_id:
+ self.logger.error("Comment ID not found")
+ return None
+ self.repo_api.remove_comment(
+ owner=self.owner,
+ repo=self.repo,
+ comment_id=comment_id
+ )
+
+ if self.comments_list and comment in self.comments_list:
+ self.comments_list.remove(comment)
+
+ self.logger.info(f"Comment removed successfully: {comment}")
+ except ApiException as e:
+ self.logger.error(f"Error removing comment: {e}")
+ raise e
+
+ def remove_initial_comment(self) -> None:
+ """Remove the initial comment"""
+ for comment in self.comments_list:
+ try:
+ if not comment.get("is_temporary"):
+ continue
+ self.remove_comment(comment)
+ except Exception as e:
+ self.logger.error(f"Error removing comment: {e}")
+ continue
+ self.logger.info(f"Removed initial comment: {comment.get('comment_id')}")
+
+
+class RepoApi(giteapy.RepositoryApi):
+ def __init__(self, client: giteapy.ApiClient):
+ self.repository = giteapy.RepositoryApi(client)
+ self.issue = giteapy.IssueApi(client)
+ self.logger = get_logger()
+ super().__init__(client)
+
+ def create_inline_comment(self, owner: str, repo: str, pr_number: int, body: str, commit_id: str, comments: List[Dict[str, Any]]) -> None:
+ body = {
+ "body": body,
+ "comments": comments,
+ "commit_id": commit_id,
+ }
+ return self.api_client.call_api(
+ '/repos/{owner}/{repo}/pulls/{pr_number}/reviews',
+ 'POST',
+ path_params={'owner': owner, 'repo': repo, 'pr_number': pr_number},
+ body=body,
+ response_type='Repository',
+ auth_settings=['AuthorizationHeaderToken']
+ )
+
+ def create_comment(self, owner: str, repo: str, index: int, comment: str):
+ body = {
+ "body": comment
+ }
+ return self.issue.issue_create_comment(
+ owner=owner,
+ repo=repo,
+ index=index,
+ body=body
+ )
+
+ def edit_comment(self, owner: str, repo: str, comment_id: int, comment: str):
+ body = {
+ "body": comment
+ }
+ return self.issue.issue_edit_comment(
+ owner=owner,
+ repo=repo,
+ id=comment_id,
+ body=body
+ )
+
+ def remove_comment(self, owner: str, repo: str, comment_id: int):
+ return self.issue.issue_delete_comment(
+ owner=owner,
+ repo=repo,
+ id=comment_id
+ )
+
+ def list_all_comments(self, owner: str, repo: str, index: int):
+ return self.issue.issue_get_comments(
+ owner=owner,
+ repo=repo,
+ index=index
+ )
+
+ def get_pull_request_diff(self, owner: str, repo: str, pr_number: int) -> str:
+ """Get the diff content of a pull request using direct API call"""
+ try:
+ token = self.api_client.configuration.api_key.get('Authorization', '').replace('token ', '')
+ url = f'/repos/{owner}/{repo}/pulls/{pr_number}.diff'
+ if token:
+ url = f'{url}?token={token}'
+
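+ # _preload_content=False returns the raw HTTP response object so the diff body can be read as plain text below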
+ response = self.api_client.call_api(
+ url,
+ 'GET',
+ path_params={},
+ response_type=None,
+ _return_http_data_only=False,
+ _preload_content=False
+ )
+
+ if hasattr(response, 'data'):
+ raw_data = response.data.read()
+ return raw_data.decode('utf-8')
+ elif isinstance(response, tuple):
+ raw_data = response[0].read()
+ return raw_data.decode('utf-8')
+ else:
+ error_msg = f"Unexpected response format received from API: {type(response)}"
+ self.logger.error(error_msg)
+ raise RuntimeError(error_msg)
+
+ except ApiException as e:
+ self.logger.error(f"Error getting diff: {str(e)}")
+ raise e
+ except Exception as e:
+ self.logger.error(f"Unexpected error: {str(e)}")
+ raise e
+
+ def get_pull_request(self, owner: str, repo: str, pr_number: int):
+ """Get pull request details including description"""
+ return self.repository.repo_get_pull_request(
+ owner=owner,
+ repo=repo,
+ index=pr_number
+ )
+
+ def edit_pull_request(self, owner: str, repo: str, pr_number: int, title: str, body: str):
+ """Edit pull request description"""
+ body = {
+ "body": body,
+ "title" : title
+ }
+ return self.repository.repo_edit_pull_request(
+ owner=owner,
+ repo=repo,
+ index=pr_number,
+ body=body
+ )
+
+ def get_change_file_pull_request(self, owner: str, repo: str, pr_number: int):
+ """Get changed files in the pull request"""
+ try:
+ token = self.api_client.configuration.api_key.get('Authorization', '').replace('token ', '')
+ url = f'/repos/{owner}/{repo}/pulls/{pr_number}/files'
+ if token:
+ url = f'{url}?token={token}'
+
+ response = self.api_client.call_api(
+ url,
+ 'GET',
+ path_params={},
+ response_type=None,
+ _return_http_data_only=False,
+ _preload_content=False
+ )
+
+ if hasattr(response, 'data'):
+ raw_data = response.data.read()
+ diff_content = raw_data.decode('utf-8')
+ return json.loads(diff_content) if isinstance(diff_content, str) else diff_content
+ elif isinstance(response, tuple):
+ raw_data = response[0].read()
+ diff_content = raw_data.decode('utf-8')
+ return json.loads(diff_content) if isinstance(diff_content, str) else diff_content
+
+ return []
+
+ except ApiException as e:
+ self.logger.error(f"Error getting changed files: {e}")
+ return []
+ except Exception as e:
+ self.logger.error(f"Unexpected error: {e}")
+ return []
+
+ def get_languages(self, owner: str, repo: str):
+ """Get programming languages used in the repository"""
+ try:
+ token = self.api_client.configuration.api_key.get('Authorization', '').replace('token ', '')
+ url = f'/repos/{owner}/{repo}/languages'
+ if token:
+ url = f'{url}?token={token}'
+
+ response = self.api_client.call_api(
+ url,
+ 'GET',
+ path_params={},
+ response_type=None,
+ _return_http_data_only=False,
+ _preload_content=False
+ )
+
+ if hasattr(response, 'data'):
+ raw_data = response.data.read()
+ return json.loads(raw_data.decode('utf-8'))
+ elif isinstance(response, tuple):
+ raw_data = response[0].read()
+ return json.loads(raw_data.decode('utf-8'))
- def get_repo_settings(self) -> Dict:
- """
- Get repository settings and configuration.
- Returns a dictionary containing repository settings.
- """
- if not self.owner or not self.repo:
return {}
- url = f"{self.gitea_url}/api/v1/repos/{self.owner}/{self.repo}"
- response = requests.get(url, headers=self.headers)
- response.raise_for_status()
- return response.json()
+
+ except ApiException as e:
+ self.logger.error(f"Error getting languages: {e}")
+ return {}
+ except Exception as e:
+ self.logger.error(f"Unexpected error: {e}")
+ return {}
+
+ def get_file_content(self, owner: str, repo: str, commit_sha: str, filepath: str) -> str:
+ """Get raw file content from a specific commit"""
+
+ try:
+ token = self.api_client.configuration.api_key.get('Authorization', '').replace('token ', '')
+ url = f'/repos/{owner}/{repo}/raw/{filepath}'
+ if token:
+ url = f'{url}?token={token}&ref={commit_sha}'
+
+ response = self.api_client.call_api(
+ url,
+ 'GET',
+ path_params={},
+ response_type=None,
+ _return_http_data_only=False,
+ _preload_content=False
+ )
+
+ if hasattr(response, 'data'):
+ raw_data = response.data.read()
+ return raw_data.decode('utf-8')
+ elif isinstance(response, tuple):
+ raw_data = response[0].read()
+ return raw_data.decode('utf-8')
+
+ return ""
+
+ except ApiException as e:
+ self.logger.error(f"Error getting file: {filepath}, content: {e}")
+ return ""
+ except Exception as e:
+ self.logger.error(f"Unexpected error: {e}")
+ return ""
+
+ def get_issue_labels(self, owner: str, repo: str, issue_number: int):
+ """Get labels assigned to the issue"""
+ return self.issue.issue_get_labels(
+ owner=owner,
+ repo=repo,
+ index=issue_number
+ )
+
+ def list_all_commits(self, owner: str, repo: str):
+ return self.repository.repo_get_all_commits(
+ owner=owner,
+ repo=repo
+ )
+
+ def add_reviewer(self, owner: str, repo: str, pr_number: int, reviewers: List[str]):
+ body = {
+ "reviewers": reviewers
+ }
+ return self.api_client.call_api(
+ '/repos/{owner}/{repo}/pulls/{pr_number}/requested_reviewers',
+ 'POST',
+ path_params={'owner': owner, 'repo': repo, 'pr_number': pr_number},
+ body=body,
+ response_type='Repository',
+ auth_settings=['AuthorizationHeaderToken']
+ )
+
+ def add_reaction_comment(self, owner: str, repo: str, comment_id: int, reaction: str):
+ body = {
+ "content": reaction
+ }
+ return self.api_client.call_api(
+ '/repos/{owner}/{repo}/issues/comments/{id}/reactions',
+ 'POST',
+ path_params={'owner': owner, 'repo': repo, 'id': comment_id},
+ body=body,
+ response_type='Repository',
+ auth_settings=['AuthorizationHeaderToken']
+ )
+
+ def remove_reaction_comment(self, owner: str, repo: str, comment_id: int):
+ return self.api_client.call_api(
+ '/repos/{owner}/{repo}/issues/comments/{id}/reactions',
+ 'DELETE',
+ path_params={'owner': owner, 'repo': repo, 'id': comment_id},
+ response_type='Repository',
+ auth_settings=['AuthorizationHeaderToken']
+ )
+
+ def add_labels(self, owner: str, repo: str, issue_number: int, labels: List[int]):
+ body = {
+ "labels": labels
+ }
+ return self.issue.issue_add_label(
+ owner=owner,
+ repo=repo,
+ index=issue_number,
+ body=body
+ )
+
+ def get_pr_commits(self, owner: str, repo: str, pr_number: int):
+ """Get all commits in a pull request"""
+ try:
+ token = self.api_client.configuration.api_key.get('Authorization', '').replace('token ', '')
+ url = f'/repos/{owner}/{repo}/pulls/{pr_number}/commits'
+ if token:
+ url = f'{url}?token={token}'
+
+ response = self.api_client.call_api(
+ url,
+ 'GET',
+ path_params={},
+ response_type=None,
+ _return_http_data_only=False,
+ _preload_content=False
+ )
+
+ if hasattr(response, 'data'):
+ raw_data = response.data.read()
+ commits_data = json.loads(raw_data.decode('utf-8'))
+ return commits_data
+ elif isinstance(response, tuple):
+ raw_data = response[0].read()
+ commits_data = json.loads(raw_data.decode('utf-8'))
+ return commits_data
+
+ return []
+
+ except ApiException as e:
+ self.logger.error(f"Error getting PR commits: {e}")
+ return []
+ except Exception as e:
+ self.logger.error(f"Unexpected error: {e}")
+ return []
diff --git a/pr_agent/secret_providers/__init__.py b/pr_agent/secret_providers/__init__.py
index c9faf480..204872e2 100644
--- a/pr_agent/secret_providers/__init__.py
+++ b/pr_agent/secret_providers/__init__.py
@@ -13,5 +13,12 @@ def get_secret_provider():
return GoogleCloudStorageSecretProvider()
except Exception as e:
raise ValueError(f"Failed to initialize google_cloud_storage secret provider {provider_id}") from e
+ elif provider_id == 'aws_secrets_manager':
+ try:
+ from pr_agent.secret_providers.aws_secrets_manager_provider import \
+ AWSSecretsManagerProvider
+ return AWSSecretsManagerProvider()
+ except Exception as e:
+ raise ValueError(f"Failed to initialize aws_secrets_manager secret provider {provider_id}") from e
else:
raise ValueError("Unknown SECRET_PROVIDER")
diff --git a/pr_agent/secret_providers/aws_secrets_manager_provider.py b/pr_agent/secret_providers/aws_secrets_manager_provider.py
new file mode 100644
index 00000000..599369db
--- /dev/null
+++ b/pr_agent/secret_providers/aws_secrets_manager_provider.py
@@ -0,0 +1,57 @@
+import json
+import boto3
+from botocore.exceptions import ClientError
+
+from pr_agent.config_loader import get_settings
+from pr_agent.log import get_logger
+from pr_agent.secret_providers.secret_provider import SecretProvider
+
+
+class AWSSecretsManagerProvider(SecretProvider):
+ def __init__(self):
+ try:
+ region_name = get_settings().get("aws_secrets_manager.region_name") or \
+ get_settings().get("aws.AWS_REGION_NAME")
+ if region_name:
+ self.client = boto3.client('secretsmanager', region_name=region_name)
+ else:
+ self.client = boto3.client('secretsmanager')
+
+ self.secret_arn = get_settings().get("aws_secrets_manager.secret_arn")
+ if not self.secret_arn:
+ raise ValueError("AWS Secrets Manager ARN is not configured")
+ except Exception as e:
+ get_logger().error(f"Failed to initialize AWS Secrets Manager Provider: {e}")
+ raise e
+
+ def get_secret(self, secret_name: str) -> str:
+ """
+ Retrieve individual secret by name (for webhook tokens)
+ """
+ try:
+ response = self.client.get_secret_value(SecretId=secret_name)
+ return response['SecretString']
+ except Exception as e:
+ get_logger().warning(f"Failed to get secret {secret_name} from AWS Secrets Manager: {e}")
+ return ""
+
+ def get_all_secrets(self) -> dict:
+ """
+ Retrieve all secrets for configuration override
+ """
+ try:
+ response = self.client.get_secret_value(SecretId=self.secret_arn)
+ return json.loads(response['SecretString'])
+ except Exception as e:
+ get_logger().error(f"Failed to get secrets from AWS Secrets Manager {self.secret_arn}: {e}")
+ return {}
+
+ def store_secret(self, secret_name: str, secret_value: str):
+ try:
+ self.client.put_secret_value(
+ SecretId=secret_name,
+ SecretString=secret_value
+ )
+ except Exception as e:
+ get_logger().error(f"Failed to store secret {secret_name} in AWS Secrets Manager: {e}")
+ raise e
diff --git a/pr_agent/servers/gitea_app.py b/pr_agent/servers/gitea_app.py
new file mode 100644
index 00000000..018a746d
--- /dev/null
+++ b/pr_agent/servers/gitea_app.py
@@ -0,0 +1,128 @@
+import asyncio
+import copy
+import os
+from typing import Any, Dict
+
+from fastapi import APIRouter, FastAPI, HTTPException, Request, Response
+from starlette.background import BackgroundTasks
+from starlette.middleware import Middleware
+from starlette_context import context
+from starlette_context.middleware import RawContextMiddleware
+
+from pr_agent.agent.pr_agent import PRAgent
+from pr_agent.config_loader import get_settings, global_settings
+from pr_agent.log import LoggingFormat, get_logger, setup_logger
+from pr_agent.servers.utils import verify_signature
+
+# Setup logging and router
+setup_logger(fmt=LoggingFormat.JSON, level=get_settings().get("CONFIG.LOG_LEVEL", "DEBUG"))
+router = APIRouter()
+
+@router.post("/api/v1/gitea_webhooks")
+async def handle_gitea_webhooks(background_tasks: BackgroundTasks, request: Request, response: Response):
+ """Handle incoming Gitea webhook requests"""
+ get_logger().debug("Received a Gitea webhook")
+
+ body = await get_body(request)
+
+ # Set context for the request
+ context["settings"] = copy.deepcopy(global_settings)
+ context["git_provider"] = {}
+
+ # Handle the webhook in background
+ background_tasks.add_task(handle_request, body, event=request.headers.get("X-Gitea-Event", None))
+ return {}
+
+async def get_body(request: Request):
+ """Parse and verify webhook request body"""
+ try:
+ body = await request.json()
+ except Exception as e:
+ get_logger().error("Error parsing request body", artifact={'error': e})
+ raise HTTPException(status_code=400, detail="Error parsing request body") from e
+
+
+ # Verify webhook signature
+ webhook_secret = getattr(get_settings().gitea, 'webhook_secret', None)
+ if webhook_secret:
+ body_bytes = await request.body()
+ signature_header = request.headers.get('x-gitea-signature', None)
+ if not signature_header:
+ get_logger().error("Missing signature header")
+ raise HTTPException(status_code=400, detail="Missing signature header")
+
+ try:
+ verify_signature(body_bytes, webhook_secret, f"sha256={signature_header}")
+ except Exception as ex:
+ get_logger().error(f"Invalid signature: {ex}")
+ raise HTTPException(status_code=401, detail="Invalid signature")
+
+ return body
+
+async def handle_request(body: Dict[str, Any], event: str):
+ """Process Gitea webhook events"""
+ action = body.get("action")
+ if not action:
+ get_logger().debug("No action found in request body")
+ return {}
+
+ agent = PRAgent()
+
+ # Handle different event types
+ if event == "pull_request":
+ if action in ["opened", "reopened", "synchronized"]:
+ await handle_pr_event(body, event, action, agent)
+ elif event == "issue_comment":
+ if action == "created":
+ await handle_comment_event(body, event, action, agent)
+
+ return {}
+
+async def handle_pr_event(body: Dict[str, Any], event: str, action: str, agent: PRAgent):
+ """Handle pull request events"""
+ pr = body.get("pull_request", {})
+ if not pr:
+ return
+
+ api_url = pr.get("url")
+ if not api_url:
+ return
+
+ # Handle PR based on action
+ if action in ["opened", "reopened"]:
+ commands = get_settings().get("gitea.pr_commands", [])
+ for command in commands:
+ await agent.handle_request(api_url, command)
+ elif action == "synchronized":
+ # Handle push to PR
+ await agent.handle_request(api_url, "/review --incremental")
+
+async def handle_comment_event(body: Dict[str, Any], event: str, action: str, agent: PRAgent):
+ """Handle comment events"""
+ comment = body.get("comment", {})
+ if not comment:
+ return
+
+ comment_body = comment.get("body", "")
+ if not comment_body or not comment_body.startswith("/"):
+ return
+
+ pr_url = body.get("pull_request", {}).get("url")
+ if not pr_url:
+ return
+
+ await agent.handle_request(pr_url, comment_body)
+
+# FastAPI app setup
+middleware = [Middleware(RawContextMiddleware)]
+app = FastAPI(middleware=middleware)
+app.include_router(router)
+
+def start():
+ """Start the Gitea webhook server"""
+ port = int(os.environ.get("PORT", "3000"))
+ import uvicorn
+ uvicorn.run(app, host="0.0.0.0", port=port)
+
+if __name__ == "__main__":
+ start()
diff --git a/pr_agent/servers/serverless.py b/pr_agent/servers/serverless.py
index a46eb80a..938be31b 100644
--- a/pr_agent/servers/serverless.py
+++ b/pr_agent/servers/serverless.py
@@ -5,6 +5,17 @@ from starlette_context.middleware import RawContextMiddleware
from pr_agent.servers.github_app import router
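+# Apply AWS Secrets Manager overrides at import time, before the FastAPI app is created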
+try:
+ from pr_agent.config_loader import apply_secrets_manager_config
+ apply_secrets_manager_config()
+except Exception as e:
+ try:
+ from pr_agent.log import get_logger
+ get_logger().debug(f"AWS Secrets Manager initialization failed, falling back to environment variables: {e}")
+ except Exception:
+ # Fail completely silently if log module is not available
+ pass
+
middleware = [Middleware(RawContextMiddleware)]
app = FastAPI(middleware=middleware)
app.include_router(router)
diff --git a/pr_agent/settings/.secrets_template.toml b/pr_agent/settings/.secrets_template.toml
index 17c5e8ee..350abe5c 100644
--- a/pr_agent/settings/.secrets_template.toml
+++ b/pr_agent/settings/.secrets_template.toml
@@ -68,6 +68,11 @@ webhook_secret = "" # Optional, may be commented out.
personal_access_token = ""
shared_secret = "" # webhook secret
+[gitea]
+# Gitea personal access token
+personal_access_token=""
+webhook_secret="" # webhook secret
+
[bitbucket]
# For Bitbucket authentication
auth_type = "bearer" # "bearer" or "basic"
@@ -116,4 +121,8 @@ api_base = ""
[aws]
AWS_ACCESS_KEY_ID = ""
AWS_SECRET_ACCESS_KEY = ""
-AWS_REGION_NAME = ""
\ No newline at end of file
+AWS_REGION_NAME = ""
+
+[aws_secrets_manager]
+secret_arn = "" # The ARN of the AWS Secrets Manager secret containing PR-Agent configuration
+region_name = "" # Optional: specific AWS region (defaults to AWS_REGION_NAME or Lambda region)
diff --git a/pr_agent/settings/configuration.toml b/pr_agent/settings/configuration.toml
index db728ae1..a93ea1f2 100644
--- a/pr_agent/settings/configuration.toml
+++ b/pr_agent/settings/configuration.toml
@@ -39,7 +39,7 @@ allow_dynamic_context=true
max_extra_lines_before_dynamic_context = 10 # will try to include up to 10 extra lines before the hunk in the patch, until we reach an enclosing function or class
patch_extra_lines_before = 5 # Number of extra lines (+3 default ones) to include before each hunk in the patch
patch_extra_lines_after = 1 # Number of extra lines (+3 default ones) to include after each hunk in the patch
-secret_provider=""
+secret_provider="" # "" (disabled), "google_cloud_storage", or "aws_secrets_manager" for secure secret management
cli_mode=false
ai_disclaimer_title="" # Pro feature, title for a collapsible disclaimer to AI outputs
ai_disclaimer="" # Pro feature, full text for the AI disclaimer
@@ -281,6 +281,15 @@ push_commands = [
"/review",
]
+[gitea_app]
+url = "https://gitea.com"
+handle_push_trigger = false
+pr_commands = [
+ "/describe",
+ "/review",
+ "/improve",
+]
+
[bitbucket_app]
pr_commands = [
"/describe --pr_description.final_update_message=false",
diff --git a/requirements.txt b/requirements.txt
index d1587f25..18f6e383 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -31,6 +31,7 @@ gunicorn==22.0.0
pytest-cov==5.0.0
pydantic==2.8.2
html2text==2024.2.26
+giteapy==1.0.8
# Uncomment the following lines to enable the 'similar issue' tool
# pinecone-client
# pinecone-datasets @ git+https://github.com/mrT23/pinecone-datasets.git@main
diff --git a/tests/e2e_tests/langchain_ai_handler.py b/tests/e2e_tests/langchain_ai_handler.py
new file mode 100644
index 00000000..d75c4292
--- /dev/null
+++ b/tests/e2e_tests/langchain_ai_handler.py
@@ -0,0 +1,90 @@
+import asyncio
+import os
+import time
+from pr_agent.algo.ai_handlers.langchain_ai_handler import LangChainOpenAIHandler
+from pr_agent.config_loader import get_settings
+
+def check_settings():
+ print('Checking settings...')
+ settings = get_settings()
+
+ # Check OpenAI settings
+ if not hasattr(settings, 'openai'):
+ print('OpenAI settings not found')
+ return False
+
+ if not hasattr(settings.openai, 'key'):
+ print('OpenAI API key not found')
+ return False
+
+ print('OpenAI API key found')
+ return True
+
+async def measure_performance(handler, num_requests=3):
+ print(f'\nRunning performance test with {num_requests} requests...')
+ start_time = time.time()
+
+ # Create multiple requests
+ tasks = [
+ handler.chat_completion(
+ model='gpt-3.5-turbo',
+ system='You are a helpful assistant',
+ user=f'Test message {i}',
+ temperature=0.2
+ ) for i in range(num_requests)
+ ]
+
+ # Execute requests concurrently
+ responses = await asyncio.gather(*tasks)
+
+ end_time = time.time()
+ total_time = end_time - start_time
+ avg_time = total_time / num_requests
+
+ print(f'Performance results:')
+ print(f'Total time: {total_time:.2f} seconds')
+ print(f'Average time per request: {avg_time:.2f} seconds')
+ print(f'Requests per second: {num_requests/total_time:.2f}')
+
+ return responses
+
+async def test():
+ print('Starting test...')
+
+ # Check settings first
+ if not check_settings():
+ print('Please set up your environment variables or configuration file')
+ print('Required: OPENAI_API_KEY')
+ return
+
+ try:
+ handler = LangChainOpenAIHandler()
+ print('Handler created')
+
+ # Basic functionality test
+ response = await handler.chat_completion(
+ model='gpt-3.5-turbo',
+ system='You are a helpful assistant',
+ user='Hello',
+ temperature=0.2,
+ img_path='test.jpg'
+ )
+ print('Response:', response)
+
+ # Performance test
+ await measure_performance(handler)
+
+ except Exception as e:
+ print('Error:', str(e))
+ print('Error type:', type(e))
+ print('Error details:', e.__dict__ if hasattr(e, '__dict__') else 'No additional details')
+
+if __name__ == '__main__':
+ print('Environment variables:')
+ print('OPENAI_API_KEY:', 'Set' if os.getenv('OPENAI_API_KEY') else 'Not set')
+ print('OPENAI_API_TYPE:', os.getenv('OPENAI_API_TYPE', 'Not set'))
+ print('OPENAI_API_BASE:', os.getenv('OPENAI_API_BASE', 'Not set'))
+
+ asyncio.run(test())
+
+
\ No newline at end of file
diff --git a/tests/unittest/test_aws_secrets_manager_provider.py b/tests/unittest/test_aws_secrets_manager_provider.py
new file mode 100644
index 00000000..e5972ac1
--- /dev/null
+++ b/tests/unittest/test_aws_secrets_manager_provider.py
@@ -0,0 +1,89 @@
+import json
+import pytest
+from unittest.mock import MagicMock, patch
+from botocore.exceptions import ClientError
+
+from pr_agent.secret_providers.aws_secrets_manager_provider import AWSSecretsManagerProvider
+
+
+class TestAWSSecretsManagerProvider:
+
+ def _provider(self):
+ """Create provider following existing pattern"""
+ with patch('pr_agent.secret_providers.aws_secrets_manager_provider.get_settings') as mock_get_settings, \
+ patch('pr_agent.secret_providers.aws_secrets_manager_provider.boto3.client') as mock_boto3_client:
+
+ settings = MagicMock()
+ settings.get.side_effect = lambda k, d=None: {
+ 'aws_secrets_manager.secret_arn': 'arn:aws:secretsmanager:us-east-1:123456789012:secret:test-secret',
+ 'aws_secrets_manager.region_name': 'us-east-1',
+ 'aws.AWS_REGION_NAME': 'us-east-1'
+ }.get(k, d)
+ settings.aws_secrets_manager.secret_arn = 'arn:aws:secretsmanager:us-east-1:123456789012:secret:test-secret'
+ mock_get_settings.return_value = settings
+
+ # Mock boto3 client
+ mock_client = MagicMock()
+ mock_boto3_client.return_value = mock_client
+
+ provider = AWSSecretsManagerProvider()
+ provider.client = mock_client # Set client directly for testing
+ return provider, mock_client
+
+ # Positive test cases
+ def test_get_secret_success(self):
+ provider, mock_client = self._provider()
+ mock_client.get_secret_value.return_value = {'SecretString': 'test-secret-value'}
+
+ result = provider.get_secret('test-secret-name')
+ assert result == 'test-secret-value'
+ mock_client.get_secret_value.assert_called_once_with(SecretId='test-secret-name')
+
+ def test_get_all_secrets_success(self):
+ provider, mock_client = self._provider()
+ secret_data = {'openai.key': 'sk-test', 'github.webhook_secret': 'webhook-secret'}
+ mock_client.get_secret_value.return_value = {'SecretString': json.dumps(secret_data)}
+
+ result = provider.get_all_secrets()
+ assert result == secret_data
+
+ # Negative test cases (following Google Cloud Storage pattern)
+ def test_get_secret_failure(self):
+ provider, mock_client = self._provider()
+ mock_client.get_secret_value.side_effect = Exception("AWS error")
+
+ result = provider.get_secret('nonexistent-secret')
+ assert result == "" # Confirm empty string is returned
+
+ def test_get_all_secrets_failure(self):
+ provider, mock_client = self._provider()
+ mock_client.get_secret_value.side_effect = Exception("AWS error")
+
+ result = provider.get_all_secrets()
+ assert result == {} # Confirm empty dictionary is returned
+
+ def test_store_secret_update_existing(self):
+ provider, mock_client = self._provider()
+ mock_client.put_secret_value.return_value = {}
+
+ provider.store_secret('test-secret', 'test-value')
+ mock_client.put_secret_value.assert_called_once_with(
+ SecretId='test-secret',
+ SecretString='test-value'
+ )
+
+ def test_init_failure_invalid_config(self):
+ with patch('pr_agent.secret_providers.aws_secrets_manager_provider.get_settings') as mock_get_settings:
+ settings = MagicMock()
+ settings.aws_secrets_manager.secret_arn = None # Configuration error
+ mock_get_settings.return_value = settings
+
+ with pytest.raises(Exception):
+ AWSSecretsManagerProvider()
+
+ def test_store_secret_failure(self):
+ provider, mock_client = self._provider()
+ mock_client.put_secret_value.side_effect = Exception("AWS error")
+
+ with pytest.raises(Exception):
+ provider.store_secret('test-secret', 'test-value')
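For orientation, these tests pin down the provider contract: `get_secret` returns an empty string on failure, `get_all_secrets` JSON-decodes the configured secret and returns an empty dict on failure, and `store_secret` calls `put_secret_value` and lets failures propagate. A rough sketch of a provider that would satisfy those assertions (not necessarily the actual `AWSSecretsManagerProvider` implementation; only standard boto3 calls are used):

```python
# Minimal sketch of the behavior the tests above assert; the real provider
# in pr_agent.secret_providers may differ in details.
import json

import boto3


class SketchSecretsManagerProvider:
    def __init__(self, secret_arn: str, region_name: str = "us-east-1"):
        if not secret_arn:
            raise ValueError("aws_secrets_manager.secret_arn is required")
        self.secret_arn = secret_arn
        self.client = boto3.client("secretsmanager", region_name=region_name)

    def get_secret(self, secret_name: str) -> str:
        # Tests expect an empty string (not an exception) on failure.
        try:
            return self.client.get_secret_value(SecretId=secret_name)["SecretString"]
        except Exception:
            return ""

    def get_all_secrets(self) -> dict:
        # Tests expect a JSON-decoded payload on success, an empty dict on failure.
        try:
            payload = self.client.get_secret_value(SecretId=self.secret_arn)["SecretString"]
            return json.loads(payload)
        except Exception:
            return {}

    def store_secret(self, secret_name: str, secret_value: str) -> None:
        # Tests expect put_secret_value to be used and failures to propagate.
        self.client.put_secret_value(SecretId=secret_name, SecretString=secret_value)
```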
diff --git a/tests/unittest/test_config_loader_secrets.py b/tests/unittest/test_config_loader_secrets.py
new file mode 100644
index 00000000..36752ef0
--- /dev/null
+++ b/tests/unittest/test_config_loader_secrets.py
@@ -0,0 +1,120 @@
+import pytest
+from unittest.mock import MagicMock, patch
+
+from pr_agent.config_loader import apply_secrets_manager_config, apply_secrets_to_config
+
+
+class TestConfigLoaderSecrets:
+
+ def test_apply_secrets_manager_config_success(self):
+ with patch('pr_agent.secret_providers.get_secret_provider') as mock_get_provider, \
+ patch('pr_agent.config_loader.apply_secrets_to_config') as mock_apply_secrets, \
+ patch('pr_agent.config_loader.get_settings') as mock_get_settings:
+
+ # Mock secret provider
+ mock_provider = MagicMock()
+ mock_provider.get_all_secrets.return_value = {'openai.key': 'sk-test'}
+ mock_get_provider.return_value = mock_provider
+
+ # Mock settings
+ settings = MagicMock()
+ settings.get.return_value = "aws_secrets_manager"
+ mock_get_settings.return_value = settings
+
+ apply_secrets_manager_config()
+
+ mock_apply_secrets.assert_called_once_with({'openai.key': 'sk-test'})
+
+ def test_apply_secrets_manager_config_no_provider(self):
+ with patch('pr_agent.secret_providers.get_secret_provider') as mock_get_provider:
+ mock_get_provider.return_value = None
+
+ # Confirm no exception is raised
+ apply_secrets_manager_config()
+
+ def test_apply_secrets_manager_config_not_aws(self):
+ with patch('pr_agent.secret_providers.get_secret_provider') as mock_get_provider, \
+ patch('pr_agent.config_loader.get_settings') as mock_get_settings:
+
+ # Mock Google Cloud Storage provider
+ mock_provider = MagicMock()
+ mock_get_provider.return_value = mock_provider
+
+ # Mock settings (Google Cloud Storage)
+ settings = MagicMock()
+ settings.get.return_value = "google_cloud_storage"
+ mock_get_settings.return_value = settings
+
+ # Confirm execution is skipped for non-AWS Secrets Manager
+ apply_secrets_manager_config()
+
+ # Confirm get_all_secrets is not called
+ mock_provider.get_all_secrets.assert_not_called()
+
+ def test_apply_secrets_to_config_nested_keys(self):
+ with patch('pr_agent.config_loader.get_settings') as mock_get_settings:
+ settings = MagicMock()
+ settings.get.return_value = None # No existing value
+ settings.set = MagicMock()
+ mock_get_settings.return_value = settings
+
+ secrets = {
+ 'openai.key': 'sk-test',
+ 'github.webhook_secret': 'webhook-secret'
+ }
+
+ apply_secrets_to_config(secrets)
+
+ # Confirm settings are applied correctly
+ settings.set.assert_any_call('OPENAI.KEY', 'sk-test')
+ settings.set.assert_any_call('GITHUB.WEBHOOK_SECRET', 'webhook-secret')
+
+ def test_apply_secrets_to_config_existing_value_preserved(self):
+ with patch('pr_agent.config_loader.get_settings') as mock_get_settings:
+ settings = MagicMock()
+ settings.get.return_value = "existing-value" # Existing value present
+ settings.set = MagicMock()
+ mock_get_settings.return_value = settings
+
+ secrets = {'openai.key': 'sk-test'}
+
+ apply_secrets_to_config(secrets)
+
+ # Confirm settings are not overridden when existing value present
+ settings.set.assert_not_called()
+
+ def test_apply_secrets_to_config_single_key(self):
+ with patch('pr_agent.config_loader.get_settings') as mock_get_settings:
+ settings = MagicMock()
+ settings.get.return_value = None
+ settings.set = MagicMock()
+ mock_get_settings.return_value = settings
+
+ secrets = {'simple_key': 'simple_value'}
+
+ apply_secrets_to_config(secrets)
+
+ # Confirm non-dot notation keys are ignored
+ settings.set.assert_not_called()
+
+ def test_apply_secrets_to_config_multiple_dots(self):
+ with patch('pr_agent.config_loader.get_settings') as mock_get_settings:
+ settings = MagicMock()
+ settings.get.return_value = None
+ settings.set = MagicMock()
+ mock_get_settings.return_value = settings
+
+ secrets = {'section.subsection.key': 'value'}
+
+ apply_secrets_to_config(secrets)
+
+ # Confirm keys with multiple dots are ignored
+ settings.set.assert_not_called()
+
+ def test_apply_secrets_manager_config_exception_handling(self):
+ with patch('pr_agent.secret_providers.get_secret_provider') as mock_get_provider:
+ mock_get_provider.side_effect = Exception("Provider error")
+
+ # Confirm processing continues even when exception occurs
+ apply_secrets_manager_config() # Confirm no exception is raised
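The tests above encode a simple mapping rule for `apply_secrets_to_config`: only keys with exactly one dot are applied, as an uppercased `SECTION.KEY` setting, and existing values are never overridden. A minimal sketch of that rule, using a plain dict in place of the real `get_settings()` object (an assumption made here for self-containment):

```python
# Sketch of the key-mapping rule asserted above; the real implementation
# works against get_settings() rather than a plain dict.
def apply_secrets_to_config_sketch(secrets: dict, settings: dict) -> None:
    for key, value in secrets.items():
        parts = key.split(".")
        if len(parts) != 2:
            # 'simple_key' and 'section.subsection.key' are ignored
            continue
        target = f"{parts[0].upper()}.{parts[1].upper()}"
        if settings.get(target) is None:  # never override an existing value
            settings[target] = value


settings = {"OPENAI.KEY": "existing-value"}
apply_secrets_to_config_sketch(
    {"openai.key": "sk-test", "github.webhook_secret": "webhook-secret", "simple_key": "x"},
    settings,
)
assert settings == {"OPENAI.KEY": "existing-value", "GITHUB.WEBHOOK_SECRET": "webhook-secret"}
```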
diff --git a/tests/unittest/test_convert_to_markdown.py b/tests/unittest/test_convert_to_markdown.py
index 483787aa..84e01942 100644
--- a/tests/unittest/test_convert_to_markdown.py
+++ b/tests/unittest/test_convert_to_markdown.py
@@ -1,4 +1,7 @@
# Generated by CodiumAI
+import textwrap
+from unittest.mock import Mock
+
from pr_agent.algo.utils import PRReviewHeader, convert_to_markdown_v2
from pr_agent.tools.pr_description import insert_br_after_x_chars
@@ -48,9 +51,174 @@ class TestConvertToMarkdown:
input_data = {'review': {
'estimated_effort_to_review_[1-5]': '1, because the changes are minimal and straightforward, focusing on a single functionality addition.\n',
'relevant_tests': 'No\n', 'possible_issues': 'No\n', 'security_concerns': 'No\n'}}
- expected_output = f'{PRReviewHeader.REGULAR.value} 🔍\n\nHere are some key observations to aid the review process:\n\n\n⏱️ Estimated effort to review: 1 🔵⚪⚪⚪⚪ |\n🧪 No relevant tests |\n Possible issues: No\n |\n🔒 No security concerns identified |\n'
+
+ expected_output = textwrap.dedent(f"""\
+ {PRReviewHeader.REGULAR.value} 🔍
+
+ Here are some key observations to aid the review process:
+
+
+ ⏱️ Estimated effort to review: 1 🔵⚪⚪⚪⚪ |
+ 🧪 No relevant tests |
+ Possible issues: No
+ |
+ 🔒 No security concerns identified |
+
+ """)
+
+ assert convert_to_markdown_v2(input_data).strip() == expected_output.strip()
+
+ def test_simple_dictionary_input_without_gfm_supported(self):
+ input_data = {'review': {
+ 'estimated_effort_to_review_[1-5]': '1, because the changes are minimal and straightforward, focusing on a single functionality addition.\n',
+ 'relevant_tests': 'No\n', 'possible_issues': 'No\n', 'security_concerns': 'No\n'}}
+
+ expected_output = textwrap.dedent("""\
+ ## PR Reviewer Guide 🔍
+
+ Here are some key observations to aid the review process:
+
+ ### ⏱️ Estimated effort to review: 1 🔵⚪⚪⚪⚪
+
+ ### 🧪 No relevant tests
+
+ ### Possible issues: No
+
+ ### 🔒 No security concerns identified
+ """)
+
+ assert convert_to_markdown_v2(input_data, gfm_supported=False).strip() == expected_output.strip()
+
+ def test_key_issues_to_review(self):
+ input_data = {'review': {
+ 'key_issues_to_review': [
+ {
+ 'relevant_file' : 'src/utils.py',
+ 'issue_header' : 'Code Smell',
+ 'issue_content' : 'The function is too long and complex.',
+ 'start_line': 30,
+ 'end_line': 50,
+ }
+ ]
+ }}
+ mock_git_provider = Mock()
+ reference_link = 'https://github.com/qodo/pr-agent/pull/1/files#diff-hashvalue-R174'
+ mock_git_provider.get_line_link.return_value = reference_link
+
+ expected_output = textwrap.dedent(f"""\
+ ## PR Reviewer Guide 🔍
+
+ Here are some key observations to aid the review process:
+
+
+ ⚡ Recommended focus areas for review
+
+ Code Smell The function is too long and complex.
+
+ |
+
+ """)
+
+ assert convert_to_markdown_v2(input_data, git_provider=mock_git_provider).strip() == expected_output.strip()
+ mock_git_provider.get_line_link.assert_called_with('src/utils.py', 30, 50)
+
+ def test_ticket_compliance(self):
+ input_data = {'review': {
+ 'ticket_compliance_check': [
+ {
+ 'ticket_url': 'https://example.com/ticket/123',
+ 'ticket_requirements': '- Requirement 1\n- Requirement 2\n',
+ 'fully_compliant_requirements': '- Requirement 1\n- Requirement 2\n',
+ 'not_compliant_requirements': '',
+ 'requires_further_human_verification': '',
+ }
+ ]
+ }}
+
+ expected_output = textwrap.dedent("""\
+ ## PR Reviewer Guide 🔍
+
+ Here are some key observations to aid the review process:
+
+
+
+
+ **🎫 Ticket compliance analysis ✅**
+
+
+
+ **[123](https://example.com/ticket/123) - Fully compliant**
+
+ Compliant requirements:
+
+ - Requirement 1
+ - Requirement 2
+
+
+
+ |
+
+ """)
+
+ assert convert_to_markdown_v2(input_data).strip() == expected_output.strip()
+
+ def test_can_be_split(self):
+ input_data = {'review': {
+ 'can_be_split': [
+ {
+ 'relevant_files': [
+ 'src/file1.py',
+ 'src/file2.py'
+ ],
+ 'title': 'Refactoring',
+ },
+ {
+ 'relevant_files': [
+ 'src/file3.py'
+ ],
+ 'title': 'Bug Fix',
+ }
+ ]
+ }
+ }
+
+ expected_output = textwrap.dedent("""\
+ ## PR Reviewer Guide 🔍
+
+ Here are some key observations to aid the review process:
+
+
+ 🔀 Multiple PR themes
+
+
+ Sub-PR theme: Refactoring
+
+ ___
+
+ Relevant files:
+
+ - src/file1.py
+ - src/file2.py
+ ___
+
+
+
+
+ Sub-PR theme: Bug Fix
+
+ ___
+
+ Relevant files:
+
+ - src/file3.py
+ ___
+
+
+
+ |
+
+ """)
assert convert_to_markdown_v2(input_data).strip() == expected_output.strip()
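A brief aside on the test style introduced in this file: expected outputs are written as `textwrap.dedent("""\ ... """)` blocks and compared after `.strip()`, so multi-line strings can sit indented alongside the test code. A tiny standalone illustration of that idiom:

```python
# The trailing backslash drops the first newline and dedent removes the
# common indentation, so the expected string reads cleanly in the test body.
import textwrap

expected = textwrap.dedent("""\
    ## PR Reviewer Guide

    Here are some key observations to aid the review process:
    """)
assert expected.strip().startswith("## PR Reviewer Guide")
```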
diff --git a/tests/unittest/test_gitea_provider.py b/tests/unittest/test_gitea_provider.py
index d88de0e0..95f7ec21 100644
--- a/tests/unittest/test_gitea_provider.py
+++ b/tests/unittest/test_gitea_provider.py
@@ -1,126 +1,126 @@
-from unittest.mock import MagicMock, patch
-
-import pytest
-
-from pr_agent.algo.types import EDIT_TYPE
-from pr_agent.git_providers.gitea_provider import GiteaProvider
-
-
-class TestGiteaProvider:
- """Unit-tests for GiteaProvider following project style (explicit object construction, minimal patching)."""
-
- def _provider(self):
- """Create provider instance with patched settings and avoid real HTTP calls."""
- with patch('pr_agent.git_providers.gitea_provider.get_settings') as mock_get_settings, \
- patch('requests.get') as mock_get:
- settings = MagicMock()
- settings.get.side_effect = lambda k, d=None: {
- 'GITEA.URL': 'https://gitea.example.com',
- 'GITEA.TOKEN': 'test-token'
- }.get(k, d)
- mock_get_settings.return_value = settings
- # Stub the PR fetch triggered during provider initialization
- pr_resp = MagicMock()
- pr_resp.json.return_value = {
- 'title': 'stub',
- 'body': 'stub',
- 'head': {'ref': 'main'},
- 'user': {'id': 1}
- }
- pr_resp.raise_for_status = MagicMock()
- mock_get.return_value = pr_resp
- return GiteaProvider('https://gitea.example.com/owner/repo/pulls/123')
-
- # ---------------- URL parsing ----------------
- def test_parse_pr_url_valid(self):
- owner, repo, pr_num = GiteaProvider._parse_pr_url('https://gitea.example.com/owner/repo/pulls/123')
- assert (owner, repo, pr_num) == ('owner', 'repo', '123')
-
- def test_parse_pr_url_invalid(self):
- with pytest.raises(ValueError):
- GiteaProvider._parse_pr_url('https://gitea.example.com/owner/repo')
-
- # ---------------- simple getters ----------------
- def test_get_files(self):
- provider = self._provider()
- mock_resp = MagicMock()
- mock_resp.json.return_value = [{'filename': 'a.txt'}, {'filename': 'b.txt'}]
- mock_resp.raise_for_status = MagicMock()
- with patch('requests.get', return_value=mock_resp) as mock_get:
- assert provider.get_files() == ['a.txt', 'b.txt']
- mock_get.assert_called_once()
-
- def test_get_diff_files(self):
- provider = self._provider()
- mock_resp = MagicMock()
- mock_resp.json.return_value = [
- {'filename': 'f1', 'previous_filename': 'old_f1', 'status': 'renamed', 'patch': ''},
- {'filename': 'f2', 'status': 'added', 'patch': ''},
- {'filename': 'f3', 'status': 'deleted', 'patch': ''},
- {'filename': 'f4', 'status': 'modified', 'patch': ''}
- ]
- mock_resp.raise_for_status = MagicMock()
- with patch('requests.get', return_value=mock_resp):
- res = provider.get_diff_files()
- assert [f.edit_type for f in res] == [EDIT_TYPE.RENAMED, EDIT_TYPE.ADDED, EDIT_TYPE.DELETED,
- EDIT_TYPE.MODIFIED]
-
- # ---------------- publishing methods ----------------
- def test_publish_description(self):
- provider = self._provider()
- mock_resp = MagicMock();
- mock_resp.raise_for_status = MagicMock()
- with patch('requests.patch', return_value=mock_resp) as mock_patch:
- provider.publish_description('t', 'b');
- mock_patch.assert_called_once()
-
- def test_publish_comment(self):
- provider = self._provider()
- mock_resp = MagicMock();
- mock_resp.raise_for_status = MagicMock()
- with patch('requests.post', return_value=mock_resp) as mock_post:
- provider.publish_comment('c');
- mock_post.assert_called_once()
-
- def test_publish_inline_comment(self):
- provider = self._provider()
- mock_resp = MagicMock();
- mock_resp.raise_for_status = MagicMock()
- with patch('requests.post', return_value=mock_resp) as mock_post:
- provider.publish_inline_comment('body', 'file', '10');
- mock_post.assert_called_once()
-
- # ---------------- labels & reactions ----------------
- def test_get_pr_labels(self):
- provider = self._provider()
- mock_resp = MagicMock();
- mock_resp.raise_for_status = MagicMock();
- mock_resp.json.return_value = [{'name': 'l1'}]
- with patch('requests.get', return_value=mock_resp):
- assert provider.get_pr_labels() == ['l1']
-
- def test_add_eyes_reaction(self):
- provider = self._provider()
- mock_resp = MagicMock();
- mock_resp.raise_for_status = MagicMock();
- mock_resp.json.return_value = {'id': 7}
- with patch('requests.post', return_value=mock_resp):
- assert provider.add_eyes_reaction(1) == 7
-
- # ---------------- commit messages & url helpers ----------------
- def test_get_commit_messages(self):
- provider = self._provider()
- mock_resp = MagicMock();
- mock_resp.raise_for_status = MagicMock()
- mock_resp.json.return_value = [
- {'commit': {'message': 'm1'}}, {'commit': {'message': 'm2'}}]
- with patch('requests.get', return_value=mock_resp):
- assert provider.get_commit_messages() == ['m1', 'm2']
-
- def test_git_url_helpers(self):
- provider = self._provider()
- issues_url = 'https://gitea.example.com/owner/repo/pulls/3'
- assert provider.get_git_repo_url(issues_url) == 'https://gitea.example.com/owner/repo.git'
- prefix, suffix = provider.get_canonical_url_parts('https://gitea.example.com/owner/repo.git', 'dev')
- assert prefix == 'https://gitea.example.com/owner/repo/src/branch/dev'
- assert suffix == ''
+# from unittest.mock import MagicMock, patch
+#
+# import pytest
+#
+# from pr_agent.algo.types import EDIT_TYPE
+# from pr_agent.git_providers.gitea_provider import GiteaProvider
+#
+#
+# class TestGiteaProvider:
+# """Unit-tests for GiteaProvider following project style (explicit object construction, minimal patching)."""
+#
+# def _provider(self):
+# """Create provider instance with patched settings and avoid real HTTP calls."""
+# with patch('pr_agent.git_providers.gitea_provider.get_settings') as mock_get_settings, \
+# patch('requests.get') as mock_get:
+# settings = MagicMock()
+# settings.get.side_effect = lambda k, d=None: {
+# 'GITEA.URL': 'https://gitea.example.com',
+# 'GITEA.PERSONAL_ACCESS_TOKEN': 'test-token'
+# }.get(k, d)
+# mock_get_settings.return_value = settings
+# # Stub the PR fetch triggered during provider initialization
+# pr_resp = MagicMock()
+# pr_resp.json.return_value = {
+# 'title': 'stub',
+# 'body': 'stub',
+# 'head': {'ref': 'main'},
+# 'user': {'id': 1}
+# }
+# pr_resp.raise_for_status = MagicMock()
+# mock_get.return_value = pr_resp
+# return GiteaProvider('https://gitea.example.com/owner/repo/pulls/123')
+#
+# # ---------------- URL parsing ----------------
+# def test_parse_pr_url_valid(self):
+# owner, repo, pr_num = self._provider()._parse_pr_url('https://gitea.example.com/owner/repo/pulls/123')
+# assert (owner, repo, pr_num) == ('owner', 'repo', '123')
+#
+# def test_parse_pr_url_invalid(self):
+# with pytest.raises(ValueError):
+# GiteaProvider._parse_pr_url('https://gitea.example.com/owner/repo')
+#
+# # ---------------- simple getters ----------------
+# def test_get_files(self):
+# provider = self._provider()
+# mock_resp = MagicMock()
+# mock_resp.json.return_value = [{'filename': 'a.txt'}, {'filename': 'b.txt'}]
+# mock_resp.raise_for_status = MagicMock()
+# with patch('requests.get', return_value=mock_resp) as mock_get:
+# assert provider.get_files() == ['a.txt', 'b.txt']
+# mock_get.assert_called_once()
+#
+# def test_get_diff_files(self):
+# provider = self._provider()
+# mock_resp = MagicMock()
+# mock_resp.json.return_value = [
+# {'filename': 'f1', 'previous_filename': 'old_f1', 'status': 'renamed', 'patch': ''},
+# {'filename': 'f2', 'status': 'added', 'patch': ''},
+# {'filename': 'f3', 'status': 'deleted', 'patch': ''},
+# {'filename': 'f4', 'status': 'modified', 'patch': ''}
+# ]
+# mock_resp.raise_for_status = MagicMock()
+# with patch('requests.get', return_value=mock_resp):
+# res = provider.get_diff_files()
+# assert [f.edit_type for f in res] == [EDIT_TYPE.RENAMED, EDIT_TYPE.ADDED, EDIT_TYPE.DELETED,
+# EDIT_TYPE.MODIFIED]
+#
+# # ---------------- publishing methods ----------------
+# def test_publish_description(self):
+# provider = self._provider()
+# mock_resp = MagicMock();
+# mock_resp.raise_for_status = MagicMock()
+# with patch('requests.patch', return_value=mock_resp) as mock_patch:
+# provider.publish_description('t', 'b');
+# mock_patch.assert_called_once()
+#
+# def test_publish_comment(self):
+# provider = self._provider()
+# mock_resp = MagicMock();
+# mock_resp.raise_for_status = MagicMock()
+# with patch('requests.post', return_value=mock_resp) as mock_post:
+# provider.publish_comment('c');
+# mock_post.assert_called_once()
+#
+# def test_publish_inline_comment(self):
+# provider = self._provider()
+# mock_resp = MagicMock();
+# mock_resp.raise_for_status = MagicMock()
+# with patch('requests.post', return_value=mock_resp) as mock_post:
+# provider.publish_inline_comment('body', 'file', '10');
+# mock_post.assert_called_once()
+#
+# # ---------------- labels & reactions ----------------
+# def test_get_pr_labels(self):
+# provider = self._provider()
+# mock_resp = MagicMock();
+# mock_resp.raise_for_status = MagicMock();
+# mock_resp.json.return_value = [{'name': 'l1'}]
+# with patch('requests.get', return_value=mock_resp):
+# assert provider.get_pr_labels() == ['l1']
+#
+# def test_add_eyes_reaction(self):
+# provider = self._provider()
+# mock_resp = MagicMock();
+# mock_resp.raise_for_status = MagicMock();
+# mock_resp.json.return_value = {'id': 7}
+# with patch('requests.post', return_value=mock_resp):
+# assert provider.add_eyes_reaction(1) == 7
+#
+# # ---------------- commit messages & url helpers ----------------
+# def test_get_commit_messages(self):
+# provider = self._provider()
+# mock_resp = MagicMock();
+# mock_resp.raise_for_status = MagicMock()
+# mock_resp.json.return_value = [
+# {'commit': {'message': 'm1'}}, {'commit': {'message': 'm2'}}]
+# with patch('requests.get', return_value=mock_resp):
+# assert provider.get_commit_messages() == ['m1', 'm2']
+#
+# def test_git_url_helpers(self):
+# provider = self._provider()
+# issues_url = 'https://gitea.example.com/owner/repo/pulls/3'
+# assert provider.get_git_repo_url(issues_url) == 'https://gitea.example.com/owner/repo.git'
+# prefix, suffix = provider.get_canonical_url_parts('https://gitea.example.com/owner/repo.git', 'dev')
+# assert prefix == 'https://gitea.example.com/owner/repo/src/branch/dev'
+# assert suffix == ''
diff --git a/tests/unittest/test_secret_provider_factory.py b/tests/unittest/test_secret_provider_factory.py
new file mode 100644
index 00000000..98a1bfed
--- /dev/null
+++ b/tests/unittest/test_secret_provider_factory.py
@@ -0,0 +1,69 @@
+import pytest
+from unittest.mock import MagicMock, patch
+
+from pr_agent.secret_providers import get_secret_provider
+
+
+class TestSecretProviderFactory:
+
+ def test_get_secret_provider_none_when_not_configured(self):
+ with patch('pr_agent.secret_providers.get_settings') as mock_get_settings:
+ settings = MagicMock()
+ settings.get.return_value = None
+ mock_get_settings.return_value = settings
+
+ result = get_secret_provider()
+ assert result is None
+
+ def test_get_secret_provider_google_cloud_storage(self):
+ with patch('pr_agent.secret_providers.get_settings') as mock_get_settings:
+ settings = MagicMock()
+ settings.get.return_value = "google_cloud_storage"
+ settings.config.secret_provider = "google_cloud_storage"
+ mock_get_settings.return_value = settings
+
+ with patch('pr_agent.secret_providers.google_cloud_storage_secret_provider.GoogleCloudStorageSecretProvider') as MockProvider:
+ mock_instance = MagicMock()
+ MockProvider.return_value = mock_instance
+
+ result = get_secret_provider()
+ assert result is mock_instance
+ MockProvider.assert_called_once()
+
+ def test_get_secret_provider_aws_secrets_manager(self):
+ with patch('pr_agent.secret_providers.get_settings') as mock_get_settings:
+ settings = MagicMock()
+ settings.get.return_value = "aws_secrets_manager"
+ settings.config.secret_provider = "aws_secrets_manager"
+ mock_get_settings.return_value = settings
+
+ with patch('pr_agent.secret_providers.aws_secrets_manager_provider.AWSSecretsManagerProvider') as MockProvider:
+ mock_instance = MagicMock()
+ MockProvider.return_value = mock_instance
+
+ result = get_secret_provider()
+ assert result is mock_instance
+ MockProvider.assert_called_once()
+
+ def test_get_secret_provider_unknown_provider(self):
+ with patch('pr_agent.secret_providers.get_settings') as mock_get_settings:
+ settings = MagicMock()
+ settings.get.return_value = "unknown_provider"
+ settings.config.secret_provider = "unknown_provider"
+ mock_get_settings.return_value = settings
+
+ with pytest.raises(ValueError, match="Unknown SECRET_PROVIDER"):
+ get_secret_provider()
+
+ def test_get_secret_provider_initialization_error(self):
+ with patch('pr_agent.secret_providers.get_settings') as mock_get_settings:
+ settings = MagicMock()
+ settings.get.return_value = "aws_secrets_manager"
+ settings.config.secret_provider = "aws_secrets_manager"
+ mock_get_settings.return_value = settings
+
+ with patch('pr_agent.secret_providers.aws_secrets_manager_provider.AWSSecretsManagerProvider') as MockProvider:
+ MockProvider.side_effect = Exception("Initialization failed")
+
+ with pytest.raises(ValueError, match="Failed to initialize aws_secrets_manager secret provider"):
+ get_secret_provider()
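The factory tests above imply a small dispatch contract: return `None` when no secret provider is configured, construct the matching provider by name, raise a `ValueError` mentioning "Unknown SECRET_PROVIDER" for unrecognized names, and wrap construction failures in a `ValueError` of the form "Failed to initialize ... secret provider". A sketch of that logic under those assumptions (stub classes stand in for the real providers; the actual `get_secret_provider` may differ):

```python
# Sketch of the dispatch behavior the factory tests assume.
class _StubProvider:
    """Hypothetical stand-in for the Google Cloud Storage / AWS providers."""


def get_secret_provider_sketch(secret_provider_name):
    if not secret_provider_name:
        return None  # no secret provider configured
    providers = {
        "google_cloud_storage": _StubProvider,
        "aws_secrets_manager": _StubProvider,
    }
    if secret_provider_name not in providers:
        raise ValueError(f"Unknown SECRET_PROVIDER: {secret_provider_name}")
    try:
        return providers[secret_provider_name]()
    except Exception as e:
        raise ValueError(
            f"Failed to initialize {secret_provider_name} secret provider"
        ) from e
```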