{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Overview","text":"
PR-Agent is an open-source tool that helps you efficiently review and handle pull requests. Qodo Merge is a hosted version of PR-Agent, designed for companies and teams that require additional features and capabilities.
See the Installation Guide for instructions on installing and running the tool on different git platforms.
See the Usage Guide for instructions on running commands via different interfaces, including CLI, online usage, or by automatically triggering them when a new PR is opened.
See the Tools Guide for a detailed description of the different tools.
See the Video Tutorials for practical demonstrations on how to use the tools.
To search the documentation site using natural language:
1) Comment /help \"your question\"
in either:
2) The bot will respond with an answer that includes relevant documentation links.
"},{"location":"#features","title":"Features","text":"PR-Agent and Qodo Merge offer comprehensive pull request functionalities integrated with various git providers:
GitHub GitLab Bitbucket Azure DevOps Gitea TOOLS Describe \u2705 \u2705 \u2705 \u2705 \u2705 Review \u2705 \u2705 \u2705 \u2705 \u2705 Improve \u2705 \u2705 \u2705 \u2705 \u2705 Ask \u2705 \u2705 \u2705 \u2705 \u2b91 Ask on code lines \u2705 \u2705 Help Docs \u2705 \u2705 \u2705 Update CHANGELOG \u2705 \u2705 \u2705 \u2705 Add Documentation \ud83d\udc8e \u2705 \u2705 Analyze \ud83d\udc8e \u2705 \u2705 Auto-Approve \ud83d\udc8e \u2705 \u2705 \u2705 CI Feedback \ud83d\udc8e \u2705 Custom Prompt \ud83d\udc8e \u2705 \u2705 \u2705 Generate Custom Labels \ud83d\udc8e \u2705 \u2705 Generate Tests \ud83d\udc8e \u2705 \u2705 Implement \ud83d\udc8e \u2705 \u2705 \u2705 Scan Repo Discussions \ud83d\udc8e \u2705 Similar Code \ud83d\udc8e \u2705 Ticket Context \ud83d\udc8e \u2705 \u2705 \u2705 Utilizing Best Practices \ud83d\udc8e \u2705 \u2705 \u2705 PR Chat \ud83d\udc8e \u2705 Suggestion Tracking \ud83d\udc8e \u2705 \u2705 USAGE CLI \u2705 \u2705 \u2705 \u2705 \u2705 App / webhook \u2705 \u2705 \u2705 \u2705 \u2705 Tagging bot \u2705 Actions \u2705 \u2705 \u2705 \u2705 CORE Adaptive and token-aware file patch fitting \u2705 \u2705 \u2705 \u2705 Auto Best Practices \ud83d\udc8e \u2705 Chat on code suggestions \u2705 \u2705 Code Validation \ud83d\udc8e \u2705 \u2705 \u2705 \u2705 Dynamic context \u2705 \u2705 \u2705 \u2705 Fetching ticket context \u2705 \u2705 \u2705 Global and wiki configurations \ud83d\udc8e \u2705 \u2705 \u2705 Impact Evaluation \ud83d\udc8e \u2705 \u2705 Incremental Update \ud83d\udc8e \u2705 Interactivity \u2705 \u2705 Local and global metadata \u2705 \u2705 \u2705 \u2705 Multiple models support \u2705 \u2705 \u2705 \u2705 PR compression \u2705 \u2705 \u2705 \u2705 PR interactive actions \ud83d\udc8e \u2705 \u2705 RAG context enrichment \u2705 \u2705 Self reflection \u2705 \u2705 \u2705 \u2705 Static code analysis \ud83d\udc8e \u2705 \u2705\ud83d\udc8e means Qodo Merge only
Throughout the documentation, \ud83d\udc8e marks a feature available only in Qodo Merge, not in the open-source version.
"},{"location":"#example-results","title":"Example Results","text":""},{"location":"#describe","title":"/describe","text":""},{"location":"#review","title":"/review","text":""},{"location":"#improve","title":"/improve","text":""},{"location":"#generate_labels","title":"/generate_labels","text":""},{"location":"#how-it-works","title":"How it Works","text":"The following diagram illustrates Qodo Merge tools and their flow:
Check out the PR Compression strategy page for more details on how we convert a code diff to a manageable LLM prompt
"},{"location":"ai_search/","title":"AI Docs Search","text":"AI Docs SearchSearch through our documentation using AI-powered natural language queries.
Search"},{"location":"chrome-extension/","title":"Chrome extension","text":"Qodo Merge Chrome extension is a collection of tools that integrates seamlessly with your GitHub environment, aiming to enhance your Git usage experience, and providing AI-powered capabilities to your PRs.
With a single-click installation you will gain access to a context-aware chat on your pull request's code, a toolbar extension with multiple AI feedback options, Qodo Merge filters, and additional capabilities.
The extension is powered by top code models like Claude 3.7 Sonnet and o4-mini. All the extension's features are free to use on public repositories.
For private repositories, you will need to install Qodo Merge in addition to the extension. For a demonstration of how to install Qodo Merge and use it with the Chrome extension, please refer to the tutorial video at the provided link.
"},{"location":"chrome-extension/#supported-browsers","title":"Supported browsers","text":"The extension is supported on all Chromium-based browsers, including Google Chrome, Arc, Opera, Brave, and Microsoft Edge.
"},{"location":"chrome-extension/data_privacy/","title":"Data privacy","text":"We take your code's security and privacy seriously:
The PR-Chat feature allows you to chat freely with your PR code within your GitHub environment. It will seamlessly use the PR as context for your chat session and provide AI-powered feedback.
To enable private chat, simply install the Qodo Merge Chrome extension. After installation, each PR's 'Files changed' tab will include a chat box, where you can ask questions about your code. This chat session is private and won't be visible to other users.
All open-source repositories are supported. For private repositories, you will also need to install Qodo Merge. After installation, make sure to open at least one new PR to fully register your organization. Once done, you can chat with both new and existing PRs across all installed repositories.
"},{"location":"chrome-extension/features/#context-aware-pr-chat","title":"Context-aware PR chat","text":"Qodo Merge constructs a comprehensive context for each pull request, incorporating the PR description, commit messages, and code changes with extended dynamic context. This contextual information, along with additional PR-related data, forms the foundation for an AI-powered chat session. The agent then leverages this rich context to provide intelligent, tailored responses to user inquiries about the pull request.
"},{"location":"chrome-extension/features/#toolbar-extension","title":"Toolbar extension","text":"
With the Qodo Merge Chrome extension, it's easier than ever to interactively configure and experiment with the different tools and configuration options.
For private repositories, once you have found the setup that works for you, you can also easily export it as a persistent configuration file and use it for automatic commands.
"},{"location":"chrome-extension/features/#qodo-merge-filters","title":"Qodo Merge filters","text":"Qodo Merge filters is a sidepanel option. that allows you to filter different message in the conversation tab.
For example, you can choose to present only message from Qodo Merge, or filter those messages, focusing only on user's comments.
"},{"location":"chrome-extension/features/#enhanced-code-suggestions","title":"Enhanced code suggestions","text":"Qodo Merge Chrome extension adds the following capabilities to code suggestions tool's comments:
To access the options page for the Qodo Merge Chrome extension:
Alternatively, you can access the options page directly using this URL:
chrome-extension://ephlnjeghhogofkifjloamocljapahnl/options.html
"},{"location":"chrome-extension/options/#configuration-options","title":"Configuration Options","text":""},{"location":"chrome-extension/options/#api-base-host","title":"API Base Host","text":"For single-tenant customers, you can configure the extension to communicate directly with your company's Qodo Merge server instance.
To set this up:
Note: The extension does not send your code to the server, but only triggers your previously installed Qodo Merge application.
"},{"location":"chrome-extension/options/#interface-options","title":"Interface Options","text":"You can customize the extension's interface by:
Remember to click \"Save Settings\" after making any changes.
"},{"location":"core-abilities/","title":"Core Abilities","text":"Qodo Merge utilizes a variety of core abilities to provide a comprehensive and efficient code review experience. These abilities include:
Here are some additional technical blogs from Qodo that delve deeper into the core capabilities and features of Large Language Models (LLMs) when applied to coding tasks. These resources provide more comprehensive insights into leveraging LLMs for software development.
"},{"location":"core-abilities/#code-generation-and-llms","title":"Code Generation and LLMs","text":"Supported Git Platforms: GitHub, GitLab, Bitbucket
Under specific conditions, Qodo Merge can auto-approve a PR when a manual comment is invoked, or when the PR meets certain criteria.
To ensure safety, the auto-approval feature is disabled by default. To enable auto-approval features, you need to actively set one or both of the following options in a pre-defined configuration file:
[config]\nenable_comment_approval = true # For approval via comments\nenable_auto_approval = true # For criteria-based auto-approval\n
Notes
To enable approval by commenting, set in the configuration file:
[config]\nenable_comment_approval = true\n
Once enabled, comment on a PR:
/review auto_approve\n
Qodo Merge will approve the PR and add a comment with the reason for the approval.
"},{"location":"core-abilities/auto_approval/#auto-approval-when-the-pr-meets-certain-criteria","title":"Auto-approval when the PR meets certain criteria","text":"To enable auto-approval based on specific criteria, first, you need to enable the top-level flag:
[config]\nenable_auto_approval = true\n
There are two possible paths leading to this auto-approval - one via the review
tool, and one via the improve
tool. Each tool can independently trigger auto-approval.
review
tool","text":"Review effort score criteria
[config]\nenable_auto_approval = true\nauto_approve_for_low_review_effort = X # X is a number between 1 and 5\n
When the review effort score is lower than or equal to X, the PR will be auto-approved (unless ticket compliance is enabled and fails, see below).
Ticket compliance criteria
[config]\nenable_auto_approval = true\nensure_ticket_compliance = true # Default is false\n
If ensure_ticket_compliance
is set to true
, auto-approval for the review
tool path will be disabled if no ticket is linked to the PR, or if the PR is not fully compliant with a linked ticket. This ensures that PRs are only auto-approved if their associated tickets are properly resolved.
You can also prevent auto-approval if the PR exceeds the ticket's scope (see here).
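For illustration only, here is a minimal sketch of the combined decision rule described above; it is not Qodo Merge's actual implementation, and the function and argument names are hypothetical:
# Hypothetical sketch of the criteria-based auto-approval rule (not Qodo Merge's code)\ndef should_auto_approve(review_effort, has_linked_ticket, ticket_compliant,\n                        auto_approve_for_low_review_effort=3,\n                        ensure_ticket_compliance=False):\n    # Review effort must be at or below the configured threshold (1-5 scale)\n    if review_effort > auto_approve_for_low_review_effort:\n        return False\n    # If ticket compliance is required, a linked and fully compliant ticket is mandatory\n    if ensure_ticket_compliance and not (has_linked_ticket and ticket_compliant):\n        return False\n    return True\n\nprint(should_auto_approve(review_effort=2, has_linked_ticket=True, ticket_compliant=True))  # True\n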
improve
tool","text":"PRs can be auto-approved when the improve
tool doesn't find code suggestions. To enable this feature, set the following in the configuration file:
[config]\nenable_auto_approval = true\nauto_approve_for_no_suggestions = true\n
"},{"location":"core-abilities/auto_best_practices/","title":"Auto Best Practices \ud83d\udc8e","text":"Supported Git Platforms: GitHub
Note - enabling a Wiki is required for this feature.
"},{"location":"core-abilities/auto_best_practices/#finding-code-problems-exploration-phase","title":"Finding Code Problems - Exploration Phase","text":"The improve
tool identifies potential issues, problems and bugs in Pull Request (PR) code changes. Rather than focusing on minor issues like code style or formatting, the tool intelligently analyzes code to detect meaningful problems.
The analysis intentionally takes a flexible, exploratory approach to identify meaningful potential issues, allowing the tool to surface relevant code suggestions without being constrained by predefined categories.
"},{"location":"core-abilities/auto_best_practices/#tracking-implemented-suggestions","title":"Tracking Implemented Suggestions","text":"Qodo Merge features a novel tracking system that automatically detects when PR authors implement AI-generated code suggestions. All accepted suggestions are aggregated in a repository-specific wiki page called .pr_agent_accepted_suggestions
Each month, Qodo Merge analyzes the collection of accepted suggestions to generate repository-specific best practices, stored in the .pr_agent_auto_best_practices
wiki file. These best practices reflect recurring patterns in accepted code improvements.
The improve
tool will incorporate these best practices as an additional analysis layer, checking PR code changes against known patterns of previously accepted improvements. This creates a two-phase analysis:
By keeping these phases decoupled, the tool remains free to discover new or unseen issues and problems, while also learning from past experiences.
When presenting the suggestions generated by the improve
tool, Qodo Merge will add a dedicated label for each suggestion generated from the auto best practices - 'Learned best practice':
Teams and companies can also manually define their own custom best practices in Qodo Merge.
When custom best practices exist, Qodo Merge will still generate an 'auto best practices' wiki file, though it won't be used by the improve
tool. However, this auto-generated file can still serve two valuable purposes:
Even when using custom best practices, we recommend regularly reviewing the auto best practices file to refine your custom rules.
"},{"location":"core-abilities/auto_best_practices/#relevant-configurations","title":"Relevant configurations","text":"[auto_best_practices]\n# Disable all auto best practices usage or generation\nenable_auto_best_practices = true \n\n# Disable usage of auto best practices file in the 'improve' tool\nutilize_auto_best_practices = true \n\n# Extra instructions to the auto best practices generation prompt\nextra_instructions = \"\" \n\n# Max number of patterns to be detected\nmax_patterns = 5 \n
"},{"location":"core-abilities/chat_on_code_suggestions/","title":"Chat on code suggestions \ud83d\udc8e","text":"Supported Git Platforms: GitHub, GitLab
Qodo Merge implements an orchestrator agent that enables interactive code discussions, listening and responding to comments without requiring explicit tool calls. The orchestrator intelligently analyzes your responses to determine if you want to implement a suggestion, ask a question, or request help, then delegates to the appropriate specialized tool.
To minimize unnecessary notifications and maintain focused discussions, the orchestrator agent will only respond to comments made directly within the inline code suggestion discussions it has created (/improve
) or within discussions initiated by the /implement
command.
Enable interactive code discussions by adding the following to your configuration file (default is True
):
[pr_code_suggestions]\nenable_chat_in_code_suggestions = true\n
"},{"location":"core-abilities/chat_on_code_suggestions/#activation","title":"Activation","text":""},{"location":"core-abilities/chat_on_code_suggestions/#improve","title":"/improve
","text":"To obtain dynamic responses, the following steps are required:
/improve
command (mostly automatic)/improve
recommendation checkboxes (Apply this suggestion) to have Qodo Merge generate a new inline code suggestion discussion/implement
","text":"To obtain dynamic responses, the following steps are required:
/implement
commandTip: Direct the agent with keywords
Use \"implement\" or \"apply\" for code generation. Use \"explain\", \"why\", or \"how\" for information and help.
Asking for Details | Implementing Suggestions | Providing Additional Help "},{"location":"core-abilities/code_validation/","title":"Code Validation \ud83d\udc8e","text":"Supported Git Platforms: GitHub, GitLab, Bitbucket
The Git environment usually represents the final stage before code enters production. Hence, detecting bugs and issues during the review process is critical.
The improve
tool provides actionable code suggestions for your pull requests, aiming to help detect and fix bugs and problems. By default, suggestions appear as a comment in a table format:
Each suggestion in the table can be \"applied\" by clicking on the Apply this suggestion
checkbox, converting it to a committable Git code change that can be committed directly to the PR. This approach allows you to fix issues without returning to your IDE for manual edits \u2014 significantly faster and more convenient.
However, committing a suggestion in a Git environment carries more risk than in a local IDE, as you don't have the opportunity to fully run and test the code before committing.
To balance convenience with safety, Qodo Merge implements a dual validation system for each generated code suggestion:
1) Localization - Qodo Merge confirms that the suggestion's line numbers and surrounding code, as predicted by the model, actually match the repo code. This means that the model correctly identified the context and location of the code to be changed.
2) \"Compilation\" - Using static code analysis, Qodo Merge verifies that after applying the suggestion, the modified file will still be valid, meaning tree-sitter syntax processing will not throw an error. This process is relevant for multiple programming languages, see here for the full list of supported languages.
When a suggestion fails to meet these validation criteria, it may still provide valuable feedback, but isn't suitable for direct application to the PR. In such cases, Qodo Merge will omit the 'apply' checkbox and instead display:
[To ensure code accuracy, apply this suggestion manually]
All suggestions that pass these validations undergo a final stage of self-reflection, where the AI model evaluates, scores, and re-ranks its own suggestions, eliminating any that are irrelevant or incorrect. Read more about this process in the self-reflection page.
"},{"location":"core-abilities/code_validation/#conclusion","title":"Conclusion","text":"The validation methods described above enhance the reliability of code suggestions and help PR authors determine which suggestions are safer to apply in the Git environment. Of course, additional factors should be considered, such as suggestion complexity and potential code impact.
Human judgment remains essential. After clicking 'apply', Qodo Merge still presents the 'before' and 'after' code snippets for review, allowing you to assess the changes before finalizing the commit.
"},{"location":"core-abilities/compression_strategy/","title":"Compression strategy","text":"Supported Git Platforms: GitHub, GitLab, Bitbucket
There are two scenarios:
For both scenarios, we first use the following strategy
"},{"location":"core-abilities/compression_strategy/#repo-language-prioritization-strategy","title":"Repo language prioritization strategy","text":"We prioritize the languages of the repo based on the following criteria:
[[file.py, file2.py],[file3.js, file4.jsx],[readme.md]]
In this case, we can fit the entire PR in a single prompt:
Pull requests can be very long and contain a lot of information with varying degrees of relevance to the pr-agent. We want to be able to pack as much information as possible into a single LLM prompt, while keeping the information relevant to the pr-agent.
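To make the prompt-budget idea concrete, here is a rough sketch of counting patch tokens with tiktoken (the tokenizer used by the compression strategy below) and greedily packing patches until a budget is reached; the encoding name, budget, and packing order are assumptions rather than PR-Agent's exact logic:
# Illustrative token budgeting with tiktoken (not PR-Agent's actual implementation)\nimport tiktoken\n\nenc = tiktoken.get_encoding('cl100k_base')  # encoding name is an assumption\n\ndef pack_patches(patches, max_tokens=8000):\n    # Greedily add patches (already sorted by priority) until the token budget is exhausted\n    packed, used = [], 0\n    for patch in patches:\n        n = len(enc.encode(patch))\n        if used + n > max_tokens:\n            break  # hard stop: skip the remaining patches\n        packed.append(patch)\n        used += n\n    return packed\n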
"},{"location":"core-abilities/compression_strategy/#compression-strategy","title":"Compression strategy","text":"We prioritize additions over deletions:
deleted files
)We use tiktoken to tokenize the patches after the modifications described above, and we use the following strategy to fit the patches into the prompt:
[[file2.py, file.py],[file4.jsx, file3.js],[readme.md]]
other modified files
to the prompt until the prompt reaches the max token length (hard stop), skip the rest of the patches.deleted files
to the prompt until the prompt reaches the max token length (hard stop), skip the rest of the patches.Supported Git Platforms: GitHub, GitLab, Bitbucket
Qodo Merge uses an asymmetric and dynamic context strategy to improve AI analysis of code changes in pull requests. It provides more context before changes than after, and dynamically adjusts the context based on code structure (e.g., enclosing functions or classes). This approach balances providing sufficient context for accurate analysis, while avoiding needle-in-the-haystack information overload that could degrade AI performance or exceed token limits.
"},{"location":"core-abilities/dynamic_context/#introduction","title":"Introduction","text":"Pull request code changes are retrieved in a unified diff format, showing three lines of context before and after each modified section, with additions marked by '+' and deletions by '-'.
@@ -12,5 +12,5 @@ def func1():\n code line that already existed in the file...\n code line that already existed in the file...\n code line that already existed in the file....\n-code line that was removed in the PR\n+new code line added in the PR\n code line that already existed in the file...\n code line that already existed in the file...\n code line that already existed in the file...\n\n@@ -26,2 +26,4 @@ def func2():\n...\n
This unified diff format can be challenging for AI models to interpret accurately, as it provides limited context for understanding the full scope of code changes. The presentation of code using '+', '-', and ' ' symbols to indicate additions, deletions, and unchanged lines respectively also differs from the standard code formatting typically used to train AI models.
"},{"location":"core-abilities/dynamic_context/#challenges-of-expanding-the-context-window","title":"Challenges of expanding the context window","text":"While expanding the context window is technically feasible, it presents a more fundamental trade-off:
Pros:
Cons:
Excessive context may overwhelm the model with extraneous information, creating a \"needle in a haystack\" scenario where focusing on the relevant details (the code that actually changed) becomes challenging. LLM quality is known to degrade when the context gets larger. Pull requests often encompass multiple changes across many files, potentially spanning hundreds of lines of modified code. This complexity presents a genuine risk of overwhelming the model with excessive context.
Increased context expands the token count, increasing processing time and cost, and may prevent the model from processing the entire pull request in a single pass.
To address these challenges, Qodo Merge employs an asymmetric and dynamic context strategy, providing the model with more focused and relevant context information for each code change.
Asymmetric:
We start by recognizing that the context preceding a code change is typically more crucial for understanding the modification than the context following it. Consequently, Qodo Merge implements an asymmetric context policy, decoupling the context window into two distinct segments: one for the code before the change and another for the code after.
By independently adjusting each context window, Qodo Merge can supply the model with a more tailored and pertinent context for individual code changes.
Dynamic:
We also employ a \"dynamic\" context strategy. We start by recognizing that the optimal context for a code change often corresponds to its enclosing code component (e.g., function, class), rather than a fixed number of lines. Consequently, we dynamically adjust the context window based on the code's structure, ensuring the model receives the most pertinent information for each modification.
To prevent overwhelming the model with excessive context, we impose a limit on the number of lines searched when identifying the enclosing component. This balance allows for comprehensive understanding while maintaining efficiency and limiting context token usage.
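To illustrate the idea, here is a simplified sketch of dynamic context extension; it only looks for Python 'def'/'class' headers and is not Qodo Merge's actual algorithm:
# Simplified sketch: walk upward from a hunk start, adding up to max_extra_lines lines,\n# and stop early once an enclosing def/class header is reached\ndef extend_context_before(file_lines, hunk_start, max_extra_lines=8):\n    extra = []\n    for i in range(hunk_start - 1, max(hunk_start - 1 - max_extra_lines, -1), -1):\n        line = file_lines[i]\n        extra.insert(0, line)\n        if line.lstrip().startswith(('def ', 'class ')):\n            break  # reached the enclosing component; stop extending\n    return extra\n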
"},{"location":"core-abilities/dynamic_context/#appendix-relevant-configuration-options","title":"Appendix - relevant configuration options","text":"[config]\npatch_extension_skip_types =[\".md\",\".txt\"] # Skip files with these extensions when trying to extend the context\nallow_dynamic_context=true # Allow dynamic context extension\nmax_extra_lines_before_dynamic_context = 8 # will try to include up to X extra lines before the hunk in the patch, until we reach an enclosing function or class\npatch_extra_lines_before = 3 # Number of extra lines (+3 default ones) to include before each hunk in the patch\npatch_extra_lines_after = 1 # Number of extra lines (+3 default ones) to include after each hunk in the patch\n
"},{"location":"core-abilities/fetching_ticket_context/","title":"Fetching Ticket Context for PRs","text":"Supported Git Platforms: GitHub, GitLab, Bitbucket
Qodo Merge streamlines code review workflows by seamlessly connecting with multiple ticket management systems. This integration enriches the review process by automatically surfacing relevant ticket information and context alongside code changes.
Ticket systems supported:
Ticket data fetched:
Ticket Recognition Requirements:
Qodo Merge will recognize the ticket and use the ticket content (title, description, labels) to provide additional context for the code changes. By understanding the reasoning and intent behind modifications, the LLM can offer more insightful and relevant code analysis.
"},{"location":"core-abilities/fetching_ticket_context/#review-tool","title":"Review tool","text":"Similarly to the describe
tool, the review
tool will use the ticket content to provide additional context for the code changes.
In addition, this feature will evaluate how well a Pull Request (PR) adheres to its original purpose/intent as defined by the associated ticket or issue mentioned in the PR description. Each ticket will be assigned a label (Compliance/Alignment level) indicating the degree to which the PR fulfills its original purpose:
A PR Code Verified
label indicates the PR code meets ticket requirements, but requires additional manual testing beyond the code scope. For example - validating UI display across different environments (Mac, Windows, mobile, etc.).
By default, the tool will automatically validate if the PR complies with the referenced ticket. If you want to disable this feedback, add the following line to your configuration file:
[pr_reviewer]\nrequire_ticket_analysis_review=false\n
If you set:
[pr_reviewer]\ncheck_pr_additional_content=true\n
(default: false
) the review
tool will also validate that the PR code doesn't contain any additional content that is not related to the ticket. If it does, the PR will be labeled at best as PR Code Verified
, and the review
tool will provide a comment with the additional unrelated content found in the PR code.
Qodo Merge will automatically recognize GitHub issues mentioned in the PR description and fetch the issue content. Examples of valid GitHub issue references:
https://github.com/<ORG_NAME>/<REPO_NAME>/issues/<ISSUE_NUMBER>
#<ISSUE_NUMBER>
<ORG_NAME>/<REPO_NAME>#<ISSUE_NUMBER>
Since Qodo Merge is integrated with GitHub, it doesn't require any additional configuration to fetch GitHub issues.
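For illustration, here is a hedged sketch of recognizing these reference formats with regular expressions; the patterns are assumptions, not necessarily the exact ones PR-Agent uses:
# Illustrative patterns for the three GitHub issue reference formats above\nimport re\n\nISSUE_PATTERNS = [\n    r'https://github[.]com/[A-Za-z0-9_.-]+/[A-Za-z0-9_.-]+/issues/[0-9]+',  # full issue URL\n    r'(?<![A-Za-z0-9_/])#[0-9]+',                                           # #<ISSUE_NUMBER>\n    r'[A-Za-z0-9_.-]+/[A-Za-z0-9_.-]+#[0-9]+',                              # <ORG_NAME>/<REPO_NAME>#<ISSUE_NUMBER>\n]\n\ndef find_issue_refs(pr_description):\n    return [m.group(0) for p in ISSUE_PATTERNS for m in re.finditer(p, pr_description)]\n\nprint(find_issue_refs('Fixes #123 and relates to octo-org/octo-repo#7'))  # ['#123', 'octo-org/octo-repo#7']\n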
"},{"location":"core-abilities/fetching_ticket_context/#jira-integration","title":"Jira Integration \ud83d\udc8e","text":"We support both Jira Cloud and Jira Server/Data Center.
"},{"location":"core-abilities/fetching_ticket_context/#jira-cloud","title":"Jira Cloud","text":"There are two ways to authenticate with Jira Cloud:
1) Jira App Authentication
The recommended way to authenticate with Jira Cloud is to install the Qodo Merge app in your Jira Cloud instance. This will allow Qodo Merge to access Jira data on your behalf.
Installation steps:
Go to the Qodo Merge integrations page
Click on the Connect Jira Cloud button to connect the Jira Cloud app
Click the accept
button.
After installing the app, you will be redirected to the Qodo Merge registration page, where you will see a success message.
Now Qodo Merge will be able to fetch Jira ticket context for your PRs.
2) Email/Token Authentication
You can create an API token from your Atlassian account:
Log in to https://id.atlassian.com/manage-profile/security/api-tokens.
Click Create API token.
From the dialog that appears, enter a name for your new token and click Create.
Click Copy to clipboard.
[jira]\njira_api_token = \"YOUR_API_TOKEN\"\njira_api_email = \"YOUR_EMAIL\"\n
"},{"location":"core-abilities/fetching_ticket_context/#jira-data-centerserver","title":"Jira Data Center/Server","text":""},{"location":"core-abilities/fetching_ticket_context/#using-basic-authentication-for-jira-data-centerserver","title":"Using Basic Authentication for Jira Data Center/Server","text":"You can use your Jira username and password to authenticate with Jira Data Center/Server.
In your Configuration file/Environment variables/Secrets file, add the following lines:
jira_api_email = \"your_username\"\njira_api_token = \"your_password\"\n
(Note that the 'jira_api_email' field is used for the username, and the 'jira_api_token' field is used for the user password.)
"},{"location":"core-abilities/fetching_ticket_context/#validating-basic-authentication-via-python-script","title":"Validating Basic authentication via Python script","text":"If you are facing issues retrieving tickets in Qodo Merge with Basic auth, you can validate the flow using a Python script. This following steps will help you check if the basic auth is working correctly, and if you can access the Jira ticket details:
run pip install jira==3.8.0
run the following Python script (after replacing the placeholders with your actual values):
from jira import JIRA\n\n\nif __name__ == \"__main__\":\n try:\n # Jira server URL\n server = \"https://...\"\n # Basic auth\n username = \"...\"\n password = \"...\"\n # Jira ticket code (e.g. \"PROJ-123\")\n ticket_id = \"...\"\n\n print(\"Initializing JiraServerTicketProvider with JIRA server\")\n # Initialize JIRA client\n jira = JIRA(\n server=server,\n basic_auth=(username, password),\n timeout=30\n )\n if jira:\n print(f\"JIRA client initialized successfully\")\n else:\n print(\"Error initializing JIRA client\")\n\n # Fetch ticket details\n ticket = jira.issue(ticket_id)\n print(f\"Ticket title: {ticket.fields.summary}\")\n\n except Exception as e:\n print(f\"Error fetching JIRA ticket details: {e}\")\n
"},{"location":"core-abilities/fetching_ticket_context/#using-a-personal-access-token-pat-for-jira-data-centerserver","title":"Using a Personal Access Token (PAT) for Jira Data Center/Server","text":"[jira]\njira_base_url = \"YOUR_JIRA_BASE_URL\" # e.g. https://jira.example.com\njira_api_token = \"YOUR_API_TOKEN\"\n
"},{"location":"core-abilities/fetching_ticket_context/#validating-pat-token-via-python-script","title":"Validating PAT token via Python script","text":"If you are facing issues retrieving tickets in Qodo Merge with PAT token, you can validate the flow using a Python script. This following steps will help you check if the token is working correctly, and if you can access the Jira ticket details:
run pip install jira==3.8.0
run the following Python script (after replacing the placeholders with your actual values):
from jira import JIRA\n\n\nif __name__ == \"__main__\":\n try:\n # Jira server URL\n server = \"https://...\"\n # Jira PAT token\n token_auth = \"...\"\n # Jira ticket code (e.g. \"PROJ-123\")\n ticket_id = \"...\"\n\n print(\"Initializing JiraServerTicketProvider with JIRA server\")\n # Initialize JIRA client\n jira = JIRA(\n server=server,\n token_auth=token_auth,\n timeout=30\n )\n if jira:\n print(f\"JIRA client initialized successfully\")\n else:\n print(\"Error initializing JIRA client\")\n\n # Fetch ticket details\n ticket = jira.issue(ticket_id)\n print(f\"Ticket title: {ticket.fields.summary}\")\n\n except Exception as e:\n print(f\"Error fetching JIRA ticket details: {e}\")\n
"},{"location":"core-abilities/fetching_ticket_context/#multi-jira-server-configuration","title":"Multi-JIRA Server Configuration \ud83d\udc8e","text":"Qodo Merge supports connecting to multiple JIRA servers using different authentication methods.
The options below cover Email/Token (Basic Auth), PAT Auth, and the Jira Cloud App. To configure multiple servers using Email/Token authentication:
jira_servers
: List of JIRA server URLsjira_api_token
: List of API tokens (for Cloud) or passwords (for Data Center)jira_api_email
: List of emails (for Cloud) or usernames (for Data Center)jira_base_url
: Default server for ticket IDs like PROJ-123
. Each repository can configure (local config file) its own jira_base_url
to choose which server to use by default.Example Configuration:
[jira]\n# Server URLs\njira_servers = [\"https://company.atlassian.net\", \"https://datacenter.jira.com\"]\n\n# API tokens/passwords\njira_api_token = [\"cloud_api_token_here\", \"datacenter_password\"]\n\n# Emails/usernames (both required)\njira_api_email = [\"user@company.com\", \"datacenter_username\"]\n\n# Default server for ticket IDs\njira_base_url = \"https://company.atlassian.net\"\n
Configure multiple servers using Personal Access Token authentication:
jira_servers
: List of JIRA server URLsjira_api_token
: List of PAT tokensjira_api_email
: Not needed (can be omitted or left empty)jira_base_url
: Default server for ticket IDs like PROJ-123
. Each repository can configure (local config file) its own jira_base_url
to choose which server to use by default.Example Configuration:
[jira]\n# Server URLs\njira_servers = [\"https://server1.jira.com\", \"https://server2.jira.com\"]\n\n# PAT tokens only\njira_api_token = [\"pat_token_1\", \"pat_token_2\"]\n\n# Default server for ticket IDs\njira_base_url = \"https://server1.jira.com\"\n
Mixed Authentication (Email/Token + PAT):
[jira]\njira_servers = [\"https://company.atlassian.net\", \"https://server.jira.com\"]\njira_api_token = [\"cloud_api_token\", \"server_pat_token\"]\njira_api_email = [\"user@company.com\", \"\"] # Empty for PAT\n
For Jira Cloud instances using App Authentication:
[jira]\njira_base_url = \"https://primary-team.atlassian.net\"\n
Full URLs (e.g., https://other-team.atlassian.net/browse/TASK-456
) will automatically use the correct connected instance.
To integrate with Jira, you can link your PR to a ticket using either of these methods:
Method 1: Description Reference:
Include a ticket reference in your PR description, using either the complete URL format https://<JIRA_ORG>.atlassian.net/browse/ISSUE-123
or the shortened ticket ID ISSUE-123
(without prefix or suffix for the shortened ID).
Method 2: Branch Name Detection:
Name your branch with the ticket ID as a prefix (e.g., ISSUE-123-feature-description
or ISSUE-123/feature-description
).
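As a sketch of how such an ID can be detected in a branch name (illustrative only; the pattern is an assumption and may differ from Qodo Merge's internal matching):
# Illustrative extraction of a Jira-style ticket ID (e.g. ISSUE-123) used as a branch prefix\nimport re\n\ndef ticket_id_from_branch(branch):\n    match = re.match(r'([A-Z][A-Z0-9]+-[0-9]+)', branch)\n    return match.group(1) if match else None\n\nprint(ticket_id_from_branch('ISSUE-123-feature-description'))  # ISSUE-123\nprint(ticket_id_from_branch('feature/no-ticket'))               # None\n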
Jira Base URL
For shortened ticket IDs or branch detection (method 2 for JIRA cloud), you must configure the Jira base URL in your configuration file under the [jira] section:
[jira]\njira_base_url = \"https://<JIRA_ORG>.atlassian.net\"\n
Where <JIRA_ORG>
is your Jira organization identifier (e.g., mycompany
for https://mycompany.atlassian.net
)."},{"location":"core-abilities/fetching_ticket_context/#linear-integration","title":"Linear Integration \ud83d\udc8e","text":""},{"location":"core-abilities/fetching_ticket_context/#linear-app-authentication","title":"Linear App Authentication","text":"The recommended way to authenticate with Linear is to connect the Linear app through the Qodo Merge portal.
Installation steps:
Go to the Qodo Merge integrations page
Navigate to the Integrations tab
Click on the Linear button to connect the Linear app
Follow the authentication flow to authorize Qodo Merge to access your Linear workspace
Once connected, Qodo Merge will be able to fetch Linear ticket context for your PRs
Qodo Merge will automatically detect Linear tickets using either of these methods:
Method 1: Description Reference:
Include a ticket reference in your PR description using either: - The complete Linear ticket URL: https://linear.app/[ORG_ID]/issue/[TICKET_ID]
- The shortened ticket ID: [TICKET_ID]
(e.g., ABC-123
) - requires linear_base_url configuration (see below).
Method 2: Branch Name Detection:
Name your branch with the ticket ID as a prefix (e.g., ABC-123-feature-description
or feature/ABC-123/feature-description
).
Linear Base URL
For shortened ticket IDs or branch detection (method 2), you must configure the Linear base URL in your configuration file under the [linear] section:
[linear]\nlinear_base_url = \"https://linear.app/[ORG_ID]\"\n
Replace [ORG_ID]
with your Linear organization identifier.
Supported Git Platforms: GitHub, GitLab, Bitbucket
Demonstrating the return on investment (ROI) of AI-powered initiatives is crucial for modern organizations. To address this need, Qodo Merge has developed AI impact measurement tools and metrics, providing advanced analytics to help businesses quantify the tangible benefits of AI adoption in their PR review process.
"},{"location":"core-abilities/impact_evaluation/#auto-impact-validator-real-time-tracking-of-implemented-qodo-merge-suggestions","title":"Auto Impact Validator - Real-Time Tracking of Implemented Qodo Merge Suggestions","text":""},{"location":"core-abilities/impact_evaluation/#how-it-works","title":"How It Works","text":"When a user pushes a new commit to the pull request, Qodo Merge automatically compares the updated code against the previous suggestions, marking them as implemented if the changes address these recommendations, whether directly or indirectly:
Upon confirming that a suggestion was implemented, Qodo Merge automatically adds a \u2705 (check mark) to the relevant suggestion, enabling transparent tracking of Qodo Merge's impact analysis. Qodo Merge will also add, inside the relevant suggestions, an explanation of how the new code was impacted by each suggestion.
"},{"location":"core-abilities/impact_evaluation/#dashboard-metrics","title":"Dashboard Metrics","text":"The dashboard provides macro-level insights into the overall impact of Qodo Merge on the pull-request process with key productivity metrics.
By offering clear, data-driven evidence of Qodo Merge's impact, it empowers leadership teams to make informed decisions about the tool's effectiveness and ROI.
Here are key metrics that the dashboard tracks:
"},{"location":"core-abilities/impact_evaluation/#qodo-merge-impacts-per-1k-lines","title":"Qodo Merge Impacts per 1K Lines","text":"Explanation: for every 1K lines of code (additions/edits), Qodo Merge had on average ~X suggestions implemented.
Why This Metric Matters:
Explanation: This chart illustrates the distribution of implemented suggestions across different categories, enabling teams to better understand Qodo Merge's impact on various aspects of code quality and development practices.
"},{"location":"core-abilities/impact_evaluation/#suggestion-score-distribution","title":"Suggestion Score Distribution","text":"Explanation: The distribution of the suggestion score for the implemented suggestions, ensuring that higher-scored suggestions truly represent more significant improvements.
"},{"location":"core-abilities/incremental_update/","title":"Incremental Update \ud83d\udc8e","text":"Supported Git Platforms: GitHub, GitLab (Both cloud & server. For server: Version 17 and above)
The Incremental Update feature helps users focus on feedback for their newest changes, making large PRs more manageable.
"},{"location":"core-abilities/incremental_update/#how-it-works","title":"How it works","text":"Update Option on Subsequent CommitsGeneration of Incremental UpdateWhenever new commits are pushed following a recent code suggestions report for this PR, an Update button appears (as seen above).
Once the user clicks on the button:
improve
tool identifies the new changes (the \"delta\")Supported Git Platforms: GitHub, GitLab
Qodo Merge transforms static code reviews into interactive experiences by enabling direct actions from pull request (PR) comments. Developers can immediately trigger actions and apply changes with simple checkbox clicks.
This focused workflow maintains context while dramatically reducing the time between PR creation and final merge. The approach eliminates manual steps, provides clear visual indicators, and creates immediate feedback loops all within the same interface.
"},{"location":"core-abilities/interactivity/#key-interactive-features","title":"Key Interactive Features","text":""},{"location":"core-abilities/interactivity/#1-interactive-improve-tool","title":"1. Interactive/improve
Tool","text":"The /improve
command delivers a comprehensive interactive experience:
Apply this suggestion: Clicking this checkbox instantly converts a suggestion into a committable code change. When committed to the PR, changes made to code that was flagged for improvement will be marked with a check mark, allowing developers to easily track and review implemented recommendations.
More: Triggers generation of additional suggestions while keeping each one as focused and relevant as the original set
Update: Triggers a re-analysis of the code, providing updated suggestions based on the latest changes
Author self-review: Interactive acknowledgment that developers have opened and reviewed collapsed suggestions
/analyze
Tool","text":"The /analyze
command provides component-level analysis with interactive options for each identified code component:
Interactive checkboxes to generate tests, documentation, and code suggestions for specific components
On-demand similar code search that activates when a checkbox is clicked
Component-specific actions that trigger only for the selected elements, providing focused assistance
/help
Tool","text":"The /help
command not only lists available tools and their descriptions but also enables immediate tool invocation through interactive checkboxes. When a user checks a tool's checkbox, Qodo Merge instantly triggers that tool without requiring additional commands. This transforms the standard help menu into an interactive launch pad for all Qodo Merge capabilities, eliminating context switching by keeping developers within their PR workflow.
Supported Git Platforms: GitHub, GitLab, Bitbucket
1. Qodo Merge initially retrieves for each PR the following data:
Tip: Organization-level metadata
In addition to the inputs above, Qodo Merge can incorporate supplementary preferences provided by the user, like extra_instructions
and organization best practices
. This information can be used to enhance the PR analysis.
2. By default, the first command that Qodo Merge executes is describe
, which generates three types of outputs:
These AI-generated outputs are now considered as part of the PR metadata, and can be used in subsequent commands like review
and improve
. This effectively enables multi-stage chain-of-thought analysis, without making additional API calls that would cost time and money.
For example, when generating code suggestions for different files, Qodo Merge can inject the AI-generated \"Changes walkthrough\" file summary in the prompt:
## File: 'src/file1.py'\n### AI-generated file summary:\n- edited function `func1` that does X\n- Removed function `func2` that was not used\n- ....\n\n@@ ... @@ def func1():\n__new hunk__\n11 unchanged code line0\n12 unchanged code line1\n13 +new code line2 added\n14 unchanged code line3\n__old hunk__\n unchanged code line0\n unchanged code line1\n-old code line2 removed\n unchanged code line3\n\n@@ ... @@ def func2():\n__new hunk__\n...\n__old hunk__\n...\n
3. The entire PR files that were retrieved are also used to expand and enhance the PR context (see Dynamic Context).
4. All the metadata described above represents several levels of cumulative analysis - ranging from hunk level, to file level, to PR level, to organization level. This comprehensive approach enables Qodo Merge's AI models to generate more precise and contextually relevant suggestions and feedback.
"},{"location":"core-abilities/rag_context_enrichment/","title":"RAG Context Enrichment \ud83d\udc8e","text":"Supported Git Platforms: GitHub, Bitbucket Data Center
Prerequisites
A feature that enhances AI analysis by retrieving and referencing relevant code patterns from your project, enabling context-aware insights during code reviews.
"},{"location":"core-abilities/rag_context_enrichment/#how-does-rag-context-enrichment-work","title":"How does RAG Context Enrichment work?","text":"Using Retrieval-Augmented Generation (RAG), it searches your configured repositories for contextually relevant code segments, enriching pull request (PR) insights and accelerating review accuracy.
"},{"location":"core-abilities/rag_context_enrichment/#getting-started","title":"Getting started","text":""},{"location":"core-abilities/rag_context_enrichment/#configuration-options","title":"Configuration options","text":"In order to enable the RAG feature, add the following lines to your configuration file:
[rag_arguments]\nenable_rag=true\n
RAG Arguments Options enable_rag If set to true, repository enrichment using RAG will be enabled. Default is false. rag_repo_list A list of repositories that will be used by the semantic search for RAG. Use ['all']
to consider the entire codebase or a select list of repositories, for example: ['my-org/my-repo', ...]. Default: the repository from which the PR was opened.
RAG capability is exclusively available in the following tools:
/review
/implement
/ask
The /review
tool offers the Focus area from RAG data which contains feedback based on the RAG references analysis. The complete list of references found relevant to the PR will be shown in the References section, helping developers understand the broader context by exploring the provided references.
The /implement
tool utilizes the RAG feature to provide comprehensive context of the repository codebase, allowing it to generate more refined code output. The References section contains links to the content used to support the code generation.
The /ask
tool can access broader repository context through the RAG feature when answering questions that go beyond the PR scope alone. The References section displays the additional repository content consulted to formulate the answer.
Supported Git Platforms: GitHub, GitLab, Bitbucket
Qodo Merge implements a self-reflection process where the AI model reflects, scores, and re-ranks its own suggestions, eliminating irrelevant or incorrect ones. This approach improves the quality and relevance of suggestions, saving users time and enhancing their experience. Configuration options allow users to set a score threshold for further filtering out suggestions.
"},{"location":"core-abilities/self_reflection/#introduction-efficient-review-with-hierarchical-presentation","title":"Introduction - Efficient Review with Hierarchical Presentation","text":"Given that not all generated code suggestions will be relevant, it is crucial to enable users to review them in a fast and efficient way, allowing quick identification and filtering of non-applicable ones.
To achieve this goal, Qodo Merge offers a dedicated hierarchical structure when presenting suggestions to users:
Fast Review
This hierarchical structure is designed to facilitate rapid review of each suggestion, with users spending an average of ~5-10 seconds per item.
"},{"location":"core-abilities/self_reflection/#self-reflection-and-re-ranking","title":"Self-reflection and Re-ranking","text":"The AI model is initially tasked with generating suggestions, and outputting them in order of importance. However, in practice we observe that models often struggle to simultaneously generate high-quality code suggestions and rank them well in a single pass. Furthermore, the initial set of generated suggestions sometimes contains easily identifiable errors.
To address these issues, we implemented a \"self-reflection\" process that refines suggestion ranking and eliminates irrelevant or incorrect proposals. This process consists of the following steps:
Note that presenting all generated suggestions simultaneously provides the model with a comprehensive context, enabling it to make more informed decisions compared to evaluating each suggestion individually.
To conclude, the self-reflection process enables Qodo Merge to prioritize suggestions based on their importance, eliminate inaccurate or irrelevant proposals, and optionally exclude suggestions that fall below a specified threshold of significance. This results in a more refined and valuable set of suggestions for the user, saving time and improving the overall experience.
"},{"location":"core-abilities/self_reflection/#example-results","title":"Example Results","text":""},{"location":"core-abilities/self_reflection/#appendix-relevant-configuration-options","title":"Appendix - Relevant Configuration Options","text":"[pr_code_suggestions]\nsuggestions_score_threshold = 0 # Filter out suggestions with a score below this threshold (0-10)\n
"},{"location":"core-abilities/static_code_analysis/","title":"Static Code Analysis \ud83d\udc8e","text":"Supported Git Platforms: GitHub, GitLab, Bitbucket
By combining static code analysis with LLM capabilities, Qodo Merge can provide a comprehensive analysis of the PR code changes on a component level.
It scans the PR code changes, finds all the code components (methods, functions, classes) that changed, and enables you to interactively generate tests, docs, code suggestions and similar code search for each component.
Languages currently supported:
Python, Java, C++, JavaScript, TypeScript, C#, Go.
"},{"location":"core-abilities/static_code_analysis/#capabilities","title":"Capabilities","text":""},{"location":"core-abilities/static_code_analysis/#analyze-pr","title":"Analyze PR","text":"The analyze
tool enables you to interactively generate tests, docs, code suggestions and similar code search for each component that changed in the PR. It can be invoked manually by commenting on any PR:
/analyze\n
An example result:
Clicking on each checkbox will trigger the relevant tool for the selected component.
"},{"location":"core-abilities/static_code_analysis/#generate-tests","title":"Generate Tests","text":"The test
tool generates tests for a selected component, based on the PR code changes. It can be invoked manually by commenting on any PR:
/test component_name\n
where 'component_name' is the name of a specific component in the PR. It can also be triggered interactively by using the analyze
tool.
The add_docs
tool scans the PR code changes and automatically generates docstrings for any code components that changed in the PR. It can be invoked manually by commenting on any PR:
/add_docs component_name\n
Or be triggered interactively by using the analyze
tool.
The improve_component
tool generates code suggestions for a specific code component that changed in the PR. It can be invoked manually by commenting on any PR:
/improve_component component_name\n
Or be triggered interactively by using the analyze
tool.
The similar code
tool retrieves the most similar code components from inside the organization's codebase or from open-source code, including details about the license associated with each repository.
For example:
Global Search
for a method called chat_completion
:
Qodo Merge is designed to assist, not replace, human reviewers.
Reviewing PRs is a tedious and time-consuming task often seen as a \"chore\". In addition, the longer the PR, the less thorough the relative feedback tends to be, since long PRs can overwhelm reviewers, both in terms of technical difficulty and actual review time. Qodo Merge aims to address these pain points, and to assist and empower both the PR author and reviewer.
However, Qodo Merge has built-in safeguards to ensure the developer remains in the driver's seat. For example:
Read more about this issue in our blog
"},{"location":"faq/#answer2","title":"Answer:2","text":"AI errors are rare, but possible. A main value from reviewing the code suggestions lies in their high probability of catching mistakes or bugs made by the PR author. We believe it's worth spending 30-60 seconds reviewing suggestions, even if some aren't relevant, as this practice can enhance code quality and prevent bugs in production.
The hierarchical structure of the suggestions is designed to help the user quickly understand them, and to decide which ones are relevant and which are not:
Category
header is relevant, the user should move to the summarized suggestion description.In addition, we recommend to use the extra_instructions
field to guide the model to suggestions that are more relevant to the specific needs of the project.
See here for more information on how to use the extra_instructions
and best_practices
configuration options, to guide the model to more tailored suggestions.
No. Qodo Merge's strict privacy policy ensures that your code is not stored or used for training purposes.
For a detailed overview of our data privacy policy, please refer to this link
"},{"location":"faq/#answer5","title":"Answer:5","text":"When you self-host the open-source version, you use your own keys.
Qodo Merge with SaaS deployment is a hosted version of Qodo Merge, where Qodo manages the infrastructure and the keys. For enterprise customers, on-prem deployment is also available. Contact us for more information.
"},{"location":"faq/#answer6","title":"Answer:6","text":"Yes. While Qodo Merge won't automatically review draft PRs, you can still get feedback by manually requesting it through online commenting.
For active PRs, you can customize the automatic feedback settings here to match your team's workflow.
"},{"location":"faq/#answer7","title":"Answer:7","text":"Yes, you can customize review effort estimates using the extra_instructions
configuration option (see documentation).
Example mapping:
Note: The effort levels (1-5) are primarily meant for comparative purposes, helping teams prioritize reviewing smaller PRs first. The actual review duration may vary, as the focus is on providing consistent relative effort estimates.
"},{"location":"installation/","title":"Installation","text":""},{"location":"installation/#self-hosted-pr-agent","title":"Self-hosted PR-Agent","text":"There are several ways to use self-hosted PR-Agent:
Qodo Merge, an app hosted by QodoAI for GitHub/GitLab/BitBucket, is also available. With Qodo Merge, installation is as simple as adding the Qodo Merge app to your relevant repositories. See here for more details.
"},{"location":"installation/azure/","title":"Azure","text":""},{"location":"installation/azure/#azure-devops-pipeline","title":"Azure DevOps Pipeline","text":"You can use a pre-built Action Docker image to run PR-Agent as an Azure devops pipeline. Add the following file to your repository under azure-pipelines.yml
:
# Opt out of CI triggers\ntrigger: none\n\n# Configure PR trigger\npr:\n branches:\n include:\n - '*'\n autoCancel: true\n drafts: false\n\nstages:\n- stage: pr_agent\n displayName: 'PR Agent Stage'\n jobs:\n - job: pr_agent_job\n displayName: 'PR Agent Job'\n pool:\n vmImage: 'ubuntu-latest'\n container:\n image: codiumai/pr-agent:latest\n options: --entrypoint \"\"\n variables:\n - group: pr_agent\n steps:\n - script: |\n echo \"Running PR Agent action step\"\n\n # Construct PR_URL\n PR_URL=\"${SYSTEM_COLLECTIONURI}${SYSTEM_TEAMPROJECT}/_git/${BUILD_REPOSITORY_NAME}/pullrequest/${SYSTEM_PULLREQUEST_PULLREQUESTID}\"\n echo \"PR_URL=$PR_URL\"\n\n # Extract organization URL from System.CollectionUri\n ORG_URL=$(echo \"$(System.CollectionUri)\" | sed 's/\\/$//') # Remove trailing slash if present\n echo \"Organization URL: $ORG_URL\"\n\n export azure_devops__org=\"$ORG_URL\"\n export config__git_provider=\"azure\"\n\n pr-agent --pr_url=\"$PR_URL\" describe\n pr-agent --pr_url=\"$PR_URL\" review\n pr-agent --pr_url=\"$PR_URL\" improve\n env:\n azure_devops__pat: $(azure_devops_pat)\n openai__key: $(OPENAI_KEY)\n displayName: 'Run Qodo Merge'\n
This script will run Qodo Merge on every new pull request, with the improve
, review
, and describe
commands. Note that you need to export the azure_devops__pat
and OPENAI_KEY
variables in the Azure DevOps pipeline settings (Pipelines -> Library -> + Variable group):
Make sure to give pipeline permissions to the pr_agent
variable group.
Note that Azure Pipelines lacks support for triggering workflows from PR comments. If you find a viable solution, please contribute it to our issue tracker
"},{"location":"installation/azure/#azure-devops-from-cli","title":"Azure DevOps from CLI","text":"To use Azure DevOps provider use the following settings in configuration.toml:
[config]\ngit_provider=\"azure\"\n
The Azure DevOps provider supports PAT token or DefaultAzureCredential authentication. A PAT is faster to create, but has a built-in expiration date and uses the user's identity for API calls. With DefaultAzureCredential you can use a managed identity or a Service Principal, which are more secure and create a separate ADO user identity (via AAD) for the agent.
If PAT was chosen, you can assign the value in .secrets.toml. If DefaultAzureCredential was chosen, you can assign the additional env vars like AZURE_CLIENT_SECRET directly, or use a managed identity/az cli (for local development) without any additional configuration. In any case, the 'org' value must be assigned in .secrets.toml:
[azure_devops]\norg = \"https://dev.azure.com/YOUR_ORGANIZATION/\"\n# pat = \"YOUR_PAT_TOKEN\" needed only if using PAT for authentication\n
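If you opt for DefaultAzureCredential with a service principal, the credentials are typically supplied through the standard azure-identity environment variables; the variable names below come from the azure-identity library rather than from PR-Agent itself, so treat this as a sketch:
export AZURE_CLIENT_ID=<service principal client id>\nexport AZURE_TENANT_ID=<azure ad tenant id>\nexport AZURE_CLIENT_SECRET=<service principal client secret>\n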
"},{"location":"installation/azure/#azure-devops-webhook","title":"Azure DevOps Webhook","text":"To trigger from an Azure webhook, you need to manually add a webhook. Use the \"Pull request created\" type to trigger a review, or \"Pull request commented on\" to trigger any supported comment with / comment on the relevant PR. Note that for the \"Pull request commented on\" trigger, only API v2.0 is supported.
For webhook security, create a dedicated username/password pair and configure the webhook username and password on both the server and the Azure DevOps webhook. These will be sent as basic Auth data by the webhook with each request:
[azure_devops_server]\nwebhook_username = \"<basic auth user>\"\nwebhook_password = \"<basic auth password>\"\n
Ensure that the webhook endpoint is only accessible over HTTPS to mitigate the risk of credential interception when using basic authentication.
"},{"location":"installation/bitbucket/","title":"Bitbucket","text":""},{"location":"installation/bitbucket/#run-as-a-bitbucket-pipeline","title":"Run as a Bitbucket Pipeline","text":"You can use the Bitbucket Pipeline system to run PR-Agent on every pull request open or update.
pipelines:\n pull-requests:\n '**':\n - step:\n name: PR Agent Review\n image: codiumai/pr-agent:latest\n script:\n - pr-agent --pr_url=https://bitbucket.org/$BITBUCKET_WORKSPACE/$BITBUCKET_REPO_SLUG/pull-requests/$BITBUCKET_PR_ID review\n
Add the following secure variables to your repository under Repository settings > Pipelines > Repository variables.
CONFIG__GIT_PROVIDER: bitbucket
OPENAI__KEY: <your key>
BITBUCKET__AUTH_TYPE: basic or bearer (default is bearer)
BITBUCKET__BEARER_TOKEN: <your token> (required when auth_type is bearer)
BITBUCKET__BASIC_TOKEN: <your token> (required when auth_type is basic)
You can get a Bitbucket token for your repository by following Repository Settings -> Security -> Access Tokens. For basic auth, you can generate a base64 encoded token from your username:password combination.
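For example, one way to produce the base64 value from a POSIX shell (the username and password below are placeholders):
echo -n \"<username>:<password>\" | base64\n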
Note that comments on a PR are not supported in Bitbucket Pipeline.
"},{"location":"installation/bitbucket/#bitbucket-server-and-data-center","title":"Bitbucket Server and Data Center","text":"Login into your on-prem instance of Bitbucket with your service account username and password. Navigate to Manage account
, HTTP Access tokens
, Create Token
. Generate the token and add it to .secrets.toml under the bitbucket_server section:
[bitbucket_server]\nbearer_token = \"<your key>\"\n
"},{"location":"installation/bitbucket/#run-it-as-cli","title":"Run it as CLI","text":"Modify configuration.toml
:
git_provider=\"bitbucket_server\"\n
and pass the Pull request URL:
python cli.py --pr_url https://git.on-prem-instance-of-bitbucket.com/projects/PROJECT/repos/REPO/pull-requests/1 review\n
"},{"location":"installation/bitbucket/#run-it-as-service","title":"Run it as service","text":"To run PR-Agent as webhook, build the docker image:
docker build . -t codiumai/pr-agent:bitbucket_server_webhook --target bitbucket_server_webhook -f docker/Dockerfile\ndocker push codiumai/pr-agent:bitbucket_server_webhook # Push to your Docker repository\n
Navigate to Projects
or Repositories
, Settings
, Webhooks
, Create Webhook
. Fill in the name and URL. For Authentication, select 'None'. Select the 'Pull Request Opened' checkbox to receive that event as a webhook.
The URL should end with /webhook
, for example: https://domain.com/webhook
In Gitea create a new user and give it \"Reporter\" role (\"Developer\" if using Pro version of the agent) for the intended group or project.
For the user from step 1, generate a personal_access_token with api access.
Generate a random secret for your app, and save it for later (webhook_secret
). For example, you can use:
WEBHOOK_SECRET=$(python -c \"import secrets; print(secrets.token_hex(10))\")\n
git clone https://github.com/qodo-ai/pr-agent.git\n
Prepare variables and secrets. Skip this step if you plan on setting these as environment variables when running the agent:
Set config.git_provider to \"gitea\".
Set personal_access_token (with the token from step 2) and webhook_secret (with the secret from step 3).
Build a Docker image for the app and optionally push it to a Docker repository. We'll use Dockerhub as an example:
docker build . -t codiumai/pr-agent:gitea_app --target gitea_app -f docker/Dockerfile\ndocker push codiumai/pr-agent:gitea_app # Push to your Docker repository\n
CONFIG__GIT_PROVIDER=gitea\nGITEA__PERSONAL_ACCESS_TOKEN=<personal_access_token>\nGITEA__WEBHOOK_SECRET=<webhook_secret>\nGITEA__URL=https://gitea.com # Or self host\nOPENAI__KEY=<your_openai_api_key>\nGITEA__SKIP_SSL_VERIFICATION=false # or true\nGITEA__SSL_CA_CERT=/path/to/cacert.pem\n
Create a webhook in your Gitea project. Set the URL to http[s]://<PR_AGENT_HOSTNAME>/api/v1/gitea_webhooks
, the secret token to the generated secret from step 3, and enable the triggers push
, comments
and merge request events
.
Test your installation by opening a merge request or commenting on a merge request using one of PR Agent's commands.
You can use our pre-built GitHub Action Docker image to run PR-Agent as a GitHub Action.
1) Add the following file to your repository under .github/workflows/pr_agent.yml
:
on:\n pull_request:\n types: [opened, reopened, ready_for_review]\n issue_comment:\njobs:\n pr_agent_job:\n if: ${{ github.event.sender.type != 'Bot' }}\n runs-on: ubuntu-latest\n permissions:\n issues: write\n pull-requests: write\n contents: write\n name: Run pr agent on every pull request, respond to user comments\n steps:\n - name: PR Agent action step\n id: pragent\n uses: qodo-ai/pr-agent@main\n env:\n OPENAI_KEY: ${{ secrets.OPENAI_KEY }}\n GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n
2) Add the following secret to your repository under Settings > Secrets and variables > Actions > New repository secret > Add secret
:
Name = OPENAI_KEY\nSecret = <your key>\n
The GITHUB_TOKEN secret is automatically created by GitHub.
3) Merge this change to your main branch. When you open your next PR, you should see a comment from github-actions
bot with a review of your PR, and instructions on how to use the rest of the tools.
4) You may configure Qodo Merge by adding environment variables under the env section corresponding to any configurable property in the configuration file. Some examples:
env:\n # ... previous environment values\n OPENAI.ORG: \"<Your organization name under your OpenAI account>\"\n PR_REVIEWER.REQUIRE_TESTS_REVIEW: \"false\" # Disable tests review\n PR_CODE_SUGGESTIONS.NUM_CODE_SUGGESTIONS: 6 # Increase number of code suggestions\n
See detailed usage instructions in the USAGE GUIDE
"},{"location":"installation/github/#using-a-specific-release","title":"Using a specific release","text":"if you want to pin your action to a specific release (v0.23 for example) for stability reasons, use:
...\n steps:\n - name: PR Agent action step\n id: pragent\n uses: docker://codiumai/pr-agent:0.23-github_action\n...\n
For enhanced security, you can also specify the Docker image by its digest:
...\n steps:\n - name: PR Agent action step\n id: pragent\n uses: docker://codiumai/pr-agent@sha256:14165e525678ace7d9b51cda8652c2d74abb4e1d76b57c4a6ccaeba84663cc64\n...\n
"},{"location":"installation/github/#action-for-github-enterprise-server","title":"Action for GitHub enterprise server","text":"To use the action with a GitHub enterprise server, add an environment variable GITHUB.BASE_URL
with the API URL of your GitHub server.
For example, if your GitHub server is at https://github.mycompany.com
, add the following to your workflow file:
env:\n # ... previous environment values\n GITHUB.BASE_URL: \"https://github.mycompany.com/api/v3\"\n
"},{"location":"installation/github/#run-as-a-github-app","title":"Run as a GitHub App","text":"Allowing you to automate the review process on your private or public repositories.
1) Create a GitHub App from the Github Developer Portal.
2) Generate a random secret for your app, and save it for later. For example, you can use:
WEBHOOK_SECRET=$(python -c \"import secrets; print(secrets.token_hex(10))\")\n
3) Acquire the following pieces of information from your app's settings page:
4) Clone this repository:
git clone https://github.com/Codium-ai/pr-agent.git\n
5) Copy the secrets template file and fill in the following:
cp pr_agent/settings/.secrets_template.toml pr_agent/settings/.secrets.toml\n# Edit .secrets.toml file\n
Set deployment_type to 'app' in configuration.toml
The .secrets.toml file is not copied to the Docker image by default, and is only used for local development. If you want to use the .secrets.toml file in your Docker image, you can remove it from the .dockerignore file. In most production environments, you would inject the secrets file as environment variables or as mounted volumes. For example, in order to inject a secrets file as a volume in a Kubernetes environment you can update your pod spec to include the following, assuming you have a secret named pr-agent-settings
with a key named .secrets.toml
:
volumes:\n - name: settings-volume\n secret:\n secretName: pr-agent-settings\n# ...\n containers:\n# ...\n volumeMounts:\n - mountPath: /app/pr_agent/settings_prod\n name: settings-volume\n
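To create the referenced Kubernetes secret from a local secrets file, something along these lines should work (the secret name and key match the example above; adjust the file path to your checkout):
kubectl create secret generic pr-agent-settings --from-file=.secrets.toml=pr_agent/settings/.secrets.toml\n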
Another option is to set the secrets as environment variables in your deployment environment, for example OPENAI.KEY
and GITHUB.USER_TOKEN
.
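For instance, a minimal sketch of such environment variables, using the <TABLE>__<KEY> naming convention described in the local installation section (the values are placeholders):
export OPENAI__KEY=<your_openai_key>\nexport GITHUB__USER_TOKEN=<your_github_token>\n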
6) Build a Docker image for the app and optionally push it to a Docker repository. We'll use Dockerhub as an example:
docker build . -t codiumai/pr-agent:github_app --target github_app -f docker/Dockerfile\ndocker push codiumai/pr-agent:github_app # Push to your Docker repository\n
Host the app using a server, serverless function, or container environment. Alternatively, for development and debugging, you may use tools like smee.io to forward webhooks to your local machine. You can also check the Deploy as a Lambda Function section below.
Go back to your app's settings, and set the following:
Webhook URL: The URL of your app's server or the URL of the smee.io channel.
Webhook secret: The secret you generated earlier.
Install the app by navigating to the \"Install App\" tab and selecting your desired repositories.
Note: When running Qodo Merge from GitHub app, the default configuration file (configuration.toml) will be loaded. However, you can override the default tool parameters by uploading a local configuration file .pr_agent.toml
For more information please check out the USAGE GUIDE
Note that since AWS Lambda env vars cannot have \".\" in the name, you can replace each \".\" in an env variable with \"__\". For example: GITHUB.WEBHOOK_SECRET
--> GITHUB__WEBHOOK_SECRET
Build a docker image that can be used as a lambda function:
docker buildx build --platform=linux/amd64 . -t codiumai/pr-agent:github_lambda --target github_lambda -f docker/Dockerfile.lambda\n
Push image to ECR:
docker tag codiumai/pr-agent:github_lambda <AWS_ACCOUNT>.dkr.ecr.<AWS_REGION>.amazonaws.com/codiumai/pr-agent:github_lambda\ndocker push <AWS_ACCOUNT>.dkr.ecr.<AWS_REGION>.amazonaws.com/codiumai/pr-agent:github_lambda\n
Create a lambda function that uses the uploaded image. Set the lambda timeout to be at least 3m. If needed, set AZURE_DEVOPS_CACHE_DIR to a writable location such as /tmp. (see link)
Set your GitHub App webhook URL to point to the function, for example: https://<LAMBDA_FUNCTION_URL>/api/v1/github_webhooks
"},{"location":"installation/github/#using-aws-secrets-manager","title":"Using AWS Secrets Manager","text":"For production Lambda deployments, use AWS Secrets Manager instead of environment variables:
{\n \"openai.key\": \"sk-proj-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\",\n \"github.webhook_secret\": \"your-webhook-secret-from-step-2\",\n \"github.private_key\": \"-----BEGIN RSA PRIVATE KEY-----\\nMIIEpAIBAAKCAQEA...\\n-----END RSA PRIVATE KEY-----\"\n}\n
Grant secretsmanager:GetSecretValue to your Lambda execution role, and set the following environment variables on the function:
AWS_SECRETS_MANAGER__SECRET_ARN=arn:aws:secretsmanager:us-east-1:123456789012:secret:pr-agent-secrets-AbCdEf\nCONFIG__SECRET_PROVIDER=aws_secrets_manager\n
"},{"location":"installation/github/#aws-codecommit-setup","title":"AWS CodeCommit Setup","text":"Not all features have been added to CodeCommit yet. As of right now, CodeCommit has been implemented to run the Qodo Merge CLI on the command line, using AWS credentials stored in environment variables. (More features will be added in the future.) The following is a set of instructions to have Qodo Merge do a review of your CodeCommit pull request from the command line:
Set the git_provider value to codecommit in the pr_agent/settings/configuration.toml settings file.
Set PYTHONPATH to include your pr-agent project directory: either add PYTHONPATH=\"/PATH/TO/PROJECTS/pr-agent\" to your .env file, or set PYTHONPATH and run the CLI in one command, for example: PYTHONPATH=\"/PATH/TO/PROJECTS/pr-agent\" python pr_agent/cli.py [--ARGS]
Example IAM permissions for that user to allow access to CodeCommit. Optionally, replace \"Resource\": \"*\"
with your list of repos, to limit access to only those repos{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Allow\",\n \"Action\": [\n \"codecommit:BatchDescribe*\",\n \"codecommit:BatchGet*\",\n \"codecommit:Describe*\",\n \"codecommit:EvaluatePullRequestApprovalRules\",\n \"codecommit:Get*\",\n \"codecommit:List*\",\n \"codecommit:PostComment*\",\n \"codecommit:PutCommentReaction\",\n \"codecommit:UpdatePullRequestDescription\",\n \"codecommit:UpdatePullRequestTitle\"\n ],\n \"Resource\": \"*\"\n }\n ]\n}\n
"},{"location":"installation/github/#aws-codecommit-access-key-and-secret","title":"AWS CodeCommit Access Key and Secret","text":"Example setting the Access Key and Secret using environment variables
export AWS_ACCESS_KEY_ID=\"XXXXXXXXXXXXXXXX\"\nexport AWS_SECRET_ACCESS_KEY=\"XXXXXXXXXXXXXXXX\"\nexport AWS_DEFAULT_REGION=\"us-east-1\"\n
"},{"location":"installation/github/#aws-codecommit-cli-example","title":"AWS CodeCommit CLI Example","text":"After you set up AWS CodeCommit using the instructions above, here is an example CLI run that tells pr-agent to review a given pull request. (Replace your specific PYTHONPATH and PR URL in the example)
PYTHONPATH=\"/PATH/TO/PROJECTS/pr-agent\" python pr_agent/cli.py \\\n --pr_url https://us-east-1.console.aws.amazon.com/codesuite/codecommit/repositories/MY_REPO_NAME/pull-requests/321 \\\n review\n
"},{"location":"installation/gitlab/","title":"Gitlab","text":""},{"location":"installation/gitlab/#run-as-a-gitlab-pipeline","title":"Run as a GitLab Pipeline","text":"You can use a pre-built Action Docker image to run PR-Agent as a GitLab pipeline. This is a simple way to get started with Qodo Merge without setting up your own server.
(1) Add the following file to your repository under .gitlab-ci.yml
:
stages:\n - pr_agent\n\npr_agent_job:\n stage: pr_agent\n image:\n name: codiumai/pr-agent:latest\n entrypoint: [\"\"]\n script:\n - cd /app\n - echo \"Running PR Agent action step\"\n - export MR_URL=\"$CI_MERGE_REQUEST_PROJECT_URL/merge_requests/$CI_MERGE_REQUEST_IID\"\n - echo \"MR_URL=$MR_URL\"\n - export gitlab__url=$CI_SERVER_PROTOCOL://$CI_SERVER_FQDN\n - export gitlab__PERSONAL_ACCESS_TOKEN=$GITLAB_PERSONAL_ACCESS_TOKEN\n - export config__git_provider=\"gitlab\"\n - export openai__key=$OPENAI_KEY\n - python -m pr_agent.cli --pr_url=\"$MR_URL\" describe\n - python -m pr_agent.cli --pr_url=\"$MR_URL\" review\n - python -m pr_agent.cli --pr_url=\"$MR_URL\" improve\n rules:\n - if: '$CI_PIPELINE_SOURCE == \"merge_request_event\"'\n
This script will run Qodo Merge on every new merge request. You can modify the rules
section to run Qodo Merge on different events. You can also modify the script
section to run different Qodo Merge commands, or with different parameters by exporting different environment variables.
(2) Add the following masked variables to your GitLab repository (CI/CD -> Variables):
GITLAB_PERSONAL_ACCESS_TOKEN
: Your GitLab personal access token.
OPENAI_KEY
: Your OpenAI key.
Note that if your base branches are not protected, don't set the variables as protected
, since the pipeline will not have access to them.
Note: The $CI_SERVER_FQDN
variable is available starting from GitLab version 16.10. If you're using an earlier version, this variable will not be available. However, you can combine $CI_SERVER_HOST
and $CI_SERVER_PORT
to achieve the same result. Please ensure you're using a compatible version or adjust your configuration.
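For example, on older GitLab versions you could replace the gitlab__url export in the pipeline above with something along these lines (untested sketch, built only from the predefined variables mentioned here):
export gitlab__url=$CI_SERVER_PROTOCOL://$CI_SERVER_HOST:$CI_SERVER_PORT\n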
In GitLab create a new user and give it \"Reporter\" role (\"Developer\" if using Pro version of the agent) for the intended group or project.
For the user from step 1, generate a personal_access_token
with api
access.
Generate a random secret for your app, and save it for later (shared_secret
). For example, you can use:
SHARED_SECRET=$(python -c \"import secrets; print(secrets.token_hex(10))\")\n
git clone https://github.com/qodo-ai/pr-agent.git\n
Prepare variables and secrets. Skip this step if you plan on setting these as environment variables when running the agent:
In the configuration file/variables:
config.git_provider
to \"gitlab\"In the secrets file/variables:
personal_access_token
(with token from step 2) and shared_secret
(with secret from step 3)Build a Docker image for the app and optionally push it to a Docker repository. We'll use Dockerhub as an example:
docker build . -t codiumai/pr-agent:gitlab_webhook --target gitlab_webhook -f docker/Dockerfile\ndocker push codiumai/pr-agent:gitlab_webhook # Push to your Docker repository\n
CONFIG__GIT_PROVIDER=gitlab\nGITLAB__PERSONAL_ACCESS_TOKEN=<personal_access_token>\nGITLAB__SHARED_SECRET=<shared_secret>\nGITLAB__URL=https://gitlab.com\nOPENAI__KEY=<your_openai_api_key>\n
Create a webhook in your GitLab project. Set the URL to http[s]://<PR_AGENT_HOSTNAME>/webhook
, the secret token to the generated secret from step 3, and enable the triggers push
, comments
and merge request events
.
Test your installation by opening a merge request or commenting on a merge request using one of PR Agent's commands.
Note that since AWS Lambda env vars cannot have \".\" in the name, you can replace each \".\" in an env variable with \"__\". For example: GITLAB.PERSONAL_ACCESS_TOKEN
--> GITLAB__PERSONAL_ACCESS_TOKEN
Build a docker image that can be used as a lambda function
shell docker buildx build --platform=linux/amd64 . -t codiumai/pr-agent:gitlab_lambda --target gitlab_lambda -f docker/Dockerfile.lambda
Push image to ECR
docker tag codiumai/pr-agent:gitlab_lambda <AWS_ACCOUNT>.dkr.ecr.<AWS_REGION>.amazonaws.com/codiumai/pr-agent:gitlab_lambda\ndocker push <AWS_ACCOUNT>.dkr.ecr.<AWS_REGION>.amazonaws.com/codiumai/pr-agent:gitlab_lambda\n
Create a lambda function that uses the uploaded image. Set the lambda timeout to be at least 3m.
If needed, set AZURE_DEVOPS_CACHE_DIR to a writable location such as /tmp. (see link)
Set your GitLab webhook URL to point to the function, for example: https://<LAMBDA_FUNCTION_URL>/webhook
For production Lambda deployments, use AWS Secrets Manager instead of environment variables:
Create a secret for the webhook configuration (for example, project-webhook-secret-001) with a value such as:
{\n \"gitlab_token\": \"glpat-xxxxxxxxxxxxxxxxxxxxxxxx\",\n \"token_name\": \"project-webhook-001\"\n}\n
Create a secret for the main configuration (for example, pr-agent-main-config) with a value such as:
{\n \"openai.key\": \"sk-proj-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\"\n}\n
Set the following environment variables on the Lambda function:
CONFIG__SECRET_PROVIDER=aws_secrets_manager\nAWS_SECRETS_MANAGER__SECRET_ARN=arn:aws:secretsmanager:us-east-1:123456789012:secret:pr-agent-main-config-AbCdEf\n
In the GitLab webhook configuration, set the secret token to the webhook secret name (project-webhook-secret-001). Important: When using Secrets Manager, GitLab's webhook secret must be the Secrets Manager secret name.
Grant secretsmanager:GetSecretValue to your Lambda execution role.
To run PR-Agent locally, you first need to acquire two keys: an API key for your LLM provider (for example, an OpenAI key), and a personal access token for your git provider.
A list of the relevant tools can be found in the tools guide.
To invoke a tool (for example review
), you can run PR-Agent directly from the Docker image. Here's how:
For GitHub:
docker run --rm -it -e OPENAI.KEY=<your_openai_key> -e GITHUB.USER_TOKEN=<your_github_token> codiumai/pr-agent:latest --pr_url <pr_url> review\n
If you are using GitHub enterprise server, you need to specify the custom url as a variable. For example, if your GitHub server is at https://github.mycompany.com
, add the following to the command:
-e GITHUB.BASE_URL=https://github.mycompany.com/api/v3\n
For GitLab:
docker run --rm -it -e OPENAI.KEY=<your key> -e CONFIG.GIT_PROVIDER=gitlab -e GITLAB.PERSONAL_ACCESS_TOKEN=<your token> codiumai/pr-agent:latest --pr_url <pr_url> review\n
If you have a dedicated GitLab instance, you need to specify the custom url as a variable:
-e GITLAB.URL=<your gitlab instance url>\n
For BitBucket:
docker run --rm -it -e CONFIG.GIT_PROVIDER=bitbucket -e OPENAI.KEY=$OPENAI_API_KEY -e BITBUCKET.BEARER_TOKEN=$BITBUCKET_BEARER_TOKEN codiumai/pr-agent:latest --pr_url=<pr_url> review\n
For Gitea:
docker run --rm -it -e OPENAI.KEY=<your key> -e CONFIG.GIT_PROVIDER=gitea -e GITEA.PERSONAL_ACCESS_TOKEN=<your token> codiumai/pr-agent:latest --pr_url <pr_url> review\n
If you have a dedicated Gitea instance, you need to specify the custom url as a variable:
-e GITEA.URL=<your gitea instance url>\n
For other git providers, update CONFIG.GIT_PROVIDER
accordingly and check the pr_agent/settings/.secrets_template.toml
file for the expected environment variable names and values.
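For instance, a hypothetical invocation for Azure DevOps might look like this (the variable names mirror the azure_devops settings shown in the Azure section above; verify them against .secrets_template.toml before use):
docker run --rm -it -e CONFIG.GIT_PROVIDER=azure -e OPENAI.KEY=<your key> -e AZURE_DEVOPS.ORG=<your org url> -e AZURE_DEVOPS.PAT=<your pat> codiumai/pr-agent:latest --pr_url <pr_url> review\n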
It is also possible to provide or override the configuration by setting the corresponding environment variables. You can define the corresponding environment variables by following this convention: <TABLE>__<KEY>=<VALUE>
or <TABLE>.<KEY>=<VALUE>
. The <TABLE>
refers to a table/section in a configuration file and <KEY>=<VALUE>
refers to the key/value pair of a setting in the configuration file.
For example, suppose you want to run pr_agent
that connects to a self-hosted GitLab instance similar to an example above. You can define the environment variables in a plain text file named .env
with the following content:
CONFIG__GIT_PROVIDER=\"gitlab\"\nGITLAB__URL=\"<your url>\"\nGITLAB__PERSONAL_ACCESS_TOKEN=\"<your token>\"\nOPENAI__KEY=\"<your key>\"\n
Then, you can run pr_agent
using Docker with the following command:
docker run --rm -it --env-file .env codiumai/pr-agent:latest <tool> <tool parameter>\n
"},{"location":"installation/locally/#i-get-an-error-when-running-the-docker-image-what-should-i-do","title":"I get an error when running the Docker image. What should I do?","text":"If you encounter an error when running the Docker image, it is almost always due to a misconfiguration of api keys or tokens.
Note that litellm, which is used by pr-agent, sometimes returns non-informative error messages such as APIError: OpenAIException - Connection error.
Carefully check the API keys and tokens you provided and make sure they are correct. Adjustments may be needed depending on your LLM provider.
For example, for Azure OpenAI, additional keys are needed. The same goes for other providers; make sure to check the documentation.
"},{"location":"installation/locally/#using-pip-package","title":"Using pip package","text":"Install the package:
pip install pr-agent\n
Then run the relevant tool with the script below. Make sure to fill in the required parameters (user_token
, openai_key
, pr_url
, command
):
from pr_agent import cli\nfrom pr_agent.config_loader import get_settings\n\ndef main():\n # Fill in the following values\n provider = \"github\" # github/gitlab/bitbucket/azure_devops\n user_token = \"...\" # user token\n openai_key = \"...\" # OpenAI key\n pr_url = \"...\" # PR URL, for example 'https://github.com/Codium-ai/pr-agent/pull/809'\n command = \"/review\" # Command to run (e.g. '/review', '/describe', '/ask=\"What is the purpose of this PR?\"', ...)\n\n # Setting the configurations\n get_settings().set(\"CONFIG.git_provider\", provider)\n get_settings().set(\"openai.key\", openai_key)\n get_settings().set(\"github.user_token\", user_token)\n\n # Run the command. Feedback will appear in GitHub PR comments\n cli.run_command(pr_url, command)\n\n\nif __name__ == '__main__':\n main()\n
"},{"location":"installation/locally/#run-from-source","title":"Run from source","text":"git clone https://github.com/Codium-ai/pr-agent.git\n
Navigate to the /pr-agent folder and install the requirements in your favorite virtual environment:
pip install -e .\n
Note: If you get an error related to Rust in the dependency installation then make sure Rust is installed and in your PATH
, instructions: https://rustup.rs
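For example, on Linux or macOS, Rust is typically installed via rustup with the command published on that site:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh\n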
cp pr_agent/settings/.secrets_template.toml pr_agent/settings/.secrets.toml\nchmod 600 pr_agent/settings/.secrets.toml\n# Edit .secrets.toml file\n
python3 -m pr_agent.cli --pr_url <pr_url> review\npython3 -m pr_agent.cli --pr_url <pr_url> ask <your question>\npython3 -m pr_agent.cli --pr_url <pr_url> describe\npython3 -m pr_agent.cli --pr_url <pr_url> improve\npython3 -m pr_agent.cli --pr_url <pr_url> add_docs\npython3 -m pr_agent.cli --pr_url <pr_url> generate_labels\npython3 -m pr_agent.cli --issue_url <issue_url> similar_issue\n...\n
[Optional] Add the pr_agent folder to your PYTHONPATH
export PYTHONPATH=$PYTHONPATH:<PATH to pr_agent folder>\n
"},{"location":"installation/pr_agent/","title":"PR-Agent Installation Guide","text":"PR-Agent can be deployed in various environments and platforms. Choose the installation method that best suits your needs:
"},{"location":"installation/pr_agent/#local-installation","title":"\ud83d\udda5\ufe0f Local Installation","text":"Learn how to run PR-Agent locally using:
View Local Installation Guide \u2192
"},{"location":"installation/pr_agent/#github-integration","title":"\ud83d\udc19 GitHub Integration","text":"Set up PR-Agent with GitHub as:
View GitHub Integration Guide \u2192
"},{"location":"installation/pr_agent/#gitlab-integration","title":"\ud83e\udd8a GitLab Integration","text":"Deploy PR-Agent on GitLab as:
View GitLab Integration Guide \u2192
"},{"location":"installation/pr_agent/#bitbucket-integration","title":"\ud83d\udfe6 BitBucket Integration","text":"Implement PR-Agent in BitBucket as:
View BitBucket Integration Guide \u2192
"},{"location":"installation/pr_agent/#azure-devops-integration","title":"\ud83d\udd37 Azure DevOps Integration","text":"Configure PR-Agent with Azure DevOps as:
View Azure DevOps Integration Guide \u2192
"},{"location":"installation/qodo_merge/","title":"\ud83d\udc8e Qodo Merge","text":"Qodo Merge is a versatile application compatible with GitHub, GitLab, and BitBucket, hosted by QodoAI. See here for more details about the benefits of using Qodo Merge.
"},{"location":"installation/qodo_merge/#usage-and-licensing","title":"Usage and Licensing","text":""},{"location":"installation/qodo_merge/#cloud-users","title":"Cloud Users","text":"Non-paying users will enjoy feedback on up to 75 PRs per git organization per month. Above this limit, PRs will not receive feedback until a new month begins.
For unlimited access, user licenses (seats) are required. Each user requires an individual seat license. After purchasing seats, the team owner can assign them to specific users through the management portal.
With an assigned seat, users can seamlessly deploy the application across any of their code repositories in a git organization, and receive feedback on all their PRs.
"},{"location":"installation/qodo_merge/#enterprise-account","title":"Enterprise Account","text":"For companies who require an Enterprise account, please contact us to initiate a trial period, and to discuss pricing and licensing options.
"},{"location":"installation/qodo_merge/#install-qodo-merge-for-github","title":"Install Qodo Merge for GitHub","text":""},{"location":"installation/qodo_merge/#github-cloud","title":"GitHub Cloud","text":"Qodo Merge for GitHub cloud is available for installation through the GitHub Marketplace.
"},{"location":"installation/qodo_merge/#github-enterprise-server","title":"GitHub Enterprise Server","text":"To use Qodo Merge on your private GitHub Enterprise Server, you will need to contact Qodo for starting an Enterprise trial.
(Note: The marketplace app is not compatible with GitHub Enterprise Server. Installation requires creating a private GitHub App instead.)
"},{"location":"installation/qodo_merge/#github-open-source-projects","title":"GitHub Open Source Projects","text":"For open-source projects, Qodo Merge is available for free usage. To install Qodo Merge for your open-source repositories, use the following marketplace link.
"},{"location":"installation/qodo_merge/#install-qodo-merge-for-bitbucket","title":"Install Qodo Merge for Bitbucket","text":""},{"location":"installation/qodo_merge/#bitbucket-cloud","title":"Bitbucket Cloud","text":"Qodo Merge for Bitbucket Cloud is available for installation through the following link
"},{"location":"installation/qodo_merge/#bitbucket-server","title":"Bitbucket Server","text":"To use Qodo Merge application on your private Bitbucket Server, you will need to contact us for starting an Enterprise trial.
"},{"location":"installation/qodo_merge/#install-qodo-merge-for-gitlab","title":"Install Qodo Merge for GitLab","text":""},{"location":"installation/qodo_merge/#gitlab-cloud","title":"GitLab Cloud","text":"Since GitLab platform does not support apps, installing Qodo Merge for GitLab is a bit more involved, and requires the following steps:
"},{"location":"installation/qodo_merge/#step-1","title":"Step 1","text":"Acquire a personal, project or group level access token. Enable the \u201capi\u201d scope in order to allow Qodo Merge to read pull requests, comment and respond to requests.
Store the token in a safe place; you won\u2019t be able to access it again after it was generated.
"},{"location":"installation/qodo_merge/#step-2","title":"Step 2","text":"Generate a shared secret and link it to the access token. Browse to https://register.gitlab.pr-agent.codium.ai. Fill in your generated GitLab token and your company or personal name in the appropriate fields and click \"Submit\".
You should see \"Success!\" displayed above the Submit button, and a shared secret will be generated. Store it in a safe place; you won\u2019t be able to access it again after it was generated.
"},{"location":"installation/qodo_merge/#step-3","title":"Step 3","text":"Install a webhook for your repository or groups, by clicking \u201cwebhooks\u201d on the settings menu. Click the \u201cAdd new webhook\u201d button.
In the webhook definition form, fill in the following fields: URL: https://pro.gitlab.pr-agent.codium.ai/webhook
Secret token: Your QodoAI key Trigger: Check the \u2018comments\u2019 and \u2018merge request events\u2019 boxes. Enable SSL verification: Check the box.
"},{"location":"installation/qodo_merge/#step-4","title":"Step 4","text":"You\u2019re all set!
Open a new merge request or add a MR comment with one of Qodo Merge\u2019s commands such as /review, /describe or /improve.
"},{"location":"installation/qodo_merge/#gitlab-server","title":"GitLab Server","text":"For limited free usage on private GitLab Server, the same installation steps as for GitLab Cloud apply. For unlimited usage, you will need to contact Qodo for moving to an Enterprise account.
"},{"location":"overview/data_privacy/","title":"Data Privacy","text":""},{"location":"overview/data_privacy/#self-hosted-pr-agent","title":"Self-hosted PR-Agent","text":"When using Qodo Merge\ud83d\udc8e, hosted by Qodo, we will not store any of your data, nor will we use it for training. You will also benefit from an OpenAI account with zero data retention.
For certain clients, Qodo Merge will use Qodo\u2019s proprietary models. If this is the case, you will be notified.
No passive collection of Code and Pull Requests\u2019 data \u2014 Qodo Merge will be active only when you invoke it, and it will then extract and analyze only data relevant to the executed command and queried pull request.
Qodo Merge is a hosted version of the open-source PR-Agent. It is designed for companies and teams that require additional features and capabilities.
Free users receive a quota of 75 monthly PR feedbacks per git organization. Unlimited usage requires a paid subscription. See details.
Qodo Merge provides the following benefits:
Fully managed - We take care of everything for you - hosting, models, regular updates, and more. Installation is as simple as signing up and adding the Qodo Merge app to your GitHub\\GitLab\\BitBucket repo.
Improved privacy - No data will be stored or used to train models. Qodo Merge will employ zero data retention, and will use OpenAI and Claude accounts with zero data retention.
Improved support - Qodo Merge users will receive priority support, and will be able to request new features and capabilities.
Supporting self-hosted git servers - Qodo Merge can be installed on GitHub Enterprise Server, GitLab, and BitBucket. For more information, see the installation guide.
PR Chat - Qodo Merge allows you to engage in private chat about your pull requests on private repositories.
Here are some of the additional features and capabilities that Qodo Merge offers, and are not available in the open-source version of PR-Agent:
Feature Description Model selection Choose the model that best fits your needs, among top models like Claude Sonnet
, o4-mini
Global and wiki configuration Control configurations for many repositories from a single location; Edit configuration of a single repo without committing code Apply suggestions Generate committable code from the relevant suggestions interactively by clicking on a checkbox Suggestions impact Automatically mark suggestions that were implemented by the user (either directly in GitHub, or indirectly in the IDE) to enable tracking of the impact of the suggestions CI feedback Automatically analyze failed CI checks on GitHub and provide actionable feedback in the PR conversation, helping to resolve issues quickly Advanced usage statistics Qodo Merge offers detailed statistics at user, repository, and company levels, including metrics about Qodo Merge usage, and also general statistics and insights Incorporating companies' best practices Use the companies' best practices as reference to increase the effectiveness and the relevance of the code suggestions Interactive triggering Interactively apply different tools via the analyze
command Custom labels Define custom labels for Qodo Merge to assign to the PR"},{"location":"overview/pr_agent_pro/#additional-tools","title":"Additional tools","text":"Here are additional tools that are available only for Qodo Merge users:
Feature Description Custom Prompt Suggestions Generate code suggestions based on custom prompts from the user Analyze PR components Identify the components that changed in the PR, and enable to interactively apply different tools to them Tests Generate tests for code components that changed in the PR PR documentation Generate docstring for code components that changed in the PR Improve Component Generate code suggestions for code components that changed in the PR Similar code search Search for similar code in the repository, organization, or entire GitHub Code implementation Generates implementation code from review suggestions"},{"location":"overview/pr_agent_pro/#supported-languages","title":"Supported languages","text":"Qodo Merge leverages the world's leading code models, such as Claude 4 Sonnet, o4-mini and Gemini-2.5-Pro. As a result, its primary tools such as describe
, review
, and improve
, as well as the PR-chat feature, support virtually all programming languages.
For specialized commands that require static code analysis, Qodo Merge offers support for specific languages. For more details about features that require static code analysis, please refer to the documentation.
"},{"location":"pr_benchmark/","title":"Qodo Merge Pull Request Benchmark","text":""},{"location":"pr_benchmark/#methodology","title":"Methodology","text":"Qodo Merge PR Benchmark evaluates and compares the performance of Large Language Models (LLMs) in analyzing pull request code and providing meaningful code suggestions. Our diverse dataset contains 400 pull requests from over 100 repositories, spanning various programming languages and frameworks to reflect real-world scenarios.
For each pull request, we have pre-generated suggestions from eleven different top-performing models using the Qodo Merge improve
tool. The prompt for response generation can be found here.
To benchmark a model, we generate its suggestions for the same pull requests and ask a high-performing judge model to rank the new model's output against the pre-generated baseline suggestions. We utilize OpenAI's o3
model as the judge, though other models have yielded consistent results. The prompt for this ranking judgment is available here.
We aggregate ranking outcomes across all pull requests, calculating performance metrics for the evaluated model.
We also analyze the qualitative feedback from the judge to identify the model's comparative strengths and weaknesses against the established baselines. This approach provides not just a quantitative score but also a detailed analysis of each model's strengths and weaknesses.
A list of the models used for generating the baseline suggestions, and example results, can be found in the Appendix.
"},{"location":"pr_benchmark/#pr-benchmark-results","title":"PR Benchmark Results","text":"Model Name Version (Date) Thinking budget tokens Score o3 2025-04-16 'medium' (8000) 62.5 o4-mini 2025-04-16 'medium' (8000) 57.7 Gemini-2.5-pro 2025-06-05 4096 56.3 Gemini-2.5-pro 2025-06-05 1024 44.3 Grok-4 2025-07-09 unknown 41.7 Claude-4-sonnet 2025-05-14 4096 39.7 Claude-4-sonnet 2025-05-14 39.0 Codex-mini 2025-06-20 unknown 37.2 Gemini-2.5-flash 2025-04-17 33.5 Claude-4-opus-20250514 2025-05-14 32.8 Claude-3.7-sonnet 2025-02-19 32.4 GPT-4.1 2025-04-14 26.5"},{"location":"pr_benchmark/#results-analysis","title":"Results Analysis","text":""},{"location":"pr_benchmark/#o3","title":"O3","text":"Final score: 62.5
strengths:
weaknesses:
Final score: 57.7
strengths:
weaknesses:
Final score: 56.3
strengths:
weaknesses:
Final score: 39.7
strengths:
weaknesses:
improved_code
identical to original.Final score: 39.0
strengths:
Consistently well-formatted & rule-compliant output: Almost every answer follows the required YAML schema, keeps within the 3-suggestion limit, and returns an empty list when no issues are found, showing good instruction following.
Actionable, code-level patches: When it does spot a defect the model usually supplies clear, minimal diffs or replacement snippets that compile / run, making the fix easy to apply.
Decent hit-rate on \u201cobvious\u201d bugs: The model reliably catches the most blatant syntax errors, null-checks, enum / cast problems, and other first-order issues, so it often ties or slightly beats weaker baseline replies.
weaknesses:
Shallow coverage: It frequently stops after one easy bug and overlooks additional, equally-critical problems that stronger reviewers find, leaving significant risks unaddressed.
False positives & harmful fixes: In a noticeable minority of cases it misdiagnoses code, suggests changes that break compilation or behaviour, or flags non-issues, sometimes making its output worse than doing nothing.
Drifts into non-critical or out-of-scope advice: The model regularly proposes style tweaks, documentation edits, or changes to unchanged lines, violating the \u201ccritical new-code only\u201d requirement.
strengths:
weaknesses:
Final score: 26.5
strengths:
weaknesses:
set
change, false dangling-reference claims) or carry metadata errors (mis-labeling files as \u201cpython\u201d). final score: 37.2
strengths:
weaknesses:
final score: 32.8
strengths:
weaknesses:
final score: 32.8
strengths:
weaknesses:
Some examples of benchmarked PRs and their results:
The following models were used for generating the benchmark baseline:
(1) anthropic_sonnet_3.7_v1:0\n\n(2) claude-4-opus-20250514\n\n(3) claude-4-sonnet-20250514\n\n(4) claude-4-sonnet-20250514_thinking_2048\n\n(5) gemini-2.5-flash-preview-04-17\n\n(6) gemini-2.5-pro-preview-05-06\n\n(7) gemini-2.5-pro-preview-06-05_1024\n\n(8) gemini-2.5-pro-preview-06-05_4096\n\n(9) gpt-4.1\n\n(10) o3\n\n(11) o4-mini_medium\n
"},{"location":"recent_updates/","title":"Recent Updates and Future Roadmap","text":"Page last updated: 2025-07-01
This page summarizes recent enhancements to Qodo Merge (last three months).
It also outlines our development roadmap for the upcoming three months. Please note that the roadmap is subject to change, and features may be adjusted, added, or reprioritized.
Recent UpdatesFuture Roadmapreview
tool: Enhancing the review
tool validate compliance across multiple categories including security, tickets, and custom best practices.Here is a list of Qodo Merge tools, each with a dedicated page that explains how to use it:
Tool Description PR Description (/describe
) Automatically generating PR description - title, type, summary, code walkthrough and labels PR Review (/review
) Adjustable feedback about the PR, possible issues, security concerns, review effort and more Code Suggestions (/improve
) Code suggestions for improving the PR Question Answering (/ask ...
) Answering free-text questions about the PR, or on specific code lines Help (/help
) Provides a list of all the available tools. Also enables to trigger them interactively (\ud83d\udc8e) Help Docs (/help_docs
) Answer a free-text question based on a git documentation folder. Update Changelog (/update_changelog
) Automatically updating the CHANGELOG.md file with the PR changes \ud83d\udc8e Add Documentation (/add_docs
) Generates documentation to methods/functions/classes that changed in the PR \ud83d\udc8e Analyze (/analyze
) Identify code components that changed in the PR, and enables to interactively generate tests, docs, and code suggestions for each component \ud83d\udc8e CI Feedback (/checks ci_job
) Automatically generates feedback and analysis for a failed CI job \ud83d\udc8e Custom Prompt (/custom_prompt
) Automatically generates custom suggestions for improving the PR code, based on specific guidelines defined by the user \ud83d\udc8e Generate Custom Labels (/generate_labels
) Generates custom labels for the PR, based on specific guidelines defined by the user \ud83d\udc8e Generate Tests (/test
) Automatically generates unit tests for a selected component, based on the PR code changes \ud83d\udc8e Implement (/implement
) Generates implementation code from review suggestions \ud83d\udc8e Improve Component (/improve_component component_name
) Generates code suggestions for a specific code component that changed in the PR \ud83d\udc8e Scan Repo Discussions (/scan_repo_discussions
) Generates best_practices.md
file based on previous discussions in the repository \ud83d\udc8e Similar Code (/similar_code
) Retrieves the most similar code components from inside the organization's codebase, or from open-source code. Note that the tools marked with \ud83d\udc8e are available only for Qodo Merge users.
"},{"location":"tools/analyze/","title":"\ud83d\udc8e Analyze","text":""},{"location":"tools/analyze/#overview","title":"Overview","text":"The analyze
tool combines advanced static code analysis with LLM capabilities to provide a comprehensive analysis of the PR code changes.
The tool scans the PR code changes, finds the code components (methods, functions, classes) that changed, and enables to interactively generate tests, docs, code suggestions and similar code search for each component.
It can be invoked manually by commenting on any PR:
/analyze\n
"},{"location":"tools/analyze/#example-usage","title":"Example usage","text":"An example result:
Language that are currently supported:
Python, Java, C++, JavaScript, TypeScript, C#, Go.
"},{"location":"tools/ask/","title":"Ask","text":""},{"location":"tools/ask/#overview","title":"Overview","text":"The ask
tool answers questions about the PR, based on the PR code changes. Make sure to be specific and clear in your questions. It can be invoked manually by commenting on any PR:
/ask \"...\"\n
"},{"location":"tools/ask/#example-usage","title":"Example usage","text":""},{"location":"tools/ask/#ask-lines","title":"Ask lines","text":"You can run /ask
on specific lines of code in the PR from the PR's diff view. The tool will answer questions based on the code changes in the selected lines.
/ask \"...\"
in the comment box and press Add single comment
button.Note that the tool does not have \"memory\" of previous questions, and answers each question independently.
"},{"location":"tools/ask/#ask-on-images","title":"Ask on images","text":"You can also ask questions about images that appear in the comment, where the entire PR code will be used as context. The basic syntax is:
/ask \"...\"\n\n[Image](https://real_link_to_image)\n
where https://real_link_to_image
is the direct link to the image.
Note that GitHub has a built-in mechanism of pasting images in comments. However, pasted image does not provide a direct link. To get a direct link to an image, we recommend using the following scheme:
1. First, post a comment that contains only the image:
2. Quote reply to that comment:
3. In the screen opened, type the question below the image:
4. Post the comment, and receive the answer:
See a full video tutorial here
"},{"location":"tools/ci_feedback/","title":"\ud83d\udc8e CI Feedback","text":""},{"location":"tools/ci_feedback/#overview","title":"Overview","text":"The CI feedback tool (/checks)
automatically triggers when a PR has a failed check. The tool analyzes the failed checks and provides several feedbacks:
\u2192
In addition to being automatically triggered, the tool can also be invoked manually by commenting on a PR:
/checks \"https://github.com/{repo_name}/actions/runs/{run_number}/job/{job_number}\"\n
where {repo_name}
is the name of the repository, {run_number}
is the run number of the failed check, and {job_number}
is the job number of the failed check.
If you wish to disable the tool from running automatically, you can do so by adding the following configuration to the configuration file:
[checks]\nenable_auto_checks_feedback = false\n
"},{"location":"tools/ci_feedback/#configuration-options","title":"Configuration options","text":"enable_auto_checks_feedback
- if set to true, the tool will automatically provide feedback when a check is failed. Default is true.excluded_checks_list
- a list of checks to exclude from the feedback, for example: [\"check1\", \"check2\"]. Default is an empty list.persistent_comment
- if set to true, the tool will overwrite a previous checks comment with the new feedback. Default is true.enable_help_text=true
- if set to true, the tool will provide a help message when a user comments \"/checks\" on a PR. Default is true.final_update_message
- if persistent_comment
is true and updating a previous checks message, the tool will also create a new message: \"Persistent checks updated to latest commit\". Default is true.The generate_labels
tool scans the PR code changes, and given a list of labels and their descriptions, it automatically suggests labels that match the PR code changes.
It can be invoked manually by commenting on any PR:
/generate_labels\n
"},{"location":"tools/custom_labels/#example-usage","title":"Example usage","text":"If we wish to add detect changes to SQL queries in a given PR, we can add the following custom label along with its description:
When running the generate_labels
tool on a PR that includes changes in SQL queries, it will automatically suggest the custom label:
Note that in addition to the dedicated tool generate_labels
, the custom labels will also be used by the describe
tool.
There are 3 ways to enable custom labels:
"},{"location":"tools/custom_labels/#1-cli-local-configuration-file","title":"1. CLI (local configuration file)","text":"When working from CLI, you need to apply the configuration changes to the custom_labels file:
"},{"location":"tools/custom_labels/#2-repo-configuration-file","title":"2. Repo configuration file","text":"To enable custom labels, you need to apply the configuration changes to the local .pr_agent.toml
file in your repository.
This feature is available only in Qodo Merge
https://github.com/{owner}/{repo}/labels
, or click on the \"Labels\" tab in the issues or PRs page.https://gitlab.com/{owner}/{repo}/-/labels
, or click on \"Manage\" -> \"Labels\" on the left menu.b. Add/edit the custom labels. It should be formatted as follows:
pr_agent:
, for example: pr_agent: Description of when AI should suggest this label
. The description should be comprehensive and detailed, indicating when to add the desired label.c. Now the custom labels will be included in the generate_labels
tool.
This feature is supported in GitHub and GitLab.
"},{"location":"tools/custom_labels/#configuration-options","title":"Configuration options","text":"enable_custom_labels
to True: This will turn off the default labels and enable the custom labels provided in the custom_labels.toml file.[config]\nenable_custom_labels=true\n\n[custom_labels.\"Custom Label Name\"]\ndescription = \"Description of when AI should suggest this label\"\n\n[custom_labels.\"Custom Label 2\"]\ndescription = \"Description of when AI should suggest this label 2\"\n
"},{"location":"tools/custom_prompt/","title":"\ud83d\udc8e Custom Prompt","text":""},{"location":"tools/custom_prompt/#overview","title":"Overview","text":"The custom_prompt
tool scans the PR code changes, and automatically generates suggestions for improving the PR code. It shares similarities with the improve
tool, but with one main difference: the custom_prompt
tool will only propose suggestions that follow specific guidelines defined by the prompt in: pr_custom_prompt.prompt
configuration.
The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on a PR.
When commenting, use the following template:
/custom_prompt --pr_custom_prompt.prompt=\"\nThe code suggestions should focus only on the following:\n- ...\n- ...\n\n\"\n
With a configuration file, use the following template:
[pr_custom_prompt]\nprompt=\"\"\"\\\nThe suggestions should focus only on the following:\n-...\n-...\n\n\"\"\"\n
Remember - with this tool, you are the prompter. Be specific, clear, and concise in the instructions. Specify relevant aspects that you want the model to focus on. \\ You might benefit from several trial-and-error iterations, until you get the correct prompt for your use case.
"},{"location":"tools/custom_prompt/#example-usage","title":"Example usage","text":"Here is an example of a possible prompt, defined in the configuration file:
[pr_custom_prompt]\nprompt=\"\"\"\\\nThe code suggestions should focus only on the following:\n- look for edge cases when implementing a new function\n- make sure every variable has a meaningful name\n- make sure the code is efficient\n\"\"\"\n
(The instructions above are just an example. We want to emphasize that the prompt should be specific and clear, and be tailored to the needs of your project)
Results obtained with the prompt above:
"},{"location":"tools/custom_prompt/#configuration-options","title":"Configuration options","text":"prompt
: the prompt for the tool. It should be a multi-line string.
num_code_suggestions_per_chunk
: number of code suggestions provided by the 'custom_prompt' tool, per chunk. Default is 3.
enable_help_text
: if set to true, the tool will display a help text in the comment. Default is true.
The describe
tool scans the PR code changes, and generates a description for the PR - title, type, summary, walkthrough and labels.
The tool can be triggered automatically every time a new PR is opened, or it can be invoked manually by commenting on any PR:
/describe\n
"},{"location":"tools/describe/#example-usage","title":"Example usage","text":""},{"location":"tools/describe/#manual-triggering","title":"Manual triggering","text":"Invoke the tool manually by commenting /describe
on any PR:
After ~30 seconds, the tool will generate a description for the PR:
If you want to edit configurations, add the relevant ones to the command:
/describe --pr_description.some_config1=... --pr_description.some_config2=...\n
"},{"location":"tools/describe/#automatic-triggering","title":"Automatic triggering","text":"To run the describe
automatically when a PR is opened, define in a configuration file:
[github_app]\npr_commands = [\n \"/describe\",\n ...\n]\n\n[pr_description]\npublish_labels = true\n...\n
pr_commands
lists commands that will be executed automatically when a PR is opened.[pr_description]
section contains the configurations for the describe
tool you want to edit (if any).By default, Qodo Merge tries to preserve your original PR description by placing it above the generated content. This requires including your description during the initial PR creation.
\"Qodo removed the original description from the PR. Why\"?
From our experience, there are two possible reasons:
If you edit the description while the automated tool is running, a race condition may occur, potentially causing your original description to be lost. Hence, create a description before launching the PR.
When updating PR descriptions, the /describe
tool considers everything above the \"PR Type\" field as user content and will preserve it. Everything below this marker is treated as previously auto-generated content and will be replaced.
The /describe
tool includes a Mermaid sequence diagram showing component/function interactions.
This option is enabled by default via the pr_description.enable_pr_diagram
param.
publish_labels If set to true, the tool will publish labels to the PR. Default is false. publish_description_as_comment If set to true, the tool will publish the description as a comment to the PR. If false, it will overwrite the original description. Default is false. publish_description_as_comment_persistent If set to true and publish_description_as_comment
is true, the tool will publish the description as a persistent comment to the PR. Default is true. add_original_user_description If set to true, the tool will add the original user description to the generated description. Default is true. generate_ai_title If set to true, the tool will also generate an AI title for the PR. Default is false. extra_instructions Optional extra instructions to the tool. For example: \"focus on the changes in the file X. Ignore change in ...\" enable_pr_type If set to false, it will not show the PR type
as a text value in the description content. Default is true. final_update_message If set to true, it will add a comment message PR Description updated to latest commit...
after finishing calling /describe
. Default is false. enable_semantic_files_types If set to true, \"Changes walkthrough\" section will be generated. Default is true. collapsible_file_list If set to true, the file list in the \"Changes walkthrough\" section will be collapsible. If set to \"adaptive\", the file list will be collapsible only if there are more than 8 files. Default is \"adaptive\". enable_large_pr_handling \ud83d\udc8e If set to true, in case of a large PR the tool will make several calls to the AI and combine them to be able to cover more files. Default is true. enable_help_text If set to true, the tool will display a help text in the comment. Default is false. enable_pr_diagram If set to true, the tool will generate a horizontal Mermaid flowchart summarizing the main pull request changes. This field remains empty if not applicable. Default is true.
This feature enables you to copy the changes walkthrough
table to the \"Files changed\" tab, so you can quickly understand the changes in each file while reviewing the code changes (diff view).
To copy the changes walkthrough
table to the \"Files changed\" tab, you can click on the checkbox that appears PR Description status message below the main PR Description:
If you prefer to have the file summaries appear in the \"Files changed\" tab on every PR, change the pr_description.inline_file_summary
parameter in the configuration file, possible values are:
'table'
: File changes walkthrough table will be displayed on the top of the \"Files changed\" tab, in addition to the \"Conversation\" tab.true
: A collapsible file comment with changes title and a changes summary for each file in the PR.false
(default
): File changes walkthrough will be added only to the \"Conversation\" tab.Note: that this feature is currently available only for GitHub.
"},{"location":"tools/describe/#markers-template","title":"Markers template","text":"To enable markers, set pr_description.use_description_markers=true
. Markers enable to easily integrate user's content and auto-generated content, with a template-like mechanism.
For example, if the PR original description was:
User content...\n\n## PR Type:\npr_agent:type\n\n## PR Description:\npr_agent:summary\n\n## PR Walkthrough:\npr_agent:walkthrough\n\n## PR Diagram:\npr_agent:diagram\n
The marker pr_agent:type
will be replaced with the PR type, pr_agent:summary
will be replaced with the PR summary, pr_agent:walkthrough
will be replaced with the PR walkthrough, and pr_agent:diagram
will be replaced with the sequence diagram (if enabled).
With the markers filled in, the template above becomes a complete, auto-generated PR description.
Configuration params:
use_description_markers
: if set to true, the tool will use the markers template. It replaces every marker of the form pr_agent:marker_name
with the relevant content. Default is false.include_generated_by_header
: if set to true, the tool will add a dedicated header: 'Generated by PR Agent at ...' to any automatic content. Default is true.diagram
: if present as a marker, it will be replaced by the PR sequence diagram (if enabled).The default labels of the describe tool are quite generic, since they are meant to be used in any repo: [Bug fix
, Tests
, Enhancement
, Documentation
, Other
].
You can define custom labels that are relevant for your repo and use cases. Custom labels can be defined in a configuration file, or directly in the repo's labels page.
Make sure to provide a proper title and a detailed, well-phrased description for each label, so the tool will know when to suggest it. Each label description should be a conditional statement that indicates whether to add the label to the PR, according to the PR content.
"},{"location":"tools/describe/#handle-custom-labels-from-a-configuration-file","title":"Handle custom labels from a configuration file","text":"Example for a custom labels configuration setup in a configuration file:
[config]\nenable_custom_labels=true\n\n\n[custom_labels.\"sql_changes\"]\ndescription = \"Use when a PR contains changes to SQL queries\"\n\n[custom_labels.\"test\"]\ndescription = \"use when a PR primarily contains new tests\"\n\n...\n
"},{"location":"tools/describe/#handle-custom-labels-from-the-repos-labels-page","title":"Handle custom labels from the Repo's labels page \ud83d\udc8e","text":"You can also control the custom labels that will be suggested by the describe
tool from the repo's labels page:
https://github.com/{owner}/{repo}/labels
(or click on the \"Labels\" tab in the issues or PRs page)https://gitlab.com/{owner}/{repo}/-/labels
(or click on \"Manage\" -> \"Labels\" on the left menu)Now add/edit the custom labels. they should be formatted as follows:
pr_agent:
, for example: pr_agent: Description of when AI should suggest this label
.Examples for custom labels:
Main topic:performance
- pr_agent:The main topic of this PR is performanceNew endpoint
- pr_agent:A new endpoint was added in this PRSQL query
- pr_agent:A new SQL query was added in this PRDockerfile changes
- pr_agent:The PR contains changes in the DockerfileThe description should be comprehensive and detailed, indicating when to add the desired label. For example:
"},{"location":"tools/describe/#usage-tips","title":"Usage Tips","text":"Automation
pr_commands = [\"/describe\", ...]\n
meaning the describe
tool will run automatically on every PR, with the default configurations.pr_commands = [\"/describe --pr_description.use_description_markers=true\", ...]\n
meaning the tool will replace every marker of the form pr_agent:marker_name
in the PR description with the relevant content, where marker_name
is one of the following:type
: the PR type.summary
: the PR summary.walkthrough
: the PR walkthrough.
The add_docs
tool scans the PR code changes, and automatically suggests documentation for any code components that changed in the PR (functions, classes, etc.).
It can be invoked manually by commenting on any PR:
/add_docs\n
"},{"location":"tools/documentation/#example-usage","title":"Example usage","text":"Invoke the tool manually by commenting /add_docs
on any PR:
The tool will generate documentation for all the components that changed in the PR:
You can state a name of a specific component in the PR to get documentation only for that component:
/add_docs component_name\n
"},{"location":"tools/documentation/#manual-triggering","title":"Manual triggering","text":"Comment /add_docs
on a PR to invoke it manually.
To automatically run the add_docs
tool when a pull request is opened, define in a configuration file:
[github_app]\npr_commands = [\n \"/add_docs\",\n ...\n]\n
The pr_commands
list defines commands that run automatically when a PR is opened. Since this is under the [github_app] section, it only applies when using the Qodo Merge GitHub App in GitHub environments.
docs_style
: The exact style of the documentation (for Python docstrings). You can choose between: google
, numpy
, sphinx
, restructuredtext
, plain
. Default is sphinx
.extra_instructions
: Optional extra instructions to the tool. For example: \"focus on the changes in the file X. Ignore change in ...\".Notes
The add_docs tool can also be invoked interactively via the analyze
tool.The help
tool provides a list of all the available tools and their descriptions. For Qodo Merge users, it also enables triggering each tool by checking the relevant box.
It can be invoked manually by commenting on any PR:
/help\n
"},{"location":"tools/help/#example-usage","title":"Example usage","text":"An example result:
"},{"location":"tools/help_docs/","title":"Help Docs","text":""},{"location":"tools/help_docs/#overview","title":"Overview","text":"The help_docs
tool can answer a free-text question based on a git documentation folder.
It can be invoked manually by commenting on any PR or Issue:
/help_docs \"...\"\n
Or configured to be triggered automatically when a new issue is opened.
The tool assumes by default that the documentation is located in the /docs
folder at the root of the repository. However, this can be customized by setting the docs_path
configuration option:
[pr_help_docs]\nrepo_url = \"\" # The repository to use as context\ndocs_path = \"docs\" # The documentation folder\nrepo_default_branch = \"main\" # The branch to use in case repo_url overwritten\n
See more configuration options in the Configuration options section.
"},{"location":"tools/help_docs/#example-usage","title":"Example usage","text":"Asking a question about another repository
Response:
"},{"location":"tools/help_docs/#run-automatically-when-a-new-issue-is-opened","title":"Run automatically when a new issue is opened","text":"You can configure PR-Agent to run help_docs
automatically on any newly created issue. This can be useful, for example, for providing immediate feedback to users who open issues with questions on open-source projects with extensive documentation.
Here's how:
1) Follow the steps described under Run as a GitHub Action to create a new workflow, such as .github/workflows/help_docs.yml
:
2) Edit your YAML file as follows:
name: Run pr agent on every opened issue, respond to user comments on an issue\n\n#When the action is triggered\non:\n  issues:\n    types: [opened] #New issue\n\n# Read env. variables\nenv:\n  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n  GITHUB_API_URL: ${{ github.api_url }}\n  GIT_REPO_URL: ${{ github.event.repository.clone_url }}\n  ISSUE_URL: ${{ github.event.issue.html_url || github.event.comment.html_url }}\n  ISSUE_BODY: ${{ github.event.issue.body || github.event.comment.body }}\n  OPENAI_KEY: ${{ secrets.OPENAI_KEY }}\n\n# The actual set of actions\njobs:\n  issue_agent:\n    runs-on: ubuntu-latest\n    if: ${{ github.event.sender.type != 'Bot' }} #Do not respond to bots\n\n    # Set required permissions\n    permissions:\n      contents: read # For reading repository contents\n      issues: write # For commenting on issues\n\n    steps:\n      - name: Run PR Agent on Issues\n        if: ${{ env.ISSUE_URL != '' }}\n        uses: docker://codiumai/pr-agent:latest\n        with:\n          entrypoint: /bin/bash #Replace invoking cli.py directly with a shell\n          args: |\n            -c \"cd /app && \\\n            echo 'Running Issue Agent action step on ISSUE_URL=$ISSUE_URL' && \\\n            export config__git_provider='github' && \\\n            export github__user_token=$GITHUB_TOKEN && \\\n            export github__base_url=$GITHUB_API_URL && \\\n            export openai__key=$OPENAI_KEY && \\\n            python -m pr_agent.cli --issue_url=$ISSUE_URL --pr_help_docs.repo_url=\"...\" --pr_help_docs.docs_path=\"...\" --pr_help_docs.openai_key=$OPENAI_KEY \\\n            help_docs \\\"$ISSUE_BODY\\\"\"\n
3) After completing the remaining steps (such as adding secrets and relevant configurations like repo_url
and docs_path
), merge this change to your main branch. When a new issue is opened, you should see a comment from github-actions
bot with an auto response, assuming the question is related to the documentation of the repository.
Under the section pr_help_docs
, the configuration file contains options to customize the 'help docs' tool:
repo_url
: If not set, the tool uses the repository the issue or PR came from; otherwise, the given repository is used as context.repo_default_branch
: The branch to use when repo_url is overridden; otherwise it has no effect.docs_path
: Relative path from the root of the repository (either the one this PR was issued for, or the repo_url above).exclude_root_readme
: Whether or not to exclude the root README file for querying the model.supported_doc_exts
: Which file extensions should be included for the purpose of querying the model.Platforms supported: GitHub, GitLab, Bitbucket
The implement
tool converts human code review discussions and feedback into ready-to-commit code changes. It leverages LLM technology to transform PR comments and review suggestions into concrete implementation code, helping developers quickly turn feedback into working solutions.
Reviewers can request code changes by:
/implement <code-change-description>\n
PR authors can implement suggested changes by replying to a review comment using either:
/implement <code-change-description>\n
/implement\n
You can reference and implement changes from any comment by:
/implement <link-to-review-comment>\n
Note that the implementation will occur within the review discussion thread.
"},{"location":"tools/implement/#configuration-options","title":"Configuration options","text":"/implement
to implement a code change within and based on the review discussion./implement <code-change-description>
inside a review discussion to implement specific instructions./implement <link-to-review-comment>
to indirectly call the tool from any comment.The improve
tool scans the PR code changes, and automatically generates meaningful suggestions for improving the PR code. The tool can be triggered automatically every time a new PR is opened, or it can be invoked manually by commenting on any PR:
/improve\n
"},{"location":"tools/improve/#how-it-looks","title":"How it looks","text":"Suggestions OverviewSelecting a specific suggestion The following features are available only for Qodo Merge \ud83d\udc8e users:
Apply / Chat
checkbox, which interactively converts a suggestion into a committable code commentMore
checkbox to generate additional suggestionsInvoke the tool manually by commenting /improve
on any PR. By default, the code suggestions are presented as a single comment:
To edit configurations related to the improve
tool, use the following template:
/improve --pr_code_suggestions.some_config1=... --pr_code_suggestions.some_config2=...\n
For example, you can choose to present all the suggestions as committable code comments, by running the following command:
/improve --pr_code_suggestions.commitable_code_suggestions=true\n
As can be seen, a single table comment has a significantly smaller PR footprint. We recommend this mode for most cases. Also note that collapsible comments are not supported in Bitbucket. Hence, in Bitbucket the suggestions can only be presented as code comments.
"},{"location":"tools/improve/#manual-more-suggestions","title":"Manual more suggestions","text":"To generate more suggestions (distinct from the ones already generated), for git-providers that don't support interactive checkbox option, you can manually run:
/improve --more_suggestions=true\n
"},{"location":"tools/improve/#automatic-triggering","title":"Automatic triggering","text":"To run the improve
automatically when a PR is opened, define in a configuration file:
[github_app]\npr_commands = [\n \"/improve\",\n ...\n]\n\n[pr_code_suggestions]\nnum_code_suggestions_per_chunk = ...\n...\n
pr_commands
lists commands that will be executed automatically when a PR is opened.[pr_code_suggestions]
section contains the configurations for the improve
tool you want to edit (if any)\ud83d\udc8e feature
Qodo Merge tracks two types of suggestion implementations:
Apply
checkbox.In a post-processing step, Qodo Merge counts the number of suggestions that were implemented, and provides general statistics and insights about the suggestions' impact on the PR process.
"},{"location":"tools/improve/#suggestion-tracking","title":"Suggestion tracking","text":"\ud83d\udc8e feature. Platforms supported: GitHub, GitLab
Qodo Merge employs a novel detection system to automatically identify AI code suggestions that PR authors have accepted and implemented.
Accepted suggestions are also automatically documented in a dedicated wiki page called .pr_agent_accepted_suggestions
, allowing users to track historical changes, assess the tool's effectiveness, and learn from previously implemented recommendations in the repository. An example result:
This dedicated wiki page will also serve as a foundation for future AI model improvements, allowing it to learn from historically implemented suggestions and generate more targeted, contextually relevant recommendations.
This feature is controlled by a boolean configuration parameter: pr_code_suggestions.wiki_page_accepted_suggestions
(default is true).
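For example, to opt out of this tracking, the parameter can be switched off in a configuration file (a minimal sketch):
[pr_code_suggestions]\nwiki_page_accepted_suggestions = false\n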
Wiki must be enabled
While the aggregation process is automatic, GitHub repositories require a one-time manual wiki setup.
To initialize the wiki: navigate to Wiki
, select Create the first page
, then click Save page
.
Once a wiki repo is created, the tool will automatically use this wiki for tracking suggestions.
Why a wiki page?
Your code belongs to you, and we respect your privacy. Hence, we won't store any code suggestions in an external database.
Instead, we leverage a dedicated private page, within your repository wiki, to track suggestions. This approach offers convenient secure suggestion tracking while avoiding pull requests or any noise to the main repository.
"},{"location":"tools/improve/#extra-instructions-and-best-practices","title":"Extra instructions
and best practices
","text":"The improve
tool can be further customized by providing additional instructions and best practices to the AI model.
You can use the extra_instructions
configuration option to give the AI model additional instructions for the improve
tool. Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter.
Examples for possible instructions:
[pr_code_suggestions]\nextra_instructions=\"\"\"\\\n(1) Answer in Japanese\n(2) Don't suggest to add try-except block\n(3) Ignore changes in toml files\n...\n\"\"\"\n
Use triple quotes to write multi-line instructions. Use bullet points or numbers to make the instructions more readable.
"},{"location":"tools/improve/#best-practices","title":"Best practices","text":"\ud83d\udc8e feature. Platforms supported: GitHub, GitLab, Bitbucket
Qodo Merge supports both simple and hierarchical best practices configurations to provide guidance to the AI model for generating relevant code suggestions.
Writing effective best practices filesThe following guidelines apply to all best practices files:
Pattern 1: Add proper error handling with try-except blocks around external function calls.
Example code before:
# Some code that might raise an exception\nreturn process_pr_data(data)\n
Example code after:
try:\n # Some code that might raise an exception\n return process_pr_data(data)\nexcept Exception as e:\n logger.exception(\"Failed to process request\", extra={\"error\": e})\n
Pattern 2: Add defensive null/empty checks before accessing object properties or performing operations on potentially null variables to prevent runtime errors.
Example code before:
def get_pr_code(pr_data):\n if \"changed_code\" in pr_data:\n return pr_data.get(\"changed_code\", \"\")\n return \"\"\n
Example code after:
def get_pr_code(pr_data):\n if pr_data is None:\n return \"\"\n if \"changed_code\" in pr_data:\n return pr_data.get(\"changed_code\", \"\")\n return \"\"\n
"},{"location":"tools/improve/#local-best-practices","title":"Local best practices","text":"For basic usage, create a best_practices.md
file in your repository's root directory containing a list of best practices, coding standards, and guidelines specific to your repository.
The AI model will use this best_practices.md
file as a reference, and in case the PR code violates any of the guidelines, it will create additional suggestions, with a dedicated label: Organization best practice
.
For organizations managing multiple repositories with different requirements, Qodo Merge supports a hierarchical best practices system using a dedicated global configuration repository.
Supported scenarios:
1. Create a new repository named pr-agent-settings
in your organization/workspace.
2. Build the folder hierarchy in your pr-agent-settings
repository, for example:
pr-agent-settings/\n\u251c\u2500\u2500 metadata.yaml # Maps repos/folders to best practice paths\n\u2514\u2500\u2500 codebase_standards/ # Root for all best practice definitions\n \u251c\u2500\u2500 global/ # Global rules, inherited widely\n \u2502 \u2514\u2500\u2500 best_practices.md\n \u251c\u2500\u2500 groups/ # For groups of repositories\n \u2502 \u251c\u2500\u2500 frontend_repos/\n \u2502 \u2502 \u2514\u2500\u2500 best_practices.md\n \u2502 \u251c\u2500\u2500 backend_repos/\n \u2502 \u2502 \u2514\u2500\u2500 best_practices.md\n \u2502 \u2514\u2500\u2500 ...\n \u251c\u2500\u2500 qodo-merge/ # For standalone repositories\n \u2502 \u2514\u2500\u2500 best_practices.md\n \u251c\u2500\u2500 qodo-monorepo/ # For monorepo-specific rules \n \u2502 \u251c\u2500\u2500 best_practices.md # Root level monorepo rules\n \u2502 \u251c\u2500\u2500 qodo-github/ # Subproject best practices\n \u2502 \u2502 \u2514\u2500\u2500 best_practices.md\n \u2502 \u2514\u2500\u2500 qodo-gitlab/ # Another subproject\n \u2502 \u2514\u2500\u2500 best_practices.md\n \u2514\u2500\u2500 ... # More repositories\n
3. Define the metadata file metadata.yaml
that maps your repositories to their relevant best practices paths, for example:
# Standalone repos\nqodo-merge:\n best_practices_paths:\n - \"qodo-merge\"\n\n# Group-associated repos\nrepo_b:\n best_practices_paths:\n - \"groups/backend_repos\"\n\n# Multi-group repos\nrepo_c:\n best_practices_paths:\n - \"groups/frontend_repos\"\n - \"groups/backend_repos\"\n\n# Monorepo with subprojects\nqodo-monorepo:\n best_practices_paths:\n - \"qodo-monorepo\"\n monorepo_subprojects:\n qodo-github:\n best_practices_paths:\n - \"qodo-monorepo/qodo-github\"\n qodo-gitlab:\n best_practices_paths:\n - \"qodo-monorepo/qodo-gitlab\"\n
4. Set the following configuration in your global configuration file:
[best_practices]\nenable_global_best_practices = true\n
Best practices priority and fallback behavior When global best practices are enabled, Qodo Merge follows this priority order:
1. Primary: Global hierarchical best practices from pr-agent-settings
repository:
1.1 If the repository is mapped in `metadata.yaml`, it uses the specified paths\n\n1.2 For monorepos, it automatically collects best practices matching PR file paths\n\n1.3 If no mapping exists, it falls back to the global best practices\n
2. Fallback: Local repository best_practices.md
file:
2.1 Used when global best practices are not found or configured\n\n2.2 Acts as a safety net for repositories not yet configured in the global system\n\n2.3 Local best practices are completely ignored when global best practices are successfully loaded\n
Edge cases and behavior: If paths specified in metadata.yaml
don't exist in the file system, those paths are skipped.Best practice suggestions are labeled as Organization best practice
by default. To customize this label, modify it in your configuration file:
[best_practices]\norganization_name = \"...\"\n
And the label will be: {organization_name} best practice
.
\ud83d\udc8e feature. Platforms supported: GitHub.
Auto best practices
is a novel Qodo Merge capability that:
This creates an automatic feedback loop where the system continuously learns from your team's choices to provide increasingly relevant suggestions. The system maintains two analysis phases:
Note that when custom best practices exist, Qodo Merge will still generate an 'auto best practices' wiki file, though it won't use it in the improve
tool. Learn more about utilizing 'auto best practices' in our detailed guide.
[auto_best_practices]\n# Disable all auto best practices usage or generation\nenable_auto_best_practices = true \n\n# Disable usage of auto best practices file in the 'improve' tool\nutilize_auto_best_practices = true \n\n# Extra instructions to the auto best practices generation prompt\nextra_instructions = \"\" \n\n# Max number of patterns to be detected\nmax_patterns = 5 \n
"},{"location":"tools/improve/#multiple-best-practices-sources","title":"Multiple best practices sources","text":"The improve
tool will combine best practices from all available sources - global configuration, local configuration, and auto-generated files - to provide you with comprehensive recommendations.
\ud83d\udc8e feature
The extra instructions
configuration is more related to the improve
tool prompt. It can be used, for example, to avoid specific suggestions (\"Don't suggest to add try-except block\", \"Ignore changes in toml files\", ...) or to emphasize specific aspects or formats (\"Answer in Japanese\", \"Give only short suggestions\", ...)
In contrast, the best_practices.md
file is a general guideline for the way code should be written in the repo.
Using a combination of both can help the AI model to provide relevant and tailored suggestions.
"},{"location":"tools/improve/#usage-tips","title":"Usage Tips","text":""},{"location":"tools/improve/#implementing-the-proposed-code-suggestions","title":"Implementing the proposed code suggestions","text":"Each generated suggestion consists of three key elements:
We advise users to apply critical analysis and judgment when implementing the proposed suggestions. In addition to mistakes (which may happen, but are rare), sometimes the presented code modification may serve more as an illustrative example than a directly applicable solution. In such cases, we recommend prioritizing the suggestion's detailed description, using the diff snippet primarily as a supporting reference.
"},{"location":"tools/improve/#dual-publishing-mode","title":"Dual publishing mode","text":"Our recommended approach for presenting code suggestions is through a table (--pr_code_suggestions.commitable_code_suggestions=false
). This method significantly reduces the PR footprint and allows for quick and easy digestion of multiple suggestions.
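The same preference can also be fixed in a configuration file rather than passed on each command; a minimal sketch using the parameter referenced above:
[pr_code_suggestions]\ncommitable_code_suggestions = false\n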
We also offer a complementary dual publishing mode. When enabled, suggestions exceeding a certain score threshold are not only displayed in the table, but also presented as committable PR comments. This mode helps highlight suggestions deemed more critical.
To activate dual publishing mode, use the following setting:
[pr_code_suggestions]\ndual_publishing_score_threshold = x\n
Where x represents the minimum score threshold (>=) for suggestions to be presented as committable PR comments in addition to the table. Default is -1 (disabled).
"},{"location":"tools/improve/#controlling-suggestions-depth","title":"Controlling suggestions depth","text":"\ud83d\udc8e feature
You can control the depth and comprehensiveness of the code suggestions by using the pr_code_suggestions.suggestions_depth
parameter.
Available options:
selective
- Shows only suggestions above a score threshold of 6regular
- Default mode with balanced suggestion coverage exhaustive
- Provides maximum suggestion comprehensiveness(Alternatively, use numeric values: 1, 2, or 3 respectively)
We recommend starting with regular
mode, then exploring exhaustive
mode, which can provide more comprehensive suggestions and enhanced bug detection.
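As a sketch, the depth can be set in a configuration file; the value shown is only an example:
[pr_code_suggestions]\nsuggestions_depth = \"exhaustive\" # or \"selective\" / \"regular\", or the numeric values 1-3\n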
\ud83d\udc8e feature. Platforms supported: GitHub, GitLab
If you set in a configuration file:
[pr_code_suggestions]\ndemand_code_suggestions_self_review = true\n
The improve
tool will add a checkbox below the suggestions, prompting the user to acknowledge that they have reviewed the suggestions. You can set the content of the checkbox text via:
[pr_code_suggestions]\ncode_suggestions_self_review_text = \"... (your text here) ...\"\n
Tip - Reducing visual footprint after self-review \ud83d\udc8e
The configuration parameter pr_code_suggestions.fold_suggestions_on_self_review
(default is True) can be used to automatically fold the suggestions after the user clicks the self-review checkbox.
This reduces the visual footprint of the suggestions, and also indicates to the PR reviewer that the suggestions have been reviewed by the PR author, and don't require further attention.
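A minimal configuration sketch that combines the self-review checkbox with automatic folding (values illustrative):
[pr_code_suggestions]\ndemand_code_suggestions_self_review = true\nfold_suggestions_on_self_review = true\n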
Tip - Demanding self-review from the PR author \ud83d\udc8e
By setting:
[pr_code_suggestions]\napprove_pr_on_self_review = true\n
the tool can automatically add an approval when the PR author clicks the self-review checkbox. If you set the number of required reviewers for a PR to 1 and enable this configuration, the PR author can effectively approve the PR by actively clicking the self-review checkbox.
To prevent unauthorized approvals, this configuration defaults to false and cannot be altered through online comments; enabling it requires a direct update to the configuration file and a commit to the repository. This ensures that using the feature demands a deliberate, documented decision by the repository owner.
Qodo Merge uses a dynamic strategy to generate code suggestions based on the size of the pull request (PR). Here's how it works:
"},{"location":"tools/improve/#1-chunking-large-prs","title":"1. Chunking large PRs","text":"pr_code_suggestions.max_context_tokens
tokens (default: 24,000).pr_code_suggestions.num_code_suggestions_per_chunk
suggestions (default: 4).This approach has two main benefits:
Note: Chunking is primarily relevant for large PRs. For most PRs (up to 600 lines of code), Qodo Merge will be able to process the entire code in a single call.
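If needed, the chunking behavior can be tuned in a configuration file; a sketch using the parameters described in this section and in the table below (values are examples, not recommendations):
[pr_code_suggestions]\nauto_extended_mode = true # enable chunking for large PRs\nmax_context_tokens = 24000 # tokens per chunk\nnum_code_suggestions_per_chunk = 4 # suggestions generated per chunk\nmax_number_of_calls = 3 # maximum number of chunks\n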
"},{"location":"tools/improve/#configuration-options","title":"Configuration options","text":"General options extra_instructions Optional extra instructions to the tool. For example: \"focus on the changes in the file X. Ignore change in ...\". commitable_code_suggestions If set to true, the tool will display the suggestions as committable code comments. Default is false. enable_chat_in_code_suggestions If set to true, QM bot will interact with comments made on code changes it has proposed. Default is true. suggestions_depth \ud83d\udc8e Controls the depth of the suggestions. Can be set to 'selective', 'regular', or 'exhaustive'. Default is 'regular'. dual_publishing_score_threshold Minimum score threshold for suggestions to be presented as committable PR comments in addition to the table. Default is -1 (disabled). focus_only_on_problems If set to true, suggestions will focus primarily on identifying and fixing code problems, and less on style considerations like best practices, maintainability, or readability. Default is true. persistent_comment If set to true, the improve comment will be persistent, meaning that every new improve request will edit the previous one. Default is true. suggestions_score_threshold Any suggestion with importance score less than this threshold will be removed. Default is 0. Highly recommend not to set this value above 7-8, since above it may clip relevant suggestions that can be useful. apply_suggestions_checkbox Enable the checkbox to create a committable suggestion. Default is true. enable_more_suggestions_checkbox Enable the checkbox to generate more suggestions. Default is true. enable_help_text If set to true, the tool will display a help text in the comment. Default is true. enable_chat_text If set to true, the tool will display a reference to the PR chat in the comment. Default is true. publish_output_no_suggestions If set to true, the tool will publish a comment even if no suggestions were found. Default is true. wiki_page_accepted_suggestions If set to true, the tool will automatically track accepted suggestions in a dedicated wiki page called .pr_agent_accepted_suggestions
. Default is true. allow_thumbs_up_down If set to true, all code suggestions will have thumbs up and thumbs down buttons, to encourage users to provide feedback on the suggestions. Default is false. Note that this feature is for statistics tracking. It will not affect future feedback from the AI model. Params for number of suggestions and AI calls
auto_extended_mode Enable chunking the PR code and running the tool on each chunk. Default is true. num_code_suggestions_per_chunk Number of code suggestions provided by the 'improve' tool, per chunk. Default is 3. max_number_of_calls Maximum number of chunks. Default is 3.
"},{"location":"tools/improve/#understanding-ai-code-suggestions","title":"Understanding AI Code Suggestions","text":"extra_instructions
and best practices
fields.The improve_component
tool generates code suggestions for a specific code component that changed in the PR. It can be invoked manually by commenting on any PR:
/improve_component component_name\n
To get a list of the components that changed in the PR and choose the relevant component interactively, use the analyze
tool.
Invoke the tool manually by commenting /improve_component
on any PR:
The tool will generate code suggestions for the selected component (if no component is stated, it will generate code suggestions for the largest component):
Notes
The improve_component tool can also be invoked interactively via the analyze
tool.num_code_suggestions
: number of code suggestions to provide. Default is 4.extra_instructions
: Optional extra instructions to the tool. For example: \"focus on ...\".file
: in case there are several components with the same name, you can specify the relevant file.class_name
: in case there are several methods with the same name in the same file, you can specify the relevant class name.The review
tool scans the PR code changes and generates a list of feedback items about the PR, aiming to aid the reviewing process. The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on any PR:
/review\n
Note that the main purpose of the review
tool is to provide the PR reviewer with useful feedback and insights. The PR author, in contrast, may prefer to save time and focus on the output of the improve tool, which provides actionable code suggestions.
(Read more about the different personas in the PR process and how Qodo Merge aims to assist them in our blog)
"},{"location":"tools/review/#example-usage","title":"Example usage","text":""},{"location":"tools/review/#manual-triggering","title":"Manual triggering","text":"Invoke the tool manually by commenting /review
on any PR:
After ~30 seconds, the tool will generate a review for the PR:
If you want to edit configurations, add the relevant ones to the command:
/review --pr_reviewer.some_config1=... --pr_reviewer.some_config2=...\n
"},{"location":"tools/review/#automatic-triggering","title":"Automatic triggering","text":"To run the review
automatically when a PR is opened, define in a configuration file:
[github_app]\npr_commands = [\n \"/review\",\n ...\n]\n\n[pr_reviewer]\nextra_instructions = \"...\"\n...\n
pr_commands
lists commands that will be executed automatically when a PR is opened.[pr_reviewer]
section contains the configurations for the review
tool you want to edit (if any).persistent_comment If set to true, the review comment will be persistent, meaning that every new review request will edit the previous one. Default is true. final_update_message When set to true, updating a persistent review comment during online commenting will automatically add a short comment with a link to the updated review in the pull request .Default is true. extra_instructions Optional extra instructions to the tool. For example: \"focus on the changes in the file X. Ignore change in ...\". enable_help_text If set to true, the tool will display a help text in the comment. Default is true. num_max_findings Number of maximum returned findings. Default is 3.
Enable\\disable specific sub-sectionsrequire_score_review If set to true, the tool will add a section that scores the PR. Default is false. require_tests_review If set to true, the tool will add a section that checks if the PR contains tests. Default is true. require_estimate_effort_to_review If set to true, the tool will add a section that estimates the effort needed to review the PR. Default is true. require_can_be_split_review If set to true, the tool will add a section that checks if the PR contains several themes, and can be split into smaller PRs. Default is false. require_security_review If set to true, the tool will add a section that checks if the PR contains a possible security or vulnerability issue. Default is true. require_todo_scan If set to true, the tool will add a section that lists TODO comments found in the PR code changes. Default is false. require_ticket_analysis_review If set to true, and the PR contains a GitHub or Jira ticket link, the tool will add a section that checks if the PR in fact fulfilled the ticket requirements. Default is true.
Adding PR labelsYou can enable\\disable the review
tool to add specific labels to the PR:
enable_review_labels_security If set to true, the tool will publish a 'possible security issue' label if it detects a security issue. Default is true. enable_review_labels_effort If set to true, the tool will publish a 'Review effort x/5' label (1\u20135 scale). Default is true.
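A minimal sketch of these label flags in a configuration file (values illustrative):
[pr_reviewer]\nenable_review_labels_security = true\nenable_review_labels_effort = true\n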
"},{"location":"tools/review/#usage-tips","title":"Usage Tips","text":""},{"location":"tools/review/#general-guidelines","title":"General guidelines","text":"The review
tool provides a collection of configurable feedbacks about a PR. It is recommended to review the Configuration options section, and choose the relevant options for your use case.
Some of the features that are disabled by default are quite useful, and should be considered for enabling. For example: require_score_review
, and more.
On the other hand, if you find one of the enabled features to be irrelevant for your use case, disable it. No default configuration can fit all use cases.
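For example, a team that finds the optional sections useful could enable them in a configuration file; a minimal sketch using options from the table above:
[pr_reviewer]\nrequire_score_review = true\nrequire_can_be_split_review = true\n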
"},{"location":"tools/review/#automation","title":"Automation","text":"When you first install Qodo Merge app, the default mode for the review
tool is:
pr_commands = [\"/review\", ...]\n
Meaning the review
tool will run automatically on every PR, without any additional configurations. Edit this field to enable/disable the tool, or to change the configurations used."},{"location":"tools/review/#auto-generated-pr-labels-by-the-review-tool","title":"Auto-generated PR labels by the Review Tool","text":"The review
tool can automatically add labels to your Pull Requests:
possible security issue
: This label is applied if the tool detects a potential security vulnerability in the PR's code. This feedback is controlled by the 'enable_review_labels_security' flag (default is true).review effort [x/5]
: This label estimates the effort required to review the PR on a relative scale of 1 to 5, where 'x' represents the assessed effort. This feedback is controlled by the 'enable_review_labels_effort' flag (default is true).ticket compliance
: Adds a label indicating code compliance level (\"Fully compliant\" | \"PR Code Verified\" | \"Partially compliant\" | \"Not compliant\") to any GitHub/Jira/Linear ticket linked in the PR. Controlled by the 'require_ticket_labels' flag (default: false). If 'require_no_ticket_labels' is also enabled, PRs without ticket links will receive a \"No ticket found\" label.You can configure a CI/CD Action to prevent merging PRs with specific labels. For example, implement a dedicated GitHub Action.
This approach helps ensure PRs with potential security issues or ticket compliance problems will not be merged without further review.
Since AI may make mistakes or lack complete context, use this feature judiciously. For flexibility, users with appropriate permissions can remove generated labels when necessary. When a label is removed, this action will be automatically documented in the PR discussion, clearly indicating it was a deliberate override by an authorized user to allow the merge.
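As an illustration only, the ticket-related label flags mentioned above might be set together in a configuration file; the [pr_reviewer] section name is assumed here rather than stated on this page:
[pr_reviewer] # assumed section for these flags\nrequire_ticket_labels = true\nrequire_no_ticket_labels = true\n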
"},{"location":"tools/review/#extra-instructions","title":"Extra instructions","text":"Extra instructions are important. The review
tool can be configured with extra instructions, which can be used to guide the model toward feedback tailored to the needs of your project.
Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Specify the relevant sub-tool, and the relevant aspects of the PR that you want to emphasize.
Examples of extra instructions:
[pr_reviewer]\nextra_instructions=\"\"\"\\\nIn the code feedback section, emphasize the following:\n- Does the code logic cover relevant edge cases?\n- Is the code logic clear and easy to understand?\n- Is the code logic efficient?\n...\n\"\"\"\n
Use triple quotes to write multi-line instructions. Use bullet points to make the instructions more readable."},{"location":"tools/scan_repo_discussions/","title":"\ud83d\udc8e Scan Repo Discussions","text":"Platforms supported: GitHub
The scan_repo_discussions
tool analyzes code discussions (meaning review comments over code lines) from merged pull requests over the past 12 months. It processes these discussions alongside other PR metadata to identify recurring patterns related to best practices in team feedback and code reviews, generating a comprehensive best_practices.md
document that distills key insights and recommendations.
This file captures repository-specific patterns derived from your team's actual workflow and discussions, rather than more generic best practices. It will be utilized by Qodo Merge to provide tailored suggestions for improving code quality in future pull requests.
Active repositories are needed
The tool is designed to work with real-life repositories, as it relies on actual discussions to generate meaningful insights. At least 50 merged PRs are required to generate the best_practices.md
file.
Additional customization
Teams are encouraged to further customize and refine these insights to better align with their specific development priorities and contexts. This can be done by editing the best_practices.md
file directly when the PR is created, or iteratively over time to enhance the 'best practices' suggestions provided by Qodo Merge.
The tool can be invoked manually by commenting on any PR:
/scan_repo_discussions\n
As a response, the bot will create a new PR that contains an auto-generated best_practices.md
file. Note that the scan can take several minutes to complete, since up to 250 PRs are scanned.
The PR created by the bot:
The best_practices.md
file in the PR:
/scan_repo_discussions --scan_repo_discussions.force_scan=true
to force generating a PR with a new best_practices.md
file, even if it already exists (by default, the bot will not generate a new file if it already exists)./scan_repo_discussions --scan_repo_discussions.days_back=X
to specify the number of days back to scan for discussions. The default is 365 days./scan_repo_discussions --scan_repo_discussions.minimal_number_of_prs=X
to specify the minimum number of merged PRs needed to generate the best_practices.md
file. The default is 50 PRs.The similar code tool retrieves the most similar code components from inside the organization's codebase, or from open-source code.
For example:
Global Search
for a method called chat_completion
:
Qodo Merge will examine the code component and will extract the most relevant keywords to search for similar code:
extracted keywords
: the keywords that were extracted from the code by Qodo Merge. The link will open a search page with the extracted keywords, allowing the user to modify the search if needed.search context
: the context in which the search will be performed, organization's codebase or open-source code (Global).similar code
: the most similar code components found. The link will open the code component in the relevant file.relevant repositories
: the open-source repositories in which that are relevant to the searched code component and it's keywords.Search result link example:
Organization Search
:
To invoke the similar code
tool manually, comment on the PR:
/find_similar_component COMPONENT_NAME\n
Where COMPONENT_NAME
should be the name of a code component in the PR (class, method, function).
If there is a name ambiguity, there are two configurations that will help the tool to find the correct component:
--pr_find_similar_component.file
: in case there are several components with the same name, you can specify the relevant file.--pr_find_similar_component.class_name
: in case there are several methods with the same name in the same file, you can specify the relevant class name.example:
/find_similar_component COMPONENT_NAME --pr_find_similar_component.file=FILE_NAME\n
"},{"location":"tools/similar_code/#automatically-via-analyze-table","title":"Automatically (via Analyze table)","text":"It can be invoked automatically from the analyze table, can be accessed by:
/analyze\n
Choose the components you want to find similar code for, and click on the similar
checkbox.
You can search for similar code either within the organization's codebase or globally, which includes open-source repositories. Each result will include the relevant code components along with their associated license details.
"},{"location":"tools/similar_code/#configuration-options","title":"Configuration options","text":"search_from_org
: if set to true, the tool will search for similar code in the organization's codebase. Default is false.number_of_keywords
: number of keywords to use for the search. Default is 5.number_of_results
: the maximum number of results to present. Default is 5.The similar issue tool retrieves the most similar issues to the current issue. It can be invoked manually by commenting on any PR:
/similar_issue\n
"},{"location":"tools/similar_issues/#example-usage","title":"Example usage","text":"Note that to perform retrieval, the similar_issue
tool indexes all of the repo's previous issues (once).
Configure your preferred database by changing the pr_similar_issue
parameter in the configuration.toml
file.
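As a rough sketch only, the selection might look like the following in configuration.toml; the key name vectordb is an assumption used for illustration, so check the configuration reference for the exact name:
[pr_similar_issue]\nvectordb = \"pinecone\" # assumed key name; value should match one of the supported databases below\n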
Choose from the following Vector Databases:
To use Pinecone with the similar issue
tool, add these credentials to .secrets.toml
(or set as environment variables):
[pinecone]\napi_key = \"...\"\nenvironment = \"...\"\n
These parameters can be obtained by registering to Pinecone.
"},{"location":"tools/similar_issues/#how-to-use","title":"How to use","text":"To invoke the 'similar issue' tool from CLI, run: python3 cli.py --issue_url=... similar_issue
To invoke the 'similar issue' tool via online usage, comment on a PR: /similar_issue
You can also enable the 'similar issue' tool to run automatically when a new issue is opened, by adding it to the pr_commands list in the github_app section.
By combining LLM abilities with static code analysis, the test
tool generates tests for a selected component, based on the PR code changes. It can be invoked manually by commenting on any PR:
/test component_name\n
where 'component_name' is the name of a specific component in the PR. To get a list of the components that changed in the PR and choose the relevant component interactively, use the analyze
tool.
Invoke the tool manually by commenting /test
on any PR. The tool will generate tests for the selected component (if no component is stated, it will generate tests for the largest component):
(Example taken from here):
Notes
The test tool can also be invoked interactively via the analyze
tool.num_tests
: number of tests to generate. Default is 3.testing_framework
: the testing framework to use. If not set, for Python it will use pytest
, for Java it will use JUnit
, for C++ it will use Catch2
, and for JavaScript and TypeScript it will use jest
.avoid_mocks
: if set to true, the tool will try to avoid using mocks in the generated tests. Note that even if this option is set to true, the tool might still use mocks if it cannot generate a test without them. Default is true.extra_instructions
: Optional extra instructions to the tool. For example: \"use the following mock injection scheme: ...\".file
: in case there are several components with the same name, you can specify the relevant file.class_name
: in case there are several methods with the same name in the same file, you can specify the relevant class name.enable_help_text
: if set to true, the tool will add a help text to the PR comment. Default is true.The update_changelog
tool automatically updates the CHANGELOG.md file with the PR changes. It can be invoked manually by commenting on any PR:
/update_changelog\n
"},{"location":"tools/update_changelog/#example-usage","title":"Example usage","text":""},{"location":"tools/update_changelog/#configuration-options","title":"Configuration options","text":"Under the section pr_update_changelog
, the configuration file contains options to customize the 'update changelog' tool:
push_changelog_changes
: whether to push the changes to CHANGELOG.md, or just publish them as a comment. Default is false (publish as comment).extra_instructions
: Optional extra instructions to the tool. For example: \"Use the following structure: ...\"add_pr_link
: whether the model should try to add a link to the PR in the changelog. Default is true.skip_ci_on_push
: whether the commit message (when push_changelog_changes
is true) will include the term \"[skip ci]\", preventing CI tests from being triggered on the changelog commit. Default is true.This section provides a detailed guide on how to use Qodo Merge. It includes information on how to adjust Qodo Merge configurations, define which tools will run automatically, and other advanced configurations.
This document outlines a series of recommended best practices for Python development. These guidelines aim to improve code quality, maintainability, and readability.
"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#imports","title":"Imports","text":"Use import
statements for packages and modules only, not for individual types, classes, or functions.
Reusability mechanism for sharing code from one module to another.
"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#decision","title":"Decision","text":"import x
for importing packages and modules.from x import y
where x
is the package prefix and y
is the module name with no prefix.Use from x import y as z
in any of the following circumstances:Two modules named y
are to be imported.y
conflicts with a top-level name defined in the current module.y
conflicts with a common parameter name that is part of the public API (e.g., features
).y
is an inconveniently long name, or too generic in the context of your code.Use import y as z
only when z
is a standard abbreviation (e.g., import numpy as np
).For example, the module sound.effects.echo
may be imported as follows:
from sound.effects import echo\n...\necho.EchoFilter(input, output, delay=0.7, atten=4)\n
Do not use relative names in imports. Even if the module is in the same package, use the full package name. This helps prevent unintentionally importing a package twice.
"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#exemptions","title":"Exemptions","text":"Exemptions from this rule:
typing
modulecollections.abc
moduletyping_extensions
moduleImport each module using the full pathname location of the module.
"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#decision_1","title":"Decision","text":"All new code should import each module by its full package name.
Imports should be as follows:
Yes:\n # Reference absl.flags in code with the complete name (verbose).\n import absl.flags\n from doctor.who import jodie\n\n _FOO = absl.flags.DEFINE_string(...)\n
Yes:\n # Reference flags in code with just the module name (common).\n from absl import flags\n from doctor.who import jodie\n\n _FOO = flags.DEFINE_string(...)\n
(assume this file lives in doctor/who/
where jodie.py
also exists)
No:\n # Unclear what module the author wanted and what will be imported. The actual\n # import behavior depends on external factors controlling sys.path.\n # Which possible jodie module did the author intend to import?\n import jodie\n
The directory the main binary is located in should not be assumed to be in sys.path
despite that happening in some environments. This being the case, code should assume that import jodie
refers to a third-party or top-level package named jodie
, not a local jodie.py
.
Use default iterators and operators for types that support them, like lists, dictionaries, and files.
"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#definition_1","title":"Definition","text":"Container types, like dictionaries and lists, define default iterators and membership test operators (\u201cin\u201d and \u201cnot in\u201d).
"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#decision_2","title":"Decision","text":"Use default iterators and operators for types that support them, like lists, dictionaries, and files. The built-in types define iterator methods, too. Prefer these methods to methods that return lists, except that you should not mutate a container while iterating over it.
Yes: for key in adict: ...\n if obj in alist: ...\n for line in afile: ...\n for k, v in adict.items(): ...\n
No: for key in adict.keys(): ...\n for line in afile.readlines(): ...\n
"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#lambda-functions","title":"Lambda Functions","text":"Okay for one-liners. Prefer generator expressions over map()
or filter()
with a lambda
.
Lambdas are allowed. If the code inside the lambda function spans multiple lines or is longer than 60-80 chars, it might be better to define it as a regular nested function.
For common operations like multiplication, use the functions from the operator
module instead of lambda functions. For example, prefer operator.mul
to lambda x, y: x * y
.
Okay in most cases.
"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#definition_2","title":"Definition","text":"You can specify values for variables at the end of a function\u2019s parameter list, e.g., def foo(a, b=0):
. If foo
is called with only one argument, b
is set to 0. If it is called with two arguments, b
has the value of the second argument.
Okay to use with the following caveat:
Do not use mutable objects as default values in the function or method definition.
Yes: def foo(a, b=None):\n if b is None:\n b = []\nYes: def foo(a, b: Sequence | None = None):\n if b is None:\n b = []\nYes: def foo(a, b: Sequence = ()): # Empty tuple OK since tuples are immutable.\n ...\n
from absl import flags\n_FOO = flags.DEFINE_string(...)\n\nNo: def foo(a, b=[]):\n ...\nNo: def foo(a, b=time.time()): # Is `b` supposed to represent when this module was loaded?\n ...\nNo: def foo(a, b=_FOO.value): # sys.argv has not yet been parsed...\n ...\nNo: def foo(a, b: Mapping = {}): # Could still get passed to unchecked code.\n ...\n
"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#truefalse-evaluations","title":"True/False Evaluations","text":"Use the \u201cimplicit\u201d false if possible, e.g., if foo:
rather than if foo != []:
Okay to use.
An example of the use of this feature is:
def get_adder(summand1: float) -> Callable[[float], float]:\n \"\"\"Returns a function that adds numbers to a given number.\"\"\"\n def adder(summand2: float) -> float:\n return summand1 + summand2\n\n return adder\n
"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#decision_5","title":"Decision","text":"Okay to use.
"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#threading","title":"Threading","text":"Do not rely on the atomicity of built-in types.
While Python\u2019s built-in data types such as dictionaries appear to have atomic operations, there are corner cases where they aren\u2019t atomic (e.g. if __hash__
or __eq__
are implemented as Python methods) and their atomicity should not be relied upon. Neither should you rely on atomic variable assignment (since this in turn depends on dictionaries).
Use the queue
module\u2019s Queue
data type as the preferred way to communicate data between threads. Otherwise, use the threading
module and its locking primitives. Prefer condition variables and threading.Condition
instead of using lower-level locks.
The possible configurations of Qodo Merge are stored here. On the tools page you can find explanations of how to use these configurations for each tool.
To print all the available configurations as a comment on your PR, you can use the following command:
/config\n
To view the actual configurations used for a specific tool, after all the user settings are applied, you can add for each tool a --config.output_relevant_configurations=true
suffix. For example:
/improve --config.output_relevant_configurations=true\n
Will output an additional field showing the actual configurations used for the improve
tool.
In some cases, you may want to exclude specific files or directories from the analysis performed by Qodo Merge. This can be useful, for example, when you have files that are generated automatically or files that shouldn't be reviewed, like vendor code.
You can ignore files or folders using the following methods:
IGNORE.GLOB
IGNORE.REGEX
which you can edit to ignore files or folders based on glob or regex patterns.
"},{"location":"usage-guide/additional_configurations/#example-usage","title":"Example usage","text":"Let's look at an example where we want to ignore all files with .py
extension from the analysis.
To ignore Python files in a PR with online usage, comment on a PR: /review --ignore.glob=\"['*.py']\"
To ignore Python files in all PRs using glob
pattern, set in a configuration file:
[ignore]\nglob = ['*.py']\n
And to ignore Python files in all PRs using regex
pattern, set in a configuration file:
[ignore]\nregex = ['.*\\.py$']\n
"},{"location":"usage-guide/additional_configurations/#extra-instructions","title":"Extra instructions","text":"All Qodo Merge tools have a parameter called extra_instructions
, which enables adding free-text extra instructions. Example usage:
/update_changelog --pr_update_changelog.extra_instructions=\"Make sure to update also the version ...\"\n
"},{"location":"usage-guide/additional_configurations/#language-settings","title":"Language Settings","text":"The default response language for Qodo Merge is U.S. English. However, some development teams may prefer to display information in a different language. For example, your team's workflow might improve if PR descriptions and code suggestions are set to your country's native language.
To configure this, set the response_language
parameter in the configuration file. This will prompt the model to respond in the specified language. Use a standard locale code based on ISO 3166 (country codes) and ISO 639 (language codes) to define a language-country pair. See this comprehensive list of locale codes.
Example:
[config]\nresponse_language = \"it-IT\"\n
This will set the response language globally for all the commands to Italian.
Important: Note that only dynamic text generated by the AI model is translated to the configured language. Static text such as labels and table headers that are not part of the AI model's response will remain in U.S. English. In addition, the model you are using must have good support for the specified language.
"},{"location":"usage-guide/additional_configurations/#patch-extra-lines","title":"Patch Extra Lines","text":"By default, around any change in your PR, git patch provides three lines of context above and below the change.
@@ -12,5 +12,5 @@ def func1():\n code line that already existed in the file...\n code line that already existed in the file...\n code line that already existed in the file....\n-code line that was removed in the PR\n+new code line added in the PR\n code line that already existed in the file...\n code line that already existed in the file...\n code line that already existed in the file...\n
Qodo Merge will try to increase the number of lines of context, via the parameter:
[config]\npatch_extra_lines_before=3\npatch_extra_lines_after=1\n
Increasing this number provides more context to the model, but will also increase the token budget, and may overwhelm the model with too much information, unrelated to the actual PR code changes.
If the PR is too large (see PR Compression strategy), Qodo Merge may automatically set this number to 0, and will use the original git patch.
"},{"location":"usage-guide/additional_configurations/#log-level","title":"Log Level","text":"Qodo Merge allows you to control the verbosity of logging by using the log_level
configuration parameter. This is particularly useful for troubleshooting and debugging issues with your PR workflows.
[config]\nlog_level = \"DEBUG\" # Options: \"DEBUG\", \"INFO\", \"WARNING\", \"ERROR\", \"CRITICAL\"\n
The default log level is \"DEBUG\", which provides detailed output of all operations. If you prefer less verbose logs, you can set higher log levels like \"INFO\" or \"WARNING\".
"},{"location":"usage-guide/additional_configurations/#integrating-with-logging-observability-platforms","title":"Integrating with Logging Observability Platforms","text":"Various logging observability tools can be used out-of-the box when using the default LiteLLM AI Handler. Simply configure the LiteLLM callback settings in configuration.toml
and set environment variables according to the LiteLLM documentation.
For example, to use LangSmith you can add the following to your configuration.toml
file:
[litellm]\nenable_callbacks = true\nsuccess_callback = [\"langsmith\"]\nfailure_callback = [\"langsmith\"]\nservice_callback = []\n
Then set the following environment variables:
LANGSMITH_API_KEY=<api_key>\nLANGSMITH_PROJECT=<project>\nLANGSMITH_BASE_URL=<url>\n
"},{"location":"usage-guide/additional_configurations/#ignoring-automatic-commands-in-prs","title":"Ignoring automatic commands in PRs","text":"Qodo Merge allows you to automatically ignore certain PRs based on various criteria:
To ignore PRs with a specific title such as \"[Bump]: ...\", you can add the following to your configuration.toml
file:
[config]\nignore_pr_title = [\"\\\\[Bump\\\\]\"]\n
Where the ignore_pr_title
is a list of regex patterns to match the PR title you want to ignore. Default is ignore_pr_title = [\"^\\\\[Auto\\\\]\", \"^Auto\"]
.
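For example, if you also want to ignore titles containing \"[Bump]\" while keeping the default patterns, a sketch (assuming, as with other settings, that your value replaces the default list) would be:
[config]\nignore_pr_title = [\"^\\\\[Auto\\\\]\", \"^Auto\", \"\\\\[Bump\\\\]\"]\n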
To ignore PRs from specific source or target branches, you can add the following to your configuration.toml
file:
[config]\nignore_pr_source_branches = ['develop', 'main', 'master', 'stage']\nignore_pr_target_branches = [\"qa\"]\n
Where the ignore_pr_source_branches
and ignore_pr_target_branches
are lists of regex patterns to match the source and target branches you want to ignore. They are not mutually exclusive; you can use them together or separately.
To ignore PRs from specific repositories, you can add the following to your configuration.toml
file:
[config]\nignore_repositories = [\"my-org/my-repo1\", \"my-org/my-repo2\"]\n
Where the ignore_repositories
is a list of regex patterns to match the repositories you want to ignore. This is useful when you have multiple repositories and want to exclude certain ones from analysis.
To allow only specific folders (often needed in large monorepos), set:
[config]\nallow_only_specific_folders=['folder1','folder2']\n
For the configuration above, automatic feedback will only be triggered when the PR changes include files whose path contains 'folder1' or 'folder2'.
"},{"location":"usage-guide/additional_configurations/#ignoring-prs-containing-specific-labels","title":"Ignoring PRs containing specific labels","text":"To ignore PRs containing specific labels, you can add the following to your configuration.toml
file:
[config]\nignore_pr_labels = [\"do-not-merge\"]\n
Where the ignore_pr_labels
is a list of labels; if any of these labels is present on the PR, the PR will be ignored.
Qodo Merge tries to automatically identify and ignore pull requests created by bots using:
While this detection is robust, it may not catch all cases, particularly when:
To supplement the automatic bot detection, you can manually specify users to ignore. Add the following to your configuration.toml
file to ignore PRs from specific users:
[config]\nignore_pr_authors = [\"my-special-bot-user\", ...]\n
Where the ignore_pr_authors
is a list of usernames that you want to ignore.
Note
There is one specific case where bots will receive an automatic response: when they generate a PR with a failed test. In that case, the ci_feedback
tool will be invoked.
To automatically exclude files generated by specific languages or frameworks, you can add the following to your configuration.toml
file:
[config]\nignore_language_framework = ['protobuf', ...]\n
You can view the list of auto-generated file patterns in generated_code_ignore.toml
. Files matching these glob patterns will be automatically excluded from PR Agent analysis.
When running from your locally cloned Qodo Merge repo (CLI), your local configuration file will be used. Examples of invoking the different tools via the CLI:
python -m pr_agent.cli --pr_url=<pr_url> review
python -m pr_agent.cli --pr_url=<pr_url> describe
python -m pr_agent.cli --pr_url=<pr_url> improve
python -m pr_agent.cli --pr_url=<pr_url> ask \"Write me a poem about this PR\"
python -m pr_agent.cli --pr_url=<pr_url> update_changelog
<pr_url>
is the URL of the relevant PR (for example: #50).
Notes:
To pass configuration parameters directly from the command line, append them to the command, for example:
python -m pr_agent.cli --pr_url=<pr_url> /review --pr_reviewer.extra_instructions=\"focus on the file: ...\"\n
To run a tool locally without publishing its output to the PR, set the following in configuration.toml:
[config]\npublish_output=false\nverbosity_level=2\n
This is useful for debugging or experimenting with different tools.
The git_provider configuration parameter determines which git platform the tools run against. Supported providers are: github (default), gitlab, bitbucket, azure, codecommit, local, gitea, and gerrit.
To verify that Qodo Merge has been configured correctly, you can run this health check command from the repository root:
python -m tests.health_test.main\n
If the health check passes, you will see the following output at the end of the run:
========\nHealth test passed successfully\n========\n
Before running the health check, ensure you have:
Online usage means invoking Qodo Merge tools by comments on a PR. Commands for invoking the different tools via comments:
/review
/describe
/improve
(or /improve_code
for bitbucket, since /improve
is sometimes reserved)/ask \"...\"
/update_changelog
To edit a specific configuration value, just add --config_path=<value>
to any command. For example, if you want to edit the review
tool configurations, you can run:
/review --pr_reviewer.extra_instructions=\"...\" --pr_reviewer.require_score_review=false\n
Any configuration value in the configuration file can be edited this way. Comment /config
to see the list of available configurations.
To easily disable all automatic feedback from Qodo Merge (GitHub App, GitLab Webhook, BitBucket App, Azure DevOps Webhook), set in a configuration file:
[config]\ndisable_auto_feedback = true\n
When this parameter is set to true
, Qodo Merge will not run any automatic tools (like describe
, review
, improve
) when a new PR is opened, or when new code is pushed to an open PR.
Configurations for Qodo Merge
Qodo Merge for GitHub is an App hosted by Qodo, so all the instructions below also apply to Qodo Merge users. The same goes for the GitLab webhook and BitBucket App sections.
"},{"location":"usage-guide/automations_and_usage/#github-app-automatic-tools-when-a-new-pr-is-opened","title":"GitHub app automatic tools when a new PR is opened","text":"The github_app section defines GitHub app specific configurations.
The configuration parameter pr_commands
defines the list of tools that will be run automatically when a new PR is opened:
[github_app]\npr_commands = [\n \"/describe\",\n \"/review\",\n \"/improve\",\n]\n
This means that when a new PR is opened/reopened or marked as ready for review, Qodo Merge will run the describe
, review
and improve
tools.
Draft PRs:
By default, draft PRs are not considered for automatic tools, but you can change this by setting the feedback_on_draft_pr
parameter to true
in the configuration file.
[github_app]\nfeedback_on_draft_pr = true\n
Changing default tool parameters:
You can override the default tool parameters by using one of the three configuration file options: wiki, local, or global. For example, if your configuration file contains:
[pr_description]\ngenerate_ai_title = true\n
Every time you run the describe
tool (including automatic runs) the PR title will be generated by the AI.
Parameters for automated runs:
You can customize configurations specifically for automated runs by using the --config_path=<value>
parameter. For instance, to modify the review
tool settings only for newly opened PRs, use:
[github_app]\npr_commands = [\n \"/describe\",\n \"/review --pr_reviewer.extra_instructions='focus on the file: ...'\",\n \"/improve\",\n]\n
"},{"location":"usage-guide/automations_and_usage/#github-app-automatic-tools-for-push-actions-commits-to-an-open-pr","title":"GitHub app automatic tools for push actions (commits to an open PR)","text":"In addition to running automatic tools when a PR is opened, the GitHub app can also respond to new code that is pushed to an open PR.
The configuration toggle handle_push_trigger
can be used to enable this feature. The configuration parameter push_commands
defines the list of tools that will be run automatically when new code is pushed to the PR.
[github_app]\nhandle_push_trigger = true\npush_commands = [\n \"/describe\",\n \"/review\",\n]\n
This means that when new code is pushed to the PR, Qodo Merge will run the describe
and review
tools, with the specified parameters.
GitHub Action
is a different way to trigger Qodo Merge tools, and uses a different configuration mechanism than GitHub App
. You can configure settings for GitHub Action
by adding environment variables under the env section in .github/workflows/pr_agent.yml
file. Specifically, start by setting the following environment variables:
env:\n OPENAI_KEY: ${{ secrets.OPENAI_KEY }} # Make sure to add your OpenAI key to your repo secrets\n GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Make sure to add your GitHub token to your repo secrets\n github_action_config.auto_review: \"true\" # enable\\disable auto review\n github_action_config.auto_describe: \"true\" # enable\\disable auto describe\n github_action_config.auto_improve: \"true\" # enable\\disable auto improve\n github_action_config.pr_actions: '[\"opened\", \"reopened\", \"ready_for_review\", \"review_requested\"]'\n
github_action_config.auto_review
, github_action_config.auto_describe
and github_action_config.auto_improve
are used to enable/disable automatic tools that run when a new PR is opened. If not set, the default configuration is for all three tools to run automatically when a new PR is opened.
github_action_config.pr_actions
is used to configure which pull_requests
events will trigger the enabled auto flags. If not set, the default configuration is [\"opened\", \"reopened\", \"ready_for_review\", \"review_requested\"]
github_action_config.enable_output
is used to enable/disable the GitHub Actions output parameter (default is true
). Review result is output as JSON to steps.{step-id}.outputs.review
property. The JSON structure is equivalent to the yaml data structure defined in pr_reviewer_prompts.toml.
Note that you can give additional config parameters by adding environment variables to .github/workflows/pr_agent.yml
, or by using a .pr_agent.toml
configuration file in the root of your repo.
For example, you can set an environment variable: pr_description.publish_labels=false
, or add a .pr_agent.toml
file with the following content:
[pr_description]\npublish_labels = false\n
to prevent Qodo Merge from publishing labels when running the describe
tool.
After setting up a GitLab webhook, to control which commands will run automatically when a new MR is opened, you can set the pr_commands
parameter in the configuration file, similar to the GitHub App:
[gitlab]\npr_commands = [\n \"/describe\",\n \"/review\",\n \"/improve\",\n]\n
The GitLab webhook can also respond to new code that is pushed to an open MR. The configuration toggle handle_push_trigger
can be used to enable this feature. The configuration parameter push_commands
defines the list of tools that will be run automatically when new code is pushed to the MR.
[gitlab]\nhandle_push_trigger = true\npush_commands = [\n \"/describe\",\n \"/review\",\n]\n
Note that to use the 'handle_push_trigger' feature, you also need to give the GitLab webhook the \"Push events\" scope.
"},{"location":"usage-guide/automations_and_usage/#bitbucket-app","title":"BitBucket App","text":"Similar to GitHub app, when running Qodo Merge from BitBucket App, the default configuration file will be initially loaded.
By uploading a local .pr_agent.toml
file to the root of the repo's default branch, you can edit and customize any configuration parameter. Note that you need to upload .pr_agent.toml
prior to creating a PR, in order for the configuration to take effect.
For example, if your local .pr_agent.toml
file contains:
[pr_reviewer]\nextra_instructions = \"Answer in japanese\"\n
Each time you invoke a /review
tool, it will use the extra instructions you set in the local configuration file.
Note that among other limitations, BitBucket provides relatively low rate-limits for applications (up to 1000 requests per hour), and does not provide an API to track the actual rate-limit usage. If you experience a lack of responses from Qodo Merge, you might want to set: bitbucket_app.avoid_full_files=true
in your configuration file. This will prevent Qodo Merge from acquiring the full file content, and will only use the diff content. This will reduce the number of requests made to BitBucket, at the cost of a small decrease in accuracy, as dynamic context will not be applicable.
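For example, the setting above can be placed in a configuration file as:
[bitbucket_app]\navoid_full_files = true\n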
To control which commands will run automatically when a new PR is opened, you can set the pr_commands
parameter in the configuration file. Specifically, set the following values:
[bitbucket_app]\npr_commands = [\n \"/review\",\n \"/improve --pr_code_suggestions.commitable_code_suggestions=true --pr_code_suggestions.suggestions_score_threshold=7\",\n]\n
Note that specifically for Bitbucket, we recommend using --pr_code_suggestions.suggestions_score_threshold=7, which is also the default value we set for this platform. Since Bitbucket only supports inline code suggestions, we want to limit the number of suggestions presented.
To enable BitBucket app to respond to each push to the PR, set (for example):
[bitbucket_app]\nhandle_push_trigger = true\npush_commands = [\n \"/describe\",\n \"/review\",\n]\n
"},{"location":"usage-guide/automations_and_usage/#azure-devops-provider","title":"Azure DevOps provider","text":"To use Azure DevOps provider use the following settings in configuration.toml:
[config]\ngit_provider=\"azure\"\n
The Azure DevOps provider supports PAT token or DefaultAzureCredential authentication. A PAT is faster to create, but has a built-in expiration date and will use the user's identity for API calls. With DefaultAzureCredential you can use a managed identity or a service principal, which are more secure and will create a separate ADO user identity (via AAD) for the agent.
If PAT was chosen, you can assign the value in .secrets.toml. If DefaultAzureCredential was chosen, you can assign the additional env vars like AZURE_CLIENT_SECRET directly, or use managed identity/az cli (for local development) without any additional configuration. In any case, the 'org' value must be assigned in .secrets.toml:
[azure_devops]\norg = \"https://dev.azure.com/YOUR_ORGANIZATION/\"\n# pat = \"YOUR_PAT_TOKEN\" needed only if using PAT for authentication\n
"},{"location":"usage-guide/automations_and_usage/#azure-devops-webhook","title":"Azure DevOps Webhook","text":"To control which commands will run automatically when a new PR is opened, you can set the pr_commands
parameter in the configuration file, similar to the GitHub App:
[azure_devops_server]\npr_commands = [\n \"/describe\",\n \"/review\",\n \"/improve\",\n]\n
"},{"location":"usage-guide/automations_and_usage/#gitea-webhook","title":"Gitea Webhook","text":"After setting up a Gitea webhook, to control which commands will run automatically when a new MR is opened, you can set the pr_commands
parameter in the configuration file, similar to the GitHub App:
[gitea]\npr_commands = [\n \"/describe\",\n \"/review\",\n \"/improve\",\n]\n
"},{"location":"usage-guide/changing_a_model/","title":"Changing a Model","text":""},{"location":"usage-guide/changing_a_model/#changing-a-model-in-pr-agent","title":"Changing a model in PR-Agent","text":"See here for a list of available models. To use a different model than the default (o4-mini), you need to edit in the configuration file the fields:
[config]\nmodel = \"...\"\nfallback_models = [\"...\"]\n
For models and environments not from OpenAI, you might need to provide additional keys and other parameters. You can provide these parameters via a configuration file or via environment variables.
Model-specific environment variables
See litellm documentation for the environment variables needed per model, as they may vary and change over time. Our documentation per-model may not always be up-to-date with the latest changes. Failing to set the needed keys of a specific model will usually result in litellm not identifying the model type, and failing to utilize it.
"},{"location":"usage-guide/changing_a_model/#openai-like-api","title":"OpenAI like API","text":"To use an OpenAI like API, set the following in your .secrets.toml
file:
[openai]\napi_base = \"https://api.openai.com/v1\"\napi_key = \"sk-...\"\n
or use the environment variables (make sure to use double underscores __
):
OPENAI__API_BASE=https://api.openai.com/v1\nOPENAI__KEY=sk-...\n
"},{"location":"usage-guide/changing_a_model/#openai-flex-processing","title":"OpenAI Flex Processing","text":"To reduce costs for non-urgent/background tasks, enable Flex Processing:
[litellm]\nextra_body='{\"processing_mode\": \"flex\"}'\n
See OpenAI Flex Processing docs for details.
"},{"location":"usage-guide/changing_a_model/#azure","title":"Azure","text":"To use Azure, set in your .secrets.toml
(working from CLI), or in the GitHub Settings > Secrets and variables
(working from GitHub App or GitHub Action):
[openai]\nkey = \"\" # your azure api key\napi_type = \"azure\"\napi_version = '2023-05-15' # Check Azure documentation for the current API version\napi_base = \"\" # The base URL for your Azure OpenAI resource. e.g. \"https://<your resource name>.openai.azure.com\"\ndeployment_id = \"\" # The deployment name you chose when you deployed the engine\n
and set in your configuration file:
[config]\nmodel=\"\" # the OpenAI model you've deployed on Azure (e.g. gpt-4o)\nfallback_models=[\"...\"]\n
To use Azure AD (Entra id) based authentication set in your .secrets.toml
(working from CLI), or in the GitHub Settings > Secrets and variables
(working from GitHub App or GitHub Action):
[azure_ad]\nclient_id = \"\" # Your Azure AD application client ID\nclient_secret = \"\" # Your Azure AD application client secret\ntenant_id = \"\" # Your Azure AD tenant ID\napi_base = \"\" # Your Azure OpenAI service base URL (e.g., https://openai.xyz.com/)\n
Passing custom headers to the underlying LLM Model API can be done by setting extra_headers parameter to litellm.
[litellm]\nextra_headers='{\"projectId\": \"<authorized projectId>\", ...}' # The value of this setting should be a JSON string representing the desired headers; a ValueError is thrown otherwise.\n
This enables users to pass authorization tokens or API keys, when routing requests through an API management gateway.
"},{"location":"usage-guide/changing_a_model/#ollama","title":"Ollama","text":"You can run models locally through either VLLM or Ollama
E.g. to use a new model locally via Ollama, set in .secrets.toml
or in a configuration file:
[config]\nmodel = \"ollama/qwen2.5-coder:32b\"\nfallback_models=[\"ollama/qwen2.5-coder:32b\"]\ncustom_model_max_tokens=128000 # set the maximal input tokens for the model\nduplicate_examples=true # will duplicate the examples in the prompt, to help the model to generate structured output\n\n[ollama]\napi_base = \"http://localhost:11434\" # or whatever port you're running Ollama on\n
By default, Ollama uses a context window size of 2048 tokens. In most cases this is not enough to cover the pr-agent prompt and the pull-request diff. The context window size can be overridden with the OLLAMA_CONTEXT_LENGTH
environment variable. For example, to set the default context length to 8K, use: OLLAMA_CONTEXT_LENGTH=8192 ollama serve
. You can find more information in the official Ollama FAQ.
Please note that the custom_model_max_tokens
setting should be configured in accordance with the OLLAMA_CONTEXT_LENGTH
. Failure to do so may result in unexpected model output.
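For example, a minimal sketch that keeps the two values aligned (assuming you started Ollama with OLLAMA_CONTEXT_LENGTH=8192):
[config]\ncustom_model_max_tokens=8192 # keep in sync with the OLLAMA_CONTEXT_LENGTH used when starting Ollama\n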
Local models vs commercial models
Qodo Merge is compatible with almost any AI model, but analyzing complex code repositories and pull requests requires a model specifically optimized for code analysis.
Commercial models such as GPT-4, Claude Sonnet, and Gemini have demonstrated robust capabilities in generating structured output for code analysis tasks with large input. In contrast, most open-source models currently available (as of January 2025) face challenges with these complex tasks.
Based on our testing, local open-source models are suitable for experimentation and learning purposes (mainly for the ask
command), but they are not suitable for production-level code analysis tasks.
Hence, for production workflows and real-world usage, we recommend using commercial models.
"},{"location":"usage-guide/changing_a_model/#hugging-face","title":"Hugging Face","text":"To use a new model with Hugging Face Inference Endpoints, for example, set:
[config] # in configuration.toml\nmodel = \"huggingface/meta-llama/Llama-2-7b-chat-hf\"\nfallback_models=[\"huggingface/meta-llama/Llama-2-7b-chat-hf\"]\ncustom_model_max_tokens=... # set the maximal input tokens for the model\n\n[huggingface] # in .secrets.toml\nkey = ... # your Hugging Face api key\napi_base = ... # the base url for your Hugging Face inference endpoint\n
(you can obtain a Llama2 key from here)
"},{"location":"usage-guide/changing_a_model/#replicate","title":"Replicate","text":"To use Llama2 model with Replicate, for example, set:
[config] # in configuration.toml\nmodel = \"replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1\"\nfallback_models=[\"replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1\"]\n[replicate] # in .secrets.toml\nkey = ...\n
(you can obtain a Llama2 key from here)
Also, review the AiHandler file for instructions on how to set keys for other models.
"},{"location":"usage-guide/changing_a_model/#groq","title":"Groq","text":"To use Llama3 model with Groq, for example, set:
[config] # in configuration.toml\nmodel = \"llama3-70b-8192\"\nfallback_models = [\"groq/llama3-70b-8192\"]\n[groq] # in .secrets.toml\nkey = ... # your Groq api key\n
(you can obtain a Groq key from here)
"},{"location":"usage-guide/changing_a_model/#xai","title":"xAI","text":"To use xAI's models with PR-Agent, set:
[config] # in configuration.toml\nmodel = \"xai/grok-2-latest\"\nfallback_models = [\"xai/grok-2-latest\"] # or any other model as fallback\n\n[xai] # in .secrets.toml\nkey = \"...\" # your xAI API key\n
You can obtain an xAI API key from xAI's console by creating an account and navigating to the developer settings page.
"},{"location":"usage-guide/changing_a_model/#vertex-ai","title":"Vertex AI","text":"To use Google's Vertex AI platform and its associated models (chat-bison/codechat-bison) set:
[config] # in configuration.toml\nmodel = \"vertex_ai/codechat-bison\"\nfallback_models=\"vertex_ai/codechat-bison\"\n\n[vertexai] # in .secrets.toml\nvertex_project = \"my-google-cloud-project\"\nvertex_location = \"\"\n
Your application default credentials will be used for authentication, so there is no need to set explicit credentials in most environments.
If you do want to set explicit credentials, then you can use the GOOGLE_APPLICATION_CREDENTIALS
environment variable set to a path to a json credentials file.
To use Google AI Studio models, set the relevant models in the configuration section of the configuration file:
[config] # in configuration.toml\nmodel=\"gemini/gemini-1.5-flash\"\nfallback_models=[\"gemini/gemini-1.5-flash\"]\n\n[google_ai_studio] # in .secrets.toml\ngemini_api_key = \"...\"\n
If you don't want to set the API key in the .secrets.toml file, you can set the GOOGLE_AI_STUDIO.GEMINI_API_KEY
environment variable.
To use Anthropic models, set the relevant models in the configuration section of the configuration file:
[config]\nmodel=\"anthropic/claude-3-opus-20240229\"\nfallback_models=[\"anthropic/claude-3-opus-20240229\"]\n
And also set the api key in the .secrets.toml file:
[anthropic]\nKEY = \"...\"\n
See litellm documentation for more information about the environment variables required for Anthropic.
"},{"location":"usage-guide/changing_a_model/#amazon-bedrock","title":"Amazon Bedrock","text":"To use Amazon Bedrock and its foundational models, add the below configuration:
[config] # in configuration.toml\nmodel=\"bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0\"\nfallback_models=[\"bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0\"]\n\n[aws]\nAWS_ACCESS_KEY_ID=\"...\"\nAWS_SECRET_ACCESS_KEY=\"...\"\nAWS_REGION_NAME=\"...\"\n
You can also use the new Meta Llama 4 models available on Amazon Bedrock:
[config] # in configuration.toml\nmodel=\"bedrock/us.meta.llama4-scout-17b-instruct-v1:0\"\nfallback_models=[\"bedrock/us.meta.llama4-maverick-17b-instruct-v1:0\"]\n
See litellm documentation for more information about the environment variables required for Amazon Bedrock.
"},{"location":"usage-guide/changing_a_model/#deepseek","title":"DeepSeek","text":"To use deepseek-chat model with DeepSeek, for example, set:
[config] # in configuration.toml\nmodel = \"deepseek/deepseek-chat\"\nfallback_models=[\"deepseek/deepseek-chat\"]\n
and fill up your key
[deepseek] # in .secrets.toml\nkey = ...\n
(you can obtain a deepseek-chat key from here)
"},{"location":"usage-guide/changing_a_model/#deepinfra","title":"DeepInfra","text":"To use DeepSeek model with DeepInfra, for example, set:
[config] # in configuration.toml\nmodel = \"deepinfra/deepseek-ai/DeepSeek-R1-Distill-Llama-70B\"\nfallback_models = [\"deepinfra/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B\"]\n[deepinfra] # in .secrets.toml\nkey = ... # your DeepInfra api key\n
(you can obtain a DeepInfra key from here)
"},{"location":"usage-guide/changing_a_model/#mistral","title":"Mistral","text":"To use models like Mistral or Codestral with Mistral, for example, set:
[config] # in configuration.toml\nmodel = \"mistral/mistral-small-latest\"\nfallback_models = [\"mistral/mistral-medium-latest\"]\n[mistral] # in .secrets.toml\nkey = \"...\" # your Mistral api key\n
(you can obtain a Mistral key from here)
"},{"location":"usage-guide/changing_a_model/#codestral","title":"Codestral","text":"To use Codestral model with Codestral, for example, set:
[config] # in configuration.toml\nmodel = \"codestral/codestral-latest\"\nfallback_models = [\"codestral/codestral-2405\"]\n[codestral] # in .secrets.toml\nkey = \"...\" # your Codestral api key\n
(you can obtain a Codestral key from here)
"},{"location":"usage-guide/changing_a_model/#openrouter","title":"Openrouter","text":"To use model from Openrouter, for example, set:
[config] # in configuration.toml \nmodel=\"openrouter/anthropic/claude-3.7-sonnet\"\nfallback_models=[\"openrouter/deepseek/deepseek-chat\"]\ncustom_model_max_tokens=20000\n\n[openrouter] # in .secrets.toml or passed an environment variable openrouter__key\nkey = \"...\" # your openrouter api key\n
(you can obtain an Openrouter API key from here)
"},{"location":"usage-guide/changing_a_model/#custom-models","title":"Custom models","text":"If the relevant model doesn't appear here, you can still use it as a custom model:
[config]\nmodel=\"custom_model_name\"\nfallback_models=[\"custom_model_name\"]\n
[config]\ncustom_model_max_tokens= ...\n
Go to litellm documentation, find the model you want to use, and set the relevant environment variables.
Most reasoning models do not support chat-style inputs (system
and user
messages) or temperature settings. To bypass chat templates and temperature controls, set config.custom_reasoning_model = true
in your configuration file.
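For example, a minimal sketch for a custom model that is also a reasoning model (reusing the placeholder name from above):
[config]\nmodel=\"custom_model_name\"\nfallback_models=[\"custom_model_name\"]\ncustom_reasoning_model = true # bypass chat templates and temperature controls\n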
[config]\nreasoning_effort = \"medium\" # \"low\", \"medium\", \"high\"\n
With OpenAI models that support reasoning effort (e.g., o4-mini), you can specify the reasoning effort via the config
section. The default value is medium
. You can change it to high
or low
based on your usage.
For Anthropic Claude models that support extended thinking, the following configuration parameters control the feature:
[config]\nenable_claude_extended_thinking = false # Set to true to enable extended thinking feature\nextended_thinking_budget_tokens = 2048\nextended_thinking_max_output_tokens = 4096\n
"},{"location":"usage-guide/configuration_options/","title":"Configuration File","text":"The different tools and sub-tools used by Qodo Merge are adjustable via a Git configuration file. There are three main ways to set persistent configurations:
In terms of precedence, wiki configurations will override local configurations, and local configurations will override global configurations.
For a list of all possible configurations, see the configuration options page. In addition to general configuration options, each tool has its own configurations. For example, the review
tool will use parameters from the pr_reviewer section in the configuration file.
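For example, a hypothetical minimal configuration that only adjusts the review tool could look like:
[pr_reviewer]\nrequire_score_review = false\nextra_instructions = \"...\"\n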
Tip1: Edit only what you need
Keep your configuration file minimal, and edit only the relevant values. Don't copy the entire set of configuration options, since doing so can lead to problems later when defaults change.
Tip2: Show relevant configurations
If you set config.output_relevant_configurations
to True, each tool will also output its relevant configurations in a collapsible section. This can be useful for debugging, or for getting to know the configurations better.
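For example, to turn this on in a configuration file:
[config]\noutput_relevant_configurations = true\n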
Platforms supported: GitHub, GitLab, Bitbucket
With Qodo Merge, you can set configurations by creating a page called .pr_agent.toml
in the wiki of the repo. The advantage of this method is that it allows you to set configurations without needing to commit new content to the repo: just edit the wiki page and save.
Click here to see a short instructional video. We recommend surrounding the configuration content with triple-quotes (or ```toml), to allow better presentation when displayed in the wiki as markdown. An example content:
[pr_description]\ngenerate_ai_title=true\n
Qodo Merge will know to remove the surrounding quotes when reading the configuration content.
"},{"location":"usage-guide/configuration_options/#local-configuration-file","title":"Local configuration file","text":"Platforms supported: GitHub, GitLab, Bitbucket, Azure DevOps
By uploading a local .pr_agent.toml
file to the root of the repo's default branch, you can edit and customize any configuration parameter. Note that you need to upload or update .pr_agent.toml
before using the PR Agent tools (either at PR creation or via manual trigger) for the configuration to take effect.
For example, if you set in .pr_agent.toml
:
[pr_reviewer]\nextra_instructions=\"\"\"\\\n- instruction a\n- instruction b\n...\n\"\"\"\n
Then you can give a list of extra instructions to the review
tool.
Platforms supported: GitHub, GitLab, Bitbucket
If you create a repo called pr-agent-settings
in your organization, its configuration file .pr_agent.toml
will be used as a global configuration file for any other repo that belongs to the same organization. Parameters from a local .pr_agent.toml
file, in a specific repo, will override the global configuration parameters.
For example, in the GitHub organization Codium-ai
:
The file https://github.com/Codium-ai/pr-agent-settings/.pr_agent.toml
serves as a global configuration file for all the repos in the GitHub organization Codium-ai
.
The repo https://github.com/Codium-ai/pr-agent
inherits the global configuration file from pr-agent-settings
.
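As an illustrative sketch of this precedence (using a parameter that appears elsewhere in this guide), a global setting can be overridden by a repo-local file:
# pr-agent-settings/.pr_agent.toml (organization-wide)\n[pr_description]\ngenerate_ai_title = true\n\n# .pr_agent.toml in a specific repo (overrides the global value for that repo)\n[pr_description]\ngenerate_ai_title = false\n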
Relevant platforms: Bitbucket Data Center
In Bitbucket Data Center, there are two levels where you can define a global configuration file:
Create a repository named pr-agent-settings
within a specific project. The configuration file in this repository will apply to all repositories under the same project.
Create a dedicated project to hold a global configuration file that affects all repositories across all projects in your organization.
Setting up organization-level global configuration:
Add a .pr_agent.toml configuration file, structured similarly to the global configuration file described above. Repositories across your entire Bitbucket organization will inherit the configuration from this file.
Note
If both organization-level and project-level global settings are defined, the project-level settings will take precedence over the organization-level configuration. Additionally, parameters from a repository\u2019s local .pr_agent.toml file will always override both global settings.
"},{"location":"usage-guide/enabling_a_wiki/","title":"Enabling a Wiki","text":"Supported Git Platforms: GitHub, GitLab, Bitbucket
For optimal functionality of Qodo Merge, we recommend enabling a wiki for each repository where Qodo Merge is installed. The wiki serves several important purposes:
Key Wiki Features: \ud83d\udc8e
Setup Instructions (GitHub):
To enable a wiki for your repository:
After installation, there are three basic ways to invoke Qodo Merge:
Specifically, CLI commands can be issued by invoking a pre-built Docker image, or by invoking a locally cloned repo.
For online usage, you will need to set up either a GitHub App or a GitHub Action (GitHub), a GitLab webhook (GitLab), or a BitBucket App (BitBucket). These platforms also make it possible to run specific Qodo Merge tools automatically when a new PR is opened, or on each push to a branch.
"},{"location":"usage-guide/mail_notifications/","title":"Managing Mail Notifications","text":"Unfortunately, it is not possible in GitHub to disable mail notifications from a specific user. If you are subscribed to notifications for a repo with Qodo Merge, we recommend turning off notifications for PR comments, to avoid lengthy emails:
As an alternative, you can filter in your mail provider the notifications specifically from the Qodo Merge bot, see how.
Another option to reduce the mail overload, while still receiving notifications on Qodo Merge tools, is to disable the help collapsible section in Qodo Merge bot comments. This can be done by setting enable_help_text=false
for the relevant tool in the configuration file. For example, to disable the help text for the pr_reviewer
tool, set:
[pr_reviewer]\nenable_help_text = false\n
"},{"location":"usage-guide/qodo_merge_models/","title":"\ud83d\udc8e Qodo Merge Models","text":"The default models used by Qodo Merge (June 2025) are a combination of Claude Sonnet 4 and Gemini 2.5 Pro.
"},{"location":"usage-guide/qodo_merge_models/#selecting-a-specific-model","title":"Selecting a Specific Model","text":"Users can configure Qodo Merge to use only a specific model by editing the configuration file. The models supported by Qodo Merge are:
claude-4-sonnet
o4-mini
gpt-4.1
gemini-2.5-pro
deepseek/r1
To restrict Qodo Merge to using only o4-mini
, add this setting:
[config]\nmodel=\"o4-mini\"\n
To restrict Qodo Merge to using only GPT-4.1
, add this setting:
[config]\nmodel=\"gpt-4.1\"\n
To restrict Qodo Merge to using only gemini-2.5-pro
, add this setting:
[config]\nmodel=\"gemini-2.5-pro\"\n
To restrict Qodo Merge to using only deepseek-r1
(US-hosted), add this setting:
[config]\nmodel=\"deepseek/r1\"\n
"}]}