{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Overview","text":"

PR-Agent is an open-source tool that helps you efficiently review and handle pull requests. Qodo Merge is a hosted version of PR-Agent, designed for companies and teams that require additional features and capabilities.

"},{"location":"#docs-smart-search","title":"Docs Smart Search","text":"

To search the documentation site using natural language:

1) Comment /help \"your question\" in either:

2) The bot will respond with an answer that includes relevant documentation links.
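For example, commenting the following on a PR (the question itself is just an illustration) will trigger the docs search:

/help \"How can I enable auto-approval for my PRs?\"\n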

"},{"location":"#features","title":"Features","text":"

PR-Agent and Qodo Merge offer comprehensive pull request functionalities integrated with various git providers:

GitHub GitLab Bitbucket Azure DevOps Gitea TOOLS Describe \u2705 \u2705 \u2705 \u2705 \u2705 Review \u2705 \u2705 \u2705 \u2705 \u2705 Improve \u2705 \u2705 \u2705 \u2705 \u2705 Ask \u2705 \u2705 \u2705 \u2705 \u2b91 Ask on code lines \u2705 \u2705 Help Docs \u2705 \u2705 \u2705 Update CHANGELOG \u2705 \u2705 \u2705 \u2705 Add Documentation \ud83d\udc8e \u2705 \u2705 Analyze \ud83d\udc8e \u2705 \u2705 Auto-Approve \ud83d\udc8e \u2705 \u2705 \u2705 CI Feedback \ud83d\udc8e \u2705 Custom Prompt \ud83d\udc8e \u2705 \u2705 \u2705 Generate Custom Labels \ud83d\udc8e \u2705 \u2705 Generate Tests \ud83d\udc8e \u2705 \u2705 Implement \ud83d\udc8e \u2705 \u2705 \u2705 Scan Repo Discussions \ud83d\udc8e \u2705 Similar Code \ud83d\udc8e \u2705 Ticket Context \ud83d\udc8e \u2705 \u2705 \u2705 Utilizing Best Practices \ud83d\udc8e \u2705 \u2705 \u2705 PR Chat \ud83d\udc8e \u2705 Suggestion Tracking \ud83d\udc8e \u2705 \u2705 USAGE CLI \u2705 \u2705 \u2705 \u2705 \u2705 App / webhook \u2705 \u2705 \u2705 \u2705 \u2705 Tagging bot \u2705 Actions \u2705 \u2705 \u2705 \u2705 CORE Adaptive and token-aware file patch fitting \u2705 \u2705 \u2705 \u2705 Auto Best Practices \ud83d\udc8e \u2705 Chat on code suggestions \u2705 \u2705 Code Validation \ud83d\udc8e \u2705 \u2705 \u2705 \u2705 Dynamic context \u2705 \u2705 \u2705 \u2705 Fetching ticket context \u2705 \u2705 \u2705 Global and wiki configurations \ud83d\udc8e \u2705 \u2705 \u2705 Impact Evaluation \ud83d\udc8e \u2705 \u2705 Incremental Update \ud83d\udc8e \u2705 Interactivity \u2705 \u2705 Local and global metadata \u2705 \u2705 \u2705 \u2705 Multiple models support \u2705 \u2705 \u2705 \u2705 PR compression \u2705 \u2705 \u2705 \u2705 PR interactive actions \ud83d\udc8e \u2705 \u2705 RAG context enrichment \u2705 \u2705 Self reflection \u2705 \u2705 \u2705 \u2705 Static code analysis \ud83d\udc8e \u2705 \u2705

\ud83d\udc8e means Qodo Merge only

Throughout the documentation, \ud83d\udc8e marks a feature that is available only in Qodo Merge, not in the open-source version.

"},{"location":"#example-results","title":"Example Results","text":""},{"location":"#describe","title":"/describe","text":""},{"location":"#review","title":"/review","text":""},{"location":"#improve","title":"/improve","text":""},{"location":"#generate_labels","title":"/generate_labels","text":""},{"location":"#how-it-works","title":"How it Works","text":"

The following diagram illustrates Qodo Merge tools and their flow:

Check out the PR Compression strategy page for more details on how we convert a code diff to a manageable LLM prompt

"},{"location":"ai_search/","title":"AI Docs Search","text":"AI Docs Search

Search through our documentation using AI-powered natural language queries.

Search"},{"location":"chrome-extension/","title":"Chrome extension","text":"

The Qodo Merge Chrome extension is a collection of tools that integrate seamlessly with your GitHub environment, enhancing your Git workflow and adding AI-powered capabilities to your PRs.

With a single-click installation you will gain access to a context-aware chat on your pull request code, a toolbar extension with multiple AI feedback options, Qodo Merge filters, and additional capabilities.

The extension is powered by top code models like Claude 3.7 Sonnet and o4-mini. All the extension's features are free to use on public repositories.

For private repositories, you will need to install Qodo Merge in addition to the extension. For a demonstration of how to install Qodo Merge and use it with the Chrome extension, please refer to the tutorial video.

"},{"location":"chrome-extension/#supported-browsers","title":"Supported browsers","text":"

The extension is supported on all Chromium-based browsers, including Google Chrome, Arc, Opera, Brave, and Microsoft Edge.

"},{"location":"chrome-extension/data_privacy/","title":"Data privacy","text":"

We take your code's security and privacy seriously:

"},{"location":"chrome-extension/features/","title":"Features","text":""},{"location":"chrome-extension/features/#pr-chat","title":"PR chat","text":"

The PR-Chat feature allows you to chat freely with your PR code, within your GitHub environment. It seamlessly uses the PR as context for your chat session and provides AI-powered feedback.

To enable private chat, simply install the Qodo Merge Chrome extension. After installation, each PR's \"Files changed\" tab will include a chat box, where you can ask questions about your code. This chat session is private, and won't be visible to other users.

All open-source repositories are supported. For private repositories, you will also need to install Qodo Merge. After installation, make sure to open at least one new PR to fully register your organization. Once done, you can chat with both new and existing PRs across all installed repositories.

"},{"location":"chrome-extension/features/#context-aware-pr-chat","title":"Context-aware PR chat","text":"

Qodo Merge constructs a comprehensive context for each pull request, incorporating the PR description, commit messages, and code changes with extended dynamic context. This contextual information, along with additional PR-related data, forms the foundation for an AI-powered chat session. The agent then leverages this rich context to provide intelligent, tailored responses to user inquiries about the pull request.

"},{"location":"chrome-extension/features/#toolbar-extension","title":"Toolbar extension","text":"

With the Qodo Merge Chrome extension, it's easier than ever to interactively configure and experiment with the different tools and configuration options.

For private repositories, once you have found the setup that works for you, you can easily export it as a persistent configuration file and use it for automatic commands.

"},{"location":"chrome-extension/features/#qodo-merge-filters","title":"Qodo Merge filters","text":"

Qodo Merge filters is a sidepanel option that allows you to filter the different messages in the conversation tab.

For example, you can choose to present only messages from Qodo Merge, or filter those messages out to focus only on users' comments.

"},{"location":"chrome-extension/features/#enhanced-code-suggestions","title":"Enhanced code suggestions","text":"

The Qodo Merge Chrome extension adds the following capabilities to the code suggestions tool's comments:

"},{"location":"chrome-extension/options/","title":"Options","text":""},{"location":"chrome-extension/options/#options-and-configurations","title":"Options and Configurations","text":""},{"location":"chrome-extension/options/#accessing-the-options-page","title":"Accessing the Options Page","text":"

To access the options page for the Qodo Merge Chrome extension:

  1. Find the extension icon in your Chrome toolbar (usually in the top-right corner of your browser)
  2. Right-click on the extension icon
  3. Select \"Options\" from the context menu that appears

Alternatively, you can access the options page directly using this URL:

chrome-extension://ephlnjeghhogofkifjloamocljapahnl/options.html

"},{"location":"chrome-extension/options/#configuration-options","title":"Configuration Options","text":""},{"location":"chrome-extension/options/#api-base-host","title":"API Base Host","text":"

For single-tenant customers, you can configure the extension to communicate directly with your company's Qodo Merge server instance.

To set this up:

Note: The extension does not send your code to the server, but only triggers your previously installed Qodo Merge application.

"},{"location":"chrome-extension/options/#interface-options","title":"Interface Options","text":"

You can customize the extension's interface by:

Remember to click \"Save Settings\" after making any changes.

"},{"location":"core-abilities/","title":"Core Abilities","text":"

Qodo Merge utilizes a variety of core abilities to provide a comprehensive and efficient code review experience. These abilities include:

"},{"location":"core-abilities/#blogs","title":"Blogs","text":"

Here are some additional technical blogs from Qodo that delve deeper into the core capabilities and features of Large Language Models (LLMs) when applied to coding tasks. These resources provide more comprehensive insights into leveraging LLMs for software development.

"},{"location":"core-abilities/#code-generation-and-llms","title":"Code Generation and LLMs","text":""},{"location":"core-abilities/#development-processes","title":"Development Processes","text":""},{"location":"core-abilities/#cost-optimization","title":"Cost Optimization","text":""},{"location":"core-abilities/auto_approval/","title":"Auto-approval \ud83d\udc8e","text":"

Supported Git Platforms: GitHub, GitLab, Bitbucket

Under specific conditions, Qodo Merge can auto-approve a PR when a manual comment is invoked, or when the PR meets certain criteria.

To ensure safety, the auto-approval feature is disabled by default. To enable auto-approval features, you need to actively set one or both of the following options in a pre-defined configuration file:

[config]\nenable_comment_approval = true # For approval via comments\nenable_auto_approval = true   # For criteria-based auto-approval\n

Notes

"},{"location":"core-abilities/auto_approval/#approval-by-commenting","title":"Approval by commenting","text":"

To enable approval by commenting, set in the configuration file:

[config]\nenable_comment_approval = true\n

After enabling, by commenting on a PR:

/review auto_approve\n

Qodo Merge will approve the PR and add a comment with the reason for the approval.

"},{"location":"core-abilities/auto_approval/#auto-approval-when-the-pr-meets-certain-criteria","title":"Auto-approval when the PR meets certain criteria","text":"

To enable auto-approval based on specific criteria, you first need to enable the top-level flag:

[config]\nenable_auto_approval = true\n

There are two possible paths leading to this auto-approval - one via the review tool, and one via the improve tool. Each tool can independently trigger auto-approval.

"},{"location":"core-abilities/auto_approval/#auto-approval-via-the-review-tool","title":"Auto-approval via the review tool","text":""},{"location":"core-abilities/auto_approval/#auto-approval-via-the-improve-tool","title":"Auto-approval via the improve tool","text":"

PRs can be auto-approved when the improve tool doesn't find code suggestions. To enable this feature, set the following in the configuration file:

[config]\nenable_auto_approval = true\nauto_approve_for_no_suggestions = true\n
"},{"location":"core-abilities/auto_best_practices/","title":"Auto Best Practices \ud83d\udc8e","text":"

Supported Git Platforms: GitHub

"},{"location":"core-abilities/auto_best_practices/#overview","title":"Overview","text":"

Note - enabling a Wiki is required for this feature.

"},{"location":"core-abilities/auto_best_practices/#finding-code-problems-exploration-phase","title":"Finding Code Problems - Exploration Phase","text":"

The improve tool identifies potential issues, problems and bugs in Pull Request (PR) code changes. Rather than focusing on minor issues like code style or formatting, the tool intelligently analyzes code to detect meaningful problems.

The analysis intentionally takes a flexible, exploratory approach to identify meaningful potential issues, allowing the tool to surface relevant code suggestions without being constrained by predefined categories.

"},{"location":"core-abilities/auto_best_practices/#tracking-implemented-suggestions","title":"Tracking Implemented Suggestions","text":"

Qodo Merge features a novel tracking system that automatically detects when PR authors implement AI-generated code suggestions. All accepted suggestions are aggregated in a repository-specific wiki page called .pr_agent_accepted_suggestions

"},{"location":"core-abilities/auto_best_practices/#learning-and-applying-auto-best-practices","title":"Learning and Applying Auto Best Practices","text":"

Each month, Qodo Merge analyzes the collection of accepted suggestions to generate repository-specific best practices, stored in the .pr_agent_auto_best_practices wiki file. These best practices reflect recurring patterns in accepted code improvements.

The improve tool will incorporate these best practices as an additional analysis layer, checking PR code changes against known patterns of previously accepted improvements. This creates a two-phase analysis:

  1. Open exploration for general code issues
  2. Targeted checking against established best practices - exploiting the knowledge gained from past suggestions

By keeping these phases decoupled, the tool remains free to discover new or unseen issues and problems, while also learning from past experiences.

When presenting the suggestions generated by the improve tool, Qodo Merge will add a dedicated label for each suggestion generated from the auto best practices - 'Learned best practice':

"},{"location":"core-abilities/auto_best_practices/#auto-best-practices-vs-custom-best-practices","title":"Auto Best Practices vs Custom Best Practices","text":"

Teams and companies can also manually define their own custom best practices in Qodo Merge.

When custom best practices exist, Qodo Merge will still generate an 'auto best practices' wiki file, though it won't be used by the improve tool. However, this auto-generated file can still serve two valuable purposes:

  1. It can help enhance your custom best practices with additional insights derived from suggestions your team found valuable enough to implement
  2. It demonstrates effective patterns for writing AI-friendly best practices

Even when using custom best practices, we recommend regularly reviewing the auto best practices file to refine your custom rules.

"},{"location":"core-abilities/auto_best_practices/#relevant-configurations","title":"Relevant configurations","text":"
[auto_best_practices]\n# Enable/disable all auto best practices usage or generation\nenable_auto_best_practices = true\n\n# Enable/disable usage of the auto best practices file in the 'improve' tool\nutilize_auto_best_practices = true\n\n# Extra instructions to the auto best practices generation prompt\nextra_instructions = \"\"\n\n# Max number of patterns to be detected\nmax_patterns = 5\n
"},{"location":"core-abilities/chat_on_code_suggestions/","title":"Chat on code suggestions \ud83d\udc8e","text":"

Supported Git Platforms: GitHub, GitLab

"},{"location":"core-abilities/chat_on_code_suggestions/#overview","title":"Overview","text":"

Qodo Merge implements an orchestrator agent that enables interactive code discussions, listening and responding to comments without requiring explicit tool calls. The orchestrator intelligently analyzes your responses to determine if you want to implement a suggestion, ask a question, or request help, then delegates to the appropriate specialized tool.

To minimize unnecessary notifications and maintain focused discussions, the orchestrator agent will only respond to comments made directly within the inline code suggestion discussions it has created (/improve) or within discussions initiated by the /implement command.

"},{"location":"core-abilities/chat_on_code_suggestions/#getting-started","title":"Getting Started","text":""},{"location":"core-abilities/chat_on_code_suggestions/#setup","title":"Setup","text":"

Enable interactive code discussions by adding the following to your configuration file (default is True):

[pr_code_suggestions]\nenable_chat_in_code_suggestions = true\n
"},{"location":"core-abilities/chat_on_code_suggestions/#activation","title":"Activation","text":""},{"location":"core-abilities/chat_on_code_suggestions/#improve","title":"/improve","text":"

To obtain dynamic responses, the following steps are required:

  1. Run the /improve command (mostly automatic)
  2. Check the /improve recommendation checkboxes (Apply this suggestion) to have Qodo Merge generate a new inline code suggestion discussion
  3. The orchestrator agent will then automatically listen to and reply to comments within the discussion without requiring additional commands
"},{"location":"core-abilities/chat_on_code_suggestions/#implement","title":"/implement","text":"

To obtain dynamic responses, the following steps are required:

  1. Select code lines in the PR diff and run the /implement command
  2. Wait for Qodo Merge to generate a new inline code suggestion
  3. The orchestrator agent will then automatically listen to and reply to comments within the discussion without requiring additional commands
"},{"location":"core-abilities/chat_on_code_suggestions/#explore-the-available-interaction-patterns","title":"Explore the available interaction patterns","text":"

Tip: Direct the agent with keywords

Use \"implement\" or \"apply\" for code generation. Use \"explain\", \"why\", or \"how\" for information and help.

The available interaction patterns include: Asking for Details, Implementing Suggestions, and Providing Additional Help.

"},{"location":"core-abilities/code_validation/","title":"Code Validation \ud83d\udc8e","text":"

Supported Git Platforms: GitHub, GitLab, Bitbucket

"},{"location":"core-abilities/code_validation/#introduction","title":"Introduction","text":"

The Git environment usually represents the final stage before code enters production. Hence, detecting bugs and issues during the review process is critical.

The improve tool provides actionable code suggestions for your pull requests, aiming to help detect and fix bugs and problems. By default, suggestions appear as a comment in a table format:

"},{"location":"core-abilities/code_validation/#validation-of-code-suggestions","title":"Validation of Code Suggestions","text":"

Each suggestion in the table can be \"applied\" by clicking on the Apply this suggestion checkbox, converting it into a committable Git code change that can be committed directly to the PR. This approach allows you to fix issues without returning to your IDE for manual edits, making the process significantly faster and more convenient.

However, committing a suggestion in a Git environment carries more risk than in a local IDE, as you don't have the opportunity to fully run and test the code before committing.

To balance convenience with safety, Qodo Merge implements a dual validation system for each generated code suggestion:

1) Localization - Qodo Merge confirms that the suggestion's line numbers and surrounding code, as predicted by the model, actually match the repo code. This means that the model correctly identified the context and location of the code to be changed.

2) \"Compilation\" - Using static code analysis, Qodo Merge verifies that after applying the suggestion, the modified file will still be valid, meaning tree-sitter syntax processing will not throw an error. This process is relevant for multiple programming languages, see here for the full list of supported languages.

When a suggestion fails to meet these validation criteria, it may still provide valuable feedback, but isn't suitable for direct application to the PR. In such cases, Qodo Merge will omit the 'apply' checkbox and instead display:

[To ensure code accuracy, apply this suggestion manually]

All suggestions that pass these validations undergo a final stage of self-reflection, where the AI model evaluates, scores, and re-ranks its own suggestions, eliminating any that are irrelevant or incorrect. Read more about this process in the self-reflection page.

"},{"location":"core-abilities/code_validation/#conclusion","title":"Conclusion","text":"

The validation methods described above enhance the reliability of code suggestions and help PR authors determine which suggestions are safer to apply in the Git environment. Of course, additional factors should be considered, such as suggestion complexity and potential code impact.

Human judgment remains essential. After clicking 'apply', Qodo Merge still presents the 'before' and 'after' code snippets for review, allowing you to assess the changes before finalizing the commit.

"},{"location":"core-abilities/compression_strategy/","title":"Compression strategy","text":"

Supported Git Platforms: GitHub, GitLab, Bitbucket

"},{"location":"core-abilities/compression_strategy/#overview","title":"Overview","text":"

There are two scenarios:

  1. The PR is small enough to fit in a single prompt (including system and user prompt)
  2. The PR is too large to fit in a single prompt (including system and user prompt)

For both scenarios, we first use the following strategy

"},{"location":"core-abilities/compression_strategy/#repo-language-prioritization-strategy","title":"Repo language prioritization strategy","text":"

We prioritize the languages of the repo based on the following criteria:

  1. Exclude binary files and non-code files (e.g., images, PDFs, etc.)
  2. Identify the main languages used in the repo
  3. Sort the PR files by the most common languages in the repo, in descending order, e.g.: [[file.py, file2.py], [file3.js, file4.jsx], [readme.md]] (see the sketch below)
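A simplified sketch of this grouping, assuming each PR file can be mapped to a language by its extension (the extension map and function below are illustrative, not the tool's actual implementation):

from collections import Counter\n\n# Illustrative extension-to-language map (not the actual mapping used by the tool)\nEXT_TO_LANG = {\".py\": \"python\", \".js\": \"javascript\", \".jsx\": \"javascript\", \".md\": \"other\"}\n\ndef group_pr_files_by_repo_language(pr_files, repo_language_counts):\n    # Rank languages by how common they are in the repo (descending)\n    ranked = [lang for lang, _ in Counter(repo_language_counts).most_common()]\n    buckets = {lang: [] for lang in ranked}\n    for f in pr_files:\n        lang = EXT_TO_LANG.get(\".\" + f.rsplit(\".\", 1)[-1], \"other\")\n        buckets.setdefault(lang, []).append(f)\n    return [files for files in buckets.values() if files]\n\nprint(group_pr_files_by_repo_language(\n    [\"file.py\", \"file2.py\", \"file3.js\", \"file4.jsx\", \"readme.md\"],\n    {\"python\": 120, \"javascript\": 45, \"other\": 3},\n))  # [['file.py', 'file2.py'], ['file3.js', 'file4.jsx'], ['readme.md']]\n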
"},{"location":"core-abilities/compression_strategy/#small-pr","title":"Small PR","text":"

In this case, we can fit the entire PR in a single prompt:

  1. Exclude binary files and non-code files (e.g., images, PDFs, etc.)
  2. Expand the surrounding context of each patch to 3 lines above and below the patch
"},{"location":"core-abilities/compression_strategy/#large-pr","title":"Large PR","text":""},{"location":"core-abilities/compression_strategy/#motivation","title":"Motivation","text":"

Pull requests can be very long and contain a lot of information with varying degrees of relevance to the pr-agent. We want to pack as much relevant information as possible into a single LLM prompt.

"},{"location":"core-abilities/compression_strategy/#compression-strategy","title":"Compression strategy","text":"

We prioritize additions over deletions:

"},{"location":"core-abilities/compression_strategy/#adaptive-and-token-aware-file-patch-fitting","title":"Adaptive and token-aware file patch fitting","text":"

We use tiktoken to tokenize the patches after the modifications described above, and then apply the following strategy to fit the patches into the prompt (a sketch follows the list):

  1. Within each language, sort the files by the number of tokens in the file (in descending order)
  2. Iterate through the patches in the order described above
  3. Add the patches to the prompt until the prompt reaches a certain buffer from the max token length
  4. If there are still patches left, add the remaining patches as a list called other modified files to the prompt, until the prompt reaches the max token length (hard stop); skip the rest of the patches
  5. If we haven't reached the max token length, add the deleted files to the prompt, until the prompt reaches the max token length (hard stop); skip the rest of the patches
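A simplified sketch of this token-budget loop, assuming the patches arrive already sorted by language and size (the encoding name and buffer value are illustrative):

import tiktoken\n\nenc = tiktoken.get_encoding(\"cl100k_base\")  # illustrative encoding choice\n\ndef fit_patches(sorted_patches, max_tokens, buffer=1000):\n    # sorted_patches: list of (filename, patch_text) tuples, already ordered as described above\n    prompt_patches, other_modified_files, used = [], [], 0\n    for filename, patch in sorted_patches:\n        n_tokens = len(enc.encode(patch))\n        if used + n_tokens <= max_tokens - buffer:\n            prompt_patches.append(patch)  # full patch fits within the buffered budget\n            used += n_tokens\n        else:\n            name_tokens = len(enc.encode(filename))\n            if used + name_tokens >= max_tokens:\n                break  # hard stop: skip the rest of the patches\n            other_modified_files.append(filename)  # listed by name only\n            used += name_tokens\n    # deleted files would be appended in the same way if budget remains\n    return prompt_patches, other_modified_files\n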
"},{"location":"core-abilities/compression_strategy/#example","title":"Example","text":""},{"location":"core-abilities/dynamic_context/","title":"Dynamic context","text":"

Supported Git Platforms: GitHub, GitLab, Bitbucket

Qodo Merge uses an asymmetric and dynamic context strategy to improve AI analysis of code changes in pull requests. It provides more context before changes than after, and dynamically adjusts the context based on code structure (e.g., enclosing functions or classes). This approach balances providing sufficient context for accurate analysis, while avoiding needle-in-the-haystack information overload that could degrade AI performance or exceed token limits.

"},{"location":"core-abilities/dynamic_context/#introduction","title":"Introduction","text":"

Pull request code changes are retrieved in a unified diff format, showing three lines of context before and after each modified section, with additions marked by '+' and deletions by '-'.

@@ -12,5 +12,5 @@ def func1():\n code line that already existed in the file...\n code line that already existed in the file...\n code line that already existed in the file....\n-code line that was removed in the PR\n+new code line added in the PR\n code line that already existed in the file...\n code line that already existed in the file...\n code line that already existed in the file...\n\n@@ -26,2 +26,4 @@ def func2():\n...\n

This unified diff format can be challenging for AI models to interpret accurately, as it provides limited context for understanding the full scope of code changes. The presentation of code using '+', '-', and ' ' symbols to indicate additions, deletions, and unchanged lines respectively also differs from the standard code formatting typically used to train AI models.

"},{"location":"core-abilities/dynamic_context/#challenges-of-expanding-the-context-window","title":"Challenges of expanding the context window","text":"

While expanding the context window is technically feasible, it presents a more fundamental trade-off:

Pros:

Cons:

"},{"location":"core-abilities/dynamic_context/#asymmetric-and-dynamic-context","title":"Asymmetric and dynamic context","text":"

To address these challenges, Qodo Merge employs an asymmetric and dynamic context strategy, providing the model with more focused and relevant context information for each code change.

Asymmetric:

We start by recognizing that the context preceding a code change is typically more crucial for understanding the modification than the context following it. Consequently, Qodo Merge implements an asymmetric context policy, decoupling the context window into two distinct segments: one for the code before the change and another for the code after.

By independently adjusting each context window, Qodo Merge can supply the model with a more tailored and pertinent context for individual code changes.

Dynamic:

We also employ a \"dynamic\" context strategy. We start by recognizing that the optimal context for a code change often corresponds to its enclosing code component (e.g., function, class), rather than a fixed number of lines. Consequently, we dynamically adjust the context window based on the code's structure, ensuring the model receives the most pertinent information for each modification.

To prevent overwhelming the model with excessive context, we impose a limit on the number of lines searched when identifying the enclosing component. This balance allows for comprehensive understanding while maintaining efficiency and limiting context token usage.
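A simplified sketch of the dynamic part of this strategy, using a Python-only heuristic to look for an enclosing function or class (the real implementation is language-aware; the names and defaults below are illustrative):

def extend_context_before(file_lines, hunk_start, default_before=3, max_extra=8):\n    # Start from the default context window above the hunk\n    start = max(0, hunk_start - default_before)\n    # Walk upward, up to max_extra additional lines, looking for an enclosing component\n    for extra in range(1, max_extra + 1):\n        candidate = hunk_start - default_before - extra\n        if candidate < 0:\n            break\n        if file_lines[candidate].lstrip().startswith((\"def \", \"class \")):\n            start = candidate  # include the enclosing function/class header\n            break\n    return file_lines[start:hunk_start]\n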

"},{"location":"core-abilities/dynamic_context/#appendix-relevant-configuration-options","title":"Appendix - relevant configuration options","text":"
[config]\npatch_extension_skip_types =[\".md\",\".txt\"]  # Skip files with these extensions when trying to extend the context\nallow_dynamic_context=true                  # Allow dynamic context extension\nmax_extra_lines_before_dynamic_context = 8  # will try to include up to X extra lines before the hunk in the patch, until we reach an enclosing function or class\npatch_extra_lines_before = 3                # Number of extra lines (+3 default ones) to include before each hunk in the patch\npatch_extra_lines_after = 1                 # Number of extra lines (+3 default ones) to include after each hunk in the patch\n
"},{"location":"core-abilities/fetching_ticket_context/","title":"Fetching Ticket Context for PRs","text":"

Supported Git Platforms: GitHub, GitLab, Bitbucket

"},{"location":"core-abilities/fetching_ticket_context/#overview","title":"Overview","text":"

Qodo Merge streamlines code review workflows by seamlessly connecting with multiple ticket management systems. This integration enriches the review process by automatically surfacing relevant ticket information and context alongside code changes.

Ticket systems supported:

Ticket data fetched:

  1. Ticket Title
  2. Ticket Description
  3. Custom Fields (Acceptance criteria)
  4. Subtasks (linked tasks)
  5. Labels
  6. Attached Images/Screenshots
"},{"location":"core-abilities/fetching_ticket_context/#affected-tools","title":"Affected Tools","text":"

Ticket Recognition Requirements:

"},{"location":"core-abilities/fetching_ticket_context/#describe-tool","title":"Describe tool","text":"

Qodo Merge will recognize the ticket and use the ticket content (title, description, labels) to provide additional context for the code changes. By understanding the reasoning and intent behind modifications, the LLM can offer more insightful and relevant code analysis.

"},{"location":"core-abilities/fetching_ticket_context/#review-tool","title":"Review tool","text":"

Similarly to the describe tool, the review tool will use the ticket content to provide additional context for the code changes.

In addition, this feature will evaluate how well a Pull Request (PR) adheres to its original purpose/intent as defined by the associated ticket or issue mentioned in the PR description. Each ticket will be assigned a label (Compliance/Alignment level) indicating the degree to which the PR fulfills its original purpose.

A PR Code Verified label indicates the PR code meets ticket requirements, but requires additional manual testing beyond the code scope. For example - validating UI display across different environments (Mac, Windows, mobile, etc.).

"},{"location":"core-abilities/fetching_ticket_context/#configuration-options","title":"Configuration options","text":""},{"location":"core-abilities/fetching_ticket_context/#github-issues-integration","title":"GitHub Issues Integration","text":"

Qodo Merge will automatically recognize GitHub issues mentioned in the PR description and fetch the issue content. Examples of valid GitHub issue references:

Since Qodo Merge is integrated with GitHub, it doesn't require any additional configuration to fetch GitHub issues.

"},{"location":"core-abilities/fetching_ticket_context/#jira-integration","title":"Jira Integration \ud83d\udc8e","text":"

We support both Jira Cloud and Jira Server/Data Center.

"},{"location":"core-abilities/fetching_ticket_context/#jira-cloud","title":"Jira Cloud","text":"

There are two ways to authenticate with Jira Cloud:

1) Jira App Authentication

The recommended way to authenticate with Jira Cloud is to install the Qodo Merge app in your Jira Cloud instance. This will allow Qodo Merge to access Jira data on your behalf.

Installation steps:

  1. Go to the Qodo Merge integrations page

  2. Click on the Connect Jira Cloud button to connect the Jira Cloud app

  3. Click the accept button.

  4. After installing the app, you will be redirected to the Qodo Merge registration page, where you will see a success message.

  5. Now Qodo Merge will be able to fetch Jira ticket context for your PRs.

2) Email/Token Authentication

You can create an API token from your Atlassian account:

  1. Log in to https://id.atlassian.com/manage-profile/security/api-tokens.

  2. Click Create API token.

  3. From the dialog that appears, enter a name for your new token and click Create.

  4. Click Copy to clipboard.

  5. In your configuration file, add the following lines:
[jira]\njira_api_token = \"YOUR_API_TOKEN\"\njira_api_email = \"YOUR_EMAIL\"\n
"},{"location":"core-abilities/fetching_ticket_context/#jira-data-centerserver","title":"Jira Data Center/Server","text":""},{"location":"core-abilities/fetching_ticket_context/#using-basic-authentication-for-jira-data-centerserver","title":"Using Basic Authentication for Jira Data Center/Server","text":"

You can use your Jira username and password to authenticate with Jira Data Center/Server.

In your Configuration file/Environment variables/Secrets file, add the following lines:

jira_api_email = \"your_username\"\njira_api_token = \"your_password\"\n

(Note that the 'jira_api_email' field is used for the username, and the 'jira_api_token' field is used for the password.)

"},{"location":"core-abilities/fetching_ticket_context/#validating-basic-authentication-via-python-script","title":"Validating Basic authentication via Python script","text":"

If you are facing issues retrieving tickets in Qodo Merge with Basic auth, you can validate the flow using a Python script. The following steps will help you check whether Basic auth is working correctly and whether you can access the Jira ticket details:

  1. Run pip install jira==3.8.0

  2. Run the following Python script (after replacing the placeholders with your actual values):

Script to validate basic auth
from jira import JIRA\n\n\nif __name__ == \"__main__\":\n    try:\n        # Jira server URL\n        server = \"https://...\"\n        # Basic auth\n        username = \"...\"\n        password = \"...\"\n        # Jira ticket code (e.g. \"PROJ-123\")\n        ticket_id = \"...\"\n\n        print(\"Initializing JiraServerTicketProvider with JIRA server\")\n        # Initialize JIRA client\n        jira = JIRA(\n            server=server,\n            basic_auth=(username, password),\n            timeout=30\n        )\n        if jira:\n            print(f\"JIRA client initialized successfully\")\n        else:\n            print(\"Error initializing JIRA client\")\n\n        # Fetch ticket details\n        ticket = jira.issue(ticket_id)\n        print(f\"Ticket title: {ticket.fields.summary}\")\n\n    except Exception as e:\n        print(f\"Error fetching JIRA ticket details: {e}\")\n
"},{"location":"core-abilities/fetching_ticket_context/#using-a-personal-access-token-pat-for-jira-data-centerserver","title":"Using a Personal Access Token (PAT) for Jira Data Center/Server","text":"
  1. Create a Personal Access Token (PAT) in your Jira account
  2. In your Configuration file/Environment variables/Secrets file, add the following lines:
[jira]\njira_base_url = \"YOUR_JIRA_BASE_URL\" # e.g. https://jira.example.com\njira_api_token = \"YOUR_API_TOKEN\"\n
"},{"location":"core-abilities/fetching_ticket_context/#validating-pat-token-via-python-script","title":"Validating PAT token via Python script","text":"

If you are facing issues retrieving tickets in Qodo Merge with a PAT token, you can validate the flow using a Python script. The following steps will help you check whether the token is working correctly and whether you can access the Jira ticket details:

  1. Run pip install jira==3.8.0

  2. Run the following Python script (after replacing the placeholders with your actual values):

Script to validate PAT token
from jira import JIRA\n\n\nif __name__ == \"__main__\":\n    try:\n        # Jira server URL\n        server = \"https://...\"\n        # Jira PAT token\n        token_auth = \"...\"\n        # Jira ticket code (e.g. \"PROJ-123\")\n        ticket_id = \"...\"\n\n        print(\"Initializing JiraServerTicketProvider with JIRA server\")\n        # Initialize JIRA client\n        jira = JIRA(\n            server=server,\n            token_auth=token_auth,\n            timeout=30\n        )\n        if jira:\n            print(f\"JIRA client initialized successfully\")\n        else:\n            print(\"Error initializing JIRA client\")\n\n        # Fetch ticket details\n        ticket = jira.issue(ticket_id)\n        print(f\"Ticket title: {ticket.fields.summary}\")\n\n    except Exception as e:\n        print(f\"Error fetching JIRA ticket details: {e}\")\n
"},{"location":"core-abilities/fetching_ticket_context/#multi-jira-server-configuration","title":"Multi-JIRA Server Configuration \ud83d\udc8e","text":"

Qodo Merge supports connecting to multiple JIRA servers using different authentication methods.

Three authentication options are available: Email/Token (Basic Auth), PAT Auth, and Jira Cloud App.

Configure multiple servers using Email/Token authentication:

Example Configuration:

[jira]\n# Server URLs\njira_servers = [\"https://company.atlassian.net\", \"https://datacenter.jira.com\"]\n\n# API tokens/passwords\njira_api_token = [\"cloud_api_token_here\", \"datacenter_password\"]\n\n# Emails/usernames (both required)\njira_api_email = [\"user@company.com\", \"datacenter_username\"]\n\n# Default server for ticket IDs\njira_base_url = \"https://company.atlassian.net\"\n

Configure multiple servers using Personal Access Token authentication:

Example Configuration:

[jira]\n# Server URLs\njira_servers = [\"https://server1.jira.com\", \"https://server2.jira.com\"]\n\n# PAT tokens only\njira_api_token = [\"pat_token_1\", \"pat_token_2\"]\n\n# Default server for ticket IDs\njira_base_url = \"https://server1.jira.com\"\n

Mixed Authentication (Email/Token + PAT):

[jira]\njira_servers = [\"https://company.atlassian.net\", \"https://server.jira.com\"]\njira_api_token = [\"cloud_api_token\", \"server_pat_token\"]\njira_api_email = [\"user@company.com\", \"\"]  # Empty for PAT\n

For Jira Cloud instances using App Authentication:

  1. Install the Qodo Merge app on each JIRA Cloud instance you want to connect to
  2. Set the default server for ticket ID resolution:
[jira]\njira_base_url = \"https://primary-team.atlassian.net\"\n

Full URLs (e.g., https://other-team.atlassian.net/browse/TASK-456) will automatically use the correct connected instance.

"},{"location":"core-abilities/fetching_ticket_context/#how-to-link-a-pr-to-a-jira-ticket","title":"How to link a PR to a Jira ticket","text":"

To integrate with Jira, you can link your PR to a ticket using either of these methods:

Method 1: Description Reference:

Include a ticket reference in your PR description, using either the complete URL format https://<JIRA_ORG>.atlassian.net/browse/ISSUE-123 or the shortened ticket ID ISSUE-123 (without prefix or suffix for the shortened ID).

Method 2: Branch Name Detection:

Name your branch with the ticket ID as a prefix (e.g., ISSUE-123-feature-description or ISSUE-123/feature-description).

Jira Base URL

For shortened ticket IDs or branch detection (method 2 for JIRA cloud), you must configure the Jira base URL in your configuration file under the [jira] section:

[jira]\njira_base_url = \"https://<JIRA_ORG>.atlassian.net\"\n
Where <JIRA_ORG> is your Jira organization identifier (e.g., mycompany for https://mycompany.atlassian.net).
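A simplified sketch of how such ticket references can be detected in a PR description or branch name (the pattern and helper below are illustrative, not the tool's actual implementation):

import re\n\nTICKET_PATTERN = re.compile(r\"[A-Z][A-Z0-9]+-[0-9]+\")  # e.g. ISSUE-123\n\ndef extract_ticket_ids(pr_description, branch_name):\n    # Method 1: full ticket URL or shortened ID anywhere in the PR description\n    found = TICKET_PATTERN.findall(pr_description)\n    # Method 2: ticket ID used as a branch-name prefix (e.g. ISSUE-123-feature or ISSUE-123/feature)\n    m = re.match(r\"([A-Z][A-Z0-9]+-[0-9]+)[-/]\", branch_name)\n    if m:\n        found.append(m.group(1))\n    return list(dict.fromkeys(found))  # de-duplicate, preserving order\n\nprint(extract_ticket_ids(\n    \"Fixes https://mycompany.atlassian.net/browse/ISSUE-123\",\n    \"ISSUE-456-feature-description\",\n))  # ['ISSUE-123', 'ISSUE-456']\n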

"},{"location":"core-abilities/fetching_ticket_context/#linear-integration","title":"Linear Integration \ud83d\udc8e","text":""},{"location":"core-abilities/fetching_ticket_context/#linear-app-authentication","title":"Linear App Authentication","text":"

The recommended way to authenticate with Linear is to connect the Linear app through the Qodo Merge portal.

Installation steps:

  1. Go to the Qodo Merge integrations page

  2. Navigate to the Integrations tab

  3. Click on the Linear button to connect the Linear app

  4. Follow the authentication flow to authorize Qodo Merge to access your Linear workspace

  5. Once connected, Qodo Merge will be able to fetch Linear ticket context for your PRs

"},{"location":"core-abilities/fetching_ticket_context/#how-to-link-a-pr-to-a-linear-ticket","title":"How to link a PR to a Linear ticket","text":"

Qodo Merge will automatically detect Linear tickets using either of these methods:

Method 1: Description Reference:

Include a ticket reference in your PR description, using either:

  - The complete Linear ticket URL: https://linear.app/[ORG_ID]/issue/[TICKET_ID]
  - The shortened ticket ID: [TICKET_ID] (e.g., ABC-123); this requires the linear_base_url configuration (see below)

Method 2: Branch Name Detection:

Name your branch with the ticket ID as a prefix (e.g., ABC-123-feature-description or feature/ABC-123/feature-description).

Linear Base URL

For shortened ticket IDs or branch detection (method 2), you must configure the Linear base URL in your configuration file under the [linear] section:

[linear]\nlinear_base_url = \"https://linear.app/[ORG_ID]\"\n

Replace [ORG_ID] with your Linear organization identifier.

"},{"location":"core-abilities/impact_evaluation/","title":"Impact Evaluation \ud83d\udc8e","text":"

Supported Git Platforms: GitHub, GitLab, Bitbucket

Demonstrating the return on investment (ROI) of AI-powered initiatives is crucial for modern organizations. To address this need, Qodo Merge has developed AI impact measurement tools and metrics, providing advanced analytics to help businesses quantify the tangible benefits of AI adoption in their PR review process.

"},{"location":"core-abilities/impact_evaluation/#auto-impact-validator-real-time-tracking-of-implemented-qodo-merge-suggestions","title":"Auto Impact Validator - Real-Time Tracking of Implemented Qodo Merge Suggestions","text":""},{"location":"core-abilities/impact_evaluation/#how-it-works","title":"How It Works","text":"

When a user pushes a new commit to the pull request, Qodo Merge automatically compares the updated code against the previous suggestions, marking them as implemented if the changes address these recommendations, whether directly or indirectly:

  1. Direct Implementation: The user directly addresses the suggestion as-is in the PR, either by clicking on the \"apply code suggestion\" checkbox or by making the changes manually.
  2. Indirect Implementation: Qodo Merge recognizes when a suggestion's intent is fulfilled, even if the exact code changes differ from the original recommendation. It marks these suggestions as implemented, acknowledging that users may achieve the same goal through alternative solutions.
"},{"location":"core-abilities/impact_evaluation/#real-time-visual-feedback","title":"Real-Time Visual Feedback","text":"

Upon confirming that a suggestion was implemented, Qodo Merge automatically adds a \u2705 (check mark) to the relevant suggestion, enabling transparent tracking of Qodo Merge's impact analysis. Qodo Merge will also add, inside the relevant suggestions, an explanation of how the new code was impacted by each suggestion.

"},{"location":"core-abilities/impact_evaluation/#dashboard-metrics","title":"Dashboard Metrics","text":"

The dashboard provides macro-level insights into the overall impact of Qodo Merge on the pull-request process with key productivity metrics.

By offering clear, data-driven evidence of Qodo Merge's impact, it empowers leadership teams to make informed decisions about the tool's effectiveness and ROI.

Here are key metrics that the dashboard tracks:

"},{"location":"core-abilities/impact_evaluation/#qodo-merge-impacts-per-1k-lines","title":"Qodo Merge Impacts per 1K Lines","text":"

Explanation: for every 1K lines of code (additions/edits), Qodo Merge had on average ~X suggestions implemented.
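For illustration, with made-up numbers, the metric is simply the number of implemented suggestions divided by the thousands of added lines:

implemented_suggestions = 45   # suggestions marked as implemented in the measured period (hypothetical)\nlines_added = 30000            # lines of code added/edited in the same period (hypothetical)\nimpacts_per_1k = implemented_suggestions / (lines_added / 1000)\nprint(impacts_per_1k)          # 1.5 implemented suggestions per 1K added lines\n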

Why This Metric Matters:

  1. Standardized and Comparable Measurement: By measuring impacts per 1K lines of code additions, you create a standardized metric that can be compared across different projects, teams, customers, and time periods. This standardization is crucial for meaningful analysis, benchmarking, and identifying where Qodo Merge is most effective.
  2. Accounts for PR Variability and Incentivizes Quality: This metric addresses the fact that \"Not all PRs are created equal.\" By normalizing against lines of code rather than PR count, you account for the variability in PR sizes and focus on the quality and impact of suggestions rather than just the number of PRs affected.
  3. Quantifies Value and ROI: The metric directly correlates with the value Qodo Merge is providing, showing how frequently it offers improvements relative to the amount of new code being written. This provides a clear, quantifiable way to demonstrate Qodo Merge's return on investment to stakeholders.
"},{"location":"core-abilities/impact_evaluation/#suggestion-effectiveness-across-categories","title":"Suggestion Effectiveness Across Categories","text":"

Explanation: This chart illustrates the distribution of implemented suggestions across different categories, enabling teams to better understand Qodo Merge's impact on various aspects of code quality and development practices.

"},{"location":"core-abilities/impact_evaluation/#suggestion-score-distribution","title":"Suggestion Score Distribution","text":"

Explanation: The distribution of the suggestion score for the implemented suggestions, ensuring that higher-scored suggestions truly represent more significant improvements.

"},{"location":"core-abilities/incremental_update/","title":"Incremental Update \ud83d\udc8e","text":"

Supported Git Platforms: GitHub, GitLab (Both cloud & server. For server: Version 17 and above)

"},{"location":"core-abilities/incremental_update/#overview","title":"Overview","text":"

The Incremental Update feature helps users focus on feedback for their newest changes, making large PRs more manageable.

"},{"location":"core-abilities/incremental_update/#how-it-works","title":"How it works","text":"Update Option on Subsequent CommitsGeneration of Incremental Update

Whenever new commits are pushed following a recent code suggestions report for this PR, an Update button appears (as seen above).

Once the user clicks on the button:

"},{"location":"core-abilities/incremental_update/#benefits-for-developers","title":"Benefits for Developers","text":""},{"location":"core-abilities/interactivity/","title":"Interactivity \ud83d\udc8e","text":"

Supported Git Platforms: GitHub, GitLab

"},{"location":"core-abilities/interactivity/#overview","title":"Overview","text":"

Qodo Merge transforms static code reviews into interactive experiences by enabling direct actions from pull request (PR) comments. Developers can immediately trigger actions and apply changes with simple checkbox clicks.

This focused workflow maintains context while dramatically reducing the time between PR creation and final merge. The approach eliminates manual steps, provides clear visual indicators, and creates immediate feedback loops all within the same interface.

"},{"location":"core-abilities/interactivity/#key-interactive-features","title":"Key Interactive Features","text":""},{"location":"core-abilities/interactivity/#1-interactive-improve-tool","title":"1. Interactive /improve Tool","text":"

The /improve command delivers a comprehensive interactive experience:

"},{"location":"core-abilities/interactivity/#2-interactive-analyze-tool","title":"2. Interactive /analyze Tool","text":"

The /analyze command provides component-level analysis with interactive options for each identified code component:

"},{"location":"core-abilities/interactivity/#3-interactive-help-tool","title":"3. Interactive /help Tool","text":"

The /help command not only lists available tools and their descriptions but also enables immediate tool invocation through interactive checkboxes. When a user checks a tool's checkbox, Qodo Merge instantly triggers that tool without requiring additional commands. This transforms the standard help menu into an interactive launch pad for all Qodo Merge capabilities, eliminating context switching by keeping developers within their PR workflow.

"},{"location":"core-abilities/metadata/","title":"Local and global metadata injection with multi-stage analysis","text":"

Supported Git Platforms: GitHub, GitLab, Bitbucket

1. Qodo Merge initially retrieves the following data for each PR:

Tip: Organization-level metadata

In addition to the inputs above, Qodo Merge can incorporate supplementary preferences provided by the user, like extra_instructions and organization best practices. This information can be used to enhance the PR analysis.

2. By default, the first command that Qodo Merge executes is describe, which generates three types of outputs:

These AI-generated outputs are now considered part of the PR metadata, and can be used in subsequent commands like review and improve. This effectively enables multi-stage chain-of-thought analysis, without making additional API calls that would cost time and money.

For example, when generating code suggestions for different files, Qodo Merge can inject the AI-generated \"Changes walkthrough\" file summary in the prompt:

## File: 'src/file1.py'\n### AI-generated file summary:\n- edited function `func1` that does X\n- Removed function `func2` that was not used\n- ....\n\n@@ ... @@ def func1():\n__new hunk__\n11  unchanged code line0\n12  unchanged code line1\n13 +new code line2 added\n14  unchanged code line3\n__old hunk__\n unchanged code line0\n unchanged code line1\n-old code line2 removed\n unchanged code line3\n\n@@ ... @@ def func2():\n__new hunk__\n...\n__old hunk__\n...\n

3. The entire PR files that were retrieved are also used to expand and enhance the PR context (see Dynamic Context).

4. All the metadata described above represents several levels of cumulative analysis, ranging from hunk level, to file level, to PR level, to organization level. This comprehensive approach enables Qodo Merge AI models to generate more precise and contextually relevant suggestions and feedback.

"},{"location":"core-abilities/rag_context_enrichment/","title":"RAG Context Enrichment \ud83d\udc8e","text":"

Supported Git Platforms: GitHub, Bitbucket Data Center

Prerequisites

"},{"location":"core-abilities/rag_context_enrichment/#overview","title":"Overview","text":""},{"location":"core-abilities/rag_context_enrichment/#what-is-rag-context-enrichment","title":"What is RAG Context Enrichment?","text":"

A feature that enhances AI analysis by retrieving and referencing relevant code patterns from your project, enabling context-aware insights during code reviews.

"},{"location":"core-abilities/rag_context_enrichment/#how-does-rag-context-enrichment-work","title":"How does RAG Context Enrichment work?","text":"

Using Retrieval-Augmented Generation (RAG), it searches your configured repositories for contextually relevant code segments, enriching pull request (PR) insights and accelerating review accuracy.

"},{"location":"core-abilities/rag_context_enrichment/#getting-started","title":"Getting started","text":""},{"location":"core-abilities/rag_context_enrichment/#configuration-options","title":"Configuration options","text":"

In order to enable the RAG feature, add the following lines to your configuration file:

[rag_arguments]\nenable_rag=true\n
RAG Arguments Options

enable_rag: If set to true, repository enrichment using RAG will be enabled. Default is false.

rag_repo_list: A list of repositories that will be used by the semantic search for RAG. Use ['all'] to consider the entire codebase or a select list of repositories, for example: ['my-org/my-repo', ...]. Default: the repository from which the PR was opened.

"},{"location":"core-abilities/rag_context_enrichment/#applications","title":"Applications","text":"

RAG capability is exclusively available in the following tools:

/review, /implement, and /ask.

The /review tool offers the Focus area from RAG data which contains feedback based on the RAG references analysis. The complete list of references found relevant to the PR will be shown in the References section, helping developers understand the broader context by exploring the provided references.

The /implement tool utilizes the RAG feature to provide comprehensive context of the repository codebase, allowing it to generate more refined code output. The References section contains links to the content used to support the code generation.

The /ask tool can access broader repository context through the RAG feature when answering questions that go beyond the PR scope alone. The References section displays the additional repository content consulted to formulate the answer.

"},{"location":"core-abilities/rag_context_enrichment/#limitations","title":"Limitations","text":""},{"location":"core-abilities/rag_context_enrichment/#querying-the-codebase-presents-significant-challenges","title":"Querying the codebase presents significant challenges","text":""},{"location":"core-abilities/rag_context_enrichment/#this-feature-has-several-requirements-and-restrictions","title":"This feature has several requirements and restrictions","text":""},{"location":"core-abilities/self_reflection/","title":"Self-reflection","text":"

Supported Git Platforms: GitHub, GitLab, Bitbucket

Qodo Merge implements a self-reflection process where the AI model reflects, scores, and re-ranks its own suggestions, eliminating irrelevant or incorrect ones. This approach improves the quality and relevance of suggestions, saving users time and enhancing their experience. Configuration options allow users to set a score threshold for further filtering out suggestions.

"},{"location":"core-abilities/self_reflection/#introduction-efficient-review-with-hierarchical-presentation","title":"Introduction - Efficient Review with Hierarchical Presentation","text":"

Given that not all generated code suggestions will be relevant, it is crucial to enable users to review them in a fast and efficient way, allowing quick identification and filtering of non-applicable ones.

To achieve this goal, Qodo Merge offers a dedicated hierarchical structure when presenting suggestions to users:

Fast Review

This hierarchical structure is designed to facilitate rapid review of each suggestion, with users spending an average of ~5-10 seconds per item.

"},{"location":"core-abilities/self_reflection/#self-reflection-and-re-ranking","title":"Self-reflection and Re-ranking","text":"

The AI model is initially tasked with generating suggestions, and outputting them in order of importance. However, in practice we observe that models often struggle to simultaneously generate high-quality code suggestions and rank them well in a single pass. Furthermore, the initial set of generated suggestions sometimes contains easily identifiable errors.

To address these issues, we implemented a \"self-reflection\" process that refines suggestion ranking and eliminates irrelevant or incorrect proposals. This process consists of the following steps:

  1. Presenting the generated suggestions to the model in a follow-up call.
  2. Instructing the model to score each suggestion on a scale of 0-10 and provide a rationale for the assigned score.
  3. Utilizing these scores to re-rank the suggestions and filter out incorrect ones (with a score of 0).
  4. Optionally, filtering out all suggestions below a user-defined score threshold.

Note that presenting all generated suggestions simultaneously provides the model with a comprehensive context, enabling it to make more informed decisions compared to evaluating each suggestion individually.

To conclude, the self-reflection process enables Qodo Merge to prioritize suggestions based on their importance, eliminate inaccurate or irrelevant proposals, and optionally exclude suggestions that fall below a specified threshold of significance. This results in a more refined and valuable set of suggestions for the user, saving time and improving the overall experience.
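A simplified sketch of the re-ranking and filtering step, assuming the follow-up call has returned a 0-10 score per suggestion (names and sample data are illustrative):

def rerank_and_filter(suggestions, scores, score_threshold=0):\n    # Pair each suggestion with its self-reflection score (0-10)\n    scored = list(zip(suggestions, scores))\n    # Drop suggestions scored 0 (judged incorrect) and any below the user-defined threshold\n    kept = [(s, score) for s, score in scored if score > 0 and score >= score_threshold]\n    # Re-rank by score, highest first\n    kept.sort(key=lambda pair: pair[1], reverse=True)\n    return [s for s, _ in kept]\n\nprint(rerank_and_filter(\n    [\"use a context manager\", \"rename a local variable\", \"irrelevant suggestion\"],\n    [7, 4, 0],\n    score_threshold=5,\n))  # ['use a context manager']\n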

"},{"location":"core-abilities/self_reflection/#example-results","title":"Example Results","text":""},{"location":"core-abilities/self_reflection/#appendix-relevant-configuration-options","title":"Appendix - Relevant Configuration Options","text":"
[pr_code_suggestions]\nsuggestions_score_threshold = 0 # Filter out suggestions with a score below this threshold (0-10)\n
"},{"location":"core-abilities/static_code_analysis/","title":"Static Code Analysis \ud83d\udc8e","text":"

Supported Git Platforms: GitHub, GitLab, Bitbucket

By combining static code analysis with LLM capabilities, Qodo Merge can provide a comprehensive analysis of the PR code changes on a component level.

It scans the PR code changes, finds all the code components (methods, functions, classes) that changed, and enables you to interactively generate tests, docs, code suggestions, and similar code searches for each component.

Languages that are currently supported:

Python, Java, C++, JavaScript, TypeScript, C#, Go.

"},{"location":"core-abilities/static_code_analysis/#capabilities","title":"Capabilities","text":""},{"location":"core-abilities/static_code_analysis/#analyze-pr","title":"Analyze PR","text":"

The analyze tool enables you to interactively generate tests, docs, code suggestions, and similar code searches for each component that changed in the PR. It can be invoked manually by commenting on any PR:

/analyze\n

An example result:

Clicking on each checkbox will trigger the relevant tool for the selected component.

"},{"location":"core-abilities/static_code_analysis/#generate-tests","title":"Generate Tests","text":"

The test tool generates tests for a selected component, based on the PR code changes. It can be invoked manually by commenting on any PR:

/test component_name\n

where 'component_name' is the name of a specific component in the PR. The tool can also be triggered interactively by using the analyze tool.

"},{"location":"core-abilities/static_code_analysis/#generate-docs-for-a-component","title":"Generate Docs for a Component","text":"

The add_docs tool scans the PR code changes and automatically generates docstrings for any code components that changed in the PR. It can be invoked manually by commenting on any PR:

/add_docs component_name\n

Alternatively, it can be triggered interactively by using the analyze tool.

"},{"location":"core-abilities/static_code_analysis/#generate-code-suggestions-for-a-component","title":"Generate Code Suggestions for a Component","text":"

The improve_component tool generates code suggestions for a specific code component that changed in the PR. It can be invoked manually by commenting on any PR:

/improve_component component_name\n

Alternatively, it can be triggered interactively by using the analyze tool.

"},{"location":"core-abilities/static_code_analysis/#find-similar-code","title":"Find Similar Code","text":"

The similar code tool retrieves the most similar code components from inside the organization's codebase or from open-source code, including details about the license associated with each repository.

For example:

Global Search for a method called chat_completion:

"},{"location":"faq/","title":"FAQ","text":"Q: Can Qodo Merge serve as a substitute for a human reviewer? Q: I received an incorrect or irrelevant suggestion. Why? Q: How can I get more tailored suggestions? Q: Will you store my code? Are you using my code to train models? Q: Can I use my own LLM keys with Qodo Merge? Q: Can Qodo Merge review draft/offline PRs? Q: Can the 'Review effort' feedback be calibrated or customized?"},{"location":"faq/#answer1","title":"Answer:1","text":"

Qodo Merge is designed to assist, not replace, human reviewers.

Reviewing PRs is a tedious and time-consuming task, often seen as a "chore". In addition, the longer the PR, the shorter the relative feedback tends to be, since long PRs can overwhelm reviewers, both in terms of technical difficulty and actual review time. Qodo Merge aims to address these pain points, and to assist and empower both the PR author and reviewer.

However, Qodo Merge has built-in safeguards to ensure the developer remains in the driver's seat. For example:

  1. Preserves user's original PR header
  2. Places user's description above the AI-generated PR description
  3. Won't approve PRs; approval remains reviewer's responsibility
  4. The code suggestions are optional, and aim to:

Read more about this issue in our blog

"},{"location":"faq/#answer2","title":"Answer:2","text":""},{"location":"faq/#answer3","title":"Answer:3","text":"

See here for more information on how to use the extra_instructions and best_practices configuration options, to guide the model to more tailored suggestions.

"},{"location":"faq/#answer4","title":"Answer:4","text":"

No. Qodo Merge's strict privacy policy ensures that your code is not stored or used for training purposes.

For a detailed overview of our data privacy policy, please refer to this link

"},{"location":"faq/#answer5","title":"Answer:5","text":"

When you self-host the open-source version, you use your own keys.

The Qodo Merge SaaS deployment is a hosted version of Qodo Merge, where Qodo manages the infrastructure and the keys. For enterprise customers, on-prem deployment is also available. Contact us for more information.

"},{"location":"faq/#answer6","title":"Answer:6","text":"

Yes. While Qodo Merge won't automatically review draft PRs, you can still get feedback by manually requesting it through online commenting.

For active PRs, you can customize the automatic feedback settings here to match your team's workflow.

"},{"location":"faq/#answer7","title":"Answer:7","text":"

Yes, you can customize review effort estimates using the extra_instructions configuration option (see documentation).

Example mapping:

Note: The effort levels (1-5) are primarily meant for comparative purposes, helping teams prioritize reviewing smaller PRs first. The actual review duration may vary, as the focus is on providing consistent relative effort estimates.

"},{"location":"installation/","title":"Installation","text":""},{"location":"installation/#self-hosted-pr-agent","title":"Self-hosted PR-Agent","text":"

There are several ways to use self-hosted PR-Agent:

"},{"location":"installation/#qodo-merge","title":"Qodo Merge \ud83d\udc8e","text":"

Qodo Merge, an app hosted by QodoAI for GitHub\\GitLab\\BitBucket, is also available. With Qodo Merge, installation is as simple as adding the Qodo Merge app to your relevant repositories. See here for more details.

"},{"location":"installation/azure/","title":"Azure","text":""},{"location":"installation/azure/#azure-devops-pipeline","title":"Azure DevOps Pipeline","text":"

You can use a pre-built Action Docker image to run PR-Agent as an Azure DevOps pipeline. Add the following file to your repository under azure-pipelines.yml:

# Opt out of CI triggers\ntrigger: none\n\n# Configure PR trigger\npr:\n  branches:\n    include:\n    - '*'\n  autoCancel: true\n  drafts: false\n\nstages:\n- stage: pr_agent\n  displayName: 'PR Agent Stage'\n  jobs:\n  - job: pr_agent_job\n    displayName: 'PR Agent Job'\n    pool:\n      vmImage: 'ubuntu-latest'\n    container:\n      image: codiumai/pr-agent:latest\n      options: --entrypoint \"\"\n    variables:\n      - group: pr_agent\n    steps:\n    - script: |\n        echo \"Running PR Agent action step\"\n\n        # Construct PR_URL\n        PR_URL=\"${SYSTEM_COLLECTIONURI}${SYSTEM_TEAMPROJECT}/_git/${BUILD_REPOSITORY_NAME}/pullrequest/${SYSTEM_PULLREQUEST_PULLREQUESTID}\"\n        echo \"PR_URL=$PR_URL\"\n\n        # Extract organization URL from System.CollectionUri\n        ORG_URL=$(echo \"$(System.CollectionUri)\" | sed 's/\\/$//') # Remove trailing slash if present\n        echo \"Organization URL: $ORG_URL\"\n\n        export azure_devops__org=\"$ORG_URL\"\n        export config__git_provider=\"azure\"\n\n        pr-agent --pr_url=\"$PR_URL\" describe\n        pr-agent --pr_url=\"$PR_URL\" review\n        pr-agent --pr_url=\"$PR_URL\" improve\n      env:\n        azure_devops__pat: $(azure_devops_pat)\n        openai__key: $(OPENAI_KEY)\n      displayName: 'Run Qodo Merge'\n

This script will run Qodo Merge on every new pull request, with the improve, review, and describe commands. Note that you need to define the azure_devops_pat and OPENAI_KEY variables in the Azure DevOps pipeline settings (Pipelines -> Library -> + Variable group):

Make sure to give pipeline permissions to the pr_agent variable group.
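If you prefer the command line, the variable group can also be created with the Azure DevOps CLI extension. This is only a sketch, assuming az devops is installed and configured; the values are placeholders, and you should still mark them as secret in the Library UI afterwards:

az pipelines variable-group create --name pr_agent --variables azure_devops_pat="<your PAT>" OPENAI_KEY="<your key>" --authorize true  # placeholders; mark values as secret in the Library UI\n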

Note that Azure Pipelines lacks support for triggering workflows from PR comments. If you find a viable solution, please contribute it to our issue tracker

"},{"location":"installation/azure/#azure-devops-from-cli","title":"Azure DevOps from CLI","text":"

To use Azure DevOps provider use the following settings in configuration.toml:

[config]\ngit_provider=\"azure\"\n

The Azure DevOps provider supports PAT token or DefaultAzureCredential authentication. A PAT is faster to create, but has a built-in expiration date and will use the user's identity for API calls. With DefaultAzureCredential you can use a managed identity or a service principal, which are more secure and create a separate ADO user identity (via AAD) for the agent.

If PAT was chosen, you can assign the value in .secrets.toml. If DefaultAzureCredential was chosen, you can assign the additional env vars like AZURE_CLIENT_SECRET directly, or use managed identity/az cli (for local development) without any additional configuration. In any case, the 'org' value must be assigned in .secrets.toml:

[azure_devops]\norg = \"https://dev.azure.com/YOUR_ORGANIZATION/\"\n# pat = \"YOUR_PAT_TOKEN\" needed only if using PAT for authentication\n
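If you go the DefaultAzureCredential route with a service principal, the standard azure-identity environment variables can be set alongside the configuration above (a sketch; the values are placeholders, and managed identity or az login for local development need no variables at all):

# standard azure-identity variables for a service principal (values are placeholders)\nexport AZURE_CLIENT_ID="<service principal client id>"\nexport AZURE_TENANT_ID="<tenant id>"\nexport AZURE_CLIENT_SECRET="<client secret>"\n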
"},{"location":"installation/azure/#azure-devops-webhook","title":"Azure DevOps Webhook","text":"

To trigger from an Azure webhook, you need to manually add a webhook. Use the "Pull request created" type to trigger a review, or "Pull request commented on" to trigger any supported / command from a comment on the relevant PR. Note that for the "Pull request commented on" trigger, only API v2.0 is supported.

For webhook security, create a dedicated username/password pair and configure it on both the server and the Azure DevOps webhook. These credentials will be sent as basic auth data by the webhook with each request:

[azure_devops_server]\nwebhook_username = \"<basic auth user>\"\nwebhook_password = \"<basic auth password>\"\n

Ensure that the webhook endpoint is only accessible over HTTPS to mitigate the risk of credential interception when using basic authentication.

"},{"location":"installation/bitbucket/","title":"Bitbucket","text":""},{"location":"installation/bitbucket/#run-as-a-bitbucket-pipeline","title":"Run as a Bitbucket Pipeline","text":"

You can use the Bitbucket Pipeline system to run PR-Agent whenever a pull request is opened or updated.

  1. Add the following file in your repository bitbucket-pipelines.yml
pipelines:\n    pull-requests:\n      '**':\n        - step:\n            name: PR Agent Review\n            image: codiumai/pr-agent:latest\n            script:\n              - pr-agent --pr_url=https://bitbucket.org/$BITBUCKET_WORKSPACE/$BITBUCKET_REPO_SLUG/pull-requests/$BITBUCKET_PR_ID review\n
  2. Add the following secure variables to your repository under Repository settings > Pipelines > Repository variables:

      • CONFIG__GIT_PROVIDER: bitbucket
      • OPENAI__KEY: <your key>
      • BITBUCKET__AUTH_TYPE: basic or bearer (default is bearer)
      • BITBUCKET__BEARER_TOKEN: <your token> (required when auth_type is bearer)
      • BITBUCKET__BASIC_TOKEN: <your token> (required when auth_type is basic)

You can get a Bitbucket token for your repository under Repository Settings -> Security -> Access Tokens. For basic auth, you can generate a base64-encoded token from your username:password combination.
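For example, a basic auth token can be generated with a one-liner like the following (a sketch; replace the placeholders with your real credentials):

echo -n "<username>:<password>" | base64  # placeholders; prints the base64-encoded basic auth token\n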

Note that comments on a PR are not supported in Bitbucket Pipeline.

"},{"location":"installation/bitbucket/#bitbucket-server-and-data-center","title":"Bitbucket Server and Data Center","text":"

Log in to your on-prem instance of Bitbucket with your service account username and password. Navigate to Manage account, HTTP Access tokens, Create Token. Generate the token and add it to .secrets.toml under the bitbucket_server section:

[bitbucket_server]\nbearer_token = \"<your key>\"\n
"},{"location":"installation/bitbucket/#run-it-as-cli","title":"Run it as CLI","text":"

Modify configuration.toml:

git_provider=\"bitbucket_server\"\n

and pass the Pull request URL:

python cli.py --pr_url https://git.on-prem-instance-of-bitbucket.com/projects/PROJECT/repos/REPO/pull-requests/1 review\n
"},{"location":"installation/bitbucket/#run-it-as-service","title":"Run it as service","text":"

To run PR-Agent as a webhook, build the docker image:

docker build . -t codiumai/pr-agent:bitbucket_server_webhook --target bitbucket_server_webhook -f docker/Dockerfile\ndocker push codiumai/pr-agent:bitbucket_server_webhook  # Push to your Docker repository\n

Navigate to Projects or Repositories, Settings, Webhooks, Create Webhook. Fill in the name and URL. For Authentication, select 'None'. Select the 'Pull Request Opened' checkbox to receive that event as a webhook.

The URL should end with /webhook, for example: https://domain.com/webhook

"},{"location":"installation/gitea/","title":"Gitea","text":""},{"location":"installation/gitea/#run-a-gitea-webhook-server","title":"Run a Gitea webhook server","text":"
  1. In Gitea, create a new user and give it the "Reporter" role ("Developer" if using the Pro version of the agent) for the intended group or project.

  2. For the user from step 1, generate a personal_access_token with API access.

  3. Generate a random secret for your app, and save it for later (webhook_secret). For example, you can use:

WEBHOOK_SECRET=$(python -c \"import secrets; print(secrets.token_hex(10))\")\n
  4. Clone this repository:
git clone https://github.com/qodo-ai/pr-agent.git\n
  5. Prepare variables and secrets. Skip this step if you plan on setting these as environment variables when running the agent:

  6. Build a Docker image for the app and optionally push it to a Docker repository. We'll use Dockerhub as an example:

docker build . -t codiumai/pr-agent:gitea_app --target gitea_app -f docker/Dockerfile\ndocker push codiumai/pr-agent:gitea_app  # Push to your Docker repository\n
  7. Set the environment variables; the method depends on your docker runtime. Skip this step if you included your secrets/configuration directly in the Docker image.
CONFIG__GIT_PROVIDER=gitea\nGITEA__PERSONAL_ACCESS_TOKEN=<personal_access_token>\nGITEA__WEBHOOK_SECRET=<webhook_secret>\nGITEA__URL=https://gitea.com # Or self host\nOPENAI__KEY=<your_openai_api_key>\nGITEA__SKIP_SSL_VERIFICATION=false # or true\nGITEA__SSL_CA_CERT=/path/to/cacert.pem\n
  8. Create a webhook in your Gitea project. Set the URL to http[s]://<PR_AGENT_HOSTNAME>/api/v1/gitea_webhooks, the secret token to the generated secret from step 3, and enable the triggers push, comments and merge request events.

  9. Test your installation by opening a merge request or commenting on a merge request using one of PR Agent's commands.

"},{"location":"installation/github/","title":"Github","text":""},{"location":"installation/github/#run-as-a-github-action","title":"Run as a GitHub Action","text":"

You can use our pre-built GitHub Action Docker image to run PR-Agent as a GitHub Action.

1) Add the following file to your repository under .github/workflows/pr_agent.yml:

on:\n  pull_request:\n    types: [opened, reopened, ready_for_review]\n  issue_comment:\njobs:\n  pr_agent_job:\n    if: ${{ github.event.sender.type != 'Bot' }}\n    runs-on: ubuntu-latest\n    permissions:\n      issues: write\n      pull-requests: write\n      contents: write\n    name: Run pr agent on every pull request, respond to user comments\n    steps:\n      - name: PR Agent action step\n        id: pragent\n        uses: qodo-ai/pr-agent@main\n        env:\n          OPENAI_KEY: ${{ secrets.OPENAI_KEY }}\n          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n

2) Add the following secret to your repository under Settings > Secrets and variables > Actions > New repository secret > Add secret:

Name = OPENAI_KEY\nSecret = <your key>\n

The GITHUB_TOKEN secret is automatically created by GitHub.

3) Merge this change to your main branch. When you open your next PR, you should see a comment from github-actions bot with a review of your PR, and instructions on how to use the rest of the tools.

4) You may configure Qodo Merge by adding environment variables under the env section corresponding to any configurable property in the configuration file. Some examples:

      env:\n        # ... previous environment values\n        OPENAI.ORG: \"<Your organization name under your OpenAI account>\"\n        PR_REVIEWER.REQUIRE_TESTS_REVIEW: \"false\" # Disable tests review\n        PR_CODE_SUGGESTIONS.NUM_CODE_SUGGESTIONS: 6 # Increase number of code suggestions\n

See detailed usage instructions in the USAGE GUIDE

"},{"location":"installation/github/#using-a-specific-release","title":"Using a specific release","text":"

If you want to pin your action to a specific release (v0.23 for example) for stability reasons, use:

...\n    steps:\n      - name: PR Agent action step\n        id: pragent\n        uses: docker://codiumai/pr-agent:0.23-github_action\n...\n

For enhanced security, you can also specify the Docker image by its digest:

...\n    steps:\n      - name: PR Agent action step\n        id: pragent\n        uses: docker://codiumai/pr-agent@sha256:14165e525678ace7d9b51cda8652c2d74abb4e1d76b57c4a6ccaeba84663cc64\n...\n

"},{"location":"installation/github/#action-for-github-enterprise-server","title":"Action for GitHub enterprise server","text":"

To use the action with a GitHub enterprise server, add an environment variable GITHUB.BASE_URL with the API URL of your GitHub server.

For example, if your GitHub server is at https://github.mycompany.com, add the following to your workflow file:

      env:\n        # ... previous environment values\n        GITHUB.BASE_URL: \"https://github.mycompany.com/api/v3\"\n

"},{"location":"installation/github/#run-as-a-github-app","title":"Run as a GitHub App","text":"

Running PR-Agent as a GitHub App allows you to automate the review process on your private or public repositories.

1) Create a GitHub App from the Github Developer Portal.

2) Generate a random secret for your app, and save it for later. For example, you can use:

WEBHOOK_SECRET=$(python -c \"import secrets; print(secrets.token_hex(10))\")\n

3) Acquire the following pieces of information from your app's settings page:

4) Clone this repository:

git clone https://github.com/Codium-ai/pr-agent.git\n

5) Copy the secrets template file and fill in the following:

cp pr_agent/settings/.secrets_template.toml pr_agent/settings/.secrets.toml\n# Edit .secrets.toml file\n

6) Build a Docker image for the app and optionally push it to a Docker repository. We'll use Dockerhub as an example:

docker build . -t codiumai/pr-agent:github_app --target github_app -f docker/Dockerfile\ndocker push codiumai/pr-agent:github_app  # Push to your Docker repository\n
7) Host the app using a server, serverless function, or container environment. Alternatively, for development and debugging, you may use tools like smee.io to forward webhooks to your local machine. You can also check Deploy as a Lambda Function.

8) Go back to your app's settings, and set the following:

  • Webhook URL: The URL of your app's server or the URL of the smee.io channel.

  • Webhook secret: The secret you generated earlier.

9) Install the app by navigating to the "Install App" tab and selecting your desired repositories.

Note: When running Qodo Merge from a GitHub App, the default configuration file (configuration.toml) will be loaded. However, you can override the default tool parameters by uploading a local configuration file, .pr_agent.toml. For more information, please check out the USAGE GUIDE.

"},{"location":"installation/github/#deploy-as-a-lambda-function","title":"Deploy as a Lambda Function","text":"

Note that since AWS Lambda env vars cannot have \".\" in the name, you can replace each \".\" in an env variable with \"__\". For example: GITHUB.WEBHOOK_SECRET --> GITHUB__WEBHOOK_SECRET

  1. Follow steps 1-5 from here.
  2. Build a docker image that can be used as a lambda function:

    docker buildx build --platform=linux/amd64 . -t codiumai/pr-agent:github_lambda --target github_lambda -f docker/Dockerfile.lambda

    Note: the --target github_lambda flag is optional, as it's the default target.

  3. Push image to ECR

    docker tag codiumai/pr-agent:github_lambda <AWS_ACCOUNT>.dkr.ecr.<AWS_REGION>.amazonaws.com/codiumai/pr-agent:github_lambda\ndocker push <AWS_ACCOUNT>.dkr.ecr.<AWS_REGION>.amazonaws.com/codiumai/pr-agent:github_lambda\n
  4. Create a lambda function that uses the uploaded image. Set the lambda timeout to be at least 3m.

  5. Configure the lambda function to have a Function URL.
  6. In the environment variables of the Lambda function, specify AZURE_DEVOPS_CACHE_DIR to a writable location such as /tmp. (see link)
  7. Go back to steps 8-9 of Run as a GitHub App with the function URL as your Webhook URL. The Webhook URL would look like https://<LAMBDA_FUNCTION_URL>/api/v1/github_webhooks
"},{"location":"installation/github/#note-target-github_lambda-is-optional-as-its-the-default-target","title":"Note: --target github_lambda is optional as it's the default target","text":"

docker buildx build --platform=linux/amd64 . -t codiumai/pr-agent:github_lambda --target github_lambda -f docker/Dockerfile.lambda ```

"},{"location":"installation/github/#using-aws-secrets-manager","title":"Using AWS Secrets Manager","text":"

For production Lambda deployments, use AWS Secrets Manager instead of environment variables:

  1. Create a secret in AWS Secrets Manager with JSON format like this:
{\n  \"openai.key\": \"sk-proj-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\",\n  \"github.webhook_secret\": \"your-webhook-secret-from-step-2\",\n  \"github.private_key\": \"-----BEGIN RSA PRIVATE KEY-----\\nMIIEpAIBAAKCAQEA...\\n-----END RSA PRIVATE KEY-----\"\n}\n
  2. Add the IAM permission secretsmanager:GetSecretValue to your Lambda execution role.
  3. Set these environment variables in your Lambda:
AWS_SECRETS_MANAGER__SECRET_ARN=arn:aws:secretsmanager:us-east-1:123456789012:secret:pr-agent-secrets-AbCdEf\nCONFIG__SECRET_PROVIDER=aws_secrets_manager\n
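For reference, a secret holding the JSON shown in step 1 can be created from a local file with the AWS CLI (a sketch; the secret name and the file name are placeholders):

aws secretsmanager create-secret --name pr-agent-secrets --secret-string file://pr-agent-secrets.json  # name and file are placeholders\n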
"},{"location":"installation/github/#aws-codecommit-setup","title":"AWS CodeCommit Setup","text":"

Not all features are available for CodeCommit yet. Currently, CodeCommit support is limited to running the Qodo Merge CLI from the command line, using AWS credentials stored in environment variables. (More features will be added in the future.) The following instructions explain how to have Qodo Merge review a CodeCommit pull request from the command line:

  1. Create an IAM user that you will use to read CodeCommit pull requests and post comments
  2. Add IAM permissions to that user, to allow access to CodeCommit (see IAM Role example below)
  3. Generate an Access Key for your IAM user
  4. Set the Access Key and Secret using environment variables (see Access Key example below)
  5. Set the git_provider value to codecommit in the pr_agent/settings/configuration.toml settings file
  6. Set the PYTHONPATH to include your pr-agent project directory
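As a sketch, steps 5 and 6 can also be expressed as shell exports, using the <TABLE>__<KEY> environment variable convention used elsewhere in these docs (the project path is a placeholder):

export CONFIG__GIT_PROVIDER="codecommit"  # environment-variable alternative to editing configuration.toml\nexport PYTHONPATH="/PATH/TO/PROJECTS/pr-agent"  # path is a placeholder\n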
"},{"location":"installation/github/#aws-codecommit-iam-role-example","title":"AWS CodeCommit IAM Role Example","text":"

Example IAM permissions for that user, allowing access to CodeCommit:

{\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Effect\": \"Allow\",\n            \"Action\": [\n                \"codecommit:BatchDescribe*\",\n                \"codecommit:BatchGet*\",\n                \"codecommit:Describe*\",\n                \"codecommit:EvaluatePullRequestApprovalRules\",\n                \"codecommit:Get*\",\n                \"codecommit:List*\",\n                \"codecommit:PostComment*\",\n                \"codecommit:PutCommentReaction\",\n                \"codecommit:UpdatePullRequestDescription\",\n                \"codecommit:UpdatePullRequestTitle\"\n            ],\n            \"Resource\": \"*\"\n        }\n    ]\n}\n
"},{"location":"installation/github/#aws-codecommit-access-key-and-secret","title":"AWS CodeCommit Access Key and Secret","text":"

Example of setting the Access Key and Secret using environment variables:

export AWS_ACCESS_KEY_ID=\"XXXXXXXXXXXXXXXX\"\nexport AWS_SECRET_ACCESS_KEY=\"XXXXXXXXXXXXXXXX\"\nexport AWS_DEFAULT_REGION=\"us-east-1\"\n
"},{"location":"installation/github/#aws-codecommit-cli-example","title":"AWS CodeCommit CLI Example","text":"

After you set up AWS CodeCommit using the instructions above, here is an example CLI run that tells pr-agent to review a given pull request. (Replace the PYTHONPATH and PR URL with your specific values.)

PYTHONPATH=\"/PATH/TO/PROJECTS/pr-agent\" python pr_agent/cli.py \\\n  --pr_url https://us-east-1.console.aws.amazon.com/codesuite/codecommit/repositories/MY_REPO_NAME/pull-requests/321 \\\n  review\n
"},{"location":"installation/gitlab/","title":"Gitlab","text":""},{"location":"installation/gitlab/#run-as-a-gitlab-pipeline","title":"Run as a GitLab Pipeline","text":"

You can use a pre-built Action Docker image to run PR-Agent as a GitLab pipeline. This is a simple way to get started with Qodo Merge without setting up your own server.

(1) Add the following file to your repository under .gitlab-ci.yml:

stages:\n  - pr_agent\n\npr_agent_job:\n  stage: pr_agent\n  image:\n    name: codiumai/pr-agent:latest\n    entrypoint: [\"\"]\n  script:\n    - cd /app\n    - echo \"Running PR Agent action step\"\n    - export MR_URL=\"$CI_MERGE_REQUEST_PROJECT_URL/merge_requests/$CI_MERGE_REQUEST_IID\"\n    - echo \"MR_URL=$MR_URL\"\n    - export gitlab__url=$CI_SERVER_PROTOCOL://$CI_SERVER_FQDN\n    - export gitlab__PERSONAL_ACCESS_TOKEN=$GITLAB_PERSONAL_ACCESS_TOKEN\n    - export config__git_provider=\"gitlab\"\n    - export openai__key=$OPENAI_KEY\n    - python -m pr_agent.cli --pr_url=\"$MR_URL\" describe\n    - python -m pr_agent.cli --pr_url=\"$MR_URL\" review\n    - python -m pr_agent.cli --pr_url=\"$MR_URL\" improve\n  rules:\n    - if: '$CI_PIPELINE_SOURCE == \"merge_request_event\"'\n

This script will run Qodo Merge on every new merge request. You can modify the rules section to run Qodo Merge on different events. You can also modify the script section to run different Qodo Merge commands, or with different parameters by exporting different environment variables.

(2) Add the following masked variables to your GitLab repository (CI/CD -> Variables): GITLAB_PERSONAL_ACCESS_TOKEN and OPENAI_KEY, which are referenced by the pipeline script above.

Note that if your base branches are not protected, don't set the variables as protected, since the pipeline will not have access to them.

Note: The $CI_SERVER_FQDN variable is available starting from GitLab version 16.10. If you're using an earlier version, this variable will not be available. However, you can combine $CI_SERVER_HOST and $CI_SERVER_PORT to achieve the same result. Please ensure you're using a compatible version or adjust your configuration.
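For example, on GitLab versions older than 16.10, the gitlab__url export in the pipeline above could be replaced with something like the following (a sketch; verify that these predefined variables are available on your instance):

export gitlab__url=$CI_SERVER_PROTOCOL://$CI_SERVER_HOST:$CI_SERVER_PORT  # fallback when CI_SERVER_FQDN is not available\n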

"},{"location":"installation/gitlab/#run-a-gitlab-webhook-server","title":"Run a GitLab webhook server","text":"
  1. In GitLab, create a new user and give it the "Reporter" role ("Developer" if using the Pro version of the agent) for the intended group or project.

  2. For the user from step 1, generate a personal_access_token with api access.

  3. Generate a random secret for your app, and save it for later (shared_secret). For example, you can use:

SHARED_SECRET=$(python -c \"import secrets; print(secrets.token_hex(10))\")\n
  4. Clone this repository:
git clone https://github.com/qodo-ai/pr-agent.git\n
  5. Prepare variables and secrets. Skip this step if you plan on setting these as environment variables when running the agent:

    1. In the configuration file/variables:

      • Set config.git_provider to \"gitlab\"
    2. In the secrets file/variables:

      • Set your AI model key in the respective section
      • In the [gitlab] section, set personal_access_token (with token from step 2) and shared_secret (with secret from step 3)
  6. Build a Docker image for the app and optionally push it to a Docker repository. We'll use Dockerhub as an example:

docker build . -t codiumai/pr-agent:gitlab_webhook --target gitlab_webhook -f docker/Dockerfile\ndocker push codiumai/pr-agent:gitlab_webhook  # Push to your Docker repository\n
  7. Set the environment variables; the method depends on your docker runtime. Skip this step if you included your secrets/configuration directly in the Docker image.
CONFIG__GIT_PROVIDER=gitlab\nGITLAB__PERSONAL_ACCESS_TOKEN=<personal_access_token>\nGITLAB__SHARED_SECRET=<shared_secret>\nGITLAB__URL=https://gitlab.com\nOPENAI__KEY=<your_openai_api_key>\n
  8. Create a webhook in your GitLab project. Set the URL to http[s]://<PR_AGENT_HOSTNAME>/webhook, the secret token to the generated secret from step 3, and enable the triggers push, comments and merge request events.

  9. Test your installation by opening a merge request or commenting on a merge request using one of PR Agent's commands.

"},{"location":"installation/gitlab/#deploy-as-a-lambda-function","title":"Deploy as a Lambda Function","text":"

Note that since AWS Lambda env vars cannot have \".\" in the name, you can replace each \".\" in an env variable with \"__\". For example: GITLAB.PERSONAL_ACCESS_TOKEN --> GITLAB__PERSONAL_ACCESS_TOKEN

  1. Follow steps 1-5 from Run a GitLab webhook server.
  2. Build a docker image that can be used as a lambda function

    docker buildx build --platform=linux/amd64 . -t codiumai/pr-agent:gitlab_lambda --target gitlab_lambda -f docker/Dockerfile.lambda

  3. Push image to ECR

    docker tag codiumai/pr-agent:gitlab_lambda <AWS_ACCOUNT>.dkr.ecr.<AWS_REGION>.amazonaws.com/codiumai/pr-agent:gitlab_lambda\ndocker push <AWS_ACCOUNT>.dkr.ecr.<AWS_REGION>.amazonaws.com/codiumai/pr-agent:gitlab_lambda\n
  4. Create a lambda function that uses the uploaded image. Set the lambda timeout to be at least 3m.

  5. Configure the lambda function to have a Function URL.
  6. In the environment variables of the Lambda function, specify AZURE_DEVOPS_CACHE_DIR to a writable location such as /tmp. (see link)
  7. Go back to steps 8-9 of Run a GitLab webhook server with the function URL as your Webhook URL. The Webhook URL would look like https://<LAMBDA_FUNCTION_URL>/webhook
"},{"location":"installation/gitlab/#using-aws-secrets-manager","title":"Using AWS Secrets Manager","text":"

For production Lambda deployments, use AWS Secrets Manager instead of environment variables:

  1. Create individual secrets for each GitLab webhook with this JSON format (e.g., secret name: project-webhook-secret-001)
{\n  \"gitlab_token\": \"glpat-xxxxxxxxxxxxxxxxxxxxxxxx\",\n  \"token_name\": \"project-webhook-001\"\n}\n
  2. Create a main configuration secret for common settings (e.g., secret name: pr-agent-main-config)
{\n  \"openai.key\": \"sk-proj-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\"\n}\n
  3. Set these environment variables in your Lambda:
CONFIG__SECRET_PROVIDER=aws_secrets_manager\nAWS_SECRETS_MANAGER__SECRET_ARN=arn:aws:secretsmanager:us-east-1:123456789012:secret:pr-agent-main-config-AbCdEf\n
  4. In your GitLab webhook configuration, set the Secret Token to the secret name created in step 1 (for example, project-webhook-secret-001).

Important: When using Secrets Manager, GitLab's webhook secret must be the Secrets Manager secret name.

  5. Add the IAM permission secretsmanager:GetSecretValue to your Lambda execution role.
"},{"location":"installation/locally/","title":"Locally","text":"

To run PR-Agent locally, you first need to acquire two keys:

  1. An OpenAI key from here, with access to GPT-4 and o4-mini (or a key for other language models, if you prefer).
  2. A personal access token from your Git platform (GitHub, GitLab, BitBucket, Gitea) with repo scope. A GitHub token, for example, can be issued from here
"},{"location":"installation/locally/#using-docker-image","title":"Using Docker image","text":"

A list of the relevant tools can be found in the tools guide.

To invoke a tool (for example review), you can run PR-Agent directly from the Docker image. Here's how:
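For example, for GitHub, a typical invocation of the review tool looks roughly like this (a sketch; replace the key, token, and PR URL placeholders with real values):

docker run --rm -it -e OPENAI.KEY=<your key> -e GITHUB.USER_TOKEN=<your token> codiumai/pr-agent:latest --pr_url <pr_url> review  # placeholders in angle brackets\n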

For other git providers, update CONFIG.GIT_PROVIDER accordingly, and check the pr_agent/settings/.secrets_template.toml file for the expected names and values of the environment variables.

"},{"location":"installation/locally/#utilizing-environment-variables","title":"Utilizing environment variables","text":"

It is also possible to provide or override the configuration by setting the corresponding environment variables. You can define the corresponding environment variables by following this convention: <TABLE>__<KEY>=<VALUE> or <TABLE>.<KEY>=<VALUE>. The <TABLE> refers to a table/section in a configuration file and <KEY>=<VALUE> refers to the key/value pair of a setting in the configuration file.

For example, suppose you want to run pr_agent against a self-hosted GitLab instance, similar to the example above. You can define the environment variables in a plain text file named .env with the following content:

CONFIG__GIT_PROVIDER=\"gitlab\"\nGITLAB__URL=\"<your url>\"\nGITLAB__PERSONAL_ACCESS_TOKEN=\"<your token>\"\nOPENAI__KEY=\"<your key>\"\n

Then, you can run pr_agent using Docker with the following command:

docker run --rm -it --env-file .env codiumai/pr-agent:latest <tool> <tool parameter>\n
"},{"location":"installation/locally/#i-get-an-error-when-running-the-docker-image-what-should-i-do","title":"I get an error when running the Docker image. What should I do?","text":"

If you encounter an error when running the Docker image, it is almost always due to a misconfiguration of API keys or tokens.

Note that litellm, which is used by pr-agent, sometimes returns non-informative error messages such as APIError: OpenAIException - Connection error. Carefully check the API keys and tokens you provided and make sure they are correct. Adjustments may be needed depending on your LLM provider.

For example, for Azure OpenAI, additional keys are needed. The same goes for other providers; make sure to check the documentation.
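As an illustration for Azure OpenAI, the relevant settings from .secrets_template.toml can be supplied as environment variables (a sketch; verify the exact key names in the template file and the values in your Azure deployment):

# Azure OpenAI settings (names per .secrets_template.toml; values are placeholders)\nOPENAI__API_TYPE="azure"\nOPENAI__API_VERSION="<api version>"\nOPENAI__API_BASE="<your Azure OpenAI endpoint>"\nOPENAI__DEPLOYMENT_ID="<your deployment name>"\nOPENAI__KEY="<your Azure OpenAI key>"\n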

"},{"location":"installation/locally/#using-pip-package","title":"Using pip package","text":"

Install the package:

pip install pr-agent\n

Then run the relevant tool with the script below. Make sure to fill in the required parameters (user_token, openai_key, pr_url, command):

from pr_agent import cli\nfrom pr_agent.config_loader import get_settings\n\ndef main():\n    # Fill in the following values\n    provider = \"github\" # github/gitlab/bitbucket/azure_devops\n    user_token = \"...\"  #  user token\n    openai_key = \"...\"  # OpenAI key\n    pr_url = \"...\"      # PR URL, for example 'https://github.com/Codium-ai/pr-agent/pull/809'\n    command = \"/review\" # Command to run (e.g. '/review', '/describe', '/ask=\"What is the purpose of this PR?\"', ...)\n\n    # Setting the configurations\n    get_settings().set(\"CONFIG.git_provider\", provider)\n    get_settings().set(\"openai.key\", openai_key)\n    get_settings().set(\"github.user_token\", user_token)\n\n    # Run the command. Feedback will appear in GitHub PR comments\n    cli.run_command(pr_url, command)\n\n\nif __name__ == '__main__':\n    main()\n
"},{"location":"installation/locally/#run-from-source","title":"Run from source","text":"
  1. Clone this repository:
git clone https://github.com/Codium-ai/pr-agent.git\n
  2. Navigate to the /pr-agent folder and install the requirements in your favorite virtual environment:
pip install -e .\n

Note: If you get an error related to Rust during dependency installation, make sure Rust is installed and in your PATH (instructions: https://rustup.rs).

  3. Copy the secrets template file and fill in your OpenAI key and your GitHub user token:
cp pr_agent/settings/.secrets_template.toml pr_agent/settings/.secrets.toml\nchmod 600 pr_agent/settings/.secrets.toml\n# Edit .secrets.toml file\n
  4. Run the cli.py script:
python3 -m pr_agent.cli --pr_url <pr_url> review\npython3 -m pr_agent.cli --pr_url <pr_url> ask <your question>\npython3 -m pr_agent.cli --pr_url <pr_url> describe\npython3 -m pr_agent.cli --pr_url <pr_url> improve\npython3 -m pr_agent.cli --pr_url <pr_url> add_docs\npython3 -m pr_agent.cli --pr_url <pr_url> generate_labels\npython3 -m pr_agent.cli --issue_url <issue_url> similar_issue\n...\n

[Optional] Add the pr_agent folder to your PYTHONPATH

export PYTHONPATH=$PYTHONPATH:<PATH to pr_agent folder>\n
"},{"location":"installation/pr_agent/","title":"PR-Agent Installation Guide","text":"

PR-Agent can be deployed in various environments and platforms. Choose the installation method that best suits your needs:

"},{"location":"installation/pr_agent/#local-installation","title":"\ud83d\udda5\ufe0f Local Installation","text":"

Learn how to run PR-Agent locally using:

View Local Installation Guide \u2192

"},{"location":"installation/pr_agent/#github-integration","title":"\ud83d\udc19 GitHub Integration","text":"

Set up PR-Agent with GitHub as:

View GitHub Integration Guide \u2192

"},{"location":"installation/pr_agent/#gitlab-integration","title":"\ud83e\udd8a GitLab Integration","text":"

Deploy PR-Agent on GitLab as:

View GitLab Integration Guide \u2192

"},{"location":"installation/pr_agent/#bitbucket-integration","title":"\ud83d\udfe6 BitBucket Integration","text":"

Implement PR-Agent in BitBucket as:

View BitBucket Integration Guide \u2192

"},{"location":"installation/pr_agent/#azure-devops-integration","title":"\ud83d\udd37 Azure DevOps Integration","text":"

Configure PR-Agent with Azure DevOps as:

View Azure DevOps Integration Guide \u2192

"},{"location":"installation/qodo_merge/","title":"\ud83d\udc8e Qodo Merge","text":"

Qodo Merge is a versatile application compatible with GitHub, GitLab, and BitBucket, hosted by QodoAI. See here for more details about the benefits of using Qodo Merge.

"},{"location":"installation/qodo_merge/#usage-and-licensing","title":"Usage and Licensing","text":""},{"location":"installation/qodo_merge/#cloud-users","title":"Cloud Users","text":"

Non-paying users will enjoy feedback on up to 75 PRs per git organization per month. Above this limit, PRs will not receive feedback until a new month begins.

For unlimited access, user licenses (seats) are required. Each user requires an individual seat license. After purchasing seats, the team owner can assign them to specific users through the management portal.

With an assigned seat, users can seamlessly deploy the application across any of their code repositories in a git organization, and receive feedback on all their PRs.

"},{"location":"installation/qodo_merge/#enterprise-account","title":"Enterprise Account","text":"

For companies who require an Enterprise account, please contact us to initiate a trial period, and to discuss pricing and licensing options.

"},{"location":"installation/qodo_merge/#install-qodo-merge-for-github","title":"Install Qodo Merge for GitHub","text":""},{"location":"installation/qodo_merge/#github-cloud","title":"GitHub Cloud","text":"

Qodo Merge for GitHub cloud is available for installation through the GitHub Marketplace.

"},{"location":"installation/qodo_merge/#github-enterprise-server","title":"GitHub Enterprise Server","text":"

To use Qodo Merge on your private GitHub Enterprise Server, you will need to contact Qodo for starting an Enterprise trial.

(Note: The marketplace app is not compatible with GitHub Enterprise Server. Installation requires creating a private GitHub App instead.)

"},{"location":"installation/qodo_merge/#github-open-source-projects","title":"GitHub Open Source Projects","text":"

For open-source projects, Qodo Merge is available for free usage. To install Qodo Merge for your open-source repositories, use the following marketplace link.

"},{"location":"installation/qodo_merge/#install-qodo-merge-for-bitbucket","title":"Install Qodo Merge for Bitbucket","text":""},{"location":"installation/qodo_merge/#bitbucket-cloud","title":"Bitbucket Cloud","text":"

Qodo Merge for Bitbucket Cloud is available for installation through the following link

"},{"location":"installation/qodo_merge/#bitbucket-server","title":"Bitbucket Server","text":"

To use Qodo Merge application on your private Bitbucket Server, you will need to contact us for starting an Enterprise trial.

"},{"location":"installation/qodo_merge/#install-qodo-merge-for-gitlab","title":"Install Qodo Merge for GitLab","text":""},{"location":"installation/qodo_merge/#gitlab-cloud","title":"GitLab Cloud","text":"

Since the GitLab platform does not support apps, installing Qodo Merge for GitLab is a bit more involved and requires the following steps:

"},{"location":"installation/qodo_merge/#step-1","title":"Step 1","text":"

Acquire a personal, project or group level access token. Enable the \u201capi\u201d scope in order to allow Qodo Merge to read pull requests, comment and respond to requests.

Store the token in a safe place; you won't be able to access it again after it is generated.

"},{"location":"installation/qodo_merge/#step-2","title":"Step 2","text":"

Generate a shared secret and link it to the access token. Browse to https://register.gitlab.pr-agent.codium.ai. Fill in your generated GitLab token and your company or personal name in the appropriate fields and click \"Submit\".

You should see "Success!" displayed above the Submit button, and a shared secret will be generated. Store it in a safe place; you won't be able to access it again after it is generated.

"},{"location":"installation/qodo_merge/#step-3","title":"Step 3","text":"

Install a webhook for your repository or group by clicking "Webhooks" in the settings menu, then click the "Add new webhook" button.

In the webhook definition form, fill in the following fields:

• URL: https://pro.gitlab.pr-agent.codium.ai/webhook
• Secret token: Your QodoAI key
• Trigger: Check the 'comments' and 'merge request events' boxes.
• Enable SSL verification: Check the box.

"},{"location":"installation/qodo_merge/#step-4","title":"Step 4","text":"

You\u2019re all set!

Open a new merge request or add an MR comment with one of Qodo Merge's commands, such as /review, /describe or /improve.

"},{"location":"installation/qodo_merge/#gitlab-server","title":"GitLab Server","text":"

For limited free usage on private GitLab Server, the same installation steps as for GitLab Cloud apply. For unlimited usage, you will need to contact Qodo for moving to an Enterprise account.

"},{"location":"overview/data_privacy/","title":"Data Privacy","text":""},{"location":"overview/data_privacy/#self-hosted-pr-agent","title":"Self-hosted PR-Agent","text":""},{"location":"overview/data_privacy/#qodo-merge","title":"Qodo Merge \ud83d\udc8e","text":""},{"location":"overview/data_privacy/#qodo-merge-chrome-extension","title":"Qodo Merge Chrome extension","text":""},{"location":"overview/pr_agent_pro/","title":"\ud83d\udc8e Qodo Merge","text":""},{"location":"overview/pr_agent_pro/#overview","title":"Overview","text":"

Qodo Merge is a hosted version of the open-source PR-Agent. It is designed for companies and teams that require additional features and capabilities.

Free users receive a quota of 75 monthly PR feedbacks per git organization. Unlimited usage requires a paid subscription. See details.

Qodo Merge provides the following benefits:

  1. Fully managed - We take care of everything for you - hosting, models, regular updates, and more. Installation is as simple as signing up and adding the Qodo Merge app to your GitHub\\GitLab\\BitBucket repo.

  2. Improved privacy - No data will be stored or used to train models. Qodo Merge will employ zero data retention, and will use OpenAI and Claude accounts with zero data retention.

  3. Improved support - Qodo Merge users will receive priority support, and will be able to request new features and capabilities.

  4. Supporting self-hosted git servers - Qodo Merge can be installed on GitHub Enterprise Server, GitLab, and BitBucket. For more information, see the installation guide.

  5. PR Chat - Qodo Merge allows you to engage in private chat about your pull requests on private repositories.

"},{"location":"overview/pr_agent_pro/#additional-features","title":"Additional features","text":"

Here are some of the additional features and capabilities that Qodo Merge offers, and are not available in the open-source version of PR-Agent:

Feature Description Model selection Choose the model that best fits your needs, among top models like Claude Sonnet, o4-mini Global and wiki configuration Control configurations for many repositories from a single location; Edit configuration of a single repo without committing code Apply suggestions Generate committable code from the relevant suggestions interactively by clicking on a checkbox Suggestions impact Automatically mark suggestions that were implemented by the user (either directly in GitHub, or indirectly in the IDE) to enable tracking of the impact of the suggestions CI feedback Automatically analyze failed CI checks on GitHub and provide actionable feedback in the PR conversation, helping to resolve issues quickly Advanced usage statistics Qodo Merge offers detailed statistics at user, repository, and company levels, including metrics about Qodo Merge usage, and also general statistics and insights Incorporating companies' best practices Use the companies' best practices as reference to increase the effectiveness and the relevance of the code suggestions Interactive triggering Interactively apply different tools via the analyze command Custom labels Define custom labels for Qodo Merge to assign to the PR"},{"location":"overview/pr_agent_pro/#additional-tools","title":"Additional tools","text":"

Here are additional tools that are available only for Qodo Merge users:

Feature Description Custom Prompt Suggestions Generate code suggestions based on custom prompts from the user Analyze PR components Identify the components that changed in the PR, and enable to interactively apply different tools to them Tests Generate tests for code components that changed in the PR PR documentation Generate docstring for code components that changed in the PR Improve Component Generate code suggestions for code components that changed in the PR Similar code search Search for similar code in the repository, organization, or entire GitHub Code implementation Generates implementation code from review suggestions"},{"location":"overview/pr_agent_pro/#supported-languages","title":"Supported languages","text":"

Qodo Merge leverages the world's leading code models, such as Claude 4 Sonnet, o4-mini and Gemini-2.5-Pro. As a result, its primary tools such as describe, review, and improve, as well as the PR-chat feature, support virtually all programming languages.

For specialized commands that require static code analysis, Qodo Merge offers support for specific languages. For more details about features that require static code analysis, please refer to the documentation.

"},{"location":"pr_benchmark/","title":"Qodo Merge Pull Request Benchmark","text":""},{"location":"pr_benchmark/#methodology","title":"Methodology","text":"

Qodo Merge PR Benchmark evaluates and compares the performance of Large Language Models (LLMs) in analyzing pull request code and providing meaningful code suggestions. Our diverse dataset contains 400 pull requests from over 100 repositories, spanning various programming languages and frameworks to reflect real-world scenarios.

A list of the models used for generating the baseline suggestions, and example results, can be found in the Appendix.

"},{"location":"pr_benchmark/#pr-benchmark-results","title":"PR Benchmark Results","text":"Model Name Version (Date) Thinking budget tokens Score o3 2025-04-16 'medium' (8000) 62.5 o4-mini 2025-04-16 'medium' (8000) 57.7 Gemini-2.5-pro 2025-06-05 4096 56.3 Gemini-2.5-pro 2025-06-05 1024 44.3 Grok-4 2025-07-09 unknown 41.7 Claude-4-sonnet 2025-05-14 4096 39.7 Claude-4-sonnet 2025-05-14 39.0 Codex-mini 2025-06-20 unknown 37.2 Gemini-2.5-flash 2025-04-17 33.5 Claude-4-opus-20250514 2025-05-14 32.8 Claude-3.7-sonnet 2025-02-19 32.4 GPT-4.1 2025-04-14 26.5"},{"location":"pr_benchmark/#results-analysis","title":"Results Analysis","text":""},{"location":"pr_benchmark/#o3","title":"O3","text":"

Final score: 62.5

strengths:

weaknesses:

"},{"location":"pr_benchmark/#o4-mini-medium-thinking-tokens","title":"O4 Mini ('medium' thinking tokens)","text":"

Final score: 57.7

strengths:

weaknesses:

"},{"location":"pr_benchmark/#gemini-25-pro-4096-thinking-tokens","title":"Gemini-2.5 Pro (4096 thinking tokens)","text":"

Final score: 56.3

strengths:

weaknesses:

"},{"location":"pr_benchmark/#claude-4-sonnet-4096-thinking-tokens","title":"Claude-4 Sonnet (4096 thinking tokens)","text":"

Final score: 39.7

strengths:

weaknesses:

"},{"location":"pr_benchmark/#claude-4-sonnet","title":"Claude-4 Sonnet","text":"

Final score: 39.0

strengths:

weaknesses:

"},{"location":"pr_benchmark/#gemini-25-flash","title":"Gemini-2.5 Flash","text":"

Final score: 33.5

strengths:

weaknesses:

"},{"location":"pr_benchmark/#gpt-41","title":"GPT-4.1","text":"

Final score: 26.5

strengths:

weaknesses:

"},{"location":"pr_benchmark/#openai-codex-mini","title":"OpenAI codex-mini","text":"

final score: 37.2

strengths:

weaknesses:

"},{"location":"pr_benchmark/#claude-4-opus","title":"Claude-4 Opus","text":"

final score: 32.8

strengths:

weaknesses:

"},{"location":"pr_benchmark/#grok-4","title":"Grok-4","text":"

Final score: 41.7

strengths:

weaknesses:

"},{"location":"pr_benchmark/#appendix-example-results","title":"Appendix - Example Results","text":"

Some examples of benchmarked PRs and their results:

"},{"location":"pr_benchmark/#models-used-for-benchmarking","title":"Models Used for Benchmarking","text":"

The following models were used for generating the benchmark baseline:

(1) anthropic_sonnet_3.7_v1:0\n\n(2) claude-4-opus-20250514\n\n(3) claude-4-sonnet-20250514\n\n(4) claude-4-sonnet-20250514_thinking_2048\n\n(5) gemini-2.5-flash-preview-04-17\n\n(6) gemini-2.5-pro-preview-05-06\n\n(7) gemini-2.5-pro-preview-06-05_1024\n\n(8) gemini-2.5-pro-preview-06-05_4096\n\n(9) gpt-4.1\n\n(10) o3\n\n(11) o4-mini_medium\n
"},{"location":"recent_updates/","title":"Recent Updates and Future Roadmap","text":"

Page last updated: 2025-07-01

This page summarizes recent enhancements to Qodo Merge (last three months).

It also outlines our development roadmap for the upcoming three months. Please note that the roadmap is subject to change, and features may be adjusted, added, or reprioritized.

Recent Updates / Future Roadmap "},{"location":"tools/","title":"Tools","text":"

Here is a list of Qodo Merge tools, each with a dedicated page that explains how to use it:

Tool Description PR Description (/describe) Automatically generating PR description - title, type, summary, code walkthrough and labels PR Review (/review) Adjustable feedback about the PR, possible issues, security concerns, review effort and more Code Suggestions (/improve) Code suggestions for improving the PR Question Answering (/ask ...) Answering free-text questions about the PR, or on specific code lines Help (/help) Provides a list of all the available tools. Also enables to trigger them interactively (\ud83d\udc8e) Help Docs (/help_docs) Answer a free-text question based on a git documentation folder. Update Changelog (/update_changelog) Automatically updating the CHANGELOG.md file with the PR changes \ud83d\udc8e Add Documentation (/add_docs) Generates documentation to methods/functions/classes that changed in the PR \ud83d\udc8e Analyze (/analyze) Identify code components that changed in the PR, and enables to interactively generate tests, docs, and code suggestions for each component \ud83d\udc8e CI Feedback (/checks ci_job) Automatically generates feedback and analysis for a failed CI job \ud83d\udc8e Custom Prompt (/custom_prompt) Automatically generates custom suggestions for improving the PR code, based on specific guidelines defined by the user \ud83d\udc8e Generate Custom Labels (/generate_labels) Generates custom labels for the PR, based on specific guidelines defined by the user \ud83d\udc8e Generate Tests (/test) Automatically generates unit tests for a selected component, based on the PR code changes \ud83d\udc8e Implement (/implement) Generates implementation code from review suggestions \ud83d\udc8e Improve Component (/improve_component component_name) Generates code suggestions for a specific code component that changed in the PR \ud83d\udc8e Scan Repo Discussions (/scan_repo_discussions) Generates best_practices.md file based on previous discussions in the repository \ud83d\udc8e Similar Code (/similar_code) Retrieves the most similar code components from inside the organization's codebase, or from open-source code.

Note that the tools marked with \ud83d\udc8e are available only for Qodo Merge users.

"},{"location":"tools/analyze/","title":"\ud83d\udc8e Analyze","text":""},{"location":"tools/analyze/#overview","title":"Overview","text":"

The analyze tool combines advanced static code analysis with LLM capabilities to provide a comprehensive analysis of the PR code changes.

The tool scans the PR code changes, finds the code components (methods, functions, classes) that changed, and lets you interactively generate tests, docs, code suggestions, and similar code search for each component.

It can be invoked manually by commenting on any PR:

/analyze\n
"},{"location":"tools/analyze/#example-usage","title":"Example usage","text":"

An example result:

Languages currently supported:

Python, Java, C++, JavaScript, TypeScript, C#, Go.

"},{"location":"tools/ask/","title":"Ask","text":""},{"location":"tools/ask/#overview","title":"Overview","text":"

The ask tool answers questions about the PR, based on the PR code changes. Make sure to be specific and clear in your questions. It can be invoked manually by commenting on any PR:

/ask \"...\"\n
"},{"location":"tools/ask/#example-usage","title":"Example usage","text":""},{"location":"tools/ask/#ask-lines","title":"Ask lines","text":"

You can run /ask on specific lines of code in the PR from the PR's diff view. The tool will answer questions based on the code changes in the selected lines.

Note that the tool does not have \"memory\" of previous questions, and answers each question independently.

"},{"location":"tools/ask/#ask-on-images","title":"Ask on images","text":"

You can also ask questions about images that appear in the comment, where the entire PR code will be used as context. The basic syntax is:

/ask \"...\"\n\n[Image](https://real_link_to_image)\n

where https://real_link_to_image is the direct link to the image.

Note that GitHub has a built-in mechanism for pasting images in comments. However, a pasted image does not provide a direct link. To get a direct link to an image, we recommend the following scheme:

1. First, post a comment that contains only the image:

2. Quote reply to that comment:

3. In the screen opened, type the question below the image:

4. Post the comment, and receive the answer:

See a full video tutorial here

"},{"location":"tools/ci_feedback/","title":"\ud83d\udc8e CI Feedback","text":""},{"location":"tools/ci_feedback/#overview","title":"Overview","text":"

The CI feedback tool (/checks) is automatically triggered when a PR has a failed check. The tool analyzes the failed checks and provides several types of feedback:

"},{"location":"tools/ci_feedback/#example-usage","title":"Example usage","text":"


In addition to being automatically triggered, the tool can also be invoked manually by commenting on a PR:

/checks \"https://github.com/{repo_name}/actions/runs/{run_number}/job/{job_number}\"\n

where {repo_name} is the name of the repository, {run_number} is the run number of the failed check, and {job_number} is the job number of the failed check.

"},{"location":"tools/ci_feedback/#disabling-the-tool-from-running-automatically","title":"Disabling the tool from running automatically","text":"

If you wish to disable the tool from running automatically, you can do so by adding the following configuration to the configuration file:

[checks]\nenable_auto_checks_feedback = false\n
"},{"location":"tools/ci_feedback/#configuration-options","title":"Configuration options","text":""},{"location":"tools/custom_labels/","title":"\ud83d\udc8e Generate Labels","text":""},{"location":"tools/custom_labels/#overview","title":"Overview","text":"

The generate_labels tool scans the PR code changes, and given a list of labels and their descriptions, it automatically suggests labels that match the PR code changes.

It can be invoked manually by commenting on any PR:

/generate_labels\n
"},{"location":"tools/custom_labels/#example-usage","title":"Example usage","text":"

If we wish to detect changes to SQL queries in a given PR, we can add the following custom label along with its description:

When running the generate_labels tool on a PR that includes changes in SQL queries, it will automatically suggest the custom label:

Note that in addition to the dedicated tool generate_labels, the custom labels will also be used by the describe tool.

"},{"location":"tools/custom_labels/#how-to-enable-custom-labels","title":"How to enable custom labels","text":"

There are 3 ways to enable custom labels:

"},{"location":"tools/custom_labels/#1-cli-local-configuration-file","title":"1. CLI (local configuration file)","text":"

When working from the CLI, you need to apply the configuration changes to the custom_labels file:

"},{"location":"tools/custom_labels/#2-repo-configuration-file","title":"2. Repo configuration file","text":"

To enable custom labels, you need to apply the configuration changes to the local .pr_agent.toml file in your repository.

"},{"location":"tools/custom_labels/#3-handle-custom-labels-from-the-repos-labels-page","title":"3. Handle custom labels from the Repo's labels page \ud83d\udc8e","text":"

This feature is available only in Qodo Merge

b. Add/edit the custom labels. It should be formatted as follows:

c. Now the custom labels will be included in the generate_labels tool.

This feature is supported in GitHub and GitLab.

"},{"location":"tools/custom_labels/#configuration-options","title":"Configuration options","text":"
[config]\nenable_custom_labels=true\n\n[custom_labels.\"Custom Label Name\"]\ndescription = \"Description of when AI should suggest this label\"\n\n[custom_labels.\"Custom Label 2\"]\ndescription = \"Description of when AI should suggest this label 2\"\n
"},{"location":"tools/custom_prompt/","title":"\ud83d\udc8e Custom Prompt","text":""},{"location":"tools/custom_prompt/#overview","title":"Overview","text":"

The custom_prompt tool scans the PR code changes, and automatically generates suggestions for improving the PR code. It shares similarities with the improve tool, but with one main difference: the custom_prompt tool will only propose suggestions that follow specific guidelines defined by the prompt in: pr_custom_prompt.prompt configuration.

The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on a PR.
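
For automatic triggering, one option is to add the tool to the pr_commands list, mirroring the pattern used for the other tools; the snippet below assumes the GitHub App deployment and is illustrative only:

[github_app]\npr_commands = [\n    \"/custom_prompt\",\n    ...\n]\n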

When commenting, use the following template:

/custom_prompt --pr_custom_prompt.prompt=\"\nThe code suggestions should focus only on the following:\n- ...\n- ...\n\n\"\n

With a configuration file, use the following template:

[pr_custom_prompt]\nprompt=\"\"\"\\\nThe suggestions should focus only on the following:\n-...\n-...\n\n\"\"\"\n

Remember - with this tool, you are the prompter. Be specific, clear, and concise in the instructions. Specify the relevant aspects that you want the model to focus on. You might benefit from several trial-and-error iterations until you find the right prompt for your use case.

"},{"location":"tools/custom_prompt/#example-usage","title":"Example usage","text":"

Here is an example of a possible prompt, defined in the configuration file:

[pr_custom_prompt]\nprompt=\"\"\"\\\nThe code suggestions should focus only on the following:\n- look for edge cases when implementing a new function\n- make sure every variable has a meaningful name\n- make sure the code is efficient\n\"\"\"\n

(The instructions above are just an example. We want to emphasize that the prompt should be specific and clear, and be tailored to the needs of your project)

Results obtained with the prompt above:

"},{"location":"tools/custom_prompt/#configuration-options","title":"Configuration options","text":""},{"location":"tools/describe/","title":"Describe","text":""},{"location":"tools/describe/#overview","title":"Overview","text":"

The describe tool scans the PR code changes, and generates a description for the PR - title, type, summary, walkthrough and labels.

The tool can be triggered automatically every time a new PR is opened, or it can be invoked manually by commenting on any PR:

/describe\n
"},{"location":"tools/describe/#example-usage","title":"Example usage","text":""},{"location":"tools/describe/#manual-triggering","title":"Manual triggering","text":"

Invoke the tool manually by commenting /describe on any PR:

After ~30 seconds, the tool will generate a description for the PR:

If you want to edit configurations, add the relevant ones to the command:

/describe --pr_description.some_config1=... --pr_description.some_config2=...\n
"},{"location":"tools/describe/#automatic-triggering","title":"Automatic triggering","text":"

To run the describe tool automatically when a PR is opened, define in a configuration file:

[github_app]\npr_commands = [\n    \"/describe\",\n    ...\n]\n\n[pr_description]\npublish_labels = true\n...\n
"},{"location":"tools/describe/#preserving-the-original-user-description","title":"Preserving the original user description","text":"

By default, Qodo Merge tries to preserve your original PR description by placing it above the generated content. This requires including your description during the initial PR creation.

\"Qodo removed the original description from the PR. Why\"?

From our experience, there are two possible reasons:

"},{"location":"tools/describe/#sequence-diagram-support","title":"Sequence Diagram Support","text":"

The /describe tool includes a Mermaid sequence diagram showing component/function interactions.

This option is enabled by default via the pr_description.enable_pr_diagram param.
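
For example, to turn the diagram off, set the parameter explicitly in a configuration file:

[pr_description]\nenable_pr_diagram = false\n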

"},{"location":"tools/describe/#configuration-options","title":"Configuration options","text":"Possible configurations

- publish_labels: If set to true, the tool will publish labels to the PR. Default is false.
- publish_description_as_comment: If set to true, the tool will publish the description as a comment to the PR. If false, it will overwrite the original description. Default is false.
- publish_description_as_comment_persistent: If set to true and publish_description_as_comment is true, the tool will publish the description as a persistent comment to the PR. Default is true.
- add_original_user_description: If set to true, the tool will add the original user description to the generated description. Default is true.
- generate_ai_title: If set to true, the tool will also generate an AI title for the PR. Default is false.
- extra_instructions: Optional extra instructions to the tool. For example: \"focus on the changes in the file X. Ignore change in ...\"
- enable_pr_type: If set to false, it will not show the PR type as a text value in the description content. Default is true.
- final_update_message: If set to true, it will add a comment message PR Description updated to latest commit... after finishing calling /describe. Default is false.
- enable_semantic_files_types: If set to true, the \"Changes walkthrough\" section will be generated. Default is true.
- collapsible_file_list: If set to true, the file list in the \"Changes walkthrough\" section will be collapsible. If set to \"adaptive\", the file list will be collapsible only if there are more than 8 files. Default is \"adaptive\".
- enable_large_pr_handling \ud83d\udc8e: If set to true, in case of a large PR the tool will make several calls to the AI and combine them to be able to cover more files. Default is true.
- enable_help_text: If set to true, the tool will display a help text in the comment. Default is false.
- enable_pr_diagram: If set to true, the tool will generate a horizontal Mermaid flowchart summarizing the main pull request changes. This field remains empty if not applicable. Default is true.

"},{"location":"tools/describe/#inline-file-summary","title":"Inline file summary \ud83d\udc8e","text":"

This feature enables you to copy the changes walkthrough table to the \"Files changed\" tab, so you can quickly understand the changes in each file while reviewing the code changes (diff view).

To copy the changes walkthrough table to the "Files changed" tab, click the checkbox that appears below the main PR description:

If you prefer to have the file summaries appear in the \"Files changed\" tab on every PR, change the pr_description.inline_file_summary parameter in the configuration file, possible values are:

Note that this feature is currently available only for GitHub.
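
As an illustration, setting the parameter in a configuration file might look like the snippet below; the value \"table\" is shown as an assumed example, so check the configuration reference for the full set of supported values:

[pr_description]\ninline_file_summary = \"table\"\n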

"},{"location":"tools/describe/#markers-template","title":"Markers template","text":"

To enable markers, set pr_description.use_description_markers=true. Markers make it easy to combine user content and auto-generated content using a template-like mechanism.
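
In a configuration file, enabling markers would look like this minimal snippet:

[pr_description]\nuse_description_markers = true\n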

For example, if the PR original description was:

User content...\n\n## PR Type:\npr_agent:type\n\n## PR Description:\npr_agent:summary\n\n## PR Walkthrough:\npr_agent:walkthrough\n\n## PR Diagram:\npr_agent:diagram\n

The marker pr_agent:type will be replaced with the PR type, pr_agent:summary will be replaced with the PR summary, pr_agent:walkthrough will be replaced with the PR walkthrough, and pr_agent:diagram will be replaced with the sequence diagram (if enabled).


Configuration params:

"},{"location":"tools/describe/#custom-labels","title":"Custom labels","text":"

The default labels of the describe tool are quite generic, since they are meant to be used in any repo: [Bug fix, Tests, Enhancement, Documentation, Other].

You can define custom labels that are relevant for your repo and use cases. Custom labels can be defined in a configuration file, or directly in the repo's labels page.

Make sure to provide a proper title and a detailed, well-phrased description for each label, so the tool will know when to suggest it. Each label description should be a conditional statement that indicates whether to add the label to the PR, according to the PR content.

"},{"location":"tools/describe/#handle-custom-labels-from-a-configuration-file","title":"Handle custom labels from a configuration file","text":"

Example for a custom labels configuration setup in a configuration file:

[config]\nenable_custom_labels=true\n\n\n[custom_labels.\"sql_changes\"]\ndescription = \"Use when a PR contains changes to SQL queries\"\n\n[custom_labels.\"test\"]\ndescription = \"use when a PR primarily contains new tests\"\n\n...\n
"},{"location":"tools/describe/#handle-custom-labels-from-the-repos-labels-page","title":"Handle custom labels from the Repo's labels page \ud83d\udc8e","text":"

You can also control the custom labels that will be suggested by the describe tool from the repo's labels page:

Now add/edit the custom labels. They should be formatted as follows:

Examples for custom labels:

The description should be comprehensive and detailed, indicating when to add the desired label. For example:

"},{"location":"tools/describe/#usage-tips","title":"Usage Tips","text":"

Automation

pr_commands = [\"/describe --pr_description.use_description_markers=true\", ...]\n

The tool will replace every marker of the form pr_agent:marker_name in the PR description with the relevant content, where marker_name is one of the following: type (the PR type), summary (the PR summary), and walkthrough (the PR walkthrough).

"},{"location":"tools/documentation/","title":"\ud83d\udc8e Add Documentation","text":""},{"location":"tools/documentation/#overview","title":"Overview","text":"

The add_docs tool scans the PR code changes, and automatically suggests documentation for any code components that changed in the PR (functions, classes, etc.).

It can be invoked manually by commenting on any PR:

/add_docs\n
"},{"location":"tools/documentation/#example-usage","title":"Example usage","text":"

Invoke the tool manually by commenting /add_docs on any PR:

The tool will generate documentation for all the components that changed in the PR:

You can specify the name of a specific component in the PR to get documentation only for that component:

/add_docs component_name\n
"},{"location":"tools/documentation/#manual-triggering","title":"Manual triggering","text":"

Comment /add_docs on a PR to invoke it manually.

"},{"location":"tools/documentation/#automatic-triggering","title":"Automatic triggering","text":"

To automatically run the add_docs tool when a pull request is opened, define in a configuration file:

[github_app]\npr_commands = [\n    \"/add_docs\",\n    ...\n]\n

The pr_commands list defines commands that run automatically when a PR is opened. Since this is under the [github_app] section, it only applies when using the Qodo Merge GitHub App in GitHub environments.

"},{"location":"tools/documentation/#configuration-options","title":"Configuration options","text":"

Notes

"},{"location":"tools/help/","title":"Help","text":""},{"location":"tools/help/#overview","title":"Overview","text":"

The help tool provides a list of all the available tools and their descriptions. For Qodo Merge users, it also enables triggering each tool by checking the relevant box.

It can be invoked manually by commenting on any PR:

/help\n
"},{"location":"tools/help/#example-usage","title":"Example usage","text":"

An example result:


"},{"location":"tools/help_docs/","title":"Help Docs","text":""},{"location":"tools/help_docs/#overview","title":"Overview","text":"

The help_docs tool can answer a free-text question based on a git documentation folder.

It can be invoked manually by commenting on any PR or Issue:

/help_docs \"...\"\n

Or configured to be triggered automatically when a new issue is opened.

By default, the tool assumes that the documentation is located in the /docs folder at the root of the repository. However, this can be customized by setting the docs_path configuration option:

[pr_help_docs]\nrepo_url = \"\"                 # The repository to use as context\ndocs_path = \"docs\"            # The documentation folder\nrepo_default_branch = \"main\"  # The branch to use in case repo_url overwritten\n

See more configuration options in the Configuration options section.

"},{"location":"tools/help_docs/#example-usage","title":"Example usage","text":"

Asking a question about another repository

Response:

"},{"location":"tools/help_docs/#run-automatically-when-a-new-issue-is-opened","title":"Run automatically when a new issue is opened","text":"

You can configure PR-Agent to run help_docs automatically on any newly created issue. This can be useful, for example, for providing immediate feedback to users who open issues with questions on open-source projects with extensive documentation.

Here's how:

1) Follow the steps depicted under Run as a Github Action to create a new workflow, such as .github/workflows/help_docs.yml:

2) Edit your yaml file to the following:

name: Run pr agent on every opened issue, respond to user comments on an issue\n\n#When the action is triggered\non:\n  issues:\n    types: [opened] #New issue\n\n# Read env. variables\nenv:\n  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n  GITHUB_API_URL: ${{ github.api_url }}\n  GIT_REPO_URL: ${{ github.event.repository.clone_url }}\n  ISSUE_URL: ${{ github.event.issue.html_url || github.event.comment.html_url }}\n  ISSUE_BODY: ${{ github.event.issue.body || github.event.comment.body }}\n  OPENAI_KEY: ${{ secrets.OPENAI_KEY }}\n\n# The actual set of actions\njobs:\n  issue_agent:\n    runs-on: ubuntu-latest\n    if: ${{ github.event.sender.type != 'Bot' }} #Do not respond to bots\n\n    # Set required permissions\n    permissions:\n      contents: read    # For reading repository contents\n      issues: write     # For commenting on issues\n\n    steps:\n      - name: Run PR Agent on Issues\n        if: ${{ env.ISSUE_URL != '' }}\n        uses: docker://codiumai/pr-agent:latest\n        with:\n          entrypoint: /bin/bash #Replace invoking cli.py directly with a shell\n          args: |\n            -c \"cd /app && \\\n            echo 'Running Issue Agent action step on ISSUE_URL=$ISSUE_URL' && \\\n            export config__git_provider='github' && \\\n            export github__user_token=$GITHUB_TOKEN && \\            \n            export github__base_url=$GITHUB_API_URL && \\\n            export openai__key=$OPENAI_KEY && \\\n            python -m pr_agent.cli --issue_url=$ISSUE_URL --pr_help_docs.repo_url=\"...\" --pr_help_docs.docs_path=\"...\" --pr_help_docs.openai_key=$OPENAI_KEY && \\help_docs \\\"$ISSUE_BODY\\\"\"\n

3) After completing the remaining steps (such as adding secrets and the relevant configurations, like repo_url and docs_path), merge this change to your main branch. When a new issue is opened, you should see a comment from the github-actions bot with an automatic response, assuming the question is related to the documentation of the repository.

"},{"location":"tools/help_docs/#configuration-options","title":"Configuration options","text":"

Under the section pr_help_docs, the configuration file contains options to customize the 'help docs' tool:

"},{"location":"tools/implement/","title":"\ud83d\udc8e Implement","text":"

Platforms supported: GitHub, GitLab, Bitbucket

"},{"location":"tools/implement/#overview","title":"Overview","text":"

The implement tool converts human code review discussions and feedback into ready-to-commit code changes. It leverages LLM technology to transform PR comments and review suggestions into concrete implementation code, helping developers quickly turn feedback into working solutions.

"},{"location":"tools/implement/#usage-scenarios","title":"Usage Scenarios","text":"For ReviewersFor PR AuthorsFor Referencing Comments

Reviewers can request code changes by:

  1. Selecting the code block to be modified.
  2. Adding a comment with the syntax:
/implement <code-change-description>\n

PR authors can implement suggested changes by replying to a review comment using either:

  1. Add specific implementation details as described above
/implement <code-change-description>\n
  2. Use the original review comment as instructions
/implement\n

You can reference and implement changes from any comment by:

/implement <link-to-review-comment>\n

Note that the implementation will occur within the review discussion thread.

"},{"location":"tools/implement/#configuration-options","title":"Configuration options","text":""},{"location":"tools/improve/","title":"Improve","text":""},{"location":"tools/improve/#overview","title":"Overview","text":"

The improve tool scans the PR code changes, and automatically generates meaningful suggestions for improving the PR code. The tool can be triggered automatically every time a new PR is opened, or it can be invoked manually by commenting on any PR:

/improve\n
"},{"location":"tools/improve/#how-it-looks","title":"How it looks","text":"Suggestions OverviewSelecting a specific suggestion

The following features are available only for Qodo Merge \ud83d\udc8e users:

"},{"location":"tools/improve/#example-usage","title":"Example usage","text":""},{"location":"tools/improve/#manual-triggering","title":"Manual triggering","text":"

Invoke the tool manually by commenting /improve on any PR. The code suggestions by default are presented as a single comment:

To edit configurations related to the improve tool, use the following template:

/improve --pr_code_suggestions.some_config1=... --pr_code_suggestions.some_config2=...\n

For example, you can choose to present all the suggestions as committable code comments, by running the following command:

/improve --pr_code_suggestions.commitable_code_suggestions=true\n

As can be seen, a single table comment has a significantly smaller PR footprint. We recommend this mode for most cases. Also note that collapsible comments are not supported in Bitbucket, so in Bitbucket the suggestions can only be presented as code comments.

"},{"location":"tools/improve/#manual-more-suggestions","title":"Manual more suggestions","text":"

To generate more suggestions (distinct from the ones already generated) on git providers that don't support the interactive checkbox option, you can manually run:

/improve --more_suggestions=true\n
"},{"location":"tools/improve/#automatic-triggering","title":"Automatic triggering","text":"

To run the improve tool automatically when a PR is opened, define in a configuration file:

[github_app]\npr_commands = [\n    \"/improve\",\n    ...\n]\n\n[pr_code_suggestions]\nnum_code_suggestions_per_chunk = ...\n...\n
"},{"location":"tools/improve/#assessing-impact","title":"Assessing Impact","text":"

\ud83d\udc8e feature

Qodo Merge tracks two types of implementations of the provided suggestions:

In post-process, Qodo Merge counts the number of suggestions that were implemented, and provides general statistics and insights about the suggestions' impact on the PR process.

"},{"location":"tools/improve/#suggestion-tracking","title":"Suggestion tracking","text":"

\ud83d\udc8e feature. Platforms supported: GitHub, GitLab

Qodo Merge employs a novel detection system to automatically identify AI code suggestions that PR authors have accepted and implemented.

Accepted suggestions are also automatically documented in a dedicated wiki page called .pr_agent_accepted_suggestions, allowing users to track historical changes, assess the tool's effectiveness, and learn from previously implemented recommendations in the repository. An example result:

This dedicated wiki page will also serve as a foundation for future AI model improvements, allowing it to learn from historically implemented suggestions and generate more targeted, contextually relevant recommendations.

This feature is controlled by a boolean configuration parameter: pr_code_suggestions.wiki_page_accepted_suggestions (default is true).
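
For example, to opt out of the wiki-based tracking:

[pr_code_suggestions]\nwiki_page_accepted_suggestions = false\n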

Wiki must be enabled

While the aggregation process is automatic, GitHub repositories require a one-time manual wiki setup.

To initialize the wiki: navigate to Wiki, select Create the first page, then click Save page.

Once a wiki repo is created, the tool will automatically use this wiki for tracking suggestions.

Why a wiki page?

Your code belongs to you, and we respect your privacy. Hence, we won't store any code suggestions in an external database.

Instead, we leverage a dedicated private page within your repository wiki to track suggestions. This approach offers convenient, secure suggestion tracking while avoiding pull requests or adding any noise to the main repository.

"},{"location":"tools/improve/#extra-instructions-and-best-practices","title":"Extra instructions and best practices","text":"

The improve tool can be further customized by providing additional instructions and best practices to the AI model.

"},{"location":"tools/improve/#extra-instructions","title":"Extra instructions","text":"

You can use the extra_instructions configuration option to give the AI model additional instructions for the improve tool. Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter.

Examples for possible instructions:

[pr_code_suggestions]\nextra_instructions=\"\"\"\\\n(1) Answer in Japanese\n(2) Don't suggest to add try-except block\n(3) Ignore changes in toml files\n...\n\"\"\"\n

Use triple quotes to write multi-line instructions. Use bullet points or numbers to make the instructions more readable.

"},{"location":"tools/improve/#best-practices","title":"Best practices","text":"

\ud83d\udc8e feature. Platforms supported: GitHub, GitLab, Bitbucket

Qodo Merge supports both simple and hierarchical best practices configurations to provide guidance to the AI model for generating relevant code suggestions.

Writing effective best practices files

The following guidelines apply to all best practices files:

Example of a best practices file

Pattern 1: Add proper error handling with try-except blocks around external function calls.

Example code before:

# Some code that might raise an exception\nreturn process_pr_data(data)\n

Example code after:

try:\n    # Some code that might raise an exception\n    return process_pr_data(data)\nexcept Exception as e:\n    logger.exception(\"Failed to process request\", extra={\"error\": e})\n

Pattern 2: Add defensive null/empty checks before accessing object properties or performing operations on potentially null variables to prevent runtime errors.

Example code before:

def get_pr_code(pr_data):\n    if \"changed_code\" in pr_data:\n        return pr_data.get(\"changed_code\", \"\")\n    return \"\"\n

Example code after:

def get_pr_code(pr_data):\n    if pr_data is None:\n        return \"\"\n    if \"changed_code\" in pr_data:\n        return pr_data.get(\"changed_code\", \"\")\n    return \"\"\n
"},{"location":"tools/improve/#local-best-practices","title":"Local best practices","text":"

For basic usage, create a best_practices.md file in your repository's root directory containing a list of best practices, coding standards, and guidelines specific to your repository.

The AI model will use this best_practices.md file as a reference, and in case the PR code violates any of the guidelines, it will create additional suggestions, with a dedicated label: Organization best practice.

"},{"location":"tools/improve/#global-hierarchical-best-practices","title":"Global hierarchical best practices","text":"

For organizations managing multiple repositories with different requirements, Qodo Merge supports a hierarchical best practices system using a dedicated global configuration repository.

Supported scenarios:

  1. Standalone repositories: Individual repositories can have their own specific best practices tailored to their unique requirements
  2. Groups of repositories: Repositories can be mapped to shared group-level best practices for consistent standards across similar projects
  3. Monorepos with subprojects: Large monorepos can have both repository-level and subproject-level best practices, with automatic path-based matching
"},{"location":"tools/improve/#setting-up-global-hierarchical-best-practices","title":"Setting up global hierarchical best practices","text":"

1. Create a new repository named pr-agent-settings in your organization/workspace.

2. Build the folder hierarchy in your pr-agent-settings repository, for example:

pr-agent-settings/\n\u251c\u2500\u2500 metadata.yaml                     # Maps repos/folders to best practice paths\n\u2514\u2500\u2500 codebase_standards/               # Root for all best practice definitions\n    \u251c\u2500\u2500 global/                       # Global rules, inherited widely\n    \u2502   \u2514\u2500\u2500 best_practices.md\n    \u251c\u2500\u2500 groups/                       # For groups of repositories\n    \u2502   \u251c\u2500\u2500 frontend_repos/\n    \u2502   \u2502   \u2514\u2500\u2500 best_practices.md\n    \u2502   \u251c\u2500\u2500 backend_repos/\n    \u2502   \u2502   \u2514\u2500\u2500 best_practices.md\n    \u2502   \u2514\u2500\u2500 ...\n    \u251c\u2500\u2500 qodo-merge/                   # For standalone repositories\n    \u2502   \u2514\u2500\u2500 best_practices.md\n    \u251c\u2500\u2500 qodo-monorepo/                # For monorepo-specific rules \n    \u2502   \u251c\u2500\u2500 best_practices.md         # Root level monorepo rules\n    \u2502   \u251c\u2500\u2500 qodo-github/              # Subproject best practices\n    \u2502   \u2502   \u2514\u2500\u2500 best_practices.md\n    \u2502   \u2514\u2500\u2500 qodo-gitlab/              # Another subproject\n    \u2502       \u2514\u2500\u2500 best_practices.md\n    \u2514\u2500\u2500 ...                           # More repositories\n

3. Define the metadata file metadata.yaml that maps your repositories to their relevant best practices paths, for example:

# Standalone repos\nqodo-merge:\n  best_practices_paths:\n    - \"qodo-merge\"\n\n# Group-associated repos\nrepo_b:\n  best_practices_paths:\n    - \"groups/backend_repos\"\n\n# Multi-group repos\nrepo_c:\n  best_practices_paths:\n    - \"groups/frontend_repos\"\n    - \"groups/backend_repos\"\n\n# Monorepo with subprojects\nqodo-monorepo:\n  best_practices_paths:\n    - \"qodo-monorepo\"\n  monorepo_subprojects:\n    qodo-github:\n      best_practices_paths:\n        - \"qodo-monorepo/qodo-github\"\n    qodo-gitlab:\n      best_practices_paths:\n        - \"qodo-monorepo/qodo-gitlab\"\n

4. Set the following configuration in your global configuration file:

[best_practices]\nenable_global_best_practices = true\n
Best practices priority and fallback behavior

When global best practices are enabled, Qodo Merge follows this priority order:

1. Primary: Global hierarchical best practices from pr-agent-settings repository:

1.1 If the repository is mapped in `metadata.yaml`, it uses the specified paths\n\n1.2 For monorepos, it automatically collects best practices matching PR file paths\n\n1.3 If no mapping exists, it falls back to the global best practices\n

2. Fallback: Local repository best_practices.md file:

2.1 Used when global best practices are not found or configured\n\n2.2 Acts as a safety net for repositories not yet configured in the global system\n\n2.3 Local best practices are completely ignored when global best practices are successfully loaded\n
Edge cases and behavior

Dedicated label for best practices suggestions

Best practice suggestions are labeled as Organization best practice by default. To customize this label, modify it in your configuration file:

[best_practices]\norganization_name = \"...\"\n

And the label will be: {organization_name} best practice.

"},{"location":"tools/improve/#example-results","title":"Example results","text":""},{"location":"tools/improve/#auto-best-practices","title":"Auto best practices","text":"

\ud83d\udc8e feature. Platforms supported: GitHub.

Auto best practices is a novel Qodo Merge capability that:

  1. Identifies recurring patterns from accepted suggestions
  2. Automatically generates best practices page based on what your team consistently values
  3. Applies these learned patterns to future code reviews

This creates an automatic feedback loop where the system continuously learns from your team's choices to provide increasingly relevant suggestions. The system maintains two analysis phases:

Note that when custom best practices exist, Qodo Merge will still generate an 'auto best practices' wiki file, though it won't use it in the improve tool. Learn more about utilizing 'auto best practices' in our detailed guide.

"},{"location":"tools/improve/#relevant-configurations","title":"Relevant configurations","text":"
[auto_best_practices]\n# Disable all auto best practices usage or generation\nenable_auto_best_practices = true  \n\n# Disable usage of auto best practices file in the 'improve' tool\nutilize_auto_best_practices = true \n\n# Extra instructions to the auto best practices generation prompt\nextra_instructions = \"\"            \n\n# Max number of patterns to be detected\nmax_patterns = 5                   \n
"},{"location":"tools/improve/#multiple-best-practices-sources","title":"Multiple best practices sources","text":"

The improve tool will combine best practices from all available sources - global configuration, local configuration, and auto-generated files - to provide you with comprehensive recommendations.

"},{"location":"tools/improve/#combining-extra-instructions-and-best-practices","title":"Combining 'extra instructions' and 'best practices'","text":"

\ud83d\udc8e feature

The extra instructions configuration is more related to the improve tool prompt. It can be used, for example, to avoid specific suggestions (\"Don't suggest to add try-except block\", \"Ignore changes in toml files\", ...) or to emphasize specific aspects or formats (\"Answer in Japanese\", \"Give only short suggestions\", ...)

In contrast, the best_practices.md file is a general guideline for the way code should be written in the repo.

Using a combination of both can help the AI model to provide relevant and tailored suggestions.

"},{"location":"tools/improve/#usage-tips","title":"Usage Tips","text":""},{"location":"tools/improve/#implementing-the-proposed-code-suggestions","title":"Implementing the proposed code suggestions","text":"

Each generated suggestion consists of three key elements:

  1. A single-line summary of the proposed change
  2. An expandable section containing a comprehensive description of the suggestion
  3. A diff snippet showing the recommended code modification (before and after)

We advise users to apply critical analysis and judgment when implementing the proposed suggestions. In addition to mistakes (which may happen, but are rare), sometimes the presented code modification may serve more as an illustrative example than a directly applicable solution. In such cases, we recommend prioritizing the suggestion's detailed description, using the diff snippet primarily as a supporting reference.

"},{"location":"tools/improve/#dual-publishing-mode","title":"Dual publishing mode","text":"

Our recommended approach for presenting code suggestions is through a table (--pr_code_suggestions.commitable_code_suggestions=false). This method significantly reduces the PR footprint and allows for quick and easy digestion of multiple suggestions.

We also offer a complementary dual publishing mode. When enabled, suggestions exceeding a certain score threshold are not only displayed in the table, but also presented as committable PR comments. This mode helps highlight suggestions deemed more critical.

To activate dual publishing mode, use the following setting:

[pr_code_suggestions]\ndual_publishing_score_threshold = x\n

Where x represents the minimum score threshold (>=) for suggestions to be presented as committable PR comments in addition to the table. Default is -1 (disabled).

"},{"location":"tools/improve/#controlling-suggestions-depth","title":"Controlling suggestions depth","text":"

\ud83d\udc8e feature

You can control the depth and comprehensiveness of the code suggestions by using the pr_code_suggestions.suggestions_depth parameter.

Available options:

(Alternatively, use numeric values: 1, 2, or 3 respectively)

We recommend starting with regular mode, then exploring exhaustive mode, which can provide more comprehensive suggestions and enhanced bug detection.
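
For example, to select the most comprehensive mode (the value names are listed in the configuration options below):

[pr_code_suggestions]\nsuggestions_depth = \"exhaustive\"\n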

"},{"location":"tools/improve/#self-review","title":"Self-review","text":"

\ud83d\udc8e feature. Platforms supported: GitHub, GitLab

If you set in a configuration file:

[pr_code_suggestions]\ndemand_code_suggestions_self_review = true\n

The improve tool will add a checkbox below the suggestions, prompting the user to acknowledge that they have reviewed the suggestions. You can set the content of the checkbox text via:

[pr_code_suggestions]\ncode_suggestions_self_review_text = \"... (your text here) ...\"\n

Tip - Reducing visual footprint after self-review \ud83d\udc8e

The configuration parameter pr_code_suggestions.fold_suggestions_on_self_review (default is True) can be used to automatically fold the suggestions after the user clicks the self-review checkbox.

This reduces the visual footprint of the suggestions, and also indicates to the PR reviewer that the suggestions have been reviewed by the PR author, and don't require further attention.
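
For example, to keep the suggestions expanded even after the self-review checkbox is clicked:

[pr_code_suggestions]\nfold_suggestions_on_self_review = false\n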

Tip - Demanding self-review from the PR author \ud83d\udc8e

By setting:

[pr_code_suggestions]\napprove_pr_on_self_review = true\n
the tool can automatically add an approval when the PR author clicks the self-review checkbox.

"},{"location":"tools/improve/#how-many-code-suggestions-are-generated","title":"How many code suggestions are generated?","text":"

Qodo Merge uses a dynamic strategy to generate code suggestions based on the size of the pull request (PR). Here's how it works:

"},{"location":"tools/improve/#1-chunking-large-prs","title":"1. Chunking large PRs","text":""},{"location":"tools/improve/#2-generating-suggestions","title":"2. Generating suggestions","text":"

This approach has two main benefits:

Note: Chunking is primarily relevant for large PRs. For most PRs (up to 600 lines of code), Qodo Merge will be able to process the entire code in a single call.
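
The chunking behavior can be tuned with the parameters listed in the configuration options below; as an illustration, the following snippet simply restates their default values:

[pr_code_suggestions]\nauto_extended_mode = true\nnum_code_suggestions_per_chunk = 3\nmax_number_of_calls = 3\n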

"},{"location":"tools/improve/#configuration-options","title":"Configuration options","text":"General options

- extra_instructions: Optional extra instructions to the tool. For example: \"focus on the changes in the file X. Ignore change in ...\".
- commitable_code_suggestions: If set to true, the tool will display the suggestions as committable code comments. Default is false.
- enable_chat_in_code_suggestions: If set to true, QM bot will interact with comments made on code changes it has proposed. Default is true.
- suggestions_depth \ud83d\udc8e: Controls the depth of the suggestions. Can be set to 'selective', 'regular', or 'exhaustive'. Default is 'regular'.
- dual_publishing_score_threshold: Minimum score threshold for suggestions to be presented as committable PR comments in addition to the table. Default is -1 (disabled).
- focus_only_on_problems: If set to true, suggestions will focus primarily on identifying and fixing code problems, and less on style considerations like best practices, maintainability, or readability. Default is true.
- persistent_comment: If set to true, the improve comment will be persistent, meaning that every new improve request will edit the previous one. Default is true.
- suggestions_score_threshold: Any suggestion with an importance score less than this threshold will be removed. Default is 0. It is highly recommended not to set this value above 7-8, since higher values may clip relevant suggestions that can be useful.
- apply_suggestions_checkbox: Enable the checkbox to create a committable suggestion. Default is true.
- enable_more_suggestions_checkbox: Enable the checkbox to generate more suggestions. Default is true.
- enable_help_text: If set to true, the tool will display a help text in the comment. Default is true.
- enable_chat_text: If set to true, the tool will display a reference to the PR chat in the comment. Default is true.
- publish_output_no_suggestions: If set to true, the tool will publish a comment even if no suggestions were found. Default is true.
- wiki_page_accepted_suggestions: If set to true, the tool will automatically track accepted suggestions in a dedicated wiki page called .pr_agent_accepted_suggestions. Default is true.
- allow_thumbs_up_down: If set to true, all code suggestions will have thumbs up and thumbs down buttons, to encourage users to provide feedback on the suggestions. Default is false. Note that this feature is for statistics tracking. It will not affect future feedback from the AI model.

Params for number of suggestions and AI calls

- auto_extended_mode: Enable chunking the PR code and running the tool on each chunk. Default is true.
- num_code_suggestions_per_chunk: Number of code suggestions provided by the 'improve' tool, per chunk. Default is 3.
- max_number_of_calls: Maximum number of chunks. Default is 3.

"},{"location":"tools/improve/#understanding-ai-code-suggestions","title":"Understanding AI Code Suggestions","text":""},{"location":"tools/improve_component/","title":"\ud83d\udc8e Improve Components","text":""},{"location":"tools/improve_component/#overview","title":"Overview","text":"

The improve_component tool generates code suggestions for a specific code component that changed in the PR. It can be invoked manually by commenting on any PR:

/improve_component component_name\n

To get a list of the components that changed in the PR and choose the relevant component interactively, use the analyze tool.

"},{"location":"tools/improve_component/#example-usage","title":"Example usage","text":"

Invoke the tool manually by commenting /improve_component on any PR:

The tool will generate code suggestions for the selected component (if no component is stated, it will generate code suggestions for the largest component):

Notes

"},{"location":"tools/improve_component/#configuration-options","title":"Configuration options","text":""},{"location":"tools/review/","title":"Review","text":""},{"location":"tools/review/#overview","title":"Overview","text":"

The review tool scans the PR code changes, and generates a list of feedbacks about the PR, aiming to aid the reviewing process. The tool can be triggered automatically every time a new PR is opened, or can be invoked manually by commenting on any PR:

/review\n

Note that the main purpose of the review tool is to provide the PR reviewer with useful feedback and insights. The PR author, in contrast, may prefer to save time and focus on the output of the improve tool, which provides actionable code suggestions.

(Read more about the different personas in the PR process and how Qodo Merge aims to assist them in our blog)

"},{"location":"tools/review/#example-usage","title":"Example usage","text":""},{"location":"tools/review/#manual-triggering","title":"Manual triggering","text":"

Invoke the tool manually by commenting /review on any PR:

After ~30 seconds, the tool will generate a review for the PR:

If you want to edit configurations, add the relevant ones to the command:

/review --pr_reviewer.some_config1=... --pr_reviewer.some_config2=...\n
"},{"location":"tools/review/#automatic-triggering","title":"Automatic triggering","text":"

To run the review tool automatically when a PR is opened, define in a configuration file:

[github_app]\npr_commands = [\n    \"/review\",\n    ...\n]\n\n[pr_reviewer]\nextra_instructions = \"...\"\n...\n
"},{"location":"tools/review/#configuration-options","title":"Configuration options","text":"General options

- persistent_comment: If set to true, the review comment will be persistent, meaning that every new review request will edit the previous one. Default is true.
- final_update_message: When set to true, updating a persistent review comment during online commenting will automatically add a short comment with a link to the updated review in the pull request. Default is true.
- extra_instructions: Optional extra instructions to the tool. For example: \"focus on the changes in the file X. Ignore change in ...\".
- enable_help_text: If set to true, the tool will display a help text in the comment. Default is true.
- num_max_findings: Number of maximum returned findings. Default is 3.

Enable/disable specific sub-sections

- require_score_review: If set to true, the tool will add a section that scores the PR. Default is false.
- require_tests_review: If set to true, the tool will add a section that checks if the PR contains tests. Default is true.
- require_estimate_effort_to_review: If set to true, the tool will add a section that estimates the effort needed to review the PR. Default is true.
- require_can_be_split_review: If set to true, the tool will add a section that checks if the PR contains several themes, and can be split into smaller PRs. Default is false.
- require_security_review: If set to true, the tool will add a section that checks if the PR contains a possible security or vulnerability issue. Default is true.
- require_todo_scan: If set to true, the tool will add a section that lists TODO comments found in the PR code changes. Default is false.
- require_ticket_analysis_review: If set to true, and the PR contains a GitHub or Jira ticket link, the tool will add a section that checks if the PR in fact fulfilled the ticket requirements. Default is true.

Adding PR labels

You can enable or disable the specific labels that the review tool adds to the PR:

- enable_review_labels_security: If set to true, the tool will publish a 'possible security issue' label if it detects a security issue. Default is true.
- enable_review_labels_effort: If set to true, the tool will publish a 'Review effort x/5' label (1\u20135 scale). Default is true.
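
For example, to keep the security label but skip the effort label (assuming these options live under the [pr_reviewer] section, like the other review settings):

[pr_reviewer]\nenable_review_labels_security = true\nenable_review_labels_effort = false\n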

"},{"location":"tools/review/#usage-tips","title":"Usage Tips","text":""},{"location":"tools/review/#general-guidelines","title":"General guidelines","text":"

The review tool provides a collection of configurable feedbacks about a PR. It is recommended to review the Configuration options section, and choose the relevant options for your use case.

Some of the features that are disabled by default are quite useful, and should be considered for enabling. For example: require_score_review, and more.

On the other hand, if you find one of the enabled features to be irrelevant for your use case, disable it. No default configuration can fit all use cases.

"},{"location":"tools/review/#automation","title":"Automation","text":"

When you first install Qodo Merge app, the default mode for the review tool is:

pr_commands = [\"/review\", ...]\n
This means the review tool will run automatically on every PR, without any additional configuration. Edit this field to enable or disable the tool, or to change the configurations used.

"},{"location":"tools/review/#auto-generated-pr-labels-by-the-review-tool","title":"Auto-generated PR labels by the Review Tool","text":"

The review tool can automatically add labels to your pull requests:

"},{"location":"tools/review/#auto-blocking-prs-from-being-merged-based-on-the-generated-labels","title":"Auto-blocking PRs from being merged based on the generated labels","text":"

You can configure a CI/CD Action to prevent merging PRs with specific labels. For example, implement a dedicated GitHub Action.

This approach helps ensure PRs with potential security issues or ticket compliance problems will not be merged without further review.

Since AI may make mistakes or lack complete context, use this feature judiciously. For flexibility, users with appropriate permissions can remove generated labels when necessary. When a label is removed, this action will be automatically documented in the PR discussion, clearly indicating it was a deliberate override by an authorized user to allow the merge.

"},{"location":"tools/review/#extra-instructions","title":"Extra instructions","text":"

Extra instructions are important. The review tool can be configured with extra instructions, which can be used to guide the model to a feedback tailored to the needs of your project.

Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Specify the relevant sub-tool, and the relevant aspects of the PR that you want to emphasize.

Examples of extra instructions:

[pr_reviewer]\nextra_instructions=\"\"\"\\\nIn the code feedback section, emphasize the following:\n- Does the code logic cover relevant edge cases?\n- Is the code logic clear and easy to understand?\n- Is the code logic efficient?\n...\n\"\"\"\n
Use triple quotes to write multi-line instructions. Use bullet points to make the instructions more readable.

"},{"location":"tools/scan_repo_discussions/","title":"\ud83d\udc8e Scan Repo Discussions","text":"

Platforms supported: GitHub

"},{"location":"tools/scan_repo_discussions/#overview","title":"Overview","text":"

The scan_repo_discussions tool analyzes code discussions (meaning review comments over code lines) from merged pull requests over the past 12 months. It processes these discussions alongside other PR metadata to identify recurring patterns related to best practices in team feedback and code reviews, generating a comprehensive best_practices.md document that distills key insights and recommendations.

This file captures repository-specific patterns derived from your team's actual workflow and discussions, rather than more generic best practices. It will be utilized by Qodo Merge to provide tailored suggestions for improving code quality in future pull requests.

Active repositories are needed

The tool is designed to work with real-life repositories, as it relies on actual discussions to generate meaningful insights. At least 50 merged PRs are required to generate the best_practices.md file.

Additional customization

Teams are encouraged to further customize and refine these insights to better align with their specific development priorities and contexts. This can be done by editing the best_practices.md file directly when the PR is created, or iteratively over time to enhance the 'best practices' suggestions provided by Qodo Merge.

The tool can be invoked manually by commenting on any PR:

/scan_repo_discussions\n

As a response, the bot will create a new PR that contains an auto-generated best_practices.md file. Note that the scan can take several minutes to complete, since up to 250 PRs are scanned.

"},{"location":"tools/scan_repo_discussions/#example-usage","title":"Example usage","text":"

The PR created by the bot:

The best_practices.md file in the PR:

"},{"location":"tools/scan_repo_discussions/#configuration-options","title":"Configuration options","text":""},{"location":"tools/similar_code/","title":"\ud83d\udc8e Similar Code","text":""},{"location":"tools/similar_code/#overview","title":"Overview","text":"

The similar code tool retrieves the most similar code components from inside the organization's codebase, or from open-source code.

For example:

Global Search for a method called chat_completion:

Qodo Merge will examine the code component and will extract the most relevant keywords to search for similar code:

Search result link example:

Organization Search:

"},{"location":"tools/similar_code/#how-to-use","title":"How to use","text":""},{"location":"tools/similar_code/#manually","title":"Manually","text":"

To invoke the similar code tool manually, comment on the PR:

/find_similar_component COMPONENT_NAME\n

Where COMPONENT_NAME should be the name of a code component in the PR (class, method, function).

If there is a name ambiguity, there are two configurations that will help the tool to find the correct component:

example:

/find_similar_component COMPONENT_NAME --pr_find_similar_component.file=FILE_NAME\n
"},{"location":"tools/similar_code/#automatically-via-analyze-table","title":"Automatically (via Analyze table)","text":"

It can also be invoked automatically from the analyze table, which can be accessed by:

/analyze\n

Choose the components you want to find similar code for, and click on the similar checkbox.

You can search for similar code either within the organization's codebase or globally, which includes open-source repositories. Each result will include the relevant code components along with their associated license details.

"},{"location":"tools/similar_code/#configuration-options","title":"Configuration options","text":""},{"location":"tools/similar_issues/","title":"Similar issues","text":""},{"location":"tools/similar_issues/#overview","title":"Overview","text":"

The similar issue tool retrieves the most similar issues to the current issue. It can be invoked manually by commenting on any PR:

/similar_issue\n
"},{"location":"tools/similar_issues/#example-usage","title":"Example usage","text":"

Note that to perform retrieval, the similar_issue tool indexes all the repo's previous issues (once).

"},{"location":"tools/similar_issues/#selecting-a-vector-database","title":"Selecting a Vector Database","text":"

Configure your preferred database by changing the pr_similar_issue parameter in the configuration.toml file.
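
As a sketch only, assuming the database is selected via a vectordb key under the [pr_similar_issue] section (verify the exact key name in the configuration reference):

[pr_similar_issue]\nvectordb = \"lancedb\"\n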

"},{"location":"tools/similar_issues/#available-options","title":"Available Options","text":"

Choose from the following Vector Databases:

  1. LanceDB
  2. Pinecone
"},{"location":"tools/similar_issues/#pinecone-configuration","title":"Pinecone Configuration","text":"

To use Pinecone with the similar issue tool, add these credentials to .secrets.toml (or set as environment variables):

[pinecone]\napi_key = \"...\"\nenvironment = \"...\"\n

These parameters can be obtained by registering with Pinecone.

"},{"location":"tools/similar_issues/#how-to-use","title":"How to use","text":""},{"location":"tools/test/","title":"\ud83d\udc8e Generate Tests","text":""},{"location":"tools/test/#overview","title":"Overview","text":"

By combining LLM abilities with static code analysis, the test tool generates tests for a selected component, based on the PR code changes. It can be invoked manually by commenting on any PR:

/test component_name\n

where 'component_name' is the name of a specific component in the PR. To get a list of the components that changed in the PR and choose the relevant component interactively, use the analyze tool.

"},{"location":"tools/test/#example-usage","title":"Example usage","text":"

Invoke the tool manually by commenting /test on any PR. The tool will generate tests for the selected component (if no component is stated, it will generate tests for the largest component):

(Example taken from here):

Notes

"},{"location":"tools/test/#configuration-options","title":"Configuration options","text":""},{"location":"tools/update_changelog/","title":"Update Changelog","text":""},{"location":"tools/update_changelog/#overview","title":"Overview","text":"

The update_changelog tool automatically updates the CHANGELOG.md file with the PR changes. It can be invoked manually by commenting on any PR:

/update_changelog\n
"},{"location":"tools/update_changelog/#example-usage","title":"Example usage","text":""},{"location":"tools/update_changelog/#configuration-options","title":"Configuration options","text":"

Under the section pr_update_changelog, the configuration file contains options to customize the 'update changelog' tool:

"},{"location":"usage-guide/","title":"Usage guide","text":"

This section provides a detailed guide on how to use Qodo Merge. It includes information on how to adjust Qodo Merge configurations, define which tools will run automatically, and other advanced configurations.

"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/","title":"EXAMPLE BEST PRACTICE","text":""},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#recommend-python-best-practices","title":"Recommend Python Best Practices","text":"

This document outlines a series of recommended best practices for Python development. These guidelines aim to improve code quality, maintainability, and readability.

"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#imports","title":"Imports","text":"

Use import statements for packages and modules only, not for individual types, classes, or functions.

"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#definition","title":"Definition","text":"

Reusability mechanism for sharing code from one module to another.

"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#decision","title":"Decision","text":"

For example, the module sound.effects.echo may be imported as follows:

from sound.effects import echo\n...\necho.EchoFilter(input, output, delay=0.7, atten=4)\n

Do not use relative names in imports. Even if the module is in the same package, use the full package name. This helps prevent unintentionally importing a package twice.

"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#exemptions","title":"Exemptions","text":"

Exemptions from this rule:

"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#packages","title":"Packages","text":"

Import each module using the full pathname location of the module.

"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#decision_1","title":"Decision","text":"

All new code should import each module by its full package name.

Imports should be as follows:

Yes:\n  # Reference absl.flags in code with the complete name (verbose).\n  import absl.flags\n  from doctor.who import jodie\n\n  _FOO = absl.flags.DEFINE_string(...)\n
Yes:\n  # Reference flags in code with just the module name (common).\n  from absl import flags\n  from doctor.who import jodie\n\n  _FOO = flags.DEFINE_string(...)\n

(assume this file lives in doctor/who/ where jodie.py also exists)

No:\n  # Unclear what module the author wanted and what will be imported.  The actual\n  # import behavior depends on external factors controlling sys.path.\n  # Which possible jodie module did the author intend to import?\n  import jodie\n

The directory the main binary is located in should not be assumed to be in sys.path despite that happening in some environments. This being the case, code should assume that import jodie refers to a third-party or top-level package named jodie, not a local jodie.py.

"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#default-iterators-and-operators","title":"Default Iterators and Operators","text":"

Use default iterators and operators for types that support them, like lists, dictionaries, and files.

"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#definition_1","title":"Definition","text":"

Container types, like dictionaries and lists, define default iterators and membership test operators (\u201cin\u201d and \u201cnot in\u201d).

"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#decision_2","title":"Decision","text":"

Use default iterators and operators for types that support them, like lists, dictionaries, and files. The built-in types define iterator methods, too. Prefer these methods to methods that return lists, except that you should not mutate a container while iterating over it.

Yes:  for key in adict: ...\n      if obj in alist: ...\n      for line in afile: ...\n      for k, v in adict.items(): ...\n
No:   for key in adict.keys(): ...\n      for line in afile.readlines(): ...\n
"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#lambda-functions","title":"Lambda Functions","text":"

Okay for one-liners. Prefer generator expressions over map() or filter() with a lambda.

"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#decision_3","title":"Decision","text":"

Lambdas are allowed. If the code inside the lambda function spans multiple lines or is longer than 60-80 chars, it might be better to define it as a regular nested function.

For common operations like multiplication, use the functions from the operator module instead of lambda functions. For example, prefer operator.mul to lambda x, y: x * y.

"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#default-argument-values","title":"Default Argument Values","text":"

Okay in most cases.

"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#definition_2","title":"Definition","text":"

You can specify values for variables at the end of a function\u2019s parameter list, e.g., def foo(a, b=0):. If foo is called with only one argument, b is set to 0. If it is called with two arguments, b has the value of the second argument.

"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#decision_4","title":"Decision","text":"

Okay to use with the following caveat:

Do not use mutable objects as default values in the function or method definition.

Yes: def foo(a, b=None):\n         if b is None:\n             b = []\nYes: def foo(a, b: Sequence | None = None):\n         if b is None:\n             b = []\nYes: def foo(a, b: Sequence = ()):  # Empty tuple OK since tuples are immutable.\n         ...\n
from absl import flags\n_FOO = flags.DEFINE_string(...)\n\nNo:  def foo(a, b=[]):\n         ...\nNo:  def foo(a, b=time.time()):  # Is `b` supposed to represent when this module was loaded?\n         ...\nNo:  def foo(a, b=_FOO.value):  # sys.argv has not yet been parsed...\n         ...\nNo:  def foo(a, b: Mapping = {}):  # Could still get passed to unchecked code.\n         ...\n
"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#truefalse-evaluations","title":"True/False Evaluations","text":"

Use the \u201cimplicit\u201d false if possible, e.g., if foo: rather than if foo != []:
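
A minimal illustration (our own example, in the guide's Yes/No style):

Yes:  if not users: ...\n      if i % 10 == 0: ...\nNo:   if len(users) == 0: ...\n      if not i % 10: ...\n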

"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#lexical-scoping","title":"Lexical Scoping","text":"

Okay to use.

An example of the use of this feature is:

def get_adder(summand1: float) -> Callable[[float], float]:\n    \"\"\"Returns a function that adds numbers to a given number.\"\"\"\n    def adder(summand2: float) -> float:\n        return summand1 + summand2\n\n    return adder\n
"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#decision_5","title":"Decision","text":"

Okay to use.

"},{"location":"usage-guide/EXAMPLE_BEST_PRACTICE/#threading","title":"Threading","text":"

Do not rely on the atomicity of built-in types.

While Python\u2019s built-in data types such as dictionaries appear to have atomic operations, there are corner cases where they aren\u2019t atomic (e.g. if __hash__ or __eq__ are implemented as Python methods) and their atomicity should not be relied upon. Neither should you rely on atomic variable assignment (since this in turn depends on dictionaries).

Use the queue module\u2019s Queue data type as the preferred way to communicate data between threads. Otherwise, use the threading module and its locking primitives. Prefer condition variables and threading.Condition instead of using lower-level locks.
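
As a rough sketch of the preferred Queue-based pattern (an illustrative example of ours; the worker and work_queue names are arbitrary):

import queue\nimport threading\n\ndef worker(work_queue: queue.Queue) -> None:\n    while True:\n        item = work_queue.get()  # blocks until an item is available\n        if item is None:  # a None sentinel tells the worker to stop\n            break\n        # ... process the item here ...\n        work_queue.task_done()\n\nwork_queue = queue.Queue()\nthreading.Thread(target=worker, args=(work_queue,)).start()\nwork_queue.put('job-1')  # illustrative work item\nwork_queue.put(None)  # signal shutdown\n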

"},{"location":"usage-guide/additional_configurations/","title":"Additional Configurations","text":""},{"location":"usage-guide/additional_configurations/#show-possible-configurations","title":"Show possible configurations","text":"

The possible configurations of Qodo Merge are stored here. In the tools page you can find explanations on how to use these configurations for each tool.

To print all the available configurations as a comment on your PR, you can use the following command:

/config\n

To view the actual configurations used for a specific tool, after all user settings are applied, append the --config.output_relevant_configurations=true suffix to the tool's command. For example:

/improve --config.output_relevant_configurations=true\n

This will output an additional field showing the actual configurations used for the improve tool.

"},{"location":"usage-guide/additional_configurations/#ignoring-files-from-analysis","title":"Ignoring files from analysis","text":"

In some cases, you may want to exclude specific files or directories from the analysis performed by Qodo Merge. This can be useful, for example, when you have files that are generated automatically or files that shouldn't be reviewed, like vendor code.

You can ignore files or folders by setting glob or regex patterns in the [ignore] section of a configuration file, as shown in the examples below.

"},{"location":"usage-guide/additional_configurations/#example-usage","title":"Example usage","text":"

Let's look at an example where we want to ignore all files with .py extension from the analysis.

To ignore Python files in a PR with online usage, comment on a PR: /review --ignore.glob=\"['*.py']\"

To ignore Python files in all PRs using glob pattern, set in a configuration file:

[ignore]\nglob = ['*.py']\n

And to ignore Python files in all PRs using regex pattern, set in a configuration file:

[ignore]\nregex = ['.*\\.py$']\n
"},{"location":"usage-guide/additional_configurations/#extra-instructions","title":"Extra instructions","text":"

All Qodo Merge tools have a parameter called extra_instructions, which lets you add free-text extra instructions. Example usage:

/update_changelog --pr_update_changelog.extra_instructions=\"Make sure to update also the version ...\"\n
"},{"location":"usage-guide/additional_configurations/#language-settings","title":"Language Settings","text":"

The default response language for Qodo Merge is U.S. English. However, some development teams may prefer to display information in a different language. For example, your team's workflow might improve if PR descriptions and code suggestions are set to your country's native language.

To configure this, set the response_language parameter in the configuration file. This will prompt the model to respond in the specified language. Use a standard locale code based on ISO 3166 (country codes) and ISO 639 (language codes) to define a language-country pair. See this comprehensive list of locale codes.

Example:

[config]\nresponse_language = \"it-IT\"\n

This will set the response language globally for all the commands to Italian.

Important: Note that only dynamic text generated by the AI model is translated to the configured language. Static text, such as labels and table headers that are not part of the AI model's response, will remain in U.S. English. In addition, the model you are using must have good support for the specified language.

"},{"location":"usage-guide/additional_configurations/#patch-extra-lines","title":"Patch Extra Lines","text":"

By default, around any change in your PR, git patch provides three lines of context above and below the change.

@@ -12,5 +12,5 @@ def func1():\n code line that already existed in the file...\n code line that already existed in the file...\n code line that already existed in the file....\n-code line that was removed in the PR\n+new code line added in the PR\n code line that already existed in the file...\n code line that already existed in the file...\n code line that already existed in the file...\n

Qodo Merge will try to increase the number of lines of context, via the following parameters:

[config]\npatch_extra_lines_before=3\npatch_extra_lines_after=1\n

Increasing these values provides more context to the model, but will also increase token usage, and may overwhelm the model with information unrelated to the actual PR code changes.

If the PR is too large (see PR Compression strategy), Qodo Merge may automatically set these values to 0, and will use the original git patch.

"},{"location":"usage-guide/additional_configurations/#log-level","title":"Log Level","text":"

Qodo Merge allows you to control the verbosity of logging by using the log_level configuration parameter. This is particularly useful for troubleshooting and debugging issues with your PR workflows.

[config]\nlog_level = \"DEBUG\"  # Options: \"DEBUG\", \"INFO\", \"WARNING\", \"ERROR\", \"CRITICAL\"\n

The default log level is \"DEBUG\", which provides detailed output of all operations. If you prefer less verbose logs, you can set higher log levels like \"INFO\" or \"WARNING\".

"},{"location":"usage-guide/additional_configurations/#integrating-with-logging-observability-platforms","title":"Integrating with Logging Observability Platforms","text":"

Various logging observability tools can be used out-of-the-box when using the default LiteLLM AI Handler. Simply configure the LiteLLM callback settings in configuration.toml and set environment variables according to the LiteLLM documentation.

For example, to use LangSmith you can add the following to your configuration.toml file:

[litellm]\nenable_callbacks = true\nsuccess_callback = [\"langsmith\"]\nfailure_callback = [\"langsmith\"]\nservice_callback = []\n

Then set the following environment variables:

LANGSMITH_API_KEY=<api_key>\nLANGSMITH_PROJECT=<project>\nLANGSMITH_BASE_URL=<url>\n
"},{"location":"usage-guide/additional_configurations/#ignoring-automatic-commands-in-prs","title":"Ignoring automatic commands in PRs","text":"

Qodo Merge allows you to automatically ignore certain PRs based on various criteria:

"},{"location":"usage-guide/additional_configurations/#ignoring-prs-with-specific-titles","title":"Ignoring PRs with specific titles","text":"

To ignore PRs with a specific title such as \"[Bump]: ...\", you can add the following to your configuration.toml file:

[config]\nignore_pr_title = [\"\\\\[Bump\\\\]\"]\n

Where the ignore_pr_title is a list of regex patterns to match the PR title you want to ignore. Default is ignore_pr_title = [\"^\\\\[Auto\\\\]\", \"^Auto\"].

"},{"location":"usage-guide/additional_configurations/#ignoring-prs-between-specific-branches","title":"Ignoring PRs between specific branches","text":"

To ignore PRs from specific source or target branches, you can add the following to your configuration.toml file:

[config]\nignore_pr_source_branches = ['develop', 'main', 'master', 'stage']\nignore_pr_target_branches = [\"qa\"]\n

Where the ignore_pr_source_branches and ignore_pr_target_branches are lists of regex patterns to match the source and target branches you want to ignore. They are not mutually exclusive; you can use them together or separately.

"},{"location":"usage-guide/additional_configurations/#ignoring-prs-from-specific-repositories","title":"Ignoring PRs from specific repositories","text":"

To ignore PRs from specific repositories, you can add the following to your configuration.toml file:

[config]\nignore_repositories = [\"my-org/my-repo1\", \"my-org/my-repo2\"]\n

Where the ignore_repositories is a list of regex patterns to match the repositories you want to ignore. This is useful when you have multiple repositories and want to exclude certain ones from analysis.

"},{"location":"usage-guide/additional_configurations/#ignoring-prs-not-from-specific-folders","title":"Ignoring PRs not from specific folders","text":"

To allow only specific folders (often needed in large monorepos), set:

[config]\nallow_only_specific_folders=['folder1','folder2']\n

For the configuration above, automatic feedback will only be triggered when the PR changes include files whose path contains 'folder1' or 'folder2'.

"},{"location":"usage-guide/additional_configurations/#ignoring-prs-containing-specific-labels","title":"Ignoring PRs containing specific labels","text":"

To ignore PRs containing specific labels, you can add the following to your configuration.toml file:

[config]\nignore_pr_labels = [\"do-not-merge\"]\n

Where the ignore_pr_labels is a list of labels; if any of these labels is present in the PR, the PR will be ignored.

"},{"location":"usage-guide/additional_configurations/#ignoring-prs-from-specific-users","title":"Ignoring PRs from specific users","text":"

Qodo Merge tries to automatically identify and ignore pull requests created by bots.

While this detection is robust, it may not catch all cases.

To supplement the automatic bot detection, you can manually specify users to ignore. Add the following to your configuration.toml file to ignore PRs from specific users:

[config]\nignore_pr_authors = [\"my-special-bot-user\", ...]\n

Where the ignore_pr_authors is a list of usernames that you want to ignore.

Note

There is one specific case where bots will receive an automatic response: when a PR they created has a failed test. In that case, the ci_feedback tool will be invoked.

"},{"location":"usage-guide/additional_configurations/#ignoring-generated-files-by-languageframework","title":"Ignoring Generated Files by Language/Framework","text":"

To automatically exclude files generated by specific languages or frameworks, you can add the following to your configuration.toml file:

[config]\nignore_language_framework = ['protobuf', ...]\n

You can view the list of auto-generated file patterns in generated_code_ignore.toml. Files matching these glob patterns will be automatically excluded from PR Agent analysis.

"},{"location":"usage-guide/automations_and_usage/","title":"Usage and Automation","text":""},{"location":"usage-guide/automations_and_usage/#local-repo-cli","title":"Local repo (CLI)","text":"

When running from your locally cloned Qodo Merge repo (CLI), your local configuration file will be used. Examples of invoking the different tools via the CLI:
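
A few representative invocations (these follow the same pattern used in the notes below; other tools can be invoked the same way):

python -m pr_agent.cli --pr_url=<pr_url> /review\npython -m pr_agent.cli --pr_url=<pr_url> /describe\npython -m pr_agent.cli --pr_url=<pr_url> /improve\n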

<pr_url> is the URL of the relevant PR (for example: #50).

Notes:

  1. In addition to editing your local configuration file, you can also change any configuration value by adding it to the command line:
python -m pr_agent.cli --pr_url=<pr_url>  /review --pr_reviewer.extra_instructions=\"focus on the file: ...\"\n
  2. You can print results locally, without publishing them, by setting in configuration.toml:
[config]\npublish_output=false\nverbosity_level=2\n

This is useful for debugging or experimenting with different tools.

  3. Git provider: The git_provider field in a configuration file determines the git provider that will be used by Qodo Merge. Currently, the following providers are supported: github (default), gitlab, bitbucket, azure, codecommit, local, gitea, and gerrit.
"},{"location":"usage-guide/automations_and_usage/#cli-health-check","title":"CLI Health Check","text":"

To verify that Qodo Merge has been configured correctly, you can run this health check command from the repository root:

python -m tests.health_test.main\n

If the health check passes, you will see the following output:

========\nHealth test passed successfully\n========\n

This message is printed at the end of the run.

Before running the health check, ensure the required configuration (such as your model API keys) is in place.

"},{"location":"usage-guide/automations_and_usage/#online-usage","title":"Online usage","text":"

Online usage means invoking Qodo Merge tools by commenting on a PR, for example /describe, /review, or /improve.

To edit a specific configuration value, just add --config_path=<value> to any command. For example, if you want to edit the review tool configurations, you can run:

/review --pr_reviewer.extra_instructions=\"...\" --pr_reviewer.require_score_review=false\n

Any configuration value in the configuration file can be similarly edited. Comment /config to see the list of available configurations.

"},{"location":"usage-guide/automations_and_usage/#qodo-merge-automatic-feedback","title":"Qodo Merge Automatic Feedback","text":""},{"location":"usage-guide/automations_and_usage/#disabling-all-automatic-feedback","title":"Disabling all automatic feedback","text":"

To easily disable all automatic feedback from Qodo Merge (GitHub App, GitLab Webhook, BitBucket App, Azure DevOps Webhook), set in a configuration file:

[config]\ndisable_auto_feedback = true\n

When this parameter is set to true, Qodo Merge will not run any automatic tools (like describe, review, improve) when a new PR is opened, or when new code is pushed to an open PR.

"},{"location":"usage-guide/automations_and_usage/#github-app","title":"GitHub App","text":"

Configurations for Qodo Merge

Qodo Merge for GitHub is an App hosted by Qodo, so all the instructions below are also relevant for Qodo Merge users. The same goes for the GitLab webhook and BitBucket App sections.

"},{"location":"usage-guide/automations_and_usage/#github-app-automatic-tools-when-a-new-pr-is-opened","title":"GitHub app automatic tools when a new PR is opened","text":"

The github_app section defines GitHub app specific configurations.

The configuration parameter pr_commands defines the list of tools that will be run automatically when a new PR is opened:

[github_app]\npr_commands = [\n    \"/describe\",\n    \"/review\",\n    \"/improve\",\n]\n

This means that when a new PR is opened/reopened or marked as ready for review, Qodo Merge will run the describe, review and improve tools.

Draft PRs:

By default, draft PRs are not considered for automatic tools, but you can change this by setting the feedback_on_draft_pr parameter to true in the configuration file.

[github_app]\nfeedback_on_draft_pr = true\n

Changing default tool parameters:

You can override the default tool parameters by using one of the three options for a configuration file: wiki, local, or global. For example, if your configuration file contains:

[pr_description]\ngenerate_ai_title = true\n

Every time you run the describe tool (including automatic runs), the PR title will be generated by the AI.

Parameters for automated runs:

You can customize configurations specifically for automated runs by using the --config_path=<value> parameter. For instance, to modify the review tool settings only for newly opened PRs, use:

[github_app]\npr_commands = [\n    \"/describe\",\n    \"/review --pr_reviewer.extra_instructions='focus on the file: ...'\",\n    \"/improve\",\n]\n
"},{"location":"usage-guide/automations_and_usage/#github-app-automatic-tools-for-push-actions-commits-to-an-open-pr","title":"GitHub app automatic tools for push actions (commits to an open PR)","text":"

In addition to running automatic tools when a PR is opened, the GitHub app can also respond to new code that is pushed to an open PR.

The configuration toggle handle_push_trigger can be used to enable this feature. The configuration parameter push_commands defines the list of tools that will be run automatically when new code is pushed to the PR.

[github_app]\nhandle_push_trigger = true\npush_commands = [\n    \"/describe\",\n    \"/review\",\n]\n

This means that when new code is pushed to the PR, Qodo Merge will run the describe and review tools, with the specified parameters.

"},{"location":"usage-guide/automations_and_usage/#github-action","title":"GitHub Action","text":"

GitHub Action is a different way to trigger Qodo Merge tools, and uses a different configuration mechanism than GitHub App. You can configure settings for GitHub Action by adding environment variables under the env section in .github/workflows/pr_agent.yml file. Specifically, start by setting the following environment variables:

      env:\n        OPENAI_KEY: ${{ secrets.OPENAI_KEY }} # Make sure to add your OpenAI key to your repo secrets\n        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Make sure to add your GitHub token to your repo secrets\n        github_action_config.auto_review: \"true\" # enable\\disable auto review\n        github_action_config.auto_describe: \"true\" # enable\\disable auto describe\n        github_action_config.auto_improve: \"true\" # enable\\disable auto improve\n        github_action_config.pr_actions: '[\"opened\", \"reopened\", \"ready_for_review\", \"review_requested\"]'\n

github_action_config.auto_review, github_action_config.auto_describe and github_action_config.auto_improve are used to enable/disable automatic tools that run when a new PR is opened. If not set, the default configuration is for all three tools to run automatically when a new PR is opened.

github_action_config.pr_actions is used to configure which pull_request events will trigger the enabled auto flags. If not set, the default configuration is [\"opened\", \"reopened\", \"ready_for_review\", \"review_requested\"].

github_action_config.enable_output is used to enable/disable the GitHub Actions output parameter (default is true). The review result is output as JSON to the steps.{step-id}.outputs.review property. The JSON structure is equivalent to the YAML data structure defined in pr_reviewer_prompts.toml.

Note that you can pass additional configuration parameters by adding environment variables to .github/workflows/pr_agent.yml, or by using a .pr_agent.toml configuration file in the root of your repo.

For example, you can set an environment variable: pr_description.publish_labels=false, or add a .pr_agent.toml file with the following content:

[pr_description]\npublish_labels = false\n

to prevent Qodo Merge from publishing labels when running the describe tool.

"},{"location":"usage-guide/automations_and_usage/#gitlab-webhook","title":"GitLab Webhook","text":"

After setting up a GitLab webhook, to control which commands will run automatically when a new MR is opened, you can set the pr_commands parameter in the configuration file, similar to the GitHub App:

[gitlab]\npr_commands = [\n    \"/describe\",\n    \"/review\",\n    \"/improve\",\n]\n

The GitLab webhook can also respond to new code that is pushed to an open MR. The configuration toggle handle_push_trigger can be used to enable this feature. The configuration parameter push_commands defines the list of tools that will be run automatically when new code is pushed to the MR.

[gitlab]\nhandle_push_trigger = true\npush_commands = [\n    \"/describe\",\n    \"/review\",\n]\n

Note that to use the 'handle_push_trigger' feature, you also need to grant the GitLab webhook the \"Push events\" scope.

"},{"location":"usage-guide/automations_and_usage/#bitbucket-app","title":"BitBucket App","text":"

Similar to GitHub app, when running Qodo Merge from BitBucket App, the default configuration file will be initially loaded.

By uploading a local .pr_agent.toml file to the root of the repo's default branch, you can edit and customize any configuration parameter. Note that you need to upload .pr_agent.toml prior to creating a PR, in order for the configuration to take effect.

For example, if your local .pr_agent.toml file contains:

[pr_reviewer]\nextra_instructions = \"Answer in japanese\"\n

Each time you invoke a /review tool, it will use the extra instructions you set in the local configuration file.

Note that among other limitations, BitBucket provides relatively low rate limits for applications (up to 1000 requests per hour), and does not provide an API to track the actual rate-limit usage. If you experience a lack of responses from Qodo Merge, you might want to set bitbucket_app.avoid_full_files=true in your configuration file (see the example below). This will prevent Qodo Merge from acquiring the full file content, and will only use the diff content. This reduces the number of requests made to BitBucket, at the cost of a small decrease in accuracy, as dynamic context will not be applicable.
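
For example, a minimal sketch based on the setting named above:

[bitbucket_app]\navoid_full_files = true\n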

"},{"location":"usage-guide/automations_and_usage/#bitbucket-self-hosted-app-automatic-tools","title":"BitBucket Self-Hosted App automatic tools","text":"

To control which commands will run automatically when a new PR is opened, you can set the pr_commands parameter in the configuration file. Specifically, set the following values:

[bitbucket_app]\npr_commands = [\n    \"/review\",\n    \"/improve --pr_code_suggestions.commitable_code_suggestions=true --pr_code_suggestions.suggestions_score_threshold=7\",\n]\n

Note that specifically for Bitbucket we recommend using --pr_code_suggestions.suggestions_score_threshold=7, which is also the default value we set for Bitbucket. Since this platform only supports inline code suggestions, we want to limit the number of suggestions presented.

To enable BitBucket app to respond to each push to the PR, set (for example):

[bitbucket_app]\nhandle_push_trigger = true\npush_commands = [\n    \"/describe\",\n    \"/review\",\n]\n
"},{"location":"usage-guide/automations_and_usage/#azure-devops-provider","title":"Azure DevOps provider","text":"

To use the Azure DevOps provider, use the following settings in configuration.toml:

[config]\ngit_provider=\"azure\"\n

The Azure DevOps provider supports PAT token or DefaultAzureCredential authentication. A PAT is faster to create, but has a built-in expiration date and will use the user's identity for API calls. With DefaultAzureCredential you can use a managed identity or a service principal, which are more secure and create a separate ADO user identity (via AAD) for the agent.

If PAT was chosen, you can assign the value in .secrets.toml. If DefaultAzureCredential was chosen, you can assign the additional env vars like AZURE_CLIENT_SECRET directly, or use managed identity/az cli (for local development) without any additional configuration. In any case, the 'org' value must be assigned in .secrets.toml:

[azure_devops]\norg = \"https://dev.azure.com/YOUR_ORGANIZATION/\"\n# pat = \"YOUR_PAT_TOKEN\" needed only if using PAT for authentication\n
"},{"location":"usage-guide/automations_and_usage/#azure-devops-webhook","title":"Azure DevOps Webhook","text":"

To control which commands will run automatically when a new PR is opened, you can set the pr_commands parameter in the configuration file, similar to the GitHub App:

[azure_devops_server]\npr_commands = [\n    \"/describe\",\n    \"/review\",\n    \"/improve\",\n]\n
"},{"location":"usage-guide/automations_and_usage/#gitea-webhook","title":"Gitea Webhook","text":"

After setting up a Gitea webhook, to control which commands will run automatically when a new MR is opened, you can set the pr_commands parameter in the configuration file, similar to the GitHub App:

[gitea]\npr_commands = [\n    \"/describe\",\n    \"/review\",\n    \"/improve\",\n]\n
"},{"location":"usage-guide/changing_a_model/","title":"Changing a Model","text":""},{"location":"usage-guide/changing_a_model/#changing-a-model-in-pr-agent","title":"Changing a model in PR-Agent","text":"

See here for a list of available models. To use a different model than the default (o4-mini), you need to edit the following fields in the configuration file:

[config]\nmodel = \"...\"\nfallback_models = [\"...\"]\n

For models and environments not from OpenAI, you might need to provide additional keys and other parameters. You can provide parameters via a configuration file or via environment variables.

Model-specific environment variables

See the litellm documentation for the environment variables needed per model, as they may vary and change over time. Our per-model documentation may not always be up to date with the latest changes. Failing to set the needed keys for a specific model will usually result in litellm not identifying the model type, and failing to utilize it.

"},{"location":"usage-guide/changing_a_model/#openai-like-api","title":"OpenAI like API","text":"

To use an OpenAI like API, set the following in your .secrets.toml file:

[openai]\napi_base = \"https://api.openai.com/v1\"\napi_key = \"sk-...\"\n

or use the environment variables (make sure to use double underscores __):

OPENAI__API_BASE=https://api.openai.com/v1\nOPENAI__KEY=sk-...\n
"},{"location":"usage-guide/changing_a_model/#openai-flex-processing","title":"OpenAI Flex Processing","text":"

To reduce costs for non-urgent/background tasks, enable Flex Processing:

[litellm]\nextra_body='{\"processing_mode\": \"flex\"}'\n

See OpenAI Flex Processing docs for details.

"},{"location":"usage-guide/changing_a_model/#azure","title":"Azure","text":"

To use Azure, set in your .secrets.toml (working from CLI), or in the GitHub Settings > Secrets and variables (working from GitHub App or GitHub Action):

[openai]\nkey = \"\" # your azure api key\napi_type = \"azure\"\napi_version = '2023-05-15'  # Check Azure documentation for the current API version\napi_base = \"\"  # The base URL for your Azure OpenAI resource. e.g. \"https://<your resource name>.openai.azure.com\"\ndeployment_id = \"\"  # The deployment name you chose when you deployed the engine\n

and set in your configuration file:

[config]\nmodel=\"\" # the OpenAI model you've deployed on Azure (e.g. gpt-4o)\nfallback_models=[\"...\"]\n

To use Azure AD (Entra id) based authentication set in your .secrets.toml (working from CLI), or in the GitHub Settings > Secrets and variables (working from GitHub App or GitHub Action):

[azure_ad]\nclient_id = \"\"  # Your Azure AD application client ID\nclient_secret = \"\"  # Your Azure AD application client secret\ntenant_id = \"\"  # Your Azure AD tenant ID\napi_base = \"\"  # Your Azure OpenAI service base URL (e.g., https://openai.xyz.com/)\n

Passing custom headers to the underlying LLM model API can be done by setting the extra_headers parameter for litellm.

[litellm]\nextra_headers='{\"projectId\": \"<authorized projectId>\", ...}' # The value of this setting should be a JSON string representing the desired headers; a ValueError is thrown otherwise.\n

This enables users to pass authorization tokens or API keys when routing requests through an API management gateway.

"},{"location":"usage-guide/changing_a_model/#ollama","title":"Ollama","text":"

You can run models locally through either vLLM or Ollama.

E.g. to use a new model locally via Ollama, set in .secrets.toml or in a configuration file:

[config]\nmodel = \"ollama/qwen2.5-coder:32b\"\nfallback_models=[\"ollama/qwen2.5-coder:32b\"]\ncustom_model_max_tokens=128000 # set the maximal input tokens for the model\nduplicate_examples=true # will duplicate the examples in the prompt, to help the model to generate structured output\n\n[ollama]\napi_base = \"http://localhost:11434\" # or whatever port you're running Ollama on\n

By default, Ollama uses a context window size of 2048 tokens. In most cases this is not enough to cover the pr-agent prompt and the pull-request diff. The context window size can be overridden with the OLLAMA_CONTEXT_LENGTH environment variable. For example, to set the default context length to 8K, use: OLLAMA_CONTEXT_LENGTH=8192 ollama serve. You can find more information in the official Ollama FAQ.

Please note that the custom_model_max_tokens setting should be configured in accordance with OLLAMA_CONTEXT_LENGTH; failure to do so may result in unexpected model output. See the illustrative pairing below.
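
For example, a consistent pairing might look like the following (the 8K value is only illustrative):

# shell: start Ollama with a matching context window\n# OLLAMA_CONTEXT_LENGTH=8192 ollama serve\n\n[config]\ncustom_model_max_tokens=8192  # keep aligned with OLLAMA_CONTEXT_LENGTH\n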

Local models vs commercial models

Qodo Merge is compatible with almost any AI model, but analyzing complex code repositories and pull requests requires a model specifically optimized for code analysis.

Commercial models such as GPT-4, Claude Sonnet, and Gemini have demonstrated robust capabilities in generating structured output for code analysis tasks with large input. In contrast, most open-source models currently available (as of January 2025) face challenges with these complex tasks.

Based on our testing, local open-source models are suitable for experimentation and learning purposes (mainly for the ask command), but they are not suitable for production-level code analysis tasks.

Hence, for production workflows and real-world usage, we recommend using commercial models.

"},{"location":"usage-guide/changing_a_model/#hugging-face","title":"Hugging Face","text":"

To use a new model with Hugging Face Inference Endpoints, for example, set:

[config] # in configuration.toml\nmodel = \"huggingface/meta-llama/Llama-2-7b-chat-hf\"\nfallback_models=[\"huggingface/meta-llama/Llama-2-7b-chat-hf\"]\ncustom_model_max_tokens=... # set the maximal input tokens for the model\n\n[huggingface] # in .secrets.toml\nkey = ... # your Hugging Face api key\napi_base = ... # the base url for your Hugging Face inference endpoint\n

(you can obtain a Llama2 key from here)

"},{"location":"usage-guide/changing_a_model/#replicate","title":"Replicate","text":"

To use Llama2 model with Replicate, for example, set:

[config] # in configuration.toml\nmodel = \"replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1\"\nfallback_models=[\"replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1\"]\n[replicate] # in .secrets.toml\nkey = ...\n

(you can obtain a Llama2 key from here)

Also, review the AiHandler file for instructions on how to set keys for other models.

"},{"location":"usage-guide/changing_a_model/#groq","title":"Groq","text":"

To use Llama3 model with Groq, for example, set:

[config] # in configuration.toml\nmodel = \"llama3-70b-8192\"\nfallback_models = [\"groq/llama3-70b-8192\"]\n[groq] # in .secrets.toml\nkey = ... # your Groq api key\n

(you can obtain a Groq key from here)

"},{"location":"usage-guide/changing_a_model/#xai","title":"xAI","text":"

To use xAI's models with PR-Agent, set:

[config] # in configuration.toml\nmodel = \"xai/grok-2-latest\"\nfallback_models = [\"xai/grok-2-latest\"] # or any other model as fallback\n\n[xai] # in .secrets.toml\nkey = \"...\" # your xAI API key\n

You can obtain an xAI API key from xAI's console by creating an account and navigating to the developer settings page.

"},{"location":"usage-guide/changing_a_model/#vertex-ai","title":"Vertex AI","text":"

To use Google's Vertex AI platform and its associated models (chat-bison/codechat-bison), set:

[config] # in configuration.toml\nmodel = \"vertex_ai/codechat-bison\"\nfallback_models=\"vertex_ai/codechat-bison\"\n\n[vertexai] # in .secrets.toml\nvertex_project = \"my-google-cloud-project\"\nvertex_location = \"\"\n

Your application default credentials will be used for authentication so there is no need to set explicit credentials in most environments.

If you do want to set explicit credentials, then you can use the GOOGLE_APPLICATION_CREDENTIALS environment variable set to a path to a json credentials file.
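
For example (the file path below is only a placeholder):

GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json\n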

"},{"location":"usage-guide/changing_a_model/#google-ai-studio","title":"Google AI Studio","text":"

To use Google AI Studio models, set the relevant models in the configuration section of the configuration file:

[config] # in configuration.toml\nmodel=\"gemini/gemini-1.5-flash\"\nfallback_models=[\"gemini/gemini-1.5-flash\"]\n\n[google_ai_studio] # in .secrets.toml\ngemini_api_key = \"...\"\n

If you don't want to set the API key in the .secrets.toml file, you can set the GOOGLE_AI_STUDIO.GEMINI_API_KEY environment variable.

"},{"location":"usage-guide/changing_a_model/#anthropic","title":"Anthropic","text":"

To use Anthropic models, set the relevant models in the configuration section of the configuration file:

[config]\nmodel=\"anthropic/claude-3-opus-20240229\"\nfallback_models=[\"anthropic/claude-3-opus-20240229\"]\n

And also set the api key in the .secrets.toml file:

[anthropic]\nKEY = \"...\"\n

See litellm documentation for more information about the environment variables required for Anthropic.

"},{"location":"usage-guide/changing_a_model/#amazon-bedrock","title":"Amazon Bedrock","text":"

To use Amazon Bedrock and its foundational models, add the below configuration:

[config] # in configuration.toml\nmodel=\"bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0\"\nfallback_models=[\"bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0\"]\n\n[aws]\nAWS_ACCESS_KEY_ID=\"...\"\nAWS_SECRET_ACCESS_KEY=\"...\"\nAWS_REGION_NAME=\"...\"\n

You can also use the new Meta Llama 4 models available on Amazon Bedrock:

[config] # in configuration.toml\nmodel=\"bedrock/us.meta.llama4-scout-17b-instruct-v1:0\"\nfallback_models=[\"bedrock/us.meta.llama4-maverick-17b-instruct-v1:0\"]\n

See litellm documentation for more information about the environment variables required for Amazon Bedrock.

"},{"location":"usage-guide/changing_a_model/#deepseek","title":"DeepSeek","text":"

To use deepseek-chat model with DeepSeek, for example, set:

[config] # in configuration.toml\nmodel = \"deepseek/deepseek-chat\"\nfallback_models=[\"deepseek/deepseek-chat\"]\n

and fill in your key:

[deepseek] # in .secrets.toml\nkey = ...\n

(you can obtain a deepseek-chat key from here)

"},{"location":"usage-guide/changing_a_model/#deepinfra","title":"DeepInfra","text":"

To use DeepSeek model with DeepInfra, for example, set:

[config] # in configuration.toml\nmodel = \"deepinfra/deepseek-ai/DeepSeek-R1-Distill-Llama-70B\"\nfallback_models = [\"deepinfra/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B\"]\n[deepinfra] # in .secrets.toml\nkey = ... # your DeepInfra api key\n

(you can obtain a DeepInfra key from here)

"},{"location":"usage-guide/changing_a_model/#mistral","title":"Mistral","text":"

To use models like Mistral or Codestral with Mistral, for example, set:

[config] # in configuration.toml\nmodel = \"mistral/mistral-small-latest\"\nfallback_models = [\"mistral/mistral-medium-latest\"]\n[mistral] # in .secrets.toml\nkey = \"...\" # your Mistral api key\n

(you can obtain a Mistral key from here)

"},{"location":"usage-guide/changing_a_model/#codestral","title":"Codestral","text":"

To use Codestral model with Codestral, for example, set:

[config] # in configuration.toml\nmodel = \"codestral/codestral-latest\"\nfallback_models = [\"codestral/codestral-2405\"]\n[codestral] # in .secrets.toml\nkey = \"...\" # your Codestral api key\n

(you can obtain a Codestral key from here)

"},{"location":"usage-guide/changing_a_model/#openrouter","title":"Openrouter","text":"

To use a model from Openrouter, for example, set:

[config] # in configuration.toml \nmodel=\"openrouter/anthropic/claude-3.7-sonnet\"\nfallback_models=[\"openrouter/deepseek/deepseek-chat\"]\ncustom_model_max_tokens=20000\n\n[openrouter]  # in .secrets.toml or passed an environment variable openrouter__key\nkey = \"...\" # your openrouter api key\n

(you can obtain an Openrouter API key from here)

"},{"location":"usage-guide/changing_a_model/#custom-models","title":"Custom models","text":"

If the relevant model doesn't appear here, you can still use it as a custom model:

  1. Set the model name in the configuration file:
[config]\nmodel=\"custom_model_name\"\nfallback_models=[\"custom_model_name\"]\n
  2. Set the maximal tokens for the model:
[config]\ncustom_model_max_tokens= ...\n
  3. Go to litellm documentation, find the model you want to use, and set the relevant environment variables.

  4. Most reasoning models do not support chat-style inputs (system and user messages) or temperature settings. To bypass chat templates and temperature controls, set config.custom_reasoning_model = true in your configuration file, as shown in the example below.
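
For example, the setting from step 4 would look like this in the configuration file:

[config]\ncustom_reasoning_model = true\n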

"},{"location":"usage-guide/changing_a_model/#dedicated-parameters","title":"Dedicated parameters","text":""},{"location":"usage-guide/changing_a_model/#openai-models","title":"OpenAI models","text":"
[config]\nreasoning_effort = \"medium\" # \"low\", \"medium\", \"high\"\n

With the OpenAI models that support reasoning effort (e.g., o4-mini), you can specify the reasoning effort via the config section. The default value is medium. You can change it to high or low based on your usage.

"},{"location":"usage-guide/changing_a_model/#anthropic-models","title":"Anthropic models","text":"
[config]\nenable_claude_extended_thinking = false # Set to true to enable extended thinking feature\nextended_thinking_budget_tokens = 2048\nextended_thinking_max_output_tokens = 4096\n
"},{"location":"usage-guide/configuration_options/","title":"Configuration File","text":"

The different tools and sub-tools used by Qodo Merge are adjustable via a Git configuration file. There are three main ways to set persistent configurations:

  1. Wiki configuration page \ud83d\udc8e
  2. Local configuration file
  3. Global configuration file \ud83d\udc8e

In terms of precedence, wiki configurations will override local configurations, and local configurations will override global configurations.

For a list of all possible configurations, see the configuration options page. In addition to general configuration options, each tool has its own configurations. For example, the review tool will use parameters from the pr_reviewer section in the configuration file.

Tip1: Edit only what you need

Keep your configuration file minimal and edit only the relevant values. Don't copy the entire set of configuration options, since that can lead to legacy problems when something changes.

Tip2: Show relevant configurations

If you set config.output_relevant_configurations to true, each tool will also output its relevant configurations in a collapsible section. This can be useful for debugging, or for getting to know the configurations better.
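
For example, in the configuration file:

[config]\noutput_relevant_configurations = true\n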

"},{"location":"usage-guide/configuration_options/#wiki-configuration-file","title":"Wiki configuration file \ud83d\udc8e","text":"

Platforms supported: GitHub, GitLab, Bitbucket

With Qodo Merge, you can set configurations by creating a page called .pr_agent.toml in the wiki of the repo. The advantage of this method is that it allows you to set configurations without needing to commit new content to the repo - just edit the wiki page and save.

Click here to see a short instructional video. We recommend surrounding the configuration content with triple-quotes (or ```toml), to allow better presentation when displayed in the wiki as markdown. An example content:

[pr_description]\ngenerate_ai_title=true\n

Qodo Merge will know to remove the surrounding quotes when reading the configuration content.

"},{"location":"usage-guide/configuration_options/#local-configuration-file","title":"Local configuration file","text":"

Platforms supported: GitHub, GitLab, Bitbucket, Azure DevOps

By uploading a local .pr_agent.toml file to the root of the repo's default branch, you can edit and customize any configuration parameter. Note that you need to upload or update .pr_agent.toml before using the PR Agent tools (either at PR creation or via manual trigger) for the configuration to take effect.

For example, if you set in .pr_agent.toml:

[pr_reviewer]\nextra_instructions=\"\"\"\\\n- instruction a\n- instruction b\n...\n\"\"\"\n

Then you can give a list of extra instructions to the review tool.

"},{"location":"usage-guide/configuration_options/#global-configuration-file","title":"Global configuration file \ud83d\udc8e","text":"

Platforms supported: GitHub, GitLab, Bitbucket

If you create a repo called pr-agent-settings in your organization, its configuration file .pr_agent.toml will be used as a global configuration file for any other repo that belongs to the same organization. Parameters from a local .pr_agent.toml file, in a specific repo, will override the global configuration parameters.

For example, in the GitHub organization Codium-ai:

"},{"location":"usage-guide/configuration_options/#bitbucket-organization-level-configuration-file","title":"Bitbucket Organization level configuration file \ud83d\udc8e","text":"

Relevant platforms: Bitbucket Data Center

In Bitbucket Data Center, there are two levels where you can define a global configuration file:

Create a repository named pr-agent-settings within a specific project. The configuration file in this repository will apply to all repositories under the same project.

Create a dedicated project to hold a global configuration file that affects all repositories across all projects in your organization.

Setting up organization-level global configuration:

  1. Create a new project with both the name and key: PR_AGENT_SETTINGS.
  2. Inside the PR_AGENT_SETTINGS project, create a repository named pr-agent-settings.
  3. In this repository, add a .pr_agent.toml configuration file\u2014structured similarly to the global configuration file described above.
  4. Optionally, you can add organizational-level global best practices.

Repositories across your entire Bitbucket organization will inherit the configuration from this file.

Note

If both organization-level and project-level global settings are defined, the project-level settings will take precedence over the organization-level configuration. Additionally, parameters from a repository\u2019s local .pr_agent.toml file will always override both global settings.

"},{"location":"usage-guide/enabling_a_wiki/","title":"Enabling a Wiki","text":"

Supported Git Platforms: GitHub, GitLab, Bitbucket

For optimal functionality of Qodo Merge, we recommend enabling a wiki for each repository where Qodo Merge is installed. The wiki serves several important purposes:

Key Wiki Features: \ud83d\udc8e

Setup Instructions (GitHub):

To enable a wiki for your repository:

  1. Navigate to your repository's main page on GitHub
  2. Select \"Settings\" from the top navigation bar
  3. Locate the \"Features\" section
  4. Enable the \"Wikis\" option by checking the corresponding box
  5. Return to your repository's main page
  6. Look for the newly added \"Wiki\" tab in the top navigation
  7. Initialize your wiki by clicking \"Create the first page\" and saving (this step is important - without creating an initial page, the wiki will not be fully functional)
"},{"location":"usage-guide/enabling_a_wiki/#why-wiki","title":"Why Wiki?","text":""},{"location":"usage-guide/introduction/","title":"Introduction","text":"

After installation, there are three basic ways to invoke Qodo Merge:

  1. Locally running a CLI command
  2. Online usage - by commenting on a PR
  3. Enabling Qodo Merge tools to run automatically when a new PR is opened

Specifically, CLI commands can be issued by invoking a pre-built docker image, or by invoking a locally cloned repo.

For online usage, you will need to set up either a GitHub App or a GitHub Action (GitHub), a GitLab webhook (GitLab), or a BitBucket App (BitBucket). These platforms also enable running specific Qodo Merge tools automatically when a new PR is opened, or on each push to a branch.

"},{"location":"usage-guide/mail_notifications/","title":"Managing Mail Notifications","text":"

Unfortunately, it is not possible in GitHub to disable mail notifications from a specific user. If you are subscribed to notifications for a repo with Qodo Merge, we recommend turning off notifications for PR comments, to avoid lengthy emails:

As an alternative, you can filter in your mail provider the notifications specifically from the Qodo Merge bot, see how.

Another option to reduce the mail overload, yet still receive notifications on Qodo Merge tools, is to disable the help collapsible section in Qodo Merge bot comments. This can be done by setting enable_help_text=false for the relevant tool in the configuration file. For example, to disable the help text for the pr_reviewer tool, set:

[pr_reviewer]\nenable_help_text = false\n
"},{"location":"usage-guide/qodo_merge_models/","title":"\ud83d\udc8e Qodo Merge Models","text":"

The default models used by Qodo Merge (June 2025) are a combination of Claude Sonnet 4 and Gemini 2.5 Pro.

"},{"location":"usage-guide/qodo_merge_models/#selecting-a-specific-model","title":"Selecting a Specific Model","text":"

Users can configure Qodo Merge to use only a specific model by editing the configuration file. The models supported by Qodo Merge are shown in the examples below.

To restrict Qodo Merge to using only o4-mini, add this setting:

[config]\nmodel=\"o4-mini\"\n

To restrict Qodo Merge to using only GPT-4.1, add this setting:

[config]\nmodel=\"gpt-4.1\"\n

To restrict Qodo Merge to using only gemini-2.5-pro, add this setting:

[config]\nmodel=\"gemini-2.5-pro\"\n

To restrict Qodo Merge to using only deepseek-r1 us-hosted, add this setting:

[config]\nmodel=\"deepseek/r1\"\n
"}]}