## Overview

The `improve` tool scans the PR code changes, and automatically generates meaningful suggestions for improving the PR code.
The tool can be triggered automatically every time a new PR is opened, or it can be invoked manually by commenting on any PR:

```
/improve
```
Note that the `Apply this suggestion` checkbox, which interactively converts a suggestion into a committable code comment, is available only for PR-Agent Pro 💎 users.
## Example usage

### Manual triggering

Invoke the tool manually by commenting `/improve` on any PR. By default, the code suggestions are presented as a single comment:
To edit configurations related to the `improve` tool, use the following template:

```
/improve --pr_code_suggestions.some_config1=... --pr_code_suggestions.some_config2=...
```
For example, you can choose to present all the suggestions as committable code comments by running the following command:

```
/improve --pr_code_suggestions.commitable_code_suggestions=true
```
As can be seen, a single table comment has a significantly smaller PR footprint. We recommend this mode for most cases. Also note that collapsible comments are not supported in Bitbucket. Hence, the suggestions can only be presented in Bitbucket as code comments.
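If you prefer committable code comments as the default behavior, rather than passing the flag on every invocation, the same option can be set persistently. A minimal sketch, assuming a repo-level PR-Agent configuration file (e.g. `.pr_agent.toml`):

```toml
# Sketch: make committable code comments the default for this repo
# (assumes a repo-level configuration file such as .pr_agent.toml)
[pr_code_suggestions]
commitable_code_suggestions = true
```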
### Automatic triggering

To run the `improve` tool automatically when a PR is opened, define in a configuration file:

```toml
[github_app]
pr_commands = [
    "/improve",
    ...
]

[pr_code_suggestions]
num_code_suggestions_per_chunk = ...
...
```
- The `pr_commands` lists commands that will be executed automatically when a PR is opened.
- The `[pr_code_suggestions]` section contains the configurations for the `improve` tool you want to edit (if any).
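For illustration, a slightly fuller sketch of such a configuration, combining the automatic trigger with a couple of the `[pr_code_suggestions]` options documented below (the values shown are examples, not recommendations):

```toml
[github_app]
# Commands that run automatically when a PR is opened
pr_commands = [
    "/improve",
]

[pr_code_suggestions]
# Example values only; see the "Configuration options" tables below
num_code_suggestions_per_chunk = 5
suggestions_score_threshold = 4
persistent_comment = true
```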
## Assessing Impact 💎

Note that PR-Agent Pro tracks two types of implementations:
- Direct implementation - when the user directly applies the suggestion by clicking the `Apply` checkbox.
- Indirect implementation - when the user implements the suggestion in their IDE environment. In this case, PR-Agent will run dedicated logic after each commit to identify whether a suggestion was implemented, and will mark it as implemented.

In post-processing, PR-Agent counts the number of suggestions that were implemented, and provides general statistics and insights about the suggestions' impact on the PR process.
## Usage Tips

### Self-review

If you set in a configuration file:

```toml
[pr_code_suggestions]
demand_code_suggestions_self_review = true
```
The `improve` tool will add a checkbox below the suggestions, prompting the user to acknowledge that they have reviewed the suggestions.
You can set the content of the checkbox text via:

```toml
[pr_code_suggestions]
code_suggestions_self_review_text = "... (your text here) ..."
```
💎 In addition, by setting:

```toml
[pr_code_suggestions]
approve_pr_on_self_review = true
```

the tool can automatically approve the PR when the user checks the self-review checkbox.
!!! tip "Tip - demanding self-review from the PR author"
    If you set the number of required reviewers for a PR to 2, this effectively means that the PR author must click the self-review checkbox before the PR can be merged (in addition to a human reviewer).
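Putting the self-review options together, a minimal sketch of a configuration that demands self-review, customizes the checkbox text, and auto-approves once the box is checked (the text value is illustrative):

```toml
[pr_code_suggestions]
# Require the PR author to check a self-review box under the suggestions
demand_code_suggestions_self_review = true
# Illustrative checkbox text
code_suggestions_self_review_text = "I have reviewed the PR code suggestions"
# 💎 Auto-approve the PR once the self-review checkbox is checked
approve_pr_on_self_review = true
```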
## Extra instructions and best practices

### Extra instructions

You can use the `extra_instructions` configuration option to give the AI model additional instructions for the `improve` tool.
Be specific, clear, and concise in the instructions. With extra instructions, you are the prompter. Specify relevant aspects that you want the model to focus on.
Examples of possible instructions:

```toml
[pr_code_suggestions]
extra_instructions="""\
(1) Answer in Japanese
(2) Don't suggest adding try-except blocks
(3) Ignore changes in toml files
...
"""
```
Use triple quotes to write multi-line instructions. Use bullet points or numbers to make the instructions more readable.
### Best practices 💎

Another option to give additional guidance to the AI model is by creating a dedicated wiki page called `best_practices.md`.
This page can contain a list of best practices, coding standards, and guidelines that are specific to your repo/organization.
The AI model will use this page as a reference, and in case the PR code violates any of the guidelines, it will suggest improvements accordingly, with a dedicated label: `Organization best practice`.
An example of `best_practices.md` content can be found here (adapted from Google's pyguide).
This file is only an example. Since it is used as a prompt for an AI model, we want to emphasize the following:

- It should be written in a clear and concise manner
- If needed, it should give short relevant code snippets as examples
- Up to 800 lines are allowed
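As a rough, illustrative sketch only (not taken from the linked example), an entry in `best_practices.md` might look like:

```markdown
## Error handling
- Prefer specific exception types over bare `except:` clauses.

## Functions
- Keep functions short and focused; extract helpers when a function grows too long.
```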
Example results:
Note that while the `extra_instructions` are more related to the way the `improve` tool behaves, the `best_practices.md` file is a general guideline for the way code should be written in the repo.
Using a combination of both can help the AI model to provide relevant and tailored suggestions.
## Configuration options

!!! example "General options"

    | Parameter | Description |
    |-----------|-------------|
    | num_code_suggestions | Number of code suggestions provided by the 'improve' tool. Default is 4 for CLI, 0 for auto tools. |
    | extra_instructions | Optional extra instructions to the tool. For example: "focus on the changes in the file X. Ignore change in ...". |
    | rank_suggestions | If set to true, the tool will rank the suggestions, based on importance. Default is false. |
    | commitable_code_suggestions | If set to true, the tool will display the suggestions as committable code comments. Default is false. |
    | persistent_comment | If set to true, the improve comment will be persistent, meaning that every new improve request will edit the previous one. Default is false. |
    | self_reflect_on_suggestions | If set to true, the improve tool will calculate an importance score for each suggestion [1-10], and sort the suggestion labels group based on this score. Default is true. |
    | suggestions_score_threshold | Any suggestion with an importance score below this threshold will be removed. Default is 0. We highly recommend not setting this value above 7-8, since higher values may clip relevant suggestions that can be useful. |
    | apply_suggestions_checkbox | Enable the checkbox to create a committable suggestion. Default is true. |
    | enable_help_text | If set to true, the tool will display a help text in the comment. Default is true. |
    | enable_chat_text | If set to true, the tool will display a reference to the PR chat in the comment. Default is true. |
!!! example "params for 'extended' mode"

    | Parameter | Description |
    |-----------|-------------|
    | auto_extended_mode | Enable extended mode automatically (no need for the --extended option). Default is true. |
    | num_code_suggestions_per_chunk | Number of code suggestions provided by the 'improve' tool, per chunk. Default is 5. |
    | rank_extended_suggestions | If set to true, the tool will rank the suggestions, based on importance. Default is true. |
    | max_number_of_calls | Maximum number of chunks. Default is 5. |
    | final_clip_factor | Factor to remove suggestions with low confidence. Default is 0.9. |
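For reference, a sketch of how a few of these options might be combined in a `[pr_code_suggestions]` section (the values shown are illustrative, not recommended defaults):

```toml
[pr_code_suggestions]
# Keep a single, continuously updated improve comment
persistent_comment = true
# Drop suggestions the model scores below 5 (see the caution above about high thresholds)
suggestions_score_threshold = 5
# Extended mode: suggestions generated per chunk, and maximum number of chunks
num_code_suggestions_per_chunk = 4
max_number_of_calls = 3
```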
## A note on code suggestions quality

- AI models for code are getting better and better (Sonnet-3.5 and GPT-4), but they are not flawless. Not all the suggestions will be perfect, and a user should not accept all of them automatically. Critical reading and judgment are required.
- While mistakes of the AI are rare, they can happen, and a real benefit from the suggestions of the `improve` (and `review`) tool is to catch, with high probability, mistakes or bugs made by the PR author. So, it's a good practice to spend the needed ~30-60 seconds to review the suggestions, even if not all of them are always relevant.
- The hierarchical structure of the suggestions is designed to help the user quickly understand them, and to decide which ones are relevant and which are not:
    - Only if the `Category` header is relevant should the user move to the summarized suggestion description.
    - Only if the summarized suggestion description is relevant should the user click on the collapsible, to read the full suggestion description with a code preview example.
- In addition, we recommend using the `extra_instructions` field to guide the model to suggestions that are more relevant to the specific needs of the project.
- The interactive PR chat also provides an easy way to get more tailored suggestions and feedback from the AI model.