Merge remote-tracking branch 'origin/main' into tr/help_fixes

mrT23
2024-09-22 09:27:10 +03:00
4 changed files with 50 additions and 35 deletions

View File

@@ -86,18 +86,49 @@ code_suggestions_self_review_text = "... (your text here) ..."
![self_review_1](https://codium.ai/images/pr_agent/self_review_1.png){width=512}
💎 In addition, by setting:
!!! tip "Tip - demanding self-review from the PR author 💎"
By setting:
```
[pr_code_suggestions]
approve_pr_on_self_review = true
```
the tool can automatically approve the PR when the user checks the self-review checkbox.
the tool can automatically add an approval when the PR author clicks the self-review checkbox.
!!! tip "Tip - demanding self-review from the PR author"
If you set the number of required reviewers for a PR to 2, this effectively means that the PR author must click the self-review checkbox before the PR can be merged (in addition to a human reviewer).
- If you set the number of required reviewers for a PR to 2, this effectively means that the PR author must click the self-review checkbox before the PR can be merged (in addition to a human reviewer).
![self_review_2](https://codium.ai/images/pr_agent/self_review_2.png){width=512}
- If you keep the number of required reviewers for a PR at 1 and enable this configuration, this effectively means that the PR author can approve the PR by actively clicking the self-review checkbox.
To prevent unauthorized approvals, this configuration defaults to false and cannot be altered through online comments; enabling it requires a direct update to the configuration file and a commit to the repository. This ensures that using the feature requires a deliberate, documented decision by the repository owner.
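Taken together, a minimal sketch of a committed repository configuration that pairs the self-review checkbox text with automatic approval (it only combines the two settings quoted above; the checkbox text placeholder is kept as a placeholder):
```
[pr_code_suggestions]
code_suggestions_self_review_text = "... (your text here) ..."
approve_pr_on_self_review = true
```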
### How many code suggestions are generated?
PR-Agent uses a dynamic strategy to generate code suggestions based on the size of the pull request (PR). Here's how it works:
1) Chunking large PRs:
- PR-Agent divides large PRs into 'chunks'.
- Each chunk contains up to `pr_code_suggestions.max_context_tokens` tokens (default: 14,000).
2) Generating suggestions:
- For each chunk, PR-Agent generates up to `pr_code_suggestions.num_code_suggestions_per_chunk` suggestions (default: 4).
This approach has two main benefits:
- Scalability: The number of suggestions scales with the PR size, rather than being fixed.
- Quality: By processing smaller chunks, the AI can maintain higher quality suggestions, as larger contexts tend to decrease AI performance.
Note: Chunking is primarily relevant for large PRs. For most PRs (up to 500 lines of code), PR-Agent will be able to process the entire code in a single call.
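As a rough illustration of the arithmetic, using the default values quoted above (the 30,000-token PR size is purely hypothetical):
```
[pr_code_suggestions]
max_context_tokens = 14000            # tokens per chunk (default)
num_code_suggestions_per_chunk = 4    # suggestions per chunk (default)
# Hypothetical PR of ~30,000 tokens would be split into roughly
# ceil(30000 / 14000) = 3 chunks, so up to 3 * 4 = 12 suggestions in total.
# A small PR that fits in a single chunk yields at most 4 suggestions.
```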
### 'Extra instructions' and 'best practices'
#### Extra instructions
@@ -170,18 +201,10 @@ Using a combination of both can help the AI model to provide relevant and tailor
??? example "General options"
<table>
<tr>
<td><b>num_code_suggestions</b></td>
<td>Number of code suggestions provided by the 'improve' tool. Default is 4 for CLI, 0 for auto tools.</td>
</tr>
<tr>
<td><b>extra_instructions</b></td>
<td>Optional extra instructions to the tool. For example: "focus on the changes in the file X. Ignore changes in ...".</td>
</tr>
<tr>
<td><b>rank_suggestions</b></td>
<td>If set to true, the tool will rank the suggestions based on importance. Default is false.</td>
</tr>
<tr>
<td><b>commitable_code_suggestions</b></td>
<td>If set to true, the tool will display the suggestions as committable code comments. Default is false.</td>
@@ -212,29 +235,25 @@ Using a combination of both can help the AI model to provide relevant and tailor
</tr>
</table>
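For reference, a minimal sketch of how the general options above map onto the `[pr_code_suggestions]` section of the configuration file (the values are illustrative, not recommendations):
```
[pr_code_suggestions]
extra_instructions = "focus on the changes in the file X. Ignore changes in ..."
rank_suggestions = false
commitable_code_suggestions = false
```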
??? example "params for 'extended' mode"
??? example "Params for number of suggestions and AI calls"
<table>
<tr>
<td><b>auto_extended_mode</b></td>
<td>Enable extended mode automatically (no need for the --extended option). Default is true.</td>
<td>Enable chunking the PR code and running the tool on each chunk. Default is true.</td>
</tr>
<tr>
<td><b>num_code_suggestions_per_chunk</b></td>
<td>Number of code suggestions provided by the 'improve' tool, per chunk. Default is 5.</td>
<td>Number of code suggestions provided by the 'improve' tool, per chunk. Default is 4.</td>
</tr>
<tr>
<td><b>max_number_of_calls</b></td>
<td>Maximum number of chunks. Default is 3.</td>
</tr>
<tr>
<td><b>rank_extended_suggestions</b></td>
<td>If set to true, the tool will rank the suggestions based on importance. Default is true.</td>
</tr>
<tr>
<td><b>max_number_of_calls</b></td>
<td>Maximum number of chunks. Default is 5.</td>
</tr>
<tr>
<td><b>final_clip_factor</b></td>
<td>Factor to remove suggestions with low confidence. Default is 0.9.</td>
</tr>
</table>
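To see how these parameters interact, here is a rough sketch of the clipping arithmetic; the rounding follows the `int(0.5 + max_len * final_clip_factor)` expression visible in the code change further down, and the 12-suggestion count is hypothetical:
```
[pr_code_suggestions]
auto_extended_mode = true             # chunk the PR and run the tool per chunk
num_code_suggestions_per_chunk = 4    # suggestions per chunk (default)
max_number_of_calls = 3               # at most 3 chunks (default)
final_clip_factor = 0.9               # clip low-confidence suggestions (default)
# Hypothetical: 3 chunks * 4 suggestions = 12 candidates;
# clipping keeps int(0.5 + 12 * 0.9) = 11 of them.
```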
## A note on code suggestions quality

View File

@@ -175,10 +175,10 @@ By uploading a local `.pr_agent.toml` file to the root of the repo's main branch
For example, if your local `.pr_agent.toml` file contains:
```
[pr_reviewer]
inline_code_comments = true
extra_instructions = "Answer in japanese"
```
Each time you invoke a `/review` tool, it will use inline code comments.
Each time you invoke a `/review` tool, it will use the extra instructions you set in the local configuration file.
Note that among other limitations, BitBucket provides relatively low rate-limits for applications (up to 1000 requests per hour), and does not provide an API to track the actual rate-limit usage.

View File

@@ -108,7 +108,6 @@ enable_help_text=false
[pr_code_suggestions] # /improve #
max_context_tokens=14000
num_code_suggestions=4
commitable_code_suggestions = false
extra_instructions = ""
rank_suggestions = false

View File

@@ -44,10 +44,8 @@ class PRCodeSuggestions:
            self.is_extended = self._get_is_extended(args or [])
        except:
            self.is_extended = False
        if self.is_extended:
            num_code_suggestions = get_settings().pr_code_suggestions.num_code_suggestions_per_chunk
        else:
            num_code_suggestions = get_settings().pr_code_suggestions.num_code_suggestions
        self.ai_handler = ai_handler()
        self.ai_handler.main_pr_language = self.main_language
@@ -601,7 +599,6 @@ class PRCodeSuggestions:
        if get_settings().pr_code_suggestions.final_clip_factor != 1:
            max_len = max(
                len(data_sorted),
                get_settings().pr_code_suggestions.num_code_suggestions,
                get_settings().pr_code_suggestions.num_code_suggestions_per_chunk,
            )
            new_len = int(0.5 + max_len * get_settings().pr_code_suggestions.final_clip_factor)