Merge pull request #1771 from qodo-ai/tr/new_benchmark

Tr/new benchmark
Tal
2025-05-13 11:18:34 +03:00
committed by GitHub
5 changed files with 192 additions and 106 deletions

View File

@ -3,7 +3,6 @@
Qodo Merge utilizes a variety of core abilities to provide a comprehensive and efficient code review experience. These abilities include:
- [Auto best practices](https://qodo-merge-docs.qodo.ai/core-abilities/auto_best_practices/)
- [Pull request benchmark](https://qodo-merge-docs.qodo.ai/finetuning_benchmark/)
- [Code validation](https://qodo-merge-docs.qodo.ai/core-abilities/code_validation/)
- [Compression strategy](https://qodo-merge-docs.qodo.ai/core-abilities/compression_strategy/)
- [Dynamic context](https://qodo-merge-docs.qodo.ai/core-abilities/dynamic_context/)

View File

@ -1,102 +0,0 @@
# Qodo Merge Pull Request Benchmark
On coding tasks, the gap between open-source models and top closed-source models such as Claude and GPT is significant.
<br>
In practice, open-source models are unsuitable for most real-world code tasks, and require further fine-tuning to produce acceptable results.
_Qodo Merge pull request benchmark_ aims to measure how well models can be fine-tuned for a coding task.
Specifically, we chose to fine-tune open-source models on the task of analyzing a pull request and providing useful feedback and code suggestions.
Here are the results:
<br>
<br>
**Model performance:**
| Model name | Model size [B] | Better than gpt-4 rate, after fine-tuning [%] |
|-----------------------------|----------------|----------------------------------------------|
| **DeepSeek 34B-instruct** | **34** | **40.7** |
| DeepSeek 34B-base | 34 | 38.2 |
| Phind-34b | 34 | 38 |
| Granite-34B | 34 | 37.6 |
| Codestral-22B-v0.1 | 22 | 32.7 |
| QWEN-1.5-32B | 32 | 29 |
| | | |
| **CodeQwen1.5-7B** | **7** | **35.4** |
| Llama-3.1-8B-Instruct | 8 | 35.2 |
| Granite-8b-code-instruct | 8 | 34.2 |
| CodeLlama-7b-hf | 7 | 31.8 |
| Gemma-7B | 7 | 27.2 |
| DeepSeek coder-7b-instruct | 7 | 26.8 |
| Llama-3-8B-Instruct | 8 | 26.8 |
| Mistral-7B-v0.1 | 7 | 16.1 |
<br>
**Fine-tuning impact:**
| Model name | Model size [B] | Fine-tuned | Better than gpt-4 rate [%] |
|---------------------------|----------------|------------|----------------------------|
| DeepSeek 34B-instruct | 34 | yes | 40.7 |
| DeepSeek 34B-instruct | 34 | no | 3.6 |
## Results analysis
- **Fine-tuning is a must** - without fine-tuning, open-source models provide poor results on most real-world code tasks, which involve complicated prompts and lengthy context. We clearly see that without fine-tuning, the DeepSeek model was inferior to GPT-4 96.4% of the time, while after fine-tuning it is better 40.7% of the time.
- **Always start from a code-dedicated model** - when fine-tuning, start from a code-dedicated model rather than a general-purpose one; the gaps in downstream results are substantial.
- **Don't believe the hype** - newer models, or models from big-tech companies (Llama3, Gemma, Mistral), are not always better for fine-tuning.
- **The best large model** - for large 34B code-dedicated models, the gaps after proper fine-tuning are small. The current top model is **DeepSeek 34B-instruct**.
- **The best small model** - for small 7B code-dedicated models, the gaps when fine-tuning are much larger. **CodeQwen1.5-7B** is by far the best model for fine-tuning.
- **Base vs. instruct** - for the top model (DeepSeek), we saw a small advantage when starting from the instruct version. However, we recommend testing both versions on each specific task, as the base model is generally considered more suitable for fine-tuning.
## Dataset
### Training dataset
Our training dataset comprises 25,000 pull requests, aggregated from permissive-license repositories. For each pull request, we generated responses for the three main tools of Qodo Merge:
[Describe](https://qodo-merge-docs.qodo.ai/tools/describe/), [Review](https://qodo-merge-docs.qodo.ai/tools/review/) and [Improve](https://qodo-merge-docs.qodo.ai/tools/improve/).
We then applied various automatic and manual cleaning techniques to the raw data to ensure the outputs were of high quality and suitable for instruction tuning.
Here are the prompts and example outputs used as input-output pairs to fine-tune the models (a schematic example of such a pair follows the table):
| Tool | Prompt | Example output |
|----------|------------------------------------------------------------------------------------------------------------|----------------|
| Describe | [link](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/pr_description_prompts.toml) | [link](https://github.com/Codium-ai/pr-agent/pull/910#issue-2303989601) |
| Review | [link](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/pr_reviewer_prompts.toml) | [link](https://github.com/Codium-ai/pr-agent/pull/910#issuecomment-2118761219) |
| Improve | [link](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/pr_code_suggestions_prompts.toml) | [link](https://github.com/Codium-ai/pr-agent/pull/910#issuecomment-2118761309) |
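As a rough illustration of how these pairs feed the fine-tuning step, here is a minimal sketch of one possible JSONL record layout; the `build_example` helper and the field names are illustrative assumptions, not the actual pipeline:

```python
import json

def build_example(tool_prompt: str, pr_diff: str, tool_output: str) -> dict:
    """Assemble one instruct-tuning pair: the tool prompt plus the PR diff form
    the input, and the curated tool response is the target output."""
    return {
        "instruction": tool_prompt,   # e.g. the describe/review/improve prompt
        "input": pr_diff,             # the pull request diff and metadata
        "output": tool_output,        # the cleaned response used as the label
    }

# Write the dataset as JSONL, one example per line (illustrative only).
examples = [build_example("<improve prompt>", "<pr diff>", "<curated suggestions>")]
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```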
### Evaluation dataset
- For each tool, we aggregated 200 additional examples to be used for evaluation. These examples were not used in the training dataset, and were manually selected to represent diverse real-world use-cases.
- For each test example, we generated two responses: one from the fine-tuned model, and one from the best code model in the world, `gpt-4-turbo-2024-04-09`.
- We used a third LLM to judge which response better answers the prompt and would likely be perceived by a human as the better response.
<br>
We experimented with three models as judges: `gpt-4-turbo-2024-04-09`, `gpt-4o`, and `claude-3-opus-20240229`. All three produced similar results, with the same ranking order, which strengthens the validity of our testing protocol.
The evaluation prompt can be found [here](https://github.com/Codium-ai/pr-agent/blob/main/pr_agent/settings/pr_evaluate_prompt_response.toml).
Here is an example of judge-model feedback (a minimal sketch of the judging loop follows the example):
```
command: improve
model1_score: 9,
model2_score: 6,
why: |
Response 1 is better because it provides more actionable and specific suggestions that directly
enhance the code's maintainability, performance, and best practices. For example, it suggests
using a variable for reusable widget instances and using named routes for navigation, which
are practical improvements. In contrast, Response 2 focuses more on general advice and less
actionable suggestions, such as changing variable names and adding comments, which are less
critical for immediate code improvement."
```
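To make the judging loop concrete, here is a minimal sketch of the evaluation flow described above. The helper functions are hypothetical stand-ins (they are not part of pr-agent); a real run would call the models with the tool and evaluation prompts linked above:

```python
def generate_suggestions(model_name: str, pr_prompt: str) -> str:
    """Call `model_name` with one of the tool prompts (describe/review/improve)."""
    raise NotImplementedError

def ask_judge(judge_model: str, pr_prompt: str, response_1: str, response_2: str) -> int:
    """Ask the judge model which response is better; return 1 or 2."""
    raise NotImplementedError

def better_than_gpt4_rate(fine_tuned_model: str,
                          test_prompts: list[str],
                          judge_model: str = "gpt-4-turbo-2024-04-09") -> float:
    """Fraction of test examples where the judge prefers the fine-tuned model
    over the reference model, gpt-4-turbo-2024-04-09."""
    wins = 0
    for prompt in test_prompts:
        candidate = generate_suggestions(fine_tuned_model, prompt)
        reference = generate_suggestions("gpt-4-turbo-2024-04-09", prompt)
        wins += ask_judge(judge_model, prompt, candidate, reference) == 1
    return wins / len(test_prompts)
```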
## Comparing Top Closed-Source Models
Another application of the Pull Request Benchmark is comparing leading closed-source models to determine which performs better at analyzing pull request code.
The evaluation methodology resembles the approach used for evaluating fine-tuned models:
- We ran each model on 200 diverse pull requests, asking it to generate code suggestions using Qodo Merge's `improve` tool
- A third top model served as the judge to determine which response better fulfilled the prompt and would likely be perceived as superior by human users

View File

@ -0,0 +1,188 @@
# Qodo Merge Pull Request Benchmark
## Methodology
Qodo Merge PR Benchmark evaluates and compares the performance of two Large Language Models (LLMs) in analyzing pull request code and providing meaningful code suggestions.
Our diverse dataset comprises 400 pull requests from over 100 repositories, spanning various programming languages and frameworks to reflect real-world scenarios.
- For each pull request, two distinct LLMs process the same prompt using the Qodo Merge `improve` tool, producing two sets of responses. The prompt for response generation can be found [here](https://github.com/qodo-ai/pr-agent/blob/main/pr_agent/settings/code_suggestions/pr_code_suggestions_prompts_not_decoupled.toml).
- Subsequently, a high-performing third model (an AI judge) evaluates the responses from the initial two models to determine the superior one. We utilize OpenAI's `o3` model as the judge, though other models have yielded consistent results. The prompt for this comparative judgment is available [here](https://github.com/Codium-ai/pr-agent-settings/tree/main/benchmark).
- We aggregate comparison outcomes across all the pull requests, calculating the win rate for each model. We also analyze the qualitative feedback (the "why" explanations from the judge) to identify each model's comparative strengths and weaknesses.
This approach provides not just a quantitative score but also a detailed analysis of each model's strengths and weaknesses.
- The final output is a "Model Card", comparing the evaluated model against others. To ensure full transparency and enable community scrutiny, we also share the raw code suggestions generated by each model, and the judge's specific feedback.
Note that this benchmark focuses on quality: the ability of an LLM to process a complex pull request with multiple files and a nuanced task, and to produce high-quality code suggestions.
Other factors like speed, cost, and availability, while also relevant for model selection, are outside this benchmark's scope.
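To make the aggregation step concrete, here is a minimal sketch of how the pairwise judge verdicts could be rolled up into per-model win rates; the record layout and function names are illustrative assumptions, not the benchmark's actual harness:

```python
from collections import defaultdict

# Hypothetical judge verdicts: one record per pull request and model pair,
# where "winner" is the model the judge (e.g. o3) preferred for that PR.
verdicts = [
    {"model_a": "Gemini-2.5-pro-preview-05-06", "model_b": "GPT-4.1", "winner": "GPT-4.1"},
    {"model_a": "Gemini-2.5-pro-preview-05-06", "model_b": "GPT-4.1", "winner": "Gemini-2.5-pro-preview-05-06"},
    # ... one entry per benchmarked pull request
]

def win_rates(verdicts: list[dict]) -> dict:
    """Aggregate pairwise judge outcomes into a win rate per (pair, model)."""
    totals = defaultdict(int)  # (model_a, model_b) -> number of comparisons
    wins = defaultdict(int)    # ((model_a, model_b), model) -> wins for that model
    for v in verdicts:
        pair = tuple(sorted((v["model_a"], v["model_b"])))
        totals[pair] += 1
        wins[(pair, v["winner"])] += 1
    return {(pair, model): wins[(pair, model)] / totals[pair]
            for pair in totals for model in pair}

for (pair, model), rate in sorted(win_rates(verdicts).items()):
    opponent = pair[0] if model == pair[1] else pair[1]
    print(f"{model} win rate vs {opponent}: {rate:.1%}")
```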
## TL;DR
Here's a summary of the win rates based on the benchmark:
[//]: # (| Model A | Model B | Model A Win Rate | Model B Win Rate |)
[//]: # (|:-------------------------------|:-------------------------------|:----------------:|:----------------:|)
[//]: # (| Gemini-2.5-pro-preview-05-06 | GPT-4.1 | 70.4% | 29.6% |)
[//]: # (| Gemini-2.5-pro-preview-05-06 | Sonnet 3.7 | 78.1% | 21.9% |)
[//]: # (| GPT-4.1 | Sonnet 3.7 | 61.0% | 39.0% |)
<table>
  <thead>
    <tr>
      <th style="text-align:left;">Model A</th>
      <th style="text-align:left;">Model B</th>
      <th style="text-align:center;">Model A Win Rate</th>
      <th style="text-align:center;">Model B Win Rate</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align:left;">Gemini-2.5-pro-preview-05-06</td>
      <td style="text-align:left;">GPT-4.1</td>
      <td style="text-align:center; color: #1E8449;"><b>70.4%</b></td>
      <td style="text-align:center; color: #D8000C;"><b>29.6%</b></td>
    </tr>
    <tr>
      <td style="text-align:left;">Gemini-2.5-pro-preview-05-06</td>
      <td style="text-align:left;">Sonnet 3.7</td>
      <td style="text-align:center; color: #1E8449;"><b>78.1%</b></td>
      <td style="text-align:center; color: #D8000C;"><b>21.9%</b></td>
    </tr>
    <tr>
      <td style="text-align:left;">GPT-4.1</td>
      <td style="text-align:left;">Sonnet 3.7</td>
      <td style="text-align:center; color: #1E8449;"><b>61.0%</b></td>
      <td style="text-align:center; color: #D8000C;"><b>39.0%</b></td>
    </tr>
  </tbody>
</table>
## Gemini-2.5-pro-preview-05-06 - Model Card
### Comparison against GPT-4.1
![Comparison](https://codium.ai/images/qodo_merge_benchmark/gpt-4.1_vs_gemini-2.5-pro-preview-05-06_judge_o3.png){width=768}
#### Analysis Summary
Model 'Gemini-2.5-pro-preview-05-06' is generally more useful thanks to wider and more accurate bug detection and concrete patches, but it sacrifices compliance discipline and sometimes oversteps the task rules. Model 'GPT-4.1' is safer and highly rule-abiding, yet often too timid—missing many genuine issues and providing limited insight. An ideal reviewer would combine 'GPT-4.1' restraint with 'Gemini-2.5-pro-preview-05-06' thoroughness.
#### Detailed Analysis
Gemini-2.5-pro-preview-05-06 strengths:
- better_bug_coverage: Detects and explains more critical issues, winning in ~70% of comparisons and achieving a higher average score.
- actionable_fixes: Supplies clear code snippets, correct language labels, and often multiple coherent suggestions per diff.
- deeper_reasoning: Shows stronger grasp of logic, edge cases, and cross-file implications, leading to broader, high-impact reviews.
Gemini-2.5-pro-preview-05-06 weaknesses:
- guideline_violations: More prone to over-eager advice—non-critical tweaks, touching unchanged code, suggesting new imports, or minor format errors.
- occasional_overreach: Some fixes are speculative or risky, potentially introducing new bugs.
- redundant_or_duplicate: At times repeats the same point or exceeds the required brevity.
### Comparison against Sonnet 3.7
![Comparison](https://codium.ai/images/qodo_merge_benchmark/sonnet_37_vs_gemini-2.5-pro-preview-05-06_judge_o3.png){width=768}
#### Analysis Summary
Model 'Gemini-2.5-pro-preview-05-06' is the stronger reviewer—more frequently identifies genuine, high-impact bugs and provides well-formed, actionable fixes. Model 'Sonnet 3.7' is safer against false positives and tends to be concise but often misses important defects or offers low-value or incorrect suggestions.
See raw results [here](https://github.com/Codium-ai/pr-agent-settings/blob/main/benchmark/sonnet_37_vs_gemini-2.5-pro-preview-05-06.md)
#### Detailed Analysis
Gemini-2.5-pro-preview-05-06 strengths:
- higher_accuracy_and_coverage: finds real critical bugs and supplies actionable patches in most examples (better in 78% of cases).
- guideline_awareness: usually respects new-lines-only scope, ≤3 suggestions, proper YAML, and stays silent when no issues exist.
- detailed_reasoning_and_patches: explanations tie directly to the diff and fixes are concrete, often catching multiple related defects that 'Sonnet 3.7' overlooks.
Gemini-2.5-pro-preview-05-06 weaknesses:
- occasional_rule_violations: sometimes proposes new imports, package-version changes, or edits outside the added lines.
- overzealous_suggestions: may add speculative or stylistic fixes that exceed the “critical” scope, or mis-label severity.
- sporadic_technical_slips: a few patches contain minor coding errors, oversized snippets, or duplicate/contradicting advice.
## GPT-4.1 - Model Card
### Comparison against Sonnet 3.7
![Comparison](https://codium.ai/images/qodo_merge_benchmark/gpt-4.1_vs_sonnet_3.7_judge_o3.png){width=768}
#### Analysis Summary
Model 'GPT-4.1' is safer and more compliant, preferring silence over speculation, which yields fewer rule breaches and false positives but misses some real bugs.
Model 'Sonnet 3.7' is more adventurous and often uncovers important issues that 'GPT-4.1' ignores, yet its aggressive style leads to frequent guideline violations and a higher proportion of incorrect or non-critical advice.
See raw results [here](https://github.com/Codium-ai/pr-agent-settings/blob/main/benchmark/gpt-4.1_vs_sonnet_3.7_judge_o3.md)
#### Detailed Analysis
GPT-4.1 strengths:
- Strong guideline adherence: usually stays strictly on `+` lines, avoids non-critical or stylistic advice, and rarely suggests forbidden imports; often outputs an empty list when no real bug exists.
- Lower false-positive rate: suggestions are more accurate and seldom introduce new bugs; fixes compile more reliably.
- Good schema discipline: YAML is almost always well-formed and fields are populated correctly.
GPT-4.1 weaknesses:
- Misses bugs: often returns an empty list even when a clear critical issue is present, so coverage is narrower.
- Sparse feedback: when it does comment, it tends to give fewer suggestions and sometimes lacks depth or completeness.
- Occasional metadata/slip-ups (wrong language tags, overly broad code spans), though less harmful than Sonnet 3.7 errors.
### Comparison against Gemini-2.5-pro-preview-05-06
![Comparison](https://codium.ai/images/qodo_merge_benchmark/gpt-4.1_vs_gemini-2.5-pro-preview-05-06_judge_o3.png){width=768}
#### Analysis Summary
Model 'Gemini-2.5-pro-preview-05-06' is generally more useful thanks to wider and more accurate bug detection and concrete patches, but it sacrifices compliance discipline and sometimes oversteps the task rules. Model 'GPT-4.1' is safer and highly rule-abiding, yet often too timid—missing many genuine issues and providing limited insight. An ideal reviewer would combine 'GPT-4.1' restraint with 'Gemini-2.5-pro-preview-05-06' thoroughness.
#### Detailed Analysis
GPT-4.1 strengths:
- strict_compliance: Usually sticks to the “critical bugs only / new + lines only” rule, so outputs rarely violate task constraints.
- low_risk: Conservative behaviour avoids harmful or speculative fixes; safer when no obvious issue exists.
- concise_formatting: Tends to produce minimal, correctly-structured YAML without extra noise.
GPT-4.1 weaknesses:
- under_detection: Frequently returns an empty list even when real bugs are present, missing ~70% of the time.
- shallow_analysis: When it does suggest fixes, coverage is narrow and technical depth is limited, sometimes with wrong language tags or minor format slips.
- occasional_inaccuracy: A few suggestions are unfounded or duplicate, and rare guideline breaches (e.g., import advice) still occur.
## Sonnet 3.7 - Model Card
### Comparison against GPT-4.1
![Comparison](https://codium.ai/images/qodo_merge_benchmark/gpt-4.1_vs_sonnet_3.7_judge_o3.png){width=768}
#### Analysis Summary
Model 'GPT-4.1' is safer and more compliant, preferring silence over speculation, which yields fewer rule breaches and false positives but misses some real bugs.
Model 'Sonnet 3.7' is more adventurous and often uncovers important issues that 'GPT-4.1' ignores, yet its aggressive style leads to frequent guideline violations and a higher proportion of incorrect or non-critical advice.
See raw results [here](https://github.com/Codium-ai/pr-agent-settings/blob/main/benchmark/gpt-4.1_vs_sonnet_3.7_judge_o3.md)
#### Detailed Analysis
'Sonnet 3.7' strengths:
- Better bug discovery breadth: more willing to dive into logic and spot critical problems that 'GPT-4.1' overlooks; often supplies multiple, detailed fixes.
- Richer explanations & patches: gives fuller context and, when correct, proposes more functional or user-friendly solutions.
- Generally correct language/context tagging and targeted code snippets.
'Sonnet 3.7' weaknesses:
- Guideline violations: frequently flags non-critical issues, edits untouched code, or recommends adding imports, breaching task rules.
- Higher error rate: suggestions are more speculative and sometimes introduce new defects or duplicate work already done.
- Occasional schema or formatting mistakes (missing list value, duplicated suggestions), reducing reliability.
### Comparison against Gemini-2.5-pro-preview-05-06
![Comparison](https://codium.ai/images/qodo_merge_benchmark/sonnet_37_vs_gemini-2.5-pro-preview-05-06_judge_o3.png){width=768}
#### Analysis Summary
Model 'Gemini-2.5-pro-preview-05-06' is the stronger reviewer—more frequently identifies genuine, high-impact bugs and provides well-formed, actionable fixes. Model 'Sonnet 3.7' is safer against false positives and tends to be concise but often misses important defects or offers low-value or incorrect suggestions.
See raw results [here](https://github.com/Codium-ai/pr-agent-settings/blob/main/benchmark/sonnet_37_vs_gemini-2.5-pro-preview-05-06.md)

View File

@ -22,4 +22,5 @@ It includes information on how to adjust Qodo Merge configurations, define which
- [Working with large PRs](./additional_configurations.md#working-with-large-prs)
- [Changing a model](./additional_configurations.md#changing-a-model)
- [Patch Extra Lines](./additional_configurations.md#patch-extra-lines)
- [FAQ](https://qodo-merge-docs.qodo.ai/faq/)
- [Qodo Merge Models](./qodo_merge_models)

View File

@ -20,6 +20,7 @@ nav:
- Managing Mail Notifications: 'usage-guide/mail_notifications.md'
- Changing a Model: 'usage-guide/changing_a_model.md'
- Additional Configurations: 'usage-guide/additional_configurations.md'
- Frequently Asked Questions: 'faq/index.md'
- 💎 Qodo Merge Models: 'usage-guide/qodo_merge_models.md'
- Tools:
- 'tools/index.md'
@ -43,7 +44,6 @@ nav:
- Core Abilities:
- 'core-abilities/index.md'
- Auto best practices: 'core-abilities/auto_best_practices.md'
- Pull request benchmark: 'finetuning_benchmark/index.md'
- Code validation: 'core-abilities/code_validation.md'
- Compression strategy: 'core-abilities/compression_strategy.md'
- Dynamic context: 'core-abilities/dynamic_context.md'
@ -59,8 +59,8 @@ nav:
- Features: 'chrome-extension/features.md'
- Data Privacy: 'chrome-extension/data_privacy.md'
- Options: 'chrome-extension/options.md'
- FAQ:
- FAQ: 'faq/index.md'
- PR Benchmark:
- PR Benchmark: 'pr_benchmark/index.md'
- Recent Updates:
- Recent Updates: 'recent_updates/index.md'
- AI Docs Search: 'ai_search/index.md'