docs: improve /implement tool documentation and update news section
@@ -1,12 +1,12 @@
## Overview

-The `implement` tool automatically generates implementation code based on PR review suggestions.
-It combines LLM capabilities with PR review suggestions to help developers implement code changes quickly and with confidence.
+The `implement` tool converts human code review discussions and feedback into ready-to-commit code changes.
+It leverages LLM technology to transform PR comments and review suggestions into concrete implementation code, helping developers quickly turn feedback into working solutions.

## Usage Scenarios

-### 1. For Reviewers
+### For Reviewers

Reviewers can request code changes by: <br>
1. Selecting the code block to be modified. <br>
@@ -15,10 +15,10 @@ Reviewers can request code changes by: <br>
/implement <code-change-description>
```

-{width=512}
+{width=640}
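
As a concrete illustration of the reviewer flow above, a comment on a selected code block could look like the following; the change description is illustrative, not taken from the docs:

```
/implement add a null check for the user object before accessing its fields
```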

-### 2. For PR Authors
+### For PR Authors

PR authors can implement suggested changes by replying to a review comment using either: <br>
1. Add specific implementation details as described above
@@ -30,16 +30,16 @@ PR authors can implement suggested changes by replying to a review comment using
/implement
```

-{width=512}
+{width=640}
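
For the author flow above, the first option amounts to replying with the command plus a free-text description of the desired change; a hypothetical reply might be:

```
/implement also rename the helper function and update its docstring accordingly
```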

-### 3. For Referencing Comments
+### For Referencing Comments

You can reference and implement changes from any comment by:
```
/implement <link-to-review-comment>
```

-{width=512}
+{width=640}

Note that the implementation will occur within the review discussion thread.
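
A hypothetical invocation of this form is sketched below; the URL is only a placeholder in GitHub's review-comment link format, not a real comment:

```
/implement https://github.com/<owner>/<repo>/pull/<pr-number>#discussion_r<comment-id>
```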
@@ -40,7 +40,7 @@ E.g. to use a new model locally via Ollama, set in `.secrets.toml` or in a confi
model = "ollama/qwen2.5-coder:32b"
fallback_models=["ollama/qwen2.5-coder:32b"]
custom_model_max_tokens=128000 # set the maximal input tokens for the model
-duplicate_examples=true # will duplicate the examples in the prompt, to help the model to output structured output
+duplicate_examples=true # will duplicate the examples in the prompt, to help the model to generate structured output

[ollama]
api_base = "http://localhost:11434" # or whatever port you're running Ollama on
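
As a rough end-to-end sketch of how this configuration might be exercised locally (assuming Ollama is installed, the qwen2.5-coder:32b tag is available in the Ollama library, and using the generic `python -m pr_agent.cli` invocation pattern from the pr-agent README; the PR URL is a placeholder):

```
ollama pull qwen2.5-coder:32b                             # fetch the local code model referenced in the config above
python -m pr_agent.cli --pr_url <URL of your PR> review   # run a pr-agent command against a PR with that model
```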
@@ -48,11 +48,14 @@ api_base = "http://localhost:11434" # or whatever port you're running Ollama on

!!! note "Local models vs commercial models"
    Qodo Merge is compatible with almost any AI model, but analyzing complex code repositories and pull requests requires a model specifically optimized for code analysis.

    Commercial models such as GPT-4, Claude Sonnet, and Gemini have demonstrated robust capabilities in generating structured output for code analysis tasks with large input. In contrast, most open-source models currently available (as of January 2025) face challenges with these complex tasks.

    Based on our testing, local open-source models are suitable for experimentation and learning purposes, but they are not suitable for production-level code analysis tasks.

    Hence, for production workflows and real-world usage, we recommend using commercial models.

-### Hugging Face Inference Endpoints
+### Hugging Face

To use a new model with Hugging Face Inference Endpoints, for example, set:
```