readme updates
Usage.md
```diff
@@ -248,9 +248,9 @@ This mode provide a very good speed-quality-cost tradeoff, and can handle most PRs
 
 When the PR is above the token limit, it employs a [PR Compression strategy](./PR_COMPRESSION.md).
 
 However, for very large PRs, or in case you want to emphasize quality over speed and cost, there are 2 possible solutions:
-1) [use a model](#changing-a-model) with larger context, like GPT-32K, or claude-100K. This solution will be applicable for all the tools
+1) [Use a model](#changing-a-model) with larger context, like GPT-32K, or claude-100K. This solution will be applicable for all the tools.
 2) For the `/improve` tool, there is an ['extended' mode](./docs/IMPROVE.md) (`/improve --extended`),
-which divides the PR to chunks, and process each chunk separately, so regardless of the model, no compression will be done (but for large PRs, multiple calls may occur)
+which divides the PR to chunks, and process each chunk separately. With this mode, regardless of the model, no compression will be done (but for large PRs, multiple model calls may occur)
 
 ### Appendix - additional configurations walkthrough
```
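For intuition, here is a minimal, hypothetical sketch of the chunking idea the updated text describes for `/improve --extended`: the PR's per-file diffs are greedily packed into chunks that each fit under a token budget, and the model is queried once per chunk instead of compressing the PR. The function names, the 4-characters-per-token estimate, and the 6000-token budget are illustrative assumptions, not pr-agent's actual implementation.

```python
# Illustrative sketch only -- not pr-agent's real code. It shows how a
# large PR could be split into context-sized chunks for separate calls.

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer (e.g. tiktoken): ~4 chars/token.
    return max(1, len(text) // 4)

def split_diff_into_chunks(file_diffs: list[str], max_tokens: int = 6000) -> list[str]:
    """Greedily pack per-file diffs into chunks under the token budget."""
    chunks: list[str] = []
    current: list[str] = []
    current_tokens = 0
    for diff in file_diffs:
        tokens = count_tokens(diff)
        # Start a new chunk when adding this file would exceed the budget.
        if current and current_tokens + tokens > max_tokens:
            chunks.append("\n".join(current))
            current, current_tokens = [], 0
        current.append(diff)
        current_tokens += tokens
    if current:
        chunks.append("\n".join(current))
    return chunks

# Each chunk is then handled as its own model request, so no chunk needs
# compression, but a large PR triggers several calls.
```

Note one simplification in this sketch: a single file diff larger than the budget still becomes its own oversized chunk; a real implementation would also need to split within files.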