7a6a28d2b9
feat: add openrouter support in litellm
2025-05-07 11:54:07 +07:00
f505c7ad3c
Add multi-model support for different reasoning tasks
2025-04-27 11:00:34 +03:00
c951fc9a87
Improve dynamic context handling with partial line matching and adjust model configuration
2025-04-27 10:46:23 +03:00
3f194e6730
Improve dynamic context handling in git patch processing
2025-04-27 10:07:56 +03:00
f53bd524c5
Support multiple model types for different reasoning tasks
2025-04-27 08:50:03 +03:00
4ac0aa56e5
Update model references from o3-mini to o4-mini and add Gemini models
2025-04-19 09:26:35 +03:00
869a179506
feat: add support for Mistral and Codestral models
2025-04-18 14:04:59 +09:00
4e3e963ce5
Add OpenAI o3 & o4-mini reasoning models
...
Reference:
- https://platform.openai.com/docs/models/o3
- https://platform.openai.com/docs/models/o4-mini
- https://openai.com/index/introducing-o3-and-o4-mini/
2025-04-17 02:32:14 +08:00
27a7c1a94f
doc update and minor fix
2025-04-16 13:32:53 +05:30
dc46acb762
doc update and minor fix
2025-04-16 13:27:52 +05:30
0da667d179
Support Azure AD authentication for OpenAI services in the litellm implementation
2025-04-16 11:19:04 +05:30
08bf9593b2
Fix tokenizer fallback to use o200k_base instead of cl100k_base
2025-04-14 21:15:19 +03:00
57808075be
Add support for the OpenAI GPT-4.1 model family
...
Reference:
- https://openai.com/index/gpt-4-1/
- https://platform.openai.com/docs/models/gpt-4.1
2025-04-15 01:57:46 +08:00
60ace1ed09
Merge pull request #1685 from imperorrp/add_gemini2.5preview
...
Add support for the Gemini 2.5 Pro preview model
2025-04-11 09:54:09 +03:00
7f6014e064
Merge pull request #1684 from PeterDaveHelloKitchen/Support-xAI-Grok
...
Add support for xAI and their Grok-2 & Grok-3 models
2025-04-11 09:53:08 +03:00
0ac7028bc6
Support xAI Grok-3 series models
...
Reference:
- https://docs.x.ai/docs/release-notes#april-2025
2025-04-11 00:40:00 +08:00
eb9c4fa110
add gemini 2.5 pro preview model token limit
2025-04-08 20:41:59 +05:30
83bb3b25d8
Add support for Meta's Llama 4 Scout and Maverick 17B from Groq Cloud
...
Reference:
- https://ai.meta.com/blog/llama-4-multimodal-intelligence/
- https://console.groq.com/docs/models#preview-models
- https://groq.com/llama-4-now-live-on-groq-build-fast-at-the-lowest-cost-without-compromise/
2025-04-08 01:47:15 +08:00
665fb90a98
Add support for xAI and their Grok-2 model
...
Close #1630
2025-04-08 01:36:21 +08:00
9b19fcdc90
Add support for the OpenAI GPT-4.5 Preview model
...
Reference:
- https://openai.com/index/introducing-gpt-4-5/
- https://platform.openai.com/docs/models/gpt-4.5-preview
2025-04-04 05:13:15 +08:00
14971c4f5f
Add support for documentation content exceeding token limits (#1670)
...
* Add support for documentation content exceeding token limits via a two-phase operation:
1. Ask the LLM to rank the headings most likely to contain an answer to the user's question.
2. Provide the corresponding files for the LLM to search for an answer.
- Refactor help_docs to make the code more readable.
- When resolving the canonical path, git providers now use the default branch rather than the PR's source branch.
- Refactor token counting and clarify when an estimate factor will be used.
* Code review changes:
1. Correctly handle exceptions during retry_with_fallback_models (so the fallback model can run in case of failure).
2. Better naming for default_branch in the Bitbucket Cloud provider.
2025-04-03 11:51:26 +03:00
8495e4d549
More comprehensive handling in count_tokens(force_accurate==True): if the model is neither OpenAI nor Anthropic Claude, apply an elbow-room factor to force a more conservative estimate.
2025-03-24 15:47:35 +02:00
dd80276f3f
Support cloning repo
...
Support forcing accurate token calculation (Claude)
Help docs: add a desired branch for user-supplied git repos, with the default set to "main"
Better documentation for getting canonical URL parts
2025-03-23 09:55:58 +02:00
6610921bba
cleanup
2025-03-20 21:49:19 +02:00
f5e381e1b2
Add fallback for YAML parsing using original response text
2025-03-11 17:11:10 +02:00
9a574e0caa
Add filter for files with bad extensions in language handler
2025-03-11 17:03:05 +02:00
0f33750035
Remove unused filter_bad_extensions function and rename diff_files_original to diff_files
2025-03-11 16:56:41 +02:00
d16012a568
Add decoupled and non-decoupled modes for code suggestions
2025-03-11 16:46:53 +02:00
f5bd98a3b9
Add check for auto-generated files in language handler
2025-03-11 14:37:45 +02:00
ffefcb8a04
Fix default value for extended_thinking_max_output_tokens
2025-03-11 17:48:12 +07:00
35bb2b31e3
feat: add enable_comment_approval to encoded forbidden args
2025-03-10 12:10:19 +02:00
20d709075c
Merge pull request #1613 from qodo-ai/hl/update_auto_approve_docs
...
docs: update auto-approval documentation with clearer configuration
2025-03-10 11:56:48 +02:00
52c99e3f7b
Merge pull request #1605 from KennyDizi/main
...
Support extended thinking for model `claude-3-7-sonnet-20250219`
2025-03-09 17:03:37 +02:00
884b49dd84
Add encoded: enable_manual_approval
2025-03-09 17:01:04 +02:00
222155e4f2
Optimize logging
2025-03-08 08:53:29 +07:00
f9d5e72058
Move logic to _configure_claude_extended_thinking
2025-03-08 08:35:34 +07:00
2619ff3eb3
Merge pull request #1612 from congziqi77/main
...
fix: repeat processing files to ignore
2025-03-07 21:08:46 +02:00
a8935dece3
Use 2048 for both extended_thinking_budget_tokens and extended_thinking_max_output_tokens
2025-03-07 17:27:56 +07:00
fd12191fcf
fix: repeat processing files to ignore
2025-03-07 09:11:43 +08:00
4f2551e0a6
feat: add DeepInfra support
2025-03-06 15:49:07 +07:00
30bf7572b0
Validate extended thinking parameters
2025-03-03 18:44:26 +07:00
440d2368a4
Set temperature to 1 when using extended thinking
2025-03-03 18:30:52 +07:00
215c10cc8c
Add thinking block to request parameters
2025-03-03 18:29:33 +07:00
7623e1a419
Removed trailing spaces
2025-03-03 18:23:45 +07:00
5e30e190b8
Define models that support extended thinking feature
2025-03-03 18:22:31 +07:00
8e6267b0e6
chore: bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0
2025-03-02 08:44:23 +09:00
3817aa2868
fix: remove redundant temperature logging in litellm handler
2025-02-27 10:55:01 +02:00
c7f4b87d6f
Merge pull request #1583 from qodo-ai/hl/enhance_azure_devops
...
feat: enhance Azure DevOps integration with improved error handling a…
2025-02-26 17:17:31 +02:00
52a68bcd44
fix: adjust newline formatting in issue details summary
2025-02-26 16:49:44 +02:00
d6f405dd0d
Merge pull request #1564 from chandan84/fix/support_litellm_extra_headers
...
Fix/support litellm extra headers
2025-02-26 10:15:22 +02:00