8e0c5c8784 (2025-07-13 21:29:53 +03:00)
refactor(ai_handler): remove model parameter from _get_completion and handle it within the method

0e9cf274ef (2025-07-13 21:23:04 +03:00)
refactor(ai_handler): move streaming response handling and Azure token generation to helpers

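The streaming refactors above (0e9cf274ef, and 11fb6ccc7e below) move stream handling out of the main completion flow into a helper. The core of such a helper is collecting content deltas into one response; a minimal sketch, assuming OpenAI-style chunk dictionaries — the function name and chunk fixtures are illustrative, not pr-agent's actual code:

```python
def accumulate_stream(chunks):
    """Concatenate content deltas from a stream of chunk dicts into one text."""
    parts = []
    finish_reason = None
    for chunk in chunks:
        choice = chunk.get("choices", [{}])[0]
        delta = choice.get("delta", {})
        if delta.get("content"):
            parts.append(delta["content"])
        if choice.get("finish_reason"):
            finish_reason = choice["finish_reason"]
    return "".join(parts), finish_reason

# Simulated stream in the OpenAI delta format
stream = [
    {"choices": [{"delta": {"content": "Hel"}}]},
    {"choices": [{"delta": {"content": "lo"}}]},
    {"choices": [{"delta": {}, "finish_reason": "stop"}]},
]
print(accumulate_stream(stream))  # ('Hello', 'stop')
```

Keeping this in a helper is what lets the streaming path stay "compact" in the main flow: the caller just awaits one tuple instead of looping over chunks inline.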
3aae48f09c (2025-07-13 21:16:49 +03:00)
Merge pull request #1925 from Makonike/feature_only_streaming_model_support
feat: Support Only Streaming Model

8c7680d85d (2025-07-13 22:57:43 +08:00)
refactor(ai_handler): add a return statement or raise an exception in the elif branch

11fb6ccc7e (2025-07-13 22:37:14 +08:00)
refactor(ai_handler): compact streaming path to reduce main flow impact

74df3f8bd5 (2025-07-10 15:14:25 +08:00)
fix(ai_handler): improve empty streaming response validation logic

31e25a5965 (2025-07-09 15:39:15 +08:00)
refactor(ai_handler): improve streaming response handling robustness

85e1e2d4ee (2025-07-09 15:29:03 +08:00)
feat: add debug logging support for streaming models

2d8bee0d6d (2025-07-09 15:04:18 +08:00)
feat: add validation for empty streaming responses in LiteLLM handler

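A streaming provider can close the stream without ever emitting content, and the validation commits above (2d8bee0d6d, 74df3f8bd5) guard against treating that as a successful completion. A hedged sketch of such a check — the function name and error message are illustrative:

```python
def validate_streamed_response(full_text, finish_reason=None):
    """Raise if a streaming model produced no usable content."""
    if not full_text or not full_text.strip():
        raise ValueError(
            f"Empty streaming response (finish_reason={finish_reason!r})"
        )
    return full_text
```

Raising here lets the surrounding retry machinery treat an empty stream like any other failed call instead of silently returning an empty review.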
e0d7083768 (2025-07-09 12:04:26 +05:30)
feat: refactor LITELLM.EXTRA_BODY processing into a dedicated method

5e82d0a316 (2025-07-08 11:51:30 +08:00)
feat: add streaming support for openai/qwq-plus model

e2d71acb9d (2025-07-07 21:27:35 +05:30)
fix: remove comments

8127d52ab3 (2025-07-07 21:26:13 +05:30)
fix: security checks

6a55bbcd23 (2025-07-07 21:20:25 +05:30)
fix: prevent LITELLM.EXTRA_BODY from overriding existing parameters in LiteLLMAIHandler

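The EXTRA_BODY commits above (12af211c13 below introduced the setting; 6a55bbcd23 and 8127d52ab3 hardened it) parse a user-supplied JSON object and forward it to the request while refusing to clobber parameters the handler already set. A sketch under those assumptions — the function name and exact checks are illustrative, not pr-agent's actual code:

```python
import json

def merge_extra_body(request_kwargs, extra_body_json):
    """Parse a LITELLM.EXTRA_BODY-style JSON string and attach it to the
    request kwargs, rejecting non-dict payloads and key collisions."""
    extra = json.loads(extra_body_json)
    if not isinstance(extra, dict):
        raise ValueError("LITELLM.EXTRA_BODY must be a JSON object")
    overlap = set(extra) & set(request_kwargs)
    if overlap:
        raise ValueError(f"LITELLM.EXTRA_BODY cannot override: {sorted(overlap)}")
    request_kwargs["extra_body"] = extra
    return request_kwargs
```

For OpenAI Flex Processing, for example, the extra body would presumably carry a `service_tier` entry; the collision check ensures user config cannot silently replace parameters such as `temperature` that the handler itself manages.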
12af211c13 (2025-07-07 21:14:45 +05:30)
feat: support OpenAI Flex Processing via [litellm] extra_body config

608065f2ad (2025-06-17 09:26:57 +03:00)
fix: typos

466ec4ce90 (2025-05-22 15:04:16 +09:00)
fix: exclude RateLimitError from retry logic

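Commits 466ec4ce90 and 6405284461 (below) shape the retry behavior: transient API errors are retried, but rate-limit errors propagate immediately, since retrying a rate-limited call only burns more quota. A self-contained sketch with stand-in exception classes — pr-agent's actual implementation uses the provider's exception types and a retry library, so the names here are illustrative:

```python
import time

class RateLimitError(Exception):
    """Stand-in for the provider's rate-limit error."""

class APIError(Exception):
    """Stand-in for a transient provider error."""

def call_with_retries(fn, attempts=3, delay=0.0):
    """Retry transient APIErrors; re-raise RateLimitError immediately."""
    for i in range(attempts):
        try:
            return fn()
        except RateLimitError:
            raise  # excluded from retry logic
        except APIError:
            if i == attempts - 1:
                raise
            time.sleep(delay)
```

Note the ordering matters: the more specific `RateLimitError` clause must come before the general one, which is exactly what the "reorder exception handling" fix below addresses.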
6405284461 (2025-05-20 18:22:33 +09:00)
fix: reorder exception handling to enable proper retry behavior

250870a3da (2025-05-15 16:05:05 +02:00)
enable usage of OpenAI-like APIs

7a6a28d2b9 (2025-05-07 11:54:07 +07:00)
feat: add openrouter support in litellm

869a179506 (2025-04-18 14:04:59 +09:00)
feat: add support for Mistral and Codestral models

27a7c1a94f (2025-04-16 13:32:53 +05:30)
doc update and minor fix

dc46acb762 (2025-04-16 13:27:52 +05:30)
doc update and minor fix

0da667d179 (2025-04-16 11:19:04 +05:30)
support Azure AD authentication for OpenAI services in the litellm implementation

665fb90a98 (2025-04-08 01:36:21 +08:00)
Add support for xAI and their Grok-2 model
Close #1630

6610921bba (2025-03-20 21:49:19 +02:00)
cleanup

ffefcb8a04 (2025-03-11 17:48:12 +07:00)
Fix default value for extended_thinking_max_output_tokens

52c99e3f7b (2025-03-09 17:03:37 +02:00)
Merge pull request #1605 from KennyDizi/main
Support extended thinking for model `claude-3-7-sonnet-20250219`

222155e4f2 (2025-03-08 08:53:29 +07:00)
Optimize logging

f9d5e72058 (2025-03-08 08:35:34 +07:00)
Move logic to _configure_claude_extended_thinking

a8935dece3 (2025-03-07 17:27:56 +07:00)
Using 2048 for extended_thinking_budget_tokens as well as extended_thinking_max_output_tokens

4f2551e0a6 (2025-03-06 15:49:07 +07:00)
feat: add DeepInfra support

30bf7572b0 (2025-03-03 18:44:26 +07:00)
Validate extended thinking parameters

440d2368a4 (2025-03-03 18:30:52 +07:00)
Set temperature to 1 when using extended thinking

215c10cc8c (2025-03-03 18:29:33 +07:00)
Add thinking block to request parameters

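The extended-thinking commits above (215c10cc8c, 440d2368a4, 30bf7572b0, and `_configure_claude_extended_thinking` in f9d5e72058) combine into one configuration step: attach a thinking block, validate the token budget, and pin temperature to 1. A hedged sketch under the assumption that the request uses an Anthropic-style `thinking` parameter and that the budget may not exceed the output-token cap — the exact validation rule and field names are assumptions, not pr-agent's actual code:

```python
def configure_claude_extended_thinking(kwargs, budget_tokens=2048,
                                       max_output_tokens=2048):
    """Enable Claude extended thinking on a request-kwargs dict (illustrative)."""
    if budget_tokens > max_output_tokens:
        raise ValueError("thinking budget cannot exceed max output tokens")
    kwargs["thinking"] = {"type": "enabled", "budget_tokens": budget_tokens}
    kwargs["max_tokens"] = max_output_tokens
    kwargs["temperature"] = 1  # required when extended thinking is enabled
    return kwargs
```

The defaults of 2048 for both values mirror commit a8935dece3 above.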
7623e1a419 (2025-03-03 18:23:45 +07:00)
Removed trailing spaces

3817aa2868 (2025-02-27 10:55:01 +02:00)
fix: remove redundant temperature logging in litellm handler

d6f405dd0d (2025-02-26 10:15:22 +02:00)
Merge pull request #1564 from chandan84/fix/support_litellm_extra_headers
Fix/support litellm extra headers

93e34703ab (2025-02-25 14:44:03 -05:00)
Update litellm_ai_handler.py
updates made based on review on https://github.com/qodo-ai/pr-agent/pull/1564

84983f3e9d (2025-02-22 14:56:17 -05:00)
line 253-261, pass extra_headers fields from settings to litellm, exception handling to check if extra_headers is in dict format

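The extra_headers commits in PR #1564 (above and below) forward header fields from settings to litellm, with a type check so a malformed setting fails loudly rather than sending garbage headers. A sketch of that guard — the function name and error text are illustrative, not pr-agent's actual code:

```python
def apply_extra_headers(kwargs, extra_headers_setting):
    """Attach a settings-supplied extra_headers dict to the request kwargs,
    rejecting anything that is not a plain dict (illustrative)."""
    if extra_headers_setting is None:
        return kwargs  # no extra headers configured
    if not isinstance(extra_headers_setting, dict):
        raise TypeError("extra_headers must be a dict of header fields")
    kwargs["extra_headers"] = extra_headers_setting
    return kwargs
```

This is useful for gateways and proxies that require custom auth or routing headers on every completion call.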
71451de156 (2025-02-22 14:43:03 -05:00)
Update litellm_ai_handler.py
line 253-258, pass extra_headers fields from settings to litellm, exception handling to check if extra_headers is in dict format

0e4a1d9ab8 (2025-02-22 14:38:38 -05:00)
line 253-258, pass extra_headers fields from settings to litellm, exception handling to check if extra_headers is in dict format

e7b05732f8 (2025-02-22 14:12:39 -05:00)
line 253-255, pass extra_headers fields from settings to litellm

37083ae354 (2025-02-22 22:19:58 +07:00)
Improve logging for adding parameters: temperature and reasoning_effort

9abb212e83 (2025-02-21 22:16:18 +07:00)
Add reasoning_effort argument to chat completion request

35059cadf7 (2025-02-18 11:50:48 +02:00)
Update pr_agent/algo/ai_handlers/litellm_ai_handler.py
Co-authored-by: qodo-merge-pro-for-open-source[bot] <189517486+qodo-merge-pro-for-open-source[bot]@users.noreply.github.com>

4edb8b89d1 (2025-02-18 11:46:22 +02:00)
feat: add support for custom reasoning models

adfc2a6b69 (2025-02-16 15:43:40 +07:00)
Add temperature only if model supports it

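Commit adfc2a6b69 adds `temperature` conditionally, since some reasoning models reject the parameter outright. A minimal sketch of that gate — the model set here is a hypothetical placeholder, not pr-agent's actual list:

```python
# Hypothetical set of models that reject the temperature parameter
NO_TEMPERATURE_MODELS = {"o1", "o1-mini", "o3-mini"}

def maybe_add_temperature(kwargs, model, temperature=0.2):
    """Attach temperature only for models that accept it (illustrative)."""
    if model not in NO_TEMPERATURE_MODELS:
        kwargs["temperature"] = temperature
    return kwargs
```

The custom-reasoning-models feature (4edb8b89d1) generalizes this idea by letting users extend such lists through configuration instead of code changes.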
4ac1e15bae (2025-02-02 18:01:44 +07:00)
Refactoring user messages only flow

48377e3c81 (2025-01-31 11:53:05 +07:00)
Add a null check for user_message_only_models before using it