869a179506  2025-04-18 14:04:59 +09:00
    feat: add support for Mistral and Codestral models

27a7c1a94f  2025-04-16 13:32:53 +05:30
    doc update and minor fix

dc46acb762  2025-04-16 13:27:52 +05:30
    doc update and minor fix

0da667d179  2025-04-16 11:19:04 +05:30
    support Azure AD authentication for OpenAI services in the litellm implementation
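The Azure AD commit might be wired up roughly as follows; this is a minimal sketch, assuming a hypothetical `build_azure_auth_kwargs` helper and a plain `settings` dict (litellm does accept an `azure_ad_token` keyword for Azure OpenAI deployments, but the helper itself is illustrative):

```python
def build_azure_auth_kwargs(settings: dict) -> dict:
    """Prefer an Azure AD token over a static API key when one is configured."""
    kwargs = {}
    if settings.get("azure_ad_token"):
        # litellm forwards azure_ad_token to the Azure OpenAI client
        kwargs["azure_ad_token"] = settings["azure_ad_token"]
    elif settings.get("api_key"):
        kwargs["api_key"] = settings["api_key"]
    return kwargs
```

The resulting dict would then be merged into the completion call's keyword arguments.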

665fb90a98  2025-04-08 01:36:21 +08:00
    Add support for xAI and their Grok-2 model
    Close #1630

6610921bba  2025-03-20 21:49:19 +02:00
    cleanup

ffefcb8a04  2025-03-11 17:48:12 +07:00
    Fix default value for extended_thinking_max_output_tokens

52c99e3f7b  2025-03-09 17:03:37 +02:00
    Merge pull request #1605 from KennyDizi/main
    Support extended thinking for model `claude-3-7-sonnet-20250219`

222155e4f2  2025-03-08 08:53:29 +07:00
    Optimize logging

f9d5e72058  2025-03-08 08:35:34 +07:00
    Move logic to _configure_claude_extended_thinking

a8935dece3  2025-03-07 17:27:56 +07:00
    Use 2048 for extended_thinking_budget_tokens as well as extended_thinking_max_output_tokens

4f2551e0a6  2025-03-06 15:49:07 +07:00
    feat: add DeepInfra support

30bf7572b0  2025-03-03 18:44:26 +07:00
    Validate extended thinking parameters

440d2368a4  2025-03-03 18:30:52 +07:00
    Set temperature to 1 when using extended thinking

215c10cc8c  2025-03-03 18:29:33 +07:00
    Add thinking block to request parameters

7623e1a419  2025-03-03 18:23:45 +07:00
    Removed trailing spaces

3817aa2868  2025-02-27 10:55:01 +02:00
    fix: remove redundant temperature logging in litellm handler

d6f405dd0d  2025-02-26 10:15:22 +02:00
    Merge pull request #1564 from chandan84/fix/support_litellm_extra_headers
    Fix/support litellm extra headers

93e34703ab  2025-02-25 14:44:03 -05:00
    Update litellm_ai_handler.py
    Updates made based on review of https://github.com/qodo-ai/pr-agent/pull/1564

84983f3e9d  2025-02-22 14:56:17 -05:00
    Lines 253-261: pass extra_headers fields from settings to litellm; add exception handling to check that extra_headers is in dict format

71451de156  2025-02-22 14:43:03 -05:00
    Update litellm_ai_handler.py
    Lines 253-258: pass extra_headers fields from settings to litellm; add exception handling to check that extra_headers is in dict format

0e4a1d9ab8  2025-02-22 14:38:38 -05:00
    Lines 253-258: pass extra_headers fields from settings to litellm; add exception handling to check that extra_headers is in dict format

e7b05732f8  2025-02-22 14:12:39 -05:00
    Lines 253-255: pass extra_headers fields from settings to litellm
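The extra_headers work from PR #1564 amounts to validating a setting before handing it to litellm. A minimal sketch, with a hypothetical `parse_extra_headers` helper (the JSON-string fallback is an assumption for illustration; litellm's completion call does accept an `extra_headers` dict):

```python
import json

def parse_extra_headers(value):
    """Normalize an extra_headers setting to a dict, rejecting anything else."""
    if isinstance(value, str):
        value = json.loads(value)  # allow headers supplied as a JSON string
    if not isinstance(value, dict):
        raise TypeError("extra_headers must be a dict of header name -> value")
    return value
```

The validated dict would then be passed as `extra_headers=...` in the completion request.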

37083ae354  2025-02-22 22:19:58 +07:00
    Improve logging for adding parameters: temperature and reasoning_effort

9abb212e83  2025-02-21 22:16:18 +07:00
    Add reasoning_effort argument to chat completion request
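Adding `reasoning_effort` to the chat completion request could look something like this; the model set below is illustrative, not the project's configuration (`reasoning_effort` is a real parameter that litellm forwards to providers that support it):

```python
SUPPORTS_REASONING_EFFORT = {"o1", "o3-mini"}  # illustrative set, not the project's list

def add_reasoning_effort(kwargs: dict, model: str, effort: str = "medium") -> dict:
    """Attach reasoning_effort only for models that understand it."""
    if model in SUPPORTS_REASONING_EFFORT:
        kwargs["reasoning_effort"] = effort  # forwarded by litellm to the provider
    return kwargs
```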

35059cadf7  2025-02-18 11:50:48 +02:00
    Update pr_agent/algo/ai_handlers/litellm_ai_handler.py
    Co-authored-by: qodo-merge-pro-for-open-source[bot] <189517486+qodo-merge-pro-for-open-source[bot]@users.noreply.github.com>

4edb8b89d1  2025-02-18 11:46:22 +02:00
    feat: add support for custom reasoning models

adfc2a6b69  2025-02-16 15:43:40 +07:00
    Add temperature only if model supports it
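"Add temperature only if model supports it" suggests a simple guard around the parameter; a sketch under the assumption that unsupported models are tracked in a set (the set below is hypothetical, and reasoning models such as the o1 series are known to reject a custom temperature):

```python
NO_TEMPERATURE_MODELS = {"o1", "o1-mini", "o3-mini"}  # hypothetical list

def maybe_add_temperature(kwargs: dict, model: str, temperature: float = 0.2) -> dict:
    """Only set temperature for models that accept it; reasoning models often do not."""
    if model not in NO_TEMPERATURE_MODELS:
        kwargs["temperature"] = temperature
    return kwargs
```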

4ac1e15bae  2025-02-02 18:01:44 +07:00
    Refactor the user-messages-only flow

48377e3c81  2025-01-31 11:53:05 +07:00
    Add a null check for user_message_only_models before using it

7eb26b3220  2025-01-31 11:25:51 +07:00
    Check whether the current model is in the user_message_only_models list

c2ca79da0d  2025-01-22 20:33:43 +07:00
    Combine system and user prompts for o1-series and deepseek-reasoner models
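The user-messages-only flow described in these commits (null check, membership check, prompt merging) can be sketched in one place; the model set and function name below are illustrative, not the project's code:

```python
USER_MESSAGE_ONLY_MODELS = {"deepseek-reasoner"}  # illustrative; the real list is configurable

def build_messages(model: str, system_prompt: str, user_prompt: str) -> list:
    """Merge the system prompt into the user message for models that reject a system role."""
    # Null check before membership test, as in commit 48377e3c81
    if USER_MESSAGE_ONLY_MODELS and model in USER_MESSAGE_ONLY_MODELS:
        return [{"role": "user", "content": f"{system_prompt}\n{user_prompt}"}]
    return [{"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt}]
```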

e58a535594  2025-01-17 11:43:06 +07:00
    Inject the deepseek key into the DEEPSEEK_API_KEY environment variable
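The key injection is a one-liner; `DEEPSEEK_API_KEY` is the environment variable litellm reads for DeepSeek, while the helper name and settings key below are assumptions for illustration:

```python
import os

def inject_deepseek_key(settings: dict) -> None:
    """Expose the configured DeepSeek credential where litellm expects it."""
    key = settings.get("deepseek_key")
    if key:
        os.environ["DEEPSEEK_API_KEY"] = key
```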

23678c1d4d  2024-12-22 10:36:59 +07:00
    Update O1_MODEL_PREFIX to o1 based on newly released models

452abe2e18  2024-12-17 08:44:53 -07:00
    Move get_version to algo/util.py; fix version to 0.25

75a120952c  2024-12-09 09:27:54 -07:00
    Add version metadata and --version command

81dea65856  2024-10-30 10:00:36 +09:00
    Format files by pre-commit run -a
    Signed-off-by: Yu Ishikawa <yu-iskw@users.noreply.github.com>

db062e3e35  2024-10-29 08:00:16 +09:00
    Support Google AI Studio
    Signed-off-by: Yu Ishikawa <yu-iskw@users.noreply.github.com>

dcb7b66fd7  2024-10-19 11:34:57 +03:00
    Update pr_agent/algo/ai_handlers/litellm_ai_handler.py
    Co-authored-by: codiumai-pr-agent-pro[bot] <151058649+codiumai-pr-agent-pro[bot]@users.noreply.github.com>

b7437147af  2024-10-19 11:32:45 +03:00
    fix: correct model type extraction for O1 model handling in litellm_ai_handler.py

e6c56c7355  2024-10-09 08:56:31 +03:00
    Update pr_agent/algo/ai_handlers/litellm_ai_handler.py
    Co-authored-by: codiumai-pr-agent-pro[bot] <151058649+codiumai-pr-agent-pro[bot]@users.noreply.github.com>

727b08fde3  2024-10-09 08:53:34 +03:00
    feat: add support for O1 model by combining system and user prompts in litellm_ai_handler

8d82cb2e04  2024-09-15 08:50:24 +03:00
    Use f-string

8f943a0d44  2024-09-15 08:07:59 +03:00
    fix: update error logging messages and system prompt handling in litellm_ai_handler.py

578d7c69f8  2024-08-29 21:45:48 +09:00
    fix: change deprecated timeout parameter for litellm

8aa76a0ac5  2024-08-19 15:45:47 -04:00
    Add and document ability to use LiteLLM logging and observability tools

aa87bc60f6  2024-08-17 09:20:30 +03:00
    Rename 'add_callbacks' to 'add_litellm_callbacks' for clarity in litellm_ai_handler

c76aabc71e  2024-08-17 09:15:05 +03:00
    Add callback functionality to litellm_ai_handler for enhanced logging and metadata capture

e7e3970874  2024-08-13 16:26:32 +03:00
    Add error handling for empty system prompt in litellm_ai_handler and type conversion in utils.py

70da871876  2024-08-12 12:27:48 +03:00
    Lower OpenAI errors to warnings