440d2368a4
Set temperature to 1 when using extended thinking
2025-03-03 18:30:52 +07:00
215c10cc8c
Add thinking block to request parameters
2025-03-03 18:29:33 +07:00
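The two commits above touch the extended-thinking path. A minimal sketch of that pattern, assuming litellm passes an Anthropic-style thinking block through and that extended thinking requires temperature 1; the function and parameter names are illustrative, not the repository's actual code:

```python
import litellm


async def chat_with_extended_thinking(model: str, messages: list[dict],
                                      thinking_budget_tokens: int = 2048):
    # Illustrative only: enable extended thinking and force temperature to 1,
    # since Anthropic's extended thinking does not accept other temperatures.
    return await litellm.acompletion(
        model=model,
        messages=messages,
        thinking={"type": "enabled", "budget_tokens": thinking_budget_tokens},
        temperature=1,
    )
```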
7623e1a419
Removed trailing spaces
2025-03-03 18:23:45 +07:00
3817aa2868
fix: remove redundant temperature logging in litellm handler
2025-02-27 10:55:01 +02:00
d6f405dd0d
Merge pull request #1564 from chandan84/fix/support_litellm_extra_headers
Fix/support litellm extra headers
2025-02-26 10:15:22 +02:00
93e34703ab
Update litellm_ai_handler.py
Updates made based on the review of https://github.com/qodo-ai/pr-agent/pull/1564
2025-02-25 14:44:03 -05:00
84983f3e9d
Lines 253-261: pass extra_headers fields from settings to litellm; add exception handling to check that extra_headers is in dict format
2025-02-22 14:56:17 -05:00
71451de156
Update litellm_ai_handler.py
Lines 253-258: pass extra_headers fields from settings to litellm; add exception handling to check that extra_headers is in dict format
2025-02-22 14:43:03 -05:00
0e4a1d9ab8
Lines 253-258: pass extra_headers fields from settings to litellm; add exception handling to check that extra_headers is in dict format
2025-02-22 14:38:38 -05:00
e7b05732f8
Lines 253-255: pass extra_headers fields from settings to litellm
2025-02-22 14:12:39 -05:00
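The extra_headers commits above describe forwarding configured headers to litellm with a dict-format check. A rough sketch of that idea; the settings attribute name is an assumption for illustration:

```python
def add_extra_headers(kwargs: dict, settings) -> dict:
    # Forward configured extra headers to litellm, but only if they are
    # provided as a dict of header name/value pairs.
    extra_headers = getattr(settings, "litellm_extra_headers", None)
    if extra_headers is not None:
        if not isinstance(extra_headers, dict):
            raise ValueError("extra_headers must be a dict of header name/value pairs")
        kwargs["extra_headers"] = extra_headers
    return kwargs
```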
37083ae354
Improve logging for adding parameters: temperature and reasoning_effort
2025-02-22 22:19:58 +07:00
9abb212e83
Add reasoning_effort argument to chat completion request
2025-02-21 22:16:18 +07:00
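A sketch of the two commits above, reasoning_effort plus parameter logging; the logger setup, defaults, and log wording are assumptions:

```python
import logging

import litellm

logger = logging.getLogger(__name__)


async def chat_completion(model: str, messages: list[dict],
                          temperature: float = 0.2,
                          reasoning_effort: str = ""):
    kwargs = {"model": model, "messages": messages, "temperature": temperature}
    logger.info(f"Adding temperature with value {temperature} to model {model}")
    if reasoning_effort:  # e.g. "low", "medium", "high" for reasoning models
        kwargs["reasoning_effort"] = reasoning_effort
        logger.info(f"Adding reasoning_effort with value {reasoning_effort} to model {model}")
    return await litellm.acompletion(**kwargs)
```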
35059cadf7
Update pr_agent/algo/ai_handlers/litellm_ai_handler.py
Co-authored-by: qodo-merge-pro-for-open-source[bot] <189517486+qodo-merge-pro-for-open-source[bot]@users.noreply.github.com>
2025-02-18 11:50:48 +02:00
4edb8b89d1
feat: add support for custom reasoning models
2025-02-18 11:46:22 +02:00
adfc2a6b69
Add temperature only if model supports it
2025-02-16 15:43:40 +07:00
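A minimal sketch of the gating the two commits above describe; the prefix list and the custom_reasoning_model flag are assumptions used for illustration:

```python
# Reasoning models generally reject the temperature parameter.
NO_TEMPERATURE_MODEL_PREFIXES = ["o1", "o1-mini", "o1-preview", "o3-mini"]


def add_temperature_if_supported(kwargs: dict, model: str, temperature: float,
                                 custom_reasoning_model: bool = False) -> dict:
    # Treat the model as a reasoning model if the user flags it as such or if it
    # matches a known prefix; only non-reasoning models get a temperature.
    is_reasoning = custom_reasoning_model or any(
        model.split("/")[-1].startswith(prefix)
        for prefix in NO_TEMPERATURE_MODEL_PREFIXES
    )
    if not is_reasoning:
        kwargs["temperature"] = temperature
    return kwargs
```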
4ac1e15bae
Refactor the user-message-only flow
2025-02-02 18:01:44 +07:00
48377e3c81
Add a null check for user_message_only_models before using it
2025-01-31 11:53:05 +07:00
7eb26b3220
Check current model is in user_message_only_models list
2025-01-31 11:25:51 +07:00
c2ca79da0d
Combining system and user prompts for o1 series and deepseek-reasoner models
2025-01-22 20:33:43 +07:00
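The user_message_only_models commits above fold the system prompt into the user prompt for models that reject system messages. A rough sketch, with the list contents assumed:

```python
USER_MESSAGE_ONLY_MODELS = ["deepseek/deepseek-reasoner", "o1-mini", "o1-preview", "o1"]


def build_messages(model: str, system: str, user: str) -> list[dict]:
    user_message_only_models = USER_MESSAGE_ONLY_MODELS or []  # null check
    if any(entry in model for entry in user_message_only_models):
        # Combine system and user prompts into a single user message.
        combined = f"{system}\n\n{user}" if system else user
        return [{"role": "user", "content": combined}]
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]
```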
e58a535594
Inject deepseek key to DEEPSEEK_API_KEY environment variable
2025-01-17 11:43:06 +07:00
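A sketch of the key injection above, assuming the key is read from a deepseek settings section; litellm's deepseek provider looks for DEEPSEEK_API_KEY in the environment:

```python
import os


def inject_deepseek_key(settings) -> None:
    # Copy the configured key into the environment variable litellm expects.
    deepseek_key = getattr(settings, "deepseek_key", None)
    if deepseek_key:
        os.environ["DEEPSEEK_API_KEY"] = deepseek_key
```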
23678c1d4d
Update O1_MODEL_PREFIX to o1 based on new models released
2024-12-22 10:36:59 +07:00
452abe2e18
Move get_version to algo/util.py; fix version to 0.25
2024-12-17 08:44:53 -07:00
75a120952c
Add version metadata and --version command
2024-12-09 09:27:54 -07:00
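A hypothetical sketch of the get_version helper the two commits above mention, reading installed package metadata with a fallback; the package name is an assumption:

```python
from importlib.metadata import PackageNotFoundError, version


def get_version() -> str:
    # Prefer the installed package's metadata; fall back when running from source.
    try:
        return version("pr-agent")
    except PackageNotFoundError:
        return "unknown"
```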
81dea65856
Format files by pre-commit run -a
Signed-off-by: Yu Ishikawa <yu-iskw@users.noreply.github.com>
2024-10-30 10:00:36 +09:00
db062e3e35
Support Google AI Studio
Signed-off-by: Yu Ishikawa <yu-iskw@users.noreply.github.com>
2024-10-29 08:00:16 +09:00
dcb7b66fd7
Update pr_agent/algo/ai_handlers/litellm_ai_handler.py
Co-authored-by: codiumai-pr-agent-pro[bot] <151058649+codiumai-pr-agent-pro[bot]@users.noreply.github.com>
2024-10-19 11:34:57 +03:00
b7437147af
fix: correct model type extraction for O1 model handling in litellm_ai_handler.py
2024-10-19 11:32:45 +03:00
e6c56c7355
Update pr_agent/algo/ai_handlers/litellm_ai_handler.py
Co-authored-by: codiumai-pr-agent-pro[bot] <151058649+codiumai-pr-agent-pro[bot]@users.noreply.github.com>
2024-10-09 08:56:31 +03:00
727b08fde3
feat: add support for O1 model by combining system and user prompts in litellm_ai_handler
2024-10-09 08:53:34 +03:00
8d82cb2e04
f string
2024-09-15 08:50:24 +03:00
8f943a0d44
fix: update error logging messages and system prompt handling in litellm_ai_handler.py
2024-09-15 08:07:59 +03:00
578d7c69f8
fix: change deprecated timeout parameter for litellm
2024-08-29 21:45:48 +09:00
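A sketch of the timeout fix above; the deprecated parameter name is not recorded in the log, so this only shows the current litellm-style timeout argument:

```python
import litellm


async def chat_with_timeout(model: str, messages: list[dict], timeout_seconds: int = 180):
    # Pass the per-request timeout via the `timeout` argument litellm accepts today.
    return await litellm.acompletion(
        model=model,
        messages=messages,
        timeout=timeout_seconds,
    )
```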
8aa76a0ac5
Add and document ability to use LiteLLM Logging Observability tools
2024-08-19 15:45:47 -04:00
aa87bc60f6
Rename 'add_callbacks' to 'add_litellm_callbacks' for clarity in litellm_ai_handler
2024-08-17 09:20:30 +03:00
c76aabc71e
Add callback functionality to litellm_ai_handler for enhanced logging and metadata capture
2024-08-17 09:15:05 +03:00
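A rough sketch of the callback wiring the commits above describe, with the settings names and metadata fields assumed; litellm exposes module-level success_callback and failure_callback lists for observability tools such as Langfuse:

```python
import litellm


def add_litellm_callbacks(settings, kwargs: dict) -> dict:
    # Register configured callbacks (e.g. ["langfuse"]) with litellm.
    success_callbacks = getattr(settings, "litellm_success_callback", None)
    failure_callbacks = getattr(settings, "litellm_failure_callback", None)
    if success_callbacks:
        litellm.success_callback = success_callbacks
    if failure_callbacks:
        litellm.failure_callback = failure_callbacks
    # Attach request metadata so calls can be grouped in the observability tool.
    kwargs["metadata"] = {"application": "pr_agent"}
    return kwargs
```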
e7e3970874
Add error handling for empty system prompt in litellm_ai_handler and type conversion in utils.py
2024-08-13 16:26:32 +03:00
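A minimal sketch of the empty-system-prompt guard mentioned above; the single-space fallback is an assumption:

```python
import logging

logger = logging.getLogger(__name__)


def ensure_system_prompt(system: str) -> str:
    # Some providers reject an empty system message, so substitute a single space.
    if not system:
        logger.warning("Empty system prompt; replacing with a single space.")
        return " "
    return system
```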
70da871876
lower OpenAI errors to warnings
2024-08-12 12:27:48 +03:00
38c38ec280
Update pr_agent/algo/ai_handlers/litellm_ai_handler.py
Co-authored-by: codiumai-pr-agent-pro[bot] <151058649+codiumai-pr-agent-pro[bot]@users.noreply.github.com>
2024-07-27 18:03:35 +03:00
3904eebf85
Update pr_agent/algo/ai_handlers/litellm_ai_handler.py
Co-authored-by: codiumai-pr-agent-pro[bot] <151058649+codiumai-pr-agent-pro[bot]@users.noreply.github.com>
2024-07-27 18:02:57 +03:00
3067afbcb3
Update seed handling: log fixed seed usage; adjust default seed and temperature in config
2024-07-27 17:50:59 +03:00
7eadb45c09
Refactor seed handling logic in litellm_ai_handler to improve readability and error checking
2024-07-27 17:23:42 +03:00
ac247dbc2c
Add end-to-end tests for GitHub, GitLab, and Bitbucket apps; update temperature setting usage across tools
2024-07-27 17:19:32 +03:00
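A rough sketch of the seed handling described in the three commits above; the exact checks and setting names are assumptions:

```python
import logging

logger = logging.getLogger(__name__)


def apply_seed_and_temperature(kwargs: dict, user_seed: int, temperature: float) -> dict:
    # A fixed (non-negative) seed is only meaningful with temperature 0.
    if user_seed >= 0:
        if temperature > 0:
            raise ValueError("Seed is not supported with temperature > 0")
        logger.info(f"Using fixed seed of {user_seed}")
        kwargs["seed"] = user_seed
    kwargs["temperature"] = temperature
    return kwargs
```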
20d9d8ad07
Update pr_agent/algo/ai_handlers/litellm_ai_handler.py
Co-authored-by: codiumai-pr-agent-pro[bot] <151058649+codiumai-pr-agent-pro[bot]@users.noreply.github.com>
2024-07-04 12:26:23 +03:00
f3c80891f8
sonnet-3.5
2024-07-04 12:23:36 +03:00
bf5673912d
APITimeoutError
2024-06-29 11:30:15 +03:00
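The APITimeoutError commit above presumably adds that exception to the retry path. A sketch of that pattern using tenacity and the openai exception types; the retry policy shown is an assumption:

```python
import litellm
from openai import APIError, APITimeoutError, RateLimitError
from tenacity import retry, retry_if_exception_type, stop_after_attempt


@retry(retry=retry_if_exception_type((APIError, APITimeoutError, RateLimitError)),
       stop=stop_after_attempt(3))
async def chat_completion_with_retry(model: str, messages: list[dict]):
    # Retried on transient OpenAI-compatible errors, including request timeouts.
    return await litellm.acompletion(model=model, messages=messages)
```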
5268a84bcc
repetition_penalty
Correct the spelling of this variable.
Fixing spelling errors now prevents issues going forward where people would have to misspell something on purpose.
2024-06-16 17:28:30 +01:00
b4f0ad948f
Update Python code formatting, configuration loading, and local model additions
1. Code Formatting:
- Standardized Python code formatting across multiple files to align with PEP 8 guidelines. This includes adjustments to whitespace, line breaks, and inline comments.
2. Configuration Loader Enhancements:
- Enhanced the `get_settings` function in `config_loader.py` to provide more robust handling of settings retrieval. Added detailed documentation to improve code maintainability and clarity.
3. Model Addition in __init__.py:
- Added a new model "ollama/llama3" with a token limit to the MAX_TOKENS dictionary in `__init__.py` to support new AI capabilities and configurations.
2024-06-03 23:58:31 +08:00
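A sketch of the MAX_TOKENS addition described in item 3 of the commit body above; the surrounding entry and the llama3 limit are assumptions for illustration:

```python
# Maps model names to the token limit the handler should respect.
MAX_TOKENS = {
    "gpt-4o": 128000,
    "ollama/llama3": 4096,  # newly added local model entry
}
```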
fae6cab2a7
Merge pull request #877 from randy-tsukemen/support-groq-llama3
Add Groq Llama3 support
2024-04-22 11:41:12 +03:00
0a53f09a7f
Add GROQ.KEY support in LiteLLMAIHandler
2024-04-21 15:21:45 +09:00
7a9e73702d
Fix duplicate assignment of replicate_key in LiteLLMAIHandler
2024-04-21 14:47:25 +09:00