aa87bc60f6
Rename 'add_callbacks' to 'add_litellm_callbacks' for clarity in litellm_ai_handler
2024-08-17 09:20:30 +03:00
c76aabc71e
Add callback functionality to litellm_ai_handler for enhanced logging and metadata capture
2024-08-17 09:15:05 +03:00
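For context, a minimal sketch of what hooking litellm callbacks and per-call metadata capture could look like; the function name, callback targets, and metadata fields below are illustrative, not the actual pr-agent code.

```python
import litellm
from litellm import acompletion

def add_litellm_callbacks(callback_names: list[str]) -> None:
    # Illustrative: litellm exposes module-level callback lists that are
    # invoked after each completion; the concrete targets pr-agent registers
    # may differ from whatever names are passed here.
    litellm.success_callback = callback_names   # fired on successful calls
    litellm.failure_callback = callback_names   # fired when a request fails

async def chat_completion(model: str, messages: list[dict]) -> str:
    # Extra metadata is forwarded to the registered callbacks for logging.
    response = await acompletion(
        model=model,
        messages=messages,
        metadata={"tool": "pr_agent", "command": "review"},  # assumed fields
    )
    return response.choices[0].message.content
```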
b9df034c97
Merge pull request #1138 from Codium-ai/tr/err_protections
Add 'only_markdown' parameter to emphasize_header call in utils.py fo…
2024-08-14 14:03:43 +03:00
bae8d36698
Add 'only_markdown' parameter to emphasize_header call in utils.py for security concerns section
2024-08-14 14:02:09 +03:00
4fea780b9b
fix html escaping
2024-08-14 12:13:51 +03:00
f4b06640d2
Add info log for successful AI prediction parse in utils.py
2024-08-14 08:14:51 +03:00
f1981092d3
Add warning log for initial AI prediction parse failure and error log for fallback failure in utils.py
2024-08-14 08:08:55 +03:00
86a9cfedc8
Add error handling for empty diff files in utils.py and optimize file content retrieval in Bitbucket provider
2024-08-13 22:33:07 +03:00
f89bdcf3c3
Add error handling for missing custom label settings in utils.py
2024-08-13 16:40:05 +03:00
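A hedged sketch of the kind of guard this implies, assuming pr-agent's Dynaconf-backed `get_settings()` accessor; the setting keys used here are assumptions.

```python
from pr_agent.config_loader import get_settings  # assumed import path
from pr_agent.log import get_logger              # assumed import path

def load_custom_labels() -> dict:
    # Illustrative: fall back gracefully when custom labels are enabled but
    # the [custom_labels] section is missing from the configuration.
    settings = get_settings()
    if not settings.get("config.enable_custom_labels", False):
        return {}
    custom_labels = settings.get("custom_labels", None)
    if custom_labels is None:
        get_logger().warning("Custom labels enabled but no [custom_labels] section found")
        return {}
    return dict(custom_labels)
```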
e7e3970874
Add error handling for empty system prompt in litellm_ai_handler and type conversion in utils.py
2024-08-13 16:26:32 +03:00
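A small illustrative version of both safeguards; the placeholder prompt and helper names are assumptions, not the handler's actual code.

```python
def prepare_prompts(system: str, user: str) -> tuple[str, str]:
    # Some providers reject requests whose system prompt is empty,
    # so substitute a minimal placeholder (illustrative behaviour).
    if not system:
        system = "\n"
    return system, user

def to_int(value, default: int = 0) -> int:
    # Defensive type conversion for settings that may arrive as strings.
    try:
        return int(value)
    except (TypeError, ValueError):
        return default
```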
1aa6dd9b5d
Add error handling for missing file paths in Bitbucket provider and improve file validation logic
2024-08-13 11:28:21 +03:00
4a38861d06
Add error handling for missing file paths in file_filter.py for Bitbucket and GitLab platforms
2024-08-13 08:59:27 +03:00
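A hedged sketch of a defensive path lookup along those lines; the diff-dict keys are assumptions about the Bitbucket and GitLab payload shapes.

```python
def get_file_path(diff: dict, platform: str) -> str | None:
    # Illustrative: platforms expose the changed-file path under different
    # keys, and either side may be missing for added or deleted files.
    if platform == "bitbucket":
        new = diff.get("new") or {}
        old = diff.get("old") or {}
        return new.get("path") or old.get("path")
    if platform == "gitlab":
        return diff.get("new_path") or diff.get("old_path")
    return diff.get("filename")

def filter_resolvable(diffs: list[dict], platform: str) -> list[dict]:
    # Files without a resolvable path are skipped instead of raising.
    return [d for d in diffs if get_file_path(d, platform)]
```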
4228f92e7e
Merge pull request #1119 from Codium-ai/hl/limit_long_comments
Hl/limit long comments
2024-08-12 16:25:42 +03:00
70da871876
lower OpenAI errors to warnings
2024-08-12 12:27:48 +03:00
5c4bc0a008
Add Bitbucket diff handling and improve error logging
- Implement `publish_file_comments` method placeholder
- Enhance `is_supported` method to include `publish_file_comments`
- Refactor diff splitting logic to handle Bitbucket-specific headers
- Improve error handling and logging for file content retrieval
- Add `get_pr_owner_id` method to retrieve PR owner ID
- Update `_get_pr_file_content` to fetch file content from remote link
- Fix variable name typo in `extend_patch` function in `git_patch_processing.py`
2024-08-12 09:48:26 +03:00
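As a rough illustration of the remote-link fetch mentioned in the bullets above; the function name, auth scheme, and error handling are assumptions rather than the provider's actual implementation.

```python
import logging

import requests

def _get_pr_file_content(remote_link: str, bearer_token: str) -> str:
    # Illustrative: download the raw file referenced by a diff's remote link.
    # Returns an empty string on failure so callers can skip the file.
    try:
        resp = requests.get(
            remote_link,
            headers={"Authorization": f"Bearer {bearer_token}"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.text
    except requests.RequestException as exc:
        logging.warning("Failed to fetch file content from %s: %s", remote_link, exc)
        return ""
```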
4c1c313031
Add missing newline in extended patch and remove trailing whitespace
2024-08-11 18:49:28 +03:00
12742ef499
Adjust patch extension logic to handle cases where extended size exceeds original file length
2024-08-11 15:48:58 +03:00
63e921a2c5
Adjust patch extension logic to handle cases where extended size exceeds original file length
2024-08-11 15:46:46 +03:00
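The two commits above amount to clamping the extended hunk to the bounds of the original file; a minimal sketch of that clamping, with hunk-header parsing omitted and names assumed.

```python
def clamp_extension(start: int, size: int, extra_before: int, extra_after: int,
                    original_file_len: int) -> tuple[int, int]:
    # Illustrative: extend a hunk by extra context lines, but never past
    # the first or last line of the original file.
    new_start = max(1, start - extra_before)
    new_end = min(original_file_len, start + size - 1 + extra_after)
    return new_start, new_end - new_start + 1
```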
a06670bc27
Fix incorrect logic for extending patch size beyond original file length
2024-08-11 15:20:27 +03:00
e85b75fe64
Refactor patch extension logic to handle cases with zero extra lines
2024-08-11 12:56:56 +03:00
df04a7e046
Add spaces to extra lines in patch extension for consistency
2024-08-11 12:32:26 +03:00
9c3f080112
comments
2024-08-11 12:15:47 +03:00
ed65493718
Handle edge cases for patch extension and update tests
2024-08-11 12:08:00 +03:00
e238a88824
Add tests for patch extension and update configuration for extra lines handling
- Added unit tests in `test_extend_patch.py` and `test_pr_generate_extended_diff.py` to verify patch extension functionality with extra lines.
- Updated `pr_processing.py` to include `patch_extra_lines_before` and `patch_extra_lines_after` settings.
- Modified `configuration.toml` to adjust `patch_extra_lines_before` to 4 and `max_context_tokens` to 16000.
- Enabled extra lines in `pr_code_suggestions.py`.
- Added new model `claude-3-5-sonnet` to `__init__.py`.
2024-08-11 09:21:34 +03:00
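An illustrative test in the spirit of the first bullet above; the `extend_patch` keyword arguments are assumed to mirror the configuration keys named in the commit, and the expected output is only a sketch.

```python
from pr_agent.algo.git_patch_processing import extend_patch  # assumed import path

def test_extend_patch_adds_context_lines():
    original_file = "line1\nline2\nline3\nline4\nline5\n"
    patch = "@@ -3,1 +3,1 @@\n-line3\n+line3 changed\n"
    # Keyword names assumed to mirror patch_extra_lines_before / _after.
    extended = extend_patch(
        original_file, patch,
        patch_extra_lines_before=1,
        patch_extra_lines_after=1,
    )
    # With one extra line of context on each side, the neighbouring lines
    # should now appear as context lines in the extended patch.
    assert " line2" in extended
    assert " line4" in extended
```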
61bdfd3b99
patch_extra_lines_before and patch_extra_lines_after
2024-08-10 21:55:51 +03:00
84b80f792d
protections
2024-08-09 21:44:00 +03:00
b370cb6ae7
Merge pull request #1102 from MarkRx/feature/langchain-azure-fix
Fix LangChainOpenAIHandler for Azure
2024-08-08 19:37:26 +03:00
4201779ce2
Fix LangChainOpenAIHandler for Azure
2024-08-08 09:55:18 -04:00
4c0fd37ac2
Fix pr_processing.get_pr_multi_diffs
Fix function to return an empty list instead of a single joined string when patches_extended is empty.
2024-08-08 11:46:26 +09:00
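A sketch of just the early-return guard this describes; the real `get_pr_multi_diffs` takes a git provider and token handler, which is elided here under a hypothetical helper name.

```python
def build_multi_diffs(patches_extended: list[str]) -> list[str]:
    # Illustrative: previously an empty patches_extended was joined into a
    # single empty string, so callers saw one bogus "diff" to process.
    if not patches_extended:
        return []  # the fix: signal "nothing to process" explicitly
    return ["\n".join(patches_extended)]
```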
c996c7117f
Fix function to return an empty list instead of a single joined string when patches_extended is empty.
2024-08-08 11:32:10 +09:00
9be5cc6dec
Add support for model gpt-4o-2024-08-06
2024-08-07 07:28:51 +07:00
3420e6f30d
patch improvements
2024-08-03 12:44:49 +03:00
1cefd23739
Merge pull request #1073 from h0rv/patch-1
Improve response cleaning
2024-08-02 12:21:40 +03:00
039d85b836
fix cleaning
2024-08-01 15:50:00 -04:00
d671c78233
Merge remote-tracking branch 'origin/main'
2024-07-31 13:32:51 +03:00
240e0374e7
fixed extra call bug
2024-07-31 13:32:42 +03:00
172d0c0358
improve response cleaning
The prompt for the model starts with a code block (```). When testing watsonx models (Llama and Granite), they would generate the closing block in the response.
2024-07-29 10:26:58 -04:00
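A hedged sketch of that cleanup step, assuming only stray opening and closing fences need trimming from the model output.

```python
def clean_model_response(response: str) -> str:
    # The prompt already opens a ``` block, so some models (e.g. watsonx
    # Llama/Granite) echo a closing fence; strip stray fences from both ends.
    cleaned = response.strip()
    if cleaned.startswith("```"):
        # Drop the fence line itself (it may carry a language tag).
        cleaned = cleaned.split("\n", 1)[-1] if "\n" in cleaned else ""
    if cleaned.endswith("```"):
        cleaned = cleaned[:-3].rstrip()
    return cleaned
```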
f50832e19b
Update __init__.py
2024-07-29 08:32:34 +03:00
af84409c1d
Merge pull request #1067 from Codium-ai/tr/custom_model
docs: update usage guide and README; fix minor formatting issues in u…
2024-07-28 09:34:05 +03:00
e946a0ea9f
docs: update usage guide and README; fix minor formatting issues in utils.py
2024-07-28 09:30:21 +03:00
27d6560de8
Update pr_agent/algo/utils.py
Co-authored-by: codiumai-pr-agent-pro[bot] <151058649+codiumai-pr-agent-pro[bot]@users.noreply.github.com>
2024-07-28 08:58:03 +03:00
6ba7b3eea2
fix condition
2024-07-28 08:57:39 +03:00
86d9612882
docs: update usage guide for changing models; add custom model support and reorganize sections
2024-07-28 08:55:01 +03:00
49f608c968
Merge pull request #1065 from dceoy/feature/add-groq-models
Add Llama 3.1 and Mixtral 8x7B for Groq
2024-07-28 08:31:50 +03:00
495e2ccb7d
Add Llama 3.1 and Mixtral 8x7B for Groq
2024-07-28 02:28:42 +09:00
38c38ec280
Update pr_agent/algo/ai_handlers/litellm_ai_handler.py
Co-authored-by: codiumai-pr-agent-pro[bot] <151058649+codiumai-pr-agent-pro[bot]@users.noreply.github.com>
2024-07-27 18:03:35 +03:00
3904eebf85
Update pr_agent/algo/ai_handlers/litellm_ai_handler.py
Co-authored-by: codiumai-pr-agent-pro[bot] <151058649+codiumai-pr-agent-pro[bot]@users.noreply.github.com>
2024-07-27 18:02:57 +03:00
3067afbcb3
Update seed handling: log fixed seed usage; adjust default seed and temperature in config
2024-07-27 17:50:59 +03:00
7eadb45c09
Refactor seed handling logic in litellm_ai_handler to improve readability and error checking
2024-07-27 17:23:42 +03:00
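A rough sketch of seed handling with logging and error checking as described in the two commits above; the parameter names and validation rules are assumptions, not the handler's actual logic.

```python
import logging

def resolve_seed_and_temperature(user_seed: int | None, temperature: float) -> dict:
    # Illustrative: when a fixed seed is requested, validate it, log its use,
    # and pass it along with the configured temperature to the model call.
    kwargs: dict = {"temperature": temperature}
    if user_seed is not None:
        if user_seed < 0:
            raise ValueError(f"Invalid seed value: {user_seed}")
        logging.info("Using fixed seed %d for model call", user_seed)
        kwargs["seed"] = user_seed
    return kwargs
```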
ac247dbc2c
Add end-to-end tests for GitHub, GitLab, and Bitbucket apps; update temperature setting usage across tools
2024-07-27 17:19:32 +03:00