Commit graph

566 commits

Author SHA1 Message Date
mrT23
01ba6fe63d
feat: enhance error handling and logging, update AI metadata terminology
- Improved error handling and logging in `pr_processing.py` and `github_polling.py` to provide more detailed error information.
- Updated AI metadata terminology from "AI-generated file summary" to "AI-generated changes summary" across multiple files for consistency.
- Added a placeholder method `publish_file_comments` in `azuredevops_provider.py`.
- Refined logging messages in `azuredevops_provider.py` for better clarity.
2024-09-10 17:44:26 +03:00
mrT23
c8e8ed89d2
feat: integrate Dynaconf for configuration management and enhance config display 2024-09-09 08:31:20 +03:00
mrT23
ebc5cafb2b
protection 2024-09-08 17:46:21 +03:00
mrT23
86103c65e8
pattern_back 2024-09-08 17:24:13 +03:00
mrT23
8706f643ef
enable ai_metadata 2024-09-08 16:26:26 +03:00
mrT23
5432469ef6
fix: ensure non-empty lines are processed correctly in git patch handling 2024-09-01 08:39:29 +03:00
woung717
578d7c69f8
fix: change deprecated timeout parameter for litellm 2024-08-29 21:45:48 +09:00
mrT23
c2f52539aa
fix: handle deleted files in git patch processing and update section header logic 2024-08-27 09:31:31 +03:00
mrT23
441e098e2a
fix: correct YAML formatting in response text processing in utils.py 2024-08-25 11:26:48 +03:00
Tal
745e955d1f
Merge pull request #1145 from MarkRx/feature/litellm-logging-observability
Add and document ability to use LiteLLM Logging Observability tools
2024-08-22 09:58:53 +03:00
mrT23
d467f5a7fd
patch_extension_skip_types 2024-08-20 11:37:27 +03:00
mrT23
2d5b060168
patch_extension_skip_types 2024-08-20 11:33:56 +03:00
mrT23
b7eb6be5a0
Update PR code suggestions and reviewer prompts for clarity and consistency 2024-08-20 11:27:35 +03:00
mrT23
660a60924e
Add filename parameter and skip logic to extend_patch function in git_patch_processing.py 2024-08-20 11:23:37 +03:00
MarkRx
8aa76a0ac5 Add and document ability to use LiteLLM Logging Observability tools 2024-08-19 15:45:47 -04:00
mrT23
fc40ca9196
Refactor dynamic context handling in git patch processing and update configuration default 2024-08-19 08:38:26 +03:00
mrT23
e9535ea164
Add dynamic context handling in git patch processing
- Introduce `allow_dynamic_context` and `max_extra_lines_before_dynamic_context` settings.
- Adjust context limits dynamically based on section headers.
- Add logging for dynamic context adjustments and section header findings.
2024-08-18 17:45:18 +03:00
mrT23
aa87bc60f6
Rename 'add_callbacks' to 'add_litellm_callbacks' for clarity in litellm_ai_handler 2024-08-17 09:20:30 +03:00
mrT23
c76aabc71e
Add callback functionality to litellm_ai_handler for enhanced logging and metadata capture 2024-08-17 09:15:05 +03:00
Tal
b9df034c97
Merge pull request #1138 from Codium-ai/tr/err_protections
Add 'only_markdown' parameter to emphasize_header call in utils.py fo…
2024-08-14 14:03:43 +03:00
mrT23
bae8d36698
Add 'only_markdown' parameter to emphasize_header call in utils.py for security concerns section 2024-08-14 14:02:09 +03:00
Hussam.lawen
4fea780b9b
fix html escaping 2024-08-14 12:13:51 +03:00
mrT23
f4b06640d2
Add info log for successful AI prediction parse in utils.py 2024-08-14 08:14:51 +03:00
mrT23
f1981092d3
Add warning log for initial AI prediction parse failure and error log for fallback failure in utils.py 2024-08-14 08:08:55 +03:00
mrT23
86a9cfedc8
Add error handling for empty diff files in utils.py and optimize file content retrieval in Bitbucket provider 2024-08-13 22:33:07 +03:00
mrT23
f89bdcf3c3
Add error handling for missing custom label settings in utils.py 2024-08-13 16:40:05 +03:00
mrT23
e7e3970874
Add error handling for empty system prompt in litellm_ai_handler and type conversion in utils.py 2024-08-13 16:26:32 +03:00
mrT23
1aa6dd9b5d
Add error handling for missing file paths in Bitbucket provider and improve file validation logic 2024-08-13 11:28:21 +03:00
mrT23
4a38861d06
Add error handling for missing file paths in file_filter.py for Bitbucket and GitLab platforms 2024-08-13 08:59:27 +03:00
Tal
4228f92e7e
Merge pull request #1119 from Codium-ai/hl/limit_long_comments
Hl/limit long comments
2024-08-12 16:25:42 +03:00
Hussam.lawen
70da871876
lower OpenAI errors to warnings 2024-08-12 12:27:48 +03:00
mrT23
5c4bc0a008
Add Bitbucket diff handling and improve error logging
- Implement `publish_file_comments` method placeholder
- Enhance `is_supported` method to include `publish_file_comments`
- Refactor diff splitting logic to handle Bitbucket-specific headers
- Improve error handling and logging for file content retrieval
- Add `get_pr_owner_id` method to retrieve PR owner ID
- Update `_get_pr_file_content` to fetch file content from remote link
- Fix variable name typo in `extend_patch` function in `git_patch_processing.py`
2024-08-12 09:48:26 +03:00
mrT23
4c1c313031
Add missing newline in extended patch and remove trailing whitespace 2024-08-11 18:49:28 +03:00
mrT23
12742ef499
Adjust patch extension logic to handle cases where extended size exceeds original file length 2024-08-11 15:48:58 +03:00
mrT23
63e921a2c5
Adjust patch extension logic to handle cases where extended size exceeds original file length 2024-08-11 15:46:46 +03:00
mrT23
a06670bc27
Fix incorrect logic for extending patch size beyond original file length 2024-08-11 15:20:27 +03:00
mrT23
e85b75fe64
Refactor patch extension logic to handle cases with zero extra lines 2024-08-11 12:56:56 +03:00
mrT23
df04a7e046
Add spaces to extra lines in patch extension for consistency 2024-08-11 12:32:26 +03:00
mrT23
9c3f080112
comments 2024-08-11 12:15:47 +03:00
mrT23
ed65493718
Handle edge cases for patch extension and update tests 2024-08-11 12:08:00 +03:00
mrT23
e238a88824
Add tests for patch extension and update configuration for extra lines handling
- Added unit tests in `test_extend_patch.py` and `test_pr_generate_extended_diff.py` to verify patch extension functionality with extra lines.
- Updated `pr_processing.py` to include `patch_extra_lines_before` and `patch_extra_lines_after` settings.
- Modified `configuration.toml` to adjust `patch_extra_lines_before` to 4 and `max_context_tokens` to 16000.
- Enabled extra lines in `pr_code_suggestions.py`.
- Added new model `claude-3-5-sonnet` to `__init__.py`.
2024-08-11 09:21:34 +03:00
mrT23
61bdfd3b99
patch_extra_lines_before and patch_extra_lines_after 2024-08-10 21:55:51 +03:00
mrT23
84b80f792d
protections 2024-08-09 21:44:00 +03:00
Tal
b370cb6ae7
Merge pull request #1102 from MarkRx/feature/langchain-azure-fix
Fix LangChainOpenAIHandler for Azure
2024-08-08 19:37:26 +03:00
MarkRx
4201779ce2 Fix LangChainOpenAIHandler for Azure 2024-08-08 09:55:18 -04:00
Benedict Lee
4c0fd37ac2
Fix pr_processing.get_pr_multi_diffs
Fix function to return an empty list instead of a single joined string when patches_extended is empty.
2024-08-08 11:46:26 +09:00
Benedict Lee
c996c7117f
Fix function to return an empty list instead of a single joined string when patches_extended is empty. 2024-08-08 11:32:10 +09:00
KennyDizi
9be5cc6dec Add support model gpt-4o-2024-08-06 2024-08-07 07:28:51 +07:00
mrT23
3420e6f30d
patch improvements 2024-08-03 12:44:49 +03:00
Tal
1cefd23739
Merge pull request #1073 from h0rv/patch-1
Improve response cleaning
2024-08-02 12:21:40 +03:00
Robby
039d85b836 fix cleaning 2024-08-01 15:50:00 -04:00
mrT23
d671c78233
Merge remote-tracking branch 'origin/main' 2024-07-31 13:32:51 +03:00
mrT23
240e0374e7
fixed extra call bug 2024-07-31 13:32:42 +03:00
Robby
172d0c0358
improve response cleaning
The prompt for the model starts with a code block (```). When testing watsonx models (llama and granite), they would generate the closing block in the response.
2024-07-29 10:26:58 -04:00
Tal
f50832e19b
Update __init__.py 2024-07-29 08:32:34 +03:00
Tal
af84409c1d
Merge pull request #1067 from Codium-ai/tr/custom_model
docs: update usage guide and README; fix minor formatting issues in u…
2024-07-28 09:34:05 +03:00
mrT23
e946a0ea9f
docs: update usage guide and README; fix minor formatting issues in utils.py 2024-07-28 09:30:21 +03:00
Tal
27d6560de8
Update pr_agent/algo/utils.py
Co-authored-by: codiumai-pr-agent-pro[bot] <151058649+codiumai-pr-agent-pro[bot]@users.noreply.github.com>
2024-07-28 08:58:03 +03:00
mrT23
6ba7b3eea2
fix condition 2024-07-28 08:57:39 +03:00
mrT23
86d9612882
docs: update usage guide for changing models; add custom model support and reorganize sections 2024-07-28 08:55:01 +03:00
Tal
49f608c968
Merge pull request #1065 from dceoy/feature/add-groq-models
Add Llama 3.1 and Mixtral 8x7B for Groq
2024-07-28 08:31:50 +03:00
dceoy
495e2ccb7d Add Llama 3.1 and Mixtral 8x7B for Groq 2024-07-28 02:28:42 +09:00
Tal
38c38ec280
Update pr_agent/algo/ai_handlers/litellm_ai_handler.py
Co-authored-by: codiumai-pr-agent-pro[bot] <151058649+codiumai-pr-agent-pro[bot]@users.noreply.github.com>
2024-07-27 18:03:35 +03:00
Tal
3904eebf85
Update pr_agent/algo/ai_handlers/litellm_ai_handler.py
Co-authored-by: codiumai-pr-agent-pro[bot] <151058649+codiumai-pr-agent-pro[bot]@users.noreply.github.com>
2024-07-27 18:02:57 +03:00
mrT23
3067afbcb3
Update seed handling: log fixed seed usage; adjust default seed and temperature in config 2024-07-27 17:50:59 +03:00
mrT23
7eadb45c09
Refactor seed handling logic in litellm_ai_handler to improve readability and error checking 2024-07-27 17:23:42 +03:00
mrT23
ac247dbc2c
Add end-to-end tests for GitHub, GitLab, and Bitbucket apps; update temperature setting usage across tools 2024-07-27 17:19:32 +03:00
KennyDizi
581c95c4ab Add support model gpt-4o-mini-2024-07-18 2024-07-23 07:43:47 +07:00
KennyDizi
789c48a216 Add support model gpt-4o-mini 2024-07-23 07:41:04 +07:00
KennyDizi
8a7f3501ea Fix duplication code 2024-07-16 18:27:58 +07:00
mrT23
6151bfac25
key.lower 2024-07-14 09:00:10 +03:00
mrT23
5d6e1de157
review with links 2024-07-14 08:53:53 +03:00
mrT23
034ec8f53a
provider 2024-07-11 18:37:37 +03:00
R-Mathis
19ca7f887a add additional google models support 2024-07-09 14:29:50 +02:00
mrT23
745d0c537c
hotfix 2024-07-07 15:07:09 +03:00
Tal
20d9d8ad07
Update pr_agent/algo/ai_handlers/litellm_ai_handler.py
Co-authored-by: codiumai-pr-agent-pro[bot] <151058649+codiumai-pr-agent-pro[bot]@users.noreply.github.com>
2024-07-04 12:26:23 +03:00
mrT23
f3c80891f8
sonnet-3.5 2024-07-04 12:23:36 +03:00
Tal
12973c2c99
Merge pull request #1021 from Codium-ai/tr/margins
increase margins
2024-07-04 12:13:22 +03:00
mrT23
422b4082b5
No key issues to review 2024-07-03 20:58:25 +03:00
mrT23
2235a19345
increase margins 2024-07-03 20:53:15 +03:00
mrT23
e30c70d2ca
keys fallback 2024-07-03 20:29:17 +03:00
mrT23
0bac03496a
keys fallback 2024-07-03 17:06:27 +03:00
mrT23
bea68084b3
ValueError 2024-07-03 08:51:08 +03:00
mrT23
57abf4ac62
tests 2024-07-03 08:47:59 +03:00
mrT23
3e265682a7
extend additional files 2024-07-03 08:32:37 +03:00
mrT23
d7c0f87ea5
table 2024-07-03 08:19:58 +03:00
mrT23
92d040c80f
Merge remote-tracking branch 'origin/main' into tr/review_redesign 2024-07-03 07:54:26 +03:00
mrT23
8d87b41cf2
extend additional files 2024-06-30 20:28:32 +03:00
mrT23
f058c09a68
extend additional files 2024-06-30 20:20:50 +03:00
mrT23
f2cb70ea67
extend additional files 2024-06-30 18:38:06 +03:00
mrT23
3373fb404a
review_v2 2024-06-29 21:57:20 +03:00
mrT23
df02cc1437
Merge remote-tracking branch 'origin/main' into tr/review_redesign
# Conflicts:
#	pr_agent/tools/pr_reviewer.py
2024-06-29 21:55:49 +03:00
Tal
6a5f43f8ce
Merge pull request #1005 from KennyDizi/main
Centralize PR Review Title Definition
2024-06-29 21:53:20 +03:00
mrT23
0dc7bdabd2
review_v2 2024-06-29 21:22:25 +03:00
mrT23
defe200817
review_v2 2024-06-29 13:08:34 +03:00
mrT23
bf5673912d
APITimeoutError 2024-06-29 11:30:15 +03:00
mrT23
556dc68add
s 2024-06-27 08:32:14 +03:00
KennyDizi
382da3a5b6 Use descriptive name for the ReviewHeaderTitle enum to reflect its specific purpose related to PR headers 2024-06-27 07:17:26 +07:00
KennyDizi
692904bb71 Use ReviewHeaderTitle in lieu of PrReviewTitle 2024-06-27 07:11:57 +07:00
KennyDizi
ba963149ac Fix extract PrReviewTitle member value 2024-06-27 07:10:57 +07:00
KennyDizi
7348d4144b Rename PrReviewTitle enum 2024-06-27 07:05:03 +07:00
KennyDizi
c185b7c610 Apply PrReviewTitles enum for algo utils file 2024-06-27 07:03:08 +07:00
KennyDizi
3d60954167 Add PrReviewTitles enum 2024-06-27 06:59:49 +07:00
mrT23
0f920bcc5b
s 2024-06-26 20:11:20 +03:00
mrT23
55a82382ef
Merge remote-tracking branch 'origin/main' 2024-06-26 16:20:16 +03:00
mrT23
6c2a14d557
fix: correct indentation in PR description preparation logic 2024-06-26 16:20:05 +03:00
Mitsuki Ogasahara
b814e4a26d feat: Support Anthropic Claude 3.5 Sonnet on Vertex AI 2024-06-25 17:32:17 +09:00
R-Mathis
69f6997739 remove extra space 2024-06-24 14:01:33 +02:00
R-Mathis
8cc436cbd6 add gemini support for pr agent 2024-06-24 13:48:56 +02:00
R-Mathis
384dfc2292 add text bison support for pr agent 2024-06-24 13:28:37 +02:00
R-Mathis
40737c3932 add gemini support for pr agent 2024-06-24 12:08:16 +02:00
R-Mathis
c46434ac5e add gemini support for pr agent 2024-06-24 12:03:34 +02:00
R-Mathis
255c2d8e94 add gemini support for pr agent 2024-06-24 11:35:41 +02:00
mrT23
2990aac955
docs: update custom labels configuration and usage instructions in describe tool 2024-06-23 16:53:45 +03:00
Diogo Simoes
a3d4d6d86f feat: claude 3.5 sonnet support 2024-06-21 09:30:52 +01:00
BrianTeeman
5268a84bcc
repetition_penalty
Correct the spelling of this variable.

Fixing spelling errors now will prevent issues going forward where people would have to misspell something on purpose
2024-06-16 17:28:30 +01:00
mrT23
4db428456d
Refactor filter_bad_extensions and is_valid_file functions to improve code readability and reusability 2024-06-15 20:10:46 +03:00
Tal
774bba4ed2
Merge pull request #964 from evalphobia/feature/vertexai-calude3
Support models: Anthropic Claude 3 on Vertex AI
2024-06-15 19:49:49 +03:00
mrT23
e083841d96
Add file ignore functionality and update documentation for ignore patterns 2024-06-13 13:18:15 +03:00
mrT23
bedcc2433c
Add file ignore functionality and update documentation for ignore patterns 2024-06-13 13:00:39 +03:00
mrT23
8ff85a9daf
Fix markdown formatting in utils.py by removing extra newlines 2024-06-13 12:45:57 +03:00
mrT23
58bc54b193
Add file ignore functionality and update documentation for ignore patterns 2024-06-13 12:27:10 +03:00
mrT23
aa56c0097d
Add file ignore functionality and update documentation for ignore patterns 2024-06-13 12:20:21 +03:00
mrT23
20f6af803c
Add file ignore functionality and update documentation for ignore patterns 2024-06-13 12:09:52 +03:00
mrT23
2076454798
Add file ignore functionality and update documentation for ignore patterns 2024-06-13 12:01:50 +03:00
mrT23
55b52ad6b2
Add exception handling to process_can_be_split function and update pr_reviewer_prompts.toml formatting 2024-06-13 09:28:51 +03:00
evalphobia
b0f9b96c75 Support models: Anthropic Claude 3 on Vertex AI 2024-06-13 14:34:14 +09:00
Tal
aac7aeabd1
Update PR review prompts and terminology for clarity and consistency (#954)
* Update PR review prompts and terminology for clarity and consistency
2024-06-10 08:44:11 +03:00
mrT23
9c8bc6c86a
Update PR review prompts and terminology for clarity and consistency 2024-06-09 14:29:32 +03:00
ryan
b28f66aaa0 1. update LangChainOpenAIHandler to support langchain version 0.2
2. read openai_api_base from settings for LLMs that are compatible with OpenAI
2024-06-06 22:27:01 +08:00
Kamakura
c53c6aee7f fix wrong provider name 2024-06-04 15:09:30 +08:00
Kamakura
d3a7041f0d update algo/__init__.py 2024-06-04 00:00:22 +08:00
Kamakura
b4f0ad948f Update Python code formatting, configuration loading, and local model additions
1. Code Formatting:
   - Standardized Python code formatting across multiple files to align with PEP 8 guidelines. This includes adjustments to whitespace, line breaks, and inline comments.

2. Configuration Loader Enhancements:
   - Enhanced the `get_settings` function in `config_loader.py` to provide more robust handling of settings retrieval. Added detailed documentation to improve code maintainability and clarity.

3. Model Addition in __init__.py:
   - Added a new model "ollama/llama3" with a token limit to the MAX_TOKENS dictionary in `__init__.py` to support new AI capabilities and configurations.
2024-06-03 23:58:31 +08:00
mrT23
911c1268fc
Add large_patch_policy configuration and implement patch clipping logic 2024-05-29 13:52:44 +03:00
mrT23
17f46bb53b
Add large_patch_policy configuration and implement patch clipping logic 2024-05-29 13:42:44 +03:00
mrT23
da44bd7d5e
extended_patch 2024-05-22 21:50:00 +03:00
mrT23
4cd9626217
grammar 2024-05-22 21:47:49 +03:00
mrT23
9b56c83c1d
APP_NAME 2024-05-19 12:18:22 +03:00
mrT23
dcd188193b
Update configuration_options.md to include tip on showing relevant configurations 2024-05-19 08:20:15 +03:00
mrT23
ea4ee1adbc
Add show_relevant_configurations function and integrate it across tools to output relevant configurations if enabled 2024-05-18 13:09:50 +03:00
Tal
be701aa868
Merge pull request #902 from Codium-ai/tr/self_reflect
Refactor Azure DevOps provider to use PR iterations for change detect…
2024-05-15 09:22:14 +03:00
mrT23
e56320540b
Refactor Azure DevOps provider to use PR iterations for change detection, improving accuracy of diff file identification 2024-05-15 09:05:01 +03:00
KennyDizi
36ad8935ad Add gpt-4o models 2024-05-14 08:24:34 +07:00
mrT23
34ad5f2aa2
toolbar emojis in pr-agent feedbacks 2024-05-05 13:33:54 +03:00
tacascer
2e34436589 chore: update GPT3.5 models 2024-04-22 20:25:32 -04:00
Tal
fae6cab2a7
Merge pull request #877 from randy-tsukemen/support-groq-llama3
Add Groq Llama3 support
2024-04-22 11:41:12 +03:00
Randy, Huang
0a53f09a7f Add GROQ.KEY support in LiteLLMAIHandler 2024-04-21 15:21:45 +09:00
Randy, Huang
7a9e73702d Fix duplicate assignment of replicate_key in LiteLLMAIHandler 2024-04-21 14:47:25 +09:00
mrT23
92ef2b4464
ask 2024-04-14 14:09:58 +03:00
mrT23
4683a29819
s 2024-04-14 12:34:14 +03:00
mrT23
8f0f08006f
s 2024-04-14 12:00:19 +03:00
mrT23
a4680ded93
protections 2024-04-12 20:32:47 +03:00
idubnori
9e4ffd824c
Merge branch 'main' into feature/gha-outputs-1 2024-04-10 23:27:44 +09:00
idubnori
ae633b3cea refine: github_action_output 2024-04-10 22:30:16 +09:00
idubnori
97dcb34d77 clean: rename to github_action_output 2024-04-10 22:16:09 +09:00
Yuta Nishi
108b1afa0e
add new models 2024-04-10 14:44:38 +09:00
idubnori
75c4befadf feat: set review data to github actions output 2024-04-10 01:02:05 +09:00
mrT23
84d8f78d0c
publish_output 2024-04-08 14:00:41 +03:00
mrT23
2be0e9108e
readme 2024-04-07 17:00:40 +03:00
mrT23
9c3673209d
TokenEncoder 2024-04-03 08:42:50 +03:00
gregoryboue
501b059575
feat: allows ollama usage
Fix https://github.com/Codium-ai/pr-agent/issues/657
2024-04-02 11:01:45 +02:00
Yuta Nishi
d064a352ad
feat(pr_agent): add support for gpt-4-turbo-preview model and update default settings 2024-03-26 14:47:05 +09:00
Tal
dd83b196b4
Merge pull request #781 from koid/feature/support-bedrock-claude3
added support for bedrock/claude3
2024-03-18 08:03:29 +02:00
mrT23
498b4cb34e
readme 2024-03-17 09:59:55 +02:00
mrT23
6d39773a17
prompt 2024-03-17 09:44:50 +02:00
mrT23
99a676d792
Merge remote-tracking branch 'origin/main' into tr/split 2024-03-17 09:00:04 +02:00
mrT23
669e076938
Enhance AI handler logging and add main PR language attribute to AI handler in various tools 2024-03-16 13:52:02 +02:00
mrT23
74345284bd
Enhance AI handler logging and add main PR language attribute to AI handler in various tools 2024-03-16 13:47:44 +02:00
koid
3bae515e39 add claude-3-haiku 2024-03-14 16:58:44 +09:00
koid
f94a0fd704 add Claude3Config 2024-03-13 11:24:51 +09:00
koid
1ed2cd064a add config litellm.drop_params 2024-03-13 11:20:02 +09:00
koid
d62796ac68 update max_tokens 2024-03-13 11:14:04 +09:00
mrT23
8324e9a38d
can_be_split 2024-03-10 16:56:32 +02:00
Hussam.lawen
ff2346eacc
update markdown 2024-03-10 14:18:29 +02:00
mrT23
0690f2bbfd
Refactor litellm_ai_handler.py and update requirements.txt
- Replace retry library with tenacity for better exception handling
- Add verbosity level checks for logging prompts and AI responses
- Add support for HuggingFace API base and repetition penalty in chat completion
- Update requirements.txt with tenacity library
2024-03-06 12:13:54 +02:00
mrT23
26fb2a4164
Add support for Anthropic Claude-3 model in configuration and update Usage.md 2024-03-06 08:20:08 +02:00
mrT23
1c856a7d41
upgrade litellm 2024-03-06 08:06:59 +02:00
mrT23
da39149c61
Add unique_strings function and remove duplicate issues in utils.py; Update pr_reviewer_prompts.toml template 2024-03-04 11:07:39 +02:00
mrT23
b3fd05c465
try-except 2024-03-03 13:58:10 +02:00
mrT23
e589dcb489
Enhance markdown formatting and update prompt descriptions in pr_reviewer_prompts.toml 2024-03-01 13:02:50 +02:00
mrT23
85cdf05ca8
review formatting 2024-02-26 09:36:16 +02:00
mrT23
7c9a389abf
review formatting 2024-02-26 09:27:13 +02:00
mrT23
18472492bc
s 2024-02-26 09:14:12 +02:00
mrT23
dad3d3429f
artifact 2024-02-25 10:45:15 +02:00
mrT23
984a2888ae
Refactor logging statements for better readability and debugging 2024-02-25 10:04:04 +02:00
mrT23
8252b98bf5
Refactor logging statements for better readability and debugging 2024-02-25 10:01:53 +02:00
mrT23
34e421f79b
Refactor logging statements for better readability and debugging 2024-02-25 09:58:58 +02:00
mrT23
877796b539
Refactor logging statements for better readability and debugging 2024-02-25 09:46:07 +02:00
mrT23
0f815876e5
bitbucket code suggestions 2024-02-19 19:46:57 +02:00
mrT23
d47a840179
bitbucket code suggestions 2024-02-19 19:43:31 +02:00
Tal
7b15101051
Merge pull request #661 from Codium-ai/hl/ask_line
Hl/ask line
2024-02-17 22:08:55 -08:00
Tal
2b12042a85
Merge pull request #667 from Codium-ai/tr_ado
azure webhook
2024-02-17 22:01:57 -08:00
mrT23
c6cb0524b4
rstrip 2024-02-18 07:56:14 +02:00
Hussam.lawen
3eef0a4ebd
fix line selection, don't support line deletions 2024-02-15 22:21:58 +02:00
Hussam.lawen
fff52e9e26
Add ask line feature 2024-02-15 14:25:22 +02:00
mrT23
54a989d30f
no html bitbucket 2024-02-13 18:37:48 +02:00
mrT23
480a890741
no html bitbucket 2024-02-13 18:33:22 +02:00
Yochai Lehman
9a54be5414 add webhook support 2024-02-11 16:52:49 -05:00
mrT23
01fbebfc5e
relevant tests 2024-02-09 12:50:51 +02:00
mrT23
6837e43114
help 2024-02-09 11:30:28 +02:00