Commit graph

651 commits

Author SHA1 Message Date
Tal
43dbe24a7f
Merge pull request #1817 from PeterDaveHelloKitchen/Grok-3
Add Grok-3 non-beta model IDs
2025-05-24 16:32:50 +03:00
Tal
f4a9bc3de7
Merge pull request #1814 from hirobf10/support-claude-4
feat: add support for Claude 4 family
2025-05-24 16:29:48 +03:00
dst03106
b83085ea00 fix: remove whitespace from relevant file 2025-05-24 22:01:29 +09:00
Peter Dave Hello
95c94b80a2 Add Grok-3 non-beta model IDs 2025-05-24 14:22:55 +08:00
TaskerJang
e2586cb64a docs: improve clip_tokens function docstring and add examples 2025-05-24 10:46:58 +09:00
Hiroyuki Otomo
1f836e405d fix: reflect comments 2025-05-24 09:45:27 +09:00
joosomi
66131854c1 fix: avoid incorrect ToDo header 2025-05-24 03:04:59 +09:00
joosomi
788c0c12e6 feat: add TODO comments to PR review output 2025-05-24 01:52:52 +09:00
Hiroyuki Otomo
10703a9098 feat: add support for Claude 4 2025-05-23 14:16:44 +09:00
Pinyoo Thotaboot
930cd69909 Fixed conflicts 2025-05-22 14:54:26 +07:00
Kangmoon Seo
466ec4ce90 fix: exclude RateLimitError from retry logic 2025-05-22 15:04:16 +09:00
kkan9ma
facfb5f46b Add missing code: use_context=False 2025-05-22 13:32:20 +09:00
kkan9ma
cc686ef26d Reorder model check: OpenAI before Anthropic
OpenAI is the default in most cases, so checking it first skips unnecessary Anthropic logic.
2025-05-22 13:12:04 +09:00
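The ordering rationale in cc686ef26d can be pictured with a minimal sketch; the prefix lists below are illustrative stand-ins for the project's actual model detection, not its real code:

```python
# Illustrative sketch only: check the (more common) OpenAI prefixes before
# the Anthropic ones, so the default path exits early. Prefix lists are
# examples, not the project's actual model registry.
def detect_model_type(model: str) -> str:
    openai_prefixes = ("gpt-", "o1", "o3", "o4")       # default in most cases, checked first
    anthropic_prefixes = ("claude-", "anthropic/")     # only reached for non-OpenAI models
    if model.startswith(openai_prefixes):
        return "openai"
    if model.startswith(anthropic_prefixes):
        return "anthropic"
    return "other"

print(detect_model_type("gpt-4.1"))                     # -> openai
print(detect_model_type("claude-3-7-sonnet-20250219"))  # -> anthropic
```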
kkan9ma
ead7491ca9 Apply convention for marking private 2025-05-21 18:08:48 +09:00
kkan9ma
df0355d827 Remove member variable for restoring get_settings() 2025-05-21 18:07:47 +09:00
kkan9ma
c3ea048b71 Restore original return logic for force_accurate condition 2025-05-21 17:52:51 +09:00
kkan9ma
648829b770 Rename method 2025-05-21 17:51:03 +09:00
Kangmoon Seo
6405284461 fix: reorder exception handling to enable proper retry behavior 2025-05-20 18:22:33 +09:00
kkan9ma
97f2b6f736 Fix TypeError 2025-05-20 15:29:27 +09:00
kkan9ma
f198e6fa09 Add constants and improve token calculation logic 2025-05-20 14:12:24 +09:00
kkan9ma
e72bb28c4e Replace get_settings() with self.settings 2025-05-20 13:50:30 +09:00
kkan9ma
81fa22e4df Add model name validation 2025-05-20 13:47:15 +09:00
mrT23
db5138dc42
Improve YAML parsing with additional fallback strategies for AI predictions 2025-05-17 20:38:05 +03:00
Tal
c15fb16528
Merge pull request #1779 from dnnspaul/main
Enable usage of OpenAI like APIs
2025-05-16 16:59:18 +03:00
mrT23
9974015682
Add Gemini-2.5-pro-preview-05-06 model and update litellm dependency 2025-05-16 16:32:45 +03:00
Pinyoo Thotaboot
2d7636543c Implement provider 2025-05-16 16:31:49 +07:00
kkan9ma
05ab5f699f Improve token calculation logic based on model type
- Rename calc_tokens to get_token_count_by_model_type for clearer intent
- Separate model type detection logic to improve maintainability
2025-05-16 17:51:22 +09:00
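A hedged sketch of the split described in 05ab5f699f, assuming a separate model-type detection step and the o200k_base fallback tokenizer mentioned in commit 08bf9593b2; the prefixes and the Anthropic estimate below are illustrative only:

```python
# Hedged sketch: detect the model type first, then pick the counting strategy.
import tiktoken

def _detect_model_type(model: str) -> str:
    # Illustrative prefixes, not the project's actual detection logic.
    if model.startswith(("claude-", "anthropic/")):
        return "anthropic"
    return "openai_compatible"

def get_token_count_by_model_type(text: str, model: str) -> int:
    if _detect_model_type(model) == "anthropic":
        # Placeholder estimate; a real path might call the provider's token API.
        return max(1, len(text) // 3)
    encoder = tiktoken.get_encoding("o200k_base")  # fallback tokenizer (see commit 08bf9593b2)
    return len(encoder.encode(text))
```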
Dennis Paul
250870a3da enable usage of openai like apis 2025-05-15 16:05:05 +02:00
irfan.putra
7a6a28d2b9 feat: add openrouter support in litellm 2025-05-07 11:54:07 +07:00
mrT23
f505c7ad3c
Add multi-model support for different reasoning tasks 2025-04-27 11:00:34 +03:00
mrT23
c951fc9a87
Improve dynamic context handling with partial line matching and adjust model configuration 2025-04-27 10:46:23 +03:00
mrT23
3f194e6730
Improve dynamic context handling in git patch processing 2025-04-27 10:07:56 +03:00
mrT23
f53bd524c5
Support multiple model types for different reasoning tasks 2025-04-27 08:50:03 +03:00
mrT23
4ac0aa56e5
Update model references from o3-mini to o4-mini and add Gemini models 2025-04-19 09:26:35 +03:00
dst03106
869a179506 feat: add support for Mistral and Codestral models 2025-04-18 14:04:59 +09:00
Peter Dave Hello
4e3e963ce5 Add OpenAI o3 & o4-mini reasoning models
Reference:
- https://platform.openai.com/docs/models/o3
- https://platform.openai.com/docs/models/o4-mini
- https://openai.com/index/introducing-o3-and-o4-mini/
2025-04-17 02:32:14 +08:00
arpit-at
27a7c1a94f doc update and minor fix 2025-04-16 13:32:53 +05:30
arpit-at
dc46acb762 doc update and minor fix 2025-04-16 13:27:52 +05:30
arpit-at
0da667d179 support Azure AD authentication for OpenAI services in the litellm implementation 2025-04-16 11:19:04 +05:30
mrT23
08bf9593b2
Fix tokenizer fallback to use o200k_base instead of cl100k_base 2025-04-14 21:15:19 +03:00
Peter Dave Hello
57808075be Add support of OpenAI GPT-4.1 model family
Reference:
- https://openai.com/index/gpt-4-1/
- https://platform.openai.com/docs/models/gpt-4.1
2025-04-15 01:57:46 +08:00
Tal
60ace1ed09
Merge pull request #1685 from imperorrp/add_gemini2.5preview
Add support of Gemini 2.5 Pro preview model
2025-04-11 09:54:09 +03:00
Tal
7f6014e064
Merge pull request #1684 from PeterDaveHelloKitchen/Support-xAI-Grok
Add support of xAI and their Grok-2 & Grok-3 models
2025-04-11 09:53:08 +03:00
Peter Dave Hello
0ac7028bc6 Support xAI Grok-3 series models
Reference:
- https://docs.x.ai/docs/release-notes#april-2025
2025-04-11 00:40:00 +08:00
Ratish Panda
eb9c4fa110 add gemini 2.5 pro preview model token limit 2025-04-08 20:41:59 +05:30
Peter Dave Hello
83bb3b25d8 Add support of Meta's Llama 4 Scout and Maverick 17b from Groq Cloud
Reference:
- https://ai.meta.com/blog/llama-4-multimodal-intelligence/
- https://console.groq.com/docs/models#preview-models
- https://groq.com/llama-4-now-live-on-groq-build-fast-at-the-lowest-cost-without-compromise/
2025-04-08 01:47:15 +08:00
Peter Dave Hello
665fb90a98 Add support of xAI and their Grok-2 model
Close #1630
2025-04-08 01:36:21 +08:00
Peter Dave Hello
9b19fcdc90 Add support of OpenAI GPT-4.5 Preview model
Reference:
- https://openai.com/index/introducing-gpt-4-5/
- https://platform.openai.com/docs/models/gpt-4.5-preview
2025-04-04 05:13:15 +08:00
sharoneyal
14971c4f5f
Add support for documentation content exceeding token limits (#1670)
* - Add support for documentation content exceeding token limits via a two-phase operation:
1. Ask LLM to rank headings which are most likely to contain an answer to a user question
2. Provide the corresponding files for the LLM to search for an answer.

- Refactor of help_docs to make the code more readable
- For the purpose of getting the canonical path, git providers use the default branch and not the PR's source branch.
- Refactor of token counting and making it clear on when an estimate factor will be used.

* Code review changes:
1. Correctly handle exception during retry_with_fallback_models (to allow fallback model to run in case of failure)
2. Better naming for default_branch in bitbucket cloud provider
2025-04-03 11:51:26 +03:00
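The two-phase flow described in #1670 can be sketched roughly as follows; `ask_llm`, `headings_to_files`, and the data shapes are hypothetical placeholders, not the project's interfaces:

```python
# Sketch of the two-phase flow: first ask the model to rank documentation
# headings, then answer using only the files behind the top-ranked headings.
def answer_from_large_docs(question, headings_to_files, ask_llm, top_k=3):
    # Phase 1: rank headings most likely to contain the answer.
    ranking_prompt = ("Rank these headings by relevance to: " + question + "\n"
                      + "\n".join(headings_to_files))
    ranked = ask_llm(ranking_prompt).splitlines()[:top_k]

    # Phase 2: send only the corresponding files, keeping within token limits.
    selected = "\n\n".join(headings_to_files[h] for h in ranked if h in headings_to_files)
    return ask_llm(f"Answer using only this content:\n{selected}\n\nQuestion: {question}")
```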
Eyal Sharon
8495e4d549 More comprehensive handling in count_tokens(force_accurate==True): if the model is neither OpenAI nor Anthropic Claude, simply apply an elbow-room factor to force a more conservative estimate. 2025-03-24 15:47:35 +02:00
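A minimal illustration of the elbow-room idea in 8495e4d549; the 1.5 factor and the chars-per-token heuristic are assumptions, not the repository's values:

```python
# Hedged sketch of the fallback branch: for models that are neither OpenAI
# nor Anthropic Claude, pad a rough estimate so the count errs conservatively.
def count_tokens_fallback(text: str, elbow_room: float = 1.5) -> int:
    rough = max(1, len(text) // 4)   # crude chars-per-token heuristic (assumption)
    return int(rough * elbow_room)   # overestimate on purpose to stay under limits
```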
Eyal Sharon
dd80276f3f Support cloning repo
Support forcing accurate token calculation (claude)
Help docs: Add desired branch in case of user supplied git repo, with default set to "main"
Better documentation for getting canonical url parts
2025-03-23 09:55:58 +02:00
mrT23
6610921bba
cleanup 2025-03-20 21:49:19 +02:00
mrT23
f5e381e1b2
Add fallback for YAML parsing using original response text 2025-03-11 17:11:10 +02:00
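A rough sketch of the fallback approach in f5e381e1b2 (and the later db5138dc42): try a strict parse, then a cleaned-up variant, then return the original response text; the cleanup step shown is illustrative, not the project's exact strategies:

```python
import yaml

def parse_ai_yaml(response_text: str):
    try:
        return yaml.safe_load(response_text)
    except yaml.YAMLError:
        pass
    # Fallback 1: strip a code-fence wrapper the model may have added (illustrative).
    fence = "`" * 3
    cleaned = response_text.strip()
    if cleaned.startswith(fence):
        cleaned = cleaned.strip("`").removeprefix("yaml").strip()
    try:
        return yaml.safe_load(cleaned)
    except yaml.YAMLError:
        # Fallback 2: give up on YAML and return the original response text.
        return response_text
```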
mrT23
9a574e0caa
Add filter for files with bad extensions in language handler 2025-03-11 17:03:05 +02:00
mrT23
0f33750035
Remove unused filter_bad_extensions function and rename diff_files_original to diff_files 2025-03-11 16:56:41 +02:00
mrT23
d16012a568
Add decoupled and non-decoupled modes for code suggestions 2025-03-11 16:46:53 +02:00
mrT23
f5bd98a3b9
Add check for auto-generated files in language handler 2025-03-11 14:37:45 +02:00
Kenny Dizi
ffefcb8a04 Fix default value for extended_thinking_max_output_tokens 2025-03-11 17:48:12 +07:00
mrT23
35bb2b31e3
feat: add enable_comment_approval to encoded forbidden args 2025-03-10 12:10:19 +02:00
Tal
20d709075c
Merge pull request #1613 from qodo-ai/hl/update_auto_approve_docs
docs: update auto-approval documentation with clearer configuration
2025-03-10 11:56:48 +02:00
Tal
52c99e3f7b
Merge pull request #1605 from KennyDizi/main
Support extended thinking for model `claude-3-7-sonnet-20250219`
2025-03-09 17:03:37 +02:00
Hussam.lawen
884b49dd84
Add encoded: enable_manual_approval 2025-03-09 17:01:04 +02:00
Kenny Dizi
222155e4f2 Optimize logging 2025-03-08 08:53:29 +07:00
Kenny Dizi
f9d5e72058 Move logic to _configure_claude_extended_thinking 2025-03-08 08:35:34 +07:00
Tal
2619ff3eb3
Merge pull request #1612 from congziqi77/main
fix: repeat processing files to ignore
2025-03-07 21:08:46 +02:00
Kenny Dizi
a8935dece3 Using 2048 for extended_thinking_budget_tokens as well as extended_thinking_max_output_tokens 2025-03-07 17:27:56 +07:00
congziqi
fd12191fcf fix: repeat processing files to ignore 2025-03-07 09:11:43 +08:00
muhammad-asn
4f2551e0a6 feat: add DeepInfra support 2025-03-06 15:49:07 +07:00
Kenny Dizi
30bf7572b0 Validate extended thinking parameters 2025-03-03 18:44:26 +07:00
Kenny Dizi
440d2368a4 Set temperature to 1 when using extended thinking 2025-03-03 18:30:52 +07:00
Kenny Dizi
215c10cc8c Add thinking block to request parameters 2025-03-03 18:29:33 +07:00
Kenny Dizi
7623e1a419 Removed trailing spaces 2025-03-03 18:23:45 +07:00
Kenny Dizi
5e30e190b8 Define models that support extended thinking feature 2025-03-03 18:22:31 +07:00
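Taken together, the extended-thinking commits above (2048 as the budget and output default, temperature forced to 1, a thinking block on the request, plus parameter validation) might look roughly like this sketch; the function and validation details are assumptions, and only the Anthropic-style `thinking` payload shape is standard:

```python
# Hedged sketch combining the extended-thinking commits; not the repo's code.
def configure_extended_thinking(kwargs: dict,
                                budget_tokens: int = 2048,
                                max_output_tokens: int = 2048) -> dict:
    if budget_tokens > max_output_tokens:
        raise ValueError("extended_thinking_budget_tokens must not exceed the max output tokens")
    kwargs["temperature"] = 1  # extended thinking requires temperature 1
    kwargs["thinking"] = {"type": "enabled", "budget_tokens": budget_tokens}
    kwargs["max_tokens"] = max_output_tokens
    return kwargs
```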
atsushi-ishibashi
8e6267b0e6 chore: bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0 2025-03-02 08:44:23 +09:00
mrT23
3817aa2868
fix: remove redundant temperature logging in litellm handler 2025-02-27 10:55:01 +02:00
Hussam Lawen
c7f4b87d6f
Merge pull request #1583 from qodo-ai/hl/enhance_azure_devops
feat: enhance Azure DevOps integration with improved error handling a…
2025-02-26 17:17:31 +02:00
Hussam.lawen
52a68bcd44
fix: adjust newline formatting in issue details summary 2025-02-26 16:49:44 +02:00
Tal
d6f405dd0d
Merge pull request #1564 from chandan84/fix/support_litellm_extra_headers
Fix/support litellm extra headers
2025-02-26 10:15:22 +02:00
Tal
25ba9414fe
Merge pull request #1561 from KennyDizi/main
Support reasoning effort via configuration
2025-02-26 10:13:05 +02:00
chandan84
93e34703ab
Update litellm_ai_handler.py
updates made based on review on https://github.com/qodo-ai/pr-agent/pull/1564
2025-02-25 14:44:03 -05:00
Hiroyuki Otomo
1dc3db7322
Update pr_agent/algo/__init__.py
Co-authored-by: qodo-merge-pro-for-open-source[bot] <189517486+qodo-merge-pro-for-open-source[bot]@users.noreply.github.com>
2025-02-25 16:51:55 +09:00
Hiroyuki Otomo
049fc558a8
Update pr_agent/algo/__init__.py
Co-authored-by: qodo-merge-pro-for-open-source[bot] <189517486+qodo-merge-pro-for-open-source[bot]@users.noreply.github.com>
2025-02-25 16:51:50 +09:00
Hiroyuki Otomo
2dc89d0998
Update pr_agent/algo/__init__.py
Co-authored-by: qodo-merge-pro-for-open-source[bot] <189517486+qodo-merge-pro-for-open-source[bot]@users.noreply.github.com>
2025-02-25 16:51:39 +09:00
Hiroyuki Otomo
a24b06b253 feat: support Claude 3.7 Sonnet 2025-02-25 12:58:20 +09:00
Tal
393516f746
Merge pull request #1556 from benedict-lee/main
Fix prompt to not output diff prefixes in existing_code, improved_code pydantic definitions
2025-02-24 22:10:30 +02:00
mrT23
56250f5ea8
feat: improve patch extension with new file content comparison 2025-02-24 11:46:12 +02:00
Benedict Lee
feb306727e
fix: refine handling of leading '+' in response text 2025-02-24 09:15:00 +09:00
chandan84
84983f3e9d line 253-261, pass extra_headers fields from settings to litellm, exception handling to check if extra_headers is in dict format 2025-02-22 14:56:17 -05:00
chandan84
71451de156
Update litellm_ai_handler.py
line 253-258, pass extra_headers fields from settings to litellm, exception handling to check if extra_headers is in dict format
2025-02-22 14:43:03 -05:00
chandan84
0e4a1d9ab8 line 253-258, pass extra_headers fields from settings to litellm, exception handling to check if extra_headers is in dict format 2025-02-22 14:38:38 -05:00
chandan84
e7b05732f8 line 253-255, pass extra_headers fields from settings to litellm 2025-02-22 14:12:39 -05:00
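A hedged sketch of the extra_headers handling described in the chandan84 commits above; the settings access pattern and error message are illustrative, not the exact code from the referenced lines:

```python
# Sketch: read the configured extra headers, require a dict, and pass them
# through to the litellm completion kwargs.
def apply_extra_headers(kwargs: dict, extra_headers) -> dict:
    if extra_headers is not None:
        if not isinstance(extra_headers, dict):
            raise ValueError("extra_headers must be a dict of header name/value pairs")
        kwargs["extra_headers"] = extra_headers
    return kwargs
```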
Trung Dinh
37083ae354 Improve logging for adding parameters: temperature and reasoning_effort 2025-02-22 22:19:58 +07:00
Trung Dinh
9abb212e83 Add reasoning_effort argument to chat completion request 2025-02-21 22:16:18 +07:00
Trung Dinh
d37732c25d Define ReasoningEffort enum 2025-02-21 22:10:49 +07:00
Trung Dinh
e6b6e28d6b Define SUPPORT_REASONING_EFFORT_MODELS list 2025-02-21 22:10:33 +07:00
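The three reasoning-effort commits above fit together roughly as follows; the model list shown is an example, not the repository's SUPPORT_REASONING_EFFORT_MODELS:

```python
from enum import Enum

class ReasoningEffort(str, Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

SUPPORT_REASONING_EFFORT_MODELS = ["o1", "o3-mini"]  # illustrative entries only

def add_reasoning_effort(kwargs: dict, model: str, effort: ReasoningEffort) -> dict:
    # Only add the argument for models known to support it.
    if model in SUPPORT_REASONING_EFFORT_MODELS:
        kwargs["reasoning_effort"] = effort.value
    return kwargs
```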
mrT23
2887d0a7ed
refactor: move CLI argument validation to dedicated class 2025-02-20 17:51:16 +02:00
Tal
35059cadf7
Update pr_agent/algo/ai_handlers/litellm_ai_handler.py
Co-authored-by: qodo-merge-pro-for-open-source[bot] <189517486+qodo-merge-pro-for-open-source[bot]@users.noreply.github.com>
2025-02-18 11:50:48 +02:00
mrT23
4edb8b89d1
feat: add support for custom reasoning models 2025-02-18 11:46:22 +02:00
Yu Ishikawa
22f02ac08c Support generally available gemini-2.0-flash
Signed-off-by: Yu Ishikawa <yu-iskw@users.noreply.github.com>
2025-02-17 08:40:05 +09:00
Trung Dinh
adfc2a6b69 Add temperature only if model supports it 2025-02-16 15:43:40 +07:00
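A minimal sketch of adfc2a6b69's guard; the set of temperature-less models is an illustrative stand-in for however the project tracks this:

```python
# Example entries only; some reasoning models reject the temperature parameter.
NO_TEMPERATURE_MODELS = {"o1", "o1-mini", "o3-mini"}

def add_temperature(kwargs: dict, model: str, temperature: float = 0.2) -> dict:
    # Add temperature only if the model supports it.
    if model not in NO_TEMPERATURE_MODELS:
        kwargs["temperature"] = temperature
    return kwargs
```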