Aditya Singh
2332bf411a
Name changes
2025-12-02 10:35:15 +05:30
sharoneyal
8d3f3bca45
Add support for SSL related interactions: ( #2074 )
...
* Add support for SSL related interactions:
1. Inject SSL-related env variables into git clone
2. Force LiteLLM to use an HTTP client that honors SSL env variables; this is done by disabling aiohttp in the configuration toml as follows:
[litellm]
disable_aiohttp = true
* Update pr_agent/git_providers/git_provider.py
Co-authored-by: qodo-merge-for-open-source[bot] <189517486+qodo-merge-for-open-source[bot]@users.noreply.github.com>
* Guard get_git_ssl_env invocation with try-catch
---------
Co-authored-by: qodo-merge-for-open-source[bot] <189517486+qodo-merge-for-open-source[bot]@users.noreply.github.com>
2025-10-21 19:06:18 +03:00
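For illustration, a minimal sketch of the SSL environment injection described in the entry above, assuming the standard git variable GIT_SSL_CAINFO and a CA bundle taken from common Python env variables; the real get_git_ssl_env helper in pr_agent may differ.
    import os
    import subprocess

    def get_git_ssl_env() -> dict:
        # Hypothetical sketch: copy the environment and point git at a custom
        # CA bundle so `git clone` verifies TLS against it.
        env = os.environ.copy()
        ca_bundle = os.environ.get("REQUESTS_CA_BUNDLE") or os.environ.get("SSL_CERT_FILE")
        if ca_bundle:
            env["GIT_SSL_CAINFO"] = ca_bundle  # git reads the CA bundle path from this variable
        return env

    repo_url, dest_dir = "https://example.com/repo.git", "/tmp/repo"  # placeholders
    try:
        env = get_git_ssl_env()
    except Exception:  # guarded with try-catch, as in the entry above
        env = None
    subprocess.run(["git", "clone", repo_url, dest_dir], env=env, check=True)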
mrT23
f3287a9f67
fix: update model prefix in litellm_ai_handler and adjust dependencies in requirements.txt
2025-08-08 09:34:31 +03:00
mrT23
de5c1adaa0
fix: update temperature handling for GPT-5 and upgrade aiohttp version
2025-08-08 08:37:28 +03:00
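As an illustrative sketch of the temperature adjustment mentioned in the entry above (an assumption about the handler's behavior, not its actual code), a non-default temperature is simply withheld for models that only accept the default:
    # Illustrative model list; not pr_agent's actual set.
    NO_TEMPERATURE_MODELS = ("gpt-5", "o1", "o3")

    def build_completion_kwargs(model: str, temperature: float) -> dict:
        kwargs = {"model": model}
        if not model.startswith(NO_TEMPERATURE_MODELS):
            kwargs["temperature"] = temperature  # only pass temperature where it is supported
        return kwargs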
mrT23
5162d847b3
feat: add support for gpt-5 model and update configuration
2025-08-08 08:28:42 +03:00
Abhinav Kumar
a8b8202567
fix: logic
2025-07-26 11:40:40 +05:30
Abhinav Kumar
af2b66bb51
feat: Add support for Bedrock custom inference profiles via model_id
2025-07-26 11:32:34 +05:30
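A hedged sketch of how a Bedrock custom inference profile could be forwarded through LiteLLM's model_id parameter, as the entry above describes; the ARN and model name are placeholders:
    import litellm

    response = litellm.completion(
        model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",  # base model (placeholder)
        model_id="arn:aws:bedrock:us-east-1:123456789012:application-inference-profile/example",
        messages=[{"role": "user", "content": "Hello"}],
    )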
mrT23
8e0c5c8784
refactor(ai_handler): remove model parameter from _get_completion and handle it within the method
2025-07-13 21:29:53 +03:00
mrT23
0e9cf274ef
refactor(ai_handler): move streaming response handling and Azure token generation to helpers
2025-07-13 21:23:04 +03:00
Tal
3aae48f09c
Merge pull request #1925 from Makonike/feature_only_streaming_model_support
...
feat: Support Only Streaming Model
2025-07-13 21:16:49 +03:00
Makonike
8c7680d85d
refactor(ai_handler): add a return statement or raise an exception in the elif branch
2025-07-13 22:57:43 +08:00
Makonike
11fb6ccc7e
refactor(ai_handler): compact streaming path to reduce main flow impact
2025-07-13 22:37:14 +08:00
Makonike
74df3f8bd5
fix(ai_handler): improve empty streaming response validation logic
2025-07-10 15:14:25 +08:00
Makonike
31e25a5965
refactor(ai_handler): improve streaming response handling robustness
2025-07-09 15:39:15 +08:00
Makonike
85e1e2d4ee
feat: add debug logging support for streaming models
2025-07-09 15:29:03 +08:00
Makonike
2d8bee0d6d
feat: add validation for empty streaming responses in LiteLLM handler
2025-07-09 15:04:18 +08:00
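A minimal sketch of the streaming path these entries describe, assuming an OpenAI-style async stream from LiteLLM; the accumulation and empty-response check are illustrative, not the handler's exact code:
    async def _handle_streaming_response(response) -> str:
        # Accumulate streamed chunks into one completion string.
        full_text = ""
        async for chunk in response:
            delta = chunk.choices[0].delta if chunk.choices else None
            if delta and getattr(delta, "content", None):
                full_text += delta.content
        # Reject streams that produced no content at all.
        if not full_text.strip():
            raise ValueError("Empty streaming response received from the model")
        return full_text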
Abhinav Kumar
e0d7083768
feat: refactor LITELLM.EXTRA_BODY processing into a dedicated method
2025-07-09 12:04:26 +05:30
Makonike
5e82d0a316
feat: add streaming support for openai/qwq-plus model
2025-07-08 11:51:30 +08:00
Abhinav Kumar
e2d71acb9d
fix: remove comments
2025-07-07 21:27:35 +05:30
Abhinav Kumar
8127d52ab3
fix: security checks
2025-07-07 21:26:13 +05:30
Abhinav Kumar
6a55bbcd23
fix: prevent LITELLM.EXTRA_BODY from overriding existing parameters in LiteLLMAIHandler
2025-07-07 21:20:25 +05:30
Abhinav Kumar
12af211c13
feat: support OpenAI Flex Processing via [litellm] extra_body config
2025-07-07 21:14:45 +05:30
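A hedged sketch of the extra_body flow these entries describe; the JSON-string config format and merge rules are assumptions based on the messages above, with OpenAI Flex Processing enabled via service_tier as an example:
    import json

    # Assumed config, mirroring the [litellm] section referenced above, e.g.:
    #   [litellm]
    #   extra_body = '{"service_tier": "flex"}'
    def add_litellm_extra_body(kwargs: dict, extra_body_setting: str) -> dict:
        extra_body = json.loads(extra_body_setting)
        if not isinstance(extra_body, dict):
            raise ValueError("LITELLM.EXTRA_BODY must be a JSON object")
        # Refuse to override parameters the handler already set.
        collisions = set(extra_body) & set(kwargs)
        if collisions:
            raise ValueError(f"LITELLM.EXTRA_BODY cannot override existing parameters: {collisions}")
        kwargs.update(extra_body)
        return kwargs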
Alessio
608065f2ad
fix: typos
2025-06-17 09:26:57 +03:00
Akileo
e8ace9fcf9
change type check and remove redundant sync
2025-05-26 14:52:45 +09:00
Akileo
ff52ae9281
add img_path and _create_chat_async
2025-05-25 15:34:50 +09:00
Akileo
d791e9f3d1
Fix: Improve langchain import error handling and add img_path to handler
...
Addresses issue #1784 :
- Raises ImportError if langchain is not installed when LangChainOpenAIHandler is initialized.
- Adds img_path parameter to LangChainOpenAIHandler.chat_completion for interface consistency.
- Logs a warning if img_path is used with LangChainOpenAIHandler.
2025-05-25 15:28:18 +09:00
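A minimal sketch of the import guard this entry describes (illustrative; the actual handler's imports and message may differ):
    try:
        from langchain_openai import AzureChatOpenAI, ChatOpenAI
    except ImportError as e:
        raise ImportError(
            "LangChain is not installed. Install it (e.g. `pip install langchain-openai`) "
            "to use LangChainOpenAIHandler."
        ) from e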
Kangmoon Seo
466ec4ce90
fix: exclude RateLimitError from retry logic
2025-05-22 15:04:16 +09:00
Kangmoon Seo
6405284461
fix: reorder exception handling to enable proper retry behavior
2025-05-20 18:22:33 +09:00
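An illustrative sketch of the retry behavior described by the two entries above, written with tenacity as an assumed retry library; attempt counts and the wrapped call are examples only:
    import litellm
    from openai import RateLimitError
    from tenacity import retry, retry_if_not_exception_type, stop_after_attempt

    @retry(
        retry=retry_if_not_exception_type(RateLimitError),  # rate-limit errors are not retried
        stop=stop_after_attempt(3),
    )
    async def chat_completion(model: str, messages: list, temperature: float = 0.2):
        try:
            return await litellm.acompletion(model=model, messages=messages,
                                             temperature=temperature)
        except RateLimitError:
            # Handle the specific exception before any broad `except Exception`
            # so the decorator sees it and skips the retry.
            raise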
Dennis Paul
250870a3da
enable usage of OpenAI-like APIs
2025-05-15 16:05:05 +02:00
irfan.putra
7a6a28d2b9
feat: add openrouter support in litellm
2025-05-07 11:54:07 +07:00
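As illustration of the two entries above, LiteLLM reaches OpenRouter through the openrouter/ model prefix and any OpenAI-compatible API through the api_base parameter; keys, endpoints, and model names below are placeholders:
    import os
    import litellm

    # OpenRouter: model prefix plus the OPENROUTER_API_KEY environment variable.
    os.environ["OPENROUTER_API_KEY"] = "sk-or-..."  # placeholder
    litellm.completion(model="openrouter/anthropic/claude-3.5-sonnet",
                       messages=[{"role": "user", "content": "Hi"}])

    # OpenAI-like API: point the openai/ provider at a custom base URL.
    litellm.completion(model="openai/my-local-model",        # placeholder model name
                       api_base="http://localhost:8000/v1",  # placeholder endpoint
                       api_key="dummy",
                       messages=[{"role": "user", "content": "Hi"}])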
dst03106
869a179506
feat: add support for Mistral and Codestral models
2025-04-18 14:04:59 +09:00
arpit-at
27a7c1a94f
doc update and minor fix
2025-04-16 13:32:53 +05:30
arpit-at
dc46acb762
doc update and minor fix
2025-04-16 13:27:52 +05:30
arpit-at
0da667d179
support Azure AD authentication for OpenAI services in the litellm implementation
2025-04-16 11:19:04 +05:30
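A hedged sketch of Azure AD (Entra ID) authentication through LiteLLM, as this entry describes; it assumes the azure-identity package and LiteLLM's azure_ad_token parameter rather than pr_agent's actual wiring, and the deployment and endpoint are placeholders:
    import litellm
    from azure.identity import DefaultAzureCredential

    # Obtain an Azure AD token for Azure OpenAI instead of an API key.
    credential = DefaultAzureCredential()
    token = credential.get_token("https://cognitiveservices.azure.com/.default").token

    response = litellm.completion(
        model="azure/my-gpt4o-deployment",                # placeholder deployment name
        api_base="https://my-resource.openai.azure.com",  # placeholder endpoint
        azure_ad_token=token,                             # AAD token in place of api_key
        messages=[{"role": "user", "content": "Hello"}],
    )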
Peter Dave Hello
665fb90a98
Add support for xAI and their Grok-2 model
...
Close #1630
2025-04-08 01:36:21 +08:00
mrT23
6610921bba
cleanup
2025-03-20 21:49:19 +02:00
Kenny Dizi
ffefcb8a04
Fix default value for extended_thinking_max_output_tokens
2025-03-11 17:48:12 +07:00
Tal
52c99e3f7b
Merge pull request #1605 from KennyDizi/main
...
Support extended thinking for model `claude-3-7-sonnet-20250219`
2025-03-09 17:03:37 +02:00
Kenny Dizi
222155e4f2
Optimize logging
2025-03-08 08:53:29 +07:00
Kenny Dizi
f9d5e72058
Move logic to _configure_claude_extended_thinking
2025-03-08 08:35:34 +07:00
Kenny Dizi
a8935dece3
Using 2048 for extended_thinking_budget_tokens as well as extended_thinking_max_output_tokens
2025-03-07 17:27:56 +07:00
muhammad-asn
4f2551e0a6
feat: add DeepInfra support
2025-03-06 15:49:07 +07:00
Kenny Dizi
30bf7572b0
Validate extended thinking parameters
2025-03-03 18:44:26 +07:00
Kenny Dizi
440d2368a4
Set temperature to 1 when using extended thinking
2025-03-03 18:30:52 +07:00
Kenny Dizi
215c10cc8c
Add thinking block to request parameters
2025-03-03 18:29:33 +07:00
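An illustrative sketch combining the extended-thinking entries above; the setting names mirror the commit messages, while the validation details are assumptions and the thinking/temperature request parameters follow the behavior those messages describe:
    def _configure_claude_extended_thinking(kwargs: dict,
                                            budget_tokens: int = 2048,
                                            max_output_tokens: int = 2048) -> dict:
        # Basic validation of the extended thinking parameters (illustrative only).
        if not isinstance(budget_tokens, int) or budget_tokens <= 0:
            raise ValueError("extended_thinking_budget_tokens must be a positive integer")
        kwargs["max_tokens"] = max_output_tokens
        kwargs["thinking"] = {"type": "enabled", "budget_tokens": budget_tokens}
        kwargs["temperature"] = 1  # extended thinking requires the default temperature
        return kwargs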
Kenny Dizi
7623e1a419
Removed trailing spaces
2025-03-03 18:23:45 +07:00
mrT23
3817aa2868
fix: remove redundant temperature logging in litellm handler
2025-02-27 10:55:01 +02:00
Tal
d6f405dd0d
Merge pull request #1564 from chandan84/fix/support_litellm_extra_headers
...
Fix/support litellm extra headers
2025-02-26 10:15:22 +02:00
chandan84
93e34703ab
Update litellm_ai_handler.py
...
updates made based on review on https://github.com/qodo-ai/pr-agent/pull/1564
2025-02-25 14:44:03 -05:00
chandan84
84983f3e9d
lines 253-261: pass extra_headers fields from settings to litellm; add exception handling to check that extra_headers is in dict format
2025-02-22 14:56:17 -05:00
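A hedged sketch of the extra_headers handling this entry describes; the check that the value resolves to a dict follows the message above, while names and parsing details are illustrative:
    import json

    def add_extra_headers(kwargs: dict, extra_headers_setting) -> dict:
        # Accept either a dict or a JSON string from settings, and verify the result is a dict.
        headers = extra_headers_setting
        if isinstance(headers, str):
            try:
                headers = json.loads(headers)
            except json.JSONDecodeError as e:
                raise ValueError("LITELLM.EXTRA_HEADERS must be valid JSON") from e
        if not isinstance(headers, dict):
            raise ValueError("LITELLM.EXTRA_HEADERS must resolve to a dict")
        kwargs["extra_headers"] = headers  # forwarded to the litellm completion call
        return kwargs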