Commit graph

685 commits

Author SHA1 Message Date
Elmar Kenguerli
049331d1c1
Merge b57337f854 into dcab8a893c 2025-10-14 14:54:31 -07:00
ofir-frd
40633d7a9e Merge branch 'main' into pr/2046
# Conflicts:
#	requirements.txt
2025-10-11 16:50:46 +03:00
Tomoya Kawaguchi
a194ca65d2
Update pr_agent/algo/pr_processing.py
Co-authored-by: qodo-merge-for-open-source[bot] <189517486+qodo-merge-for-open-source[bot]@users.noreply.github.com>
2025-10-10 20:44:31 +09:00
tomoya-kawaguchi
632c39962c chore: add original exception 2025-10-07 21:16:52 +09:00
tomoya-kawaguchi
be93651db8 chore: add error log on model prediction failure 2025-10-07 19:19:03 +09:00
cawamata
2866384931 add JP Anthropic Claude Sonnet 4.5 2025-10-02 04:30:41 +09:00
cawamata
3d66a9e8c3 feat: add support for Claude Sonnet 4.5 2025-09-30 21:53:04 +09:00
ElmarKenguerli
529ed56643 Add generic push outputs mechanism with Slack relay server
- Implement provider-agnostic push_outputs function supporting stdout, file, and webhook channels
- Add FastAPI-based Slack webhook relay server for external integrations
- Integrate push_outputs into PR reviewer tool to emit review data
- Add configuration section for push_outputs with default disabled state
2025-09-26 14:42:11 +02:00
Tal
dae9683770
Tr/updates23 (#2008)
* fix: improve PR description field guidance for clarity

* feat: refine suggestion guidelines to avoid redundant recommendations in PR reviews

* feat: enhance YAML parsing logic with additional keys and fallback strategies

* fix: update expected output format in YAML parsing test case
2025-08-22 10:16:08 +03:00
Alessio
5fc466bfbc
PR reviewer tool: add an opt-in work time estimation (#2006)
* feat: add `ContributionTimeCostEstimate`

* docs: mentioned `require_estimate_contribution_time_cost` for `reviewer`

* feat: implement time cost estimate for `reviewer`

* test: non-GFM output

To ensure parity and prevent regressions in plain Markdown rendering.
2025-08-22 09:53:08 +03:00
mrT23
81525cd25a
fix: correct prefix handling in load_yaml function for improved YAML parsing 2025-08-14 18:49:06 +03:00
dceoy
5c3da5d83e Add models for Groq 2025-08-12 23:55:32 +09:00
mrT23
bb115432f2
feat: add support for new GPT-5 models and update documentation 2025-08-08 09:39:24 +03:00
mrT23
f3287a9f67
fix: update model prefix in litellm_ai_handler and adjust dependencies in requirements.txt 2025-08-08 09:34:31 +03:00
mrT23
de5c1adaa0
fix: update temperature handling for GPT-5 and upgrade aiohttp version 2025-08-08 08:37:28 +03:00
mrT23
5162d847b3
feat: add support for gpt-5 model and update configuration 2025-08-08 08:28:42 +03:00
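The "update temperature handling for GPT-5" commit above reflects that some newer models accept only their default temperature. A hedged sketch of one way to handle that when building completion arguments; the prefix list and function are illustrative assumptions, not the project's actual configuration:

```python
# Models that reject a custom temperature (assumed list for illustration).
NO_TEMPERATURE_PREFIXES = ("gpt-5", "o1", "o3")


def build_completion_kwargs(model: str, temperature: float, **kwargs) -> dict:
    """Assemble completion kwargs, omitting temperature for models
    that only accept the default value."""
    out = dict(kwargs, model=model)
    if not model.startswith(NO_TEMPERATURE_PREFIXES):
        out["temperature"] = temperature
    return out
```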
Tal
82feddbb95
Merge pull request #1970 from huangyoje/fix/sort-files-by-token
Fix: defer file sorting until after token calculation
2025-08-06 08:28:21 +03:00
huangyoje
b81b671ab1 Fix: defer file sorting until after token calculation 2025-08-03 11:50:38 +08:00
Abhinav Kumar
a8b8202567 fix: logic 2025-07-26 11:40:40 +05:30
Abhinav Kumar
af2b66bb51 feat: Add support for Bedrock custom inference profiles via model_id 2025-07-26 11:32:34 +05:30
Tal
ae6576c06b
Merge pull request #1938 from furikake6000/fix/fix-ignore-files-config-on-bitbucketserver
fix: add support for filtering ignored files in Bitbucket Server provider
2025-07-22 07:58:51 +03:00
furikake6000
bdee6f9f36 fix: add error handling to bitbucket file filtering 2025-07-21 10:15:57 +00:00
mrT23
e8c73e7baa
refactor(utils): improve file walkthrough parsing with regex and better error handling 2025-07-18 08:54:52 +03:00
mrT23
754d47f187
refactor(pr_description): redesign changes walkthrough section and improve file processing 2025-07-18 08:51:48 +03:00
furikake6000
380437b44f feat: add support for filtering ignored files in Bitbucket Server provider 2025-07-14 15:26:30 +00:00
mrT23
8e0c5c8784
refactor(ai_handler): remove model parameter from _get_completion and handle it within the method 2025-07-13 21:29:53 +03:00
mrT23
0e9cf274ef
refactor(ai_handler): move streaming response handling and Azure token generation to helpers 2025-07-13 21:23:04 +03:00
Tal
3aae48f09c
Merge pull request #1925 from Makonike/feature_only_streaming_model_support
feat: Support Only Streaming Model
2025-07-13 21:16:49 +03:00
Makonike
8c7680d85d refactor(ai_handler): add a return statement or raise an exception in the elif branch 2025-07-13 22:57:43 +08:00
Makonike
11fb6ccc7e refactor(ai_handler): compact streaming path to reduce main flow impact 2025-07-13 22:37:14 +08:00
Makonike
74df3f8bd5 fix(ai_handler): improve empty streaming response validation logic 2025-07-10 15:14:25 +08:00
Makonike
31e25a5965 refactor(ai_handler): improve streaming response handling robustness 2025-07-09 15:39:15 +08:00
Makonike
85e1e2d4ee feat: add debug logging support for streaming models 2025-07-09 15:29:03 +08:00
Makonike
2d8bee0d6d feat: add validation for empty streaming responses in LiteLLM handler 2025-07-09 15:04:18 +08:00
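The streaming commits above add validation so that an empty streamed completion fails loudly instead of silently producing no output. A sketch of that idea; the chunk shape (a plain string or an object exposing a text delta) is an assumption, not litellm's actual streaming API:

```python
def collect_streaming_response(chunks) -> str:
    """Accumulate streamed completion chunks and reject empty results
    (hypothetical sketch of the validation described in the commits above)."""
    parts = []
    for chunk in chunks:
        # Accept either bare strings or objects with a 'delta' attribute.
        delta = getattr(chunk, "delta", None) or (chunk if isinstance(chunk, str) else "")
        if delta:
            parts.append(delta)
    full = "".join(parts)
    if not full.strip():
        # Surface the failure instead of returning an empty prediction.
        raise ValueError("streaming model returned an empty response")
    return full
```

Raising here matches the intent of the "add validation for empty streaming responses" commit: downstream parsing of an empty review would otherwise fail with a much less informative error.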
Abhinav Kumar
e0d7083768 feat: refactor LITELLM.EXTRA_BODY processing into a dedicated method 2025-07-09 12:04:26 +05:30
Makonike
5e82d0a316 feat: add streaming support for openai/qwq-plus model 2025-07-08 11:51:30 +08:00
Abhinav Kumar
e2d71acb9d fix: remove comments 2025-07-07 21:27:35 +05:30
Abhinav Kumar
8127d52ab3 fix: security checks 2025-07-07 21:26:13 +05:30
Abhinav Kumar
6a55bbcd23 fix: prevent LITELLM.EXTRA_BODY from overriding existing parameters in LiteLLMAIHandler 2025-07-07 21:20:25 +05:30
Abhinav Kumar
12af211c13 feat: support OpenAI Flex Processing via [litellm] extra_body config 2025-07-07 21:14:45 +05:30
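The three `LITELLM.EXTRA_BODY` commits above describe passing extra request fields (e.g. for OpenAI Flex Processing) while refusing to let user config override parameters the handler already set. A sketch of that merge rule; the JSON-string config format and error messages are assumptions:

```python
import json


def apply_extra_body(kwargs: dict, extra_body_raw: str) -> dict:
    """Merge a user-supplied extra_body config into completion kwargs
    without overriding existing parameters (hypothetical sketch)."""
    extra = json.loads(extra_body_raw)
    if not isinstance(extra, dict):
        raise ValueError("extra_body must be a JSON object")
    clashes = set(extra) & set(kwargs)
    if clashes:
        # Security/safety check: user config may add fields, not replace them.
        raise ValueError(f"extra_body may not override existing parameters: {sorted(clashes)}")
    merged = dict(kwargs)
    merged.update(extra)
    return merged
```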
Tal
446c1fb49a
Merge pull request #1898 from isExample/feat/ignore-language-framework
feat: support ignoring auto-generated files by language/framework
2025-06-29 10:19:37 +03:00
isExample
bd9ddc8b86 fix: avoid crash when ignore_language_framework is not a list 2025-06-29 02:06:40 +09:00
Tal
1c174f263f
Merge pull request #1892 from dcieslak19973/pr-agent-1886-add-bedrock-llama4-20250622
Add bedrock Llama 4 Scout/Maverick
2025-06-27 11:16:49 +03:00
isExample
87a245bf9c fix: support root-level matching for '**/' globs
- generate an additional regex to match root-level files alongside '**/' patterns.
- ensure files in the repo root are correctly excluded from analysis.
2025-06-26 15:26:12 +09:00
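The fix above works around glob engines where '**/' cannot match zero directories, so a pattern like `**/*.pb.go` would miss files sitting in the repo root. A sketch of the "additional regex" approach the commit describes, using only the standard library; the function names are illustrative:

```python
import fnmatch
import re


def glob_to_regexes(pattern: str) -> list[str]:
    """Translate an ignore glob into regexes, adding a root-level variant
    for '**/' patterns (sketch of the commit's workaround)."""
    regexes = [fnmatch.translate(pattern)]
    if pattern.startswith("**/"):
        # Extra regex with the '**/' prefix stripped, so root-level
        # files are matched as well.
        regexes.append(fnmatch.translate(pattern[len("**/"):]))
    return regexes


def is_ignored(path: str, patterns: list[str]) -> bool:
    return any(
        re.match(rx, path)
        for pat in patterns
        for rx in glob_to_regexes(pat)
    )
```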
isExample
c7241ca093 feat: support ignore_language_framework via generated_code_ignore.toml
- [generated_code] table defines default glob patterns for code-generation tools
- merge generated_code globs into ignore logic
2025-06-25 23:39:14 +09:00
Pavan Parwatikar
2d61ff7b88
gemini 2.5 flash/pro GA models
Adding stable/GA gemini-2.5 models.
Added gemini-2.5-pro and gemini-2.5-flash to both the vertex_ai and gemini configs
2025-06-24 11:06:01 +05:30
dcieslak19973
849cb2ea5a Change token limit to 128k for Bedrock Llama 2025-06-22 14:36:41 -05:00
dcieslak19973
22c16f586b Add bedrock Llama 4 Scout/Maverick 2025-06-22 11:05:08 -05:00
mrT23
7c02678ba5
refactor: extract TODO formatting functions and simplify data structure 2025-06-21 09:20:43 +03:00
Tal
7db4d97fc2
Merge pull request #1880 from alessio-locatelli/fix_yes_no
fix: typos, grammar
2025-06-18 20:22:02 +03:00