Merge remote-tracking branch 'origin/main' into tr/benchmark

Author: mrT23
Date:   2025-08-08 08:29:45 +03:00
Commit: 62a029d36a
10 changed files with 115 additions and 52 deletions

View file

@@ -1,3 +1,5 @@
On this page we will cover how to install and run PR-Agent as a GitHub Action or GitHub App, and how to configure it for your needs.
## Run as a GitHub Action
You can use our pre-built GitHub Action Docker image to run PR-Agent as a GitHub Action.
@@ -51,13 +53,13 @@ When you open your next PR, you should see a comment from `github-actions` bot w
See detailed usage instructions in the [USAGE GUIDE](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#github-action)
## Configuration Examples
### Configuration Examples
This section provides detailed, step-by-step examples for configuring PR-Agent with different models and advanced options in GitHub Actions.
### Quick Start Examples
#### Quick Start Examples
#### Basic Setup (OpenAI Default)
##### Basic Setup (OpenAI Default)
Copy this minimal workflow to get started with the default OpenAI models:
@@ -83,7 +85,7 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
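For orientation, a complete minimal workflow of this shape might look as follows. Treat it as a sketch: the `qodo-ai/pr-agent@main` action reference, the trigger list, and the permissions block are assumptions based on the repository's conventions, not content taken from this diff.
```yaml
name: PR Agent
on:
  pull_request:
    types: [opened, reopened, ready_for_review]
  issue_comment:
jobs:
  pr_agent_job:
    if: ${{ github.event.sender.type != 'Bot' }}  # skip events triggered by bots
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write
      contents: write
    steps:
      - name: PR Agent action step
        uses: qodo-ai/pr-agent@main  # assumed action reference
        env:
          OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```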
#### Gemini Setup
##### Gemini Setup
Ready-to-use workflow for Gemini models:
@@ -145,7 +147,7 @@ jobs:
github_action_config.auto_improve: "true"
```
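For reference, the env block of such a workflow might look like the sketch below; the `GOOGLE_AI_STUDIO.GEMINI_API_KEY` setting name and the `gemini/...` model identifier follow PR-Agent's model-configuration conventions and are assumptions here:
```yaml
env:
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  GOOGLE_AI_STUDIO.GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}  # assumed setting name
  config.model: "gemini/gemini-1.5-flash"                         # assumed model identifier
  config.fallback_models: '["gemini/gemini-1.5-flash"]'
  github_action_config.auto_review: "true"
  github_action_config.auto_describe: "true"
  github_action_config.auto_improve: "true"
```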
### Basic Configuration with Tool Controls
#### Basic Configuration with Tool Controls
Start with this enhanced workflow that includes tool configuration:
@@ -178,9 +180,9 @@ jobs:
github_action_config.pr_actions: '["opened", "reopened", "ready_for_review", "review_requested"]'
```
### Switching Models
#### Switching Models
#### Using Gemini (Google AI Studio)
##### Using Gemini (Google AI Studio)
To use Gemini models instead of the default OpenAI models:
@@ -199,11 +201,12 @@ To use Gemini models instead of the default OpenAI models:
```
**Required Secrets:**
- Add `GEMINI_API_KEY` to your repository secrets (get it from [Google AI Studio](https://aistudio.google.com/))
**Note:** When using non-OpenAI models like Gemini, you don't need to set `OPENAI_KEY` - only the model-specific API key is required.
#### Using Claude (Anthropic)
##### Using Claude (Anthropic)
To use Claude models:
@@ -222,11 +225,12 @@ To use Claude models:
```
**Required Secrets:**
- Add `ANTHROPIC_KEY` to your repository secrets (get it from [Anthropic Console](https://console.anthropic.com/))
**Note:** When using non-OpenAI models like Claude, you don't need to set `OPENAI_KEY` - only the model-specific API key is required.
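A sketch of the corresponding env block, assuming the `ANTHROPIC.KEY` setting name and a litellm-style `anthropic/...` model identifier (both assumptions; check the supported-models guide for exact values):
```yaml
env:
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  ANTHROPIC.KEY: ${{ secrets.ANTHROPIC_KEY }}           # assumed setting name
  config.model: "anthropic/claude-3-5-sonnet-20241022"  # assumed model identifier
  config.fallback_models: '["anthropic/claude-3-5-sonnet-20241022"]'
  github_action_config.auto_review: "true"
  github_action_config.auto_improve: "true"
```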
#### Using Azure OpenAI
##### Using Azure OpenAI
To use Azure OpenAI services:
@@ -249,11 +253,12 @@ To use Azure OpenAI services:
```
**Required Secrets:**
- `AZURE_OPENAI_KEY`: Your Azure OpenAI API key
- `AZURE_OPENAI_ENDPOINT`: Your Azure OpenAI endpoint URL
- `AZURE_OPENAI_DEPLOYMENT`: Your deployment name
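A sketch of how these secrets could be wired into the workflow env, assuming PR-Agent's `[openai]` settings section maps to `OPENAI.*` keys (the key names and the API version below are assumptions):
```yaml
env:
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  OPENAI.KEY: ${{ secrets.AZURE_OPENAI_KEY }}
  OPENAI.API_TYPE: "azure"
  OPENAI.API_BASE: ${{ secrets.AZURE_OPENAI_ENDPOINT }}
  OPENAI.DEPLOYMENT_ID: ${{ secrets.AZURE_OPENAI_DEPLOYMENT }}
  OPENAI.API_VERSION: "2024-02-01"  # assumed; use the version enabled for your resource
```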
#### Using Local Models (Ollama)
##### Using Local Models (Ollama)
To use local models via Ollama:
@@ -275,9 +280,9 @@ To use local models via Ollama:
**Note:** For local models, you'll need to use a self-hosted runner with Ollama installed, as GitHub Actions hosted runners cannot access localhost services.
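A sketch of an Ollama setup on a self-hosted runner; the model identifier and the `ollama.api_base` key follow the local-model conventions in the usage guide and are assumptions here:
```yaml
jobs:
  pr_agent_job:
    runs-on: self-hosted  # hosted runners cannot reach a localhost Ollama server
    steps:
      - name: PR Agent action step
        uses: qodo-ai/pr-agent@main  # assumed action reference
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          config.model: "ollama/llama3"              # assumed model identifier
          ollama.api_base: "http://localhost:11434"  # Ollama's default local endpoint
```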
### Advanced Configuration Options
#### Advanced Configuration Options
#### Custom Review Instructions
##### Custom Review Instructions
Add specific instructions for the review process:
@@ -293,7 +298,7 @@ Add specific instructions for the review process:
github_action_config.auto_improve: "true"
```
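The usual knob for this is `pr_reviewer.extra_instructions`; a sketch of the relevant env entries (the instruction text itself is illustrative only):
```yaml
env:
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
  pr_reviewer.extra_instructions: "Focus on security issues and error handling. Flag missing input validation."
  github_action_config.auto_review: "true"
  github_action_config.auto_improve: "true"
```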
#### Language-Specific Configuration
##### Language-Specific Configuration
Configure for specific programming languages:
@@ -311,7 +316,7 @@ Configure for specific programming languages:
github_action_config.auto_improve: "true"
```
#### Selective Tool Execution
##### Selective Tool Execution
Run only specific tools automatically:
@@ -327,7 +332,7 @@ Run only specific tools automatically:
github_action_config.pr_actions: '["opened", "reopened"]'
```
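The `github_action_config.*` flags shown in this file control which tools run automatically; for example, to run only the review tool on newly opened PRs (a sketch):
```yaml
env:
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
  github_action_config.auto_review: "true"    # run /review automatically
  github_action_config.auto_describe: "false" # skip /describe
  github_action_config.auto_improve: "false"  # skip /improve
  github_action_config.pr_actions: '["opened", "reopened"]'
```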
### Using Configuration Files
#### Using Configuration Files
Instead of setting all options via environment variables, you can use a `.pr_agent.toml` file in your repository root:
@@ -375,9 +380,9 @@ jobs:
github_action_config.auto_improve: "true"
```
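With the options moved into `.pr_agent.toml`, the workflow env shrinks to credentials and automation flags; a sketch:
```yaml
env:
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  OPENAI_KEY: ${{ secrets.OPENAI_KEY }}
  github_action_config.auto_review: "true"
  github_action_config.auto_describe: "true"
  github_action_config.auto_improve: "true"
  # model, instructions, etc. are read from .pr_agent.toml in the repository root
```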
### Troubleshooting Common Issues
#### Troubleshooting Common Issues
#### Model Not Found Errors
##### Model Not Found Errors
If you get model not found errors:
@@ -387,7 +392,7 @@ If you get model not found errors:
3. **Check model availability**: Some models may not be available in all regions or may require specific access
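In practice, most "model not found" errors come from dropping the provider prefix; a sketch of the difference (the identifiers are illustrative):
```yaml
env:
  config.model: "gemini/gemini-1.5-flash"  # provider-prefixed identifier
  # config.model: "gemini-1.5-flash"       # likely to fail: missing provider prefix
```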
#### Environment Variable Format
##### Environment Variable Format
Remember these key points about environment variables:
@@ -396,7 +401,7 @@ Remember these key points about environment variables:
- Arrays should be JSON strings: `'["item1", "item2"]'`
- Model names are case-sensitive
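Putting those rules together, a well-formed env block looks like this sketch:
```yaml
env:
  github_action_config.auto_review: "true"        # boolean passed as a quoted string
  config.fallback_models: '["model1", "model2"]'  # array passed as a JSON string
  config.model: "gpt-4o"                          # model names are case-sensitive
```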
#### Rate Limiting
##### Rate Limiting
If you encounter rate limiting:
@@ -413,7 +418,7 @@ If you encounter rate limiting:
github_action_config.auto_improve: "true"
```
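One mitigation is to declare fallback models so a throttled request can be retried elsewhere; a sketch (whether a fallback triggers depends on the provider's error response):
```yaml
env:
  config.model: "gpt-4o"
  config.fallback_models: '["gpt-4o-mini", "o4-mini"]'  # tried in order if the primary model fails
```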
#### Common Error Messages and Solutions
##### Common Error Messages and Solutions
**Error: "Model not found"**
- **Solution**: Check the model name format and ensure it matches the exact identifier. See the [Changing a model in PR-Agent](../usage-guide/changing_a_model.md) guide for supported models and their correct identifiers.
@@ -435,22 +440,25 @@ If you encounter rate limiting:
```
**Error: "Invalid JSON format"**
- **Solution**: Check that arrays are properly formatted as JSON strings:
```yaml
# Correct
config.fallback_models: '["model1", "model2"]'
# Incorrect (interpreted as a YAML list, not a string)
config.fallback_models: ["model1", "model2"]
```
#### Debugging Tips
- **Solution**: Check that arrays are properly formatted as JSON strings:
```yaml
Correct:
config.fallback_models: '["model1", "model2"]'
Incorrect (interpreted as a YAML list, not a string):
config.fallback_models: ["model1", "model2"]
```
##### Debugging Tips
1. **Enable verbose logging**: Add `config.verbosity_level: "2"` to see detailed logs
2. **Check GitHub Actions logs**: Look at the step output for specific error messages
3. **Test with minimal configuration**: Start with just the basic setup and add options one by one
4. **Verify secrets**: Double-check that all required secrets are set in your repository settings
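Tip 1 in practice is a single env entry (a sketch):
```yaml
env:
  config.verbosity_level: "2"  # detailed logs appear in the Actions step output
```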
#### Performance Optimization
##### Performance Optimization
For better performance with large repositories:
@@ -468,9 +476,10 @@ For better performance with large repositories:
github_action_config.auto_improve: "true"
```
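A sketch of settings that trim token usage on large repositories; `config.max_model_tokens` and the `patch_extra_lines_*` knobs appear elsewhere in this codebase, but the values below are assumptions to tune per project:
```yaml
env:
  config.max_model_tokens: "32000"      # cap tokens per request
  config.patch_extra_lines_before: "3"  # context lines kept before each hunk
  config.patch_extra_lines_after: "1"   # context lines kept after each hunk
```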
### Reference
#### Reference
For more detailed configuration options, see:
- [Changing a model in PR-Agent](../usage-guide/changing_a_model.md)
- [Configuration options](../usage-guide/configuration_options.md)
- [Automations and usage](../usage-guide/automations_and_usage.md#github-action)
@@ -602,7 +611,9 @@ cp pr_agent/settings/.secrets_template.toml pr_agent/settings/.secrets.toml
> For more information, please check out the [USAGE GUIDE](../usage-guide/automations_and_usage.md#github-app)
---
## Deploy as a Lambda Function
## Additional deployment methods
### Deploy as a Lambda Function
Note that since AWS Lambda env vars cannot have "." in the name, replace each "." in an env variable with "__".<br>
For example: `GITHUB.WEBHOOK_SECRET` --> `GITHUB__WEBHOOK_SECRET`
@@ -611,9 +622,10 @@ For example: `GITHUB.WEBHOOK_SECRET` --> `GITHUB__WEBHOOK_SECRET`
2. Build a Docker image that can be used as a Lambda function
```shell
# Note: --target github_lambda is optional as it's the default target
docker buildx build --platform=linux/amd64 . -t codiumai/pr-agent:github_lambda --target github_lambda -f docker/Dockerfile.lambda
```
(Note: --target github_lambda is optional as it's the default target)
3. Push image to ECR
@@ -628,7 +640,7 @@ For example: `GITHUB.WEBHOOK_SECRET` --> `GITHUB__WEBHOOK_SECRET`
7. Go back to steps 8-9 of [Method 5](#run-as-a-github-app) with the function url as your Webhook URL.
The Webhook URL would look like `https://<LAMBDA_FUNCTION_URL>/api/v1/github_webhooks`
### Using AWS Secrets Manager
#### Using AWS Secrets Manager
For production Lambda deployments, use AWS Secrets Manager instead of environment variables:
@@ -652,7 +664,7 @@ CONFIG__SECRET_PROVIDER=aws_secrets_manager
---
## AWS CodeCommit Setup
### AWS CodeCommit Setup
Not all features have been added to CodeCommit yet. Currently, CodeCommit support runs the Qodo Merge CLI from the command line, using AWS credentials stored in environment variables; more features will be added in the future. The following is a set of instructions to have Qodo Merge do a review of your CodeCommit pull request from the command line:
@@ -669,7 +681,7 @@ Not all features have been added to CodeCommit yet. As of right now, CodeCommit
---
#### AWS CodeCommit IAM Role Example
##### AWS CodeCommit IAM Role Example
Example IAM permissions for that user to allow access to CodeCommit:
@@ -701,7 +713,7 @@ Example IAM permissions to that user to allow access to CodeCommit:
}
```
#### AWS CodeCommit Access Key and Secret
##### AWS CodeCommit Access Key and Secret
Example of setting the Access Key and Secret using environment variables:
@@ -711,7 +723,7 @@ export AWS_SECRET_ACCESS_KEY="XXXXXXXXXXXXXXXX"
export AWS_DEFAULT_REGION="us-east-1"
```
#### AWS CodeCommit CLI Example
##### AWS CodeCommit CLI Example
After you set up AWS CodeCommit using the instructions above, here is an example CLI run that tells pr-agent to **review** a given pull request.
(Replace the PYTHONPATH and PR URL with your own values in the example.)

View file

@@ -42,6 +42,9 @@ Note that if your base branches are not protected, don't set the variables as `p
> **Note**: The `$CI_SERVER_FQDN` variable is available starting from GitLab version 16.10. If you're using an earlier version, this variable will not be available. However, you can combine `$CI_SERVER_HOST` and `$CI_SERVER_PORT` to achieve the same result. Please ensure you're using a compatible version or adjust your configuration.
> **Note**: The `gitlab__SSL_VERIFY` environment variable can be used to specify the path to a custom CA certificate bundle for SSL verification. GitLab exposes the `$CI_SERVER_TLS_CA_FILE` variable, which points to the custom CA certificate file configured in your GitLab instance.
> Alternatively, SSL verification can be disabled entirely by setting `gitlab__SSL_VERIFY=false`, although this is not recommended.
## Run a GitLab webhook server
1. In GitLab, create a new user and give it the "Reporter" role ("Developer" if using the Pro version of the agent) for the intended group or project.
@@ -67,6 +70,7 @@ git clone https://github.com/qodo-ai/pr-agent.git
2. In the secrets file/variables:
- Set your AI model key in the respective section
- In the [gitlab] section, set `personal_access_token` (with token from step 2) and `shared_secret` (with secret from step 3)
- **Authentication type**: Set `auth_type` to `"private_token"` for older GitLab versions (e.g., 11.x) or private deployments. Default is `"oauth_token"` for gitlab.com and newer versions.
6. Build a Docker image for the app and optionally push it to a Docker repository. We'll use Docker Hub as an example:
@@ -82,6 +86,7 @@ CONFIG__GIT_PROVIDER=gitlab
GITLAB__PERSONAL_ACCESS_TOKEN=<personal_access_token>
GITLAB__SHARED_SECRET=<shared_secret>
GITLAB__URL=https://gitlab.com
GITLAB__AUTH_TYPE=oauth_token # Use "private_token" for older GitLab versions
OPENAI__KEY=<your_openai_api_key>
```

View file

@@ -33,7 +33,7 @@ To use Qodo Merge on your private GitHub Enterprise Server, you will need to [co
### GitHub Open Source Projects
For open-source projects, Qodo Merge is available for free usage. To install Qodo Merge for your open-source repositories, use the following marketplace [link](https://github.com/apps/qodo-merge-pro-for-open-source).
For open-source projects, Qodo Merge is available for free usage. To install Qodo Merge for your open-source repositories, use the following marketplace [link](https://github.com/marketplace/qodo-merge-pro-for-open-source).
## Install Qodo Merge for Bitbucket

View file

@@ -28,6 +28,8 @@ MAX_TOKENS = {
    'gpt-4.1-mini-2025-04-14': 1047576,
    'gpt-4.1-nano': 1047576,
    'gpt-4.1-nano-2025-04-14': 1047576,
    'gpt-5': 200000,
    'gpt-5-2025-08-07': 200000,
    'o1-mini': 128000,  # 128K, but may be limited by config.max_model_tokens
    'o1-mini-2024-09-12': 128000,  # 128K, but may be limited by config.max_model_tokens
    'o1-preview': 128000,  # 128K, but may be limited by config.max_model_tokens

View file

@@ -288,6 +288,21 @@ class LiteLLMAIHandler(BaseAiHandler):
            messages[1]["content"] = [{"type": "text", "text": messages[1]["content"]},
                                      {"type": "image_url", "image_url": {"url": img_path}}]

        # A '_thinking' model suffix selects a higher reasoning effort for gpt-5 models
        thinking_kwargs_gpt5 = None
        if model.startswith('gpt-5'):
            if model.endswith('_thinking'):
                thinking_kwargs_gpt5 = {
                    "reasoning_effort": 'low',
                    "allowed_openai_params": ["reasoning_effort"],
                }
            else:
                thinking_kwargs_gpt5 = {
                    "reasoning_effort": 'minimal',
                    "allowed_openai_params": ["reasoning_effort"],
                }
            model = model.replace('_thinking', '')  # remove '_thinking' suffix before the API call

        # Currently, some models do not support separate system and user prompts
        if model in self.user_message_only_models or get_settings().config.custom_reasoning_model:
            user = f"{system}\n\n\n{user}"
@@ -310,6 +325,11 @@ class LiteLLMAIHandler(BaseAiHandler):
                "api_base": self.api_base,
            }

        if thinking_kwargs_gpt5:
            kwargs.update(thinking_kwargs_gpt5)
            if 'temperature' in kwargs:
                del kwargs['temperature']  # gpt-5 models use reasoning_effort instead of temperature

        # Add temperature only if model supports it
        if model not in self.no_support_temperature_models and not get_settings().config.custom_reasoning_model:
            # get_logger().info(f"Adding temperature with value {temperature} to model {model}.")

View file

@@ -398,11 +398,6 @@ def get_pr_multi_diffs(git_provider: GitProvider,
    # Sort files by main language
    pr_languages = sort_files_by_main_languages(git_provider.get_languages(), diff_files)

    # Sort files within each language group by tokens in descending order
    sorted_files = []
    for lang in pr_languages:
        sorted_files.extend(sorted(lang['files'], key=lambda x: x.tokens, reverse=True))

    # Get the maximum number of extra lines before and after the patch
    PATCH_EXTRA_LINES_BEFORE = get_settings().config.patch_extra_lines_before
    PATCH_EXTRA_LINES_AFTER = get_settings().config.patch_extra_lines_after
@@ -420,6 +415,11 @@ def get_pr_multi_diffs(git_provider: GitProvider,
    if total_tokens + OUTPUT_BUFFER_TOKENS_SOFT_THRESHOLD < get_max_tokens(model):
        return ["\n".join(patches_extended)] if patches_extended else []

    # Sort files within each language group by tokens in descending order
    sorted_files = []
    for lang in pr_languages:
        sorted_files.extend(sorted(lang['files'], key=lambda x: x.tokens, reverse=True))

    patches = []
    final_diff_list = []
    total_tokens = token_handler.prompt_tokens

View file

@@ -32,13 +32,34 @@ class GitLabProvider(GitProvider):
        if not gitlab_url:
            raise ValueError("GitLab URL is not set in the config file")
        self.gitlab_url = gitlab_url
        ssl_verify = get_settings().get("GITLAB.SSL_VERIFY", True)
        gitlab_access_token = get_settings().get("GITLAB.PERSONAL_ACCESS_TOKEN", None)
        if not gitlab_access_token:
            raise ValueError("GitLab personal access token is not set in the config file")
        self.gl = gitlab.Gitlab(
            url=gitlab_url,
            oauth_token=gitlab_access_token
        )
        # Authentication method selection via configuration
        auth_method = get_settings().get("GITLAB.AUTH_TYPE", "oauth_token")

        # Basic validation of authentication type
        if auth_method not in ["oauth_token", "private_token"]:
            raise ValueError(f"Unsupported GITLAB.AUTH_TYPE: '{auth_method}'. "
                             f"Must be 'oauth_token' or 'private_token'.")

        # Create GitLab instance based on authentication method
        try:
            if auth_method == "oauth_token":
                self.gl = gitlab.Gitlab(
                    url=gitlab_url,
                    oauth_token=gitlab_access_token,
                    ssl_verify=ssl_verify
                )
            else:  # private_token
                self.gl = gitlab.Gitlab(
                    url=gitlab_url,
                    private_token=gitlab_access_token
                )
        except Exception as e:
            get_logger().error(f"Failed to create GitLab instance: {e}")
            raise ValueError(f"Unable to authenticate with GitLab: {e}")

        self.max_comment_chars = 65000
        self.id_project = None
        self.id_mr = None
@@ -52,6 +73,7 @@ class GitLabProvider(GitProvider):
            r"^@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@[ ]?(.*)")
        self.incremental = incremental

    def is_supported(self, capability: str) -> bool:
        if capability in ['get_issue_comments', 'create_inline_comment', 'publish_inline_comments',
                          'publish_file_comments']:  # gfm_markdown is supported in gitlab !
@@ -719,7 +741,7 @@ class GitLabProvider(GitProvider):
            get_logger().error(f"Repo URL: {repo_url_to_clone} is not a valid gitlab URL.")
            return None
        (scheme, base_url) = repo_url_to_clone.split("gitlab.")
        access_token = self.gl.oauth_token
        access_token = getattr(self.gl, 'oauth_token', None) or getattr(self.gl, 'private_token', None)
        if not all([scheme, access_token, base_url]):
            get_logger().error(f"Either no access token found, or repo URL: {repo_url_to_clone} "
                               f"is missing prefix: {scheme} and/or base URL: {base_url}.")

View file

@@ -287,7 +287,7 @@ def handle_ask_line(body, data):
        question = body.replace('/ask', '').strip()
        path = data['object_attributes']['position']['new_path']
        side = 'RIGHT'  # if line_range_['start']['type'] == 'new' else 'LEFT'
        _id = data['object_attributes']["discussion_id"]
        comment_id = data['object_attributes']["discussion_id"]
        get_logger().info("Handling line ")
        body = f"/ask_line --line_start={start_line} --line_end={end_line} --side={side} --file_name={path} --comment_id={comment_id} {question}"
    except Exception as e:

View file

@@ -6,8 +6,8 @@
[config]
# models
model="o4-mini"
fallback_models=["gpt-4.1"]
model="gpt-5-2025-08-07"
fallback_models=["o4-mini"]
#model_reasoning="o4-mini" # dedicated reasoning model for self-reflection
#model_weak="gpt-4o" # optional, a weaker model to use for some easier tasks
# CLI
@@ -284,6 +284,8 @@ push_commands = [
    "/describe",
    "/review",
]
# Configure SSL validation for GitLab. Can be either set to the path of a custom CA or disabled entirely.
# ssl_verify = true
[gitea_app]
url = "https://gitea.com"

View file

@@ -13,7 +13,7 @@ google-cloud-aiplatform==1.38.0
google-generativeai==0.8.3
google-cloud-storage==2.10.0
Jinja2==3.1.2
litellm==1.70.4
litellm==1.75.2
loguru==0.7.2
msrest==0.7.1
openai>=1.55.3