Merge remote-tracking branch 'origin/main'

This commit is contained in:
ofir-frd 2025-08-11 11:26:22 +03:00
commit 5e28f8a1d1
21 changed files with 185 additions and 78 deletions


@ -72,6 +72,10 @@ You can receive automatic feedback from Qodo Merge on your local IDE after each
## News and Updates
## Aug 8, 2025
Added full support for GPT-5 models. View the [benchmark results](https://qodo-merge-docs.qodo.ai/pr_benchmark/#pr-benchmark-results) for details on the performance of GPT-5 models in PR-Agent.
## Jul 1, 2025

You can now receive automatic feedback from Qodo Merge in your local IDE after each commit. Read more about it [here](https://github.com/qodo-ai/agents/tree/main/agents/qodo-merge-post-commit).
@ -208,7 +212,7 @@ ___
## Try It Now

Try the GPT-5 powered PR-Agent instantly on _your public GitHub repository_. Just mention `@CodiumAI-Agent` and add the desired command in any PR comment. The agent will generate a response based on your command.

For example, add a comment to any pull request with the following text:

```


@ -4,7 +4,7 @@
With a single-click installation you will gain access to a context-aware chat on your pull request's code, a toolbar extension with multiple AI feedbacks, Qodo Merge filters, and additional abilities.

The extension is powered by top code models like GPT-5. All the extension's features are free to use on public repositories.

For private repositories, you will need to install [Qodo Merge](https://github.com/apps/qodo-merge-pro){:target="_blank"} in addition to the extension.

For a demonstration of how to install Qodo Merge and use it with the Chrome extension, please refer to the tutorial video at the provided [link](https://codium.ai/images/pr_agent/private_repos.mp4){:target="_blank"}.


@ -44,11 +44,18 @@ enable_rag=true
RAG capability is exclusively available in the following tools:

=== "`/ask`"

    The [`/ask`](https://qodo-merge-docs.qodo.ai/tools/ask/) tool can access broader repository context through the RAG feature when answering questions that go beyond the PR scope alone.
    The _References_ section displays the additional repository content consulted to formulate the answer.

    ![RAGed ask tool](https://codium.ai/images/pr_agent/rag_ask.png){width=640}

=== "`/compliance`"

    The [`/compliance`](https://qodo-merge-docs.qodo.ai/tools/compliance/) tool offers the _Codebase Code Duplication Compliance_ section which contains feedback based on the RAG references.
    This section highlights possible code duplication issues in the PR, providing developers with insights into potential code quality concerns.

    ![RAGed compliance tool](https://codium.ai/images/pr_agent/rag_compliance.png){width=640}

=== "`/implement`"

    The [`/implement`](https://qodo-merge-docs.qodo.ai/tools/implement/) tool utilizes the RAG feature to provide comprehensive context of the repository codebase, allowing it to generate more refined code output.
@ -56,11 +63,11 @@ RAG capability is exclusively available in the following tools:
    ![RAGed implement tool](https://codium.ai/images/pr_agent/rag_implement.png){width=640}

=== "`/review`"

    The [`/review`](https://qodo-merge-docs.qodo.ai/tools/review/) tool offers the _Focus area from RAG data_ which contains feedback based on the RAG references analysis.
    The complete list of references found relevant to the PR will be shown in the _References_ section, helping developers understand the broader context by exploring the provided references.

    ![RAGed review tool](https://codium.ai/images/pr_agent/rag_review.png){width=640}

## Limitations


@ -26,7 +26,7 @@ ___
#### Answer:<span style="display:none;">2</span>

- Modern AI models, like Claude Sonnet and GPT-5, are improving rapidly but remain imperfect. Users should critically evaluate all suggestions rather than accepting them automatically.
- AI errors are rare, but possible. A main value from reviewing the code suggestions lies in their high probability of catching **mistakes or bugs made by the PR author**. We believe it's worth spending 30-60 seconds reviewing suggestions, even if some aren't relevant, as this practice can enhance code quality and prevent bugs in production.


@ -1,3 +1,5 @@
This page covers how to install and run PR-Agent as a GitHub Action or GitHub App, and how to configure it for your needs.

## Run as a GitHub Action

You can use our pre-built GitHub Action Docker image to run PR-Agent as a GitHub Action.
@ -51,13 +53,13 @@ When you open your next PR, you should see a comment from `github-actions` bot w
See detailed usage instructions in the [USAGE GUIDE](https://qodo-merge-docs.qodo.ai/usage-guide/automations_and_usage/#github-action)

### Configuration Examples

This section provides detailed, step-by-step examples for configuring PR-Agent with different models and advanced options in GitHub Actions.

#### Quick Start Examples

##### Basic Setup (OpenAI Default)

Copy this minimal workflow to get started with the default OpenAI models:
@ -83,7 +85,7 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

##### Gemini Setup

Ready-to-use workflow for Gemini models:
@ -145,7 +147,7 @@ jobs:
github_action_config.auto_improve: "true"
```

#### Basic Configuration with Tool Controls

Start with this enhanced workflow that includes tool configuration:
@ -178,9 +180,9 @@ jobs:
github_action_config.pr_actions: '["opened", "reopened", "ready_for_review", "review_requested"]'
```

#### Switching Models

##### Using Gemini (Google AI Studio)

To use Gemini models instead of the default OpenAI models:
@ -199,11 +201,12 @@ To use Gemini models instead of the default OpenAI models:
```

**Required Secrets:**

- Add `GEMINI_API_KEY` to your repository secrets (get it from [Google AI Studio](https://aistudio.google.com/))

**Note:** When using non-OpenAI models like Gemini, you don't need to set `OPENAI_KEY` - only the model-specific API key is required.

##### Using Claude (Anthropic)

To use Claude models:
@ -222,11 +225,12 @@ To use Claude models:
```

**Required Secrets:**

- Add `ANTHROPIC_KEY` to your repository secrets (get it from [Anthropic Console](https://console.anthropic.com/))

**Note:** When using non-OpenAI models like Claude, you don't need to set `OPENAI_KEY` - only the model-specific API key is required.

##### Using Azure OpenAI

To use Azure OpenAI services:
@ -249,11 +253,12 @@ To use Azure OpenAI services:
```

**Required Secrets:**

- `AZURE_OPENAI_KEY`: Your Azure OpenAI API key
- `AZURE_OPENAI_ENDPOINT`: Your Azure OpenAI endpoint URL
- `AZURE_OPENAI_DEPLOYMENT`: Your deployment name

##### Using Local Models (Ollama)

To use local models via Ollama:
@ -275,9 +280,9 @@ To use local models via Ollama:
**Note:** For local models, you'll need to use a self-hosted runner with Ollama installed, as GitHub Actions hosted runners cannot access localhost services.

#### Advanced Configuration Options

##### Custom Review Instructions

Add specific instructions for the review process:
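A minimal job skeleton for such a runner might look like the following (a sketch; the job name is illustrative, and `runs-on: self-hosted` assumes the default self-hosted runner label):

```yaml
# Sketch: execute PR-Agent on a self-hosted runner that can reach a local Ollama server
jobs:
  pr_agent_job:
    runs-on: self-hosted  # GitHub-hosted runners cannot reach localhost services
```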
@ -293,7 +298,7 @@ Add specific instructions for the review process:
github_action_config.auto_improve: "true"
```

##### Language-Specific Configuration

Configure for specific programming languages:
@ -311,7 +316,7 @@ Configure for specific programming languages:
github_action_config.auto_improve: "true"
```

##### Selective Tool Execution

Run only specific tools automatically:
@ -327,7 +332,7 @@ Run only specific tools automatically:
github_action_config.pr_actions: '["opened", "reopened"]'
```

#### Using Configuration Files

Instead of setting all options via environment variables, you can use a `.pr_agent.toml` file in your repository root:
@ -375,9 +380,9 @@ jobs:
github_action_config.auto_improve: "true"
```

#### Troubleshooting Common Issues

##### Model Not Found Errors

If you get model not found errors:
@ -387,7 +392,7 @@ If you get model not found errors:
3. **Check model availability**: Some models may not be available in all regions or may require specific access

##### Environment Variable Format

Remember these key points about environment variables:
@ -396,7 +401,7 @@ Remember these key points about environment variables:
- Arrays should be JSON strings: `'["item1", "item2"]'`
- Model names are case-sensitive

##### Rate Limiting

If you encounter rate limiting:
@ -413,7 +418,7 @@ If you encounter rate limiting:
github_action_config.auto_improve: "true"
```

##### Common Error Messages and Solutions

**Error: "Model not found"**

- **Solution**: Check the model name format and ensure it matches the exact identifier. See the [Changing a model in PR-Agent](../usage-guide/changing_a_model.md) guide for supported models and their correct identifiers.
@ -435,22 +440,25 @@ If you encounter rate limiting:
```

**Error: "Invalid JSON format"**

- **Solution**: Check that arrays are properly formatted as JSON strings:

```yaml
# Correct
config.fallback_models: '["model1", "model2"]'

# Incorrect (interpreted as a YAML list, not a string)
config.fallback_models: ["model1", "model2"]
```

##### Debugging Tips
1. **Enable verbose logging**: Add `config.verbosity_level: "2"` to see detailed logs
2. **Check GitHub Actions logs**: Look at the step output for specific error messages
3. **Test with minimal configuration**: Start with just the basic setup and add options one by one
4. **Verify secrets**: Double-check that all required secrets are set in your repository settings
##### Performance Optimization

For better performance with large repositories:
@ -468,9 +476,10 @@ For better performance with large repositories:
github_action_config.auto_improve: "true"
```

#### Reference

For more detailed configuration options, see:

- [Changing a model in PR-Agent](../usage-guide/changing_a_model.md)
- [Configuration options](../usage-guide/configuration_options.md)
- [Automations and usage](../usage-guide/automations_and_usage.md#github-action)
@ -602,7 +611,9 @@ cp pr_agent/settings/.secrets_template.toml pr_agent/settings/.secrets.toml
> For more information please check out the [USAGE GUIDE](../usage-guide/automations_and_usage.md#github-app)

---

## Additional deployment methods

### Deploy as a Lambda Function

Note that since AWS Lambda env vars cannot have "." in the name, you can replace each "." in an env variable with "__".<br>
For example: `GITHUB.WEBHOOK_SECRET` --> `GITHUB__WEBHOOK_SECRET`
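As a hypothetical illustration (the values are placeholders):

```shell
# GITHUB.WEBHOOK_SECRET -> GITHUB__WEBHOOK_SECRET
export GITHUB__WEBHOOK_SECRET="<webhook-secret>"
# OPENAI.KEY -> OPENAI__KEY
export OPENAI__KEY="<openai-api-key>"
```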
@ -611,9 +622,10 @@ For example: `GITHUB.WEBHOOK_SECRET` --> `GITHUB__WEBHOOK_SECRET`
2. Build a Docker image that can be used as a Lambda function

```shell
docker buildx build --platform=linux/amd64 . -t codiumai/pr-agent:github_lambda --target github_lambda -f docker/Dockerfile.lambda
```

(Note: `--target github_lambda` is optional as it's the default target)
3. Push image to ECR
@ -628,7 +640,7 @@ For example: `GITHUB.WEBHOOK_SECRET` --> `GITHUB__WEBHOOK_SECRET`
7. Go back to steps 8-9 of [Method 5](#run-as-a-github-app) with the function URL as your Webhook URL.
   The Webhook URL would look like `https://<LAMBDA_FUNCTION_URL>/api/v1/github_webhooks`

#### Using AWS Secrets Manager

For production Lambda deployments, use AWS Secrets Manager instead of environment variables:
@ -652,7 +664,7 @@ CONFIG__SECRET_PROVIDER=aws_secrets_manager
---

### AWS CodeCommit Setup

Not all features have been added to CodeCommit yet. Currently, CodeCommit support runs the Qodo Merge CLI from the command line, using AWS credentials stored in environment variables. (More features will be added in the future.) The following instructions show how to have Qodo Merge review your CodeCommit pull request from the command line:
@ -669,7 +681,7 @@ Not all features have been added to CodeCommit yet. As of right now, CodeCommit
---

##### AWS CodeCommit IAM Role Example

Example IAM permissions for that user to allow access to CodeCommit:
@ -701,7 +713,7 @@ Example IAM permissions to that user to allow access to CodeCommit:
}
```

##### AWS CodeCommit Access Key and Secret

Example of setting the Access Key and Secret using environment variables:
@ -711,7 +723,7 @@ export AWS_SECRET_ACCESS_KEY="XXXXXXXXXXXXXXXX"
export AWS_DEFAULT_REGION="us-east-1"
```

##### AWS CodeCommit CLI Example

After you set up AWS CodeCommit using the instructions above, here is an example CLI run that tells pr-agent to **review** a given pull request.
(Replace your specific PYTHONPATH and PR URL in the example)


@ -42,6 +42,9 @@ Note that if your base branches are not protected, don't set the variables as `p
> **Note**: The `$CI_SERVER_FQDN` variable is available starting from GitLab version 16.10. If you're using an earlier version, this variable will not be available. However, you can combine `$CI_SERVER_HOST` and `$CI_SERVER_PORT` to achieve the same result. Please ensure you're using a compatible version or adjust your configuration.
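One possible workaround for older instances is to assemble the value from the two variables yourself (a sketch; the variable name `SERVER_FQDN` is an assumption, not a GitLab built-in):

```yaml
# Sketch for GitLab < 16.10, where $CI_SERVER_FQDN does not exist
variables:
  SERVER_FQDN: "$CI_SERVER_HOST:$CI_SERVER_PORT"
```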
> **Note**: The `gitlab__SSL_VERIFY` environment variable can be used to specify the path to a custom CA certificate bundle for SSL verification. GitLab exposes the `$CI_SERVER_TLS_CA_FILE` variable, which points to the custom CA certificate file configured in your GitLab instance.
> Alternatively, SSL verification can be disabled entirely by setting `gitlab__SSL_VERIFY=false`, although this is not recommended.
## Run a GitLab webhook server

1. In GitLab create a new user and give it "Reporter" role ("Developer" if using Pro version of the agent) for the intended group or project.
@ -67,6 +70,7 @@ git clone https://github.com/qodo-ai/pr-agent.git
2. In the secrets file/variables:
- Set your AI model key in the respective section
- In the [gitlab] section, set `personal_access_token` (with token from step 2) and `shared_secret` (with secret from step 3)
- **Authentication type**: Set `auth_type` to `"private_token"` for older GitLab versions (e.g., 11.x) or private deployments. Default is `"oauth_token"` for gitlab.com and newer versions.
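Taken together, the `[gitlab]` section of the secrets file might look like this (a sketch with placeholder values):

```toml
[gitlab]
personal_access_token = "<token-from-step-2>"
shared_secret = "<secret-from-step-3>"
auth_type = "oauth_token"  # use "private_token" for older GitLab versions or private deployments
```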
6. Build a Docker image for the app and optionally push it to a Docker repository. We'll use Dockerhub as an example:
@ -82,6 +86,7 @@ CONFIG__GIT_PROVIDER=gitlab
GITLAB__PERSONAL_ACCESS_TOKEN=<personal_access_token>
GITLAB__SHARED_SECRET=<shared_secret>
GITLAB__URL=https://gitlab.com
GITLAB__AUTH_TYPE=oauth_token # Use "private_token" for older GitLab versions
OPENAI__KEY=<your_openai_api_key>
```


@ -33,7 +33,7 @@ To use Qodo Merge on your private GitHub Enterprise Server, you will need to [co
### GitHub Open Source Projects

For open-source projects, Qodo Merge is available for free usage. To install Qodo Merge for your open-source repositories, use the following marketplace [link](https://github.com/marketplace/qodo-merge-pro-for-open-source).

## Install Qodo Merge for Bitbucket


@ -24,7 +24,7 @@ Here are some of the additional features and capabilities that Qodo Merge offers
| Feature | Description |
| ------- | ----------- |
| [**Model selection**](https://qodo-merge-docs.qodo.ai/usage-guide/PR_agent_pro_models/) | Choose the model that best fits your needs |
| [**Global and wiki configuration**](https://qodo-merge-docs.qodo.ai/usage-guide/configuration_options/) | Control configurations for many repositories from a single location; <br>Edit configuration of a single repo without committing code |
| [**Apply suggestions**](https://qodo-merge-docs.qodo.ai/tools/improve/#overview) | Generate committable code from the relevant suggestions interactively by clicking on a checkbox |
| [**Suggestions impact**](https://qodo-merge-docs.qodo.ai/tools/improve/#assessing-impact) | Automatically mark suggestions that were implemented by the user (either directly in GitHub, or indirectly in the IDE) to enable tracking of the impact of the suggestions |


@ -34,6 +34,24 @@ A list of the models used for generating the baseline suggestions, and example r
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:left;">GPT-5</td>
<td style="text-align:left;">2025-08-07</td>
<td style="text-align:left;">medium</td>
<td style="text-align:center;"><b>72.2</b></td>
</tr>
<tr>
<td style="text-align:left;">GPT-5</td>
<td style="text-align:left;">2025-08-07</td>
<td style="text-align:left;">low</td>
<td style="text-align:center;"><b>67.8</b></td>
</tr>
<tr>
<td style="text-align:left;">GPT-5</td>
<td style="text-align:left;">2025-08-07</td>
<td style="text-align:left;">minimal</td>
<td style="text-align:center;"><b>62.7</b></td>
</tr>
<tr>
<td style="text-align:left;">o3</td>
<td style="text-align:left;">2025-04-16</td>


@ -5,10 +5,10 @@
The `compliance` tool performs comprehensive compliance checks on PR code changes, validating them against security standards, ticket requirements, and custom organizational compliance checklists. This helps teams, enterprises, and agents maintain consistent code quality and security practices while ensuring that development work aligns with business requirements.

=== "Fully Compliant"

    ![compliance_overview](https://codium.ai/images/pr_agent/compliance_full.png){width=512}

=== "Partially Compliant"

    ![compliance_overview](https://codium.ai/images/pr_agent/compliance_partial.png){width=512}

___


@ -202,6 +202,20 @@ publish_labels = false
to prevent Qodo Merge from publishing labels when running the `describe` tool.
#### Enable using commands in PR
You can configure your GitHub Actions workflow to trigger on `issue_comment` [events](https://docs.github.com/en/actions/reference/workflows-and-actions/events-that-trigger-workflows#issue_comment) (`created` and `edited`).
Example GitHub Actions workflow configuration:
```yaml
on:
issue_comment:
types: [created, edited]
```
When this is configured, Qodo Merge can be invoked by commenting on the PR.
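Once enabled, a command is issued simply by commenting on the pull request, for example:

```
/review
```

Any other supported command, such as `/improve` or `/describe`, works the same way.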
#### Quick Reference: Model Configuration in GitHub Actions

For detailed step-by-step examples of configuring different models (Gemini, Claude, Azure OpenAI, etc.) in GitHub Actions, see the [Configuration Examples](../installation/github.md#configuration-examples) section in the installation guide.


@ -107,7 +107,7 @@ Please note that the `custom_model_max_tokens` setting should be configured in a
!!! note "Local models vs commercial models"

    Qodo Merge is compatible with almost any AI model, but analyzing complex code repositories and pull requests requires a model specifically optimized for code analysis.

    Commercial models such as GPT-5, Claude Sonnet, and Gemini have demonstrated robust capabilities in generating structured output for code analysis tasks with large input. In contrast, most open-source models currently available (as of January 2025) face challenges with these complex tasks.

    Based on our testing, local open-source models are suitable for experimentation and learning purposes (mainly for the `ask` command), but they are not suitable for production-level code analysis tasks.
@@ -379,7 +379,7 @@ To bypass chat templates and temperature controls, set `config.custom_reasoning_
```toml
[config]
reasoning_effort = "medium" # "low", "medium", "high"
```

With the OpenAI models that support reasoning effort (e.g., o4-mini), you can specify the reasoning effort via the `config` section. The default value is `medium`. You can change it to `high` or `low` based on your usage.
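As a rough sketch (the helper name is hypothetical, not PR-Agent's actual code), the setting can be validated and turned into a completion keyword argument like this:

```python
# Hypothetical helper: validate [config].reasoning_effort and build the
# kwargs that would be forwarded to the model call. "medium" is the default.
VALID_EFFORTS = ("low", "medium", "high")

def build_reasoning_kwargs(config: dict) -> dict:
    effort = config.get("reasoning_effort", "medium")
    if effort not in VALID_EFFORTS:
        raise ValueError(f"Unsupported reasoning_effort: {effort!r}")
    return {"reasoning_effort": effort}
```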


@@ -1,5 +1,5 @@
The default models used by Qodo Merge (June 2025) are a combination of GPT-5 and Gemini 2.5 Pro.

### Selecting a Specific Model
@@ -19,11 +19,11 @@ To restrict Qodo Merge to using only `o4-mini`, add this setting:
model="o4-mini"
```

To restrict Qodo Merge to using only `GPT-5`, add this setting:

```toml
[config]
model="gpt-5"
```

To restrict Qodo Merge to using only `gemini-2.5-pro`, add this setting:
@@ -33,10 +33,9 @@ To restrict Qodo Merge to using only `gemini-2.5-pro`, add this setting:
model="gemini-2.5-pro"
```

To restrict Qodo Merge to using only `claude-4-sonnet`, add this setting:

```toml
[config]
model="claude-4-sonnet"
```


@@ -28,6 +28,10 @@ MAX_TOKENS = {
'gpt-4.1-mini-2025-04-14': 1047576,
'gpt-4.1-nano': 1047576,
'gpt-4.1-nano-2025-04-14': 1047576,
'gpt-5-nano': 200000, # 200K, but may be limited by config.max_model_tokens
'gpt-5-mini': 200000, # 200K, but may be limited by config.max_model_tokens
'gpt-5': 200000, # 200K, but may be limited by config.max_model_tokens
'gpt-5-2025-08-07': 200000, # 200K, but may be limited by config.max_model_tokens
'o1-mini': 128000, # 128K, but may be limited by config.max_model_tokens
'o1-mini-2024-09-12': 128000, # 128K, but may be limited by config.max_model_tokens
'o1-preview': 128000, # 128K, but may be limited by config.max_model_tokens
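The "may be limited by config.max_model_tokens" comments describe a clamp that can be sketched as follows (the helper is illustrative; only the table entries are from the source):

```python
# Illustrative clamp: the effective context window is the model's
# MAX_TOKENS entry, optionally capped by a user-set config.max_model_tokens.
MAX_TOKENS = {
    'gpt-5': 200000,
    'gpt-5-2025-08-07': 200000,
    'o1-mini': 128000,
}

def effective_max_tokens(model: str, max_model_tokens: int = 0) -> int:
    limit = MAX_TOKENS[model]
    if max_model_tokens:
        limit = min(limit, max_model_tokens)
    return limit
```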


@@ -288,6 +288,21 @@ class LiteLLMAIHandler(BaseAiHandler):
messages[1]["content"] = [{"type": "text", "text": messages[1]["content"]},
                          {"type": "image_url", "image_url": {"url": img_path}}]
thinking_kwargs_gpt5 = None
if model.startswith('gpt-5'):
if model.endswith('_thinking'):
thinking_kwargs_gpt5 = {
"reasoning_effort": 'low',
"allowed_openai_params": ["reasoning_effort"],
}
else:
thinking_kwargs_gpt5 = {
"reasoning_effort": 'minimal',
"allowed_openai_params": ["reasoning_effort"],
}
model = 'openai/' + model.replace('_thinking', '')  # remove the _thinking suffix
# Currently, some models do not support separate system and user prompts
if model in self.user_message_only_models or get_settings().config.custom_reasoning_model:
user = f"{system}\n\n\n{user}"
@@ -315,6 +330,11 @@ class LiteLLMAIHandler(BaseAiHandler):
# get_logger().info(f"Adding temperature with value {temperature} to model {model}.")
kwargs["temperature"] = temperature
if thinking_kwargs_gpt5:
kwargs.update(thinking_kwargs_gpt5)
if 'temperature' in kwargs:
del kwargs['temperature']
# Add reasoning_effort if model supports it
if (model in self.support_reasoning_models):
supported_reasoning_efforts = [ReasoningEffort.HIGH.value, ReasoningEffort.MEDIUM.value, ReasoningEffort.LOW.value]
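Pulled out of the handler, the GPT-5 branch above amounts to this pure function (a standalone restatement for clarity, not the handler itself):

```python
def resolve_gpt5(model: str) -> tuple[str, dict]:
    """Map an optional `_thinking` suffix to a reasoning-effort level and
    normalize the model name with litellm's `openai/` prefix."""
    kwargs = {}
    if model.startswith('gpt-5'):
        effort = 'low' if model.endswith('_thinking') else 'minimal'
        kwargs = {
            "reasoning_effort": effort,
            "allowed_openai_params": ["reasoning_effort"],
        }
        model = 'openai/' + model.replace('_thinking', '')
    return model, kwargs
```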


@@ -398,11 +398,6 @@ def get_pr_multi_diffs(git_provider: GitProvider,
# Sort files by main language
pr_languages = sort_files_by_main_languages(git_provider.get_languages(), diff_files)
# Get the maximum number of extra lines before and after the patch
PATCH_EXTRA_LINES_BEFORE = get_settings().config.patch_extra_lines_before
PATCH_EXTRA_LINES_AFTER = get_settings().config.patch_extra_lines_after
@@ -420,6 +415,11 @@ def get_pr_multi_diffs(git_provider: GitProvider,
if total_tokens + OUTPUT_BUFFER_TOKENS_SOFT_THRESHOLD < get_max_tokens(model):
return ["\n".join(patches_extended)] if patches_extended else []
# Sort files within each language group by tokens in descending order
sorted_files = []
for lang in pr_languages:
sorted_files.extend(sorted(lang['files'], key=lambda x: x.tokens, reverse=True))
patches = []
final_diff_list = []
total_tokens = token_handler.prompt_tokens
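The relocated sort can be exercised in isolation; `tokens` is the per-file token count attribute used above (the `SimpleNamespace` stand-ins below are for illustration only):

```python
from types import SimpleNamespace

def sort_files_by_tokens(pr_languages: list) -> list:
    # Within each language group, order files by token count, descending.
    sorted_files = []
    for lang in pr_languages:
        sorted_files.extend(sorted(lang['files'], key=lambda f: f.tokens, reverse=True))
    return sorted_files

langs = [{'files': [SimpleNamespace(name='a.py', tokens=10),
                    SimpleNamespace(name='b.py', tokens=40)]},
         {'files': [SimpleNamespace(name='c.md', tokens=5)]}]
```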


@@ -32,13 +32,34 @@ class GitLabProvider(GitProvider):
if not gitlab_url:
raise ValueError("GitLab URL is not set in the config file")
self.gitlab_url = gitlab_url
ssl_verify = get_settings().get("GITLAB.SSL_VERIFY", True)
gitlab_access_token = get_settings().get("GITLAB.PERSONAL_ACCESS_TOKEN", None)
if not gitlab_access_token:
raise ValueError("GitLab personal access token is not set in the config file")
# Authentication method selection via configuration
auth_method = get_settings().get("GITLAB.AUTH_TYPE", "oauth_token")

# Basic validation of authentication type
if auth_method not in ["oauth_token", "private_token"]:
raise ValueError(f"Unsupported GITLAB.AUTH_TYPE: '{auth_method}'. "
f"Must be 'oauth_token' or 'private_token'.")
# Create GitLab instance based on authentication method
try:
if auth_method == "oauth_token":
self.gl = gitlab.Gitlab(
url=gitlab_url,
oauth_token=gitlab_access_token,
ssl_verify=ssl_verify
)
else: # private_token
self.gl = gitlab.Gitlab(
url=gitlab_url,
private_token=gitlab_access_token
)
except Exception as e:
get_logger().error(f"Failed to create GitLab instance: {e}")
raise ValueError(f"Unable to authenticate with GitLab: {e}")
self.max_comment_chars = 65000
self.id_project = None
self.id_mr = None
@@ -52,6 +73,7 @@ class GitLabProvider(GitProvider):
r"^@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@[ ]?(.*)")
self.incremental = incremental

def is_supported(self, capability: str) -> bool:
if capability in ['get_issue_comments', 'create_inline_comment', 'publish_inline_comments',
'publish_file_comments']: # gfm_markdown is supported in gitlab !
@@ -719,7 +741,7 @@ class GitLabProvider(GitProvider):
get_logger().error(f"Repo URL: {repo_url_to_clone} is not a valid gitlab URL.")
return None
(scheme, base_url) = repo_url_to_clone.split("gitlab.")
access_token = getattr(self.gl, 'oauth_token', None) or getattr(self.gl, 'private_token', None)
if not all([scheme, access_token, base_url]):
get_logger().error(f"Either no access token found, or repo URL: {repo_url_to_clone} "
f"is missing prefix: {scheme} and/or base URL: {base_url}.")
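The keyword arguments handed to `gitlab.Gitlab` differ by auth method; note that `ssl_verify` is only forwarded on the `oauth_token` path, matching the constructor code above. A testable sketch with a hypothetical helper:

```python
def gitlab_auth_kwargs(token: str, auth_type: str = "oauth_token",
                       ssl_verify: bool = True) -> dict:
    # Hypothetical helper mirroring the GITLAB.AUTH_TYPE selection above.
    if auth_type not in ("oauth_token", "private_token"):
        raise ValueError(f"Unsupported GITLAB.AUTH_TYPE: {auth_type!r}")
    if auth_type == "oauth_token":
        # ssl_verify is passed on this path only, as in the code above.
        return {"oauth_token": token, "ssl_verify": ssl_verify}
    return {"private_token": token}
```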


@@ -287,7 +287,7 @@ def handle_ask_line(body, data):
question = body.replace('/ask', '').strip()
path = data['object_attributes']['position']['new_path']
side = 'RIGHT' # if line_range_['start']['type'] == 'new' else 'LEFT'
comment_id = data['object_attributes']["discussion_id"]
get_logger().info("Handling line ")
body = f"/ask_line --line_start={start_line} --line_end={end_line} --side={side} --file_name={path} --comment_id={comment_id} {question}"
except Exception as e:
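The rewritten `body` is just a CLI-style command assembled from the discussion metadata; extracting the f-string into a hypothetical helper makes the shape easy to verify:

```python
def build_ask_line_command(start_line: int, end_line: int, side: str,
                           path: str, comment_id: str, question: str) -> str:
    # Same template as the handler above, isolated for testing.
    return (f"/ask_line --line_start={start_line} --line_end={end_line} "
            f"--side={side} --file_name={path} --comment_id={comment_id} {question}")
```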


@@ -6,8 +6,8 @@
[config]
# models
model="gpt-5-2025-08-07"
fallback_models=["o4-mini"]
#model_reasoning="o4-mini" # dedicated reasoning model for self-reflection
#model_weak="gpt-4o" # optional, a weaker model to use for some easier tasks
# CLI
@@ -284,6 +284,8 @@ push_commands = [
"/describe",
"/review",
]
# Configure SSL validation for GitLab. Can either be set to the path of a custom CA bundle or disabled entirely.
# ssl_verify = true
[gitea_app]
url = "https://gitea.com"


@@ -641,7 +641,7 @@ class PRDescription:
continue
filename = file['filename'].replace("'", "`").replace('"', '`')
changes_summary = file.get('changes_summary', "")
if not changes_summary and self.vars.get('include_file_summary_changes', True):
get_logger().warning(f"Empty changes summary in file label dict, skipping file",
artifact={"file": file})
continue


@@ -1,4 +1,4 @@
aiohttp==3.10.2
anthropic>=0.52.0
#anthropic[vertex]==0.47.1
atlassian-python-api==3.41.4
@@ -7,13 +7,13 @@ azure-identity==1.15.0
boto3==1.33.6
certifi==2024.8.30
dynaconf==3.2.4
fastapi==0.115.6
GitPython==3.1.41
google-cloud-aiplatform==1.38.0
google-generativeai==0.8.3
google-cloud-storage==2.10.0
Jinja2==3.1.2
litellm==1.73.6
loguru==0.7.2
msrest==0.7.1
openai>=1.55.3
@@ -27,7 +27,7 @@ tiktoken==0.8.0
ujson==5.8.0
uvicorn==0.22.0
tenacity==8.2.3
gunicorn==23.0.0
pytest-cov==5.0.0
pydantic==2.8.2
html2text==2024.2.26