Merge branch 'main' into fix/gitlab-private-deployment-401

commit 65457b2569
Tal 2025-08-06 08:30:54 +03:00, committed by GitHub
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
12 changed files with 159 additions and 27 deletions


@@ -44,11 +44,18 @@ enable_rag=true
RAG capability is exclusively available in the following tools:
=== "`/review`"
The [`/review`](https://qodo-merge-docs.qodo.ai/tools/review/) tool offers the _Focus area from RAG data_ section, which contains feedback based on the analysis of the RAG references.
The complete list of references found relevant to the PR will be shown in the _References_ section, helping developers understand the broader context by exploring the provided references.
=== "`/ask`"
The [`/ask`](https://qodo-merge-docs.qodo.ai/tools/ask/) tool can access broader repository context through the RAG feature when answering questions that go beyond the PR scope alone.
The _References_ section displays the additional repository content consulted to formulate the answer.
![RAGed review tool](https://codium.ai/images/pr_agent/rag_review.png){width=640}
![RAGed ask tool](https://codium.ai/images/pr_agent/rag_ask.png){width=640}
=== "`/compliance`"
The [`/compliance`](https://qodo-merge-docs.qodo.ai/tools/compliance/) tool offers the _Codebase Code Duplication Compliance_ section, which contains feedback based on the RAG references.
This section highlights possible code duplication issues in the PR, providing developers with insights into potential code quality concerns.
![RAGed compliance tool](https://codium.ai/images/pr_agent/rag_compliance.png){width=640}
=== "`/implement`"
The [`/implement`](https://qodo-merge-docs.qodo.ai/tools/implement/) tool utilizes the RAG feature to provide comprehensive context of the repository codebase, allowing it to generate more refined code output.
@@ -56,11 +63,11 @@ RAG capability is exclusively available in the following tools:
![RAGed implement tool](https://codium.ai/images/pr_agent/rag_implement.png){width=640}
=== "`/ask`"
The [`/ask`](https://qodo-merge-docs.qodo.ai/tools/ask/) tool can access broader repository context through the RAG feature when answering questions that go beyond the PR scope alone.
The _References_ section displays the additional repository content consulted to formulate the answer.
=== "`/review`"
The [`/review`](https://qodo-merge-docs.qodo.ai/tools/review/) tool offers the _Focus area from RAG data_ section, which contains feedback based on the analysis of the RAG references.
The complete list of references found relevant to the PR will be shown in the _References_ section, helping developers understand the broader context by exploring the provided references.
![RAGed ask tool](https://codium.ai/images/pr_agent/rag_ask.png){width=640}
![RAGed review tool](https://codium.ai/images/pr_agent/rag_review.png){width=640}
## Limitations


@@ -42,6 +42,9 @@ Note that if your base branches are not protected, don't set the variables as `p
> **Note**: The `$CI_SERVER_FQDN` variable is available starting from GitLab version 16.10. If you're using an earlier version, this variable will not be available. However, you can combine `$CI_SERVER_HOST` and `$CI_SERVER_PORT` to achieve the same result. Please ensure you're using a compatible version or adjust your configuration.
> **Note**: The `gitlab__SSL_VERIFY` environment variable can be used to specify the path to a custom CA certificate bundle for SSL verification. GitLab exposes the `$CI_SERVER_TLS_CA_FILE` variable, which points to the custom CA certificate file configured in your GitLab instance.
> Alternatively, SSL verification can be disabled entirely by setting `gitlab__SSL_VERIFY=false`, although this is not recommended.
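The resolution described above (a boolean flag or a path to a CA bundle, both of which python-gitlab's `ssl_verify` parameter accepts) can be sketched as follows. This is an illustrative helper, not the project's actual code; the `settings` dict stands in for the real configuration loader.

```python
def resolve_gitlab_ssl_verify(settings: dict):
    """Resolve the SSL verification setting for the GitLab client.

    Returns True/False to enable or disable verification, or a string
    path to a custom CA certificate bundle. String values may come from
    environment variables such as gitlab__SSL_VERIFY.
    """
    value = settings.get("GITLAB.SSL_VERIFY", True)
    if isinstance(value, str):
        lowered = value.strip().lower()
        if lowered in ("true", "1", "yes"):
            return True
        if lowered in ("false", "0", "no"):
            return False
        return value  # treated as a path to a CA certificate bundle
    return value
```

Disabling verification (`gitlab__SSL_VERIFY=false`) should remain a last resort; pointing at `$CI_SERVER_TLS_CA_FILE` keeps verification intact.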
## Run a GitLab webhook server
1. In GitLab, create a new user and give it the "Reporter" role (or "Developer" if using the Pro version of the agent) for the intended group or project.


@@ -5,10 +5,10 @@
The `compliance` tool performs comprehensive compliance checks on PR code changes, validating them against security standards, ticket requirements, and custom organizational compliance checklists. This helps teams, enterprises, and agents maintain consistent code quality and security practices while ensuring that development work aligns with business requirements.
=== "Fully Compliant"
![compliance_overview](https://codium.ai/images/pr_agent/compliance_full.png){width=256}
![compliance_overview](https://codium.ai/images/pr_agent/compliance_full.png){width=512}
=== "Partially Compliant"
![compliance_overview](https://codium.ai/images/pr_agent/compliance_partial.png){width=256}
![compliance_overview](https://codium.ai/images/pr_agent/compliance_partial.png){width=512}
___
@@ -111,9 +111,11 @@ Validates against an organization-specific compliance checklist:
Each compliance is defined in a YAML file as follows:
- `title`: Used to provide a clear name for the compliance
- `compliance_label`: Used to automatically generate labels for non-compliance issues
- `objective`, `success_criteria`, and `failure_criteria`: These fields are used to clearly define what constitutes compliance
- `title` (required): A clear, descriptive title that identifies what is being checked
- `compliance_label` (required): Determines whether this compliance generates labels for non-compliance issues (set to `true` or `false`)
- `objective` (required): A detailed description of the goal or purpose this compliance aims to achieve
- `success_criteria` and `failure_criteria` (at least one required; both recommended): Define the conditions for compliance
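The field rules above can be expressed as a small validator. This is a hypothetical sketch for illustration, not part of the compliance tool; entries are plain dicts parsed from the YAML file.

```python
def validate_compliance_entry(entry: dict) -> list[str]:
    """Check one compliance checklist entry against the field rules:
    title, compliance_label, and objective are required; at least one
    of success_criteria / failure_criteria must be present.
    Returns a list of problems (empty if the entry is valid)."""
    problems = []
    for field in ("title", "compliance_label", "objective"):
        if field not in entry:
            problems.append(f"missing required field: {field}")
    # compliance_label controls label generation, so it must be a boolean
    if "compliance_label" in entry and not isinstance(entry["compliance_label"], bool):
        problems.append("compliance_label must be true or false")
    if "success_criteria" not in entry and "failure_criteria" not in entry:
        problems.append("at least one of success_criteria / failure_criteria is required")
    return problems
```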
???+ tip "Example of a compliance checklist"
@@ -137,9 +139,9 @@ Each compliance is defined in a YAML file as follows:
### Local Compliance Checklists
For basic usage, create a `pr_compliance_checklist.yaml` file in your repository's root directory containing the compliance rules specific to your repository.
For basic usage, create a `pr_compliance_checklist.yaml` file in your repository's root directory containing the compliance requirements specific to your repository.
The AI model will use this `pr_compliance_checklist.yaml` file as a reference, and if the PR code violates any of the rules, it will be shown in the compliance tool's comment.
The AI model will use this `pr_compliance_checklist.yaml` file as a reference, and if the PR code violates any of the compliance requirements, it will be shown in the compliance tool's comment.
### Global Hierarchical Compliance


@@ -202,6 +202,20 @@ publish_labels = false
to prevent Qodo Merge from publishing labels when running the `describe` tool.
#### Enable using commands in PR
You can configure your GitHub Actions workflow to trigger on `issue_comment` [events](https://docs.github.com/en/actions/reference/workflows-and-actions/events-that-trigger-workflows#issue_comment) (`created` and `edited`).
Example GitHub Actions workflow configuration:
```yaml
on:
issue_comment:
types: [created, edited]
```
When this is configured, Qodo Merge can be invoked by commenting on the PR.
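The trigger logic above can be sketched as a small predicate over the webhook payload. This is an assumption-laden illustration (the function name and command convention are hypothetical), relying only on the documented shape of GitHub's `issue_comment` payload, where a PR comment arrives with a `pull_request` key on the issue.

```python
def should_invoke_agent(event_name: str, payload: dict) -> bool:
    """Decide whether an incoming GitHub webhook event should trigger
    the agent: only issue_comment events of type created/edited, on an
    issue that is actually a pull request, whose body is a command."""
    if event_name != "issue_comment":
        return False
    if payload.get("action") not in ("created", "edited"):
        return False
    # A PR comment is an issue_comment whose issue carries `pull_request`
    if "pull_request" not in payload.get("issue", {}):
        return False
    body = payload.get("comment", {}).get("body", "")
    return body.strip().startswith("/")
```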
#### Quick Reference: Model Configuration in GitHub Actions
For detailed step-by-step examples of configuring different models (Gemini, Claude, Azure OpenAI, etc.) in GitHub Actions, see the [Configuration Examples](../installation/github.md#configuration-examples) section in the installation guide.


@@ -250,6 +250,26 @@ model="bedrock/us.meta.llama4-scout-17b-instruct-v1:0"
fallback_models=["bedrock/us.meta.llama4-maverick-17b-instruct-v1:0"]
```
#### Custom Inference Profiles
To use a custom inference profile with Amazon Bedrock (for cost allocation tags and other configuration settings), add the `model_id` parameter to your configuration:
```toml
[config] # in configuration.toml
model="bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0"
fallback_models=["bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0"]
[aws]
AWS_ACCESS_KEY_ID="..."
AWS_SECRET_ACCESS_KEY="..."
AWS_REGION_NAME="..."
[litellm]
model_id = "your-custom-inference-profile-id"
```
The `model_id` parameter will be passed to all Bedrock completion calls, allowing you to use custom inference profiles for better cost allocation and reporting.
See [litellm](https://docs.litellm.ai/docs/providers/bedrock#usage) documentation for more information about the environment variables required for Amazon Bedrock.
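The pass-through described above can be sketched in isolation. This is a minimal illustration of the logic, assuming a plain dict in place of the real settings object; the actual handler code appears later in this diff.

```python
def build_bedrock_kwargs(model: str, settings: dict, **kwargs) -> dict:
    """Forward the optional litellm.model_id setting to Bedrock
    completion calls: only models routed through `bedrock/` receive
    the custom inference profile ID."""
    model_id = settings.get("litellm.model_id")
    if model_id and "bedrock/" in model:
        kwargs["model_id"] = model_id
    return kwargs
```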
### DeepSeek


@@ -352,6 +352,12 @@ class LiteLLMAIHandler(BaseAiHandler):
# Support for custom OpenAI body fields (e.g., Flex Processing)
kwargs = _process_litellm_extra_body(kwargs)
# Support for Bedrock custom inference profile via model_id
model_id = get_settings().get("litellm.model_id")
if model_id and 'bedrock/' in model:
kwargs["model_id"] = model_id
get_logger().info(f"Using Bedrock custom inference profile: {model_id}")
get_logger().debug("Prompts", artifact={"system": system, "user": user})
if get_settings().config.verbosity_level >= 2:


@@ -398,11 +398,6 @@ def get_pr_multi_diffs(git_provider: GitProvider,
# Sort files by main language
pr_languages = sort_files_by_main_languages(git_provider.get_languages(), diff_files)
# Sort files within each language group by tokens in descending order
sorted_files = []
for lang in pr_languages:
sorted_files.extend(sorted(lang['files'], key=lambda x: x.tokens, reverse=True))
# Get the maximum number of extra lines before and after the patch
PATCH_EXTRA_LINES_BEFORE = get_settings().config.patch_extra_lines_before
PATCH_EXTRA_LINES_AFTER = get_settings().config.patch_extra_lines_after
@@ -420,6 +415,11 @@ def get_pr_multi_diffs(git_provider: GitProvider,
if total_tokens + OUTPUT_BUFFER_TOKENS_SOFT_THRESHOLD < get_max_tokens(model):
return ["\n".join(patches_extended)] if patches_extended else []
# Sort files within each language group by tokens in descending order
sorted_files = []
for lang in pr_languages:
sorted_files.extend(sorted(lang['files'], key=lambda x: x.tokens, reverse=True))
patches = []
final_diff_list = []
total_tokens = token_handler.prompt_tokens
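The relocated sorting step can be sketched in isolation. This is a self-contained illustration using plain dicts instead of the project's file objects, showing the intended ordering: language groups keep their order, and files within each group are sorted by token count, descending.

```python
def sort_files_by_tokens(pr_languages: list[dict]) -> list[dict]:
    """Sort files within each language group by token count (descending),
    preserving the order of the language groups themselves."""
    sorted_files = []
    for lang in pr_languages:
        sorted_files.extend(
            sorted(lang["files"], key=lambda f: f["tokens"], reverse=True)
        )
    return sorted_files
```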


@@ -32,6 +32,7 @@ class GitLabProvider(GitProvider):
if not gitlab_url:
raise ValueError("GitLab URL is not set in the config file")
self.gitlab_url = gitlab_url
ssl_verify = get_settings().get("GITLAB.SSL_VERIFY", True)
gitlab_access_token = get_settings().get("GITLAB.PERSONAL_ACCESS_TOKEN", None)
if not gitlab_access_token:
raise ValueError("GitLab personal access token is not set in the config file")
@@ -49,6 +50,7 @@ class GitLabProvider(GitProvider):
self.gl = gitlab.Gitlab(
url=gitlab_url,
oauth_token=gitlab_access_token,
ssl_verify=ssl_verify
)
else: # private_token
self.gl = gitlab.Gitlab(
@@ -581,10 +583,52 @@ class GitLabProvider(GitProvider):
return self.id_project.split('/')[0]
def add_eyes_reaction(self, issue_comment_id: int, disable_eyes: bool = False) -> Optional[int]:
return True
if disable_eyes:
return None
try:
if not self.id_mr:
get_logger().warning("Cannot add eyes reaction: merge request ID is not set.")
return None
mr = self.gl.projects.get(self.id_project).mergerequests.get(self.id_mr)
comment = mr.notes.get(issue_comment_id)
if not comment:
get_logger().warning(f"Comment with ID {issue_comment_id} not found in merge request {self.id_mr}.")
return None
award_emoji = comment.awardemojis.create({
'name': 'eyes'
})
return award_emoji.id
except Exception as e:
get_logger().warning(f"Failed to add eyes reaction, error: {e}")
return None
def remove_reaction(self, issue_comment_id: int, reaction_id: int) -> bool:
return True
def remove_reaction(self, issue_comment_id: int, reaction_id: str) -> bool:
try:
if not self.id_mr:
get_logger().warning("Cannot remove reaction: merge request ID is not set.")
return False
mr = self.gl.projects.get(self.id_project).mergerequests.get(self.id_mr)
comment = mr.notes.get(issue_comment_id)
if not comment:
get_logger().warning(f"Comment with ID {issue_comment_id} not found in merge request {self.id_mr}.")
return False
reactions = comment.awardemojis.list()
for reaction in reactions:
if reaction.name == reaction_id:
reaction.delete()
return True
get_logger().warning(f"Reaction '{reaction_id}' not found in comment {issue_comment_id}.")
return False
except Exception as e:
get_logger().warning(f"Failed to remove reaction, error: {e}")
return False
def _parse_merge_request_url(self, merge_request_url: str) -> Tuple[str, int]:
parsed_url = urlparse(merge_request_url)


@@ -18,6 +18,7 @@ from pr_agent.config_loader import get_settings, global_settings
from pr_agent.git_providers.utils import apply_repo_settings
from pr_agent.log import LoggingFormat, get_logger, setup_logger
from pr_agent.secret_providers import get_secret_provider
from pr_agent.git_providers import get_git_provider_with_context
setup_logger(fmt=LoggingFormat.JSON, level=get_settings().get("CONFIG.LOG_LEVEL", "DEBUG"))
router = APIRouter()
@@ -25,15 +26,14 @@ router = APIRouter()
secret_provider = get_secret_provider() if get_settings().get("CONFIG.SECRET_PROVIDER") else None
async def handle_request(api_url: str, body: str, log_context: dict, sender_id: str):
async def handle_request(api_url: str, body: str, log_context: dict, sender_id: str, notify=None):
log_context["action"] = body
log_context["event"] = "pull_request" if body == "/review" else "comment"
log_context["api_url"] = api_url
log_context["app_name"] = get_settings().get("CONFIG.APP_NAME", "Unknown")
with get_logger().contextualize(**log_context):
await PRAgent().handle_request(api_url, body)
await PRAgent().handle_request(api_url, body, notify)
async def _perform_commands_gitlab(commands_conf: str, agent: PRAgent, api_url: str,
log_context: dict, data: dict):
@@ -259,13 +259,15 @@ async def gitlab_webhook(background_tasks: BackgroundTasks, request: Request):
if 'merge_request' in data:
mr = data['merge_request']
url = mr.get('url')
comment_id = data.get('object_attributes', {}).get('id')
provider = get_git_provider_with_context(pr_url=url)
get_logger().info(f"A comment has been added to a merge request: {url}")
body = data.get('object_attributes', {}).get('note')
if data.get('object_attributes', {}).get('type') == 'DiffNote' and '/ask' in body: # /ask_line
body = handle_ask_line(body, data)
await handle_request(url, body, log_context, sender_id)
await handle_request(url, body, log_context, sender_id, notify=lambda: provider.add_eyes_reaction(comment_id))
background_tasks.add_task(inner, request_json)
end_time = datetime.now()
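The deferred-notification pattern introduced in this hunk can be sketched as follows. The names mirror the diff, but the bodies are illustrative stubs rather than the project's actual implementation: the webhook hands the handler a `notify` callable (such as adding an 'eyes' reaction) that is only invoked once processing actually begins.

```python
import asyncio

async def handle_request(api_url: str, body: str, notify=None):
    """Minimal sketch: invoke the optional notify callback before
    processing, e.g. notify=lambda: provider.add_eyes_reaction(comment_id)."""
    if notify:
        notify()
    # Stand-in for PRAgent().handle_request(api_url, body, notify)
    return f"handled {body} for {api_url}"
```

A usage sketch: the lambda defers the reaction until the background task runs, so a dropped request never reacts.

```python
reactions = []
result = asyncio.run(
    handle_request("https://gitlab.example/mr/1", "/review",
                   notify=lambda: reactions.append("eyes"))
)
```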


@@ -19,6 +19,7 @@ key = "" # Acquire through https://platform.openai.com
# OpenAI Flex Processing (optional, for cost savings)
# [litellm]
# extra_body='{"processing_mode": "flex"}'
# model_id = "" # Optional: Custom inference profile ID for Amazon Bedrock
[pinecone]
api_key = "..."


@@ -284,6 +284,8 @@ push_commands = [
"/describe",
"/review",
]
# Configure SSL validation for GitLab. Can be either set to the path of a custom CA or disabled entirely.
# ssl_verify = true
[gitea_app]
url = "https://gitea.com"
@@ -334,6 +336,7 @@ enable_callbacks = false
success_callback = []
failure_callback = []
service_callback = []
# model_id = "" # Optional: Custom inference profile ID for Amazon Bedrock
[pr_similar_issue]
skip_comments = false


@@ -0,0 +1,30 @@
pr_compliances:
- title: "Consistent Naming Conventions"
compliance_label: false
objective: "All new variables, functions, and classes must follow the project's established naming standards"
success_criteria: "All identifiers follow the established naming patterns (camelCase, snake_case, etc.)"
failure_criteria: "Inconsistent or non-standard naming that deviates from project conventions"
- title: "No Dead or Commented-Out Code"
compliance_label: false
objective: "Keep the codebase clean by ensuring all submitted code is active and necessary"
success_criteria: "All code in the PR is active and serves a purpose; no commented-out blocks"
failure_criteria: "Presence of unused, dead, or commented-out code sections"
- title: "Robust Error Handling"
compliance_label: false
objective: "Ensure potential errors and edge cases are anticipated and handled gracefully throughout the code"
success_criteria: "All error scenarios are properly caught and handled with appropriate responses"
failure_criteria: "Unhandled exceptions, ignored errors, or missing edge case handling"
- title: "Single Responsibility for Functions"
compliance_label: false
objective: "Each function should have a single, well-defined responsibility"
success_criteria: "Functions perform one cohesive task with a single purpose"
failure_criteria: "Functions that combine multiple unrelated operations or handle several distinct concerns"
- title: "When relevant, utilize early return"
compliance_label: false
objective: "In a code snippet containing multiple logic conditions (such as 'if-else'), prefer an early return on edge cases over deep nesting"
success_criteria: "When relevant, utilize early return that reduces nesting"
failure_criteria: "Unjustified deep nesting that can be simplified by early return"
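The early-return entry above can be illustrated with a small before/after pair. Both functions are hypothetical examples of the same logic; the second satisfies the success criteria by handling edge cases up front and leaving the happy path unindented.

```python
def process_order_nested(order):
    """Deeply nested version: each condition adds an indentation level."""
    if order is not None:
        if order.get("items"):
            if order.get("paid"):
                return "processed"
            else:
                return "awaiting payment"
        else:
            return "empty order"
    else:
        return "no order"

def process_order_early_return(order):
    """Same logic with early returns on edge cases, as the checklist
    entry prefers: reject the edge cases first, then do the real work."""
    if order is None:
        return "no order"
    if not order.get("items"):
        return "empty order"
    if not order.get("paid"):
        return "awaiting payment"
    return "processed"
```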