Merge branch 'dev' into universal_file_deletion

Commit d94492dc0e by Classic298, 2025-11-10 15:34:05 +01:00 (committed by GitHub)
200 changed files with 12606 additions and 8075 deletions

View file

@ -11,9 +11,9 @@ body:
## Important Notes
- **Before submitting a bug report**: Please check the [Issues](https://github.com/open-webui/open-webui/issues) and [Discussions](https://github.com/open-webui/open-webui/discussions) sections to see if a similar issue has already been reported. If unsure, start a discussion first, as this helps us efficiently focus on improving the project. Duplicates may be closed without notice. **Please search for existing issues AND discussions, whether open or closed.**
- Check for open, **but also for (recently) CLOSED issues**, as the issue you are trying to report **might already have been fixed on the dev branch!**
- **Respectful collaboration**: Open WebUI is a volunteer-driven project with a single maintainer and contributors who also have full-time jobs. Please be constructive and respectful in your communication.
@ -21,6 +21,8 @@ body:
- **Bug Reproducibility**: If a bug cannot be reproduced using a `:main` or `:dev` Docker setup or with `pip install` on Python 3.11, community assistance may be required. In such cases, we will move it to the "[Issues](https://github.com/open-webui/open-webui/discussions/categories/issues)" Discussions section. Your help is appreciated!
- **Scope**: If you want to report a SECURITY VULNERABILITY, then do so through our [GitHub security page](https://github.com/open-webui/open-webui/security).
- type: checkboxes
id: issue-check
attributes:
@ -31,6 +33,8 @@ body:
required: true
- label: I have searched for any existing and/or related discussions.
required: true
- label: I have also searched in the CLOSED issues AND CLOSED discussions and found no related items (your issue might already be addressed on the development branch!).
required: true
- label: I am using the latest version of Open WebUI.
required: true

View file

@ -8,10 +8,21 @@ body:
value: |
## Important Notes
### Before submitting
Please check the **open AND closed** [Issues](https://github.com/open-webui/open-webui/issues) AND [Discussions](https://github.com/open-webui/open-webui/discussions) to see if a similar request has been posted.
It's likely we're already tracking it! If you're unsure, start a discussion post first.
#### Scope
If your feature request is likely to take more than a quick coding session to implement, test, and verify, open it in the **Ideas** section of the [Discussions](https://github.com/open-webui/open-webui/discussions) instead.
**We will close your feature request and move it to the Ideas section if we believe it is not trivial/quick to implement.**
This is to ensure the issues tab is used only for issues, quickly addressable feature requests, and tracking tickets from the maintainers.
Other feature requests belong in the **Ideas** section of the [Discussions](https://github.com/open-webui/open-webui/discussions).
If your feature request might impact others in the community, definitely open a discussion instead and evaluate whether and how to implement it.
This will help us efficiently focus on improving the project.
### Collaborate respectfully
We value a **constructive attitude**, so please be mindful of your communication. If negativity is part of your approach, our capacity to engage may be limited. We're here to help if you're **open to learning** and **communicating positively**.
@ -22,7 +33,6 @@ body:
We appreciate your time and ask that you **respect ours**.
### Contributing
If you encounter an issue, we highly encourage you to submit a pull request or fork the project. We actively work to prevent contributor burnout to maintain the quality and continuity of Open WebUI.
@ -35,14 +45,22 @@ body:
label: Check Existing Issues
description: Please confirm that you've checked for existing similar requests
options:
- label: I have searched for all existing **open AND closed** issues and discussions for similar requests. I have found none comparable to my request.
required: true
- type: checkboxes
id: feature-scope
attributes:
label: Verify Feature Scope
description: Please confirm that your request falls within the scope described above
options:
- label: I have read through and understood the scope definition for feature requests in the Issues section. I believe my feature request meets the definition and belongs in the Issues section instead of the Discussions.
required: true
- type: textarea
id: problem-description
attributes:
label: Problem Description
description: Is your feature request related to a problem? Please provide a clear and concise description of what the problem is.
placeholder: "Ex. I'm always frustrated when... / Not related to a problem"
validations:
required: true
- type: textarea

View file

@ -1,16 +1,18 @@
# Pull Request Checklist

### Note to first-time contributors: Please open a discussion post in [Discussions](https://github.com/open-webui/open-webui/discussions) to discuss your idea/fix with the community and describe your changes before submitting a pull request.

This is to ensure large feature PRs are discussed with the community first, before work on them starts. If the community does not want a feature, or it is not relevant for Open WebUI as a project, this can be identified in the discussion before the feature is worked on and a PR is submitted.

**Before submitting, make sure you've checked the following:**

- [ ] **Target branch:** Verify that the pull request targets the `dev` branch. **Not targeting the `dev` branch will lead to immediate closure of the PR.**
- [ ] **Description:** Provide a concise description of the changes made in this pull request below.
- [ ] **Changelog:** Ensure a changelog entry following the format of [Keep a Changelog](https://keepachangelog.com/) is added at the bottom of the PR description.
- [ ] **Documentation:** If necessary, update the relevant documentation in the [Open WebUI Docs](https://github.com/open-webui/docs), such as environment variables, tutorials, or other documentation sources.
- [ ] **Dependencies:** Are there any new dependencies? Have you updated the dependency versions in the documentation?
- [ ] **Testing:** Perform manual tests to **verify the implemented fix/feature works as intended AND does not break any other functionality**. Take this as an opportunity to **make screenshots of the feature/fix and include them in the PR description**.
- [ ] **Agentic AI Code:** Confirm this Pull Request is **not written by any AI Agent** or has at least **gone through additional human review AND manual testing**. If any AI Agent is a co-author of this PR, it may lead to immediate closure of the PR.
- [ ] **Code review:** Have you performed a self-review of your code, addressing any coding standard issues and ensuring adherence to the project's coding standards?
- [ ] **Title Prefix:** To clearly categorize this pull request, prefix the pull request title using one of the following:
  - **BREAKING CHANGE**: Significant changes that may affect compatibility
@ -75,3 +77,6 @@
### Contributor License Agreement

By submitting this pull request, I confirm that I have read and fully agree to the [Contributor License Agreement (CLA)](https://github.com/open-webui/open-webui/blob/main/CONTRIBUTOR_LICENSE_AGREEMENT), and I am providing my contributions under its terms.

> [!NOTE]
> Deleting the CLA section will lead to immediate closure of your PR, and it will not be merged.

View file

@ -5,6 +5,134 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.6.36] - 2025-11-07
### Added
- 🔐 OAuth group parsing now supports configurable separators via the "OAUTH_GROUPS_SEPARATOR" environment variable, enabling proper handling of semicolon-separated group claims from providers like CILogon. [#18987](https://github.com/open-webui/open-webui/pull/18987), [#18979](https://github.com/open-webui/open-webui/issues/18979)
### Fixed
- 🛠️ Tool calling functionality is restored by correcting asynchronous function handling in tool parameter updates. [#18981](https://github.com/open-webui/open-webui/issues/18981)
- 🖼️ The ComfyUI image edit workflow editor modal now opens correctly when clicking the Edit button. [#18978](https://github.com/open-webui/open-webui/issues/18978)
- 🔥 Firecrawl import errors are resolved by implementing lazy loading and using the correct class name. [#18973](https://github.com/open-webui/open-webui/issues/18973)
- 🔌 Socket.IO CORS warning is resolved by properly configuring CORS origins for Socket.IO connections. [Commit](https://github.com/open-webui/open-webui/commit/639d26252e528c9c37a5f553b11eb94376d8792d)
## [0.6.35] - 2025-11-06
### Added
- 🖼️ The image generation system received a comprehensive overhaul with major new capabilities: full image editing support that lets users modify existing images with text prompts using the OpenAI, Gemini, or ComfyUI engines; Gemini 2.5 Flash Image (Nano Banana) support; Qwen Image Edit integration; resolution of base64-encoded image display issues; streamlined AUTOMATIC1111 configuration that consolidates parameters into a flexible JSON parameters field; and an enhanced UI with a code editor modal for ComfyUI workflow management. [#17434](https://github.com/open-webui/open-webui/pull/17434), [#16976](https://github.com/open-webui/open-webui/issues/16976), [Commit](https://github.com/open-webui/open-webui/commit/8e5690aab4f632a57027e2acf880b8f89a8717c0), [Commit](https://github.com/open-webui/open-webui/commit/72f8539fd2e679fec0762945f22f4b8a6920afa0), [Commit](https://github.com/open-webui/open-webui/commit/8d34fcb586eeee1fac6da2f991518b8a68b00b72), [Commit](https://github.com/open-webui/open-webui/commit/72900cd686de1fa6be84b5a8a2fc857cff7b91b8)
- 🔒 CORS origin validation was added to WebSocket connections as a defense-in-depth security measure against cross-site WebSocket hijacking attacks. [#18411](https://github.com/open-webui/open-webui/pull/18411), [#18410](https://github.com/open-webui/open-webui/issues/18410)
- 🔄 Automatic page refresh now occurs when a version update is detected via WebSocket connection, ensuring users always run the latest version without cache issues. [Commit](https://github.com/open-webui/open-webui/commit/989f192c92d2fe55daa31336e7971e21798b96ae)
- 🐍 Experimental initial preparations for Python 3.13 compatibility by updating dependencies with security enhancements and cryptographic improvements. [#18430](https://github.com/open-webui/open-webui/pull/18430), [#18424](https://github.com/open-webui/open-webui/pull/18424)
- ⚡ Image compression now preserves the original image format instead of converting to PNG, significantly reducing file sizes and improving chat loading performance. [#18506](https://github.com/open-webui/open-webui/pull/18506)
- 🎤 Mistral Voxtral model support was added for text-to-speech, including voxtral-small and voxtral-mini models with both transcription and chat completion API support. [#18934](https://github.com/open-webui/open-webui/pull/18934)
- 🔊 Text-to-speech now uses a global audio queue system to prevent overlapping playback, ensuring only one TTS instance plays at a time with proper stop/start controls and automatic cleanup when switching between messages. [#16152](https://github.com/open-webui/open-webui/pull/16152), [#18744](https://github.com/open-webui/open-webui/pull/18744), [#16150](https://github.com/open-webui/open-webui/issues/16150)
- 🔊 ELEVENLABS_API_BASE_URL environment variable now allows configuration of custom ElevenLabs API endpoints, enabling support for EU residency API requirements. [#18402](https://github.com/open-webui/open-webui/issues/18402)
- 🔐 OAUTH_ROLES_SEPARATOR environment variable now allows custom role separators for OAuth roles that contain commas, useful for roles specified in LDAP syntax. [#18572](https://github.com/open-webui/open-webui/pull/18572)
- 📄 External document loaders can now optionally forward user information headers when ENABLE_FORWARD_USER_INFO_HEADERS is enabled, enabling cost tracking, audit logs, and usage analytics for external services. [#18731](https://github.com/open-webui/open-webui/pull/18731)
- 📄 MISTRAL_OCR_API_BASE_URL environment variable now allows configuration of custom Mistral OCR API endpoints for flexible deployment options. [Commit](https://github.com/open-webui/open-webui/commit/415b93c7c35c2e2db4425e6da1b88b3750f496b0)
- ⌨️ Keyboard shortcut hints are now displayed on sidebar buttons with a refactored shortcuts modal that accurately reflects all available hotkeys across different keyboard layouts. [#18473](https://github.com/open-webui/open-webui/pull/18473)
- 🛠️ Tooltips now display tool descriptions when hovering over tool names on the model edit page, improving usability and providing immediate context. [#18707](https://github.com/open-webui/open-webui/pull/18707)
- 📝 "Create a new note" from the search modal now immediately creates a new private note and opens it in the editor instead of navigating to the generic notes page. [#18255](https://github.com/open-webui/open-webui/pull/18255)
- 🖨️ Code block output now preserves whitespace formatting with monospace font to accurately reflect terminal behavior. [#18352](https://github.com/open-webui/open-webui/pull/18352)
- ✏️ Edit button is now available in the three-dot menu of models in the workspace section for quick access to model editing, with the menu reorganized for better user experience and Edit, Clone, Copy Link, and Share options logically grouped. [#18574](https://github.com/open-webui/open-webui/pull/18574)
- 📌 Sidebar models section is now collapsible, allowing users to expand and collapse the pinned models list for better sidebar organization. [Commit](https://github.com/open-webui/open-webui/commit/82c08a3b5d189f81c96b6548cc872198771015b0)
- 🌙 Dark mode styles for select elements were added using Tailwind CSS classes, improving consistency across the interface. [#18636](https://github.com/open-webui/open-webui/pull/18636)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for Portuguese (Brazil), Greek, German, Traditional Chinese, Simplified Chinese, Spanish, Georgian, Danish, and Estonian were enhanced and expanded.
### Fixed
- 🔒 Server-Sent Event (SSE) code injection vulnerability in Direct Connections is resolved by blocking event emission from untrusted external model servers; event emitters from direct connected model servers are no longer supported, preventing arbitrary JavaScript execution in user browsers. [Commit](https://github.com/open-webui/open-webui/commit/8af6a4cf21b756a66cd58378a01c60f74c39b7ca)
- 🛡️ DOM XSS vulnerability in "Insert Prompt as Rich Text" is resolved by sanitizing HTML content with DOMPurify before rendering. [Commit](https://github.com/open-webui/open-webui/commit/eb9c4c0e358c274aea35f21c2856c0a20051e5f1)
- ⚙️ MCP server cancellation scope corruption is prevented by reversing disconnection order to follow LIFO and properly handling exceptions, resolving 100% CPU usage when resuming chats with expired tokens or using multiple streamable MCP servers. [#18537](https://github.com/open-webui/open-webui/pull/18537)
- 🔧 UI freeze when querying models with knowledge bases containing inconsistent distance metrics is resolved by properly initializing the distances array in citations. [#18585](https://github.com/open-webui/open-webui/pull/18585)
- 🤖 Duplicate model IDs from multiple OpenAI endpoints are now automatically deduplicated server-side, preventing frontend crashes for users with unified gateway proxies that aggregate multiple providers. [Commit](https://github.com/open-webui/open-webui/commit/fdf7ca11d4f3cc8fe63e81c98dc0d1e48e52ba36)
- 🔐 Login failures with passwords longer than 72 bytes are resolved by safely truncating oversized passwords for bcrypt compatibility. [#18157](https://github.com/open-webui/open-webui/issues/18157)
- 🔐 OAuth 2.1 MCP tool connections now automatically re-register clients when stored client IDs become stale, preventing unauthorized_client errors after editing tool endpoints and providing detailed error messages for callback failures. [#18415](https://github.com/open-webui/open-webui/pull/18415), [#18309](https://github.com/open-webui/open-webui/issues/18309)
- 🔓 OAuth 2.1 discovery, metadata fetching, and dynamic client registration now correctly use HTTP proxy environment variables when trust_env is enabled. [Commit](https://github.com/open-webui/open-webui/commit/bafeb76c411483bd6b135f0edbcdce048120f264)
- 🔌 MCP server connection failures now display clear error messages in the chat interface instead of silently failing. [#18892](https://github.com/open-webui/open-webui/pull/18892), [#18889](https://github.com/open-webui/open-webui/issues/18889)
- 💬 Chat titles are now properly generated even when title auto-generation is disabled in interface settings, fixing an issue where chats would remain labeled as "New chat". [#18761](https://github.com/open-webui/open-webui/pull/18761), [#18717](https://github.com/open-webui/open-webui/issues/18717), [#6478](https://github.com/open-webui/open-webui/issues/6478)
- 🔍 Chat query errors are prevented by properly validating and handling the "order_by" parameter to ensure requested columns exist. [#18400](https://github.com/open-webui/open-webui/pull/18400), [#18452](https://github.com/open-webui/open-webui/pull/18452)
- 🔧 Root-level max_tokens parameter is no longer dropped when proxying to Ollama, properly converting to num_predict to limit output token length as intended. [#18618](https://github.com/open-webui/open-webui/issues/18618)
- 🔑 Self-hosted Marker instances can now be used without requiring an API key, while keeping it optional for datalab Marker service users. [#18617](https://github.com/open-webui/open-webui/issues/18617)
- 🔧 OpenAPI specification endpoint conflict between "/api/v1/models" and "/api/v1/models/" is resolved by changing the models router endpoint to "/list", preventing duplicate operationId errors when generating TypeScript API clients. [#18758](https://github.com/open-webui/open-webui/issues/18758)
- 🏷️ Model tags are now de-duplicated case-insensitively in both the model selector and workspace models page, preventing duplicate entries with different capitalization from appearing in filter dropdowns. [#18716](https://github.com/open-webui/open-webui/pull/18716), [#18711](https://github.com/open-webui/open-webui/issues/18711)
- 📄 Docling RAG parameter configuration is now correctly saved in the admin UI by fixing the typo in the "DOCLING_PARAMS" parameter name. [#18390](https://github.com/open-webui/open-webui/pull/18390)
- 📃 Tika document processing now automatically detects content types instead of relying on potentially incorrect browser-provided mime-types, improving file handling accuracy for formats like RTF. [#18765](https://github.com/open-webui/open-webui/pull/18765), [#18683](https://github.com/open-webui/open-webui/issues/18683)
- 🖼️ Image and video uploads to knowledge bases now display proper error messages instead of showing an infinite spinner when the content extraction engine does not support these file types. [#18514](https://github.com/open-webui/open-webui/issues/18514)
- 📝 Notes PDF export now properly detects and applies dark mode styling consistently across both the notes list and individual note pages, with a shared utility function to eliminate code duplication. [#18526](https://github.com/open-webui/open-webui/issues/18526)
- 💭 Details tags for reasoning content are now correctly identified and rendered even when the same tag is present in user messages. [#18840](https://github.com/open-webui/open-webui/pull/18840), [#18294](https://github.com/open-webui/open-webui/issues/18294)
- 📊 Mermaid and Vega rendering errors now display inline with the code instead of showing repetitive toast notifications, improving user experience when models generate invalid diagram syntax. [Commit](https://github.com/open-webui/open-webui/commit/fdc0f04a8b7dd0bc9f9dc0e7e30854f7a0eea3e9)
- 📈 Mermaid diagram rendering errors no longer cause UI unavailability or display error messages below the input box. [#18493](https://github.com/open-webui/open-webui/pull/18493), [#18340](https://github.com/open-webui/open-webui/issues/18340)
- 🔗 Web search SSL verification is now asynchronous, preventing the website from hanging during web search operations. [#18714](https://github.com/open-webui/open-webui/pull/18714), [#18699](https://github.com/open-webui/open-webui/issues/18699)
- 🌍 Web search results now correctly use HTTP proxy environment variables when WEB_SEARCH_TRUST_ENV is enabled. [#18667](https://github.com/open-webui/open-webui/pull/18667), [#7008](https://github.com/open-webui/open-webui/discussions/7008)
- 🔍 Google Programmable Search Engine now properly includes referer headers, enabling API keys with HTTP referrer restrictions configured in Google Cloud Console. [#18871](https://github.com/open-webui/open-webui/pull/18871), [#18870](https://github.com/open-webui/open-webui/issues/18870)
- ⚡ YouTube video transcript fetching now works correctly when using a proxy connection. [#18419](https://github.com/open-webui/open-webui/pull/18419)
- 🎙️ Speech-to-text transcription no longer deletes or replaces existing text in the prompt input field, properly preserving any previously entered content. [#18540](https://github.com/open-webui/open-webui/issues/18540)
- 🎙️ The "Instant Auto-Send After Voice Transcription" setting now functions correctly and automatically sends transcribed text when enabled. [#18466](https://github.com/open-webui/open-webui/issues/18466)
- ⚙️ Chat settings now load properly when reopening a tab or starting a new session by initializing defaults when sessionStorage is empty. [#18438](https://github.com/open-webui/open-webui/pull/18438)
- 🔎 Folder tag search in the sidebar now correctly handles folder names with multiple spaces by replacing all spaces with underscores. [Commit](https://github.com/open-webui/open-webui/commit/a8fe979af68e47e4e4bb3eb76e48d93d60cd2a45)
- 🛠️ Functions page now updates immediately after deleting a function, removing the need for a manual page reload. [#18912](https://github.com/open-webui/open-webui/pull/18912), [#18908](https://github.com/open-webui/open-webui/issues/18908)
- 🛠️ Native tool calling now properly supports sequential tool calls with shared context, allowing tools to access images and data from previous tool executions in the same conversation. [#18664](https://github.com/open-webui/open-webui/pull/18664)
- 🎯 Globally enabled actions in the model editor now correctly apply as global instead of being treated as disabled. [#18577](https://github.com/open-webui/open-webui/pull/18577)
- 📋 Clipboard images pasted via the "{{CLIPBOARD}}" prompt variable are now correctly converted to base64 format before being sent to the backend, resolving base64 encoding errors. [#18432](https://github.com/open-webui/open-webui/pull/18432), [#18425](https://github.com/open-webui/open-webui/issues/18425)
- 📋 File list is now cleared when switching to models that do not support file uploads, preventing files from being sent to incompatible models. [#18496](https://github.com/open-webui/open-webui/pull/18496)
- 📂 Move menu no longer displays when folders are empty. [#18484](https://github.com/open-webui/open-webui/pull/18484)
- 📁 Folder and channel creation now validates that names are not empty, preventing creation of folders or channels with no name and showing an error toast if attempted. [#18564](https://github.com/open-webui/open-webui/pull/18564)
- 🖊️ Rich text input no longer removes text between equals signs when pasting code with comparison operators. [#18551](https://github.com/open-webui/open-webui/issues/18551)
- ⌨️ Keyboard shortcuts now display the correct keys for international and non-QWERTY keyboard layouts by detecting the user's layout using the Keyboard API. [#18533](https://github.com/open-webui/open-webui/pull/18533)
- 🌐 "Attach Webpage" button now displays with correct disabled styling when a model does not support file uploads. [#18483](https://github.com/open-webui/open-webui/pull/18483)
- 🎚️ Divider no longer displays in the integrations menu when no integrations are enabled. [#18487](https://github.com/open-webui/open-webui/pull/18487)
- 📱 Chat controls button is now properly hidden on mobile for users without admin or explicit chat control permissions. [#18641](https://github.com/open-webui/open-webui/pull/18641)
- 📍 User menu, download submenu, and move submenu are now repositioned to prevent overlap with the Chat Controls sidebar when it is open. [Commit](https://github.com/open-webui/open-webui/commit/414ab51cb6df1ab0d6c85ac6c1f2c5c9a5f8e2aa)
- 🎯 Artifacts button no longer appears in the chat menu when there are no artifacts to display. [Commit](https://github.com/open-webui/open-webui/commit/ed6449d35f84f68dc75ee5c6b3f4748a3fda0096)
- 🎨 Artifacts view now automatically displays when opening an existing conversation containing artifacts, improving user experience. [#18215](https://github.com/open-webui/open-webui/pull/18215)
- 🖌️ Formatting toolbar is no longer hidden under images or code blocks in chat and now displays correctly above all message content.
- 🎨 Layout shift near system instructions is prevented by properly rendering the chat component when system prompts are empty. [#18594](https://github.com/open-webui/open-webui/pull/18594)
- 📐 Modal layout shift caused by scrollbar appearance is prevented by adding a stable scrollbar gutter. [#18591](https://github.com/open-webui/open-webui/pull/18591)
- ✨ Spacing between icon and label in the user menu dropdown items is now consistent. [#18595](https://github.com/open-webui/open-webui/pull/18595)
- 💬 Duplicate prompt suggestions no longer cause the webpage to freeze or throw JavaScript errors by implementing proper key management with composite keys. [#18841](https://github.com/open-webui/open-webui/pull/18841), [#18566](https://github.com/open-webui/open-webui/issues/18566)
- 🔍 Chat preview loading in the search modal now works correctly for all search results by fixing an index boundary check that previously caused out-of-bounds errors. [#18911](https://github.com/open-webui/open-webui/pull/18911)
- ♿ Screen reader support was enhanced by wrapping messages in semantic elements with descriptive aria-labels, adding "Assistant is typing" and "Response complete" announcements for improved accessibility. [#18735](https://github.com/open-webui/open-webui/pull/18735)
- 🔒 Incorrect await call in the OAuth 2.1 flow is removed, eliminating a logged exception during authentication. [#18236](https://github.com/open-webui/open-webui/pull/18236)
- 🛡️ Duplicate crossorigin attribute in the manifest file was removed. [#18413](https://github.com/open-webui/open-webui/pull/18413)
### Changed
- 🔄 Firecrawl integration was refactored to use the official Firecrawl SDK instead of direct HTTP requests and langchain_community FireCrawlLoader, improving reliability and performance with batch scraping support and enhanced error handling. [#18635](https://github.com/open-webui/open-webui/pull/18635)
- 📄 MinerU content extraction engine now only supports PDF files following the upstream removal of LibreOffice document conversion in version 2.0.0; users needing to process office documents should convert them to PDF format first. [#18448](https://github.com/open-webui/open-webui/issues/18448)
## [0.6.34] - 2025-10-16
### Added
- 📄 MinerU is now supported as a document parser backend, with support for both local and managed API deployments. [#18306](https://github.com/open-webui/open-webui/pull/18306)
- 🔒 JWT token expiration default is now set to 4 weeks instead of never expiring, with security warnings displayed in backend logs and admin UI when set to unlimited. [#18261](https://github.com/open-webui/open-webui/pull/18261), [#18262](https://github.com/open-webui/open-webui/pull/18262)
- ⚡ Page loading performance is improved by preventing unnecessary API requests when sidebar folders are not expanded. [#18179](https://github.com/open-webui/open-webui/pull/18179), [#17476](https://github.com/open-webui/open-webui/issues/17476)
- 📁 File hash values are now included in the knowledge endpoint response, enabling efficient file synchronization through hash comparison. [#18284](https://github.com/open-webui/open-webui/pull/18284), [#18283](https://github.com/open-webui/open-webui/issues/18283)
- 🎨 Chat dialog scrollbar visibility is improved by increasing its width, making it easier to use for navigation. [#18369](https://github.com/open-webui/open-webui/pull/18369), [#11782](https://github.com/open-webui/open-webui/issues/11782)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for Catalan, Chinese, Czech, Finnish, German, Kabyle, Korean, Portuguese (Brazil), Spanish, Thai, and Turkish were enhanced and expanded.
### Fixed
- 📚 Focused retrieval mode now works correctly, preventing the system from forcing full context mode and loading all documents in a knowledge base regardless of settings. [#18133](https://github.com/open-webui/open-webui/issues/18133)
- 🔧 Filter inlet functions now correctly execute on tool call continuations, ensuring parameter persistence throughout tool interactions. [#18222](https://github.com/open-webui/open-webui/issues/18222)
- 🛠️ External tool servers now properly support DELETE requests with body data. [#18289](https://github.com/open-webui/open-webui/pull/18289), [#18287](https://github.com/open-webui/open-webui/issues/18287)
- 🗄️ Oracle23ai vector database client now correctly handles variable initialization, resolving UnboundLocalError when retrieving items from collections. [#18356](https://github.com/open-webui/open-webui/issues/18356)
- 🔧 Model auto-pull functionality now works correctly even when user settings remain unmodified. [#18324](https://github.com/open-webui/open-webui/pull/18324)
- 🎨 Duplicate HTML content in artifacts is now prevented by improving code block detection logic. [#18195](https://github.com/open-webui/open-webui/pull/18195), [#6154](https://github.com/open-webui/open-webui/issues/6154)
- 💬 Pinned chats now appear in the Reference Chats list and can be referenced in conversations. [#18288](https://github.com/open-webui/open-webui/issues/18288)
- 📝 Misleading knowledge base warning text in documents settings is clarified to correctly instruct users about reindexing vectors. [#18263](https://github.com/open-webui/open-webui/pull/18263)
- 🔔 Toast notifications can now be dismissed even when a modal is open. [#18260](https://github.com/open-webui/open-webui/pull/18260)
- 🔘 The "Chats" button in the sidebar now correctly toggles chat list visibility without navigating away from the current page. [#18232](https://github.com/open-webui/open-webui/pull/18232)
- 🎯 The Integrations menu no longer closes prematurely when clicking outside the Valves modal. [#18310](https://github.com/open-webui/open-webui/pull/18310)
- 🛠️ Tool ID display issues where "undefined" was incorrectly shown in the interface are now resolved. [#18178](https://github.com/open-webui/open-webui/pull/18178)
- 🛠️ Model management issues caused by excessively long model IDs are now prevented through validation that limits model IDs to 256 characters. [#18125](https://github.com/open-webui/open-webui/issues/18125)
## [0.6.33] - 2025-10-08
### Added

View file

@ -17,7 +17,7 @@ Passionate about open-source AI? [Join our team →](https://careers.openwebui.c
![Open WebUI Demo](./demo.gif)

> [!TIP]
> **Looking for an [Enterprise Plan](https://docs.openwebui.com/enterprise)?** **[Speak with Our Sales Team Today!](https://docs.openwebui.com/enterprise)**
>
> Get **enhanced capabilities**, including **custom theming and branding**, **Service Level Agreement (SLA) support**, **Long-Term Support (LTS) versions**, and **more!**
@ -65,43 +65,6 @@ For more information, be sure to check out our [Open WebUI Documentation](https:
Want to learn more about Open WebUI's features? Check out our [Open WebUI documentation](https://docs.openwebui.com/features) for a comprehensive overview!
## Sponsors 🙌
#### Emerald
<table>
<!-- <tr>
<td>
<a href="https://n8n.io/" target="_blank">
<img src="https://docs.openwebui.com/sponsors/logos/n8n.png" alt="n8n" style="width: 8rem; height: 8rem; border-radius: .75rem;" />
</a>
</td>
<td>
<a href="https://n8n.io/">n8n</a> • Does your interface have a backend yet?<br>Try <a href="https://n8n.io/">n8n</a>
</td>
</tr> -->
<tr>
<td>
<a href="https://tailscale.com/blog/self-host-a-local-ai-stack/?utm_source=OpenWebUI&utm_medium=paid-ad-placement&utm_campaign=OpenWebUI-Docs" target="_blank">
<img src="https://docs.openwebui.com/sponsors/logos/tailscale.png" alt="Tailscale" style="width: 8rem; height: 8rem; border-radius: .75rem;" />
</a>
</td>
<td>
<a href="https://tailscale.com/blog/self-host-a-local-ai-stack/?utm_source=OpenWebUI&utm_medium=paid-ad-placement&utm_campaign=OpenWebUI-Docs">Tailscale</a> • Connect self-hosted AI to any device with Tailscale
</td>
</tr>
<tr>
<td>
<a href="https://warp.dev/open-webui" target="_blank">
<img src="https://docs.openwebui.com/sponsors/logos/warp.png" alt="Warp" style="width: 8rem; height: 8rem; border-radius: .75rem;" />
</a>
</td>
<td>
<a href="https://warp.dev/open-webui">Warp</a> • The intelligent terminal for developers
</td>
</tr>
</table>
---

We are incredibly grateful for the generous support of our sponsors. Their contributions help us to maintain and improve our project, ensuring we can continue to deliver quality work to our community. Thank you!

View file

@ -307,9 +307,15 @@ API_KEY_ALLOWED_ENDPOINTS = PersistentConfig(
JWT_EXPIRES_IN = PersistentConfig(
    "JWT_EXPIRES_IN", "auth.jwt_expiry", os.environ.get("JWT_EXPIRES_IN", "4w")
)

if JWT_EXPIRES_IN.value == "-1":
    log.warning(
        "⚠️ SECURITY WARNING: JWT_EXPIRES_IN is set to '-1'\n"
        "    See: https://docs.openwebui.com/getting-started/env-configuration\n"
    )

####################################
# OAuth config
####################################
@ -564,25 +570,34 @@ OAUTH_BLOCKED_GROUPS = PersistentConfig(
os.environ.get("OAUTH_BLOCKED_GROUPS", "[]"), os.environ.get("OAUTH_BLOCKED_GROUPS", "[]"),
) )
OAUTH_GROUPS_SEPARATOR = os.environ.get("OAUTH_GROUPS_SEPARATOR", ";")
OAUTH_ROLES_CLAIM = PersistentConfig( OAUTH_ROLES_CLAIM = PersistentConfig(
"OAUTH_ROLES_CLAIM", "OAUTH_ROLES_CLAIM",
"oauth.roles_claim", "oauth.roles_claim",
os.environ.get("OAUTH_ROLES_CLAIM", "roles"), os.environ.get("OAUTH_ROLES_CLAIM", "roles"),
) )
SEP = os.environ.get("OAUTH_ROLES_SEPARATOR", ",")
OAUTH_ALLOWED_ROLES = PersistentConfig( OAUTH_ALLOWED_ROLES = PersistentConfig(
"OAUTH_ALLOWED_ROLES", "OAUTH_ALLOWED_ROLES",
"oauth.allowed_roles", "oauth.allowed_roles",
[ [
role.strip() role.strip()
for role in os.environ.get("OAUTH_ALLOWED_ROLES", "user,admin").split(",") for role in os.environ.get("OAUTH_ALLOWED_ROLES", f"user{SEP}admin").split(SEP)
if role
], ],
) )
OAUTH_ADMIN_ROLES = PersistentConfig( OAUTH_ADMIN_ROLES = PersistentConfig(
"OAUTH_ADMIN_ROLES", "OAUTH_ADMIN_ROLES",
"oauth.admin_roles", "oauth.admin_roles",
[role.strip() for role in os.environ.get("OAUTH_ADMIN_ROLES", "admin").split(",")], [
role.strip()
for role in os.environ.get("OAUTH_ADMIN_ROLES", "admin").split(SEP)
if role
],
) )
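To illustrate the separator handling above, here is a minimal, self-contained sketch (not project code) of how an LDAP-style role claim is split once OAUTH_ROLES_SEPARATOR is changed; the environment values are hypothetical examples.

```python
import os

# Hypothetical values: LDAP-style role names contain commas, so a comma separator
# would split each DN apart; using ";" as OAUTH_ROLES_SEPARATOR keeps the DNs intact.
os.environ["OAUTH_ROLES_SEPARATOR"] = ";"
os.environ["OAUTH_ALLOWED_ROLES"] = (
    "cn=users,ou=groups,dc=example,dc=org;cn=admins,ou=groups,dc=example,dc=org"
)

SEP = os.environ.get("OAUTH_ROLES_SEPARATOR", ",")
allowed_roles = [
    role.strip()
    for role in os.environ.get("OAUTH_ALLOWED_ROLES", f"user{SEP}admin").split(SEP)
    if role
]
print(allowed_roles)
# ['cn=users,ou=groups,dc=example,dc=org', 'cn=admins,ou=groups,dc=example,dc=org']
```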
OAUTH_ALLOWED_DOMAINS = PersistentConfig(
@ -2291,6 +2306,36 @@ DATALAB_MARKER_OUTPUT_FORMAT = PersistentConfig(
os.environ.get("DATALAB_MARKER_OUTPUT_FORMAT", "markdown"), os.environ.get("DATALAB_MARKER_OUTPUT_FORMAT", "markdown"),
) )
MINERU_API_MODE = PersistentConfig(
"MINERU_API_MODE",
"rag.mineru_api_mode",
os.environ.get("MINERU_API_MODE", "local"), # "local" or "cloud"
)
MINERU_API_URL = PersistentConfig(
"MINERU_API_URL",
"rag.mineru_api_url",
os.environ.get("MINERU_API_URL", "http://localhost:8000"),
)
MINERU_API_KEY = PersistentConfig(
"MINERU_API_KEY",
"rag.mineru_api_key",
os.environ.get("MINERU_API_KEY", ""),
)
mineru_params = os.getenv("MINERU_PARAMS", "")
try:
mineru_params = json.loads(mineru_params)
except json.JSONDecodeError:
mineru_params = {}
MINERU_PARAMS = PersistentConfig(
"MINERU_PARAMS",
"rag.mineru_params",
mineru_params,
)
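A quick sketch of the MINERU_PARAMS fallback above: any unset, empty, or non-JSON value silently resolves to an empty dict. The example keys are hypothetical, not documented MinerU options.

```python
import json
import os

def load_params(raw):
    # Same fallback as above: empty or invalid JSON yields {}.
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return {}

print(load_params(""))                                   # {} (unset / empty)
print(load_params("not json"))                           # {} (invalid input)
print(load_params('{"lang": "en", "backend": "auto"}'))  # hypothetical example keys
```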
EXTERNAL_DOCUMENT_LOADER_URL = PersistentConfig(
    "EXTERNAL_DOCUMENT_LOADER_URL",
    "rag.external_document_loader_url",
@ -2421,6 +2466,12 @@ DOCUMENT_INTELLIGENCE_KEY = PersistentConfig(
os.getenv("DOCUMENT_INTELLIGENCE_KEY", ""), os.getenv("DOCUMENT_INTELLIGENCE_KEY", ""),
) )
MISTRAL_OCR_API_BASE_URL = PersistentConfig(
"MISTRAL_OCR_API_BASE_URL",
"rag.MISTRAL_OCR_API_BASE_URL",
os.getenv("MISTRAL_OCR_API_BASE_URL", "https://api.mistral.ai/v1"),
)
MISTRAL_OCR_API_KEY = PersistentConfig( MISTRAL_OCR_API_KEY = PersistentConfig(
"MISTRAL_OCR_API_KEY", "MISTRAL_OCR_API_KEY",
"rag.mistral_ocr_api_key", "rag.mistral_ocr_api_key",
@ -3031,16 +3082,30 @@ EXTERNAL_WEB_LOADER_API_KEY = PersistentConfig(
# Images
####################################

ENABLE_IMAGE_GENERATION = PersistentConfig(
    "ENABLE_IMAGE_GENERATION",
    "image_generation.enable",
    os.environ.get("ENABLE_IMAGE_GENERATION", "").lower() == "true",
)

IMAGE_GENERATION_ENGINE = PersistentConfig(
    "IMAGE_GENERATION_ENGINE",
    "image_generation.engine",
    os.getenv("IMAGE_GENERATION_ENGINE", "openai"),
)

IMAGE_GENERATION_MODEL = PersistentConfig(
    "IMAGE_GENERATION_MODEL",
    "image_generation.model",
    os.getenv("IMAGE_GENERATION_MODEL", ""),
)

IMAGE_SIZE = PersistentConfig(
    "IMAGE_SIZE", "image_generation.size", os.getenv("IMAGE_SIZE", "512x512")
)

IMAGE_STEPS = PersistentConfig(
    "IMAGE_STEPS", "image_generation.steps", int(os.getenv("IMAGE_STEPS", 50))
)

ENABLE_IMAGE_PROMPT_GENERATION = PersistentConfig(
@ -3060,35 +3125,17 @@ AUTOMATIC1111_API_AUTH = PersistentConfig(
os.getenv("AUTOMATIC1111_API_AUTH", ""), os.getenv("AUTOMATIC1111_API_AUTH", ""),
) )
AUTOMATIC1111_CFG_SCALE = PersistentConfig( automatic1111_params = os.getenv("AUTOMATIC1111_PARAMS", "")
"AUTOMATIC1111_CFG_SCALE", try:
"image_generation.automatic1111.cfg_scale", automatic1111_params = json.loads(automatic1111_params)
( except json.JSONDecodeError:
float(os.environ.get("AUTOMATIC1111_CFG_SCALE")) automatic1111_params = {}
if os.environ.get("AUTOMATIC1111_CFG_SCALE")
else None
),
)
AUTOMATIC1111_SAMPLER = PersistentConfig( AUTOMATIC1111_PARAMS = PersistentConfig(
"AUTOMATIC1111_SAMPLER", "AUTOMATIC1111_PARAMS",
"image_generation.automatic1111.sampler", "image_generation.automatic1111.api_auth",
( automatic1111_params,
os.environ.get("AUTOMATIC1111_SAMPLER")
if os.environ.get("AUTOMATIC1111_SAMPLER")
else None
),
)
AUTOMATIC1111_SCHEDULER = PersistentConfig(
"AUTOMATIC1111_SCHEDULER",
"image_generation.automatic1111.scheduler",
(
os.environ.get("AUTOMATIC1111_SCHEDULER")
if os.environ.get("AUTOMATIC1111_SCHEDULER")
else None
),
) )
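For illustration, a hedged sketch of how the consolidated AUTOMATIC1111_PARAMS JSON replaces the former CFG scale/sampler/scheduler variables; the keys shown are examples of AUTOMATIC1111 txt2img payload fields and should be checked against your own AUTOMATIC1111 instance.

```python
import json
import os

# Hypothetical environment value: one JSON object instead of
# AUTOMATIC1111_CFG_SCALE / AUTOMATIC1111_SAMPLER / AUTOMATIC1111_SCHEDULER.
os.environ["AUTOMATIC1111_PARAMS"] = json.dumps(
    {"cfg_scale": 7.0, "sampler_name": "Euler a", "scheduler": "Karras", "steps": 30}
)

try:
    params = json.loads(os.getenv("AUTOMATIC1111_PARAMS", ""))
except json.JSONDecodeError:
    params = {}

# The extra fields can then be merged into a txt2img request payload as-is.
payload = {"prompt": "a red fox in the snow", "width": 512, "height": 512, **params}
print(payload)
```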
COMFYUI_BASE_URL = PersistentConfig(
@ -3254,18 +3301,79 @@ IMAGES_GEMINI_API_KEY = PersistentConfig(
os.getenv("IMAGES_GEMINI_API_KEY", GEMINI_API_KEY), os.getenv("IMAGES_GEMINI_API_KEY", GEMINI_API_KEY),
) )
IMAGE_SIZE = PersistentConfig( IMAGES_GEMINI_ENDPOINT_METHOD = PersistentConfig(
"IMAGE_SIZE", "image_generation.size", os.getenv("IMAGE_SIZE", "512x512") "IMAGES_GEMINI_ENDPOINT_METHOD",
"image_generation.gemini.endpoint_method",
os.getenv("IMAGES_GEMINI_ENDPOINT_METHOD", ""),
) )
IMAGE_STEPS = PersistentConfig(
"IMAGE_STEPS", "image_generation.steps", int(os.getenv("IMAGE_STEPS", 50)) IMAGE_EDIT_ENGINE = PersistentConfig(
"IMAGE_EDIT_ENGINE",
"images.edit.engine",
os.getenv("IMAGE_EDIT_ENGINE", "openai"),
) )
IMAGE_GENERATION_MODEL = PersistentConfig( IMAGE_EDIT_MODEL = PersistentConfig(
"IMAGE_GENERATION_MODEL", "IMAGE_EDIT_MODEL",
"image_generation.model", "images.edit.model",
os.getenv("IMAGE_GENERATION_MODEL", ""), os.getenv("IMAGE_EDIT_MODEL", ""),
)
IMAGE_EDIT_SIZE = PersistentConfig(
"IMAGE_EDIT_SIZE", "images.edit.size", os.getenv("IMAGE_EDIT_SIZE", "")
)
IMAGES_EDIT_OPENAI_API_BASE_URL = PersistentConfig(
"IMAGES_EDIT_OPENAI_API_BASE_URL",
"images.edit.openai.api_base_url",
os.getenv("IMAGES_EDIT_OPENAI_API_BASE_URL", OPENAI_API_BASE_URL),
)
IMAGES_EDIT_OPENAI_API_VERSION = PersistentConfig(
"IMAGES_EDIT_OPENAI_API_VERSION",
"images.edit.openai.api_version",
os.getenv("IMAGES_EDIT_OPENAI_API_VERSION", ""),
)
IMAGES_EDIT_OPENAI_API_KEY = PersistentConfig(
"IMAGES_EDIT_OPENAI_API_KEY",
"images.edit.openai.api_key",
os.getenv("IMAGES_EDIT_OPENAI_API_KEY", OPENAI_API_KEY),
)
IMAGES_EDIT_GEMINI_API_BASE_URL = PersistentConfig(
"IMAGES_EDIT_GEMINI_API_BASE_URL",
"images.edit.gemini.api_base_url",
os.getenv("IMAGES_EDIT_GEMINI_API_BASE_URL", GEMINI_API_BASE_URL),
)
IMAGES_EDIT_GEMINI_API_KEY = PersistentConfig(
"IMAGES_EDIT_GEMINI_API_KEY",
"images.edit.gemini.api_key",
os.getenv("IMAGES_EDIT_GEMINI_API_KEY", GEMINI_API_KEY),
)
IMAGES_EDIT_COMFYUI_BASE_URL = PersistentConfig(
"IMAGES_EDIT_COMFYUI_BASE_URL",
"images.edit.comfyui.base_url",
os.getenv("IMAGES_EDIT_COMFYUI_BASE_URL", ""),
)
IMAGES_EDIT_COMFYUI_API_KEY = PersistentConfig(
"IMAGES_EDIT_COMFYUI_API_KEY",
"images.edit.comfyui.api_key",
os.getenv("IMAGES_EDIT_COMFYUI_API_KEY", ""),
)
IMAGES_EDIT_COMFYUI_WORKFLOW = PersistentConfig(
"IMAGES_EDIT_COMFYUI_WORKFLOW",
"images.edit.comfyui.workflow",
os.getenv("IMAGES_EDIT_COMFYUI_WORKFLOW", ""),
)
IMAGES_EDIT_COMFYUI_WORKFLOW_NODES = PersistentConfig(
"IMAGES_EDIT_COMFYUI_WORKFLOW_NODES",
"images.edit.comfyui.nodes",
[],
) )
#################################### ####################################
@ -3300,6 +3408,10 @@ DEEPGRAM_API_KEY = PersistentConfig(
os.getenv("DEEPGRAM_API_KEY", ""), os.getenv("DEEPGRAM_API_KEY", ""),
) )
# ElevenLabs configuration
ELEVENLABS_API_BASE_URL = os.getenv(
"ELEVENLABS_API_BASE_URL", "https://api.elevenlabs.io"
)
AUDIO_STT_OPENAI_API_BASE_URL = PersistentConfig( AUDIO_STT_OPENAI_API_BASE_URL = PersistentConfig(
"AUDIO_STT_OPENAI_API_BASE_URL", "AUDIO_STT_OPENAI_API_BASE_URL",
@ -3367,6 +3479,24 @@ AUDIO_STT_AZURE_MAX_SPEAKERS = PersistentConfig(
os.getenv("AUDIO_STT_AZURE_MAX_SPEAKERS", ""), os.getenv("AUDIO_STT_AZURE_MAX_SPEAKERS", ""),
) )
AUDIO_STT_MISTRAL_API_KEY = PersistentConfig(
"AUDIO_STT_MISTRAL_API_KEY",
"audio.stt.mistral.api_key",
os.getenv("AUDIO_STT_MISTRAL_API_KEY", ""),
)
AUDIO_STT_MISTRAL_API_BASE_URL = PersistentConfig(
"AUDIO_STT_MISTRAL_API_BASE_URL",
"audio.stt.mistral.api_base_url",
os.getenv("AUDIO_STT_MISTRAL_API_BASE_URL", "https://api.mistral.ai/v1"),
)
AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS = PersistentConfig(
"AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS",
"audio.stt.mistral.use_chat_completions",
os.getenv("AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS", "false").lower() == "true",
)
AUDIO_TTS_OPENAI_API_BASE_URL = PersistentConfig(
    "AUDIO_TTS_OPENAI_API_BASE_URL",
    "audio.tts.openai.api_base_url",

View file

@ -38,6 +38,7 @@ class ERROR_MESSAGES(str, Enum):
ID_TAKEN = "Uh-oh! This id is already registered. Please choose another id string." ID_TAKEN = "Uh-oh! This id is already registered. Please choose another id string."
MODEL_ID_TAKEN = "Uh-oh! This model id is already registered. Please choose another model id string." MODEL_ID_TAKEN = "Uh-oh! This model id is already registered. Please choose another model id string."
NAME_TAG_TAKEN = "Uh-oh! This name tag is already registered. Please choose another name tag string." NAME_TAG_TAKEN = "Uh-oh! This name tag is already registered. Please choose another name tag string."
MODEL_ID_TOO_LONG = "The model id is too long. Please make sure your model id is less than 256 characters long."
INVALID_TOKEN = ( INVALID_TOKEN = (
"Your session has expired or the token is invalid. Please sign in again." "Your session has expired or the token is invalid. Please sign in again."

View file

@ -569,6 +569,21 @@ else:
CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES = 30

CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE = os.environ.get(
    "CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE", ""
)

if CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE == "":
    CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE = None
else:
    try:
        CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE = int(
            CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE
        )
    except Exception:
        CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE = None
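A small sketch of how the fallback above resolves the value; the numbers are arbitrary examples, and the unit/semantics of the limit are defined by the streaming code, which is not shown here.

```python
def parse_max_buffer_size(raw: str):
    # Mirrors the logic above: empty or non-integer input disables the limit (None).
    if raw == "":
        return None
    try:
        return int(raw)
    except Exception:
        return None

print(parse_max_buffer_size(""))       # None  -> no explicit limit
print(parse_max_buffer_size("65536"))  # 65536 -> hypothetical example value
print(parse_max_buffer_size("64kb"))   # None  -> invalid, falls back to no limit
```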
####################################
# WEBSOCKET SUPPORT
####################################

View file

@ -147,9 +147,7 @@ from open_webui.config import (
    # Image
    AUTOMATIC1111_API_AUTH,
    AUTOMATIC1111_BASE_URL,
    AUTOMATIC1111_PARAMS,
    COMFYUI_BASE_URL,
    COMFYUI_API_KEY,
    COMFYUI_WORKFLOW,
@ -165,6 +163,19 @@ from open_webui.config import (
    IMAGES_OPENAI_API_KEY,
    IMAGES_GEMINI_API_BASE_URL,
    IMAGES_GEMINI_API_KEY,
IMAGES_GEMINI_ENDPOINT_METHOD,
IMAGE_EDIT_ENGINE,
IMAGE_EDIT_MODEL,
IMAGE_EDIT_SIZE,
IMAGES_EDIT_OPENAI_API_BASE_URL,
IMAGES_EDIT_OPENAI_API_KEY,
IMAGES_EDIT_OPENAI_API_VERSION,
IMAGES_EDIT_GEMINI_API_BASE_URL,
IMAGES_EDIT_GEMINI_API_KEY,
IMAGES_EDIT_COMFYUI_BASE_URL,
IMAGES_EDIT_COMFYUI_API_KEY,
IMAGES_EDIT_COMFYUI_WORKFLOW,
IMAGES_EDIT_COMFYUI_WORKFLOW_NODES,
    # Audio
    AUDIO_STT_ENGINE,
    AUDIO_STT_MODEL,
@ -176,6 +187,9 @@ from open_webui.config import (
    AUDIO_STT_AZURE_LOCALES,
    AUDIO_STT_AZURE_BASE_URL,
    AUDIO_STT_AZURE_MAX_SPEAKERS,
    AUDIO_STT_MISTRAL_API_KEY,
    AUDIO_STT_MISTRAL_API_BASE_URL,
    AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS,
    AUDIO_TTS_ENGINE,
    AUDIO_TTS_MODEL,
    AUDIO_TTS_VOICE,
@ -244,6 +258,10 @@ from open_webui.config import (
    DATALAB_MARKER_DISABLE_IMAGE_EXTRACTION,
    DATALAB_MARKER_FORMAT_LINES,
    DATALAB_MARKER_OUTPUT_FORMAT,
    MINERU_API_MODE,
    MINERU_API_URL,
    MINERU_API_KEY,
    MINERU_PARAMS,
    DATALAB_MARKER_USE_LLM,
    EXTERNAL_DOCUMENT_LOADER_URL,
    EXTERNAL_DOCUMENT_LOADER_API_KEY,
@ -263,6 +281,7 @@ from open_webui.config import (
    DOCLING_PICTURE_DESCRIPTION_API,
    DOCUMENT_INTELLIGENCE_ENDPOINT,
    DOCUMENT_INTELLIGENCE_KEY,
    MISTRAL_OCR_API_BASE_URL,
    MISTRAL_OCR_API_KEY,
    RAG_TEXT_SPLITTER,
    TIKTOKEN_ENCODING_NAME,
@ -479,9 +498,11 @@ from open_webui.utils.auth import (
)
from open_webui.utils.plugin import install_tool_and_function_dependencies
from open_webui.utils.oauth import (
    get_oauth_client_info_with_dynamic_client_registration,
    encrypt_data,
    decrypt_data,
    OAuthManager,
    OAuthClientManager,
    OAuthClientInformationFull,
)
from open_webui.utils.security_headers import SecurityHeadersMiddleware
@ -853,7 +874,12 @@ app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL = DOCLING_PICTURE_DESCRIPTION
app.state.config.DOCLING_PICTURE_DESCRIPTION_API = DOCLING_PICTURE_DESCRIPTION_API
app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT = DOCUMENT_INTELLIGENCE_ENDPOINT
app.state.config.DOCUMENT_INTELLIGENCE_KEY = DOCUMENT_INTELLIGENCE_KEY
app.state.config.MISTRAL_OCR_API_BASE_URL = MISTRAL_OCR_API_BASE_URL
app.state.config.MISTRAL_OCR_API_KEY = MISTRAL_OCR_API_KEY
app.state.config.MINERU_API_MODE = MINERU_API_MODE
app.state.config.MINERU_API_URL = MINERU_API_URL
app.state.config.MINERU_API_KEY = MINERU_API_KEY
app.state.config.MINERU_PARAMS = MINERU_PARAMS

app.state.config.TEXT_SPLITTER = RAG_TEXT_SPLITTER
app.state.config.TIKTOKEN_ENCODING_NAME = TIKTOKEN_ENCODING_NAME
@ -1055,27 +1081,40 @@ app.state.config.IMAGE_GENERATION_ENGINE = IMAGE_GENERATION_ENGINE
app.state.config.ENABLE_IMAGE_GENERATION = ENABLE_IMAGE_GENERATION
app.state.config.ENABLE_IMAGE_PROMPT_GENERATION = ENABLE_IMAGE_PROMPT_GENERATION

app.state.config.IMAGE_GENERATION_MODEL = IMAGE_GENERATION_MODEL
app.state.config.IMAGE_SIZE = IMAGE_SIZE
app.state.config.IMAGE_STEPS = IMAGE_STEPS

app.state.config.IMAGES_OPENAI_API_BASE_URL = IMAGES_OPENAI_API_BASE_URL
app.state.config.IMAGES_OPENAI_API_VERSION = IMAGES_OPENAI_API_VERSION
app.state.config.IMAGES_OPENAI_API_KEY = IMAGES_OPENAI_API_KEY

app.state.config.IMAGES_GEMINI_API_BASE_URL = IMAGES_GEMINI_API_BASE_URL
app.state.config.IMAGES_GEMINI_API_KEY = IMAGES_GEMINI_API_KEY
app.state.config.IMAGES_GEMINI_ENDPOINT_METHOD = IMAGES_GEMINI_ENDPOINT_METHOD

app.state.config.AUTOMATIC1111_BASE_URL = AUTOMATIC1111_BASE_URL
app.state.config.AUTOMATIC1111_API_AUTH = AUTOMATIC1111_API_AUTH
app.state.config.AUTOMATIC1111_PARAMS = AUTOMATIC1111_PARAMS

app.state.config.COMFYUI_BASE_URL = COMFYUI_BASE_URL
app.state.config.COMFYUI_API_KEY = COMFYUI_API_KEY
app.state.config.COMFYUI_WORKFLOW = COMFYUI_WORKFLOW
app.state.config.COMFYUI_WORKFLOW_NODES = COMFYUI_WORKFLOW_NODES

app.state.config.IMAGE_EDIT_ENGINE = IMAGE_EDIT_ENGINE
app.state.config.IMAGE_EDIT_MODEL = IMAGE_EDIT_MODEL
app.state.config.IMAGE_EDIT_SIZE = IMAGE_EDIT_SIZE
app.state.config.IMAGES_EDIT_OPENAI_API_BASE_URL = IMAGES_EDIT_OPENAI_API_BASE_URL
app.state.config.IMAGES_EDIT_OPENAI_API_KEY = IMAGES_EDIT_OPENAI_API_KEY
app.state.config.IMAGES_EDIT_OPENAI_API_VERSION = IMAGES_EDIT_OPENAI_API_VERSION
app.state.config.IMAGES_EDIT_GEMINI_API_BASE_URL = IMAGES_EDIT_GEMINI_API_BASE_URL
app.state.config.IMAGES_EDIT_GEMINI_API_KEY = IMAGES_EDIT_GEMINI_API_KEY
app.state.config.IMAGES_EDIT_COMFYUI_BASE_URL = IMAGES_EDIT_COMFYUI_BASE_URL
app.state.config.IMAGES_EDIT_COMFYUI_API_KEY = IMAGES_EDIT_COMFYUI_API_KEY
app.state.config.IMAGES_EDIT_COMFYUI_WORKFLOW = IMAGES_EDIT_COMFYUI_WORKFLOW
app.state.config.IMAGES_EDIT_COMFYUI_WORKFLOW_NODES = IMAGES_EDIT_COMFYUI_WORKFLOW_NODES
######################################## ########################################
@ -1101,6 +1140,12 @@ app.state.config.AUDIO_STT_AZURE_LOCALES = AUDIO_STT_AZURE_LOCALES
app.state.config.AUDIO_STT_AZURE_BASE_URL = AUDIO_STT_AZURE_BASE_URL app.state.config.AUDIO_STT_AZURE_BASE_URL = AUDIO_STT_AZURE_BASE_URL
app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS = AUDIO_STT_AZURE_MAX_SPEAKERS app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS = AUDIO_STT_AZURE_MAX_SPEAKERS
app.state.config.AUDIO_STT_MISTRAL_API_KEY = AUDIO_STT_MISTRAL_API_KEY
app.state.config.AUDIO_STT_MISTRAL_API_BASE_URL = AUDIO_STT_MISTRAL_API_BASE_URL
app.state.config.AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS = (
AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS
)
app.state.config.TTS_ENGINE = AUDIO_TTS_ENGINE app.state.config.TTS_ENGINE = AUDIO_TTS_ENGINE
app.state.config.TTS_MODEL = AUDIO_TTS_MODEL app.state.config.TTS_MODEL = AUDIO_TTS_MODEL
@ -1550,11 +1595,15 @@ async def chat_completion(
log.info("Chat processing was cancelled") log.info("Chat processing was cancelled")
try: try:
event_emitter = get_event_emitter(metadata) event_emitter = get_event_emitter(metadata)
await event_emitter( await asyncio.shield(
{"type": "chat:tasks:cancel"}, event_emitter(
{"type": "chat:tasks:cancel"},
)
) )
except Exception as e: except Exception as e:
pass pass
finally:
raise # re-raise to ensure proper task cancellation handling
except Exception as e: except Exception as e:
log.debug(f"Error processing chat payload: {e}") log.debug(f"Error processing chat payload: {e}")
if metadata.get("chat_id") and metadata.get("message_id"): if metadata.get("chat_id") and metadata.get("message_id"):
@ -1585,7 +1634,7 @@ async def chat_completion(
finally: finally:
try: try:
if mcp_clients := metadata.get("mcp_clients"): if mcp_clients := metadata.get("mcp_clients"):
for client in mcp_clients.values(): for client in reversed(mcp_clients.values()):
await client.disconnect() await client.disconnect()
except Exception as e: except Exception as e:
log.debug(f"Error cleaning up: {e}") log.debug(f"Error cleaning up: {e}")
@ -1931,6 +1980,7 @@ if len(app.state.config.TOOL_SERVER_CONNECTIONS) > 0:
if tool_server_connection.get("type", "openapi") == "mcp": if tool_server_connection.get("type", "openapi") == "mcp":
server_id = tool_server_connection.get("info", {}).get("id") server_id = tool_server_connection.get("info", {}).get("id")
auth_type = tool_server_connection.get("auth_type", "none") auth_type = tool_server_connection.get("auth_type", "none")
if server_id and auth_type == "oauth_2.1": if server_id and auth_type == "oauth_2.1":
oauth_client_info = tool_server_connection.get("info", {}).get( oauth_client_info = tool_server_connection.get("info", {}).get(
"oauth_client_info", "" "oauth_client_info", ""
@ -1976,6 +2026,64 @@ except Exception as e:
) )
async def register_client(self, request, client_id: str) -> bool:
server_type, server_id = client_id.split(":", 1)
connection = None
connection_idx = None
for idx, conn in enumerate(request.app.state.config.TOOL_SERVER_CONNECTIONS or []):
if conn.get("type", "openapi") == server_type:
info = conn.get("info", {})
if info.get("id") == server_id:
connection = conn
connection_idx = idx
break
if connection is None or connection_idx is None:
log.warning(
f"Unable to locate MCP tool server configuration for client {client_id} during re-registration"
)
return False
server_url = connection.get("url")
oauth_server_key = (connection.get("config") or {}).get("oauth_server_key")
try:
oauth_client_info = (
await get_oauth_client_info_with_dynamic_client_registration(
request,
client_id,
server_url,
oauth_server_key,
)
)
except Exception as e:
log.error(f"Dynamic client re-registration failed for {client_id}: {e}")
return False
try:
request.app.state.config.TOOL_SERVER_CONNECTIONS[connection_idx] = {
**connection,
"info": {
**connection.get("info", {}),
"oauth_client_info": encrypt_data(
oauth_client_info.model_dump(mode="json")
),
},
}
except Exception as e:
log.error(
f"Failed to persist updated OAuth client info for tool server {client_id}: {e}"
)
return False
oauth_client_manager.remove_client(client_id)
oauth_client_manager.add_client(client_id, oauth_client_info)
log.info(f"Re-registered OAuth client {client_id} for tool server")
return True
@app.get("/oauth/clients/{client_id}/authorize") @app.get("/oauth/clients/{client_id}/authorize")
async def oauth_client_authorize( async def oauth_client_authorize(
client_id: str, client_id: str,
@ -1983,6 +2091,41 @@ async def oauth_client_authorize(
response: Response, response: Response,
user=Depends(get_verified_user), user=Depends(get_verified_user),
): ):
# ensure_valid_client_registration
client = oauth_client_manager.get_client(client_id)
client_info = oauth_client_manager.get_client_info(client_id)
if client is None or client_info is None:
raise HTTPException(status.HTTP_404_NOT_FOUND)
if not await oauth_client_manager._preflight_authorization_url(client, client_info):
log.info(
"Detected invalid OAuth client %s; attempting re-registration",
client_id,
)
registered = await register_client(request, client_id)
if not registered:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail="Failed to re-register OAuth client",
)
client = oauth_client_manager.get_client(client_id)
client_info = oauth_client_manager.get_client_info(client_id)
if client is None or client_info is None:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail="OAuth client unavailable after re-registration",
)
if not await oauth_client_manager._preflight_authorization_url(
client, client_info
):
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail="OAuth client registration is still invalid after re-registration",
)
return await oauth_client_manager.handle_authorize(request, client_id=client_id) return await oauth_client_manager.handle_authorize(request, client_id=client_id)

View file

@ -440,7 +440,10 @@ class ChatTable:
order_by = filter.get("order_by") order_by = filter.get("order_by")
direction = filter.get("direction") direction = filter.get("direction")
if order_by and direction and getattr(Chat, order_by): if order_by and direction:
if not getattr(Chat, order_by, None):
raise ValueError("Invalid order_by field")
if direction.lower() == "asc": if direction.lower() == "asc":
query = query.order_by(getattr(Chat, order_by).asc()) query = query.order_by(getattr(Chat, order_by).asc())
elif direction.lower() == "desc": elif direction.lower() == "desc":
@ -502,6 +505,7 @@ class ChatTable:
user_id: str, user_id: str,
include_archived: bool = False, include_archived: bool = False,
include_folders: bool = False, include_folders: bool = False,
include_pinned: bool = False,
skip: Optional[int] = None, skip: Optional[int] = None,
limit: Optional[int] = None, limit: Optional[int] = None,
) -> list[ChatTitleIdResponse]: ) -> list[ChatTitleIdResponse]:
@ -511,7 +515,8 @@ class ChatTable:
if not include_folders: if not include_folders:
query = query.filter_by(folder_id=None) query = query.filter_by(folder_id=None)
query = query.filter(or_(Chat.pinned == False, Chat.pinned == None)) if not include_pinned:
query = query.filter(or_(Chat.pinned == False, Chat.pinned == None))
if not include_archived: if not include_archived:
query = query.filter_by(archived=False) query = query.filter_by(archived=False)
@ -760,15 +765,20 @@ class ChatTable:
) )
elif dialect_name == "postgresql": elif dialect_name == "postgresql":
# PostgreSQL relies on proper JSON query for search # PostgreSQL doesn't allow null bytes in text. We filter those out by checking
# the JSON representation for \u0000 before attempting text extraction
postgres_content_sql = ( postgres_content_sql = (
"EXISTS (" "EXISTS ("
" SELECT 1 " " SELECT 1 "
" FROM json_array_elements(Chat.chat->'messages') AS message " " FROM json_array_elements(Chat.chat->'messages') AS message "
" WHERE LOWER(message->>'content') LIKE '%' || :content_key || '%'" " WHERE message->'content' IS NOT NULL "
" AND (message->'content')::text NOT LIKE '%\\u0000%' "
" AND LOWER(message->>'content') LIKE '%' || :content_key || '%'"
")" ")"
) )
postgres_content_clause = text(postgres_content_sql) postgres_content_clause = text(postgres_content_sql)
# Also filter out chats with null bytes in title
query = query.filter(text("Chat.title::text NOT LIKE '%\\x00%'"))
query = query.filter( query = query.filter(
or_( or_(
Chat.title.ilike(bindparam("title_key")), Chat.title.ilike(bindparam("title_key")),

View file

@ -82,6 +82,7 @@ class FileModelResponse(BaseModel):
class FileMetadataResponse(BaseModel): class FileMetadataResponse(BaseModel):
id: str id: str
hash: Optional[str] = None
meta: dict meta: dict
created_at: int # timestamp in epoch created_at: int # timestamp in epoch
updated_at: int # timestamp in epoch updated_at: int # timestamp in epoch
@ -97,6 +98,12 @@ class FileForm(BaseModel):
access_control: Optional[dict] = None access_control: Optional[dict] = None
class FileUpdateForm(BaseModel):
hash: Optional[str] = None
data: Optional[dict] = None
meta: Optional[dict] = None
class FilesTable: class FilesTable:
def insert_new_file(self, user_id: str, form_data: FileForm) -> Optional[FileModel]: def insert_new_file(self, user_id: str, form_data: FileForm) -> Optional[FileModel]:
with get_db() as db: with get_db() as db:
@ -147,6 +154,7 @@ class FilesTable:
file = db.get(File, id) file = db.get(File, id)
return FileMetadataResponse( return FileMetadataResponse(
id=file.id, id=file.id,
hash=file.hash,
meta=file.meta, meta=file.meta,
created_at=file.created_at, created_at=file.created_at,
updated_at=file.updated_at, updated_at=file.updated_at,
@ -182,12 +190,13 @@ class FilesTable:
return [ return [
FileMetadataResponse( FileMetadataResponse(
id=file.id, id=file.id,
hash=file.hash,
meta=file.meta, meta=file.meta,
created_at=file.created_at, created_at=file.created_at,
updated_at=file.updated_at, updated_at=file.updated_at,
) )
for file in db.query( for file in db.query(
File.id, File.meta, File.created_at, File.updated_at File.id, File.hash, File.meta, File.created_at, File.updated_at
) )
.filter(File.id.in_(ids)) .filter(File.id.in_(ids))
.order_by(File.updated_at.desc()) .order_by(File.updated_at.desc())
@ -201,6 +210,29 @@ class FilesTable:
for file in db.query(File).filter_by(user_id=user_id).all() for file in db.query(File).filter_by(user_id=user_id).all()
] ]
def update_file_by_id(
self, id: str, form_data: FileUpdateForm
) -> Optional[FileModel]:
with get_db() as db:
try:
file = db.query(File).filter_by(id=id).first()
if form_data.hash is not None:
file.hash = form_data.hash
if form_data.data is not None:
file.data = {**(file.data if file.data else {}), **form_data.data}
if form_data.meta is not None:
file.meta = {**(file.meta if file.meta else {}), **form_data.meta}
file.updated_at = int(time.time())
db.commit()
return FileModel.model_validate(file)
except Exception as e:
log.exception(f"Error updating file completely by id: {e}")
return None
def update_file_hash_by_id(self, id: str, hash: str) -> Optional[FileModel]: def update_file_hash_by_id(self, id: str, hash: str) -> Optional[FileModel]:
with get_db() as db: with get_db() as db:
try: try:
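The new FileUpdateForm plus FilesTable.update_file_by_id above lets one call patch a file's hash and merge into its data/meta dicts in a single transaction. A minimal sketch of a call site, assuming the module's usual Files = FilesTable() singleton and a hypothetical file id; per the implementation above, data and meta are merged into the stored dicts rather than replaced.

from open_webui.models.files import Files, FileUpdateForm

updated = Files.update_file_by_id(
    "file-123",                                   # hypothetical file id
    FileUpdateForm(
        hash="0f343b0931126a20f133d67c2b018a3b",  # optional: replace the stored hash
        data={"content": "re-extracted text"},    # merged into the existing data dict
        meta={"content_type": "application/pdf"}, # merged into the existing meta dict
    ),
)
if updated is None:
    print("update failed (the exception is logged by the table helper)")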

View file

@ -262,5 +262,16 @@ class OAuthSessionTable:
log.error(f"Error deleting OAuth sessions by user ID: {e}") log.error(f"Error deleting OAuth sessions by user ID: {e}")
return False return False
def delete_sessions_by_provider(self, provider: str) -> bool:
"""Delete all OAuth sessions for a provider"""
try:
with get_db() as db:
db.query(OAuthSession).filter_by(provider=provider).delete()
db.commit()
return True
except Exception as e:
log.error(f"Error deleting OAuth sessions by provider {provider}: {e}")
return False
OAuthSessions = OAuthSessionTable() OAuthSessions = OAuthSessionTable()
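A one-line sketch of how the new helper might be called; the import path is assumed from the file's location, the OAuthSessions singleton comes from the line above, and the provider name is a placeholder.

from open_webui.models.oauth_sessions import OAuthSessions

# Drop every stored OAuth session for one provider; returns False if the delete/commit fails.
ok = OAuthSessions.delete_sessions_by_provider("google")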

View file

@ -5,6 +5,7 @@ from urllib.parse import quote
from langchain_core.document_loaders import BaseLoader from langchain_core.document_loaders import BaseLoader
from langchain_core.documents import Document from langchain_core.documents import Document
from open_webui.utils.headers import include_user_info_headers
from open_webui.env import SRC_LOG_LEVELS from open_webui.env import SRC_LOG_LEVELS
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
@ -18,6 +19,7 @@ class ExternalDocumentLoader(BaseLoader):
url: str, url: str,
api_key: str, api_key: str,
mime_type=None, mime_type=None,
user=None,
**kwargs, **kwargs,
) -> None: ) -> None:
self.url = url self.url = url
@ -26,6 +28,8 @@ class ExternalDocumentLoader(BaseLoader):
self.file_path = file_path self.file_path = file_path
self.mime_type = mime_type self.mime_type = mime_type
self.user = user
def load(self) -> List[Document]: def load(self) -> List[Document]:
with open(self.file_path, "rb") as f: with open(self.file_path, "rb") as f:
data = f.read() data = f.read()
@ -42,6 +46,9 @@ class ExternalDocumentLoader(BaseLoader):
except: except:
pass pass
if self.user is not None:
headers = include_user_info_headers(headers, self.user)
url = self.url url = self.url
if url.endswith("/"): if url.endswith("/"):
url = url[:-1] url = url[:-1]

View file

@ -27,6 +27,7 @@ from open_webui.retrieval.loaders.external_document import ExternalDocumentLoade
from open_webui.retrieval.loaders.mistral import MistralLoader from open_webui.retrieval.loaders.mistral import MistralLoader
from open_webui.retrieval.loaders.datalab_marker import DatalabMarkerLoader from open_webui.retrieval.loaders.datalab_marker import DatalabMarkerLoader
from open_webui.retrieval.loaders.mineru import MinerULoader
from open_webui.env import SRC_LOG_LEVELS, GLOBAL_LOG_LEVEL from open_webui.env import SRC_LOG_LEVELS, GLOBAL_LOG_LEVEL
@ -227,6 +228,7 @@ class DoclingLoader:
class Loader: class Loader:
def __init__(self, engine: str = "", **kwargs): def __init__(self, engine: str = "", **kwargs):
self.engine = engine self.engine = engine
self.user = kwargs.get("user", None)
self.kwargs = kwargs self.kwargs = kwargs
def load( def load(
@ -263,6 +265,7 @@ class Loader:
url=self.kwargs.get("EXTERNAL_DOCUMENT_LOADER_URL"), url=self.kwargs.get("EXTERNAL_DOCUMENT_LOADER_URL"),
api_key=self.kwargs.get("EXTERNAL_DOCUMENT_LOADER_API_KEY"), api_key=self.kwargs.get("EXTERNAL_DOCUMENT_LOADER_API_KEY"),
mime_type=file_content_type, mime_type=file_content_type,
user=self.user,
) )
elif self.engine == "tika" and self.kwargs.get("TIKA_SERVER_URL"): elif self.engine == "tika" and self.kwargs.get("TIKA_SERVER_URL"):
if self._is_text_file(file_ext, file_content_type): if self._is_text_file(file_ext, file_content_type):
@ -271,7 +274,6 @@ class Loader:
loader = TikaLoader( loader = TikaLoader(
url=self.kwargs.get("TIKA_SERVER_URL"), url=self.kwargs.get("TIKA_SERVER_URL"),
file_path=file_path, file_path=file_path,
mime_type=file_content_type,
extract_images=self.kwargs.get("PDF_EXTRACT_IMAGES"), extract_images=self.kwargs.get("PDF_EXTRACT_IMAGES"),
) )
elif ( elif (
@ -367,6 +369,16 @@ class Loader:
api_endpoint=self.kwargs.get("DOCUMENT_INTELLIGENCE_ENDPOINT"), api_endpoint=self.kwargs.get("DOCUMENT_INTELLIGENCE_ENDPOINT"),
azure_credential=DefaultAzureCredential(), azure_credential=DefaultAzureCredential(),
) )
elif self.engine == "mineru" and file_ext in [
"pdf"
]: # MinerU currently only supports PDF
loader = MinerULoader(
file_path=file_path,
api_mode=self.kwargs.get("MINERU_API_MODE", "local"),
api_url=self.kwargs.get("MINERU_API_URL", "http://localhost:8000"),
api_key=self.kwargs.get("MINERU_API_KEY", ""),
params=self.kwargs.get("MINERU_PARAMS", {}),
)
elif ( elif (
self.engine == "mistral_ocr" self.engine == "mistral_ocr"
and self.kwargs.get("MISTRAL_OCR_API_KEY") != "" and self.kwargs.get("MISTRAL_OCR_API_KEY") != ""
@ -374,16 +386,9 @@ class Loader:
in ["pdf"] # Mistral OCR currently only supports PDF and images in ["pdf"] # Mistral OCR currently only supports PDF and images
): ):
loader = MistralLoader( loader = MistralLoader(
api_key=self.kwargs.get("MISTRAL_OCR_API_KEY"), file_path=file_path base_url=self.kwargs.get("MISTRAL_OCR_API_BASE_URL"),
) api_key=self.kwargs.get("MISTRAL_OCR_API_KEY"),
elif ( file_path=file_path,
self.engine == "external"
and self.kwargs.get("MISTRAL_OCR_API_KEY") != ""
and file_ext
in ["pdf"] # Mistral OCR currently only supports PDF and images
):
loader = MistralLoader(
api_key=self.kwargs.get("MISTRAL_OCR_API_KEY"), file_path=file_path
) )
else: else:
if file_ext == "pdf": if file_ext == "pdf":

View file

@ -0,0 +1,541 @@
import os
import time
import requests
import logging
import tempfile
import zipfile
from typing import List, Optional
from langchain_core.documents import Document
from fastapi import HTTPException, status
log = logging.getLogger(__name__)
class MinerULoader:
"""
MinerU document parser loader supporting both Cloud API and Local API modes.
Cloud API: Uses MinerU managed service with async task-based processing
Local API: Uses self-hosted MinerU API with synchronous processing
"""
def __init__(
self,
file_path: str,
api_mode: str = "local",
api_url: str = "http://localhost:8000",
api_key: str = "",
params: dict = None,
):
self.file_path = file_path
self.api_mode = api_mode.lower()
self.api_url = api_url.rstrip("/")
self.api_key = api_key
# Parse params dict with defaults
params = params or {}
self.enable_ocr = params.get("enable_ocr", False)
self.enable_formula = params.get("enable_formula", True)
self.enable_table = params.get("enable_table", True)
self.language = params.get("language", "en")
self.model_version = params.get("model_version", "pipeline")
self.page_ranges = params.get("page_ranges", "")
# Validate API mode
if self.api_mode not in ["local", "cloud"]:
raise ValueError(
f"Invalid API mode: {self.api_mode}. Must be 'local' or 'cloud'"
)
# Validate Cloud API requirements
if self.api_mode == "cloud" and not self.api_key:
raise ValueError("API key is required for Cloud API mode")
def load(self) -> List[Document]:
"""
Main entry point for loading and parsing the document.
Routes to Cloud or Local API based on api_mode.
"""
try:
if self.api_mode == "cloud":
return self._load_cloud_api()
else:
return self._load_local_api()
except Exception as e:
log.error(f"Error loading document with MinerU: {e}")
raise
def _load_local_api(self) -> List[Document]:
"""
Load document using Local API (synchronous).
Posts file to /file_parse endpoint and gets immediate response.
"""
log.info(f"Using MinerU Local API at {self.api_url}")
filename = os.path.basename(self.file_path)
# Build form data for Local API
form_data = {
"return_md": "true",
"formula_enable": str(self.enable_formula).lower(),
"table_enable": str(self.enable_table).lower(),
}
# Parse method based on OCR setting
if self.enable_ocr:
form_data["parse_method"] = "ocr"
else:
form_data["parse_method"] = "auto"
# Language configuration (Local API uses lang_list array)
if self.language:
form_data["lang_list"] = self.language
# Backend/model version (Local API uses "backend" parameter)
if self.model_version == "vlm":
form_data["backend"] = "vlm-vllm-engine"
else:
form_data["backend"] = "pipeline"
# Page ranges (Local API uses start_page_id and end_page_id)
if self.page_ranges:
# For now, only log a warning when page_ranges is set; translating it would
# require parsing the range string into start_page_id/end_page_id values.
log.warning(
f"Page ranges '{self.page_ranges}' specified, but the Local API uses a different format. "
"Consider using the start_page_id/end_page_id parameters if needed."
)
try:
with open(self.file_path, "rb") as f:
files = {"files": (filename, f, "application/octet-stream")}
log.info(f"Sending file to MinerU Local API: {filename}")
log.debug(f"Local API parameters: {form_data}")
response = requests.post(
f"{self.api_url}/file_parse",
data=form_data,
files=files,
timeout=300, # 5 minute timeout for large documents
)
response.raise_for_status()
except FileNotFoundError:
raise HTTPException(
status.HTTP_404_NOT_FOUND, detail=f"File not found: {self.file_path}"
)
except requests.Timeout:
raise HTTPException(
status.HTTP_504_GATEWAY_TIMEOUT,
detail="MinerU Local API request timed out",
)
except requests.HTTPError as e:
error_detail = f"MinerU Local API request failed: {e}"
if e.response is not None:
try:
error_data = e.response.json()
error_detail += f" - {error_data}"
except Exception:
error_detail += f" - {e.response.text}"
raise HTTPException(status.HTTP_400_BAD_REQUEST, detail=error_detail)
except Exception as e:
raise HTTPException(
status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=f"Error calling MinerU Local API: {str(e)}",
)
# Parse response
try:
result = response.json()
except ValueError as e:
raise HTTPException(
status.HTTP_502_BAD_GATEWAY,
detail=f"Invalid JSON response from MinerU Local API: {e}",
)
# Extract markdown content from response
if "results" not in result:
raise HTTPException(
status.HTTP_502_BAD_GATEWAY,
detail="MinerU Local API response missing 'results' field",
)
results = result["results"]
if not results:
raise HTTPException(
status.HTTP_400_BAD_REQUEST,
detail="MinerU returned empty results",
)
# Get the first (and typically only) result
file_result = list(results.values())[0]
markdown_content = file_result.get("md_content", "")
if not markdown_content:
raise HTTPException(
status.HTTP_400_BAD_REQUEST,
detail="MinerU returned empty markdown content",
)
log.info(f"Successfully parsed document with MinerU Local API: {filename}")
# Create metadata
metadata = {
"source": filename,
"api_mode": "local",
"backend": result.get("backend", "unknown"),
"version": result.get("version", "unknown"),
}
return [Document(page_content=markdown_content, metadata=metadata)]
def _load_cloud_api(self) -> List[Document]:
"""
Load document using Cloud API (asynchronous).
Uses batch upload endpoint to avoid need for public file URLs.
"""
log.info(f"Using MinerU Cloud API at {self.api_url}")
filename = os.path.basename(self.file_path)
# Step 1: Request presigned upload URL
batch_id, upload_url = self._request_upload_url(filename)
# Step 2: Upload file to presigned URL
self._upload_to_presigned_url(upload_url)
# Step 3: Poll for results
result = self._poll_batch_status(batch_id, filename)
# Step 4: Download and extract markdown from ZIP
markdown_content = self._download_and_extract_zip(
result["full_zip_url"], filename
)
log.info(f"Successfully parsed document with MinerU Cloud API: {filename}")
# Create metadata
metadata = {
"source": filename,
"api_mode": "cloud",
"batch_id": batch_id,
}
return [Document(page_content=markdown_content, metadata=metadata)]
def _request_upload_url(self, filename: str) -> tuple:
"""
Request presigned upload URL from Cloud API.
Returns (batch_id, upload_url).
"""
headers = {
"Authorization": f"Bearer {self.api_key}",
"Content-Type": "application/json",
}
# Build request body
request_body = {
"enable_formula": self.enable_formula,
"enable_table": self.enable_table,
"language": self.language,
"model_version": self.model_version,
"files": [
{
"name": filename,
"is_ocr": self.enable_ocr,
}
],
}
# Add page ranges if specified
if self.page_ranges:
request_body["files"][0]["page_ranges"] = self.page_ranges
log.info(f"Requesting upload URL for: {filename}")
log.debug(f"Cloud API request body: {request_body}")
try:
response = requests.post(
f"{self.api_url}/file-urls/batch",
headers=headers,
json=request_body,
timeout=30,
)
response.raise_for_status()
except requests.HTTPError as e:
error_detail = f"Failed to request upload URL: {e}"
if e.response is not None:
try:
error_data = e.response.json()
error_detail += f" - {error_data.get('msg', error_data)}"
except Exception:
error_detail += f" - {e.response.text}"
raise HTTPException(status.HTTP_400_BAD_REQUEST, detail=error_detail)
except Exception as e:
raise HTTPException(
status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=f"Error requesting upload URL: {str(e)}",
)
try:
result = response.json()
except ValueError as e:
raise HTTPException(
status.HTTP_502_BAD_GATEWAY,
detail=f"Invalid JSON response: {e}",
)
# Check for API error response
if result.get("code") != 0:
raise HTTPException(
status.HTTP_400_BAD_REQUEST,
detail=f"MinerU Cloud API error: {result.get('msg', 'Unknown error')}",
)
data = result.get("data", {})
batch_id = data.get("batch_id")
file_urls = data.get("file_urls", [])
if not batch_id or not file_urls:
raise HTTPException(
status.HTTP_502_BAD_GATEWAY,
detail="MinerU Cloud API response missing batch_id or file_urls",
)
upload_url = file_urls[0]
log.info(f"Received upload URL for batch: {batch_id}")
return batch_id, upload_url
def _upload_to_presigned_url(self, upload_url: str) -> None:
"""
Upload file to presigned URL (no authentication needed).
"""
log.info(f"Uploading file to presigned URL")
try:
with open(self.file_path, "rb") as f:
response = requests.put(
upload_url,
data=f,
timeout=300, # 5 minute timeout for large files
)
response.raise_for_status()
except FileNotFoundError:
raise HTTPException(
status.HTTP_404_NOT_FOUND, detail=f"File not found: {self.file_path}"
)
except requests.Timeout:
raise HTTPException(
status.HTTP_504_GATEWAY_TIMEOUT,
detail="File upload to presigned URL timed out",
)
except requests.HTTPError as e:
raise HTTPException(
status.HTTP_400_BAD_REQUEST,
detail=f"Failed to upload file to presigned URL: {e}",
)
except Exception as e:
raise HTTPException(
status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=f"Error uploading file: {str(e)}",
)
log.info("File uploaded successfully")
def _poll_batch_status(self, batch_id: str, filename: str) -> dict:
"""
Poll batch status until completion.
Returns the result dict for the file.
"""
headers = {
"Authorization": f"Bearer {self.api_key}",
}
max_iterations = 300 # 10 minutes max (2 seconds per iteration)
poll_interval = 2 # seconds
log.info(f"Polling batch status: {batch_id}")
for iteration in range(max_iterations):
try:
response = requests.get(
f"{self.api_url}/extract-results/batch/{batch_id}",
headers=headers,
timeout=30,
)
response.raise_for_status()
except requests.HTTPError as e:
error_detail = f"Failed to poll batch status: {e}"
if e.response is not None:
try:
error_data = e.response.json()
error_detail += f" - {error_data.get('msg', error_data)}"
except Exception:
error_detail += f" - {e.response.text}"
raise HTTPException(status.HTTP_400_BAD_REQUEST, detail=error_detail)
except Exception as e:
raise HTTPException(
status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=f"Error polling batch status: {str(e)}",
)
try:
result = response.json()
except ValueError as e:
raise HTTPException(
status.HTTP_502_BAD_GATEWAY,
detail=f"Invalid JSON response while polling: {e}",
)
# Check for API error response
if result.get("code") != 0:
raise HTTPException(
status.HTTP_400_BAD_REQUEST,
detail=f"MinerU Cloud API error: {result.get('msg', 'Unknown error')}",
)
data = result.get("data", {})
extract_result = data.get("extract_result", [])
# Find our file in the batch results
file_result = None
for item in extract_result:
if item.get("file_name") == filename:
file_result = item
break
if not file_result:
raise HTTPException(
status.HTTP_502_BAD_GATEWAY,
detail=f"File {filename} not found in batch results",
)
state = file_result.get("state")
if state == "done":
log.info(f"Processing complete for {filename}")
return file_result
elif state == "failed":
error_msg = file_result.get("err_msg", "Unknown error")
raise HTTPException(
status.HTTP_400_BAD_REQUEST,
detail=f"MinerU processing failed: {error_msg}",
)
elif state in ["waiting-file", "pending", "running", "converting"]:
# Still processing
if iteration % 10 == 0: # Log every 20 seconds
log.info(
f"Processing status: {state} (iteration {iteration + 1}/{max_iterations})"
)
time.sleep(poll_interval)
else:
log.warning(f"Unknown state: {state}")
time.sleep(poll_interval)
# Timeout
raise HTTPException(
status.HTTP_504_GATEWAY_TIMEOUT,
detail="MinerU processing timed out after 10 minutes",
)
def _download_and_extract_zip(self, zip_url: str, filename: str) -> str:
"""
Download ZIP file from CDN and extract markdown content.
Returns the markdown content as a string.
"""
log.info(f"Downloading results from: {zip_url}")
try:
response = requests.get(zip_url, timeout=60)
response.raise_for_status()
except requests.HTTPError as e:
raise HTTPException(
status.HTTP_400_BAD_REQUEST,
detail=f"Failed to download results ZIP: {e}",
)
except Exception as e:
raise HTTPException(
status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=f"Error downloading results: {str(e)}",
)
# Save ZIP to temporary file and extract
try:
with tempfile.NamedTemporaryFile(delete=False, suffix=".zip") as tmp_zip:
tmp_zip.write(response.content)
tmp_zip_path = tmp_zip.name
with tempfile.TemporaryDirectory() as tmp_dir:
# Extract ZIP
with zipfile.ZipFile(tmp_zip_path, "r") as zip_ref:
zip_ref.extractall(tmp_dir)
# Find markdown file - search recursively for any .md file
markdown_content = None
found_md_path = None
# First, list all files in the ZIP for debugging
all_files = []
for root, dirs, files in os.walk(tmp_dir):
for file in files:
full_path = os.path.join(root, file)
all_files.append(full_path)
# Look for any .md file
if file.endswith(".md"):
found_md_path = full_path
log.info(f"Found markdown file at: {full_path}")
try:
with open(full_path, "r", encoding="utf-8") as f:
markdown_content = f.read()
if (
markdown_content
): # Use the first non-empty markdown file
break
except Exception as e:
log.warning(f"Failed to read {full_path}: {e}")
if markdown_content:
break
if markdown_content is None:
log.error(f"Available files in ZIP: {all_files}")
# Try to provide more helpful error message
md_files = [f for f in all_files if f.endswith(".md")]
if md_files:
error_msg = (
f"Found .md files but couldn't read them: {md_files}"
)
else:
error_msg = (
f"No .md files found in ZIP. Available files: {all_files}"
)
raise HTTPException(
status.HTTP_502_BAD_GATEWAY,
detail=error_msg,
)
# Clean up temporary ZIP file
os.unlink(tmp_zip_path)
except HTTPException:
# Propagate HTTP errors raised above (e.g. missing markdown) instead of wrapping them in a generic 500
raise
except zipfile.BadZipFile as e:
raise HTTPException(
status.HTTP_502_BAD_GATEWAY,
detail=f"Invalid ZIP file received: {e}",
)
except Exception as e:
raise HTTPException(
status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=f"Error extracting ZIP: {str(e)}",
)
if not markdown_content:
raise HTTPException(
status.HTTP_400_BAD_REQUEST,
detail="Extracted markdown content is empty",
)
log.info(
f"Successfully extracted markdown content ({len(markdown_content)} characters)"
)
return markdown_content
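The class above is self-contained, so it can be exercised outside the RAG pipeline. A minimal sketch for the local mode, assuming a self-hosted MinerU API on localhost and a hypothetical PDF path; the params keys mirror the defaults parsed in __init__.

from open_webui.retrieval.loaders.mineru import MinerULoader

loader = MinerULoader(
    file_path="/tmp/report.pdf",        # hypothetical input PDF
    api_mode="local",                   # "local" -> synchronous /file_parse, "cloud" -> async batch API
    api_url="http://localhost:8000",
    params={
        "enable_ocr": False,            # False -> parse_method "auto", True -> "ocr"
        "enable_formula": True,
        "enable_table": True,
        "language": "en",
        "model_version": "pipeline",    # "vlm" selects the vlm-vllm-engine backend locally
    },
)
docs = loader.load()                    # one Document whose page_content is the returned markdown
print(docs[0].metadata)                 # {"source": "report.pdf", "api_mode": "local", ...}

Cloud mode only needs api_mode="cloud" plus an api_key; the presigned upload, polling, and ZIP extraction steps above are then handled internally.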

View file

@ -30,10 +30,9 @@ class MistralLoader:
- Enhanced error handling with retryable error classification - Enhanced error handling with retryable error classification
""" """
BASE_API_URL = "https://api.mistral.ai/v1"
def __init__( def __init__(
self, self,
base_url: str,
api_key: str, api_key: str,
file_path: str, file_path: str,
timeout: int = 300, # 5 minutes default timeout: int = 300, # 5 minutes default
@ -55,6 +54,9 @@ class MistralLoader:
if not os.path.exists(file_path): if not os.path.exists(file_path):
raise FileNotFoundError(f"File not found at {file_path}") raise FileNotFoundError(f"File not found at {file_path}")
self.base_url = (
base_url.rstrip("/") if base_url else "https://api.mistral.ai/v1"
)
self.api_key = api_key self.api_key = api_key
self.file_path = file_path self.file_path = file_path
self.timeout = timeout self.timeout = timeout
@ -240,7 +242,7 @@ class MistralLoader:
in a context manager to minimize memory usage duration. in a context manager to minimize memory usage duration.
""" """
log.info("Uploading file to Mistral API") log.info("Uploading file to Mistral API")
url = f"{self.BASE_API_URL}/files" url = f"{self.base_url}/files"
def upload_request(): def upload_request():
# MEMORY OPTIMIZATION: Use context manager to minimize file handle lifetime # MEMORY OPTIMIZATION: Use context manager to minimize file handle lifetime
@ -275,7 +277,7 @@ class MistralLoader:
async def _upload_file_async(self, session: aiohttp.ClientSession) -> str: async def _upload_file_async(self, session: aiohttp.ClientSession) -> str:
"""Async file upload with streaming for better memory efficiency.""" """Async file upload with streaming for better memory efficiency."""
url = f"{self.BASE_API_URL}/files" url = f"{self.base_url}/files"
async def upload_request(): async def upload_request():
# Create multipart writer for streaming upload # Create multipart writer for streaming upload
@ -321,7 +323,7 @@ class MistralLoader:
def _get_signed_url(self, file_id: str) -> str: def _get_signed_url(self, file_id: str) -> str:
"""Retrieves a temporary signed URL for the uploaded file (sync version).""" """Retrieves a temporary signed URL for the uploaded file (sync version)."""
log.info(f"Getting signed URL for file ID: {file_id}") log.info(f"Getting signed URL for file ID: {file_id}")
url = f"{self.BASE_API_URL}/files/{file_id}/url" url = f"{self.base_url}/files/{file_id}/url"
params = {"expiry": 1} params = {"expiry": 1}
signed_url_headers = {**self.headers, "Accept": "application/json"} signed_url_headers = {**self.headers, "Accept": "application/json"}
@ -346,7 +348,7 @@ class MistralLoader:
self, session: aiohttp.ClientSession, file_id: str self, session: aiohttp.ClientSession, file_id: str
) -> str: ) -> str:
"""Async signed URL retrieval.""" """Async signed URL retrieval."""
url = f"{self.BASE_API_URL}/files/{file_id}/url" url = f"{self.base_url}/files/{file_id}/url"
params = {"expiry": 1} params = {"expiry": 1}
headers = {**self.headers, "Accept": "application/json"} headers = {**self.headers, "Accept": "application/json"}
@ -373,7 +375,7 @@ class MistralLoader:
def _process_ocr(self, signed_url: str) -> Dict[str, Any]: def _process_ocr(self, signed_url: str) -> Dict[str, Any]:
"""Sends the signed URL to the OCR endpoint for processing (sync version).""" """Sends the signed URL to the OCR endpoint for processing (sync version)."""
log.info("Processing OCR via Mistral API") log.info("Processing OCR via Mistral API")
url = f"{self.BASE_API_URL}/ocr" url = f"{self.base_url}/ocr"
ocr_headers = { ocr_headers = {
**self.headers, **self.headers,
"Content-Type": "application/json", "Content-Type": "application/json",
@ -407,7 +409,7 @@ class MistralLoader:
self, session: aiohttp.ClientSession, signed_url: str self, session: aiohttp.ClientSession, signed_url: str
) -> Dict[str, Any]: ) -> Dict[str, Any]:
"""Async OCR processing with timing metrics.""" """Async OCR processing with timing metrics."""
url = f"{self.BASE_API_URL}/ocr" url = f"{self.base_url}/ocr"
headers = { headers = {
**self.headers, **self.headers,
@ -446,7 +448,7 @@ class MistralLoader:
def _delete_file(self, file_id: str) -> None: def _delete_file(self, file_id: str) -> None:
"""Deletes the file from Mistral storage (sync version).""" """Deletes the file from Mistral storage (sync version)."""
log.info(f"Deleting uploaded file ID: {file_id}") log.info(f"Deleting uploaded file ID: {file_id}")
url = f"{self.BASE_API_URL}/files/{file_id}" url = f"{self.base_url}/files/{file_id}"
try: try:
response = requests.delete( response = requests.delete(
@ -467,7 +469,7 @@ class MistralLoader:
async def delete_request(): async def delete_request():
self._debug_log(f"Deleting file ID: {file_id}") self._debug_log(f"Deleting file ID: {file_id}")
async with session.delete( async with session.delete(
url=f"{self.BASE_API_URL}/files/{file_id}", url=f"{self.base_url}/files/{file_id}",
headers=self.headers, headers=self.headers,
timeout=aiohttp.ClientTimeout( timeout=aiohttp.ClientTimeout(
total=self.cleanup_timeout total=self.cleanup_timeout
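Since the BASE_API_URL constant is gone, callers now pass the base URL explicitly. A small sketch of the updated constructor call with a placeholder key and file; per the fallback above, an empty base_url still resolves to https://api.mistral.ai/v1.

from open_webui.retrieval.loaders.mistral import MistralLoader

loader = MistralLoader(
    base_url="https://api.mistral.ai/v1",   # now configurable instead of the old class constant
    api_key="sk-placeholder",
    file_path="/tmp/scan.pdf",              # hypothetical PDF to OCR
)
docs = loader.load()                        # upload -> signed URL -> /ocr -> cleanup, against base_url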

View file

@ -83,6 +83,7 @@ class YoutubeLoader:
TranscriptsDisabled, TranscriptsDisabled,
YouTubeTranscriptApi, YouTubeTranscriptApi,
) )
from youtube_transcript_api.proxies import GenericProxyConfig
except ImportError: except ImportError:
raise ImportError( raise ImportError(
'Could not import "youtube_transcript_api" Python package. ' 'Could not import "youtube_transcript_api" Python package. '
@ -90,10 +91,9 @@ class YoutubeLoader:
) )
if self.proxy_url: if self.proxy_url:
youtube_proxies = { youtube_proxies = GenericProxyConfig(
"http": self.proxy_url, http_url=self.proxy_url, https_url=self.proxy_url
"https": self.proxy_url, )
}
log.debug(f"Using proxy URL: {self.proxy_url[:14]}...") log.debug(f"Using proxy URL: {self.proxy_url[:14]}...")
else: else:
youtube_proxies = None youtube_proxies = None
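The loader now builds a GenericProxyConfig instead of a plain requests-style dict. A standalone sketch, assuming youtube-transcript-api 1.x and a placeholder proxy URL; passing the config to the YouTubeTranscriptApi constructor is the 1.x pattern and is an assumption here, since that call sits outside this hunk.

from youtube_transcript_api import YouTubeTranscriptApi
from youtube_transcript_api.proxies import GenericProxyConfig

proxy_config = GenericProxyConfig(
    http_url="http://proxy.example:8080",   # placeholder proxy
    https_url="http://proxy.example:8080",
)
ytt_api = YouTubeTranscriptApi(proxy_config=proxy_config)  # replaces the old {"http": ..., "https": ...} dict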

View file

@ -71,6 +71,7 @@ def get_loader(request, url: str):
url, url,
verify_ssl=request.app.state.config.ENABLE_WEB_LOADER_SSL_VERIFICATION, verify_ssl=request.app.state.config.ENABLE_WEB_LOADER_SSL_VERIFICATION,
requests_per_second=request.app.state.config.WEB_LOADER_CONCURRENT_REQUESTS, requests_per_second=request.app.state.config.WEB_LOADER_CONCURRENT_REQUESTS,
trust_env=request.app.state.config.WEB_SEARCH_TRUST_ENV,
) )
@ -159,10 +160,18 @@ def query_doc_with_hybrid_search(
hybrid_bm25_weight: float, hybrid_bm25_weight: float,
) -> dict: ) -> dict:
try: try:
# First check if collection_result has the required attributes
if ( if (
not collection_result not collection_result
or not hasattr(collection_result, "documents") or not hasattr(collection_result, "documents")
or not collection_result.documents or not hasattr(collection_result, "metadatas")
):
log.warning(f"query_doc_with_hybrid_search:no_docs {collection_name}")
return {"documents": [], "metadatas": [], "distances": []}
# Now safely check the documents content after confirming attributes exist
if (
not collection_result.documents
or len(collection_result.documents) == 0 or len(collection_result.documents) == 0
or not collection_result.documents[0] or not collection_result.documents[0]
): ):
@ -268,6 +277,13 @@ def merge_and_sort_query_results(query_results: list[dict], k: int) -> dict:
combined = dict() # To store documents with unique document hashes combined = dict() # To store documents with unique document hashes
for data in query_results: for data in query_results:
if (
len(data.get("distances", [])) == 0
or len(data.get("documents", [])) == 0
or len(data.get("metadatas", [])) == 0
):
continue
distances = data["distances"][0] distances = data["distances"][0]
documents = data["documents"][0] documents = data["documents"][0]
metadatas = data["metadatas"][0] metadatas = data["metadatas"][0]
@ -500,11 +516,13 @@ def get_reranking_function(reranking_engine, reranking_model, reranking_function
if reranking_function is None: if reranking_function is None:
return None return None
if reranking_engine == "external": if reranking_engine == "external":
return lambda sentences, user=None: reranking_function.predict( return lambda query, documents, user=None: reranking_function.predict(
sentences, user=user [(query, doc.page_content) for doc in documents], user=user
) )
else: else:
return lambda sentences, user=None: reranking_function.predict(sentences) return lambda query, documents, user=None: reranking_function.predict(
[(query, doc.page_content) for doc in documents]
)
def get_sources_from_items( def get_sources_from_items(
@ -661,46 +679,51 @@ def get_sources_from_items(
collection_names.append(f"file-{item['id']}") collection_names.append(f"file-{item['id']}")
elif item.get("type") == "collection": elif item.get("type") == "collection":
if ( # Manual Full Mode Toggle for Collection
item.get("context") == "full" knowledge_base = Knowledges.get_knowledge_by_id(item.get("id"))
or request.app.state.config.BYPASS_EMBEDDING_AND_RETRIEVAL
if knowledge_base and (
user.role == "admin"
or knowledge_base.user_id == user.id
or has_access(user.id, "read", knowledge_base.access_control)
): ):
# Manual Full Mode Toggle for Collection if (
knowledge_base = Knowledges.get_knowledge_by_id(item.get("id")) item.get("context") == "full"
or request.app.state.config.BYPASS_EMBEDDING_AND_RETRIEVAL
if knowledge_base and (
user.role == "admin"
or knowledge_base.user_id == user.id
or has_access(user.id, "read", knowledge_base.access_control)
): ):
if knowledge_base and (
user.role == "admin"
or knowledge_base.user_id == user.id
or has_access(user.id, "read", knowledge_base.access_control)
):
file_ids = knowledge_base.data.get("file_ids", []) file_ids = knowledge_base.data.get("file_ids", [])
documents = [] documents = []
metadatas = [] metadatas = []
for file_id in file_ids: for file_id in file_ids:
file_object = Files.get_file_by_id(file_id) file_object = Files.get_file_by_id(file_id)
if file_object: if file_object:
documents.append(file_object.data.get("content", "")) documents.append(file_object.data.get("content", ""))
metadatas.append( metadatas.append(
{ {
"file_id": file_id, "file_id": file_id,
"name": file_object.filename, "name": file_object.filename,
"source": file_object.filename, "source": file_object.filename,
} }
) )
query_result = { query_result = {
"documents": [documents], "documents": [documents],
"metadatas": [metadatas], "metadatas": [metadatas],
} }
else:
# Fallback to collection names
if item.get("legacy"):
collection_names = item.get("collection_names", [])
else: else:
collection_names.append(item["id"]) # Fallback to collection names
if item.get("legacy"):
collection_names = item.get("collection_names", [])
else:
collection_names.append(item["id"])
elif item.get("docs"): elif item.get("docs"):
# BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL # BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL
@ -1043,9 +1066,7 @@ class RerankCompressor(BaseDocumentCompressor):
scores = None scores = None
if reranking: if reranking:
scores = self.reranking_function( scores = self.reranking_function(query, documents)
[(query, doc.page_content) for doc in documents]
)
else: else:
from sentence_transformers import util from sentence_transformers import util
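After this change the callable returned by get_reranking_function takes the raw query and Document objects and builds the (query, passage) pairs itself, which is why RerankCompressor above now calls reranking_function(query, documents). A toy sketch of the new contract with a stand-in model; the length-based scoring is obviously hypothetical.

from langchain_core.documents import Document

class FakeRerankModel:
    def predict(self, pairs, user=None):
        # one relevance score per (query, passage) pair
        return [float(len(passage)) for _query, passage in pairs]

model = FakeRerankModel()
rerank = lambda query, documents, user=None: model.predict(
    [(query, doc.page_content) for doc in documents], user=user
)

docs = [Document(page_content="short"), Document(page_content="a longer passage")]
print(rerank("example query", docs))   # e.g. [5.0, 16.0]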

View file

@ -717,7 +717,7 @@ class Oracle23aiClient(VectorDBBase):
) )
try: try:
limit = limit or 1000 limit = 1000 # Hardcoded limit for get operation
with self.get_connection() as connection: with self.get_connection() as connection:
with connection.cursor() as cursor: with connection.cursor() as cursor:

View file

@ -2,27 +2,42 @@ import logging
from typing import Optional, List from typing import Optional, List
import requests import requests
from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from fastapi import Request
from open_webui.env import SRC_LOG_LEVELS from open_webui.env import SRC_LOG_LEVELS
from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.utils.headers import include_user_info_headers
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"]) log.setLevel(SRC_LOG_LEVELS["RAG"])
def search_external( def search_external(
request: Request,
external_url: str, external_url: str,
external_api_key: str, external_api_key: str,
query: str, query: str,
count: int, count: int,
filter_list: Optional[List[str]] = None, filter_list: Optional[List[str]] = None,
user=None,
) -> List[SearchResult]: ) -> List[SearchResult]:
try: try:
headers = {
"User-Agent": "Open WebUI (https://github.com/open-webui/open-webui) RAG Bot",
"Authorization": f"Bearer {external_api_key}",
}
headers = include_user_info_headers(headers, user)
chat_id = getattr(request.state, "chat_id", None)
if chat_id:
headers["X-OpenWebUI-Chat-Id"] = str(chat_id)
response = requests.post( response = requests.post(
external_url, external_url,
headers={ headers=headers,
"User-Agent": "Open WebUI (https://github.com/open-webui/open-webui) RAG Bot",
"Authorization": f"Bearer {external_api_key}",
},
json={ json={
"query": query, "query": query,
"count": count, "count": count,

View file

@ -1,11 +1,10 @@
import logging import logging
from typing import Optional, List from typing import Optional, List
from urllib.parse import urljoin
import requests
from open_webui.retrieval.web.main import SearchResult, get_filtered_results from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.env import SRC_LOG_LEVELS from open_webui.env import SRC_LOG_LEVELS
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"]) log.setLevel(SRC_LOG_LEVELS["RAG"])
@ -18,27 +17,20 @@ def search_firecrawl(
filter_list: Optional[List[str]] = None, filter_list: Optional[List[str]] = None,
) -> List[SearchResult]: ) -> List[SearchResult]:
try: try:
firecrawl_search_url = urljoin(firecrawl_url, "/v1/search") from firecrawl import FirecrawlApp
response = requests.post(
firecrawl_search_url, firecrawl = FirecrawlApp(api_key=firecrawl_api_key, api_url=firecrawl_url)
headers={ response = firecrawl.search(
"User-Agent": "Open WebUI (https://github.com/open-webui/open-webui) RAG Bot", query=query, limit=count, ignore_invalid_urls=True, timeout=count * 3
"Authorization": f"Bearer {firecrawl_api_key}",
},
json={
"query": query,
"limit": count,
},
) )
response.raise_for_status() results = response.web
results = response.json().get("data", [])
if filter_list: if filter_list:
results = get_filtered_results(results, filter_list) results = get_filtered_results(results, filter_list)
results = [ results = [
SearchResult( SearchResult(
link=result.get("url"), link=result.url,
title=result.get("title"), title=result.title,
snippet=result.get("description"), snippet=result.description,
) )
for result in results[:count] for result in results[:count]
] ]
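The raw POST to /v1/search is replaced by the firecrawl-py SDK. A condensed sketch of the same call outside Open WebUI, assuming the SDK is installed and using a placeholder API key; the timeout mirrors the count * 3 heuristic above.

from firecrawl import FirecrawlApp

firecrawl = FirecrawlApp(api_key="fc-placeholder", api_url="https://api.firecrawl.dev")
response = firecrawl.search(query="open webui", limit=3, ignore_invalid_urls=True, timeout=9)
for result in response.web:
    print(result.url, result.title, result.description)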

View file

@ -15,6 +15,7 @@ def search_google_pse(
query: str, query: str,
count: int, count: int,
filter_list: Optional[list[str]] = None, filter_list: Optional[list[str]] = None,
referer: Optional[str] = None,
) -> list[SearchResult]: ) -> list[SearchResult]:
"""Search using Google's Programmable Search Engine API and return the results as a list of SearchResult objects. """Search using Google's Programmable Search Engine API and return the results as a list of SearchResult objects.
Handles pagination for counts greater than 10. Handles pagination for counts greater than 10.
@ -30,7 +31,11 @@ def search_google_pse(
list[SearchResult]: A list of SearchResult objects. list[SearchResult]: A list of SearchResult objects.
""" """
url = "https://www.googleapis.com/customsearch/v1" url = "https://www.googleapis.com/customsearch/v1"
headers = {"Content-Type": "application/json"} headers = {"Content-Type": "application/json"}
if referer:
headers["Referer"] = referer
all_results = [] all_results = []
start_index = 1 # Google PSE start parameter is 1-based start_index = 1 # Google PSE start parameter is 1-based

View file

@ -4,7 +4,6 @@ import socket
import ssl import ssl
import urllib.parse import urllib.parse
import urllib.request import urllib.request
from collections import defaultdict
from datetime import datetime, time, timedelta from datetime import datetime, time, timedelta
from typing import ( from typing import (
Any, Any,
@ -17,11 +16,12 @@ from typing import (
Union, Union,
Literal, Literal,
) )
from fastapi.concurrency import run_in_threadpool
import aiohttp import aiohttp
import certifi import certifi
import validators import validators
from langchain_community.document_loaders import PlaywrightURLLoader, WebBaseLoader from langchain_community.document_loaders import PlaywrightURLLoader, WebBaseLoader
from langchain_community.document_loaders.firecrawl import FireCrawlLoader
from langchain_community.document_loaders.base import BaseLoader from langchain_community.document_loaders.base import BaseLoader
from langchain_core.documents import Document from langchain_core.documents import Document
from open_webui.retrieval.loaders.tavily import TavilyLoader from open_webui.retrieval.loaders.tavily import TavilyLoader
@ -39,7 +39,8 @@ from open_webui.config import (
EXTERNAL_WEB_LOADER_URL, EXTERNAL_WEB_LOADER_URL,
EXTERNAL_WEB_LOADER_API_KEY, EXTERNAL_WEB_LOADER_API_KEY,
) )
from open_webui.env import SRC_LOG_LEVELS, AIOHTTP_CLIENT_SESSION_SSL from open_webui.env import SRC_LOG_LEVELS
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"]) log.setLevel(SRC_LOG_LEVELS["RAG"])
@ -142,13 +143,13 @@ class RateLimitMixin:
class URLProcessingMixin: class URLProcessingMixin:
def _verify_ssl_cert(self, url: str) -> bool: async def _verify_ssl_cert(self, url: str) -> bool:
"""Verify SSL certificate for a URL.""" """Verify SSL certificate for a URL."""
return verify_ssl_cert(url) return await run_in_threadpool(verify_ssl_cert, url)
async def _safe_process_url(self, url: str) -> bool: async def _safe_process_url(self, url: str) -> bool:
"""Perform safety checks before processing a URL.""" """Perform safety checks before processing a URL."""
if self.verify_ssl and not self._verify_ssl_cert(url): if self.verify_ssl and not await self._verify_ssl_cert(url):
raise ValueError(f"SSL certificate verification failed for {url}") raise ValueError(f"SSL certificate verification failed for {url}")
await self._wait_for_rate_limit() await self._wait_for_rate_limit()
return True return True
@ -189,13 +190,12 @@ class SafeFireCrawlLoader(BaseLoader, RateLimitMixin, URLProcessingMixin):
(uses FIRE_CRAWL_API_KEY environment variable if not provided). (uses FIRE_CRAWL_API_KEY environment variable if not provided).
api_url: Base URL for FireCrawl API. Defaults to official API endpoint. api_url: Base URL for FireCrawl API. Defaults to official API endpoint.
mode: Operation mode selection: mode: Operation mode selection:
- 'crawl': Website crawling mode (default) - 'crawl': Website crawling mode
- 'scrape': Direct page scraping - 'scrape': Direct page scraping (default)
- 'map': Site map generation - 'map': Site map generation
proxy: Proxy override settings for the FireCrawl API. proxy: Proxy override settings for the FireCrawl API.
params: The parameters to pass to the Firecrawl API. params: The parameters to pass to the Firecrawl API.
Examples include crawlerOptions. For more details, visit: https://docs.firecrawl.dev/sdks/python#batch-scrape
For more details, visit: https://github.com/mendableai/firecrawl-py
""" """
proxy_server = proxy.get("server") if proxy else None proxy_server = proxy.get("server") if proxy else None
if trust_env and not proxy_server: if trust_env and not proxy_server:
@ -215,50 +215,88 @@ class SafeFireCrawlLoader(BaseLoader, RateLimitMixin, URLProcessingMixin):
self.api_key = api_key self.api_key = api_key
self.api_url = api_url self.api_url = api_url
self.mode = mode self.mode = mode
self.params = params self.params = params or {}
def lazy_load(self) -> Iterator[Document]: def lazy_load(self) -> Iterator[Document]:
"""Load documents concurrently using FireCrawl.""" """Load documents using FireCrawl batch_scrape."""
for url in self.web_paths: log.debug(
try: "Starting FireCrawl batch scrape for %d URLs, mode: %s, params: %s",
self._safe_process_url_sync(url) len(self.web_paths),
loader = FireCrawlLoader( self.mode,
url=url, self.params,
api_key=self.api_key, )
api_url=self.api_url, try:
mode=self.mode, from firecrawl import FirecrawlApp
params=self.params,
firecrawl = FirecrawlApp(api_key=self.api_key, api_url=self.api_url)
result = firecrawl.batch_scrape(
self.web_paths,
formats=["markdown"],
skip_tls_verification=not self.verify_ssl,
ignore_invalid_urls=True,
remove_base64_images=True,
max_age=300000, # 5 minutes https://docs.firecrawl.dev/features/fast-scraping#common-maxage-values
wait_timeout=len(self.web_paths) * 3,
**self.params,
)
if result.status != "completed":
raise RuntimeError(
f"FireCrawl batch scrape did not complete successfully. result: {result}"
) )
for document in loader.lazy_load():
if not document.metadata.get("source"): for data in result.data:
document.metadata["source"] = document.metadata.get("sourceURL") metadata = data.metadata or {}
yield document yield Document(
except Exception as e: page_content=data.markdown or "",
if self.continue_on_failure: metadata={"source": metadata.url or metadata.source_url or ""},
log.exception(f"Error loading {url}: {e}") )
continue
except Exception as e:
if self.continue_on_failure:
log.exception(f"Error extracting content from URLs: {e}")
else:
raise e raise e
async def alazy_load(self): async def alazy_load(self):
"""Async version of lazy_load.""" """Async version of lazy_load."""
for url in self.web_paths: log.debug(
try: "Starting FireCrawl batch scrape for %d URLs, mode: %s, params: %s",
await self._safe_process_url(url) len(self.web_paths),
loader = FireCrawlLoader( self.mode,
url=url, self.params,
api_key=self.api_key, )
api_url=self.api_url, try:
mode=self.mode, from firecrawl import FirecrawlApp
params=self.params,
firecrawl = FirecrawlApp(api_key=self.api_key, api_url=self.api_url)
result = firecrawl.batch_scrape(
self.web_paths,
formats=["markdown"],
skip_tls_verification=not self.verify_ssl,
ignore_invalid_urls=True,
remove_base64_images=True,
max_age=300000, # 5 minutes https://docs.firecrawl.dev/features/fast-scraping#common-maxage-values
wait_timeout=len(self.web_paths) * 3,
**self.params,
)
if result.status != "completed":
raise RuntimeError(
f"FireCrawl batch scrape did not complete successfully. result: {result}"
) )
async for document in loader.alazy_load():
if not document.metadata.get("source"): for data in result.data:
document.metadata["source"] = document.metadata.get("sourceURL") metadata = data.metadata or {}
yield document yield Document(
except Exception as e: page_content=data.markdown or "",
if self.continue_on_failure: metadata={"source": metadata.url or metadata.source_url or ""},
log.exception(f"Error loading {url}: {e}") )
continue
except Exception as e:
if self.continue_on_failure:
log.exception(f"Error extracting content from URLs: {e}")
else:
raise e raise e
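Likewise, both lazy_load variants now delegate to a single batch_scrape call instead of per-URL FireCrawlLoader instances. A condensed sketch of that call with placeholder URLs and key; the keyword arguments mirror the ones used above.

from firecrawl import FirecrawlApp

firecrawl = FirecrawlApp(api_key="fc-placeholder", api_url="https://api.firecrawl.dev")
result = firecrawl.batch_scrape(
    ["https://example.com/a", "https://example.com/b"],
    formats=["markdown"],
    ignore_invalid_urls=True,
    remove_base64_images=True,
    max_age=300000,                     # serve cached pages up to 5 minutes old
    wait_timeout=6,                     # 3 seconds per URL, as above
)
if result.status == "completed":
    for data in result.data:
        source = data.metadata.url if data.metadata else ""
        print(source, len(data.markdown or ""))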

View file

@ -4,6 +4,7 @@ import logging
import os import os
import uuid import uuid
import html import html
import base64
from functools import lru_cache from functools import lru_cache
from pydub import AudioSegment from pydub import AudioSegment
from pydub.silence import split_on_silence from pydub.silence import split_on_silence
@ -39,13 +40,14 @@ from open_webui.config import (
WHISPER_MODEL_DIR, WHISPER_MODEL_DIR,
CACHE_DIR, CACHE_DIR,
WHISPER_LANGUAGE, WHISPER_LANGUAGE,
ELEVENLABS_API_BASE_URL,
) )
from open_webui.constants import ERROR_MESSAGES from open_webui.constants import ERROR_MESSAGES
from open_webui.env import ( from open_webui.env import (
ENV,
AIOHTTP_CLIENT_SESSION_SSL, AIOHTTP_CLIENT_SESSION_SSL,
AIOHTTP_CLIENT_TIMEOUT, AIOHTTP_CLIENT_TIMEOUT,
ENV,
SRC_LOG_LEVELS, SRC_LOG_LEVELS,
DEVICE_TYPE, DEVICE_TYPE,
ENABLE_FORWARD_USER_INFO_HEADERS, ENABLE_FORWARD_USER_INFO_HEADERS,
@ -178,6 +180,9 @@ class STTConfigForm(BaseModel):
AZURE_LOCALES: str AZURE_LOCALES: str
AZURE_BASE_URL: str AZURE_BASE_URL: str
AZURE_MAX_SPEAKERS: str AZURE_MAX_SPEAKERS: str
MISTRAL_API_KEY: str
MISTRAL_API_BASE_URL: str
MISTRAL_USE_CHAT_COMPLETIONS: bool
class AudioConfigUpdateForm(BaseModel): class AudioConfigUpdateForm(BaseModel):
@ -214,6 +219,9 @@ async def get_audio_config(request: Request, user=Depends(get_admin_user)):
"AZURE_LOCALES": request.app.state.config.AUDIO_STT_AZURE_LOCALES, "AZURE_LOCALES": request.app.state.config.AUDIO_STT_AZURE_LOCALES,
"AZURE_BASE_URL": request.app.state.config.AUDIO_STT_AZURE_BASE_URL, "AZURE_BASE_URL": request.app.state.config.AUDIO_STT_AZURE_BASE_URL,
"AZURE_MAX_SPEAKERS": request.app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS, "AZURE_MAX_SPEAKERS": request.app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS,
"MISTRAL_API_KEY": request.app.state.config.AUDIO_STT_MISTRAL_API_KEY,
"MISTRAL_API_BASE_URL": request.app.state.config.AUDIO_STT_MISTRAL_API_BASE_URL,
"MISTRAL_USE_CHAT_COMPLETIONS": request.app.state.config.AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS,
}, },
} }
@ -255,6 +263,13 @@ async def update_audio_config(
request.app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS = ( request.app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS = (
form_data.stt.AZURE_MAX_SPEAKERS form_data.stt.AZURE_MAX_SPEAKERS
) )
request.app.state.config.AUDIO_STT_MISTRAL_API_KEY = form_data.stt.MISTRAL_API_KEY
request.app.state.config.AUDIO_STT_MISTRAL_API_BASE_URL = (
form_data.stt.MISTRAL_API_BASE_URL
)
request.app.state.config.AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS = (
form_data.stt.MISTRAL_USE_CHAT_COMPLETIONS
)
if request.app.state.config.STT_ENGINE == "": if request.app.state.config.STT_ENGINE == "":
request.app.state.faster_whisper_model = set_faster_whisper_model( request.app.state.faster_whisper_model = set_faster_whisper_model(
@ -290,6 +305,9 @@ async def update_audio_config(
"AZURE_LOCALES": request.app.state.config.AUDIO_STT_AZURE_LOCALES, "AZURE_LOCALES": request.app.state.config.AUDIO_STT_AZURE_LOCALES,
"AZURE_BASE_URL": request.app.state.config.AUDIO_STT_AZURE_BASE_URL, "AZURE_BASE_URL": request.app.state.config.AUDIO_STT_AZURE_BASE_URL,
"AZURE_MAX_SPEAKERS": request.app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS, "AZURE_MAX_SPEAKERS": request.app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS,
"MISTRAL_API_KEY": request.app.state.config.AUDIO_STT_MISTRAL_API_KEY,
"MISTRAL_API_BASE_URL": request.app.state.config.AUDIO_STT_MISTRAL_API_BASE_URL,
"MISTRAL_USE_CHAT_COMPLETIONS": request.app.state.config.AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS,
}, },
} }
@ -413,7 +431,7 @@ async def speech(request: Request, user=Depends(get_verified_user)):
                timeout=timeout, trust_env=True
            ) as session:
                async with session.post(
                    f"{ELEVENLABS_API_BASE_URL}/v1/text-to-speech/{voice_id}",
                    json={
                        "text": payload["input"],
                        "model_id": request.app.state.config.TTS_MODEL,
@ -828,6 +846,186 @@ def transcription_handler(request, file_path, metadata):
detail=detail if detail else "Open WebUI: Server Connection Error", detail=detail if detail else "Open WebUI: Server Connection Error",
) )
elif request.app.state.config.STT_ENGINE == "mistral":
# Check file exists
if not os.path.exists(file_path):
raise HTTPException(status_code=400, detail="Audio file not found")
# Check file size
file_size = os.path.getsize(file_path)
if file_size > MAX_FILE_SIZE:
raise HTTPException(
status_code=400,
detail=f"File size exceeds limit of {MAX_FILE_SIZE_MB}MB",
)
api_key = request.app.state.config.AUDIO_STT_MISTRAL_API_KEY
api_base_url = (
request.app.state.config.AUDIO_STT_MISTRAL_API_BASE_URL
or "https://api.mistral.ai/v1"
)
use_chat_completions = (
request.app.state.config.AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS
)
if not api_key:
raise HTTPException(
status_code=400,
detail="Mistral API key is required for Mistral STT",
)
r = None
try:
# Use voxtral-mini-latest as the default model for transcription
model = request.app.state.config.STT_MODEL or "voxtral-mini-latest"
log.info(
f"Mistral STT - model: {model}, "
f"method: {'chat_completions' if use_chat_completions else 'transcriptions'}"
)
if use_chat_completions:
# Use chat completions API with audio input
# This method requires mp3 or wav format
audio_file_to_use = file_path
if is_audio_conversion_required(file_path):
log.debug("Converting audio to mp3 for chat completions API")
converted_path = convert_audio_to_mp3(file_path)
if converted_path:
audio_file_to_use = converted_path
else:
log.error("Audio conversion failed")
raise HTTPException(
status_code=500,
detail="Audio conversion failed. Chat completions API requires mp3 or wav format.",
)
# Read and encode audio file as base64
with open(audio_file_to_use, "rb") as audio_file:
audio_base64 = base64.b64encode(audio_file.read()).decode("utf-8")
# Prepare chat completions request
url = f"{api_base_url}/chat/completions"
# Add language instruction if specified
language = metadata.get("language", None) if metadata else None
if language:
text_instruction = f"Transcribe this audio exactly as spoken in {language}. Do not translate it."
else:
text_instruction = "Transcribe this audio exactly as spoken in its original language. Do not translate it to another language."
payload = {
"model": model,
"messages": [
{
"role": "user",
"content": [
{
"type": "input_audio",
"input_audio": audio_base64,
},
{"type": "text", "text": text_instruction},
],
}
],
}
r = requests.post(
url=url,
json=payload,
headers={
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json",
},
)
r.raise_for_status()
response = r.json()
# Extract transcript from chat completion response
transcript = (
response.get("choices", [{}])[0]
.get("message", {})
.get("content", "")
.strip()
)
if not transcript:
raise ValueError("Empty transcript in response")
data = {"text": transcript}
else:
# Use dedicated transcriptions API
url = f"{api_base_url}/audio/transcriptions"
# Determine the MIME type
mime_type, _ = mimetypes.guess_type(file_path)
if not mime_type:
mime_type = "audio/webm"
# Use context manager to ensure file is properly closed
with open(file_path, "rb") as audio_file:
files = {"file": (filename, audio_file, mime_type)}
data_form = {"model": model}
# Add language if specified in metadata
language = metadata.get("language", None) if metadata else None
if language:
data_form["language"] = language
r = requests.post(
url=url,
files=files,
data=data_form,
headers={
"Authorization": f"Bearer {api_key}",
},
)
r.raise_for_status()
response = r.json()
# Extract transcript from response
transcript = response.get("text", "").strip()
if not transcript:
raise ValueError("Empty transcript in response")
data = {"text": transcript}
# Save transcript to json file (consistent with other providers)
transcript_file = f"{file_dir}/{id}.json"
with open(transcript_file, "w") as f:
json.dump(data, f)
log.debug(data)
return data
except ValueError as e:
log.exception("Error parsing Mistral response")
raise HTTPException(
status_code=500,
detail=f"Failed to parse Mistral response: {str(e)}",
)
except requests.exceptions.RequestException as e:
log.exception(e)
detail = None
try:
if r is not None and r.status_code != 200:
res = r.json()
if "error" in res:
detail = f"External: {res['error'].get('message', '')}"
else:
detail = f"External: {r.text}"
except Exception:
detail = f"External: {e}"
raise HTTPException(
status_code=getattr(r, "status_code", 500) if r else 500,
detail=detail if detail else "Open WebUI: Server Connection Error",
)
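For reference, a hedged sketch of the two request shapes the Mistral handler above issues, written as standalone calls; the endpoint paths, payload fields, and the `voxtral-mini-latest` default mirror the handler, while the API key and audio file are placeholders.

import base64
import requests

API_BASE = "https://api.mistral.ai/v1"  # default base URL used above
API_KEY = "<your-api-key>"              # placeholder

# 1) Dedicated transcription endpoint
with open("speech.mp3", "rb") as f:
    r = requests.post(
        f"{API_BASE}/audio/transcriptions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"file": ("speech.mp3", f, "audio/mpeg")},
        data={"model": "voxtral-mini-latest"},
    )
print(r.json().get("text"))

# 2) Chat-completions route with inline base64 audio
with open("speech.mp3", "rb") as f:
    audio_b64 = base64.b64encode(f.read()).decode("utf-8")
r = requests.post(
    f"{API_BASE}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "voxtral-mini-latest",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "input_audio", "input_audio": audio_b64},
                    {"type": "text", "text": "Transcribe this audio exactly as spoken."},
                ],
            }
        ],
    },
)
print(r.json()["choices"][0]["message"]["content"])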
def transcribe(request: Request, file_path: str, metadata: Optional[dict] = None): def transcribe(request: Request, file_path: str, metadata: Optional[dict] = None):
log.info(f"transcribe: {file_path} {metadata}") log.info(f"transcribe: {file_path} {metadata}")
@ -1037,7 +1235,7 @@ def get_available_models(request: Request) -> list[dict]:
    elif request.app.state.config.TTS_ENGINE == "elevenlabs":
        try:
            response = requests.get(
                f"{ELEVENLABS_API_BASE_URL}/v1/models",
                headers={
                    "xi-api-key": request.app.state.config.TTS_API_KEY,
                    "Content-Type": "application/json",
@ -1141,7 +1339,7 @@ def get_elevenlabs_voices(api_key: str) -> dict:
    try:
        # TODO: Add retries
        response = requests.get(
            f"{ELEVENLABS_API_BASE_URL}/v1/voices",
            headers={
                "xi-api-key": api_key,
                "Content-Type": "application/json",

View file

@ -508,6 +508,15 @@ async def signin(request: Request, response: Response, form_data: SigninForm):
        user = Auths.authenticate_user(admin_email.lower(), admin_password)
    else:
        password_bytes = form_data.password.encode("utf-8")
        if len(password_bytes) > 72:
            # TODO: Implement other hashing algorithms that support longer passwords
            log.info("Password too long, truncating to 72 bytes for bcrypt")
            password_bytes = password_bytes[:72]
            # decode safely — ignore incomplete UTF-8 sequences
            form_data.password = password_bytes.decode("utf-8", errors="ignore")

        user = Auths.authenticate_user(form_data.email.lower(), form_data.password)

    if user:
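A small illustration of why the truncation above decodes with errors="ignore": cutting UTF-8 at the 72-byte bcrypt boundary can split a multi-byte character, and ignoring the dangling bytes restores a valid string.

# Illustrative only, not part of the diff.
password = "p" * 70 + "€"            # "€" is 3 bytes in UTF-8
raw = password.encode("utf-8")        # 73 bytes total
truncated = raw[:72]                  # ends mid-character
print(truncated.decode("utf-8", errors="ignore"))  # 70 "p"s, the broken "€" is dropped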

View file

@ -39,6 +39,7 @@ router = APIRouter()
def get_session_user_chat_list(
    user=Depends(get_verified_user),
    page: Optional[int] = None,
    include_pinned: Optional[bool] = False,
    include_folders: Optional[bool] = False,
):
    try:
@ -47,11 +48,15 @@ def get_session_user_chat_list(
            skip = (page - 1) * limit

            return Chats.get_chat_title_id_list_by_user_id(
                user.id,
                include_folders=include_folders,
                include_pinned=include_pinned,
                skip=skip,
                limit=limit,
            )
        else:
            return Chats.get_chat_title_id_list_by_user_id(
                user.id, include_folders=include_folders, include_pinned=include_pinned
            )
    except Exception as e:
        log.exception(e)

View file

@ -1,4 +1,5 @@
import logging import logging
import copy
from fastapi import APIRouter, Depends, Request, HTTPException from fastapi import APIRouter, Depends, Request, HTTPException
from pydantic import BaseModel, ConfigDict from pydantic import BaseModel, ConfigDict
import aiohttp import aiohttp
@ -15,6 +16,7 @@ from open_webui.utils.tools import (
set_tool_servers, set_tool_servers,
) )
from open_webui.utils.mcp.client import MCPClient from open_webui.utils.mcp.client import MCPClient
from open_webui.models.oauth_sessions import OAuthSessions
from open_webui.env import SRC_LOG_LEVELS from open_webui.env import SRC_LOG_LEVELS
@ -165,6 +167,21 @@ async def set_tool_servers_config(
form_data: ToolServersConfigForm, form_data: ToolServersConfigForm,
user=Depends(get_admin_user), user=Depends(get_admin_user),
): ):
for connection in request.app.state.config.TOOL_SERVER_CONNECTIONS:
server_type = connection.get("type", "openapi")
auth_type = connection.get("auth_type", "none")
if auth_type == "oauth_2.1":
# Remove existing OAuth clients for tool servers
server_id = connection.get("info", {}).get("id")
client_key = f"{server_type}:{server_id}"
try:
request.app.state.oauth_client_manager.remove_client(client_key)
except:
pass
# Set new tool server connections
request.app.state.config.TOOL_SERVER_CONNECTIONS = [ request.app.state.config.TOOL_SERVER_CONNECTIONS = [
connection.model_dump() for connection in form_data.TOOL_SERVER_CONNECTIONS connection.model_dump() for connection in form_data.TOOL_SERVER_CONNECTIONS
] ]
@ -176,6 +193,7 @@ async def set_tool_servers_config(
if server_type == "mcp": if server_type == "mcp":
server_id = connection.get("info", {}).get("id") server_id = connection.get("info", {}).get("id")
auth_type = connection.get("auth_type", "none") auth_type = connection.get("auth_type", "none")
if auth_type == "oauth_2.1" and server_id: if auth_type == "oauth_2.1" and server_id:
try: try:
oauth_client_info = connection.get("info", {}).get( oauth_client_info = connection.get("info", {}).get(
@ -183,7 +201,7 @@ async def set_tool_servers_config(
) )
oauth_client_info = decrypt_data(oauth_client_info) oauth_client_info = decrypt_data(oauth_client_info)
request.app.state.oauth_client_manager.add_client(
f"{server_type}:{server_id}", f"{server_type}:{server_id}",
OAuthClientInformationFull(**oauth_client_info), OAuthClientInformationFull(**oauth_client_info),
) )
@ -211,7 +229,7 @@ async def verify_tool_servers_config(
log.debug( log.debug(
f"Trying to fetch OAuth 2.1 discovery document from {discovery_url}" f"Trying to fetch OAuth 2.1 discovery document from {discovery_url}"
) )
async with aiohttp.ClientSession(trust_env=True) as session:
async with session.get( async with session.get(
discovery_url discovery_url
) as oauth_server_metadata_response: ) as oauth_server_metadata_response:

View file

@ -115,6 +115,10 @@ def process_uploaded_file(request, file, file_path, file_item, file_metadata, us
request.app.state.config.CONTENT_EXTRACTION_ENGINE == "external" request.app.state.config.CONTENT_EXTRACTION_ENGINE == "external"
): ):
process_file(request, ProcessFileForm(file_id=file_item.id), user=user) process_file(request, ProcessFileForm(file_id=file_item.id), user=user)
else:
raise Exception(
f"File type {file.content_type} is not supported for processing"
)
else: else:
log.info( log.info(
f"File type {file.content_type} is not provided, but trying to process anyway" f"File type {file.content_type} is not provided, but trying to process anyway"

File diff suppressed because it is too large

View file

@ -35,12 +35,18 @@ log = logging.getLogger(__name__)
router = APIRouter() router = APIRouter()
def validate_model_id(model_id: str) -> bool:
return model_id and len(model_id) <= 256
###########################
# GetModels
###########################


@router.get(
    "/list", response_model=list[ModelUserResponse]
)  # do NOT use "/" as path, conflicts with main.py
async def get_models(id: Optional[str] = None, user=Depends(get_verified_user)):
    if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
        return Models.get_models()
@ -84,6 +90,12 @@ async def create_new_model(
detail=ERROR_MESSAGES.MODEL_ID_TAKEN, detail=ERROR_MESSAGES.MODEL_ID_TAKEN,
) )
if not validate_model_id(form_data.id):
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail=ERROR_MESSAGES.MODEL_ID_TOO_LONG,
)
else: else:
model = Models.insert_new_model(form_data, user.id) model = Models.insert_new_model(form_data, user.id)
if model: if model:
@ -124,7 +136,8 @@ async def import_models(
for model_data in data: for model_data in data:
# Here, you can add logic to validate model_data if needed # Here, you can add logic to validate model_data if needed
model_id = model_data.get("id") model_id = model_data.get("id")
if model_id:
if model_id and validate_model_id(model_id):
existing_model = Models.get_model_by_id(model_id) existing_model = Models.get_model_by_id(model_id)
if existing_model: if existing_model:
# Update existing model # Update existing model

View file

@ -45,6 +45,7 @@ from open_webui.utils.payload import (
) )
from open_webui.utils.misc import ( from open_webui.utils.misc import (
convert_logit_bias_input_to_json, convert_logit_bias_input_to_json,
stream_chunks_handler,
) )
from open_webui.utils.auth import get_admin_user, get_verified_user from open_webui.utils.auth import get_admin_user, get_verified_user
@ -501,50 +502,55 @@ async def get_all_models(request: Request, user: UserModel) -> dict[str, list]:
            return response
        return None

    def is_supported_openai_models(model_id):
        if any(
            name in model_id
            for name in [
                "babbage",
                "dall-e",
                "davinci",
                "embedding",
                "tts",
                "whisper",
            ]
        ):
            return False
        return True

    def get_merged_models(model_lists):
        log.debug(f"merge_models_lists {model_lists}")
        models = {}

        for idx, model_list in enumerate(model_lists):
            if model_list is not None and "error" not in model_list:
                for model in model_list:
                    model_id = model.get("id") or model.get("name")

                    if (
                        "api.openai.com"
                        in request.app.state.config.OPENAI_API_BASE_URLS[idx]
                        and not is_supported_openai_models(model_id)
                    ):
                        # Skip unwanted OpenAI models
                        continue

                    if model_id and model_id not in models:
                        models[model_id] = {
                            **model,
                            "name": model.get("name", model_id),
                            "owned_by": "openai",
                            "openai": model,
                            "connection_type": model.get("connection_type", "external"),
                            "urlIdx": idx,
                        }

        return models

    models = get_merged_models(map(extract_data, responses))
    log.debug(f"models: {models}")

    request.app.state.OPENAI_MODELS = models
    return {"data": list(models.values())}
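A quick illustration of the filter above, shown as if the helper were importable (in the router it is a local function); the checks are plain substring matches, so any model id containing a listed fragment is skipped for api.openai.com connections.

assert is_supported_openai_models("gpt-4o") is True
assert is_supported_openai_models("whisper-1") is False               # contains "whisper"
assert is_supported_openai_models("text-embedding-3-small") is False  # contains "embedding"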
@router.get("/models") @router.get("/models")
@ -947,7 +953,7 @@ async def generate_chat_completion(
if "text/event-stream" in r.headers.get("Content-Type", ""): if "text/event-stream" in r.headers.get("Content-Type", ""):
streaming = True streaming = True
return StreamingResponse( return StreamingResponse(
stream_chunks_handler(r.content),
status_code=r.status, status_code=r.status,
headers=dict(r.headers), headers=dict(r.headers),
background=BackgroundTask( background=BackgroundTask(

View file

@ -32,7 +32,7 @@ from langchain.text_splitter import RecursiveCharacterTextSplitter, TokenTextSpl
from langchain_text_splitters import MarkdownHeaderTextSplitter from langchain_text_splitters import MarkdownHeaderTextSplitter
from langchain_core.documents import Document from langchain_core.documents import Document
from open_webui.models.files import FileModel, FileUpdateForm, Files
from open_webui.models.knowledge import Knowledges from open_webui.models.knowledge import Knowledges
from open_webui.storage.provider import Storage from open_webui.storage.provider import Storage
@ -465,7 +465,13 @@ async def get_rag_config(request: Request, user=Depends(get_admin_user)):
"DOCLING_PICTURE_DESCRIPTION_API": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API, "DOCLING_PICTURE_DESCRIPTION_API": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API,
"DOCUMENT_INTELLIGENCE_ENDPOINT": request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT, "DOCUMENT_INTELLIGENCE_ENDPOINT": request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT,
"DOCUMENT_INTELLIGENCE_KEY": request.app.state.config.DOCUMENT_INTELLIGENCE_KEY, "DOCUMENT_INTELLIGENCE_KEY": request.app.state.config.DOCUMENT_INTELLIGENCE_KEY,
"MISTRAL_OCR_API_BASE_URL": request.app.state.config.MISTRAL_OCR_API_BASE_URL,
"MISTRAL_OCR_API_KEY": request.app.state.config.MISTRAL_OCR_API_KEY, "MISTRAL_OCR_API_KEY": request.app.state.config.MISTRAL_OCR_API_KEY,
# MinerU settings
"MINERU_API_MODE": request.app.state.config.MINERU_API_MODE,
"MINERU_API_URL": request.app.state.config.MINERU_API_URL,
"MINERU_API_KEY": request.app.state.config.MINERU_API_KEY,
"MINERU_PARAMS": request.app.state.config.MINERU_PARAMS,
# Reranking settings # Reranking settings
"RAG_RERANKING_MODEL": request.app.state.config.RAG_RERANKING_MODEL, "RAG_RERANKING_MODEL": request.app.state.config.RAG_RERANKING_MODEL,
"RAG_RERANKING_ENGINE": request.app.state.config.RAG_RERANKING_ENGINE, "RAG_RERANKING_ENGINE": request.app.state.config.RAG_RERANKING_ENGINE,
@ -645,8 +651,15 @@ class ConfigForm(BaseModel):
DOCLING_PICTURE_DESCRIPTION_API: Optional[dict] = None DOCLING_PICTURE_DESCRIPTION_API: Optional[dict] = None
DOCUMENT_INTELLIGENCE_ENDPOINT: Optional[str] = None DOCUMENT_INTELLIGENCE_ENDPOINT: Optional[str] = None
DOCUMENT_INTELLIGENCE_KEY: Optional[str] = None DOCUMENT_INTELLIGENCE_KEY: Optional[str] = None
MISTRAL_OCR_API_BASE_URL: Optional[str] = None
MISTRAL_OCR_API_KEY: Optional[str] = None MISTRAL_OCR_API_KEY: Optional[str] = None
# MinerU settings
MINERU_API_MODE: Optional[str] = None
MINERU_API_URL: Optional[str] = None
MINERU_API_KEY: Optional[str] = None
MINERU_PARAMS: Optional[dict] = None
# Reranking settings # Reranking settings
RAG_RERANKING_MODEL: Optional[str] = None RAG_RERANKING_MODEL: Optional[str] = None
RAG_RERANKING_ENGINE: Optional[str] = None RAG_RERANKING_ENGINE: Optional[str] = None
@ -880,12 +893,40 @@ async def update_rag_config(
if form_data.DOCUMENT_INTELLIGENCE_KEY is not None if form_data.DOCUMENT_INTELLIGENCE_KEY is not None
else request.app.state.config.DOCUMENT_INTELLIGENCE_KEY else request.app.state.config.DOCUMENT_INTELLIGENCE_KEY
) )
request.app.state.config.MISTRAL_OCR_API_BASE_URL = (
form_data.MISTRAL_OCR_API_BASE_URL
if form_data.MISTRAL_OCR_API_BASE_URL is not None
else request.app.state.config.MISTRAL_OCR_API_BASE_URL
)
request.app.state.config.MISTRAL_OCR_API_KEY = ( request.app.state.config.MISTRAL_OCR_API_KEY = (
form_data.MISTRAL_OCR_API_KEY form_data.MISTRAL_OCR_API_KEY
if form_data.MISTRAL_OCR_API_KEY is not None if form_data.MISTRAL_OCR_API_KEY is not None
else request.app.state.config.MISTRAL_OCR_API_KEY else request.app.state.config.MISTRAL_OCR_API_KEY
) )
# MinerU settings
request.app.state.config.MINERU_API_MODE = (
form_data.MINERU_API_MODE
if form_data.MINERU_API_MODE is not None
else request.app.state.config.MINERU_API_MODE
)
request.app.state.config.MINERU_API_URL = (
form_data.MINERU_API_URL
if form_data.MINERU_API_URL is not None
else request.app.state.config.MINERU_API_URL
)
request.app.state.config.MINERU_API_KEY = (
form_data.MINERU_API_KEY
if form_data.MINERU_API_KEY is not None
else request.app.state.config.MINERU_API_KEY
)
request.app.state.config.MINERU_PARAMS = (
form_data.MINERU_PARAMS
if form_data.MINERU_PARAMS is not None
else request.app.state.config.MINERU_PARAMS
)
# Reranking settings # Reranking settings
if request.app.state.config.RAG_RERANKING_ENGINE == "": if request.app.state.config.RAG_RERANKING_ENGINE == "":
# Unloading the internal reranker and clear VRAM memory # Unloading the internal reranker and clear VRAM memory
@ -1149,7 +1190,13 @@ async def update_rag_config(
"DOCLING_PICTURE_DESCRIPTION_API": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API, "DOCLING_PICTURE_DESCRIPTION_API": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API,
"DOCUMENT_INTELLIGENCE_ENDPOINT": request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT, "DOCUMENT_INTELLIGENCE_ENDPOINT": request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT,
"DOCUMENT_INTELLIGENCE_KEY": request.app.state.config.DOCUMENT_INTELLIGENCE_KEY, "DOCUMENT_INTELLIGENCE_KEY": request.app.state.config.DOCUMENT_INTELLIGENCE_KEY,
"MISTRAL_OCR_API_BASE_URL": request.app.state.config.MISTRAL_OCR_API_BASE_URL,
"MISTRAL_OCR_API_KEY": request.app.state.config.MISTRAL_OCR_API_KEY, "MISTRAL_OCR_API_KEY": request.app.state.config.MISTRAL_OCR_API_KEY,
# MinerU settings
"MINERU_API_MODE": request.app.state.config.MINERU_API_MODE,
"MINERU_API_URL": request.app.state.config.MINERU_API_URL,
"MINERU_API_KEY": request.app.state.config.MINERU_API_KEY,
"MINERU_PARAMS": request.app.state.config.MINERU_PARAMS,
# Reranking settings # Reranking settings
"RAG_RERANKING_MODEL": request.app.state.config.RAG_RERANKING_MODEL, "RAG_RERANKING_MODEL": request.app.state.config.RAG_RERANKING_MODEL,
"RAG_RERANKING_ENGINE": request.app.state.config.RAG_RERANKING_ENGINE, "RAG_RERANKING_ENGINE": request.app.state.config.RAG_RERANKING_ENGINE,
@ -1527,6 +1574,7 @@ def process_file(
file_path = Storage.get_file(file_path) file_path = Storage.get_file(file_path)
loader = Loader( loader = Loader(
engine=request.app.state.config.CONTENT_EXTRACTION_ENGINE, engine=request.app.state.config.CONTENT_EXTRACTION_ENGINE,
user=user,
DATALAB_MARKER_API_KEY=request.app.state.config.DATALAB_MARKER_API_KEY, DATALAB_MARKER_API_KEY=request.app.state.config.DATALAB_MARKER_API_KEY,
DATALAB_MARKER_API_BASE_URL=request.app.state.config.DATALAB_MARKER_API_BASE_URL, DATALAB_MARKER_API_BASE_URL=request.app.state.config.DATALAB_MARKER_API_BASE_URL,
DATALAB_MARKER_ADDITIONAL_CONFIG=request.app.state.config.DATALAB_MARKER_ADDITIONAL_CONFIG, DATALAB_MARKER_ADDITIONAL_CONFIG=request.app.state.config.DATALAB_MARKER_ADDITIONAL_CONFIG,
@ -1559,7 +1607,12 @@ def process_file(
PDF_EXTRACT_IMAGES=request.app.state.config.PDF_EXTRACT_IMAGES, PDF_EXTRACT_IMAGES=request.app.state.config.PDF_EXTRACT_IMAGES,
DOCUMENT_INTELLIGENCE_ENDPOINT=request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT, DOCUMENT_INTELLIGENCE_ENDPOINT=request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT,
DOCUMENT_INTELLIGENCE_KEY=request.app.state.config.DOCUMENT_INTELLIGENCE_KEY, DOCUMENT_INTELLIGENCE_KEY=request.app.state.config.DOCUMENT_INTELLIGENCE_KEY,
MISTRAL_OCR_API_BASE_URL=request.app.state.config.MISTRAL_OCR_API_BASE_URL,
MISTRAL_OCR_API_KEY=request.app.state.config.MISTRAL_OCR_API_KEY, MISTRAL_OCR_API_KEY=request.app.state.config.MISTRAL_OCR_API_KEY,
MINERU_API_MODE=request.app.state.config.MINERU_API_MODE,
MINERU_API_URL=request.app.state.config.MINERU_API_URL,
MINERU_API_KEY=request.app.state.config.MINERU_API_KEY,
MINERU_PARAMS=request.app.state.config.MINERU_PARAMS,
) )
docs = loader.load( docs = loader.load(
file.filename, file.meta.get("content_type"), file_path file.filename, file.meta.get("content_type"), file_path
@ -1758,7 +1811,9 @@ def process_web(
) )
def search_web(
request: Request, engine: str, query: str, user=None
) -> list[SearchResult]:
"""Search the web using a search engine and return the results as a list of SearchResult objects. """Search the web using a search engine and return the results as a list of SearchResult objects.
Will look for a search engine API key in environment variables in the following order: Will look for a search engine API key in environment variables in the following order:
- SEARXNG_QUERY_URL - SEARXNG_QUERY_URL
@ -1833,6 +1888,7 @@ def search_web(request: Request, engine: str, query: str) -> list[SearchResult]:
query, query,
request.app.state.config.WEB_SEARCH_RESULT_COUNT, request.app.state.config.WEB_SEARCH_RESULT_COUNT,
request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST, request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST,
referer=request.app.state.config.WEBUI_URL,
) )
else: else:
raise Exception( raise Exception(
@ -2015,11 +2071,13 @@ def search_web(request: Request, engine: str, query: str) -> list[SearchResult]:
) )
elif engine == "external": elif engine == "external":
return search_external( return search_external(
request,
request.app.state.config.EXTERNAL_WEB_SEARCH_URL, request.app.state.config.EXTERNAL_WEB_SEARCH_URL,
request.app.state.config.EXTERNAL_WEB_SEARCH_API_KEY, request.app.state.config.EXTERNAL_WEB_SEARCH_API_KEY,
query, query,
request.app.state.config.WEB_SEARCH_RESULT_COUNT, request.app.state.config.WEB_SEARCH_RESULT_COUNT,
request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST, request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST,
user=user,
) )
else: else:
raise Exception("No search engine API key found in environment variables") raise Exception("No search engine API key found in environment variables")
@ -2044,6 +2102,7 @@ async def process_web_search(
request, request,
request.app.state.config.WEB_SEARCH_ENGINE, request.app.state.config.WEB_SEARCH_ENGINE,
query, query,
user,
) )
for query in form_data.queries for query in form_data.queries
] ]
@ -2393,16 +2452,19 @@ def process_files_batch(
""" """
Process a batch of files and save them to the vector database. Process a batch of files and save them to the vector database.
""" """
results: List[BatchProcessFilesResult] = []
errors: List[BatchProcessFilesResult] = []
collection_name = form_data.collection_name collection_name = form_data.collection_name
file_results: List[BatchProcessFilesResult] = []
file_errors: List[BatchProcessFilesResult] = []
file_updates: List[FileUpdateForm] = []
# Prepare all documents first # Prepare all documents first
all_docs: List[Document] = [] all_docs: List[Document] = []
for file in form_data.files: for file in form_data.files:
try: try:
text_content = file.data.get("content", "") text_content = file.data.get("content", "")
docs: List[Document] = [ docs: List[Document] = [
Document( Document(
page_content=text_content.replace("<br/>", "\n"), page_content=text_content.replace("<br/>", "\n"),
@ -2416,16 +2478,21 @@ def process_files_batch(
                )
            ]

            all_docs.extend(docs)

            file_updates.append(
                FileUpdateForm(
                    hash=calculate_sha256_string(text_content),
                    data={"content": text_content},
                )
            )
            file_results.append(
                BatchProcessFilesResult(file_id=file.id, status="prepared")
            )

        except Exception as e:
            log.error(f"process_files_batch: Error processing file {file.id}: {str(e)}")
            file_errors.append(
                BatchProcessFilesResult(file_id=file.id, status="failed", error=str(e))
            )
@ -2441,20 +2508,18 @@ def process_files_batch(
        )

        # Update all files with collection name
        for file_update, file_result in zip(file_updates, file_results):
            Files.update_file_by_id(id=file_result.file_id, form_data=file_update)
            file_result.status = "completed"

    except Exception as e:
        log.error(
            f"process_files_batch: Error saving documents to vector DB: {str(e)}"
        )
        for file_result in file_results:
            file_result.status = "failed"
            file_errors.append(
                BatchProcessFilesResult(file_id=file_result.file_id, error=str(e))
            )

    return BatchProcessFilesResponse(results=file_results, errors=file_errors)

View file

@ -361,7 +361,7 @@ async def get_user_by_id(user_id: str, user=Depends(get_verified_user)):
) )
@router.get("/{user_id}/oauth/sessions", response_model=Optional[dict]) @router.get("/{user_id}/oauth/sessions")
async def get_user_oauth_sessions_by_id(user_id: str, user=Depends(get_admin_user)): async def get_user_oauth_sessions_by_id(user_id: str, user=Depends(get_admin_user)):
sessions = OAuthSessions.get_sessions_by_user_id(user_id) sessions = OAuthSessions.get_sessions_by_user_id(user_id)
if sessions and len(sessions) > 0: if sessions and len(sessions) > 0:

View file

@ -126,10 +126,3 @@ async def download_db(user=Depends(get_admin_user)):
) )
@router.get("/litellm/config")
async def download_litellm_config_yaml(user=Depends(get_admin_user)):
return FileResponse(
f"{DATA_DIR}/litellm/config.yaml",
media_type="application/octet-stream",
filename="config.yaml",
)

View file

@ -18,7 +18,12 @@ from open_webui.utils.redis import (
get_sentinel_url_from_env, get_sentinel_url_from_env,
) )
from open_webui.config import (
CORS_ALLOW_ORIGIN,
)
from open_webui.env import ( from open_webui.env import (
VERSION,
ENABLE_WEBSOCKET_SUPPORT, ENABLE_WEBSOCKET_SUPPORT,
WEBSOCKET_MANAGER, WEBSOCKET_MANAGER,
WEBSOCKET_REDIS_URL, WEBSOCKET_REDIS_URL,
@ -48,6 +53,9 @@ log.setLevel(SRC_LOG_LEVELS["SOCKET"])
REDIS = None REDIS = None
# Configure CORS for Socket.IO
SOCKETIO_CORS_ORIGINS = "*" if CORS_ALLOW_ORIGIN == ["*"] else CORS_ALLOW_ORIGIN
if WEBSOCKET_MANAGER == "redis": if WEBSOCKET_MANAGER == "redis":
if WEBSOCKET_SENTINEL_HOSTS: if WEBSOCKET_SENTINEL_HOSTS:
mgr = socketio.AsyncRedisManager( mgr = socketio.AsyncRedisManager(
@ -58,7 +66,7 @@ if WEBSOCKET_MANAGER == "redis":
else: else:
mgr = socketio.AsyncRedisManager(WEBSOCKET_REDIS_URL) mgr = socketio.AsyncRedisManager(WEBSOCKET_REDIS_URL)
sio = socketio.AsyncServer( sio = socketio.AsyncServer(
cors_allowed_origins=SOCKETIO_CORS_ORIGINS,
async_mode="asgi", async_mode="asgi",
transports=(["websocket"] if ENABLE_WEBSOCKET_SUPPORT else ["polling"]), transports=(["websocket"] if ENABLE_WEBSOCKET_SUPPORT else ["polling"]),
allow_upgrades=ENABLE_WEBSOCKET_SUPPORT, allow_upgrades=ENABLE_WEBSOCKET_SUPPORT,
@ -67,7 +75,7 @@ if WEBSOCKET_MANAGER == "redis":
) )
else: else:
sio = socketio.AsyncServer( sio = socketio.AsyncServer(
cors_allowed_origins=SOCKETIO_CORS_ORIGINS,
async_mode="asgi", async_mode="asgi",
transports=(["websocket"] if ENABLE_WEBSOCKET_SUPPORT else ["polling"]), transports=(["websocket"] if ENABLE_WEBSOCKET_SUPPORT else ["polling"]),
allow_upgrades=ENABLE_WEBSOCKET_SUPPORT, allow_upgrades=ENABLE_WEBSOCKET_SUPPORT,
@ -274,6 +282,8 @@ async def connect(sid, environ, auth):
else: else:
USER_POOL[user.id] = [sid] USER_POOL[user.id] = [sid]
await sio.enter_room(sid, f"user:{user.id}")
@sio.on("user-join") @sio.on("user-join")
async def user_join(sid, data): async def user_join(sid, data):
@ -296,6 +306,7 @@ async def user_join(sid, data):
else: else:
USER_POOL[user.id] = [sid] USER_POOL[user.id] = [sid]
await sio.enter_room(sid, f"user:{user.id}")
# Join all the channels # Join all the channels
channels = Channels.get_channels_by_user_id(user.id) channels = Channels.get_channels_by_user_id(user.id)
log.debug(f"{channels=}") log.debug(f"{channels=}")
@ -641,40 +652,24 @@ async def disconnect(sid):
def get_event_emitter(request_info, update_db=True):
    async def __event_emitter__(event_data):
        user_id = request_info["user_id"]
        chat_id = request_info["chat_id"]
        message_id = request_info["message_id"]

        await sio.emit(
            "events",
            {
                "chat_id": chat_id,
                "message_id": message_id,
                "data": event_data,
            },
            room=f"user:{user_id}",
        )

        if (
            update_db
            and message_id
            and not request_info.get("chat_id", "").startswith("local:")
        ):
            if "type" in event_data and event_data["type"] == "status":
                Chats.add_message_status_to_chat_by_id_and_message_id(
                    request_info["chat_id"],
@ -764,7 +759,14 @@ def get_event_emitter(request_info, update_db=True):
            },
        )

    if (
        "user_id" in request_info
        and "chat_id" in request_info
        and "message_id" in request_info
    ):
        return __event_emitter__
    else:
        return None
def get_event_call(request_info):
@ -780,7 +782,14 @@ def get_event_call(request_info):
        )
        return response

    if (
        "session_id" in request_info
        and "chat_id" in request_info
        and "message_id" in request_info
    ):
        return __event_caller__
    else:
        return None
get_event_caller = get_event_call get_event_caller = get_event_call

View file

@ -1,5 +1,5 @@
from open_webui.routers.images import ( from open_webui.routers.images import (
    get_image_data,
upload_image, upload_image,
) )
@ -22,7 +22,7 @@ def get_image_url_from_base64(request, base64_image_string, metadata, user):
if "data:image/png;base64" in base64_image_string: if "data:image/png;base64" in base64_image_string:
image_url = "" image_url = ""
# Extract base64 image data from the line # Extract base64 image data from the line
image_data, content_type = get_image_data(base64_image_string)
if image_data is not None: if image_data is not None:
image_url = upload_image( image_url = upload_image(
request, request,

View file

@ -0,0 +1,11 @@
from urllib.parse import quote
def include_user_info_headers(headers, user):
return {
**headers,
"X-OpenWebUI-User-Name": quote(user.name, safe=" "),
"X-OpenWebUI-User-Id": user.id,
"X-OpenWebUI-User-Email": user.email,
"X-OpenWebUI-User-Role": user.role,
}
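An illustrative call to the helper above; the attribute names on the user object follow the fields the function reads, and the values are placeholders.

from types import SimpleNamespace

user = SimpleNamespace(
    name="Ada Lovelace", id="u-123", email="ada@example.com", role="admin"
)
print(include_user_info_headers({"Content-Type": "application/json"}, user))
# {'Content-Type': 'application/json',
#  'X-OpenWebUI-User-Name': 'Ada Lovelace',   # spaces survive because safe=" "
#  'X-OpenWebUI-User-Id': 'u-123',
#  'X-OpenWebUI-User-Email': 'ada@example.com',
#  'X-OpenWebUI-User-Role': 'admin'}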

View file

@ -2,6 +2,8 @@ import asyncio
import json import json
import logging import logging
import random import random
import requests
import aiohttp
import urllib.parse import urllib.parse
import urllib.request import urllib.request
from typing import Optional from typing import Optional
@ -91,6 +93,25 @@ def get_images(ws, prompt, client_id, base_url, api_key):
return {"data": output_images} return {"data": output_images}
async def comfyui_upload_image(image_file_item, base_url, api_key):
url = f"{base_url}/api/upload/image"
headers = {}
if api_key:
headers["Authorization"] = f"Bearer {api_key}"
_, (filename, file_bytes, mime_type) = image_file_item
form = aiohttp.FormData()
form.add_field("image", file_bytes, filename=filename, content_type=mime_type)
form.add_field("type", "input") # required by ComfyUI
async with aiohttp.ClientSession() as session:
async with session.post(url, data=form, headers=headers) as resp:
resp.raise_for_status()
return await resp.json()
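A hedged usage sketch for the upload helper above; the tuple shape matches how the function unpacks `image_file_item`, while the local file name and the ComfyUI address are placeholders.

import asyncio

async def demo():
    with open("input.png", "rb") as f:
        item = ("image", ("input.png", f.read(), "image/png"))
    result = await comfyui_upload_image(
        item, base_url="http://127.0.0.1:8188", api_key=None
    )
    # ComfyUI typically echoes back the stored filename, which can then be
    # referenced by an image-loading node in the workflow.
    print(result)

# asyncio.run(demo())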
class ComfyUINodeInput(BaseModel): class ComfyUINodeInput(BaseModel):
type: Optional[str] = None type: Optional[str] = None
node_ids: list[str] = [] node_ids: list[str] = []
@ -103,7 +124,7 @@ class ComfyUIWorkflow(BaseModel):
nodes: list[ComfyUINodeInput] nodes: list[ComfyUINodeInput]
class ComfyUICreateImageForm(BaseModel):
workflow: ComfyUIWorkflow workflow: ComfyUIWorkflow
prompt: str prompt: str
@ -116,8 +137,8 @@ class ComfyUIGenerateImageForm(BaseModel):
seed: Optional[int] = None seed: Optional[int] = None
async def comfyui_create_image(
    model: str, payload: ComfyUICreateImageForm, client_id, base_url, api_key
): ):
ws_url = base_url.replace("http://", "ws://").replace("https://", "wss://") ws_url = base_url.replace("http://", "ws://").replace("https://", "wss://")
workflow = json.loads(payload.workflow.workflow) workflow = json.loads(payload.workflow.workflow)
@ -191,3 +212,102 @@ async def comfyui_generate_image(
ws.close() ws.close()
return images return images
class ComfyUIEditImageForm(BaseModel):
workflow: ComfyUIWorkflow
image: str | list[str]
prompt: str
width: Optional[int] = None
height: Optional[int] = None
n: Optional[int] = None
steps: Optional[int] = None
seed: Optional[int] = None
async def comfyui_edit_image(
model: str, payload: ComfyUIEditImageForm, client_id, base_url, api_key
):
ws_url = base_url.replace("http://", "ws://").replace("https://", "wss://")
workflow = json.loads(payload.workflow.workflow)
for node in payload.workflow.nodes:
if node.type:
if node.type == "model":
for node_id in node.node_ids:
workflow[node_id]["inputs"][node.key] = model
elif node.type == "image":
if isinstance(payload.image, list):
# check if multiple images are provided
for idx, node_id in enumerate(node.node_ids):
if idx < len(payload.image):
workflow[node_id]["inputs"][node.key] = payload.image[idx]
else:
for node_id in node.node_ids:
workflow[node_id]["inputs"][node.key] = payload.image
elif node.type == "prompt":
for node_id in node.node_ids:
workflow[node_id]["inputs"][
node.key if node.key else "text"
] = payload.prompt
elif node.type == "negative_prompt":
for node_id in node.node_ids:
workflow[node_id]["inputs"][
node.key if node.key else "text"
] = payload.negative_prompt
elif node.type == "width":
for node_id in node.node_ids:
workflow[node_id]["inputs"][
node.key if node.key else "width"
] = payload.width
elif node.type == "height":
for node_id in node.node_ids:
workflow[node_id]["inputs"][
node.key if node.key else "height"
] = payload.height
elif node.type == "n":
for node_id in node.node_ids:
workflow[node_id]["inputs"][
node.key if node.key else "batch_size"
] = payload.n
elif node.type == "steps":
for node_id in node.node_ids:
workflow[node_id]["inputs"][
node.key if node.key else "steps"
] = payload.steps
elif node.type == "seed":
seed = (
payload.seed
if payload.seed
else random.randint(0, 1125899906842624)
)
for node_id in node.node_ids:
workflow[node_id]["inputs"][node.key] = seed
else:
for node_id in node.node_ids:
workflow[node_id]["inputs"][node.key] = node.value
try:
ws = websocket.WebSocket()
headers = {"Authorization": f"Bearer {api_key}"}
ws.connect(f"{ws_url}/ws?clientId={client_id}", header=headers)
log.info("WebSocket connection established.")
except Exception as e:
log.exception(f"Failed to connect to WebSocket server: {e}")
return None
try:
log.info("Sending workflow to WebSocket server.")
log.info(f"Workflow: {workflow}")
images = await asyncio.to_thread(
get_images, ws, workflow, client_id, base_url, api_key
)
except Exception as e:
log.exception(f"Error while receiving images: {e}")
images = None
ws.close()
return images

View file

@ -2,6 +2,8 @@ import asyncio
from typing import Optional from typing import Optional
from contextlib import AsyncExitStack from contextlib import AsyncExitStack
import anyio
from mcp import ClientSession from mcp import ClientSession
from mcp.client.auth import OAuthClientProvider, TokenStorage from mcp.client.auth import OAuthClientProvider, TokenStorage
from mcp.client.streamable_http import streamablehttp_client from mcp.client.streamable_http import streamablehttp_client
@ -11,26 +13,29 @@ from mcp.shared.auth import OAuthClientInformationFull, OAuthClientMetadata, OAu
class MCPClient:
    def __init__(self):
        self.session: Optional[ClientSession] = None
        self.exit_stack = None

    async def connect(self, url: str, headers: Optional[dict] = None):
        async with AsyncExitStack() as exit_stack:
            try:
                self._streams_context = streamablehttp_client(url, headers=headers)
                transport = await exit_stack.enter_async_context(self._streams_context)
                read_stream, write_stream, _ = transport

                self._session_context = ClientSession(
                    read_stream, write_stream
                )  # pylint: disable=W0201
                self.session = await exit_stack.enter_async_context(
                    self._session_context
                )

                with anyio.fail_after(10):
                    await self.session.initialize()

                self.exit_stack = exit_stack.pop_all()
            except Exception as e:
                await asyncio.shield(self.disconnect())
                raise e
async def list_tool_specs(self) -> Optional[dict]: async def list_tool_specs(self) -> Optional[dict]:
if not self.session: if not self.session:

View file

@ -45,10 +45,10 @@ from open_webui.routers.retrieval import (
SearchForm, SearchForm,
) )
from open_webui.routers.images import (
    image_generations,
    CreateImageForm,
    image_edits,
    EditImageForm,
)
from open_webui.routers.pipelines import ( from open_webui.routers.pipelines import (
process_pipeline_inlet_filter, process_pipeline_inlet_filter,
@ -91,7 +91,7 @@ from open_webui.utils.misc import (
convert_logit_bias_input_to_json, convert_logit_bias_input_to_json,
get_content_from_message, get_content_from_message,
) )
from open_webui.utils.tools import get_tools, get_updated_tool_function
from open_webui.utils.plugin import load_function_module_by_id from open_webui.utils.plugin import load_function_module_by_id
from open_webui.utils.filter import ( from open_webui.utils.filter import (
get_sorted_filter_ids, get_sorted_filter_ids,
@ -718,9 +718,31 @@ async def chat_web_search_handler(
return form_data return form_data
def get_last_images(message_list):
images = []
for message in reversed(message_list):
images_flag = False
for file in message.get("files", []):
if file.get("type") == "image":
images.append(file.get("url"))
images_flag = True
if images_flag:
break
return images
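A small check of the helper above: it walks the messages newest-first and returns the image URLs from the most recent message that has any.

# Illustrative data only.
message_list = [
    {"content": "hi", "files": []},
    {"content": "edit this", "files": [{"type": "image", "url": "/files/abc.png"}]},
    {"content": "thanks", "files": []},
]
print(get_last_images(message_list))  # -> ['/files/abc.png']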
async def chat_image_generation_handler( async def chat_image_generation_handler(
request: Request, form_data: dict, extra_params: dict, user request: Request, form_data: dict, extra_params: dict, user
): ):
metadata = extra_params.get("__metadata__", {})
chat_id = metadata.get("chat_id", None)
if not chat_id:
return form_data
chat = Chats.get_chat_by_id_and_user_id(chat_id, user.id)
__event_emitter__ = extra_params["__event_emitter__"] __event_emitter__ = extra_params["__event_emitter__"]
await __event_emitter__( await __event_emitter__(
{ {
@ -729,87 +751,151 @@ async def chat_image_generation_handler(
        }
    )

    messages_map = chat.chat.get("history", {}).get("messages", {})
    message_id = chat.chat.get("history", {}).get("currentId")

    message_list = get_message_list(messages_map, message_id)
    user_message = get_last_user_message(message_list)

    prompt = user_message
    input_images = get_last_images(message_list)

    system_message_content = ""

    if len(input_images) == 0:
        # Create image(s)
        if request.app.state.config.ENABLE_IMAGE_PROMPT_GENERATION:
            try:
                res = await generate_image_prompt(
                    request,
                    {
                        "model": form_data["model"],
                        "messages": form_data["messages"],
                    },
                    user,
                )

                response = res["choices"][0]["message"]["content"]

                try:
                    bracket_start = response.find("{")
                    bracket_end = response.rfind("}") + 1

                    if bracket_start == -1 or bracket_end == -1:
                        raise Exception("No JSON object found in the response")

                    response = response[bracket_start:bracket_end]
                    response = json.loads(response)
                    prompt = response.get("prompt", [])
                except Exception as e:
                    prompt = user_message

            except Exception as e:
                log.exception(e)
                prompt = user_message

        try:
            images = await image_generations(
                request=request,
                form_data=CreateImageForm(**{"prompt": prompt}),
                user=user,
            )
await __event_emitter__(
{
"type": "status",
"data": {"description": "Image created", "done": True},
}
)
await __event_emitter__(
{
"type": "files",
"data": {
"files": [
{
"type": "image",
"url": image["url"],
}
for image in images
]
},
}
)
system_message_content = "<context>The requested image has been created and is now being shown to the user. Let them know that it has been generated.</context>"
except Exception as e:
log.debug(e)
error_message = ""
if isinstance(e, HTTPException):
if e.detail and isinstance(e.detail, dict):
error_message = e.detail.get("message", str(e.detail))
else:
error_message = str(e.detail)
await __event_emitter__(
{
"type": "status",
"data": {
"description": f"An error occurred while generating an image",
"done": True,
},
}
)
system_message_content = f"<context>Image generation was attempted but failed. The system is currently unable to generate the image. Tell the user that an error occurred: {error_message}</context>"
else:
# Edit image(s)
try:
images = await image_edits(
request=request,
form_data=EditImageForm(**{"prompt": prompt, "image": input_images}),
user=user,
)
await __event_emitter__(
{
"type": "status",
"data": {"description": "Image created", "done": True},
}
)
await __event_emitter__(
{
"type": "files",
"data": {
"files": [
{
"type": "image",
"url": image["url"],
}
for image in images
]
},
}
)
system_message_content = "<context>The requested image has been created and is now being shown to the user. Let them know that it has been generated.</context>"
except Exception as e:
log.debug(e)
error_message = ""
if isinstance(e, HTTPException):
if e.detail and isinstance(e.detail, dict):
error_message = e.detail.get("message", str(e.detail))
else:
error_message = str(e.detail)
await __event_emitter__(
{
"type": "status",
"data": {
"description": f"An error occurred while generating an image",
"done": True,
},
}
)
system_message_content = f"<context>Image generation was attempted but failed. The system is currently unable to generate the image. Tell the user that an error occurred: {error_message}</context>"
if system_message_content: if system_message_content:
form_data["messages"] = add_or_update_system_message( form_data["messages"] = add_or_update_system_message(
@ -827,11 +913,7 @@ async def chat_completion_files_handler(
if files := body.get("metadata", {}).get("files", None): if files := body.get("metadata", {}).get("files", None):
# Check if all files are in full context mode # Check if all files are in full context mode
all_full_context = all( all_full_context = all(item.get("context") == "full" for item in files)
item.get("context") == "full"
for item in files
if item.get("type") == "file"
)
queries = [] queries = []
if not all_full_context: if not all_full_context:
@ -1311,6 +1393,17 @@ async def process_chat_payload(request, form_data, user, metadata, model):
} }
except Exception as e: except Exception as e:
log.debug(e) log.debug(e)
if event_emitter:
await event_emitter(
{
"type": "chat:message:error",
"data": {
"error": {
"content": f"Failed to connect to MCP server '{server_id}'"
}
},
}
)
continue continue
tools_dict = await get_tools( tools_dict = await get_tools(
@ -1509,8 +1602,8 @@ async def process_chat_response(
) )
follow_ups_string = response_message.get(
    "content"
) or response_message.get("reasoning_content", "")
else: else:
follow_ups_string = "" follow_ups_string = ""
@ -1547,16 +1640,13 @@ async def process_chat_response(
if not metadata.get("chat_id", "").startswith( if not metadata.get("chat_id", "").startswith(
"local:" "local:"
): # Only update titles and tags for non-temp chats ): # Only update titles and tags for non-temp chats
if TASKS.TITLE_GENERATION in tasks:
user_message = get_last_user_message(messages) user_message = get_last_user_message(messages)
if user_message and len(user_message) > 100: if user_message and len(user_message) > 100:
user_message = user_message[:100] + "..." user_message = user_message[:100] + "..."
title = None
if tasks[TASKS.TITLE_GENERATION]: if tasks[TASKS.TITLE_GENERATION]:
res = await generate_title( res = await generate_title(
request, request,
{ {
@ -1573,12 +1663,12 @@ async def process_chat_response(
"message", {} "message", {}
) )
title_string = (
    response_message.get("content")
    or response_message.get(
        "reasoning_content",
    )
    or message.get("content", user_message)
)
else: else:
title_string = "" title_string = ""
@ -1607,7 +1697,8 @@ async def process_chat_response(
"data": title, "data": title,
} }
) )
if title == None and len(messages) == 2:
title = messages[0].get("content", user_message) title = messages[0].get("content", user_message)
Chats.update_chat_title_by_id(metadata["chat_id"], title) Chats.update_chat_title_by_id(metadata["chat_id"], title)
@ -1637,9 +1728,8 @@ async def process_chat_response(
) )
tags_string = response_message.get(
    "content"
) or response_message.get("reasoning_content", "")
else: else:
tags_string = "" tags_string = ""
@ -1944,9 +2034,11 @@ async def process_chat_response(
content = f"{content}{tool_calls_display_content}" content = f"{content}{tool_calls_display_content}"
elif block["type"] == "reasoning": elif block["type"] == "reasoning":
reasoning_display_content = html.escape(
    "\n".join(
        (f"> {line}" if not line.startswith(">") else line)
        for line in block["content"].splitlines()
    )
)
reasoning_duration = block.get("duration", None) reasoning_duration = block.get("duration", None)
@ -2354,7 +2446,9 @@ async def process_chat_response(
) )
if data: if data:
if "event" in data: if "event" in data and not getattr(
request.state, "direct", False
):
await event_emitter(data.get("event", {})) await event_emitter(data.get("event", {}))
if "selected_model_id" in data: if "selected_model_id" in data:
@ -2671,8 +2765,6 @@ async def process_chat_response(
results = [] results = []
for tool_call in response_tool_calls: for tool_call in response_tool_calls:
print("tool_call", tool_call)
tool_call_id = tool_call.get("id", "") tool_call_id = tool_call.get("id", "")
tool_function_name = tool_call.get("function", {}).get( tool_function_name = tool_call.get("function", {}).get(
"name", "" "name", ""
@ -2747,7 +2839,16 @@ async def process_chat_response(
) )
else: else:
tool_function = get_updated_tool_function(
function=tool["callable"],
extra_params={
"__messages__": form_data.get(
"messages", []
),
"__files__": metadata.get("files", []),
},
)
tool_result = await tool_function( tool_result = await tool_function(
**tool_function_params **tool_function_params
) )
@ -2803,9 +2904,9 @@ async def process_chat_response(
try: try:
new_form_data = { new_form_data = {
**form_data,
"model": model_id, "model": model_id,
"stream": True, "stream": True,
"tools": form_data["tools"],
"messages": [ "messages": [
*form_data["messages"], *form_data["messages"],
*convert_content_blocks_to_messages( *convert_content_blocks_to_messages(
@ -2979,6 +3080,7 @@ async def process_chat_response(
try: try:
new_form_data = { new_form_data = {
**form_data,
"model": model_id, "model": model_id,
"stream": True, "stream": True,
"messages": [ "messages": [

View file

@ -8,10 +8,11 @@ from datetime import timedelta
from pathlib import Path from pathlib import Path
from typing import Callable, Optional from typing import Callable, Optional
import json import json
import aiohttp
import collections.abc import collections.abc
from open_webui.env import SRC_LOG_LEVELS from open_webui.env import SRC_LOG_LEVELS, CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MAIN"]) log.setLevel(SRC_LOG_LEVELS["MAIN"])
@ -539,3 +540,68 @@ def extract_urls(text: str) -> list[str]:
r"(https?://[^\s]+)", re.IGNORECASE r"(https?://[^\s]+)", re.IGNORECASE
) # Matches http and https URLs ) # Matches http and https URLs
return url_pattern.findall(text) return url_pattern.findall(text)
def stream_chunks_handler(stream: aiohttp.StreamReader):
"""
Handle stream response chunks, supporting large data chunks that exceed the original 16kb limit.
When a single line exceeds max_buffer_size, yields a placeholder "data: {}" line and skips subsequent data
until encountering normally sized data.
:param stream: The stream reader to handle.
:return: An async generator that yields the stream data.
"""
max_buffer_size = CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE
if max_buffer_size is None or max_buffer_size <= 0:
return stream
async def yield_safe_stream_chunks():
buffer = b""
skip_mode = False
async for data, _ in stream.iter_chunks():
if not data:
continue
# In skip_mode, if buffer already exceeds the limit, clear it (it's part of an oversized line)
if skip_mode and len(buffer) > max_buffer_size:
buffer = b""
lines = (buffer + data).split(b"\n")
# Process complete lines (except the last possibly incomplete fragment)
for i in range(len(lines) - 1):
line = lines[i]
if skip_mode:
# Skip mode: check if current line is small enough to exit skip mode
if len(line) <= max_buffer_size:
skip_mode = False
yield line
else:
yield b"data: {}"
else:
# Normal mode: check if line exceeds limit
if len(line) > max_buffer_size:
skip_mode = True
yield b"data: {}"
log.info(f"Skip mode triggered, line size: {len(line)}")
else:
yield line
# Save the last incomplete fragment
buffer = lines[-1]
# Check if buffer exceeds limit
if not skip_mode and len(buffer) > max_buffer_size:
skip_mode = True
log.info(f"Skip mode triggered, buffer size: {len(buffer)}")
# Clear oversized buffer to prevent unlimited growth
buffer = b""
# Process remaining buffer data
if buffer and not skip_mode:
yield buffer
return yield_safe_stream_chunks()
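A minimal sketch of how this helper might be consumed (not part of the diff; the upstream URL, session handling, and calling context are assumed for illustration):

```python
import aiohttp

async def relay_sse(upstream_url: str):
    # Hypothetical consumer: stream an upstream SSE response line by line.
    # stream_chunks_handler yields complete lines and substitutes b"data: {}"
    # for any single line larger than CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE.
    async with aiohttp.ClientSession(trust_env=True) as session:
        async with session.get(upstream_url) as resp:
            async for line in stream_chunks_handler(resp.content):
                yield line
```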

View file

@ -166,13 +166,18 @@ async def get_all_models(request, refresh: bool = False, user: UserModel = None)
action_ids = [] action_ids = []
filter_ids = [] filter_ids = []
if "info" in model and "meta" in model["info"]: if "info" in model:
action_ids.extend( if "meta" in model["info"]:
model["info"]["meta"].get("actionIds", []) action_ids.extend(
) model["info"]["meta"].get("actionIds", [])
filter_ids.extend( )
model["info"]["meta"].get("filterIds", []) filter_ids.extend(
) model["info"]["meta"].get("filterIds", [])
)
if "params" in model["info"]:
# Remove params to avoid exposing sensitive info
del model["info"]["params"]
model["action_ids"] = action_ids model["action_ids"] = action_ids
model["filter_ids"] = filter_ids model["filter_ids"] = filter_ids
@ -182,22 +187,40 @@ async def get_all_models(request, refresh: bool = False, user: UserModel = None)
elif custom_model.is_active and ( elif custom_model.is_active and (
custom_model.id not in [model["id"] for model in models] custom_model.id not in [model["id"] for model in models]
): ):
# Custom model based on a base model
owned_by = "openai" owned_by = "openai"
pipe = None pipe = None
for m in models:
if (
custom_model.base_model_id == m["id"]
or custom_model.base_model_id == m["id"].split(":")[0]
):
owned_by = m.get("owned_by", "unknown")
if "pipe" in m:
pipe = m["pipe"]
break
model = {
"id": f"{custom_model.id}",
"name": custom_model.name,
"object": "model",
"created": custom_model.created_at,
"owned_by": owned_by,
"preset": True,
**({"pipe": pipe} if pipe is not None else {}),
}
info = custom_model.model_dump()
if "params" in info:
# Remove params to avoid exposing sensitive info
del info["params"]
model["info"] = info
action_ids = [] action_ids = []
filter_ids = [] filter_ids = []
for model in models:
if (
custom_model.base_model_id == model["id"]
or custom_model.base_model_id == model["id"].split(":")[0]
):
owned_by = model.get("owned_by", "unknown owner")
if "pipe" in model:
pipe = model["pipe"]
break
if custom_model.meta: if custom_model.meta:
meta = custom_model.meta.model_dump() meta = custom_model.meta.model_dump()
@ -207,20 +230,10 @@ async def get_all_models(request, refresh: bool = False, user: UserModel = None)
if "filterIds" in meta: if "filterIds" in meta:
filter_ids.extend(meta["filterIds"]) filter_ids.extend(meta["filterIds"])
models.append( model["action_ids"] = action_ids
{ model["filter_ids"] = filter_ids
"id": f"{custom_model.id}",
"name": custom_model.name, models.append(model)
"object": "model",
"created": custom_model.created_at,
"owned_by": owned_by,
"info": custom_model.model_dump(),
"preset": True,
**({"pipe": pipe} if pipe is not None else {}),
"action_ids": action_ids,
"filter_ids": filter_ids,
}
)
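For reference, this is roughly the shape of the preset entry the refactored branch now appends (all values below are made up; the point is that `info` no longer carries `params`, and `pipe` is only included when the matched base model has one):

```python
model = {
    "id": "my-preset",
    "name": "My Preset",
    "object": "model",
    "created": 1731000000,
    "owned_by": "openai",  # inherited from the matched base model, if any
    "preset": True,
    "info": {"id": "my-preset", "base_model_id": "gpt-4o", "meta": {}},  # "params" stripped
    "action_ids": [],
    "filter_ids": [],
}
```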
# Process action_ids to get the actions # Process action_ids to get the actions
def get_action_items_from_module(function, module): def get_action_items_from_module(function, module):

View file

@ -1,4 +1,5 @@
import base64 import base64
import copy
import hashlib import hashlib
import logging import logging
import mimetypes import mimetypes
@ -41,6 +42,7 @@ from open_webui.config import (
ENABLE_OAUTH_GROUP_MANAGEMENT, ENABLE_OAUTH_GROUP_MANAGEMENT,
ENABLE_OAUTH_GROUP_CREATION, ENABLE_OAUTH_GROUP_CREATION,
OAUTH_BLOCKED_GROUPS, OAUTH_BLOCKED_GROUPS,
OAUTH_GROUPS_SEPARATOR,
OAUTH_ROLES_CLAIM, OAUTH_ROLES_CLAIM,
OAUTH_SUB_CLAIM, OAUTH_SUB_CLAIM,
OAUTH_GROUPS_CLAIM, OAUTH_GROUPS_CLAIM,
@ -74,6 +76,8 @@ from mcp.shared.auth import (
OAuthMetadata, OAuthMetadata,
) )
from authlib.oauth2.rfc6749.errors import OAuth2Error
class OAuthClientInformationFull(OAuthClientMetadata): class OAuthClientInformationFull(OAuthClientMetadata):
issuer: Optional[str] = None # URL of the OAuth server that issued this client issuer: Optional[str] = None # URL of the OAuth server that issued this client
@ -150,6 +154,37 @@ def decrypt_data(data: str):
raise raise
def _build_oauth_callback_error_message(e: Exception) -> str:
"""
Produce a user-facing callback error string with actionable context.
Keeps the message short and strips newlines for safe redirect usage.
"""
if isinstance(e, OAuth2Error):
parts = [p for p in [e.error, e.description] if p]
detail = " - ".join(parts)
elif isinstance(e, HTTPException):
detail = e.detail if isinstance(e.detail, str) else str(e.detail)
elif isinstance(e, aiohttp.ClientResponseError):
detail = f"Upstream provider returned {e.status}: {e.message}"
elif isinstance(e, aiohttp.ClientError):
detail = str(e)
elif isinstance(e, KeyError):
missing = str(e).strip("'")
if missing.lower() == "state":
detail = "Missing state parameter in callback (session may have expired)"
else:
detail = f"Missing expected key '{missing}' in OAuth response"
else:
detail = str(e)
detail = detail.replace("\n", " ").strip()
if not detail:
detail = e.__class__.__name__
message = f"OAuth callback failed: {detail}"
return message[:197] + "..." if len(message) > 200 else message
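A quick sketch of the messages this helper produces for two common failure shapes (the exception values are made up for illustration):

```python
print(_build_oauth_callback_error_message(KeyError("state")))
# OAuth callback failed: Missing state parameter in callback (session may have expired)

print(_build_oauth_callback_error_message(ValueError("mismatching_state in request")))
# OAuth callback failed: mismatching_state in request
```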
def is_in_blocked_groups(group_name: str, groups: list) -> bool: def is_in_blocked_groups(group_name: str, groups: list) -> bool:
""" """
Check if a group name matches any blocked pattern. Check if a group name matches any blocked pattern.
@ -251,7 +286,7 @@ async def get_oauth_client_info_with_dynamic_client_registration(
# Attempt to fetch OAuth server metadata to get registration endpoint & scopes # Attempt to fetch OAuth server metadata to get registration endpoint & scopes
discovery_urls = get_discovery_urls(oauth_server_url) discovery_urls = get_discovery_urls(oauth_server_url)
for url in discovery_urls: for url in discovery_urls:
async with aiohttp.ClientSession() as session: async with aiohttp.ClientSession(trust_env=True) as session:
async with session.get( async with session.get(
url, ssl=AIOHTTP_CLIENT_SESSION_SSL url, ssl=AIOHTTP_CLIENT_SESSION_SSL
) as oauth_server_metadata_response: ) as oauth_server_metadata_response:
@ -287,7 +322,7 @@ async def get_oauth_client_info_with_dynamic_client_registration(
) )
# Perform dynamic client registration and return client info # Perform dynamic client registration and return client info
async with aiohttp.ClientSession() as session: async with aiohttp.ClientSession(trust_env=True) as session:
async with session.post( async with session.post(
registration_url, json=registration_data, ssl=AIOHTTP_CLIENT_SESSION_SSL registration_url, json=registration_data, ssl=AIOHTTP_CLIENT_SESSION_SSL
) as oauth_client_registration_response: ) as oauth_client_registration_response:
@ -371,6 +406,82 @@ class OAuthClientManager:
if client_id in self.clients: if client_id in self.clients:
del self.clients[client_id] del self.clients[client_id]
log.info(f"Removed OAuth client {client_id}") log.info(f"Removed OAuth client {client_id}")
if hasattr(self.oauth, "_clients"):
if client_id in self.oauth._clients:
self.oauth._clients.pop(client_id, None)
if hasattr(self.oauth, "_registry"):
if client_id in self.oauth._registry:
self.oauth._registry.pop(client_id, None)
return True
async def _preflight_authorization_url(
self, client, client_info: OAuthClientInformationFull
) -> bool:
# TODO: Replace this logic with a more robust OAuth client registration validation
# Only perform preflight checks for Starlette OAuth clients
if not hasattr(client, "create_authorization_url"):
return True
redirect_uri = None
if client_info.redirect_uris:
redirect_uri = str(client_info.redirect_uris[0])
try:
auth_data = await client.create_authorization_url(redirect_uri=redirect_uri)
authorization_url = auth_data.get("url")
if not authorization_url:
return True
except Exception as e:
log.debug(
f"Skipping OAuth preflight for client {client_info.client_id}: {e}",
)
return True
try:
async with aiohttp.ClientSession(trust_env=True) as session:
async with session.get(
authorization_url,
allow_redirects=False,
ssl=AIOHTTP_CLIENT_SESSION_SSL,
) as resp:
if resp.status < 400:
return True
response_text = await resp.text()
error = None
error_description = ""
content_type = resp.headers.get("content-type", "")
if "application/json" in content_type:
try:
payload = json.loads(response_text)
error = payload.get("error")
error_description = payload.get("error_description", "")
except:
pass
else:
error_description = response_text
error_message = f"{error or ''} {error_description or ''}".lower()
if any(
keyword in error_message
for keyword in ("invalid_client", "invalid client", "client id")
):
log.warning(
f"OAuth client preflight detected invalid registration for {client_info.client_id}: {error} {error_description}"
)
return False
except Exception as e:
log.debug(
f"Skipping OAuth preflight network check for client {client_info.client_id}: {e}"
)
return True return True
def get_client(self, client_id): def get_client(self, client_id):
@ -561,7 +672,6 @@ class OAuthClientManager:
client = self.get_client(client_id) client = self.get_client(client_id)
if client is None: if client is None:
raise HTTPException(404) raise HTTPException(404)
client_info = self.get_client_info(client_id) client_info = self.get_client_info(client_id)
if client_info is None: if client_info is None:
raise HTTPException(404) raise HTTPException(404)
@ -569,7 +679,8 @@ class OAuthClientManager:
redirect_uri = ( redirect_uri = (
client_info.redirect_uris[0] if client_info.redirect_uris else None client_info.redirect_uris[0] if client_info.redirect_uris else None
) )
return await client.authorize_redirect(request, str(redirect_uri)) redirect_uri_str = str(redirect_uri) if redirect_uri else None
return await client.authorize_redirect(request, redirect_uri_str)
async def handle_callback(self, request, client_id: str, user_id: str, response): async def handle_callback(self, request, client_id: str, user_id: str, response):
client = self.get_client(client_id) client = self.get_client(client_id)
@ -621,8 +732,14 @@ class OAuthClientManager:
error_message = "Failed to obtain OAuth token" error_message = "Failed to obtain OAuth token"
log.warning(error_message) log.warning(error_message)
except Exception as e: except Exception as e:
error_message = "OAuth callback error" error_message = _build_oauth_callback_error_message(e)
log.warning(f"OAuth callback error: {e}") log.warning(
"OAuth callback error for user_id=%s client_id=%s: %s",
user_id,
client_id,
error_message,
exc_info=True,
)
redirect_url = ( redirect_url = (
str(request.app.state.config.WEBUI_URL or request.base_url) str(request.app.state.config.WEBUI_URL or request.base_url)
@ -630,7 +747,9 @@ class OAuthClientManager:
if error_message: if error_message:
log.debug(error_message) log.debug(error_message)
redirect_url = f"{redirect_url}/?error={error_message}" redirect_url = (
f"{redirect_url}/?error={urllib.parse.quote_plus(error_message)}"
)
return RedirectResponse(url=redirect_url, headers=response.headers) return RedirectResponse(url=redirect_url, headers=response.headers)
response = RedirectResponse(url=redirect_url, headers=response.headers) response = RedirectResponse(url=redirect_url, headers=response.headers)
@ -917,7 +1036,11 @@ class OAuthManager:
if isinstance(claim_data, list): if isinstance(claim_data, list):
user_oauth_groups = claim_data user_oauth_groups = claim_data
elif isinstance(claim_data, str): elif isinstance(claim_data, str):
user_oauth_groups = [claim_data] # Split by the configured separator if present
if OAUTH_GROUPS_SEPARATOR in claim_data:
user_oauth_groups = claim_data.split(OAUTH_GROUPS_SEPARATOR)
else:
user_oauth_groups = [claim_data]
else: else:
user_oauth_groups = [] user_oauth_groups = []
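In other words, a group claim delivered as a single delimited string is now expanded into a list (a toy illustration; the separator comes from OAUTH_GROUPS_SEPARATOR, and "," is only an assumed value here):

```python
claim_data = "admins,engineering,ml-team"   # assumed claim value
separator = ","                             # assumed OAUTH_GROUPS_SEPARATOR
user_oauth_groups = (
    claim_data.split(separator) if separator in claim_data else [claim_data]
)
# -> ["admins", "engineering", "ml-team"]
```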
@ -1104,7 +1227,13 @@ class OAuthManager:
try: try:
token = await client.authorize_access_token(request) token = await client.authorize_access_token(request)
except Exception as e: except Exception as e:
log.warning(f"OAuth callback error: {e}") detailed_error = _build_oauth_callback_error_message(e)
log.warning(
"OAuth callback error during authorize_access_token for provider %s: %s",
provider,
detailed_error,
exc_info=True,
)
raise HTTPException(400, detail=ERROR_MESSAGES.INVALID_CRED) raise HTTPException(400, detail=ERROR_MESSAGES.INVALID_CRED)
# Try to get userinfo from the token first, some providers include it there # Try to get userinfo from the token first, some providers include it there

View file

@ -297,6 +297,10 @@ def convert_payload_openai_to_ollama(openai_payload: dict) -> dict:
if "tools" in openai_payload: if "tools" in openai_payload:
ollama_payload["tools"] = openai_payload["tools"] ollama_payload["tools"] = openai_payload["tools"]
if "max_tokens" in openai_payload:
ollama_payload["num_predict"] = openai_payload["max_tokens"]
del openai_payload["max_tokens"]
# If there are advanced parameters in the payload, format them in Ollama's options field # If there are advanced parameters in the payload, format them in Ollama's options field
if openai_payload.get("options"): if openai_payload.get("options"):
ollama_payload["options"] = openai_payload["options"] ollama_payload["options"] = openai_payload["options"]

View file

@ -85,9 +85,26 @@ def get_async_tool_function_and_apply_extra_params(
update_wrapper(new_function, function) update_wrapper(new_function, function)
new_function.__signature__ = new_sig new_function.__signature__ = new_sig
new_function.__function__ = function # type: ignore
new_function.__extra_params__ = extra_params # type: ignore
return new_function return new_function
def get_updated_tool_function(function: Callable, extra_params: dict):
# Get the original function and merge updated params
__function__ = getattr(function, "__function__", None)
__extra_params__ = getattr(function, "__extra_params__", None)
if __function__ is not None and __extra_params__ is not None:
return get_async_tool_function_and_apply_extra_params(
__function__,
{**__extra_params__, **extra_params},
)
return function
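A minimal sketch of the attribute round-trip this relies on (the tool function and parameter names are illustrative, not taken from the diff):

```python
async def my_tool(city: str, __messages__=None) -> str:
    # Toy tool that can see the conversation via the injected extra param
    return f"{city}: {len(__messages__ or [])} prior messages"

wrapped = get_async_tool_function_and_apply_extra_params(my_tool, {"__messages__": []})
# wrapped.__function__   -> my_tool (original callable is recorded)
# wrapped.__extra_params__ -> {"__messages__": []}

# Later, per request, rebuild the wrapper with up-to-date conversation state:
fresh = get_updated_tool_function(
    wrapped, {"__messages__": [{"role": "user", "content": "hi"}]}
)
# fresh behaves like wrapped, but with the merged, current extra params bound
```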
async def get_tools( async def get_tools(
request: Request, tool_ids: list[str], user: UserModel, extra_params: dict request: Request, tool_ids: list[str], user: UserModel, extra_params: dict
) -> dict[str, dict]: ) -> dict[str, dict]:
@ -754,7 +771,7 @@ async def execute_tool_server(
) as session: ) as session:
request_method = getattr(session, http_method.lower()) request_method = getattr(session, http_method.lower())
if http_method in ["post", "put", "patch"]: if http_method in ["post", "put", "patch", "delete"]:
async with request_method( async with request_method(
final_url, final_url,
json=body_params, json=body_params,

View file

@ -51,7 +51,7 @@ async def post_webhook(name: str, url: str, message: str, event_data: dict) -> b
payload = {**event_data} payload = {**event_data}
log.debug(f"payload: {payload}") log.debug(f"payload: {payload}")
async with aiohttp.ClientSession() as session: async with aiohttp.ClientSession(trust_env=True) as session:
async with session.post(url, json=payload) as r: async with session.post(url, json=payload) as r:
r_text = await r.text() r_text = await r.text()
r.raise_for_status() r.raise_for_status()

View file

@ -5,12 +5,12 @@ python-multipart==0.0.20
itsdangerous==2.2.0 itsdangerous==2.2.0
python-socketio==5.13.0 python-socketio==5.13.0
python-jose==3.4.0 python-jose==3.5.0
cryptography cryptography
bcrypt==5.0.0 bcrypt==5.0.0
argon2-cffi==25.1.0 argon2-cffi==25.1.0
PyJWT[crypto]==2.10.1 PyJWT[crypto]==2.10.1
authlib==1.6.3 authlib==1.6.5
requests==2.32.5 requests==2.32.5
aiohttp==3.12.15 aiohttp==3.12.15
@ -63,7 +63,7 @@ fpdf2==2.8.2
pymdown-extensions==10.14.2 pymdown-extensions==10.14.2
docx2txt==0.8 docx2txt==0.8
python-pptx==1.0.2 python-pptx==1.0.2
unstructured==0.16.17 unstructured==0.18.15
nltk==3.9.1 nltk==3.9.1
Markdown==3.9 Markdown==3.9
pypandoc==1.15 pypandoc==1.15
@ -133,7 +133,7 @@ pytest-docker~=3.1.1
ldap3==2.9.1 ldap3==2.9.1
## Firecrawl ## Firecrawl
firecrawl-py==1.12.0 firecrawl-py==4.5.0
## Trace ## Trace
opentelemetry-api==1.37.0 opentelemetry-api==1.37.0

View file

@ -11,8 +11,6 @@ services:
open-webui: open-webui:
build: build:
context: . context: .
args:
OLLAMA_BASE_URL: '/ollama'
dockerfile: Dockerfile dockerfile: Dockerfile
image: ghcr.io/open-webui/open-webui:${WEBUI_DOCKER_TAG-main} image: ghcr.io/open-webui/open-webui:${WEBUI_DOCKER_TAG-main}
container_name: open-webui container_name: open-webui

View file

@ -24,6 +24,10 @@ Noticed something off? Have an idea? Check our [Issues tab](https://github.com/o
> - **Template Compliance:** Please be aware that failure to follow the provided issue template, or not providing the requested information at all, will likely result in your issue being closed without further consideration. This approach is critical for maintaining the manageability and integrity of issue tracking. > - **Template Compliance:** Please be aware that failure to follow the provided issue template, or not providing the requested information at all, will likely result in your issue being closed without further consideration. This approach is critical for maintaining the manageability and integrity of issue tracking.
> - **Detail is Key:** To ensure your issue is understood and can be effectively addressed, it's imperative to include comprehensive details. Descriptions should be clear, including steps to reproduce, expected outcomes, and actual results. Lack of sufficient detail may hinder our ability to resolve your issue. > - **Detail is Key:** To ensure your issue is understood and can be effectively addressed, it's imperative to include comprehensive details. Descriptions should be clear, including steps to reproduce, expected outcomes, and actual results. Lack of sufficient detail may hinder our ability to resolve your issue.
> [!WARNING]
> Do not report security vulnerabilities through Issues!
> Instead, [use the security reporting functionality](https://github.com/open-webui/open-webui/security) and ensure you comply with the outlined requirements.
### 🧭 Scope of Support ### 🧭 Scope of Support
We've noticed an uptick in issues not directly related to Open WebUI but rather to the environment it's run in, especially Docker setups. While we strive to support Docker deployment, understanding Docker fundamentals is crucial for a smooth experience. We've noticed an uptick in issues not directly related to Open WebUI but rather to the environment it's run in, especially Docker setups. While we strive to support Docker deployment, understanding Docker fundamentals is crucial for a smooth experience.
@ -32,6 +36,8 @@ We've noticed an uptick in issues not directly related to Open WebUI but rather
- **Advanced Configurations**: Setting up reverse proxies for HTTPS and managing Docker deployments requires foundational knowledge. There are numerous online resources available to learn these skills. Ensuring you have this knowledge will greatly enhance your experience with Open WebUI and similar projects. - **Advanced Configurations**: Setting up reverse proxies for HTTPS and managing Docker deployments requires foundational knowledge. There are numerous online resources available to learn these skills. Ensuring you have this knowledge will greatly enhance your experience with Open WebUI and similar projects.
- **Check the documentation and help improve it**: [Our documentation](https://docs.openwebui.com) has ever-growing troubleshooting guides and detailed installation tutorials. Please check whether it already addresses your issue, and help expand it by submitting issues and PRs on our [Docs Repository](https://github.com/open-webui/docs).
## 💡 Contributing ## 💡 Contributing
Looking to contribute? Great! Here's how you can help: Looking to contribute? Great! Here's how you can help:
@ -46,9 +52,15 @@ We welcome pull requests. Before submitting one, please:
4. Write clear, descriptive commit messages. 4. Write clear, descriptive commit messages.
5. It's essential to complete your pull request in a timely manner. We move fast, and having PRs hang around too long is not feasible. If you can't get it done within a reasonable time frame, we may have to close it to keep the project moving forward. 5. It's essential to complete your pull request in a timely manner. We move fast, and having PRs hang around too long is not feasible. If you can't get it done within a reasonable time frame, we may have to close it to keep the project moving forward.
> [!NOTE]
> The Pull Request Template outlines several requirements. Go through the PR checklist item by item and make sure you have completed every step before submitting your PR for review (otherwise, open it as a draft!).
### 📚 Documentation & Tutorials ### 📚 Documentation & Tutorials
Help us make Open WebUI more accessible by improving documentation, writing tutorials, or creating guides on setting up and optimizing the web UI. Help us make Open WebUI more accessible by improving the documentation, writing tutorials, or creating guides on setting up and optimizing the Web UI.
Help expand our documentation by submitting issues and PRs on our [Docs Repository](https://github.com/open-webui/docs).
We welcome tutorials, guides and other documentation improvements!
### 🌐 Translations and Internationalization ### 🌐 Translations and Internationalization
@ -62,9 +74,12 @@ To add a new language:
- Copy the American English translation file(s) (from `en-US` directory in `src/lib/i18n/locale`) to this new directory and update the string values in JSON format according to your language. Make sure to preserve the structure of the JSON object. - Copy the American English translation file(s) (from `en-US` directory in `src/lib/i18n/locale`) to this new directory and update the string values in JSON format according to your language. Make sure to preserve the structure of the JSON object.
- Add the language code and its respective title to languages file at `src/lib/i18n/locales/languages.json`. - Add the language code and its respective title to languages file at `src/lib/i18n/locales/languages.json`.
> [!NOTE]
> When adding new translations, do so in a standalone PR! Feature PRs or PRs fixing a bug should not contain translation updates. Always keep the scope of a PR narrow.
### 🤔 Questions & Feedback ### 🤔 Questions & Feedback
Got questions or feedback? Join our [Discord community](https://discord.gg/5rJgQTnV4s) or open an issue. We're here to help! Got questions or feedback? Join our [Discord community](https://discord.gg/5rJgQTnV4s) or open an issue or discussion. We're here to help!
## 🙏 Thank You! ## 🙏 Thank You!

View file

@ -4,10 +4,11 @@ Our primary goal is to ensure the protection and confidentiality of sensitive da
## Supported Versions ## Supported Versions
| Version | Supported | | Version (Branch) | Supported |
| ------- | ------------------ | | ---------------- | ------------------ |
| main | :white_check_mark: | | main | :white_check_mark: |
| others | :x: | | dev | :x: |
| others | :x: |
## Zero Tolerance for External Platforms ## Zero Tolerance for External Platforms
@ -17,28 +18,113 @@ Any reports or solicitations arriving from sources other than our designated Git
## Reporting a Vulnerability ## Reporting a Vulnerability
We appreciate the community's interest in identifying potential vulnerabilities. However, effective immediately, we will **not** accept low-effort vulnerability reports. To ensure that submissions are constructive and actionable, please adhere to the following guidelines:
Reports not submitted through our designated GitHub repository will be disregarded, and we will categorically reject invitations to collaborate on external platforms. Our aggressive stance on this matter underscores our commitment to a secure, transparent, and open community where all operations are visible and contributors are accountable. Reports not submitted through our designated GitHub repository will be disregarded, and we will categorically reject invitations to collaborate on external platforms. Our aggressive stance on this matter underscores our commitment to a secure, transparent, and open community where all operations are visible and contributors are accountable.
1. **No Vague Reports**: Submissions such as "I found a vulnerability" without any details will be treated as spam and will not be accepted. We appreciate the community's interest in identifying potential vulnerabilities. However, effective immediately, we will **not** accept low-effort vulnerability reports. Ensure that **submissions are constructive, actionable, reproducible, well documented and adhere to the following guidelines**:
2. **In-Depth Understanding Required**: Reports must reflect a clear understanding of the codebase and provide specific details about the vulnerability, including the affected components and potential impacts. 1. **Report MUST be a vulnerability:** A security vulnerability is an exploitable weakness where the system behaves in an unintended way, allowing attackers to bypass security controls, gain unauthorized access, execute arbitrary code, or escalate privileges. Configuration options, missing features, and expected protocol behavior are **not vulnerabilities**.
3. **Proof of Concept (PoC) is Mandatory**: Each submission must include a well-documented proof of concept (PoC) that demonstrates the vulnerability. If confidentiality is a concern, reporters are encouraged to create a private fork of the repository and share access with the maintainers. Reports lacking valid evidence will be disregarded. 2. **No Vague Reports**: Submissions such as "I found a vulnerability" without any details will be treated as spam and will not be accepted.
4. **Required Patch Submission**: Along with the PoC, reporters must provide a patch or actionable steps to remediate the identified vulnerability. This helps us evaluate and implement fixes rapidly. 3. **In-Depth Understanding Required**: Reports must reflect a clear understanding of the codebase and provide specific details about the vulnerability, including the affected components and potential impacts.
5. **Streamlined Merging Process**: When vulnerability reports meet the above criteria, we can consider them for immediate merging, similar to regular pull requests. Well-structured and thorough submissions will expedite the process of enhancing our security. 4. **Proof of Concept (PoC) is Mandatory**: Each submission must include a well-documented proof of concept (PoC) that demonstrates the vulnerability. If confidentiality is a concern, reporters are encouraged to create a private fork of the repository and share access with the maintainers. Reports lacking valid evidence may be disregarded.
**Non-compliant submissions will be closed, and repeat violators may be banned.** Our goal is to foster a constructive reporting environment where quality submissions promote better security for all users. > [!NOTE]
> A PoC (Proof of Concept) is a **demonstration of exploitation of a vulnerability**. Your PoC must show:
>
> 1. What security boundary was crossed (Confidentiality, Integrity, Availability, Authenticity, Non-repudiation)
> 2. How this vulnerability was abused
> 3. What actions the attacker can now perform
>
> **Examples of valid PoCs:**
>
> - Step-by-step reproduction instructions with exact commands
> - Complete exploit code with detailed execution instructions
> - Screenshots/videos demonstrating the exploit (supplementary to written steps)
>
> **Failure to provide a reproducible PoC may lead to closure of the report.**
>
> We will notify you if we struggle to reproduce the exploit using your PoC, so that you can improve it.
> However, if we repeatedly cannot reproduce the exploit using the PoC, the report may be closed.
## Product Security 5. **Required Patch or Actionable Remediation Plan Submission**: Along with the PoC, reporters must provide a patch or some actionable steps to remediate the identified vulnerability. This helps us evaluate and implement fixes rapidly.
6. **Streamlined Merging Process**: When vulnerability reports meet the above criteria, we can consider provided patches for immediate merging, similar to regular pull requests. Well-structured and thorough submissions will expedite the process of enhancing our security.
7. **Default Configuration Testing**: All vulnerability reports MUST be tested and reproducible using Open WebUI's out-of-the-box default configuration. Claims of vulnerabilities that only manifest with explicitly weakened security settings may be discarded, unless they are covered by the following exception:
> [!NOTE]
> **Note**: If you believe you have found a security issue that
>
> 1. affects default configurations, **or**
> 2. represents a genuine bypass of intended security controls, **or**
> 3. works only with non-default configurations, **but the configuration in question is likely to be used by production deployments**, **then we absolutely want to hear about it.** This policy is intended to filter configuration issues and deployment problems, not to discourage legitimate security research.
8. **Threat Model Understanding Required**: Reports must demonstrate understanding of Open WebUI's self-hosted, authenticated, role-based access control architecture. Comparing Open WebUI to services with fundamentally different security models without acknowledging the architectural differences may result in report rejection.
9. **CVSS Scoring Accuracy:** If you include a CVSS score with your report, it must accurately reflect the vulnerability according to CVSS methodology. Common errors include 1) rating PR:N (None) when authentication is required, 2) scoring hypothetical attack chains instead of the actual vulnerability, or 3) inflating severity without evidence. **We will adjust inaccurate CVSS scores.** Intentionally inflated scores may result in report rejection.
> [!WARNING]
>
> **Using CVE Precedents:** If you cite other CVEs to support your report, ensure they are **genuinely comparable** in vulnerability type, threat model, and attack vector. Citing CVEs from different product categories, different vulnerability classes or different deployment models will lead us to suspect the use of AI in your report.
10. **Admin Actions Are Out of Scope:** Vulnerabilities that require an administrator to actively perform unsafe actions are **not considered valid vulnerabilities**. Admins have full system control and are expected to understand the security implications of their actions and configurations. This includes but is not limited to: adding malicious external servers (models, tools, webhooks), pasting untrusted code into Functions/Tools, or intentionally weakening security settings. **Reports requiring admin negligence or social engineering of admins may be rejected.**
> [!NOTE]
> Similar to rule "Default Configuration Testing": If you believe you have found a vulnerability that affects admins and is NOT caused by admin negligence or intentionally malicious actions,
> **then we absolutely want to hear about it.** This policy is intended to filter social engineering attacks on admins, malicious plugins being deployed by admins and similar malicious actions, not to discourage legitimate security research.
11. **AI report transparency:** Due to an extreme spike in AI-aided vulnerability reports, **YOU MUST DISCLOSE if AI was used in any capacity** - whether for writing the report, generating the PoC, or identifying the vulnerability. If AI helped you in any way, shape, or form with the report, the PoC, or finding the vulnerability, you MUST disclose it.
> [!NOTE]
> AI-aided vulnerability reports **will not be rejected by us by default**. But:
>
> - If we suspect you used AI (but you did not disclose it to us), we will be asking tough follow-up questions to validate your understanding of the reported vulnerability and Open WebUI itself.
> - If we suspect you used AI (but you did not disclose it to us) **and** your report ends up being invalid/not a vulnerability/not reproducible, then you **may be banned** from reporting future vulnerabilities.
>
> This measure was necessary due to the extreme rise in clearly AI-written vulnerability reports, the vast majority of which
>
> - were not a vulnerability
> - were faulty configurations rather than a real vulnerability
> - did not provide a PoC
> - violated one or more of the rules outlined here
> - had a clear lack of understanding of Open WebUI
> - included comments with conflicting information
> - used illogical arguments
**Non-compliant submissions will be closed, and repeat extreme violators may be banned.** Our goal is to foster a constructive reporting environment where quality submissions promote better security for all users.
## Where to report the vulnerability
If you want to report a vulnerability and can meet the outlined requirements, [open a vulnerability report here](https://github.com/open-webui/open-webui/security/advisories/new).
If you feel you cannot follow ALL of the outlined requirements for vulnerability-specific reasons, report it anyway; we will review every report either way.
## Product Security And For Non-Vulnerability Related Security Concerns:
If your concern does not meet the vulnerability requirements outlined above or is not a vulnerability, **but is still security-related**, then use the following channels instead:
- **Documentation issues/improvement ideas:** Open an issue on our [Documentation Repository](https://github.com/open-webui/docs)
- **Feature requests:** Create a discussion in [GitHub Discussions - Ideas](https://github.com/open-webui/open-webui/discussions/) so the community can weigh in on whether the feature is wanted by multiple people
- **Configuration help:** Ask the community for help and guidance on our [Discord Server](https://discord.gg/5rJgQTnV4s) or on [Reddit](https://www.reddit.com/r/OpenWebUI/)
- **General issues:** Use our [Issue Tracker](https://github.com/open-webui/open-webui/issues)
**Examples of non-vulnerability, still security related concerns:**
- Suggestions for better default configuration values
- Security hardening recommendations
- Deployment best practices guidance
- Unclear configuration instructions
- Need for additional security documentation
- Feature requests for optional security enhancements (2FA, audit logging, etc.)
- General security questions about production deployment
Please use the appropriate channel for your specific concern - e.g. best-practice guidance or additional documentation needs go to the Documentation Repository, while feature requests go to the main repository as an issue or discussion.
We regularly audit our internal processes and system architecture for vulnerabilities using a combination of automated and manual testing techniques. We are also planning to implement SAST and SCA scans in our project soon. We regularly audit our internal processes and system architecture for vulnerabilities using a combination of automated and manual testing techniques. We are also planning to implement SAST and SCA scans in our project soon.
For immediate concerns or detailed reports that meet our guidelines, please create an issue in our [issue tracker](/open-webui/open-webui/issues) or contact us on [Discord](https://discord.gg/5rJgQTnV4s). For any other immediate concerns, please create an issue in our [issue tracker](https://github.com/open-webui/open-webui/issues) or contact our team on [Discord](https://discord.gg/5rJgQTnV4s).
--- ---
_Last updated on **2024-08-19**._ _Last updated on **2025-11-06**._

package-lock.json generated
View file

@ -1,12 +1,12 @@
{ {
"name": "open-webui", "name": "open-webui",
"version": "0.6.33", "version": "0.6.36",
"lockfileVersion": 3, "lockfileVersion": 3,
"requires": true, "requires": true,
"packages": { "packages": {
"": { "": {
"name": "open-webui", "name": "open-webui",
"version": "0.6.33", "version": "0.6.36",
"dependencies": { "dependencies": {
"@azure/msal-browser": "^4.5.0", "@azure/msal-browser": "^4.5.0",
"@codemirror/lang-javascript": "^6.2.2", "@codemirror/lang-javascript": "^6.2.2",
@ -103,8 +103,8 @@
"devDependencies": { "devDependencies": {
"@sveltejs/adapter-auto": "3.2.2", "@sveltejs/adapter-auto": "3.2.2",
"@sveltejs/adapter-static": "^3.0.2", "@sveltejs/adapter-static": "^3.0.2",
"@sveltejs/kit": "^2.5.20", "@sveltejs/kit": "^2.5.27",
"@sveltejs/vite-plugin-svelte": "^3.1.1", "@sveltejs/vite-plugin-svelte": "^4.0.0",
"@tailwindcss/container-queries": "^0.1.1", "@tailwindcss/container-queries": "^0.1.1",
"@tailwindcss/postcss": "^4.0.0", "@tailwindcss/postcss": "^4.0.0",
"@tailwindcss/typography": "^0.5.13", "@tailwindcss/typography": "^0.5.13",
@ -114,14 +114,14 @@
"eslint": "^8.56.0", "eslint": "^8.56.0",
"eslint-config-prettier": "^9.1.0", "eslint-config-prettier": "^9.1.0",
"eslint-plugin-cypress": "^3.4.0", "eslint-plugin-cypress": "^3.4.0",
"eslint-plugin-svelte": "^2.43.0", "eslint-plugin-svelte": "^2.45.1",
"i18next-parser": "^9.0.1", "i18next-parser": "^9.0.1",
"postcss": "^8.4.31", "postcss": "^8.4.31",
"prettier": "^3.3.3", "prettier": "^3.3.3",
"prettier-plugin-svelte": "^3.2.6", "prettier-plugin-svelte": "^3.2.6",
"sass-embedded": "^1.81.0", "sass-embedded": "^1.81.0",
"svelte": "^4.2.18", "svelte": "^5.0.0",
"svelte-check": "^3.8.5", "svelte-check": "^4.0.0",
"svelte-confetti": "^1.3.2", "svelte-confetti": "^1.3.2",
"tailwindcss": "^4.0.0", "tailwindcss": "^4.0.0",
"tslib": "^2.4.1", "tslib": "^2.4.1",
@ -155,18 +155,6 @@
"url": "https://github.com/sponsors/sindresorhus" "url": "https://github.com/sponsors/sindresorhus"
} }
}, },
"node_modules/@ampproject/remapping": {
"version": "2.3.0",
"resolved": "https://registry.npmjs.org/@ampproject/remapping/-/remapping-2.3.0.tgz",
"integrity": "sha512-30iZtAPgz+LTIYoeivqYo853f02jBYSd5uGnGpkFV0M3xOt9aN73erkgYAmZU43x4VfqcnLxW9Kpg3R5LC4YYw==",
"dependencies": {
"@jridgewell/gen-mapping": "^0.3.5",
"@jridgewell/trace-mapping": "^0.3.24"
},
"engines": {
"node": ">=6.0.0"
}
},
"node_modules/@antfu/install-pkg": { "node_modules/@antfu/install-pkg": {
"version": "1.0.0", "version": "1.0.0",
"resolved": "https://registry.npmjs.org/@antfu/install-pkg/-/install-pkg-1.0.0.tgz", "resolved": "https://registry.npmjs.org/@antfu/install-pkg/-/install-pkg-1.0.0.tgz",
@ -1997,16 +1985,23 @@
"license": "MIT" "license": "MIT"
}, },
"node_modules/@jridgewell/gen-mapping": { "node_modules/@jridgewell/gen-mapping": {
"version": "0.3.5", "version": "0.3.13",
"resolved": "https://registry.npmjs.org/@jridgewell/gen-mapping/-/gen-mapping-0.3.5.tgz", "resolved": "https://registry.npmjs.org/@jridgewell/gen-mapping/-/gen-mapping-0.3.13.tgz",
"integrity": "sha512-IzL8ZoEDIBRWEzlCcRhOaCupYyN5gdIK+Q6fbFdPDg6HqX6jpkItn7DFIpW9LQzXG6Df9sA7+OKnq0qlz/GaQg==", "integrity": "sha512-2kkt/7niJ6MgEPxF0bYdQ6etZaA+fQvDcLKckhy1yIQOzaoKjBBjSj63/aLVjYE3qhRt5dvM+uUyfCg6UKCBbA==",
"license": "MIT",
"dependencies": { "dependencies": {
"@jridgewell/set-array": "^1.2.1", "@jridgewell/sourcemap-codec": "^1.5.0",
"@jridgewell/sourcemap-codec": "^1.4.10", "@jridgewell/trace-mapping": "^0.3.24"
}
},
"node_modules/@jridgewell/remapping": {
"version": "2.3.5",
"resolved": "https://registry.npmjs.org/@jridgewell/remapping/-/remapping-2.3.5.tgz",
"integrity": "sha512-LI9u/+laYG4Ds1TDKSJW2YPrIlcVYOwi2fUC6xB43lueCjgxV4lffOCZCtYFiH6TNOX+tQKXx97T4IKHbhyHEQ==",
"license": "MIT",
"dependencies": {
"@jridgewell/gen-mapping": "^0.3.5",
"@jridgewell/trace-mapping": "^0.3.24" "@jridgewell/trace-mapping": "^0.3.24"
},
"engines": {
"node": ">=6.0.0"
} }
}, },
"node_modules/@jridgewell/resolve-uri": { "node_modules/@jridgewell/resolve-uri": {
@ -2017,18 +2012,11 @@
"node": ">=6.0.0" "node": ">=6.0.0"
} }
}, },
"node_modules/@jridgewell/set-array": {
"version": "1.2.1",
"resolved": "https://registry.npmjs.org/@jridgewell/set-array/-/set-array-1.2.1.tgz",
"integrity": "sha512-R8gLRTZeyp03ymzP/6Lil/28tGeGEzhx1q2k703KGWRAI1VdvPIXdG70VJc2pAMw3NA6JKL5hhFu1sJX0Mnn/A==",
"engines": {
"node": ">=6.0.0"
}
},
"node_modules/@jridgewell/sourcemap-codec": { "node_modules/@jridgewell/sourcemap-codec": {
"version": "1.5.0", "version": "1.5.5",
"resolved": "https://registry.npmjs.org/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.5.0.tgz", "resolved": "https://registry.npmjs.org/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.5.5.tgz",
"integrity": "sha512-gv3ZRaISU3fjPAgNsriBRqGWQL6quFx04YMPW/zD8XMLsU32mhCCbfbO6KZFLjvYpCZ8zyDEgqsgf+PwPaM7GQ==" "integrity": "sha512-cYQ9310grqxueWbl+WuIUIaiUaDcj7WOq5fVhEljNVgRfOUhY9fy2zTvfoqWsnebh8Sl70VScFbICvJnLKB0Og==",
"license": "MIT"
}, },
"node_modules/@jridgewell/trace-mapping": { "node_modules/@jridgewell/trace-mapping": {
"version": "0.3.25", "version": "0.3.25",
@ -2210,23 +2198,6 @@
"resolved": "https://registry.npmjs.org/@mediapipe/tasks-vision/-/tasks-vision-0.10.17.tgz", "resolved": "https://registry.npmjs.org/@mediapipe/tasks-vision/-/tasks-vision-0.10.17.tgz",
"integrity": "sha512-CZWV/q6TTe8ta61cZXjfnnHsfWIdFhms03M9T7Cnd5y2mdpylJM0rF1qRq+wsQVRMLz1OYPVEBU9ph2Bx8cxrg==" "integrity": "sha512-CZWV/q6TTe8ta61cZXjfnnHsfWIdFhms03M9T7Cnd5y2mdpylJM0rF1qRq+wsQVRMLz1OYPVEBU9ph2Bx8cxrg=="
}, },
"node_modules/@melt-ui/svelte": {
"version": "0.76.2",
"resolved": "https://registry.npmjs.org/@melt-ui/svelte/-/svelte-0.76.2.tgz",
"integrity": "sha512-7SbOa11tXUS95T3fReL+dwDs5FyJtCEqrqG3inRziDws346SYLsxOQ6HmX+4BkIsQh1R8U3XNa+EMmdMt38lMA==",
"license": "MIT",
"dependencies": {
"@floating-ui/core": "^1.3.1",
"@floating-ui/dom": "^1.4.5",
"@internationalized/date": "^3.5.0",
"dequal": "^2.0.3",
"focus-trap": "^7.5.2",
"nanoid": "^5.0.4"
},
"peerDependencies": {
"svelte": ">=3 <5"
}
},
"node_modules/@mermaid-js/parser": { "node_modules/@mermaid-js/parser": {
"version": "0.6.2", "version": "0.6.2",
"resolved": "https://registry.npmjs.org/@mermaid-js/parser/-/parser-0.6.2.tgz", "resolved": "https://registry.npmjs.org/@mermaid-js/parser/-/parser-0.6.2.tgz",
@ -2948,42 +2919,89 @@
"license": "LIL" "license": "LIL"
}, },
"node_modules/@sveltejs/vite-plugin-svelte": { "node_modules/@sveltejs/vite-plugin-svelte": {
"version": "3.1.1", "version": "4.0.4",
"resolved": "https://registry.npmjs.org/@sveltejs/vite-plugin-svelte/-/vite-plugin-svelte-3.1.1.tgz", "resolved": "https://registry.npmjs.org/@sveltejs/vite-plugin-svelte/-/vite-plugin-svelte-4.0.4.tgz",
"integrity": "sha512-rimpFEAboBBHIlzISibg94iP09k/KYdHgVhJlcsTfn7KMBhc70jFX/GRWkRdFCc2fdnk+4+Bdfej23cMDnJS6A==", "integrity": "sha512-0ba1RQ/PHen5FGpdSrW7Y3fAMQjrXantECALeOiOdBdzR5+5vPP6HVZRLmZaQL+W8m++o+haIAKq5qT+MiZ7VA==",
"license": "MIT",
"dependencies": { "dependencies": {
"@sveltejs/vite-plugin-svelte-inspector": "^2.1.0", "@sveltejs/vite-plugin-svelte-inspector": "^3.0.0-next.0||^3.0.0",
"debug": "^4.3.4", "debug": "^4.3.7",
"deepmerge": "^4.3.1", "deepmerge": "^4.3.1",
"kleur": "^4.1.5", "kleur": "^4.1.5",
"magic-string": "^0.30.10", "magic-string": "^0.30.12",
"svelte-hmr": "^0.16.0", "vitefu": "^1.0.3"
"vitefu": "^0.2.5"
}, },
"engines": { "engines": {
"node": "^18.0.0 || >=20" "node": "^18.0.0 || ^20.0.0 || >=22"
}, },
"peerDependencies": { "peerDependencies": {
"svelte": "^4.0.0 || ^5.0.0-next.0", "svelte": "^5.0.0-next.96 || ^5.0.0",
"vite": "^5.0.0" "vite": "^5.0.0"
} }
}, },
"node_modules/@sveltejs/vite-plugin-svelte-inspector": { "node_modules/@sveltejs/vite-plugin-svelte-inspector": {
"version": "2.1.0", "version": "3.0.1",
"resolved": "https://registry.npmjs.org/@sveltejs/vite-plugin-svelte-inspector/-/vite-plugin-svelte-inspector-2.1.0.tgz", "resolved": "https://registry.npmjs.org/@sveltejs/vite-plugin-svelte-inspector/-/vite-plugin-svelte-inspector-3.0.1.tgz",
"integrity": "sha512-9QX28IymvBlSCqsCll5t0kQVxipsfhFFL+L2t3nTWfXnddYwxBuAEtTtlaVQpRz9c37BhJjltSeY4AJSC03SSg==", "integrity": "sha512-2CKypmj1sM4GE7HjllT7UKmo4Q6L5xFRd7VMGEWhYnZ+wc6AUVU01IBd7yUi6WnFndEwWoMNOd6e8UjoN0nbvQ==",
"license": "MIT",
"dependencies": { "dependencies": {
"debug": "^4.3.4" "debug": "^4.3.7"
}, },
"engines": { "engines": {
"node": "^18.0.0 || >=20" "node": "^18.0.0 || ^20.0.0 || >=22"
}, },
"peerDependencies": { "peerDependencies": {
"@sveltejs/vite-plugin-svelte": "^3.0.0", "@sveltejs/vite-plugin-svelte": "^4.0.0-next.0||^4.0.0",
"svelte": "^4.0.0 || ^5.0.0-next.0", "svelte": "^5.0.0-next.96 || ^5.0.0",
"vite": "^5.0.0" "vite": "^5.0.0"
} }
}, },
"node_modules/@sveltejs/vite-plugin-svelte-inspector/node_modules/debug": {
"version": "4.4.3",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.4.3.tgz",
"integrity": "sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA==",
"license": "MIT",
"dependencies": {
"ms": "^2.1.3"
},
"engines": {
"node": ">=6.0"
},
"peerDependenciesMeta": {
"supports-color": {
"optional": true
}
}
},
"node_modules/@sveltejs/vite-plugin-svelte-inspector/node_modules/ms": {
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz",
"integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==",
"license": "MIT"
},
"node_modules/@sveltejs/vite-plugin-svelte/node_modules/debug": {
"version": "4.4.3",
"resolved": "https://registry.npmjs.org/debug/-/debug-4.4.3.tgz",
"integrity": "sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA==",
"license": "MIT",
"dependencies": {
"ms": "^2.1.3"
},
"engines": {
"node": ">=6.0"
},
"peerDependenciesMeta": {
"supports-color": {
"optional": true
}
}
},
"node_modules/@sveltejs/vite-plugin-svelte/node_modules/ms": {
"version": "2.1.3",
"resolved": "https://registry.npmjs.org/ms/-/ms-2.1.3.tgz",
"integrity": "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA==",
"license": "MIT"
},
"node_modules/@swc/helpers": { "node_modules/@swc/helpers": {
"version": "0.5.17", "version": "0.5.17",
"resolved": "https://registry.npmjs.org/@swc/helpers/-/helpers-0.5.17.tgz", "resolved": "https://registry.npmjs.org/@swc/helpers/-/helpers-0.5.17.tgz",
@ -4198,12 +4216,6 @@
"undici-types": "~5.26.4" "undici-types": "~5.26.4"
} }
}, },
"node_modules/@types/pug": {
"version": "2.0.10",
"resolved": "https://registry.npmjs.org/@types/pug/-/pug-2.0.10.tgz",
"integrity": "sha512-Sk/uYFOBAB7mb74XcpizmH0KOR2Pv3D2Hmrh1Dmy5BmK3MpdSa5kqZcg6EKBdklU0bFXX9gCfzvpnyUehrPIuA==",
"dev": true
},
"node_modules/@types/raf": { "node_modules/@types/raf": {
"version": "3.4.3", "version": "3.4.3",
"resolved": "https://registry.npmjs.org/@types/raf/-/raf-3.4.3.tgz", "resolved": "https://registry.npmjs.org/@types/raf/-/raf-3.4.3.tgz",
@ -4803,11 +4815,12 @@
"integrity": "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q==" "integrity": "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q=="
}, },
"node_modules/aria-query": { "node_modules/aria-query": {
"version": "5.3.0", "version": "5.3.2",
"resolved": "https://registry.npmjs.org/aria-query/-/aria-query-5.3.0.tgz", "resolved": "https://registry.npmjs.org/aria-query/-/aria-query-5.3.2.tgz",
"integrity": "sha512-b0P0sZPKtyu8HkeRAfCq0IfURZK+SuwMjY1UXGBU27wpAiTwQAIlq56IbIO+ytk/JjS1fMR14ee5WBBfKi5J6A==", "integrity": "sha512-COROpnaoap1E2F000S62r6A60uHZnmlvomhfyT2DlTcrY1OrBKn2UhH7qn5wTC9zMvD0AY7csdPSNwKP+7WiQw==",
"dependencies": { "license": "Apache-2.0",
"dequal": "^2.0.3" "engines": {
"node": ">= 0.4"
} }
}, },
"node_modules/asn1": { "node_modules/asn1": {
@ -4895,11 +4908,12 @@
"dev": true "dev": true
}, },
"node_modules/axobject-query": { "node_modules/axobject-query": {
"version": "4.0.0", "version": "4.1.0",
"resolved": "https://registry.npmjs.org/axobject-query/-/axobject-query-4.0.0.tgz", "resolved": "https://registry.npmjs.org/axobject-query/-/axobject-query-4.1.0.tgz",
"integrity": "sha512-+60uv1hiVFhHZeO+Lz0RYzsVHy5Wr1ayX0mwda9KPDVLNJgZ1T9Ny7VmFbLDzxsH0D87I86vgj3gFrjTJUYznw==", "integrity": "sha512-qIj0G9wZbMGNLjLmg1PT6v2mE9AH2zlnADJD/2tC6E00hgmhUOfEB6greHPAfLRSufHqROIUTkw6E+M3lH0PTQ==",
"dependencies": { "license": "Apache-2.0",
"dequal": "^2.0.3" "engines": {
"node": ">= 0.4"
} }
}, },
"node_modules/balanced-match": { "node_modules/balanced-match": {
@ -4990,6 +5004,23 @@
"svelte": "^4.0.0 || ^5.0.0-next.118" "svelte": "^4.0.0 || ^5.0.0-next.118"
} }
}, },
"node_modules/bits-ui/node_modules/@melt-ui/svelte": {
"version": "0.76.2",
"resolved": "https://registry.npmjs.org/@melt-ui/svelte/-/svelte-0.76.2.tgz",
"integrity": "sha512-7SbOa11tXUS95T3fReL+dwDs5FyJtCEqrqG3inRziDws346SYLsxOQ6HmX+4BkIsQh1R8U3XNa+EMmdMt38lMA==",
"license": "MIT",
"dependencies": {
"@floating-ui/core": "^1.3.1",
"@floating-ui/dom": "^1.4.5",
"@internationalized/date": "^3.5.0",
"dequal": "^2.0.3",
"focus-trap": "^7.5.2",
"nanoid": "^5.0.4"
},
"peerDependencies": {
"svelte": ">=3 <5"
}
},
"node_modules/bl": { "node_modules/bl": {
"version": "5.1.0", "version": "5.1.0",
"resolved": "https://registry.npmjs.org/bl/-/bl-5.1.0.tgz", "resolved": "https://registry.npmjs.org/bl/-/bl-5.1.0.tgz",
@ -5702,24 +5733,13 @@
"integrity": "sha512-au6ydSpg6nsrigcZ4m8Bc9hxjeW+GJ8xh5G3BJCMt4WXe1H10UNaVOamqQTmrx1kjVuxAHIQSNU6hY4Nsn9/ag==", "integrity": "sha512-au6ydSpg6nsrigcZ4m8Bc9hxjeW+GJ8xh5G3BJCMt4WXe1H10UNaVOamqQTmrx1kjVuxAHIQSNU6hY4Nsn9/ag==",
"dev": true "dev": true
}, },
"node_modules/code-red": { "node_modules/clsx": {
"version": "1.0.4", "version": "2.1.1",
"resolved": "https://registry.npmjs.org/code-red/-/code-red-1.0.4.tgz", "resolved": "https://registry.npmjs.org/clsx/-/clsx-2.1.1.tgz",
"integrity": "sha512-7qJWqItLA8/VPVlKJlFXU+NBlo/qyfs39aJcuMT/2ere32ZqvF5OSxgdM5xOfJJ7O429gg2HM47y8v9P+9wrNw==", "integrity": "sha512-eYm0QWBtUrBWZWG0d386OGAw16Z995PiOVo2B7bjWSbHedGl5e0ZWaq65kOGgUSNesEIDkB9ISbTg/JK9dhCZA==",
"dependencies": { "license": "MIT",
"@jridgewell/sourcemap-codec": "^1.4.15", "engines": {
"@types/estree": "^1.0.1", "node": ">=6"
"acorn": "^8.10.0",
"estree-walker": "^3.0.3",
"periscopic": "^3.1.0"
}
},
"node_modules/code-red/node_modules/estree-walker": {
"version": "3.0.3",
"resolved": "https://registry.npmjs.org/estree-walker/-/estree-walker-3.0.3.tgz",
"integrity": "sha512-7RUKfXgSMMkzt6ZuXmqapOurLGPPfgj6l9uRZ7lRGolvk0y2yocc35LdcxKC5PQZdn2DMqioAQ2NoWcrTKmm6g==",
"dependencies": {
"@types/estree": "^1.0.0"
} }
}, },
"node_modules/codedent": { "node_modules/codedent": {
@ -5981,18 +6001,6 @@
"url": "https://github.com/sponsors/fb55" "url": "https://github.com/sponsors/fb55"
} }
}, },
"node_modules/css-tree": {
"version": "2.3.1",
"resolved": "https://registry.npmjs.org/css-tree/-/css-tree-2.3.1.tgz",
"integrity": "sha512-6Fv1DV/TYw//QF5IzQdqsNDjx/wc8TrMBZsqjL9eW01tWb7R7k/mq+/VXfJCl7SoD5emsJop9cOByJZfs8hYIw==",
"dependencies": {
"mdn-data": "2.0.30",
"source-map-js": "^1.0.1"
},
"engines": {
"node": "^10 || ^12.20.0 || ^14.13.0 || >=15.0.0"
}
},
"node_modules/css-what": { "node_modules/css-what": {
"version": "6.1.0", "version": "6.1.0",
"resolved": "https://registry.npmjs.org/css-what/-/css-what-6.1.0.tgz", "resolved": "https://registry.npmjs.org/css-what/-/css-what-6.1.0.tgz",
@ -6815,15 +6823,6 @@
"node": ">=6" "node": ">=6"
} }
}, },
"node_modules/detect-indent": {
"version": "6.1.0",
"resolved": "https://registry.npmjs.org/detect-indent/-/detect-indent-6.1.0.tgz",
"integrity": "sha512-reYkTUJAZb9gUuZ2RvVCNhVHdg62RHnJ7WJl8ftMi4diZ6NWlciOzQN88pUhSELEwflJht4oQDv0F0BMlwaYtA==",
"dev": true,
"engines": {
"node": ">=8"
}
},
"node_modules/detect-libc": { "node_modules/detect-libc": {
"version": "2.0.3", "version": "2.0.3",
"resolved": "https://registry.npmjs.org/detect-libc/-/detect-libc-2.0.3.tgz", "resolved": "https://registry.npmjs.org/detect-libc/-/detect-libc-2.0.3.tgz",
@ -7116,12 +7115,6 @@
"node": ">= 0.4" "node": ">= 0.4"
} }
}, },
"node_modules/es6-promise": {
"version": "3.3.1",
"resolved": "https://registry.npmjs.org/es6-promise/-/es6-promise-3.3.1.tgz",
"integrity": "sha512-SOp9Phqvqn7jtEUxPWdWfWoLmyt2VaJ6MpvP9Comy1MceMXqE6bxvaTu4iaxpYYPzhny28Lc+M87/c2cPK6lDg==",
"dev": true
},
"node_modules/esbuild": { "node_modules/esbuild": {
"version": "0.25.1", "version": "0.25.1",
"resolved": "https://registry.npmjs.org/esbuild/-/esbuild-0.25.1.tgz", "resolved": "https://registry.npmjs.org/esbuild/-/esbuild-0.25.1.tgz",
@ -7278,22 +7271,23 @@
} }
}, },
"node_modules/eslint-plugin-svelte": { "node_modules/eslint-plugin-svelte": {
"version": "2.43.0", "version": "2.46.1",
"resolved": "https://registry.npmjs.org/eslint-plugin-svelte/-/eslint-plugin-svelte-2.43.0.tgz", "resolved": "https://registry.npmjs.org/eslint-plugin-svelte/-/eslint-plugin-svelte-2.46.1.tgz",
"integrity": "sha512-REkxQWvg2pp7QVLxQNa+dJ97xUqRe7Y2JJbSWkHSuszu0VcblZtXkPBPckkivk99y5CdLw4slqfPylL2d/X4jQ==", "integrity": "sha512-7xYr2o4NID/f9OEYMqxsEQsCsj4KaMy4q5sANaKkAb6/QeCjYFxRmDm2S3YC3A3pl1kyPZ/syOx/i7LcWYSbIw==",
"dev": true, "dev": true,
"license": "MIT",
"dependencies": { "dependencies": {
"@eslint-community/eslint-utils": "^4.4.0", "@eslint-community/eslint-utils": "^4.4.0",
"@jridgewell/sourcemap-codec": "^1.4.15", "@jridgewell/sourcemap-codec": "^1.4.15",
"eslint-compat-utils": "^0.5.1", "eslint-compat-utils": "^0.5.1",
"esutils": "^2.0.3", "esutils": "^2.0.3",
"known-css-properties": "^0.34.0", "known-css-properties": "^0.35.0",
"postcss": "^8.4.38", "postcss": "^8.4.38",
"postcss-load-config": "^3.1.4", "postcss-load-config": "^3.1.4",
"postcss-safe-parser": "^6.0.0", "postcss-safe-parser": "^6.0.0",
"postcss-selector-parser": "^6.1.0", "postcss-selector-parser": "^6.1.0",
"semver": "^7.6.2", "semver": "^7.6.2",
"svelte-eslint-parser": "^0.41.0" "svelte-eslint-parser": "^0.43.0"
}, },
"engines": { "engines": {
"node": "^14.17.0 || >=16.0.0" "node": "^14.17.0 || >=16.0.0"
@ -7303,7 +7297,7 @@
}, },
"peerDependencies": { "peerDependencies": {
"eslint": "^7.0.0 || ^8.0.0-0 || ^9.0.0-0", "eslint": "^7.0.0 || ^8.0.0-0 || ^9.0.0-0",
"svelte": "^3.37.0 || ^4.0.0 || ^5.0.0-next.191" "svelte": "^3.37.0 || ^4.0.0 || ^5.0.0"
}, },
"peerDependenciesMeta": { "peerDependenciesMeta": {
"svelte": { "svelte": {
@ -7410,6 +7404,15 @@
"node": ">=0.10" "node": ">=0.10"
} }
}, },
"node_modules/esrap": {
"version": "2.1.1",
"resolved": "https://registry.npmjs.org/esrap/-/esrap-2.1.1.tgz",
"integrity": "sha512-ebTT9B6lOtZGMgJ3o5r12wBacHctG7oEWazIda8UlPfA3HD/Wrv8FdXoVo73vzdpwCxNyXjPauyN2bbJzMkB9A==",
"license": "MIT",
"dependencies": {
"@jridgewell/sourcemap-codec": "^1.4.15"
}
},
"node_modules/esrecurse": { "node_modules/esrecurse": {
"version": "4.3.0", "version": "4.3.0",
"resolved": "https://registry.npmjs.org/esrecurse/-/esrecurse-4.3.0.tgz", "resolved": "https://registry.npmjs.org/esrecurse/-/esrecurse-4.3.0.tgz",
@ -9021,10 +9024,11 @@
} }
}, },
"node_modules/known-css-properties": { "node_modules/known-css-properties": {
"version": "0.34.0", "version": "0.35.0",
"resolved": "https://registry.npmjs.org/known-css-properties/-/known-css-properties-0.34.0.tgz", "resolved": "https://registry.npmjs.org/known-css-properties/-/known-css-properties-0.35.0.tgz",
"integrity": "sha512-tBECoUqNFbyAY4RrbqsBQqDFpGXAEbdD5QKr8kACx3+rnArmuuR22nKQWKazvp07N9yjTyDZaw/20UIH8tL9DQ==", "integrity": "sha512-a/RAk2BfKk+WFGhhOCAYqSiFLc34k8Mt/6NWRI4joER0EYUzXIcFivjjnoD3+XU1DggLn/tZc3DOAgke7l8a4A==",
"dev": true "dev": true,
"license": "MIT"
}, },
"node_modules/kokoro-js": { "node_modules/kokoro-js": {
"version": "1.1.1", "version": "1.1.1",
@ -9658,11 +9662,12 @@
"license": "ISC" "license": "ISC"
}, },
"node_modules/magic-string": { "node_modules/magic-string": {
"version": "0.30.11", "version": "0.30.21",
"resolved": "https://registry.npmjs.org/magic-string/-/magic-string-0.30.11.tgz", "resolved": "https://registry.npmjs.org/magic-string/-/magic-string-0.30.21.tgz",
"integrity": "sha512-+Wri9p0QHMy+545hKww7YAu5NyzF8iomPL/RQazugQ9+Ez4Ic3mERMd8ZTX5rfK944j+560ZJi8iAwgak1Ac7A==", "integrity": "sha512-vd2F4YUyEXKGcLHoq+TEyCjxueSeHnFxyyjNp80yg0XV4vUhnDer/lvvlqM/arB5bXQN5K2/3oinyCRyx8T2CQ==",
"license": "MIT",
"dependencies": { "dependencies": {
"@jridgewell/sourcemap-codec": "^1.5.0" "@jridgewell/sourcemap-codec": "^1.5.5"
} }
}, },
"node_modules/markdown-it": { "node_modules/markdown-it": {
@ -9738,11 +9743,6 @@
"node": ">= 0.4" "node": ">= 0.4"
} }
}, },
"node_modules/mdn-data": {
"version": "2.0.30",
"resolved": "https://registry.npmjs.org/mdn-data/-/mdn-data-2.0.30.tgz",
"integrity": "sha512-GaqWWShW4kv/G9IEucWScBx9G1/vsFZZJUO+tD26M8J8z3Kw5RDQjaoZe03YAClgeS/SWPOcb4nkFBTEi5DUEA=="
},
"node_modules/mdurl": { "node_modules/mdurl": {
"version": "2.0.0", "version": "2.0.0",
"resolved": "https://registry.npmjs.org/mdurl/-/mdurl-2.0.0.tgz", "resolved": "https://registry.npmjs.org/mdurl/-/mdurl-2.0.0.tgz",
@ -9857,15 +9857,6 @@
"node": ">=6" "node": ">=6"
} }
}, },
"node_modules/min-indent": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/min-indent/-/min-indent-1.0.1.tgz",
"integrity": "sha512-I9jwMn07Sy/IwOj3zVkVik2JTvgpaykDZEigL6Rx6N9LbMywwUSMtxET+7lVoDLLd3O3IXwJwvuuns8UB/HeAg==",
"dev": true,
"engines": {
"node": ">=4"
}
},
"node_modules/minimatch": { "node_modules/minimatch": {
"version": "9.0.5", "version": "9.0.5",
"resolved": "https://registry.npmjs.org/minimatch/-/minimatch-9.0.5.tgz", "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-9.0.5.tgz",
@ -9961,18 +9952,6 @@
"url": "https://github.com/sponsors/isaacs" "url": "https://github.com/sponsors/isaacs"
} }
}, },
"node_modules/mkdirp": {
"version": "0.5.6",
"resolved": "https://registry.npmjs.org/mkdirp/-/mkdirp-0.5.6.tgz",
"integrity": "sha512-FP+p8RB8OWpF3YZBCrP5gtADmtXApB5AMLn+vdyA+PyxCjrCs00mjyUozssO33cwDeT3wNGdLxJ5M//YqtHAJw==",
"dev": true,
"dependencies": {
"minimist": "^1.2.6"
},
"bin": {
"mkdirp": "bin/cmd.js"
}
},
"node_modules/mktemp": { "node_modules/mktemp": {
"version": "0.4.0", "version": "0.4.0",
"resolved": "https://registry.npmjs.org/mktemp/-/mktemp-0.4.0.tgz", "resolved": "https://registry.npmjs.org/mktemp/-/mktemp-0.4.0.tgz",
@ -10461,32 +10440,6 @@
"integrity": "sha512-7EAHlyLHI56VEIdK57uwHdHKIaAGbnXPiw0yWbarQZOKaKpvUIgW0jWRVLiatnM+XXlSwsanIBH/hzGMJulMow==", "integrity": "sha512-7EAHlyLHI56VEIdK57uwHdHKIaAGbnXPiw0yWbarQZOKaKpvUIgW0jWRVLiatnM+XXlSwsanIBH/hzGMJulMow==",
"devOptional": true "devOptional": true
}, },
"node_modules/periscopic": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/periscopic/-/periscopic-3.1.0.tgz",
"integrity": "sha512-vKiQ8RRtkl9P+r/+oefh25C3fhybptkHKCZSPlcXiJux2tJF55GnEj3BVn4A5gKfq9NWWXXrxkHBwVPUfH0opw==",
"dependencies": {
"@types/estree": "^1.0.0",
"estree-walker": "^3.0.0",
"is-reference": "^3.0.0"
}
},
"node_modules/periscopic/node_modules/estree-walker": {
"version": "3.0.3",
"resolved": "https://registry.npmjs.org/estree-walker/-/estree-walker-3.0.3.tgz",
"integrity": "sha512-7RUKfXgSMMkzt6ZuXmqapOurLGPPfgj6l9uRZ7lRGolvk0y2yocc35LdcxKC5PQZdn2DMqioAQ2NoWcrTKmm6g==",
"dependencies": {
"@types/estree": "^1.0.0"
}
},
"node_modules/periscopic/node_modules/is-reference": {
"version": "3.0.2",
"resolved": "https://registry.npmjs.org/is-reference/-/is-reference-3.0.2.tgz",
"integrity": "sha512-v3rht/LgVcsdZa3O2Nqs+NMowLOxeOm7Ay9+/ARQ2F+qEoANRcqrjAZKGN0v8ymUetZGgkp26LTnGT7H0Qo9Pg==",
"dependencies": {
"@types/estree": "*"
}
},
"node_modules/phonemizer": { "node_modules/phonemizer": {
"version": "1.2.1", "version": "1.2.1",
"resolved": "https://registry.npmjs.org/phonemizer/-/phonemizer-1.2.1.tgz", "resolved": "https://registry.npmjs.org/phonemizer/-/phonemizer-1.2.1.tgz",
@ -10684,6 +10637,7 @@
"url": "https://github.com/sponsors/ai" "url": "https://github.com/sponsors/ai"
} }
], ],
"license": "MIT",
"engines": { "engines": {
"node": ">=12.0" "node": ">=12.0"
}, },
@ -11581,73 +11535,6 @@
"resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz", "resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz",
"integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==" "integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg=="
}, },
"node_modules/sander": {
"version": "0.5.1",
"resolved": "https://registry.npmjs.org/sander/-/sander-0.5.1.tgz",
"integrity": "sha512-3lVqBir7WuKDHGrKRDn/1Ye3kwpXaDOMsiRP1wd6wpZW56gJhsbp5RqQpA6JG/P+pkXizygnr1dKR8vzWaVsfA==",
"dev": true,
"dependencies": {
"es6-promise": "^3.1.2",
"graceful-fs": "^4.1.3",
"mkdirp": "^0.5.1",
"rimraf": "^2.5.2"
}
},
"node_modules/sander/node_modules/brace-expansion": {
"version": "1.1.12",
"resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.12.tgz",
"integrity": "sha512-9T9UjW3r0UW5c1Q7GTwllptXwhvYmEzFhzMfZ9H7FQWt+uZePjZPjBP/W1ZEyZ1twGWom5/56TF4lPcqjnDHcg==",
"dev": true,
"license": "MIT",
"dependencies": {
"balanced-match": "^1.0.0",
"concat-map": "0.0.1"
}
},
"node_modules/sander/node_modules/glob": {
"version": "7.2.3",
"resolved": "https://registry.npmjs.org/glob/-/glob-7.2.3.tgz",
"integrity": "sha512-nFR0zLpU2YCaRxwoCJvL6UvCH2JFyFVIvwTLsIf21AuHlMskA1hhTdk+LlYJtOlYt9v6dvszD2BGRqBL+iQK9Q==",
"dev": true,
"dependencies": {
"fs.realpath": "^1.0.0",
"inflight": "^1.0.4",
"inherits": "2",
"minimatch": "^3.1.1",
"once": "^1.3.0",
"path-is-absolute": "^1.0.0"
},
"engines": {
"node": "*"
},
"funding": {
"url": "https://github.com/sponsors/isaacs"
}
},
"node_modules/sander/node_modules/minimatch": {
"version": "3.1.2",
"resolved": "https://registry.npmjs.org/minimatch/-/minimatch-3.1.2.tgz",
"integrity": "sha512-J7p63hRiAjw1NDEww1W7i37+ByIrOWO5XQQAzZ3VOcL0PNybwpfmV/N05zFAzwQ9USyEcX6t3UO+K5aqBQOIHw==",
"dev": true,
"dependencies": {
"brace-expansion": "^1.1.7"
},
"engines": {
"node": "*"
}
},
"node_modules/sander/node_modules/rimraf": {
"version": "2.7.1",
"resolved": "https://registry.npmjs.org/rimraf/-/rimraf-2.7.1.tgz",
"integrity": "sha512-uWjbaKIK3T1OSVptzX7Nl6PvQ3qAGtKEtVRjRuazjfL3Bx5eI409VZSqgND+4UNnmzLVdPj9FqFJNPqBZFve4w==",
"dev": true,
"dependencies": {
"glob": "^7.1.3"
},
"bin": {
"rimraf": "bin.js"
}
},
"node_modules/sass-embedded": { "node_modules/sass-embedded": {
"version": "1.81.0", "version": "1.81.0",
"resolved": "https://registry.npmjs.org/sass-embedded/-/sass-embedded-1.81.0.tgz", "resolved": "https://registry.npmjs.org/sass-embedded/-/sass-embedded-1.81.0.tgz",
@ -12231,21 +12118,6 @@
"node": ">=10.0.0" "node": ">=10.0.0"
} }
}, },
"node_modules/sorcery": {
"version": "0.11.0",
"resolved": "https://registry.npmjs.org/sorcery/-/sorcery-0.11.0.tgz",
"integrity": "sha512-J69LQ22xrQB1cIFJhPfgtLuI6BpWRiWu1Y3vSsIwK/eAScqJxd/+CJlUuHQRdX2C9NGFamq+KqNywGgaThwfHw==",
"dev": true,
"dependencies": {
"@jridgewell/sourcemap-codec": "^1.4.14",
"buffer-crc32": "^0.2.5",
"minimist": "^1.2.0",
"sander": "^0.5.0"
},
"bin": {
"sorcery": "bin/sorcery"
}
},
"node_modules/sort-keys": { "node_modules/sort-keys": {
"version": "5.0.0", "version": "5.0.0",
"resolved": "https://registry.npmjs.org/sort-keys/-/sort-keys-5.0.0.tgz", "resolved": "https://registry.npmjs.org/sort-keys/-/sort-keys-5.0.0.tgz",
@ -12456,18 +12328,6 @@
"node": ">=6" "node": ">=6"
} }
}, },
"node_modules/strip-indent": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/strip-indent/-/strip-indent-3.0.0.tgz",
"integrity": "sha512-laJTa3Jb+VQpaC6DseHhF7dXVqHTfJPCRDaEbid/drOhgitgYku/letMUqOXFoWV0zIIUbjpdH2t+tYj4bQMRQ==",
"dev": true,
"dependencies": {
"min-indent": "^1.0.0"
},
"engines": {
"node": ">=8"
}
},
"node_modules/strip-json-comments": { "node_modules/strip-json-comments": {
"version": "3.1.1", "version": "3.1.1",
"resolved": "https://registry.npmjs.org/strip-json-comments/-/strip-json-comments-3.1.1.tgz", "resolved": "https://registry.npmjs.org/strip-json-comments/-/strip-json-comments-3.1.1.tgz",
@ -12527,47 +12387,115 @@
} }
}, },
"node_modules/svelte": { "node_modules/svelte": {
"version": "4.2.19", "version": "5.42.2",
"resolved": "https://registry.npmjs.org/svelte/-/svelte-4.2.19.tgz", "resolved": "https://registry.npmjs.org/svelte/-/svelte-5.42.2.tgz",
"integrity": "sha512-IY1rnGr6izd10B0A8LqsBfmlT5OILVuZ7XsI0vdGPEvuonFV7NYEUK4dAkm9Zg2q0Um92kYjTpS1CAP3Nh/KWw==", "integrity": "sha512-iSry5jsBHispVczyt9UrBX/1qu3HQ/UyKPAIjqlvlu3o/eUvc+kpyMyRS2O4HLLx4MvLurLGIUOyyP11pyD59g==",
"license": "MIT",
"dependencies": { "dependencies": {
"@ampproject/remapping": "^2.2.1", "@jridgewell/remapping": "^2.3.4",
"@jridgewell/sourcemap-codec": "^1.4.15", "@jridgewell/sourcemap-codec": "^1.5.0",
"@jridgewell/trace-mapping": "^0.3.18", "@sveltejs/acorn-typescript": "^1.0.5",
"@types/estree": "^1.0.1", "@types/estree": "^1.0.5",
"acorn": "^8.9.0", "acorn": "^8.12.1",
"aria-query": "^5.3.0", "aria-query": "^5.3.1",
"axobject-query": "^4.0.0", "axobject-query": "^4.1.0",
"code-red": "^1.0.3", "clsx": "^2.1.1",
"css-tree": "^2.3.1", "esm-env": "^1.2.1",
"estree-walker": "^3.0.3", "esrap": "^2.1.0",
"is-reference": "^3.0.1", "is-reference": "^3.0.3",
"locate-character": "^3.0.0", "locate-character": "^3.0.0",
"magic-string": "^0.30.4", "magic-string": "^0.30.11",
"periscopic": "^3.1.0" "zimmerframe": "^1.1.2"
}, },
"engines": { "engines": {
"node": ">=16" "node": ">=18"
} }
}, },
"node_modules/svelte-check": { "node_modules/svelte-check": {
"version": "3.8.5", "version": "4.3.3",
"resolved": "https://registry.npmjs.org/svelte-check/-/svelte-check-3.8.5.tgz", "resolved": "https://registry.npmjs.org/svelte-check/-/svelte-check-4.3.3.tgz",
"integrity": "sha512-3OGGgr9+bJ/+1nbPgsvulkLC48xBsqsgtc8Wam281H4G9F5v3mYGa2bHRsPuwHC5brKl4AxJH95QF73kmfihGQ==", "integrity": "sha512-RYP0bEwenDXzfv0P1sKAwjZSlaRyqBn0Fz1TVni58lqyEiqgwztTpmodJrGzP6ZT2aHl4MbTvWP6gbmQ3FOnBg==",
"dev": true, "dev": true,
"license": "MIT",
"dependencies": { "dependencies": {
"@jridgewell/trace-mapping": "^0.3.17", "@jridgewell/trace-mapping": "^0.3.25",
"chokidar": "^3.4.1", "chokidar": "^4.0.1",
"fdir": "^6.2.0",
"picocolors": "^1.0.0", "picocolors": "^1.0.0",
"sade": "^1.7.4", "sade": "^1.7.4"
"svelte-preprocess": "^5.1.3",
"typescript": "^5.0.3"
}, },
"bin": { "bin": {
"svelte-check": "bin/svelte-check" "svelte-check": "bin/svelte-check"
}, },
"engines": {
"node": ">= 18.0.0"
},
"peerDependencies": { "peerDependencies": {
"svelte": "^3.55.0 || ^4.0.0-next.0 || ^4.0.0 || ^5.0.0-next.0" "svelte": "^4.0.0 || ^5.0.0-next.0",
"typescript": ">=5.0.0"
}
},
"node_modules/svelte-check/node_modules/chokidar": {
"version": "4.0.3",
"resolved": "https://registry.npmjs.org/chokidar/-/chokidar-4.0.3.tgz",
"integrity": "sha512-Qgzu8kfBvo+cA4962jnP1KkS6Dop5NS6g7R5LFYJr4b8Ub94PPQXUksCw9PvXoeXPRRddRNC5C1JQUR2SMGtnA==",
"dev": true,
"license": "MIT",
"dependencies": {
"readdirp": "^4.0.1"
},
"engines": {
"node": ">= 14.16.0"
},
"funding": {
"url": "https://paulmillr.com/funding/"
}
},
"node_modules/svelte-check/node_modules/fdir": {
"version": "6.5.0",
"resolved": "https://registry.npmjs.org/fdir/-/fdir-6.5.0.tgz",
"integrity": "sha512-tIbYtZbucOs0BRGqPJkshJUYdL+SDH7dVM8gjy+ERp3WAUjLEFJE+02kanyHtwjWOnwrKYBiwAmM0p4kLJAnXg==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">=12.0.0"
},
"peerDependencies": {
"picomatch": "^3 || ^4"
},
"peerDependenciesMeta": {
"picomatch": {
"optional": true
}
}
},
"node_modules/svelte-check/node_modules/picomatch": {
"version": "4.0.3",
"resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.3.tgz",
"integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
"dev": true,
"license": "MIT",
"optional": true,
"peer": true,
"engines": {
"node": ">=12"
},
"funding": {
"url": "https://github.com/sponsors/jonschlinkert"
}
},
"node_modules/svelte-check/node_modules/readdirp": {
"version": "4.1.2",
"resolved": "https://registry.npmjs.org/readdirp/-/readdirp-4.1.2.tgz",
"integrity": "sha512-GDhwkLfywWL2s6vEjyhri+eXmfH6j1L7JE27WhqLeYzoh/A3DBaYGEj2H/HFZCn/kMfim73FXxEJTw06WtxQwg==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">= 14.18.0"
},
"funding": {
"type": "individual",
"url": "https://paulmillr.com/funding/"
} }
}, },
"node_modules/svelte-confetti": { "node_modules/svelte-confetti": {
@ -12580,10 +12508,11 @@
} }
}, },
"node_modules/svelte-eslint-parser": { "node_modules/svelte-eslint-parser": {
"version": "0.41.0", "version": "0.43.0",
"resolved": "https://registry.npmjs.org/svelte-eslint-parser/-/svelte-eslint-parser-0.41.0.tgz", "resolved": "https://registry.npmjs.org/svelte-eslint-parser/-/svelte-eslint-parser-0.43.0.tgz",
"integrity": "sha512-L6f4hOL+AbgfBIB52Z310pg1d2QjRqm7wy3kI1W6hhdhX5bvu7+f0R6w4ykp5HoDdzq+vGhIJmsisaiJDGmVfA==", "integrity": "sha512-GpU52uPKKcVnh8tKN5P4UZpJ/fUDndmq7wfsvoVXsyP+aY0anol7Yqo01fyrlaWGMFfm4av5DyrjlaXdLRJvGA==",
"dev": true, "dev": true,
"license": "MIT",
"dependencies": { "dependencies": {
"eslint-scope": "^7.2.2", "eslint-scope": "^7.2.2",
"eslint-visitor-keys": "^3.4.3", "eslint-visitor-keys": "^3.4.3",
@ -12598,7 +12527,7 @@
"url": "https://github.com/sponsors/ota-meshi" "url": "https://github.com/sponsors/ota-meshi"
}, },
"peerDependencies": { "peerDependencies": {
"svelte": "^3.37.0 || ^4.0.0 || ^5.0.0-next.191" "svelte": "^3.37.0 || ^4.0.0 || ^5.0.0"
}, },
"peerDependenciesMeta": { "peerDependenciesMeta": {
"svelte": { "svelte": {
@ -12606,80 +12535,6 @@
} }
} }
}, },
"node_modules/svelte-hmr": {
"version": "0.16.0",
"resolved": "https://registry.npmjs.org/svelte-hmr/-/svelte-hmr-0.16.0.tgz",
"integrity": "sha512-Gyc7cOS3VJzLlfj7wKS0ZnzDVdv3Pn2IuVeJPk9m2skfhcu5bq3wtIZyQGggr7/Iim5rH5cncyQft/kRLupcnA==",
"engines": {
"node": "^12.20 || ^14.13.1 || >= 16"
},
"peerDependencies": {
"svelte": "^3.19.0 || ^4.0.0"
}
},
"node_modules/svelte-preprocess": {
"version": "5.1.3",
"resolved": "https://registry.npmjs.org/svelte-preprocess/-/svelte-preprocess-5.1.3.tgz",
"integrity": "sha512-xxAkmxGHT+J/GourS5mVJeOXZzne1FR5ljeOUAMXUkfEhkLEllRreXpbl3dIYJlcJRfL1LO1uIAPpBpBfiqGPw==",
"dev": true,
"hasInstallScript": true,
"dependencies": {
"@types/pug": "^2.0.6",
"detect-indent": "^6.1.0",
"magic-string": "^0.30.5",
"sorcery": "^0.11.0",
"strip-indent": "^3.0.0"
},
"engines": {
"node": ">= 16.0.0",
"pnpm": "^8.0.0"
},
"peerDependencies": {
"@babel/core": "^7.10.2",
"coffeescript": "^2.5.1",
"less": "^3.11.3 || ^4.0.0",
"postcss": "^7 || ^8",
"postcss-load-config": "^2.1.0 || ^3.0.0 || ^4.0.0 || ^5.0.0",
"pug": "^3.0.0",
"sass": "^1.26.8",
"stylus": "^0.55.0",
"sugarss": "^2.0.0 || ^3.0.0 || ^4.0.0",
"svelte": "^3.23.0 || ^4.0.0-next.0 || ^4.0.0 || ^5.0.0-next.0",
"typescript": ">=3.9.5 || ^4.0.0 || ^5.0.0"
},
"peerDependenciesMeta": {
"@babel/core": {
"optional": true
},
"coffeescript": {
"optional": true
},
"less": {
"optional": true
},
"postcss": {
"optional": true
},
"postcss-load-config": {
"optional": true
},
"pug": {
"optional": true
},
"sass": {
"optional": true
},
"stylus": {
"optional": true
},
"sugarss": {
"optional": true
},
"typescript": {
"optional": true
}
}
},
"node_modules/svelte-sonner": { "node_modules/svelte-sonner": {
"version": "0.3.28", "version": "0.3.28",
"resolved": "https://registry.npmjs.org/svelte-sonner/-/svelte-sonner-0.3.28.tgz", "resolved": "https://registry.npmjs.org/svelte-sonner/-/svelte-sonner-0.3.28.tgz",
@ -12688,20 +12543,19 @@
"svelte": "^3.0.0 || ^4.0.0 || ^5.0.0-next.1" "svelte": "^3.0.0 || ^4.0.0 || ^5.0.0-next.1"
} }
}, },
"node_modules/svelte/node_modules/estree-walker": { "node_modules/svelte/node_modules/@types/estree": {
"version": "3.0.3", "version": "1.0.8",
"resolved": "https://registry.npmjs.org/estree-walker/-/estree-walker-3.0.3.tgz", "resolved": "https://registry.npmjs.org/@types/estree/-/estree-1.0.8.tgz",
"integrity": "sha512-7RUKfXgSMMkzt6ZuXmqapOurLGPPfgj6l9uRZ7lRGolvk0y2yocc35LdcxKC5PQZdn2DMqioAQ2NoWcrTKmm6g==", "integrity": "sha512-dWHzHa2WqEXI/O1E9OjrocMTKJl2mSrEolh1Iomrv6U+JuNwaHXsXx9bLu5gG7BUWFIN0skIQJQ/L1rIex4X6w==",
"dependencies": { "license": "MIT"
"@types/estree": "^1.0.0"
}
}, },
"node_modules/svelte/node_modules/is-reference": { "node_modules/svelte/node_modules/is-reference": {
"version": "3.0.2", "version": "3.0.3",
"resolved": "https://registry.npmjs.org/is-reference/-/is-reference-3.0.2.tgz", "resolved": "https://registry.npmjs.org/is-reference/-/is-reference-3.0.3.tgz",
"integrity": "sha512-v3rht/LgVcsdZa3O2Nqs+NMowLOxeOm7Ay9+/ARQ2F+qEoANRcqrjAZKGN0v8ymUetZGgkp26LTnGT7H0Qo9Pg==", "integrity": "sha512-ixkJoqQvAP88E6wLydLGGqCJsrFUnqoH6HnaczB8XmDH1oaWU+xxdptvikTgaEhtZ53Ky6YXiBuUI2WXLMCwjw==",
"license": "MIT",
"dependencies": { "dependencies": {
"@types/estree": "*" "@types/estree": "^1.0.6"
} }
}, },
"node_modules/svg-pathdata": { "node_modules/svg-pathdata": {
@ -14213,11 +14067,17 @@
} }
}, },
"node_modules/vitefu": { "node_modules/vitefu": {
"version": "0.2.5", "version": "1.1.1",
"resolved": "https://registry.npmjs.org/vitefu/-/vitefu-0.2.5.tgz", "resolved": "https://registry.npmjs.org/vitefu/-/vitefu-1.1.1.tgz",
"integrity": "sha512-SgHtMLoqaeeGnd2evZ849ZbACbnwQCIwRH57t18FxcXoZop0uQu0uzlIhJBlF/eWVzuce0sHeqPcDo+evVcg8Q==", "integrity": "sha512-B/Fegf3i8zh0yFbpzZ21amWzHmuNlLlmJT6n7bu5e+pCHUKQIfXSYokrqOBGEMMe9UG2sostKQF9mml/vYaWJQ==",
"license": "MIT",
"workspaces": [
"tests/deps/*",
"tests/projects/*",
"tests/projects/workspace/packages/*"
],
"peerDependencies": { "peerDependencies": {
"vite": "^3.0.0 || ^4.0.0 || ^5.0.0" "vite": "^3.0.0 || ^4.0.0 || ^5.0.0 || ^6.0.0 || ^7.0.0-beta.0"
}, },
"peerDependenciesMeta": { "peerDependenciesMeta": {
"vite": { "vite": {
@ -14945,6 +14805,12 @@
"funding": { "funding": {
"url": "https://github.com/sponsors/sindresorhus" "url": "https://github.com/sponsors/sindresorhus"
} }
},
"node_modules/zimmerframe": {
"version": "1.1.4",
"resolved": "https://registry.npmjs.org/zimmerframe/-/zimmerframe-1.1.4.tgz",
"integrity": "sha512-B58NGBEoc8Y9MWWCQGl/gq9xBCe4IiKM0a2x7GZdQKOW5Exr8S1W24J6OgM1njK8xCRGvAJIL/MxXHf6SkmQKQ==",
"license": "MIT"
} }
} }
} }


@ -1,6 +1,6 @@
{ {
"name": "open-webui", "name": "open-webui",
"version": "0.6.33", "version": "0.6.36",
"private": true, "private": true,
"scripts": { "scripts": {
"dev": "npm run pyodide:fetch && vite dev --host", "dev": "npm run pyodide:fetch && vite dev --host",
@ -24,8 +24,8 @@
"devDependencies": { "devDependencies": {
"@sveltejs/adapter-auto": "3.2.2", "@sveltejs/adapter-auto": "3.2.2",
"@sveltejs/adapter-static": "^3.0.2", "@sveltejs/adapter-static": "^3.0.2",
"@sveltejs/kit": "^2.5.20", "@sveltejs/kit": "^2.5.27",
"@sveltejs/vite-plugin-svelte": "^3.1.1", "@sveltejs/vite-plugin-svelte": "^4.0.0",
"@tailwindcss/container-queries": "^0.1.1", "@tailwindcss/container-queries": "^0.1.1",
"@tailwindcss/postcss": "^4.0.0", "@tailwindcss/postcss": "^4.0.0",
"@tailwindcss/typography": "^0.5.13", "@tailwindcss/typography": "^0.5.13",
@ -35,14 +35,14 @@
"eslint": "^8.56.0", "eslint": "^8.56.0",
"eslint-config-prettier": "^9.1.0", "eslint-config-prettier": "^9.1.0",
"eslint-plugin-cypress": "^3.4.0", "eslint-plugin-cypress": "^3.4.0",
"eslint-plugin-svelte": "^2.43.0", "eslint-plugin-svelte": "^2.45.1",
"i18next-parser": "^9.0.1", "i18next-parser": "^9.0.1",
"postcss": "^8.4.31", "postcss": "^8.4.31",
"prettier": "^3.3.3", "prettier": "^3.3.3",
"prettier-plugin-svelte": "^3.2.6", "prettier-plugin-svelte": "^3.2.6",
"sass-embedded": "^1.81.0", "sass-embedded": "^1.81.0",
"svelte": "^4.2.18", "svelte": "^5.0.0",
"svelte-check": "^3.8.5", "svelte-check": "^4.0.0",
"svelte-confetti": "^1.3.2", "svelte-confetti": "^1.3.2",
"tailwindcss": "^4.0.0", "tailwindcss": "^4.0.0",
"tslib": "^2.4.1", "tslib": "^2.4.1",


@ -13,12 +13,12 @@ dependencies = [
"itsdangerous==2.2.0", "itsdangerous==2.2.0",
"python-socketio==5.13.0", "python-socketio==5.13.0",
"python-jose==3.4.0", "python-jose==3.5.0",
"cryptography", "cryptography",
"bcrypt==5.0.0", "bcrypt==5.0.0",
"argon2-cffi==25.1.0", "argon2-cffi==25.1.0",
"PyJWT[crypto]==2.10.1", "PyJWT[crypto]==2.10.1",
"authlib==1.6.3", "authlib==1.6.5",
"requests==2.32.5", "requests==2.32.5",
"aiohttp==3.12.15", "aiohttp==3.12.15",
@ -73,7 +73,7 @@ dependencies = [
"pymdown-extensions==10.14.2", "pymdown-extensions==10.14.2",
"docx2txt==0.8", "docx2txt==0.8",
"python-pptx==1.0.2", "python-pptx==1.0.2",
"unstructured==0.16.17", "unstructured==0.18.15",
"nltk==3.9.1", "nltk==3.9.1",
"Markdown==3.9", "Markdown==3.9",
"pypandoc==1.15", "pypandoc==1.15",
@ -151,9 +151,7 @@ all = [
"oracledb==3.2.0", "oracledb==3.2.0",
"colbert-ai==0.2.21", "colbert-ai==0.2.21",
"firecrawl-py==4.5.0",
"firecrawl-py==1.12.0",
"tencentcloud-sdk-python==3.0.1336",
] ]
[project.scripts] [project.scripts]


@ -129,8 +129,8 @@ li p {
} }
::-webkit-scrollbar { ::-webkit-scrollbar {
height: 0.4rem; height: 0.45rem;
width: 0.4rem; width: 0.45rem;
} }
::-webkit-scrollbar-track { ::-webkit-scrollbar-track {
@ -152,6 +152,14 @@ select {
-webkit-appearance: none; -webkit-appearance: none;
} }
.dark select:not([class*='bg-transparent']) {
@apply bg-gray-900 text-gray-300;
}
.dark select option {
@apply bg-gray-850 text-white;
}
@keyframes shimmer { @keyframes shimmer {
0% { 0% {
background-position: 200% 0; background-position: 200% 0;


@ -23,12 +23,7 @@
href="/static/apple-touch-icon.png" href="/static/apple-touch-icon.png"
crossorigin="use-credentials" crossorigin="use-credentials"
/> />
<link <link rel="manifest" href="/manifest.json" crossorigin="use-credentials" />
rel="manifest"
href="/manifest.json"
crossorigin="use-credentials"
crossorigin="use-credentials"
/>
<meta <meta
name="viewport" name="viewport"
content="width=device-width, initial-scale=1, maximum-scale=1, viewport-fit=cover" content="width=device-width, initial-scale=1, maximum-scale=1, viewport-fit=cover"


@ -112,6 +112,7 @@ export const importChat = async (
export const getChatList = async ( export const getChatList = async (
token: string = '', token: string = '',
page: number | null = null, page: number | null = null,
include_pinned: boolean = false,
include_folders: boolean = false include_folders: boolean = false
) => { ) => {
let error = null; let error = null;
@ -125,6 +126,10 @@ export const getChatList = async (
searchParams.append('include_folders', 'true'); searchParams.append('include_folders', 'true');
} }
if (include_pinned) {
searchParams.append('include_pinned', 'true');
}
const res = await fetch(`${WEBUI_API_BASE_URL}/chats/?${searchParams.toString()}`, { const res = await fetch(`${WEBUI_API_BASE_URL}/chats/?${searchParams.toString()}`, {
method: 'GET', method: 'GET',
headers: { headers: {

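A minimal usage sketch for the new include_pinned flag added to getChatList above; the positional order (token, page, include_pinned, include_folders) follows the updated signature, while the import path, page number, and logging are illustrative assumptions:

```typescript
import { getChatList } from '$lib/apis/chats';

// Fetch page 1 of the chat list, asking the backend to include pinned chats
// but not folders (third and fourth positional arguments of the new signature).
const chats = await getChatList(localStorage.token, 1, true, false);
console.log('chat list with pinned chats:', chats);
```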

@ -63,6 +63,10 @@ export const uploadFile = async (token: string, file: File, metadata?: object |
console.error(data.error); console.error(data.error);
res.error = data.error; res.error = data.error;
} }
if (res?.data) {
res.data = data;
}
} }
} }
} }


@ -494,7 +494,10 @@ export const executeToolServer = async (
headers headers
}; };
if (['post', 'put', 'patch'].includes(httpMethod.toLowerCase()) && operation.requestBody) { if (
['post', 'put', 'patch', 'delete'].includes(httpMethod.toLowerCase()) &&
operation.requestBody
) {
requestOptions.body = JSON.stringify(bodyParams); requestOptions.body = JSON.stringify(bodyParams);
} }
@ -1398,6 +1401,33 @@ export const getChangelog = async () => {
return res; return res;
}; };
export const getVersion = async (token: string) => {
let error = null;
const res = await fetch(`${WEBUI_BASE_URL}/api/version`, {
method: 'GET',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${token}`
}
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.catch((err) => {
console.error(err);
error = err;
return null;
});
if (error) {
throw error;
}
return res?.version ?? null;
};
export const getVersionUpdates = async (token: string) => { export const getVersionUpdates = async (token: string) => {
let error = null; let error = null;

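A short sketch of calling the getVersion helper introduced above; the import path is an assumption based on the surrounding file, and the helper resolves to the version string from /api/version or null when the field is absent:

```typescript
import { getVersion } from '$lib/apis';

// Query the backend build version; endpoint errors are re-thrown by the helper.
const version = await getVersion(localStorage.token);
console.log('backend version:', version ?? 'unknown');
```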

@ -1,9 +1,9 @@
import { WEBUI_API_BASE_URL } from '$lib/constants'; import { WEBUI_API_BASE_URL } from '$lib/constants';
export const getModels = async (token: string = '') => { export const getModelItems = async (token: string = '') => {
let error = null; let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/models/`, { const res = await fetch(`${WEBUI_API_BASE_URL}/models/list`, {
method: 'GET', method: 'GET',
headers: { headers: {
Accept: 'application/json', Accept: 'application/json',

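Since getModels was renamed to getModelItems and now targets /models/list, existing call sites need a matching one-line update; a hedged sketch of the new call, with the import path assumed from the file being changed:

```typescript
import { getModelItems } from '$lib/apis/models';

// Same token-based call as before, now returning the items served by /models/list.
const items = await getModelItems(localStorage.token);
console.log('model items:', items);
```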

@ -180,38 +180,3 @@ export const downloadDatabase = async (token: string) => {
} }
}; };
export const downloadLiteLLMConfig = async (token: string) => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/utils/litellm/config`, {
method: 'GET',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${token}`
}
})
.then(async (response) => {
if (!response.ok) {
throw await response.json();
}
return response.blob();
})
.then((blob) => {
const url = window.URL.createObjectURL(blob);
const a = document.createElement('a');
a.href = url;
a.download = 'config.yaml';
document.body.appendChild(a);
a.click();
window.URL.revokeObjectURL(url);
})
.catch((err) => {
console.error(err);
error = err.detail;
return null;
});
if (error) {
throw error;
}
};


@ -158,6 +158,7 @@
if (res) { if (res) {
toast.success($i18n.t('Function deleted successfully')); toast.success($i18n.t('Function deleted successfully'));
functions = functions.filter((f) => f.id !== func.id);
_functions.set(await getFunctions(localStorage.token)); _functions.set(await getFunctions(localStorage.token));
models.set( models.set(


@ -50,6 +50,9 @@
let STT_AZURE_BASE_URL = ''; let STT_AZURE_BASE_URL = '';
let STT_AZURE_MAX_SPEAKERS = ''; let STT_AZURE_MAX_SPEAKERS = '';
let STT_DEEPGRAM_API_KEY = ''; let STT_DEEPGRAM_API_KEY = '';
let STT_MISTRAL_API_KEY = '';
let STT_MISTRAL_API_BASE_URL = '';
let STT_MISTRAL_USE_CHAT_COMPLETIONS = false;
let STT_WHISPER_MODEL_LOADING = false; let STT_WHISPER_MODEL_LOADING = false;
@ -135,7 +138,10 @@
AZURE_REGION: STT_AZURE_REGION, AZURE_REGION: STT_AZURE_REGION,
AZURE_LOCALES: STT_AZURE_LOCALES, AZURE_LOCALES: STT_AZURE_LOCALES,
AZURE_BASE_URL: STT_AZURE_BASE_URL, AZURE_BASE_URL: STT_AZURE_BASE_URL,
AZURE_MAX_SPEAKERS: STT_AZURE_MAX_SPEAKERS AZURE_MAX_SPEAKERS: STT_AZURE_MAX_SPEAKERS,
MISTRAL_API_KEY: STT_MISTRAL_API_KEY,
MISTRAL_API_BASE_URL: STT_MISTRAL_API_BASE_URL,
MISTRAL_USE_CHAT_COMPLETIONS: STT_MISTRAL_USE_CHAT_COMPLETIONS
} }
}); });
@ -184,6 +190,9 @@
STT_AZURE_BASE_URL = res.stt.AZURE_BASE_URL; STT_AZURE_BASE_URL = res.stt.AZURE_BASE_URL;
STT_AZURE_MAX_SPEAKERS = res.stt.AZURE_MAX_SPEAKERS; STT_AZURE_MAX_SPEAKERS = res.stt.AZURE_MAX_SPEAKERS;
STT_DEEPGRAM_API_KEY = res.stt.DEEPGRAM_API_KEY; STT_DEEPGRAM_API_KEY = res.stt.DEEPGRAM_API_KEY;
STT_MISTRAL_API_KEY = res.stt.MISTRAL_API_KEY;
STT_MISTRAL_API_BASE_URL = res.stt.MISTRAL_API_BASE_URL;
STT_MISTRAL_USE_CHAT_COMPLETIONS = res.stt.MISTRAL_USE_CHAT_COMPLETIONS;
} }
await getVoices(); await getVoices();
@ -201,7 +210,7 @@
<div class=" space-y-3 overflow-y-scroll scrollbar-hidden h-full"> <div class=" space-y-3 overflow-y-scroll scrollbar-hidden h-full">
<div class="flex flex-col gap-3"> <div class="flex flex-col gap-3">
<div> <div>
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Speech-to-Text')}</div> <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Speech-to-Text')}</div>
<hr class=" border-gray-100 dark:border-gray-850 my-2" /> <hr class=" border-gray-100 dark:border-gray-850 my-2" />
@ -235,6 +244,7 @@
<option value="web">{$i18n.t('Web API')}</option> <option value="web">{$i18n.t('Web API')}</option>
<option value="deepgram">{$i18n.t('Deepgram')}</option> <option value="deepgram">{$i18n.t('Deepgram')}</option>
<option value="azure">{$i18n.t('Azure AI Speech')}</option> <option value="azure">{$i18n.t('Azure AI Speech')}</option>
<option value="mistral">{$i18n.t('MistralAI')}</option>
</select> </select>
</div> </div>
</div> </div>
@ -367,6 +377,67 @@
</div> </div>
</div> </div>
</div> </div>
{:else if STT_ENGINE === 'mistral'}
<div>
<div class="mt-1 flex gap-2 mb-1">
<input
class="flex-1 w-full bg-transparent outline-hidden"
placeholder={$i18n.t('API Base URL')}
bind:value={STT_MISTRAL_API_BASE_URL}
required
/>
<SensitiveInput placeholder={$i18n.t('API Key')} bind:value={STT_MISTRAL_API_KEY} />
</div>
</div>
<hr class="border-gray-100 dark:border-gray-850 my-2" />
<div>
<div class=" mb-1.5 text-xs font-medium">{$i18n.t('STT Model')}</div>
<div class="flex w-full">
<div class="flex-1">
<input
class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden"
bind:value={STT_MODEL}
placeholder="voxtral-mini-latest"
/>
</div>
</div>
<div class="mt-2 mb-1 text-xs text-gray-400 dark:text-gray-500">
{$i18n.t('Leave empty to use the default model (voxtral-mini-latest).')}
<a
class=" hover:underline dark:text-gray-200 text-gray-800"
href="https://docs.mistral.ai/capabilities/audio_transcription"
target="_blank"
>
{$i18n.t('Learn more about Voxtral transcription.')}
</a>
</div>
</div>
<hr class="border-gray-100 dark:border-gray-850 my-2" />
<div>
<div class="flex items-center justify-between mb-2">
<div class="text-xs font-medium">{$i18n.t('Use Chat Completions API')}</div>
<label class="relative inline-flex items-center cursor-pointer">
<input
type="checkbox"
bind:checked={STT_MISTRAL_USE_CHAT_COMPLETIONS}
class="sr-only peer"
/>
<div
class="w-9 h-5 bg-gray-200 peer-focus:outline-none peer-focus:ring-2 peer-focus:ring-blue-300 dark:peer-focus:ring-blue-800 rounded-full peer dark:bg-gray-700 peer-checked:after:translate-x-full peer-checked:after:border-white after:content-[''] after:absolute after:top-[2px] after:left-[2px] after:bg-white after:border-gray-300 after:border after:rounded-full after:h-4 after:w-4 after:transition-all dark:border-gray-600 peer-checked:bg-blue-600"
></div>
</label>
</div>
<div class="text-xs text-gray-400 dark:text-gray-500">
{$i18n.t(
'Use /v1/chat/completions endpoint instead of /v1/audio/transcriptions for potentially better accuracy.'
)}
</div>
</div>
{:else if STT_ENGINE === ''} {:else if STT_ENGINE === ''}
<div> <div>
<div class=" mb-1.5 text-xs font-medium">{$i18n.t('STT Model')}</div> <div class=" mb-1.5 text-xs font-medium">{$i18n.t('STT Model')}</div>
@ -427,7 +498,7 @@
</div> </div>
<div> <div>
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Text-to-Speech')}</div> <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Text-to-Speech')}</div>
<hr class=" border-gray-100 dark:border-gray-850 my-2" /> <hr class=" border-gray-100 dark:border-gray-850 my-2" />

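For reference, an illustrative sketch of the Mistral speech-to-text fields wired up above; the key names come from the diff, while the wrapper object, the base URL, and all values are placeholders rather than documented defaults:

```typescript
// Assumed shape of the STT portion of the audio config payload; adjust to the real API.
const sttMistralSettings = {
	ENGINE: 'mistral', // matches the new <option value="mistral"> in the engine selector
	MODEL: 'voxtral-mini-latest', // empty string falls back to this default per the UI hint
	MISTRAL_API_BASE_URL: 'https://api.mistral.ai/v1', // placeholder, not a documented default
	MISTRAL_API_KEY: '<your-api-key>',
	MISTRAL_USE_CHAT_COMPLETIONS: false // true routes audio through /v1/chat/completions instead
};

console.log(sttMistralSettings);
```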

@ -41,7 +41,7 @@
{#if config} {#if config}
<div> <div>
<div class="mb-3.5"> <div class="mb-3.5">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('General')}</div> <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
<hr class=" border-gray-100 dark:border-gray-850 my-2" /> <hr class=" border-gray-100 dark:border-gray-850 my-2" />
@ -164,7 +164,7 @@
</div> </div>
<div class="mb-3.5"> <div class="mb-3.5">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Code Interpreter')}</div> <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Code Interpreter')}</div>
<hr class=" border-gray-100 dark:border-gray-850 my-2" /> <hr class=" border-gray-100 dark:border-gray-850 my-2" />


@ -219,7 +219,7 @@
<div class=" overflow-y-scroll scrollbar-hidden h-full"> <div class=" overflow-y-scroll scrollbar-hidden h-full">
{#if ENABLE_OPENAI_API !== null && ENABLE_OLLAMA_API !== null && connectionsConfig !== null} {#if ENABLE_OPENAI_API !== null && ENABLE_OLLAMA_API !== null && connectionsConfig !== null}
<div class="mb-3.5"> <div class="mb-3.5">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('General')}</div> <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
<hr class=" border-gray-100 dark:border-gray-850 my-2" /> <hr class=" border-gray-100 dark:border-gray-850 my-2" />


@ -2,7 +2,7 @@
import fileSaver from 'file-saver'; import fileSaver from 'file-saver';
const { saveAs } = fileSaver; const { saveAs } = fileSaver;
import { downloadDatabase, downloadLiteLLMConfig } from '$lib/apis/utils'; import { downloadDatabase } from '$lib/apis/utils';
import { onMount, getContext } from 'svelte'; import { onMount, getContext } from 'svelte';
import { config, user } from '$lib/stores'; import { config, user } from '$lib/stores';
import { toast } from 'svelte-sonner'; import { toast } from 'svelte-sonner';


@ -171,14 +171,6 @@
return; return;
} }
if (
RAGConfig.CONTENT_EXTRACTION_ENGINE === 'datalab_marker' &&
!RAGConfig.DATALAB_MARKER_API_KEY
) {
toast.error($i18n.t('Datalab Marker API Key required.'));
return;
}
if ( if (
RAGConfig.CONTENT_EXTRACTION_ENGINE === 'datalab_marker' && RAGConfig.CONTENT_EXTRACTION_ENGINE === 'datalab_marker' &&
RAGConfig.DATALAB_MARKER_ADDITIONAL_CONFIG && RAGConfig.DATALAB_MARKER_ADDITIONAL_CONFIG &&
@ -207,10 +199,28 @@
return; return;
} }
if (
RAGConfig.CONTENT_EXTRACTION_ENGINE === 'mineru' &&
RAGConfig.MINERU_API_MODE === 'cloud' &&
RAGConfig.MINERU_API_KEY === ''
) {
toast.error($i18n.t('MinerU API Key required for Cloud API mode.'));
return;
}
if (!RAGConfig.BYPASS_EMBEDDING_AND_RETRIEVAL) { if (!RAGConfig.BYPASS_EMBEDDING_AND_RETRIEVAL) {
await embeddingModelUpdateHandler(); await embeddingModelUpdateHandler();
} }
if (RAGConfig.MINERU_PARAMS) {
try {
JSON.parse(RAGConfig.MINERU_PARAMS);
} catch (e) {
toast.error($i18n.t('Invalid JSON format in MinerU Parameters'));
return;
}
}
const res = await updateRAGConfig(localStorage.token, { const res = await updateRAGConfig(localStorage.token, {
...RAGConfig, ...RAGConfig,
ALLOWED_FILE_EXTENSIONS: RAGConfig.ALLOWED_FILE_EXTENSIONS.split(',') ALLOWED_FILE_EXTENSIONS: RAGConfig.ALLOWED_FILE_EXTENSIONS.split(',')
@ -219,7 +229,13 @@
DOCLING_PICTURE_DESCRIPTION_LOCAL: JSON.parse( DOCLING_PICTURE_DESCRIPTION_LOCAL: JSON.parse(
RAGConfig.DOCLING_PICTURE_DESCRIPTION_LOCAL || '{}' RAGConfig.DOCLING_PICTURE_DESCRIPTION_LOCAL || '{}'
), ),
DOCLING_PICTURE_DESCRIPTION_API: JSON.parse(RAGConfig.DOCLING_PICTURE_DESCRIPTION_API || '{}') DOCLING_PICTURE_DESCRIPTION_API: JSON.parse(
RAGConfig.DOCLING_PICTURE_DESCRIPTION_API || '{}'
),
MINERU_PARAMS:
typeof RAGConfig.MINERU_PARAMS === 'string' && RAGConfig.MINERU_PARAMS.trim() !== ''
? JSON.parse(RAGConfig.MINERU_PARAMS)
: {}
}); });
dispatch('save'); dispatch('save');
}; };
@ -260,6 +276,11 @@
2 2
); );
config.MINERU_PARAMS =
typeof config.MINERU_PARAMS === 'object'
? JSON.stringify(config.MINERU_PARAMS ?? {}, null, 2)
: config.MINERU_PARAMS;
RAGConfig = config; RAGConfig = config;
}); });
</script> </script>
@ -316,7 +337,7 @@
<div class=" space-y-2.5 overflow-y-scroll scrollbar-hidden h-full pr-1.5"> <div class=" space-y-2.5 overflow-y-scroll scrollbar-hidden h-full pr-1.5">
<div class=""> <div class="">
<div class="mb-3"> <div class="mb-3">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('General')}</div> <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
<hr class=" border-gray-100 dark:border-gray-850 my-2" /> <hr class=" border-gray-100 dark:border-gray-850 my-2" />
@ -337,6 +358,7 @@
<option value="datalab_marker">{$i18n.t('Datalab Marker API')}</option> <option value="datalab_marker">{$i18n.t('Datalab Marker API')}</option>
<option value="document_intelligence">{$i18n.t('Document Intelligence')}</option> <option value="document_intelligence">{$i18n.t('Document Intelligence')}</option>
<option value="mistral_ocr">{$i18n.t('Mistral OCR')}</option> <option value="mistral_ocr">{$i18n.t('Mistral OCR')}</option>
<option value="mineru">{$i18n.t('MinerU')}</option>
</select> </select>
</div> </div>
</div> </div>
@ -723,7 +745,7 @@
</div> </div>
<div class=""> <div class="">
<Textarea <Textarea
bind:value={RAGConfig.DOCLING_PARAMETERS} bind:value={RAGConfig.DOCLING_PARAMS}
placeholder={$i18n.t('Enter additional parameters in JSON format')} placeholder={$i18n.t('Enter additional parameters in JSON format')}
minSize={100} minSize={100}
/> />
@ -744,11 +766,86 @@
</div> </div>
{:else if RAGConfig.CONTENT_EXTRACTION_ENGINE === 'mistral_ocr'} {:else if RAGConfig.CONTENT_EXTRACTION_ENGINE === 'mistral_ocr'}
<div class="my-0.5 flex gap-2 pr-2"> <div class="my-0.5 flex gap-2 pr-2">
<input
class="flex-1 w-full text-sm bg-transparent outline-hidden"
placeholder={$i18n.t('Enter Mistral API Base URL')}
bind:value={RAGConfig.MISTRAL_OCR_API_BASE_URL}
/>
<SensitiveInput <SensitiveInput
placeholder={$i18n.t('Enter Mistral API Key')} placeholder={$i18n.t('Enter Mistral API Key')}
bind:value={RAGConfig.MISTRAL_OCR_API_KEY} bind:value={RAGConfig.MISTRAL_OCR_API_KEY}
/> />
</div> </div>
{:else if RAGConfig.CONTENT_EXTRACTION_ENGINE === 'mineru'}
<!-- API Mode Selection -->
<div class="flex w-full mt-2">
<div class="flex-1 flex justify-between">
<div class="self-center text-xs font-medium">
{$i18n.t('API Mode')}
</div>
<select
class="dark:bg-gray-900 w-fit pr-8 rounded-sm px-2 text-xs bg-transparent outline-hidden"
bind:value={RAGConfig.MINERU_API_MODE}
on:change={() => {
// Auto-update URL when switching modes if it's empty or matches the opposite mode's default
const cloudUrl = 'https://mineru.net/api/v4';
const localUrl = 'http://localhost:8000';
if (RAGConfig.MINERU_API_MODE === 'cloud') {
if (!RAGConfig.MINERU_API_URL || RAGConfig.MINERU_API_URL === localUrl) {
RAGConfig.MINERU_API_URL = cloudUrl;
}
} else {
if (!RAGConfig.MINERU_API_URL || RAGConfig.MINERU_API_URL === cloudUrl) {
RAGConfig.MINERU_API_URL = localUrl;
}
}
}}
>
<option value="local">{$i18n.t('local')}</option>
<option value="cloud">{$i18n.t('cloud')}</option>
</select>
</div>
</div>
<!-- API URL -->
<div class="flex w-full mt-2">
<input
class="flex-1 w-full text-sm bg-transparent outline-hidden"
placeholder={RAGConfig.MINERU_API_MODE === 'cloud'
? $i18n.t('https://mineru.net/api/v4')
: $i18n.t('http://localhost:8000')}
bind:value={RAGConfig.MINERU_API_URL}
/>
</div>
<div class="flex w-full mt-2">
<SensitiveInput
placeholder={$i18n.t('Enter MinerU API Key')}
bind:value={RAGConfig.MINERU_API_KEY}
/>
</div>
<!-- Parameters -->
<div class="flex flex-col justify-between w-full mt-2">
<div class="text-xs font-medium">
<Tooltip
content={$i18n.t(
'Advanced parameters for MinerU parsing (enable_ocr, enable_formula, enable_table, language, model_version, page_ranges)'
)}
placement="top-start"
>
{$i18n.t('Parameters')}
</Tooltip>
</div>
<div class="mt-1.5">
<Textarea
bind:value={RAGConfig.MINERU_PARAMS}
placeholder={`{\n "enable_ocr": false,\n "enable_formula": true,\n "enable_table": true,\n "language": "en",\n "model_version": "pipeline",\n "page_ranges": ""\n}`}
minSize={100}
/>
</div>
</div>
{/if} {/if}
</div> </div>
@ -829,7 +926,7 @@
{#if !RAGConfig.BYPASS_EMBEDDING_AND_RETRIEVAL} {#if !RAGConfig.BYPASS_EMBEDDING_AND_RETRIEVAL}
<div class="mb-3"> <div class="mb-3">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Embedding')}</div> <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Embedding')}</div>
<hr class=" border-gray-100 dark:border-gray-850 my-2" /> <hr class=" border-gray-100 dark:border-gray-850 my-2" />
@ -978,7 +1075,7 @@
<div class="mt-1 mb-1 text-xs text-gray-400 dark:text-gray-500"> <div class="mt-1 mb-1 text-xs text-gray-400 dark:text-gray-500">
{$i18n.t( {$i18n.t(
'Warning: If you update or change your embedding model, you will need to re-import all documents.' 'After updating or changing the embedding model, you must reindex the knowledge base for the changes to take effect. You can do this using the "Reindex" button below.'
)} )}
</div> </div>
</div> </div>
@ -1004,7 +1101,7 @@
</div> </div>
<div class="mb-3"> <div class="mb-3">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Retrieval')}</div> <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Retrieval')}</div>
<hr class=" border-gray-100 dark:border-gray-850 my-2" /> <hr class=" border-gray-100 dark:border-gray-850 my-2" />
@ -1247,7 +1344,7 @@
{/if} {/if}
<div class="mb-3"> <div class="mb-3">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Files')}</div> <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Files')}</div>
<hr class=" border-gray-100 dark:border-gray-850 my-2" /> <hr class=" border-gray-100 dark:border-gray-850 my-2" />
@ -1359,7 +1456,7 @@
</div> </div>
<div class="mb-3"> <div class="mb-3">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Integration')}</div> <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Integration')}</div>
<hr class=" border-gray-100 dark:border-gray-850 my-2" /> <hr class=" border-gray-100 dark:border-gray-850 my-2" />
@ -1379,7 +1476,7 @@
</div> </div>
<div class="mb-3"> <div class="mb-3">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Danger Zone')}</div> <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Danger Zone')}</div>
<hr class=" border-gray-100 dark:border-gray-850 my-2" /> <hr class=" border-gray-100 dark:border-gray-850 my-2" />

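A minimal sketch of the MINERU_PARAMS handling added above, assuming the textarea holds a JSON string that is validated before save and parsed into an object for the backend; the example values mirror the placeholder text in the component:

```typescript
// Hypothetical raw value as typed into the Parameters textarea.
const raw = `{
	"enable_ocr": false,
	"enable_formula": true,
	"enable_table": true,
	"language": "en",
	"model_version": "pipeline",
	"page_ranges": ""
}`;

let mineruParams: Record<string, unknown> = {};
try {
	// Mirrors the save handler: non-empty strings must parse as JSON, empty means {}.
	mineruParams = raw.trim() !== '' ? JSON.parse(raw) : {};
} catch (e) {
	// The component surfaces this case as a toast: 'Invalid JSON format in MinerU Parameters'.
	throw e;
}

console.log(mineruParams);
```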

@ -104,7 +104,7 @@
{#if evaluationConfig !== null} {#if evaluationConfig !== null}
<div class=""> <div class="">
<div class="mb-3"> <div class="mb-3">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('General')}</div> <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
<hr class=" border-gray-100 dark:border-gray-850 my-2" /> <hr class=" border-gray-100 dark:border-gray-850 my-2" />
@ -119,7 +119,7 @@
{#if evaluationConfig.ENABLE_EVALUATION_ARENA_MODELS} {#if evaluationConfig.ENABLE_EVALUATION_ARENA_MODELS}
<div class="mb-3"> <div class="mb-3">
<div class=" mb-2.5 text-base font-medium flex justify-between items-center"> <div class=" mt-0.5 mb-2.5 text-base font-medium flex justify-between items-center">
<div> <div>
{$i18n.t('Manage')} {$i18n.t('Manage')}
</div> </div>


@ -118,11 +118,11 @@
updateHandler(); updateHandler();
}} }}
> >
<div class="mt-0.5 space-y-3 overflow-y-scroll scrollbar-hidden h-full"> <div class="space-y-3 overflow-y-scroll scrollbar-hidden h-full">
{#if adminConfig !== null} {#if adminConfig !== null}
<div class=""> <div class="">
<div class="mb-3.5"> <div class="mb-3.5">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('General')}</div> <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
<hr class=" border-gray-100 dark:border-gray-850 my-2" /> <hr class=" border-gray-100 dark:border-gray-850 my-2" />
@ -280,7 +280,7 @@
</div> </div>
<div class="mb-3"> <div class="mb-3">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Authentication')}</div> <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Authentication')}</div>
<hr class=" border-gray-100 dark:border-gray-850 my-2" /> <hr class=" border-gray-100 dark:border-gray-850 my-2" />
@ -399,6 +399,26 @@
>{$i18n.t("'s', 'm', 'h', 'd', 'w' or '-1' for no expiration.")}</span >{$i18n.t("'s', 'm', 'h', 'd', 'w' or '-1' for no expiration.")}</span
> >
</div> </div>
{#if adminConfig.JWT_EXPIRES_IN === '-1'}
<div class="mt-2 text-xs">
<div
class=" bg-yellow-500/20 text-yellow-700 dark:text-yellow-200 rounded-lg px-3 py-2"
>
<div>
<span class=" font-medium">{$i18n.t('Warning')}:</span>
<span
><a
href="https://docs.openwebui.com/getting-started/env-configuration#jwt_expires_in"
target="_blank"
class=" underline"
>{$i18n.t('No expiration can pose security risks.')}
</a></span
>
</div>
</div>
</div>
{/if}
</div> </div>
<div class=" space-y-3"> <div class=" space-y-3">
@ -617,7 +637,7 @@
</div> </div>
<div class="mb-3"> <div class="mb-3">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Features')}</div> <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Features')}</div>
<hr class=" border-gray-100 dark:border-gray-850 my-2" /> <hr class=" border-gray-100 dark:border-gray-850 my-2" />

File diff suppressed because it is too large.


@ -111,7 +111,7 @@
> >
<div class=" overflow-y-scroll scrollbar-hidden h-full pr-1.5"> <div class=" overflow-y-scroll scrollbar-hidden h-full pr-1.5">
<div class="mb-3.5"> <div class="mb-3.5">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Tasks')}</div> <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Tasks')}</div>
<hr class=" border-gray-100 dark:border-gray-850 my-2" /> <hr class=" border-gray-100 dark:border-gray-850 my-2" />
@ -384,7 +384,7 @@
</div> </div>
<div class="mb-3.5"> <div class="mb-3.5">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('UI')}</div> <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('UI')}</div>
<hr class=" border-gray-100 dark:border-gray-850 my-2" /> <hr class=" border-gray-100 dark:border-gray-850 my-2" />


@ -313,7 +313,7 @@
<div class=" my-2 mb-5" id="model-list"> <div class=" my-2 mb-5" id="model-list">
{#if models.length > 0} {#if models.length > 0}
{#each filteredModels as model, modelIdx (model.id)} {#each filteredModels as model, modelIdx (`${model.id}-${modelIdx}`)}
<div <div
class=" flex space-x-4 cursor-pointer w-full px-3 py-2 dark:hover:bg-white/5 hover:bg-black/5 rounded-lg transition {model class=" flex space-x-4 cursor-pointer w-full px-3 py-2 dark:hover:bg-white/5 hover:bg-black/5 rounded-lg transition {model
?.meta?.hidden ?.meta?.hidden


@ -59,7 +59,7 @@
{#if servers !== null} {#if servers !== null}
<div class=""> <div class="">
<div class="mb-3"> <div class="mb-3">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('General')}</div> <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
<hr class=" border-gray-100 dark:border-gray-850 my-2" /> <hr class=" border-gray-100 dark:border-gray-850 my-2" />


@ -95,7 +95,7 @@
{#if webConfig} {#if webConfig}
<div class=""> <div class="">
<div class="mb-3"> <div class="mb-3">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('General')}</div> <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
<hr class=" border-gray-100 dark:border-gray-850 my-2" /> <hr class=" border-gray-100 dark:border-gray-850 my-2" />
@ -724,7 +724,7 @@
</div> </div>
<div class="mb-3"> <div class="mb-3">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Loader')}</div> <div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Loader')}</div>
<hr class=" border-gray-100 dark:border-gray-850 my-2" /> <hr class=" border-gray-100 dark:border-gray-850 my-2" />


@ -112,17 +112,6 @@
}} }}
/> />
{#key selectedUser}
<EditUserModal
bind:show={showEditUserModal}
{selectedUser}
sessionUser={$user}
on:save={async () => {
getUserList();
}}
/>
{/key}
<AddUserModal <AddUserModal
bind:show={showAddUserModal} bind:show={showAddUserModal}
on:save={async () => { on:save={async () => {
@ -130,6 +119,15 @@
}} }}
/> />
<EditUserModal
bind:show={showEditUserModal}
{selectedUser}
sessionUser={$user}
on:save={async () => {
getUserList();
}}
/>
{#if selectedUser} {#if selectedUser}
<UserChatsModal bind:show={showUserChatsModal} user={selectedUser} /> <UserChatsModal bind:show={showUserChatsModal} user={selectedUser} />
{/if} {/if}


@ -22,6 +22,18 @@
export let selectedUser; export let selectedUser;
export let sessionUser; export let sessionUser;
$: if (show) {
init();
}
const init = () => {
if (selectedUser) {
_user = selectedUser;
_user.password = '';
loadUserGroups();
}
};
let _user = { let _user = {
profile_image_url: '', profile_image_url: '',
role: 'pending', role: 'pending',
@ -52,14 +64,6 @@
return null; return null;
}); });
}; };
onMount(() => {
if (selectedUser) {
_user = selectedUser;
_user.password = '';
loadUserGroups();
}
});
</script> </script>
<Modal size="sm" bind:show> <Modal size="sm" bind:show>


@ -19,6 +19,7 @@
<nav class="sticky top-0 z-30 w-full px-1.5 py-1.5 -mb-8 flex items-center drag-region"> <nav class="sticky top-0 z-30 w-full px-1.5 py-1.5 -mb-8 flex items-center drag-region">
<div <div
id="navbar-bg-gradient-to-b"
class=" bg-linear-to-b via-50% from-white via-white to-transparent dark:from-gray-900 dark:via-gray-900 dark:to-transparent pointer-events-none absolute inset-0 -bottom-7 z-[-1]" class=" bg-linear-to-b via-50% from-white via-white to-transparent dark:from-gray-900 dark:via-gray-900 dark:to-transparent pointer-events-none absolute inset-0 -bottom-7 z-[-1]"
></div> ></div>


@ -4,7 +4,14 @@
const i18n = getContext('i18n'); const i18n = getContext('i18n');
const dispatch = createEventDispatcher(); const dispatch = createEventDispatcher();
import { artifactCode, chatId, settings, showArtifacts, showControls } from '$lib/stores'; import {
artifactCode,
chatId,
settings,
showArtifacts,
showControls,
artifactContents
} from '$lib/stores';
import { copyToClipboard, createMessagesList } from '$lib/utils'; import { copyToClipboard, createMessagesList } from '$lib/utils';
import XMark from '../icons/XMark.svelte'; import XMark from '../icons/XMark.svelte';
@ -15,8 +22,6 @@
import Download from '../icons/Download.svelte'; import Download from '../icons/Download.svelte';
export let overlay = false; export let overlay = false;
export let history;
let messages = [];
let contents: Array<{ type: string; content: string }> = []; let contents: Array<{ type: string; content: string }> = [];
let selectedContentIdx = 0; let selectedContentIdx = 0;
@ -24,121 +29,11 @@
let copied = false; let copied = false;
let iframeElement: HTMLIFrameElement; let iframeElement: HTMLIFrameElement;
$: if (history) {
messages = createMessagesList(history, history.currentId);
getContents();
} else {
messages = [];
getContents();
}
const getContents = () => {
contents = [];
messages.forEach((message) => {
if (message?.role !== 'user' && message?.content) {
const codeBlockContents = message.content.match(/```[\s\S]*?```/g);
let codeBlocks = [];
if (codeBlockContents) {
codeBlockContents.forEach((block) => {
const lang = block.split('\n')[0].replace('```', '').trim().toLowerCase();
const code = block.replace(/```[\s\S]*?\n/, '').replace(/```$/, '');
codeBlocks.push({ lang, code });
});
}
let htmlContent = '';
let cssContent = '';
let jsContent = '';
codeBlocks.forEach((block) => {
const { lang, code } = block;
if (lang === 'html') {
htmlContent += code + '\n';
} else if (lang === 'css') {
cssContent += code + '\n';
} else if (lang === 'javascript' || lang === 'js') {
jsContent += code + '\n';
}
});
const inlineHtml = message.content.match(/<html>[\s\S]*?<\/html>/gi);
const inlineCss = message.content.match(/<style>[\s\S]*?<\/style>/gi);
const inlineJs = message.content.match(/<script>[\s\S]*?<\/script>/gi);
if (inlineHtml) {
inlineHtml.forEach((block) => {
const content = block.replace(/<\/?html>/gi, ''); // Remove <html> tags
htmlContent += content + '\n';
});
}
if (inlineCss) {
inlineCss.forEach((block) => {
const content = block.replace(/<\/?style>/gi, ''); // Remove <style> tags
cssContent += content + '\n';
});
}
if (inlineJs) {
inlineJs.forEach((block) => {
const content = block.replace(/<\/?script>/gi, ''); // Remove <script> tags
jsContent += content + '\n';
});
}
if (htmlContent || cssContent || jsContent) {
const renderedContent = `
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<${''}style>
body {
background-color: white; /* Ensure the iframe has a white background */
}
${cssContent}
</${''}style>
</head>
<body>
${htmlContent}
<${''}script>
${jsContent}
</${''}script>
</body>
</html>
`;
contents = [...contents, { type: 'iframe', content: renderedContent }];
} else {
// Check for SVG content
for (const block of codeBlocks) {
if (block.lang === 'svg' || (block.lang === 'xml' && block.code.includes('<svg'))) {
contents = [...contents, { type: 'svg', content: block.code }];
}
}
}
}
});
if (contents.length === 0) {
showControls.set(false);
showArtifacts.set(false);
}
selectedContentIdx = contents ? contents.length - 1 : 0;
};
function navigateContent(direction: 'prev' | 'next') { function navigateContent(direction: 'prev' | 'next') {
console.log(selectedContentIdx);
selectedContentIdx = selectedContentIdx =
direction === 'prev' direction === 'prev'
? Math.max(selectedContentIdx - 1, 0) ? Math.max(selectedContentIdx - 1, 0)
: Math.min(selectedContentIdx + 1, contents.length - 1); : Math.min(selectedContentIdx + 1, contents.length - 1);
console.log(selectedContentIdx);
} }
const iframeLoadHandler = () => { const iframeLoadHandler = () => {
@ -201,6 +96,18 @@
selectedContentIdx = codeIdx !== -1 ? codeIdx : 0; selectedContentIdx = codeIdx !== -1 ? codeIdx : 0;
} }
}); });
artifactContents.subscribe((value) => {
contents = value;
console.log('Artifact contents updated:', contents);
if (contents.length === 0) {
showControls.set(false);
showArtifacts.set(false);
}
selectedContentIdx = contents ? contents.length - 1 : 0;
});
}); });
</script> </script>

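A sketch of the store-based flow introduced above, where the chat component extracts code blocks and publishes them while the artifacts panel only subscribes; the store name and item shape come from the diff, everything else is illustrative:

```typescript
import { writable } from 'svelte/store';

type ArtifactContent = { type: string; content: string };

// Stand-in for the artifactContents store exported from $lib/stores.
const artifactContents = writable<ArtifactContent[]>([]);

// Producer side (Chat component): publish freshly rendered contents.
artifactContents.set([
	{ type: 'iframe', content: '<!DOCTYPE html><html><body>Hello</body></html>' }
]);

// Consumer side (Artifacts panel): react whenever the contents change.
const unsubscribe = artifactContents.subscribe((contents) => {
	console.log('artifact contents updated:', contents.length);
});

unsubscribe();
```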

@ -4,6 +4,7 @@
import { PaneGroup, Pane, PaneResizer } from 'paneforge'; import { PaneGroup, Pane, PaneResizer } from 'paneforge';
import { getContext, onDestroy, onMount, tick } from 'svelte'; import { getContext, onDestroy, onMount, tick } from 'svelte';
import { fade } from 'svelte/transition';
const i18n: Writable<i18nType> = getContext('i18n'); const i18n: Writable<i18nType> = getContext('i18n');
import { goto } from '$app/navigation'; import { goto } from '$app/navigation';
@ -26,6 +27,7 @@
banners, banners,
user, user,
socket, socket,
audioQueue,
showControls, showControls,
showCallOverlay, showCallOverlay,
currentChatPage, currentChatPage,
@ -34,6 +36,7 @@
showOverview, showOverview,
chatTitle, chatTitle,
showArtifacts, showArtifacts,
artifactContents,
tools, tools,
toolServers, toolServers,
functions, functions,
@ -41,6 +44,7 @@
pinnedChats, pinnedChats,
showEmbeds showEmbeds
} from '$lib/stores'; } from '$lib/stores';
import { import {
convertMessagesToHistory, convertMessagesToHistory,
copyToClipboard, copyToClipboard,
@ -48,8 +52,10 @@
createMessagesList, createMessagesList,
getPromptVariables, getPromptVariables,
processDetails, processDetails,
removeAllDetails removeAllDetails,
getCodeBlockContents
} from '$lib/utils'; } from '$lib/utils';
import { AudioQueue } from '$lib/utils/audio';
import { import {
createNewChat, createNewChat,
@ -75,8 +81,8 @@
import { getTools } from '$lib/apis/tools'; import { getTools } from '$lib/apis/tools';
import { uploadFile } from '$lib/apis/files'; import { uploadFile } from '$lib/apis/files';
import { createOpenAITextStream } from '$lib/apis/streaming'; import { createOpenAITextStream } from '$lib/apis/streaming';
import { getFunctions } from '$lib/apis/functions';
import { fade } from 'svelte/transition'; import { updateFolderById } from '$lib/apis/folders';
import Banner from '../common/Banner.svelte'; import Banner from '../common/Banner.svelte';
import MessageInput from '$lib/components/chat/MessageInput.svelte'; import MessageInput from '$lib/components/chat/MessageInput.svelte';
@ -89,9 +95,7 @@
import Spinner from '../common/Spinner.svelte'; import Spinner from '../common/Spinner.svelte';
import Tooltip from '../common/Tooltip.svelte'; import Tooltip from '../common/Tooltip.svelte';
import Sidebar from '../icons/Sidebar.svelte'; import Sidebar from '../icons/Sidebar.svelte';
import { getFunctions } from '$lib/apis/functions';
import Image from '../common/Image.svelte'; import Image from '../common/Image.svelte';
import { updateFolderById } from '$lib/apis/folders';
export let chatIdProp = ''; export let chatIdProp = '';
@ -192,6 +196,8 @@
codeInterpreterEnabled = input.codeInterpreterEnabled; codeInterpreterEnabled = input.codeInterpreterEnabled;
} }
} catch (e) {} } catch (e) {}
} else {
await setDefaults();
} }
const chatInput = document.getElementById('chat-input'); const chatInput = document.getElementById('chat-input');
@ -527,17 +533,28 @@
let showControlsSubscribe = null; let showControlsSubscribe = null;
let selectedFolderSubscribe = null; let selectedFolderSubscribe = null;
const stopAudio = () => {
try {
speechSynthesis.cancel();
$audioQueue.stop();
} catch {}
};
onMount(async () => { onMount(async () => {
loading = true; loading = true;
console.log('mounted'); console.log('mounted');
window.addEventListener('message', onMessageHandler); window.addEventListener('message', onMessageHandler);
$socket?.on('events', chatEventHandler); $socket?.on('events', chatEventHandler);
audioQueue.set(new AudioQueue(document.getElementById('audioElement')));
pageSubscribe = page.subscribe(async (p) => { pageSubscribe = page.subscribe(async (p) => {
if (p.url.pathname === '/') { if (p.url.pathname === '/') {
await tick(); await tick();
initNewChat(); initNewChat();
} }
stopAudio();
}); });
const storageChatInput = sessionStorage.getItem( const storageChatInput = sessionStorage.getItem(
@ -619,6 +636,7 @@
chatIdUnsubscriber?.(); chatIdUnsubscriber?.();
window.removeEventListener('message', onMessageHandler); window.removeEventListener('message', onMessageHandler);
$socket?.off('events', chatEventHandler); $socket?.off('events', chatEventHandler);
$audioQueue?.destroy();
} catch (e) { } catch (e) {
console.error(e); console.error(e);
} }
@ -817,6 +835,63 @@
} }
}; };
$: if (history) {
getContents();
} else {
artifactContents.set([]);
}
const getContents = () => {
const messages = history ? createMessagesList(history, history.currentId) : [];
let contents = [];
messages.forEach((message) => {
if (message?.role !== 'user' && message?.content) {
const {
codeBlocks: codeBlocks,
html: htmlContent,
css: cssContent,
js: jsContent
} = getCodeBlockContents(message.content);
if (htmlContent || cssContent || jsContent) {
const renderedContent = `
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<${''}style>
body {
background-color: white; /* Ensure the iframe has a white background */
}
${cssContent}
</${''}style>
</head>
<body>
${htmlContent}
<${''}script>
${jsContent}
</${''}script>
</body>
</html>
`;
contents = [...contents, { type: 'iframe', content: renderedContent }];
} else {
// Check for SVG content
for (const block of codeBlocks) {
if (block.lang === 'svg' || (block.lang === 'xml' && block.code.includes('<svg'))) {
contents = [...contents, { type: 'svg', content: block.code }];
}
}
}
}
});
artifactContents.set(contents);
};
////////////////////////// //////////////////////////
// Web functions // Web functions
////////////////////////// //////////////////////////
@ -1080,7 +1155,7 @@
}); });
} }
}; };
const chatCompletedHandler = async (chatId, modelId, responseMessageId, messages) => { const chatCompletedHandler = async (_chatId, modelId, responseMessageId, messages) => {
const res = await chatCompleted(localStorage.token, { const res = await chatCompleted(localStorage.token, {
model: modelId, model: modelId,
messages: messages.map((m) => ({ messages: messages.map((m) => ({
@ -1094,7 +1169,7 @@
})), })),
filter_ids: selectedFilterIds.length > 0 ? selectedFilterIds : undefined, filter_ids: selectedFilterIds.length > 0 ? selectedFilterIds : undefined,
model_item: $models.find((m) => m.id === modelId), model_item: $models.find((m) => m.id === modelId),
chat_id: chatId, chat_id: _chatId,
session_id: $socket?.id, session_id: $socket?.id,
id: responseMessageId id: responseMessageId
}).catch((error) => { }).catch((error) => {
@ -1122,9 +1197,9 @@
await tick(); await tick();
if ($chatId == chatId) { if ($chatId == _chatId) {
if (!$temporaryChatEnabled) { if (!$temporaryChatEnabled) {
chat = await updateChatById(localStorage.token, chatId, { chat = await updateChatById(localStorage.token, _chatId, {
models: selectedModels, models: selectedModels,
messages: messages, messages: messages,
history: history, history: history,
@ -1140,7 +1215,7 @@
taskIds = null; taskIds = null;
}; };
const chatActionHandler = async (chatId, actionId, modelId, responseMessageId, event = null) => { const chatActionHandler = async (_chatId, actionId, modelId, responseMessageId, event = null) => {
const messages = createMessagesList(history, responseMessageId); const messages = createMessagesList(history, responseMessageId);
const res = await chatAction(localStorage.token, actionId, { const res = await chatAction(localStorage.token, actionId, {
@ -1155,7 +1230,7 @@
})), })),
...(event ? { event: event } : {}), ...(event ? { event: event } : {}),
model_item: $models.find((m) => m.id === modelId), model_item: $models.find((m) => m.id === modelId),
chat_id: chatId, chat_id: _chatId,
session_id: $socket?.id, session_id: $socket?.id,
id: responseMessageId id: responseMessageId
}).catch((error) => { }).catch((error) => {
@ -1177,9 +1252,9 @@
} }
} }
if ($chatId == chatId) { if ($chatId == _chatId) {
if (!$temporaryChatEnabled) { if (!$temporaryChatEnabled) {
chat = await updateChatById(localStorage.token, chatId, { chat = await updateChatById(localStorage.token, _chatId, {
models: selectedModels, models: selectedModels,
messages: messages, messages: messages,
history: history, history: history,
@ -2288,7 +2363,7 @@
</title> </title>
</svelte:head> </svelte:head>
<audio id="audioElement" src="" style="display: none;" /> <audio id="audioElement" src="" style="display: none;"></audio>
<EventConfirmDialog <EventConfirmDialog
bind:show={showEventConfirmation} bind:show={showEventConfirmation}
@ -2440,7 +2515,7 @@
</div> </div>
</div> </div>
<div class=" pb-2"> <div class=" pb-2 z-10">
<MessageInput <MessageInput
bind:this={messageInput} bind:this={messageInput}
{history} {history}
@ -2570,3 +2645,10 @@
</div> </div>
{/if} {/if}
</div> </div>
<style>
::-webkit-scrollbar {
height: 0.5rem;
width: 0.5rem;
}
</style>
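
For orientation: the Chat.svelte hunk above destructures `codeBlocks`, `html`, `css`, and `js` from `getCodeBlockContents(message.content)` before assembling the artifact iframe. The helper itself lives in `$lib/utils` and is not shown in this diff; the TypeScript below is only a minimal sketch of the contract the hunk appears to assume, with a naive fenced-block scanner standing in for the real implementation.

```ts
// Hypothetical sketch of the getCodeBlockContents contract ($lib/utils).
// The real utility may differ; this only mirrors the fields the diff destructures.
interface CodeBlockInfo {
	lang: string;
	code: string;
}

interface CodeBlockContents {
	codeBlocks: CodeBlockInfo[];
	html: string;
	css: string;
	js: string;
}

// Minimal fenced-block scanner: collects ```lang ... ``` blocks and buckets
// html/css/js content so the caller can assemble an iframe document.
export function getCodeBlockContents(content: string): CodeBlockContents {
	const codeBlocks: CodeBlockInfo[] = [];
	const buckets = { html: '', css: '', js: '' };

	const fence = /```(\w*)\n([\s\S]*?)```/g;
	let match: RegExpExecArray | null;
	while ((match = fence.exec(content)) !== null) {
		const lang = (match[1] || '').toLowerCase();
		const code = match[2];
		codeBlocks.push({ lang, code });

		if (lang === 'html') buckets.html += code;
		else if (lang === 'css') buckets.css += code;
		else if (lang === 'javascript' || lang === 'js') buckets.js += code;
	}

	return { codeBlocks, ...buckets };
}
```

Because empty strings are falsy, the hunk's `if (htmlContent || cssContent || jsContent)` check falls back to the SVG branch whenever no web content was found, which matches the shape above.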

File diff suppressed because it is too large

View file

@ -233,7 +233,7 @@
{/if} {/if}
</Tooltip> </Tooltip>
<Tooltip content={item.description || decodeString(item?.name)} placement="top-start"> <Tooltip content={`${decodeString(item?.name)}`} placement="top-start">
<div class="line-clamp-1 flex-1"> <div class="line-clamp-1 flex-1">
{decodeString(item?.name)} {decodeString(item?.name)}
</div> </div>

View file

@ -56,6 +56,10 @@
fileUploadCapableModels.length === selectedModels.length && fileUploadCapableModels.length === selectedModels.length &&
($user?.role === 'admin' || $user?.permissions?.chat?.file_upload); ($user?.role === 'admin' || $user?.permissions?.chat?.file_upload);
$: if (!fileUploadEnabled && files.length > 0) {
files = [];
}
const detectMobile = () => { const detectMobile = () => {
const userAgent = navigator.userAgent || navigator.vendor || window.opera; const userAgent = navigator.userAgent || navigator.vendor || window.opera;
return /android|iphone|ipad|ipod|windows phone/i.test(userAgent); return /android|iphone|ipad|ipod|windows phone/i.test(userAgent);
@ -199,7 +203,9 @@
className="w-full" className="w-full"
> >
<DropdownMenu.Item <DropdownMenu.Item
class="flex gap-2 items-center px-3 py-1.5 text-sm cursor-pointer hover:bg-gray-50 dark:hover:bg-gray-800 rounded-xl" class="flex gap-2 items-center px-3 py-1.5 text-sm cursor-pointer hover:bg-gray-50 dark:hover:bg-gray-800 rounded-xl {!fileUploadEnabled
? 'opacity-50'
: ''}"
on:click={() => { on:click={() => {
if (fileUploadEnabled) { if (fileUploadEnabled) {
showAttachWebpageModal = true; showAttachWebpageModal = true;

View file

@ -31,7 +31,7 @@
const getItemsPage = async () => { const getItemsPage = async () => {
itemsLoading = true; itemsLoading = true;
let res = await getChatList(localStorage.token, page, true).catch(() => { let res = await getChatList(localStorage.token, page, true, true).catch(() => {
return []; return [];
}); });

View file

@ -42,6 +42,7 @@
export let onShowValves: Function; export let onShowValves: Function;
export let onClose: Function; export let onClose: Function;
export let closeOnOutsideClick = true;
let show = false; let show = false;
let tab = ''; let tab = '';

View file

@ -335,6 +335,7 @@
stopDurationCounter(); stopDurationCounter();
audioChunks = []; audioChunks = [];
visualizerData = Array(VISUALIZER_BUFFER_LENGTH).fill(0);
if (stream) { if (stream) {
const tracks = stream.getTracks(); const tracks = stream.getTracks();

View file

@ -108,7 +108,7 @@
source: _source, source: _source,
document: [document], document: [document],
metadata: metadata ? [metadata] : [], metadata: metadata ? [metadata] : [],
distances: distance !== undefined ? [distance] : undefined distances: distance !== undefined ? [distance] : []
}); });
} }
}); });

View file

@ -6,7 +6,12 @@
import PyodideWorker from '$lib/workers/pyodide.worker?worker'; import PyodideWorker from '$lib/workers/pyodide.worker?worker';
import { executeCode } from '$lib/apis/utils'; import { executeCode } from '$lib/apis/utils';
import { copyToClipboard, renderMermaidDiagram, renderVegaVisualization } from '$lib/utils'; import {
copyToClipboard,
initMermaid,
renderMermaidDiagram,
renderVegaVisualization
} from '$lib/utils';
import 'highlight.js/styles/github-dark.min.css'; import 'highlight.js/styles/github-dark.min.css';
@ -54,8 +59,8 @@
let _token = null; let _token = null;
let mermaidHtml = null; let renderHTML = null;
let vegaHtml = null; let renderError = null;
let highlightedCode = null; let highlightedCode = null;
let executing = false; let executing = false;
@ -323,28 +328,36 @@
}; };
}; };
let mermaid = null;
const renderMermaid = async (code) => {
if (!mermaid) {
mermaid = await initMermaid();
}
return await renderMermaidDiagram(mermaid, code);
};
const render = async () => { const render = async () => {
onUpdate(token); onUpdate(token);
if (lang === 'mermaid' && (token?.raw ?? '').slice(-4).includes('```')) { if (lang === 'mermaid' && (token?.raw ?? '').slice(-4).includes('```')) {
try { try {
mermaidHtml = await renderMermaidDiagram(code); renderHTML = await renderMermaid(code);
} catch (error) { } catch (error) {
console.error('Failed to render mermaid diagram:', error); console.error('Failed to render mermaid diagram:', error);
const errorMsg = error instanceof Error ? error.message : String(error); const errorMsg = error instanceof Error ? error.message : String(error);
toast.error($i18n.t('Failed to render diagram') + `: ${errorMsg}`); renderError = $i18n.t('Failed to render diagram') + `: ${errorMsg}`;
mermaidHtml = null; renderHTML = null;
} }
} else if ( } else if (
(lang === 'vega' || lang === 'vega-lite') && (lang === 'vega' || lang === 'vega-lite') &&
(token?.raw ?? '').slice(-4).includes('```') (token?.raw ?? '').slice(-4).includes('```')
) { ) {
try { try {
vegaHtml = await renderVegaVisualization(code); renderHTML = await renderVegaVisualization(code);
} catch (error) { } catch (error) {
console.error('Failed to render Vega visualization:', error); console.error('Failed to render Vega visualization:', error);
const errorMsg = error instanceof Error ? error.message : String(error); const errorMsg = error instanceof Error ? error.message : String(error);
toast.error($i18n.t('Failed to render diagram') + `: ${errorMsg}`); renderError = $i18n.t('Failed to render visualization') + `: ${errorMsg}`;
vegaHtml = null; renderHTML = null;
} }
} }
}; };
@ -407,25 +420,24 @@
class="relative {className} flex flex-col rounded-3xl border border-gray-100 dark:border-gray-850 my-0.5" class="relative {className} flex flex-col rounded-3xl border border-gray-100 dark:border-gray-850 my-0.5"
dir="ltr" dir="ltr"
> >
{#if lang === 'mermaid'} {#if ['mermaid', 'vega', 'vega-lite'].includes(lang)}
{#if mermaidHtml} {#if renderHTML}
<SvgPanZoom <SvgPanZoom
className=" rounded-3xl max-h-fit overflow-hidden" className=" rounded-3xl max-h-fit overflow-hidden"
svg={mermaidHtml} svg={renderHTML}
content={_token.text} content={_token.text}
/> />
{:else} {:else}
<pre class="mermaid">{code}</pre> <div class="p-3">
{/if} {#if renderError}
{:else if lang === 'vega' || lang === 'vega-lite'} <div
{#if vegaHtml} class="flex gap-2.5 border px-4 py-3 border-red-600/10 bg-red-600/10 rounded-2xl mb-2"
<SvgPanZoom >
className="rounded-3xl max-h-fit overflow-hidden" {renderError}
svg={vegaHtml} </div>
content={_token.text} {/if}
/> <pre>{code}</pre>
{:else} </div>
<pre class="vega">{code}</pre>
{/if} {/if}
{:else} {:else}
<div <div
@ -561,15 +573,15 @@
> >
{#if executing} {#if executing}
<div class=" "> <div class=" ">
<div class=" text-gray-500 text-xs mb-1">{$i18n.t('STDOUT/STDERR')}</div> <div class=" text-gray-500 text-sm mb-1">{$i18n.t('STDOUT/STDERR')}</div>
<div class="text-sm">{$i18n.t('Running...')}</div> <div class="text-sm">{$i18n.t('Running...')}</div>
</div> </div>
{:else} {:else}
{#if stdout || stderr} {#if stdout || stderr}
<div class=" "> <div class=" ">
<div class=" text-gray-500 text-xs mb-1">{$i18n.t('STDOUT/STDERR')}</div> <div class=" text-gray-500 text-sm mb-1">{$i18n.t('STDOUT/STDERR')}</div>
<div <div
class="text-sm {stdout?.split('\n')?.length > 100 class="text-sm font-mono whitespace-pre-wrap {stdout?.split('\n')?.length > 100
? `max-h-96` ? `max-h-96`
: ''} overflow-y-auto" : ''} overflow-y-auto"
> >
@ -579,7 +591,7 @@
{/if} {/if}
{#if result || files} {#if result || files}
<div class=" "> <div class=" ">
<div class=" text-gray-500 text-xs mb-1">{$i18n.t('RESULT')}</div> <div class=" text-gray-500 text-sm mb-1">{$i18n.t('RESULT')}</div>
{#if result} {#if result}
<div class="text-sm">{`${JSON.stringify(result)}`}</div> <div class="text-sm">{`${JSON.stringify(result)}`}</div>
{/if} {/if}
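
The CodeBlock.svelte hunk above switches to a lazily initialised mermaid instance obtained through `initMermaid()` from `$lib/utils`, cached in a module-level `mermaid` variable. The helper's body is not part of this diff; the sketch below shows one plausible shape under that assumption.

```ts
// Hypothetical sketch of a lazy initMermaid() helper like the one the diff imports
// from $lib/utils; the real helper may configure mermaid differently.
let mermaidPromise: Promise<any> | null = null;

export async function initMermaid() {
	if (!mermaidPromise) {
		mermaidPromise = import('mermaid').then(({ default: mermaid }) => {
			// startOnLoad: false so diagrams render only when renderMermaidDiagram is called.
			mermaid.initialize({ startOnLoad: false });
			return mermaid;
		});
	}
	return mermaidPromise;
}
```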

View file

@ -176,7 +176,7 @@
{onSourceClick} {onSourceClick}
{onTaskClick} {onTaskClick}
{onSave} {onSave}
onUpdate={(token) => { onUpdate={async (token) => {
const { lang, text: code } = token; const { lang, text: code } = token;
if ( if (
@ -185,6 +185,7 @@
!$mobile && !$mobile &&
$chatId $chatId
) { ) {
await tick();
showArtifacts.set(true); showArtifacts.set(true);
showControls.set(true); showControls.set(true);
} }

View file

@ -79,7 +79,12 @@
title="Embedded content" title="Embedded content"
frameborder="0" frameborder="0"
sandbox sandbox
onload="this.style.height=(this.contentWindow.document.body.scrollHeight+20)+'px';" on:load={(e) => {
try {
e.currentTarget.style.height =
e.currentTarget.contentWindow.document.body.scrollHeight + 20 + 'px';
} catch {}
}}
></iframe> ></iframe>
{:else} {:else}
{token.text} {token.text}
@ -116,17 +121,19 @@
referrerpolicy="strict-origin-when-cross-origin" referrerpolicy="strict-origin-when-cross-origin"
allowfullscreen allowfullscreen
width="100%" width="100%"
onload="this.style.height=(this.contentWindow.document.body.scrollHeight+20)+'px';" on:load={(e) => {
try {
e.currentTarget.style.height =
e.currentTarget.contentWindow.document.body.scrollHeight + 20 + 'px';
} catch {}
}}
></iframe> ></iframe>
{/if} {/if}
{:else if token.text.includes(`<source_id`)} {:else if token.text.includes(`<source_id`)}
<Source {id} {token} onClick={onSourceClick} /> <Source {id} {token} onClick={onSourceClick} />
{:else if token.text.trim().match(/^<br\s*\/?>$/i)}
<br />
{:else} {:else}
{@const br = token.text.match(/<br\s*\/?>/)} {token.text}
{#if br}
<br />
{:else}
{token.text}
{/if}
{/if} {/if}
{/if} {/if}
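
The `on:load` handlers introduced above (and repeated in the later Markdown token components) all implement the same pattern: grow the iframe to its document's scroll height and ignore frames whose documents cannot be read. A reusable helper capturing that pattern could look like the sketch below; the function name is illustrative and does not exist in the diff.

```ts
// Illustrative helper mirroring the on:load handlers in the diff: grow an iframe
// to fit its document's height and swallow cross-origin access errors.
export function fitIframeToContent(event: Event, padding = 20): void {
	const iframe = event.currentTarget as HTMLIFrameElement;
	try {
		// Throws for sandboxed/cross-origin documents; same-origin content works.
		const height = iframe.contentWindow?.document.body.scrollHeight ?? 0;
		if (height > 0) {
			iframe.style.height = `${height + padding}px`;
		}
	} catch {
		// Cross-origin frame: leave the height unchanged.
	}
}
```

Sandboxed and cross-origin frames (for example the YouTube embed above) throw on `contentWindow.document` access, which is what the `try/catch` absorbs and why the old inline `onload` attribute string was fragile.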

View file

@ -24,7 +24,7 @@
export let onSourceClick: Function = () => {}; export let onSourceClick: Function = () => {};
</script> </script>
{#each tokens as token} {#each tokens as token, tokenIdx (tokenIdx)}
{#if token.type === 'escape'} {#if token.type === 'escape'}
{unescapeHtml(token.text)} {unescapeHtml(token.text)}
{:else if token.type === 'html'} {:else if token.type === 'html'}
@ -59,7 +59,12 @@
title={token.fileId} title={token.fileId}
width="100%" width="100%"
frameborder="0" frameborder="0"
onload="this.style.height=(this.contentWindow.document.body.scrollHeight+20)+'px';" on:load={(e) => {
try {
e.currentTarget.style.height =
e.currentTarget.contentWindow.document.body.scrollHeight + 20 + 'px';
} catch {}
}}
></iframe> ></iframe>
{:else if token.type === 'mention'} {:else if token.type === 'mention'}
<MentionToken {token} /> <MentionToken {token} />

View file

@ -13,7 +13,7 @@
{:else} {:else}
{#each texts as text} {#each texts as text}
<span class="" transition:fade={{ duration: 100 }}> <span class="" transition:fade={{ duration: 100 }}>
{text} {text}{' '}
</span> </span>
{/each} {/each}
{/if} {/if}

View file

@ -1,4 +1,5 @@
<script lang="ts"> <script lang="ts">
import { decode } from 'html-entities';
import DOMPurify from 'dompurify'; import DOMPurify from 'dompurify';
import { onMount, getContext } from 'svelte'; import { onMount, getContext } from 'svelte';
const i18n = getContext('i18n'); const i18n = getContext('i18n');
@ -10,6 +11,7 @@
import { unescapeHtml } from '$lib/utils'; import { unescapeHtml } from '$lib/utils';
import { WEBUI_BASE_URL } from '$lib/constants'; import { WEBUI_BASE_URL } from '$lib/constants';
import { settings } from '$lib/stores';
import CodeBlock from '$lib/components/chat/Messages/CodeBlock.svelte'; import CodeBlock from '$lib/components/chat/Messages/CodeBlock.svelte';
import MarkdownInlineTokens from '$lib/components/chat/Messages/Markdown/MarkdownInlineTokens.svelte'; import MarkdownInlineTokens from '$lib/components/chat/Messages/Markdown/MarkdownInlineTokens.svelte';
@ -20,7 +22,6 @@
import Download from '$lib/components/icons/Download.svelte'; import Download from '$lib/components/icons/Download.svelte';
import Source from './Source.svelte'; import Source from './Source.svelte';
import { settings } from '$lib/stores';
import HtmlToken from './HTMLToken.svelte'; import HtmlToken from './HTMLToken.svelte';
export let id: string; export let id: string;
@ -304,7 +305,7 @@
<div class=" mb-1.5" slot="content"> <div class=" mb-1.5" slot="content">
<svelte:self <svelte:self
id={`${id}-${tokenIdx}-d`} id={`${id}-${tokenIdx}-d`}
tokens={marked.lexer(token.text)} tokens={marked.lexer(decode(token.text))}
attributes={token?.attributes} attributes={token?.attributes}
{done} {done}
{editCodeBlock} {editCodeBlock}
@ -321,7 +322,12 @@
title={token.fileId} title={token.fileId}
width="100%" width="100%"
frameborder="0" frameborder="0"
onload="this.style.height=(this.contentWindow.document.body.scrollHeight+20)+'px';" on:load={(e) => {
try {
e.currentTarget.style.height =
e.currentTarget.contentWindow.document.body.scrollHeight + 20 + 'px';
} catch {}
}}
></iframe> ></iframe>
{:else if token.type === 'paragraph'} {:else if token.type === 'paragraph'}
<p dir="auto"> <p dir="auto">

View file

@ -46,6 +46,7 @@
</script> </script>
<div <div
role="listitem"
class="flex flex-col justify-between px-5 mb-3 w-full {($settings?.widescreenMode ?? null) class="flex flex-col justify-between px-5 mb-3 w-full {($settings?.widescreenMode ?? null)
? 'max-w-full' ? 'max-w-full'
: 'max-w-5xl'} mx-auto rounded-lg group" : 'max-w-5xl'} mx-auto rounded-lg group"

View file

@ -15,7 +15,15 @@
import { getChatById } from '$lib/apis/chats'; import { getChatById } from '$lib/apis/chats';
import { generateTags } from '$lib/apis'; import { generateTags } from '$lib/apis';
import { config, models, settings, temporaryChatEnabled, TTSWorker, user } from '$lib/stores'; import {
audioQueue,
config,
models,
settings,
temporaryChatEnabled,
TTSWorker,
user
} from '$lib/stores';
import { synthesizeOpenAISpeech } from '$lib/apis/audio'; import { synthesizeOpenAISpeech } from '$lib/apis/audio';
import { imageGenerations } from '$lib/apis/images'; import { imageGenerations } from '$lib/apis/images';
import { import {
@ -156,7 +164,6 @@
let messageIndexEdit = false; let messageIndexEdit = false;
let audioParts: Record<number, HTMLAudioElement | null> = {};
let speaking = false; let speaking = false;
let speakingIdx: number | undefined; let speakingIdx: number | undefined;
@ -178,51 +185,25 @@
} }
}; };
const playAudio = (idx: number) => { const stopAudio = () => {
return new Promise<void>((res) => { try {
speakingIdx = idx; speechSynthesis.cancel();
const audio = audioParts[idx]; $audioQueue.stop();
} catch {}
if (!audio) {
return res();
}
audio.play();
audio.onended = async () => {
await new Promise((r) => setTimeout(r, 300));
if (Object.keys(audioParts).length - 1 === idx) {
speaking = false;
}
res();
};
});
};
const toggleSpeakMessage = async () => {
if (speaking) { if (speaking) {
try {
speechSynthesis.cancel();
if (speakingIdx !== undefined && audioParts[speakingIdx]) {
audioParts[speakingIdx]!.pause();
audioParts[speakingIdx]!.currentTime = 0;
}
} catch {}
speaking = false; speaking = false;
speakingIdx = undefined; speakingIdx = undefined;
return;
} }
};
const speak = async () => {
if (!(message?.content ?? '').trim().length) { if (!(message?.content ?? '').trim().length) {
toast.info($i18n.t('No content to speak')); toast.info($i18n.t('No content to speak'));
return; return;
} }
speaking = true; speaking = true;
const content = removeAllDetails(message.content); const content = removeAllDetails(message.content);
if ($config.audio.tts.engine === '') { if ($config.audio.tts.engine === '') {
@ -241,12 +222,12 @@
console.log(voice); console.log(voice);
const speak = new SpeechSynthesisUtterance(content); const speech = new SpeechSynthesisUtterance(content);
speak.rate = $settings.audio?.tts?.playbackRate ?? 1; speech.rate = $settings.audio?.tts?.playbackRate ?? 1;
console.log(speak); console.log(speech);
speak.onend = () => { speech.onend = () => {
speaking = false; speaking = false;
if ($settings.conversationMode) { if ($settings.conversationMode) {
document.getElementById('voice-input-button')?.click(); document.getElementById('voice-input-button')?.click();
@ -254,15 +235,21 @@
}; };
if (voice) { if (voice) {
speak.voice = voice; speech.voice = voice;
} }
speechSynthesis.speak(speak); speechSynthesis.speak(speech);
} }
}, 100); }, 100);
} else { } else {
loadingSpeech = true; $audioQueue.setId(`${message.id}`);
$audioQueue.setPlaybackRate($settings.audio?.tts?.playbackRate ?? 1);
$audioQueue.onStopped = () => {
speaking = false;
speakingIdx = undefined;
};
loadingSpeech = true;
const messageContentParts: string[] = getMessageContentParts( const messageContentParts: string[] = getMessageContentParts(
content, content,
$config?.audio?.tts?.split_on ?? 'punctuation' $config?.audio?.tts?.split_on ?? 'punctuation'
@ -278,17 +265,6 @@
} }
console.debug('Prepared message content for TTS', messageContentParts); console.debug('Prepared message content for TTS', messageContentParts);
audioParts = messageContentParts.reduce(
(acc, _sentence, idx) => {
acc[idx] = null;
return acc;
},
{} as typeof audioParts
);
let lastPlayedAudioPromise = Promise.resolve(); // Initialize a promise that resolves immediately
if ($settings.audio?.tts?.engine === 'browser-kokoro') { if ($settings.audio?.tts?.engine === 'browser-kokoro') {
if (!$TTSWorker) { if (!$TTSWorker) {
await TTSWorker.set( await TTSWorker.set(
@ -315,12 +291,9 @@
}); });
if (blob) { if (blob) {
const audio = new Audio(blob); const url = URL.createObjectURL(blob);
audio.playbackRate = $settings.audio?.tts?.playbackRate ?? 1; $audioQueue.enqueue(url);
audioParts[idx] = audio;
loadingSpeech = false; loadingSpeech = false;
lastPlayedAudioPromise = lastPlayedAudioPromise.then(() => playAudio(idx));
} }
} }
} else { } else {
@ -341,13 +314,10 @@
if (res) { if (res) {
const blob = await res.blob(); const blob = await res.blob();
const blobUrl = URL.createObjectURL(blob); const url = URL.createObjectURL(blob);
const audio = new Audio(blobUrl);
audio.playbackRate = $settings.audio?.tts?.playbackRate ?? 1;
audioParts[idx] = audio; $audioQueue.enqueue(url);
loadingSpeech = false; loadingSpeech = false;
lastPlayedAudioPromise = lastPlayedAudioPromise.then(() => playAudio(idx));
} }
} }
} }
@ -620,7 +590,7 @@
<div class="flex-auto w-0 pl-1 relative"> <div class="flex-auto w-0 pl-1 relative">
<Name> <Name>
<Tooltip content={model?.name ?? message.model} placement="top-start"> <Tooltip content={model?.name ?? message.model} placement="top-start">
<span class="line-clamp-1 text-black dark:text-white"> <span id="response-message-model-name" class="line-clamp-1 text-black dark:text-white">
{model?.name ?? message.model} {model?.name ?? message.model}
</span> </span>
</Tooltip> </Tooltip>
@ -648,10 +618,7 @@
<div class="chat-{message.role} w-full min-w-full markdown-prose"> <div class="chat-{message.role} w-full min-w-full markdown-prose">
<div> <div>
{#if model?.info?.meta?.capabilities?.status_updates ?? true} {#if model?.info?.meta?.capabilities?.status_updates ?? true}
<StatusHistory <StatusHistory statusHistory={message?.statusHistory} />
statusHistory={message?.statusHistory}
expand={message?.content === ''}
/>
{/if} {/if}
{#if message?.files && message.files?.filter((f) => f.type === 'image').length > 0} {#if message?.files && message.files?.filter((f) => f.type === 'image').length > 0}
@ -995,7 +962,11 @@
: 'invisible group-hover:visible'} p-1.5 hover:bg-black/5 dark:hover:bg-white/5 rounded-lg dark:hover:text-white hover:text-black transition" : 'invisible group-hover:visible'} p-1.5 hover:bg-black/5 dark:hover:bg-white/5 rounded-lg dark:hover:text-white hover:text-black transition"
on:click={() => { on:click={() => {
if (!loadingSpeech) { if (!loadingSpeech) {
toggleSpeakMessage(); if (speaking) {
stopAudio();
} else {
speak();
}
} }
}} }}
> >
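
The ResponseMessage.svelte hunks above replace the per-message `audioParts` map with the shared `$audioQueue` store, calling `setId`, `setPlaybackRate`, `enqueue`, `stop`, `destroy`, and assigning `onStopped`. The actual class lives in `$lib/utils/audio` and is not included in this diff; the TypeScript below is a hedged sketch of an implementation consistent with those call sites, not the project's code.

```ts
// Hypothetical sketch of the AudioQueue interface implied by the diff's call sites.
export class AudioQueue {
	onStopped: (() => void) | null = null;

	private element: HTMLAudioElement;
	private queue: string[] = [];
	private current: string | null = null;
	private id: string | null = null;
	private playbackRate = 1;

	constructor(element: HTMLAudioElement) {
		this.element = element;
		this.element.onended = () => this.playNext();
	}

	setId(id: string) {
		// A new message id invalidates anything still queued for the previous one.
		if (this.id !== null && this.id !== id) this.stop();
		this.id = id;
	}

	setPlaybackRate(rate: number) {
		this.playbackRate = rate;
		this.element.playbackRate = rate;
	}

	enqueue(url: string) {
		this.queue.push(url);
		if (!this.current) this.playNext();
	}

	stop() {
		this.queue.forEach((url) => URL.revokeObjectURL(url));
		this.queue = [];
		if (this.current) {
			this.element.pause();
			URL.revokeObjectURL(this.current);
			this.current = null;
		}
		this.onStopped?.();
	}

	destroy() {
		this.stop();
		this.element.onended = null;
	}

	private playNext() {
		if (this.current) URL.revokeObjectURL(this.current);
		this.current = this.queue.shift() ?? null;
		if (!this.current) {
			this.onStopped?.();
			return;
		}
		this.element.src = this.current;
		this.element.playbackRate = this.playbackRate;
		void this.element.play();
	}
}
```

Under this reading, the TTS loop only has to `enqueue` each synthesized object URL as it arrives; sequencing, playback rate, cleanup on chat navigation (`stopAudio`), and teardown in `onDestroy` are handled by the queue.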

View file

@ -29,38 +29,6 @@
{#if history && history.length > 0} {#if history && history.length > 0}
{#if status?.hidden !== true} {#if status?.hidden !== true}
<div class="text-sm flex flex-col w-full"> <div class="text-sm flex flex-col w-full">
{#if showHistory}
<div class="flex flex-row">
{#if history.length > 1}
<div class="w-full">
{#each history as status, idx}
{#if idx !== history.length - 1}
<div class="flex items-stretch gap-2 mb-1">
<div class=" ">
<div class="pt-3 px-1 mb-1.5">
<span
class="relative flex size-1.5 rounded-full justify-center items-center"
>
<span
class="relative inline-flex size-1.5 rounded-full bg-gray-500 dark:bg-gray-300"
></span>
</span>
</div>
<div
class="w-[0.5px] ml-[6.5px] h-[calc(100%-14px)] bg-gray-300 dark:bg-gray-700"
/>
</div>
<StatusItem {status} done={true} />
</div>
{/if}
{/each}
</div>
{/if}
</div>
{/if}
<button <button
class="w-full" class="w-full"
on:click={() => { on:click={() => {
@ -68,23 +36,38 @@
}} }}
> >
<div class="flex items-start gap-2"> <div class="flex items-start gap-2">
{#if history.length > 1}
<div class="pt-3 px-1">
<span class="relative flex size-1.5 rounded-full justify-center items-center">
{#if status?.done === false}
<span
class="absolute inline-flex h-full w-full animate-ping rounded-full bg-gray-500 dark:bg-gray-300 opacity-75"
></span>
{/if}
<span
class="relative inline-flex size-1.5 rounded-full bg-gray-500 dark:bg-gray-300"
></span>
</span>
</div>
{/if}
<StatusItem {status} /> <StatusItem {status} />
</div> </div>
</button> </button>
{#if showHistory}
<div class="flex flex-row">
{#if history.length > 1}
<div class="w-full">
{#each history as status, idx}
<div class="flex items-stretch gap-2 mb-1">
<div class=" ">
<div class="pt-3 px-1 mb-1.5">
<span class="relative flex size-1.5 rounded-full justify-center items-center">
<span
class="relative inline-flex size-1.5 rounded-full bg-gray-500 dark:bg-gray-400"
></span>
</span>
</div>
{#if idx !== history.length - 1}
<div
class="w-[0.5px] ml-[6.5px] h-[calc(100%-14px)] bg-gray-300 dark:bg-gray-700"
/>
{/if}
</div>
<StatusItem {status} done={true} />
</div>
{/each}
</div>
{/if}
</div>
{/if}
</div> </div>
{/if} {/if}
{/if} {/if}

View file

@ -121,7 +121,10 @@
if (selectedTag === '') { if (selectedTag === '') {
return true; return true;
} }
return (item.model?.tags ?? []).map((tag) => tag.name).includes(selectedTag);
return (item.model?.tags ?? [])
.map((tag) => tag.name.toLowerCase())
.includes(selectedTag.toLowerCase());
}) })
.filter((item) => { .filter((item) => {
if (selectedConnectionType === '') { if (selectedConnectionType === '') {
@ -139,7 +142,9 @@
if (selectedTag === '') { if (selectedTag === '') {
return true; return true;
} }
return (item.model?.tags ?? []).map((tag) => tag.name).includes(selectedTag); return (item.model?.tags ?? [])
.map((tag) => tag.name.toLowerCase())
.includes(selectedTag.toLowerCase());
}) })
.filter((item) => { .filter((item) => {
if (selectedConnectionType === '') { if (selectedConnectionType === '') {
@ -315,8 +320,7 @@
tags = items tags = items
.filter((item) => !(item.model?.info?.meta?.hidden ?? false)) .filter((item) => !(item.model?.info?.meta?.hidden ?? false))
.flatMap((item) => item.model?.tags ?? []) .flatMap((item) => item.model?.tags ?? [])
.map((tag) => tag.name); .map((tag) => tag.name.toLowerCase());
// Remove duplicates and sort // Remove duplicates and sort
tags = Array.from(new Set(tags)).sort((a, b) => a.localeCompare(b)); tags = Array.from(new Set(tags)).sort((a, b) => a.localeCompare(b));
} }

View file

@ -73,6 +73,7 @@
<nav class="sticky top-0 z-30 w-full py-1 -mb-8 flex flex-col items-center drag-region"> <nav class="sticky top-0 z-30 w-full py-1 -mb-8 flex flex-col items-center drag-region">
<div class="flex items-center w-full pl-1.5 pr-1"> <div class="flex items-center w-full pl-1.5 pr-1">
<div <div
id="navbar-bg-gradient-to-b"
class=" bg-linear-to-b via-40% to-97% from-white via-white to-transparent dark:from-gray-900 dark:via-gray-900 dark:to-transparent pointer-events-none absolute inset-0 -bottom-7 z-[-1]" class=" bg-linear-to-b via-40% to-97% from-white via-white to-transparent dark:from-gray-900 dark:via-gray-900 dark:to-transparent pointer-events-none absolute inset-0 -bottom-7 z-[-1]"
></div> ></div>

View file

@ -276,7 +276,7 @@
</div> </div>
</div> </div>
{#if $user?.role === 'admin' || ($user?.permissions.chat?.system_prompt ?? true)} {#if $user?.role === 'admin' || (($user?.permissions.chat?.controls ?? true) && ($user?.permissions.chat?.system_prompt ?? true))}
<hr class="border-gray-100/50 dark:border-gray-850 my-3" /> <hr class="border-gray-100/50 dark:border-gray-850 my-3" />
<div> <div>
@ -293,7 +293,7 @@
</div> </div>
{/if} {/if}
{#if $user?.role === 'admin' || ($user?.permissions.chat?.controls ?? true)} {#if $user?.role === 'admin' || (($user?.permissions.chat?.controls ?? true) && ($user?.permissions.chat?.params ?? true))}
<div class="mt-2 space-y-3 pr-1.5"> <div class="mt-2 space-y-3 pr-1.5">
<div class="flex justify-between items-center text-sm"> <div class="flex justify-between items-center text-sm">
<div class=" font-medium">{$i18n.t('Advanced Parameters')}</div> <div class=" font-medium">{$i18n.t('Advanced Parameters')}</div>

View file

@ -52,10 +52,17 @@
<Tooltip content={connection?.type === 'mcp' ? $i18n.t('MCP') : $i18n.t('OpenAPI')}> <Tooltip content={connection?.type === 'mcp' ? $i18n.t('MCP') : $i18n.t('OpenAPI')}>
<WrenchAlt /> <WrenchAlt />
</Tooltip> </Tooltip>
<div class=" capitalize outline-hidden w-full bg-transparent">
{connection?.info?.name ?? connection?.url} {#if connection?.info?.name}
<span class="text-gray-500">{connection?.info?.id}</span> <div class=" capitalize outline-hidden w-full bg-transparent">
</div> {connection?.info?.name ?? connection?.url}
<span class="text-gray-500">{connection?.info?.id ?? ''}</span>
</div>
{:else}
<div>
{connection?.url}
</div>
{/if}
</div> </div>
</div> </div>
</Tooltip> </Tooltip>

View file

@ -0,0 +1,96 @@
<script lang="ts">
import { getContext, onMount } from 'svelte';
import Tooltip from '../common/Tooltip.svelte';
import type { Shortcut } from '$lib/shortcuts';
export let shortcut: Shortcut;
export let isMac: boolean;
const i18n = getContext('i18n');
let keyboardLayoutMap: Map<string, string> | undefined;
onMount(async () => {
if (navigator.keyboard && 'getLayoutMap' in navigator.keyboard) {
try {
keyboardLayoutMap = await navigator.keyboard.getLayoutMap();
} catch (error) {
console.error('Failed to get keyboard layout map:', error);
}
}
});
function formatKey(key: string): string {
// First, handle special modifier keys which are defined in lowercase
switch (key) {
case 'mod':
return isMac ? '⌘' : 'Ctrl';
case 'shift':
return isMac ? '⇧' : 'Shift';
case 'alt':
return isMac ? '⌥' : 'Alt';
}
// Next, try to use the layout map with the raw KeyboardEvent.code (e.g., "Slash")
if (keyboardLayoutMap && keyboardLayoutMap.has(key)) {
const mappedKey = keyboardLayoutMap.get(key) ?? key;
// For single characters, make them uppercase. For others (like 'CapsLock'), leave as is.
return mappedKey.length === 1 ? mappedKey.toUpperCase() : mappedKey;
}
// Finally, provide a fallback for browsers without getLayoutMap or for keys not in the map
const lowerKey = key.toLowerCase();
switch (lowerKey) {
case 'backspace':
case 'delete':
return isMac ? '⌫' : 'Delete';
case 'escape':
return 'Esc';
case 'enter':
return isMac ? '↩' : 'Enter';
case 'tab':
return isMac ? '⇥' : 'Tab';
case 'arrowup':
return '↑';
case 'arrowdown':
return '↓';
case 'quote':
return "'";
case 'period':
return '.';
case 'slash':
return '/';
case 'semicolon':
return ';';
default:
// For 'KeyA', 'Digit1', etc., extract the last character.
if (lowerKey.startsWith('key') || lowerKey.startsWith('digit')) {
return key.slice(-1).toUpperCase();
}
// For anything else, just uppercase it.
return key.toUpperCase();
}
}
</script>
<div class="w-full flex justify-between">
<div class="text-sm whitespace-pre-line">
{#if shortcut.tooltip}
<Tooltip content={$i18n.t(shortcut.tooltip)}>
<span class="whitespace-nowrap">
{$i18n.t(shortcut.name)}<span class="text-xs">&nbsp;*</span>
</span>
</Tooltip>
{:else}
{$i18n.t(shortcut.name)}
{/if}
</div>
<div class="flex-shrink-0 flex justify-end self-start h-full space-x-1 text-xs">
{#each shortcut.keys.filter((key) => !(key.toLowerCase() === 'delete' && shortcut.keys.includes('Backspace'))) as key}
<div
class="h-fit px-1 py-0.5 flex items-start justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
{formatKey(key)}
</div>
{/each}
</div>
</div>
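
The new ShortcutItem.svelte and the rewritten ShortcutsModal above consume a `Shortcut` type and a `shortcuts` registry from `$lib/shortcuts`, grouping entries by `category` and hiding entries whose optional `setting` does not match the current `$settings`. Those definitions are not part of this diff; the sketch below only illustrates the shape the components appear to rely on, with an example entry that is illustrative rather than taken from the source.

```ts
// Hypothetical sketch of the Shortcut shape implied by ShortcutItem/ShortcutsModal.
export interface Shortcut {
	name: string; // i18n key shown in the modal
	category: string; // grouping header, e.g. 'Chat' or 'Global'
	keys: string[]; // KeyboardEvent.code values plus 'mod' | 'shift' | 'alt'
	tooltip?: string; // extra note for situational shortcuts (marked with *)
	setting?: { id: string; value: unknown }; // shown only when $settings[id] === value
}

// Example entry matching the conventions above (illustrative, not from the diff):
export const shortcuts: Record<string, Shortcut> = {
	newChat: {
		name: 'New Chat',
		category: 'Global',
		keys: ['mod', 'shift', 'KeyO']
	}
};
```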

View file

@ -1,392 +1,129 @@
<script lang="ts"> <script lang="ts">
import { getContext } from 'svelte'; import { getContext, onMount } from 'svelte';
import Modal from '../common/Modal.svelte'; import Modal from '../common/Modal.svelte';
import { shortcuts } from '$lib/shortcuts';
import Tooltip from '../common/Tooltip.svelte'; import { settings } from '$lib/stores';
const i18n = getContext('i18n'); import ShortcutItem from './ShortcutItem.svelte';
import XMark from '$lib/components/icons/XMark.svelte'; import XMark from '$lib/components/icons/XMark.svelte';
type CategorizedShortcuts = {
[category: string]: {
left: Shortcut[];
right: Shortcut[];
};
};
const i18n = getContext('i18n');
export let show = false; export let show = false;
let categorizedShortcuts: CategorizedShortcuts = {};
let isMac = false;
onMount(() => {
isMac = /Mac/i.test(navigator.userAgent);
});
$: {
const allShortcuts = Object.values(shortcuts).filter((shortcut) => {
if (!shortcut.setting) {
return true;
}
return $settings[shortcut.setting.id] === shortcut.setting.value;
});
categorizedShortcuts = allShortcuts.reduce((acc, shortcut) => {
const category = shortcut.category;
if (!acc[category]) {
acc[category] = [];
}
acc[category].push(shortcut);
return acc;
}, {});
}
</script> </script>
<Modal bind:show> <Modal bind:show>
<div class="text-gray-700 dark:text-gray-100"> <div class="text-gray-700 dark:text-gray-100 px-5 py-4">
<div class=" flex justify-between dark:text-gray-300 px-5 pt-4"> <div class="flex justify-between dark:text-gray-300 pb-2">
<div class=" text-lg font-medium self-center">{$i18n.t('Keyboard shortcuts')}</div> <div class="text-lg font-medium self-center">{$i18n.t('Keyboard Shortcuts')}</div>
<button <button class="self-center" on:click={() => (show = false)}>
class="self-center"
on:click={() => {
show = false;
}}
>
<XMark className={'size-5'} /> <XMark className={'size-5'} />
</button> </button>
</div> </div>
<div class="flex flex-col md:flex-row w-full p-5 md:space-x-4 dark:text-gray-200"> {#each Object.entries(categorizedShortcuts) as [category, items], categoryIndex}
<div class=" flex flex-col w-full sm:flex-row sm:justify-center sm:space-x-6"> {#if categoryIndex > 0}
<div class="flex flex-col space-y-3 w-full self-start"> <div class="py-3">
<div class="w-full flex justify-between items-center"> <div class="w-full border-t dark:border-gray-850 border-gray-50" />
<div class=" text-sm">{$i18n.t('Open new chat')}</div>
<div class="flex space-x-1 text-xs">
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
Ctrl/⌘
</div>
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
Shift
</div>
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
O
</div>
</div>
</div>
<div class="w-full flex justify-between items-center">
<div class=" text-sm">{$i18n.t('Focus chat input')}</div>
<div class="flex space-x-1 text-xs">
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
Shift
</div>
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
Esc
</div>
</div>
</div>
<div class="w-full flex justify-between items-center">
<div class=" text-sm">
<Tooltip
content={$i18n.t(
'Only active when the chat input is in focus and an LLM is generating a response.'
)}
>
{$i18n.t('Stop Generating')}<span class="text-xs"> *</span>
</Tooltip>
</div>
<div class="flex space-x-1 text-xs">
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
Esc
</div>
</div>
</div>
<div class="w-full flex justify-between items-center">
<div class=" text-sm">{$i18n.t('Copy last code block')}</div>
<div class="flex space-x-1 text-xs">
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
Ctrl/⌘
</div>
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
Shift
</div>
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
;
</div>
</div>
</div>
<div class="w-full flex justify-between items-center">
<div class=" text-sm">{$i18n.t('Copy last response')}</div>
<div class="flex space-x-1 text-xs">
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
Ctrl/⌘
</div>
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
Shift
</div>
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
C
</div>
</div>
</div>
<div class="w-full flex justify-between items-center">
<div class=" text-sm">
<Tooltip
content={$i18n.t(
'Only active when "Paste Large Text as File" setting is toggled on.'
)}
>
{$i18n.t('Prevent file creation')}<span class="text-s"> *</span>
</Tooltip>
</div>
<div class="flex space-x-1 text-xs">
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
Ctrl/⌘
</div>
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
Shift
</div>
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
V
</div>
</div>
</div>
</div> </div>
{/if}
<div class="flex flex-col space-y-3 w-full self-start"> <div class="flex justify-between dark:text-gray-300 pb-2">
<div class="w-full flex justify-between items-center"> <div class="text-base self-center">{$i18n.t(category)}</div>
<div class=" text-sm">{$i18n.t('Generate prompt pair')}</div> </div>
<div class="flex flex-col md:flex-row w-full md:space-x-2 dark:text-gray-200">
<div class="flex flex-col w-full sm:flex-row sm:justify-center sm:space-x-6">
<div class=" grid grid-cols-1 sm:grid-cols-2 gap-2 gap-x-4 w-full">
<!-- {$i18n.t('Chat')} -->
<!-- {$i18n.t('Global')} -->
<!-- {$i18n.t('Input')} -->
<!-- {$i18n.t('Message')} -->
<div class="flex space-x-1 text-xs"> <!-- {$i18n.t('New Chat')} -->
<div <!-- {$i18n.t('New Temporary Chat')} -->
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300" <!-- {$i18n.t('Delete Chat')} -->
> <!-- {$i18n.t('Search')} -->
Ctrl/⌘ <!-- {$i18n.t('Open Settings')} -->
<!-- {$i18n.t('Show Shortcuts')} -->
<!-- {$i18n.t('Toggle Sidebar')} -->
<!-- {$i18n.t('Close Modal')} -->
<!-- {$i18n.t('Focus Chat Input')} -->
<!-- {$i18n.t('Accept Autocomplete Generation\nJump to Prompt Variable')} -->
<!-- {$i18n.t('Prevent File Creation')} -->
<!-- {$i18n.t('Attach File From Knowledge')} -->
<!-- {$i18n.t('Add Custom Prompt')} -->
<!-- {$i18n.t('Talk to Model')} -->
<!-- {$i18n.t('Generate Message Pair')} -->
<!-- {$i18n.t('Regenerate Response')} -->
<!-- {$i18n.t('Stop Generating')} -->
<!-- {$i18n.t('Edit Last Message')} -->
<!-- {$i18n.t('Copy Last Response')} -->
<!-- {$i18n.t('Copy Last Code Block')} -->
<!-- {$i18n.t('Only active when "Paste Large Text as File" setting is toggled on.')} -->
<!-- {$i18n.t('Only active when the chat input is in focus.')} -->
<!-- {$i18n.t('Only active when the chat input is in focus and an LLM is generating a response.')} -->
<!-- {$i18n.t('Only can be triggered when the chat input is in focus.')} -->
{#each items as shortcut}
<div class="col-span-1 flex items-start">
<ShortcutItem {shortcut} {isMac} />
</div> </div>
{/each}
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
Shift
</div>
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
Enter
</div>
</div>
</div>
<div class="w-full flex justify-between items-center">
<div class=" text-sm">{$i18n.t('Toggle search')}</div>
<div class="flex space-x-1 text-xs">
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
Ctrl/⌘
</div>
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
K
</div>
</div>
</div>
<div class="w-full flex justify-between items-center">
<div class=" text-sm">{$i18n.t('Toggle settings')}</div>
<div class="flex space-x-1 text-xs">
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
Ctrl/⌘
</div>
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
.
</div>
</div>
</div>
<div class="w-full flex justify-between items-center">
<div class=" text-sm">{$i18n.t('Toggle sidebar')}</div>
<div class="flex space-x-1 text-xs">
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
Ctrl/⌘
</div>
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
Shift
</div>
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
S
</div>
</div>
</div>
<div class="w-full flex justify-between items-center">
<div class=" text-sm">{$i18n.t('Delete chat')}</div>
<div class="flex space-x-1 text-xs">
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
Ctrl/⌘
</div>
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
Shift
</div>
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
⌫/Delete
</div>
</div>
</div>
<div class="w-full flex justify-between items-center">
<div class=" text-sm">{$i18n.t('Show shortcuts')}</div>
<div class="flex space-x-1 text-xs">
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
Ctrl/⌘
</div>
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
/
</div>
</div>
</div> </div>
</div> </div>
</div> </div>
</div> {/each}
<div class="px-5 pb-4 text-xs text-gray-500 dark:text-gray-400">
{$i18n.t(
'Shortcuts with an asterisk (*) are situational and only active under specific conditions.'
)}
</div>
<div class=" flex justify-between dark:text-gray-300 px-5">
<div class=" text-lg font-medium self-center">{$i18n.t('Input commands')}</div>
</div>
<div class="flex flex-col md:flex-row w-full p-5 md:space-x-4 dark:text-gray-200">
<div class=" flex flex-col w-full sm:flex-row sm:justify-center sm:space-x-6">
<div class="flex flex-col space-y-3 w-full self-start">
<div class="w-full flex justify-between items-center">
<div class=" text-sm">
{$i18n.t('Attach file from knowledge')}
</div>
<div class="flex space-x-1 text-xs">
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
#
</div>
</div>
</div>
<div class="w-full flex justify-between items-center">
<div class=" text-sm">
{$i18n.t('Add custom prompt')}
</div>
<div class="flex space-x-1 text-xs">
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
/
</div>
</div>
</div>
<div class="w-full flex justify-between items-center">
<div class=" text-sm">
{$i18n.t('Talk to model')}
</div>
<div class="flex space-x-1 text-xs">
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
@
</div>
</div>
</div>
<div class="w-full flex justify-between items-center">
<div class=" text-sm">
{$i18n.t('Accept autocomplete generation / Jump to prompt variable')}
</div>
<div class="flex space-x-1 text-xs">
<div
class=" h-fit py-1 px-2 flex items-center justify-center rounded-sm border border-black/10 capitalize text-gray-600 dark:border-white/10 dark:text-gray-300"
>
TAB
</div>
</div>
</div>
</div>
</div>
</div>
</div> </div>
</Modal> </Modal>
<style> <style>
input::-webkit-outer-spin-button, input::-webkit-outer-spin-button,
input::-webkit-inner-spin-button { input::-webkit-inner-spin-button {
/* display: none; <- Crashes Chrome on hover */
-webkit-appearance: none; -webkit-appearance: none;
margin: 0; /* <-- Apparently some margin are still there even though it's hidden */ margin: 0;
} }
.tabs::-webkit-scrollbar { .tabs::-webkit-scrollbar {
display: none; /* for Chrome, Safari and Opera */ display: none;
} }
.tabs { .tabs {
-ms-overflow-style: none; /* IE and Edge */ -ms-overflow-style: none;
scrollbar-width: none; /* Firefox */ scrollbar-width: none;
} }
input[type='number'] { input[type='number'] {
-moz-appearance: textfield; /* Firefox */ -moz-appearance: textfield;
} }
</style> </style>

Some files were not shown because too many files have changed in this diff