Merge branch 'dev' into feat/rate-limit-web-search

cvaz1306 2025-10-02 19:15:40 -07:00 committed by GitHub
commit 551bc64a8d
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
334 changed files with 18123 additions and 5947 deletions


@ -4,14 +4,15 @@
**Before submitting, make sure you've checked the following:**

- [ ] **Target branch:** Verify that the pull request targets the `dev` branch. Not targeting the `dev` branch may lead to immediate closure of the PR.
- [ ] **Description:** Provide a concise description of the changes made in this pull request.
- [ ] **Changelog:** Ensure a changelog entry following the format of [Keep a Changelog](https://keepachangelog.com/) is added at the bottom of the PR description.
- [ ] **Documentation:** If necessary, update relevant documentation ([Open WebUI Docs](https://github.com/open-webui/docs)), such as environment variables, tutorials, or other documentation sources.
- [ ] **Dependencies:** Are there any new dependencies? Have you updated the dependency versions in the documentation?
- [ ] **Testing:** Perform manual tests to verify the implemented fix/feature works as intended AND does not break any other functionality. Take this as an opportunity to include screenshots of the feature/fix in the PR description.
- [ ] **Agentic AI Code:** Confirm this pull request is **not written by any AI agent**, or has at least gone through additional human review **and** manual testing. If an AI agent is a co-author of this PR, it may lead to immediate closure of the PR.
- [ ] **Code review:** Have you performed a self-review of your code, addressing any coding standard issues and ensuring adherence to the project's coding standards?
- [ ] **Title Prefix:** To clearly categorize this pull request, prefix the pull request title using one of the following:
  - **BREAKING CHANGE**: Significant changes that may affect compatibility
  - **build**: Changes that affect the build system or external dependencies
  - **ci**: Changes to our continuous integration processes or workflows


@ -5,6 +5,166 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.6.32] - 2025-09-29
### Added
- ⚡ JSON model import moved to backend processing for significant performance improvements when importing large model files. [#17871](https://github.com/open-webui/open-webui/pull/17871)
- ⚠️ Visual warnings for group permissions that display when a permission is disabled in a group but remains enabled in the default user role, clarifying inheritance behavior for administrators. [#17848](https://github.com/open-webui/open-webui/pull/17848)
- 🗄️ Milvus multi-tenancy mode using shared collections with resource ID filtering for improved scalability, mirroring the existing Qdrant implementation and configurable via ENABLE_MILVUS_MULTITENANCY_MODE environment variable. [#17837](https://github.com/open-webui/open-webui/pull/17837)
- 🛠️ Enhanced tool result processing with improved error handling, better MCP tool result handling, and performance improvements for embedded UI components. [Commit](https://github.com/open-webui/open-webui/commit/4f06f29348b2c9d71c87d1bbe5b748a368f5101f)
- 👥 New user groups now automatically inherit default group permissions, streamlining the admin setup process by eliminating manual permission configuration. [#17843](https://github.com/open-webui/open-webui/pull/17843)
- 🗂️ Bulk unarchive functionality for all chats, providing a single backend endpoint to efficiently restore all archived chats at once. [#17857](https://github.com/open-webui/open-webui/pull/17857)
- 🏷️ Browser tab title toggle setting allows users to control whether chat titles appear in the browser tab or display only "Open WebUI". [#17851](https://github.com/open-webui/open-webui/pull/17851)
- 💬 Reply-to-message functionality in channels, allowing users to reply directly to specific messages with visual threading and context display. [Commit](https://github.com/open-webui/open-webui/commit/1a18928c94903ad1f1f0391b8ade042c3e60205b)
- 🔧 Tool server import and export functionality, allowing direct upload of openapi.json and openapi.yaml files as an alternative to URL-based configuration. [#14446](https://github.com/open-webui/open-webui/issues/14446)
- 🔧 User valve configuration for Functions is now available in the integration menu, providing consistent management alongside Tools. [#17784](https://github.com/open-webui/open-webui/issues/17784)
- 🔐 Admin permission toggle for controlling public sharing of notes, configurable via USER_PERMISSIONS_NOTES_ALLOW_PUBLIC_SHARING environment variable. [#17801](https://github.com/open-webui/open-webui/pull/17801), [Docs:#715](https://github.com/open-webui/docs/pull/715)
- 🗄️ DISKANN index type support for Milvus vector database with configurable maximum degree and search list size parameters. [#17770](https://github.com/open-webui/open-webui/pull/17770), [Docs:Commit](https://github.com/open-webui/docs/commit/cec50ab4d4b659558ca1ccd4b5e6fc024f05fb83)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for Chinese (Simplified & Traditional) and Bosnian (Latin) were enhanced and expanded.
### Fixed
- 🛠️ MCP tool calls are now correctly routed to the appropriate server when multiple streamable-http MCP servers are enabled, preventing "Tool not found" errors. [#17817](https://github.com/open-webui/open-webui/issues/17817)
- 🛠️ External tool servers (OpenAPI/MCP) now properly process and return tool results to the model, restoring functionality that was broken in v0.6.31. [#17764](https://github.com/open-webui/open-webui/issues/17764)
- 🔧 User valve detection now correctly identifies valves in imported tool code, ensuring gear icons appear in the integrations menu for all tools with user valves. [#17765](https://github.com/open-webui/open-webui/issues/17765)
- 🔐 MCP OAuth discovery now correctly handles multi-tenant configurations by including subpaths in metadata URL discovery. [#17768](https://github.com/open-webui/open-webui/issues/17768)
- 🗄️ Milvus query operations now correctly use -1 instead of None for unlimited queries, preventing TypeError exceptions. [#17769](https://github.com/open-webui/open-webui/pull/17769), [#17088](https://github.com/open-webui/open-webui/issues/17088)
- 📁 File upload error messages are now displayed when files are modified during upload, preventing user confusion on Android and Windows devices. [#17777](https://github.com/open-webui/open-webui/pull/17777)
- 🎨 MessageInput Integrations button hover effect now displays correctly with proper visual feedback. [#17767](https://github.com/open-webui/open-webui/pull/17767)
- 🎯 "Set as default" label positioning is fixed to ensure it remains clickable in all scenarios, including multi-model configurations. [#17779](https://github.com/open-webui/open-webui/pull/17779)
- 🎛️ Floating buttons now correctly retrieve message context by using the proper messageId parameter in createMessagesList calls. [#17823](https://github.com/open-webui/open-webui/pull/17823)
- 📌 Pinned chats are now properly cleared from the sidebar after archiving all chats, ensuring UI consistency without requiring a page refresh. [#17832](https://github.com/open-webui/open-webui/pull/17832)
- 🗑️ Delete confirmation modals now properly truncate long names for Notes, Prompts, Tools, and Functions to prevent modal overflow. [#17812](https://github.com/open-webui/open-webui/pull/17812)
- 🌐 Internationalization function calls now use proper Svelte store subscription syntax, preventing "i18n.t is not a function" errors on the model creation page. [#17819](https://github.com/open-webui/open-webui/pull/17819)
- 🎨 Playground chat interface button layout is corrected to prevent vertical text rendering for Assistant/User role buttons. [#17819](https://github.com/open-webui/open-webui/pull/17819)
- 🏷️ UI text truncation is improved across multiple components including usernames in admin panels, arena model names, model tags, and filter tags to prevent layout overflow issues. [#17805](https://github.com/open-webui/open-webui/pull/17805), [#17803](https://github.com/open-webui/open-webui/pull/17803), [#17791](https://github.com/open-webui/open-webui/pull/17791), [#17796](https://github.com/open-webui/open-webui/pull/17796)
## [0.6.31] - 2025-09-25
### Added
- 🔌 MCP (streamable HTTP) server support was added alongside existing OpenAPI server integration, allowing users to connect both server types through an improved server configuration interface. [#15932](https://github.com/open-webui/open-webui/issues/15932) [#16651](https://github.com/open-webui/open-webui/pull/16651), [Commit](https://github.com/open-webui/open-webui/commit/fd7385c3921eb59af76a26f4c475aedb38ce2406), [Commit](https://github.com/open-webui/open-webui/commit/777e81f7a8aca957a359d51df8388e5af4721a68), [Commit](https://github.com/open-webui/open-webui/commit/de7f7b3d855641450f8e5aac34fbae0665e0b80e), [Commit](https://github.com/open-webui/open-webui/commit/f1bbf3a91e4713039364b790e886e59b401572d0), [Commit](https://github.com/open-webui/open-webui/commit/c55afc42559c32a6f0c8beb0f1bb18e9360ab8af), [Commit](https://github.com/open-webui/open-webui/commit/61f20acf61f4fe30c0e5b0180949f6e1a8cf6524)
- 🔐 To enable MCP server authentication, OAuth 2.1 dynamic client registration was implemented with secure automatic client registration, encrypted session management, and seamless authentication flows. [Commit](https://github.com/open-webui/open-webui/commit/972be4eda5a394c111e849075f94099c9c0dd9aa), [Commit](https://github.com/open-webui/open-webui/commit/77e971dd9fbeee806e2864e686df5ec75e82104b), [Commit](https://github.com/open-webui/open-webui/commit/879abd7feea3692a2f157da4a458d30f27217508), [Commit](https://github.com/open-webui/open-webui/commit/422d38fd114b1ebd8a7dbb114d64e14791e67d7a), [Docs:#709](https://github.com/open-webui/docs/pull/709)
- 🛠️ External & Built-In Tools can now support rich UI element embedding ([Docs](https://docs.openwebui.com/features/plugin/tools/development)), allowing tools to return HTML content and interactive iframes that display directly within chat conversations with configurable security settings. [Commit](https://github.com/open-webui/open-webui/commit/07c5b25bc8b63173f406feb3ba183d375fedee6a), [Commit](https://github.com/open-webui/open-webui/commit/a5d8882bba7933a2c2c31c0a1405aba507c370bb), [Commit](https://github.com/open-webui/open-webui/commit/7be5b7f50f498de97359003609fc5993a172f084), [Commit](https://github.com/open-webui/open-webui/commit/a89ffccd7e96705a4a40e845289f4fcf9c4ae596)
- 📝 Note editor now supports drag-and-drop reordering of list items with visual drag handles, making list organization more intuitive and efficient. [Commit](https://github.com/open-webui/open-webui/commit/e4e97e727e9b4971f1c363b1280ca3a101599d88), [Commit](https://github.com/open-webui/open-webui/commit/aeb5288a3c7a6e9e0a47b807cc52f870c1b7dbe6)
- 🔍 Search modal was enhanced with quick action buttons for starting new conversations and creating notes, with intelligent content pre-population from search queries. [Commit](https://github.com/open-webui/open-webui/commit/aa6f63a335e172fec1dc94b2056541f52c1167a6), [Commit](https://github.com/open-webui/open-webui/commit/612a52d7bb7dbe9fa0bbbc8ac0a552d2b9801146), [Commit](https://github.com/open-webui/open-webui/commit/b03529b006f3148e895b1094584e1ab129ecac5b)
- 🛠️ Tool user valve configuration interface was added to the integrations menu, displaying clickable gear icon buttons with tooltips for tools that support user-specific settings, making personal tool configurations easily accessible. [Commit](https://github.com/open-webui/open-webui/commit/27d61307cdce97ed11a05ec13fc300249d6022cd)
- 👥 Channel access control was enhanced to require write permissions for posting, editing, and deleting messages, while read-only users can view content but cannot contribute. [#17543](https://github.com/open-webui/open-webui/pull/17543)
- 💬 Channel models now support image processing, allowing AI assistants to view and analyze images shared in conversation threads. [Commit](https://github.com/open-webui/open-webui/commit/9f0010e234a6f40782a66021435d3c02b9c23639)
- 🌐 Attach Webpage button was added to the message input menu, providing a user-friendly modal interface for attaching web content and YouTube videos as an alternative to the existing URL syntax. [#17534](https://github.com/open-webui/open-webui/pull/17534)
- 🔐 Redis session storage support was added for OAuth redirects, providing better state handling in multi-pod Kubernetes deployments and resolving CSRF mismatch errors. [#17223](https://github.com/open-webui/open-webui/pull/17223), [#15373](https://github.com/open-webui/open-webui/issues/15373)
- 🔍 Ollama Cloud web search integration was added as a new search engine option, providing access to web search functionality through Ollama's cloud infrastructure. [Commit](https://github.com/open-webui/open-webui/commit/e06489d92baca095b8f376fbef223298c7772579), [Commit](https://github.com/open-webui/open-webui/commit/4b6d34438bcfc45463dc7a9cb984794b32c1f0a1), [Commit](https://github.com/open-webui/open-webui/commit/05c46008da85357dc6890b846789dfaa59f4a520), [Commit](https://github.com/open-webui/open-webui/commit/fe65fe0b97ec5a8fff71592ff04a25c8e123d108), [Docs:#708](https://github.com/open-webui/docs/pull/708)
- 🔍 Perplexity Websearch API integration was added as a new search engine option, providing access to Perplexity's new web search functionality. [#17756](https://github.com/open-webui/open-webui/issues/17756), [Commit](https://github.com/open-webui/open-webui/pull/17747/commits/7f411dd5cc1c29733216f79e99eeeed0406a2afe)
- ☁️ OneDrive integration was improved to support separate client IDs for personal and business authentication, enabling both integrations to work simultaneously. [#17619](https://github.com/open-webui/open-webui/pull/17619), [Docs](https://docs.openwebui.com/tutorials/integrations/onedrive-sharepoint), [Docs](https://docs.openwebui.com/getting-started/env-configuration/#onedrive)
- 📝 Pending user overlay content now supports markdown formatting, enabling rich text display for custom messages similar to banner functionality. [#17681](https://github.com/open-webui/open-webui/pull/17681)
- 🎨 Image generation model selection was centralized to enable dynamic model override in function calls, allowing pipes and tools to specify different models than the global default while maintaining backward compatibility. [#17689](https://github.com/open-webui/open-webui/pull/17689)
- 🎨 Interface design was modernized with updated visual styling, improved spacing, and refined component layouts across modals, sidebar, settings, and navigation elements. [Commit](https://github.com/open-webui/open-webui/commit/27a91cc80a24bda0a3a188bc3120a8ab57b00881), [Commit](https://github.com/open-webui/open-webui/commit/4ad743098615f9c58daa9df392f31109aeceeb16), [Commit](https://github.com/open-webui/open-webui/commit/fd7385c3921eb59af76a26f4c475aedb38ce2406)
- 📊 Notes query performance was optimized through database-level filtering and separated access control logic, reducing memory usage and eliminating N+1 query problems for better scalability. [#17607](https://github.com/open-webui/open-webui/pull/17607) [Commit](https://github.com/open-webui/open-webui/pull/17747/commits/da661756fa7eec754270e6dd8c67cbf74a28a17f)
- ⚡ Page loading performance was optimized by deferring API requests until components are actually opened, including ChangelogModal, ModelSelector, RecursiveFolder, ArchivedChatsModal, and SearchModal. [#17542](https://github.com/open-webui/open-webui/pull/17542), [#17555](https://github.com/open-webui/open-webui/pull/17555), [#17557](https://github.com/open-webui/open-webui/pull/17557), [#17541](https://github.com/open-webui/open-webui/pull/17541), [#17640](https://github.com/open-webui/open-webui/pull/17640)
- ⚡ Bundle size was reduced by 1.58MB through optimized highlight.js language support, improving page loading speed and reducing bandwidth usage. [#17645](https://github.com/open-webui/open-webui/pull/17645)
- ⚡ Editor collaboration functionality was refactored to reduce package size by 390KB and minimize compilation errors, improving build performance and reliability. [#17593](https://github.com/open-webui/open-webui/pull/17593)
- ♿ Enhanced user interface accessibility through the addition of unique element IDs, improving targeting for testing, styling, and assistive technologies while providing better semantic markup for screen readers and accessibility tools. [#17746](https://github.com/open-webui/open-webui/pull/17746)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for Portuguese (Brazil), Chinese (Simplified and Traditional), Korean, Irish, Spanish, Finnish, French, Kabyle, Russian, and Catalan were enhanced and improved.
### Fixed
- 🛡️ SVG content security was enhanced by implementing DOMPurify sanitization to prevent XSS attacks through malicious SVG elements, ensuring safe rendering of user-generated SVG content. [Commit](https://github.com/open-webui/open-webui/pull/17747/commits/750a659a9fee7687e667d9d755e17b8a0c77d557)
- ☁️ OneDrive attachment menu rendering issues were resolved by restructuring the submenu interface from dropdown to tabbed navigation, preventing menu items from being hidden or clipped due to overflow constraints. [#17554](https://github.com/open-webui/open-webui/issues/17554), [Commit](https://github.com/open-webui/open-webui/pull/17747/commits/90e4b49b881b644465831cc3028bb44f0f7a2196)
- 💬 Attached conversation references now persist throughout the entire chat session, ensuring models can continue querying referenced conversations after multiple conversation turns. [#17750](https://github.com/open-webui/open-webui/issues/17750)
- 🔍 Search modal text box focus issues after pinning or unpinning chats were resolved, allowing users to properly exit the search interface by clicking outside the text box. [#17743](https://github.com/open-webui/open-webui/issues/17743)
- 🔍 Search function chat list is now properly updated in real-time when chats are created or deleted, eliminating stale search results and preview loading failures. [#17741](https://github.com/open-webui/open-webui/issues/17741)
- 💬 Chat jitter and delayed code block expansion in multi-model sessions were resolved by reverting dynamic CodeEditor loading, restoring stable rendering behavior. [#17715](https://github.com/open-webui/open-webui/pull/17715), [#17684](https://github.com/open-webui/open-webui/issues/17684)
- 📎 File upload handling was improved to properly recognize uploaded files even when no accompanying text message is provided, resolving issues where attachments were ignored in custom prompts. [#17492](https://github.com/open-webui/open-webui/issues/17492)
- 💬 Chat conversation referencing within projects was restored by including foldered chats in the reference menu, allowing users to properly quote conversations from within their project scope. [#17530](https://github.com/open-webui/open-webui/issues/17530)
- 🔍 RAG query generation is now skipped when all attached files are set to full context mode, preventing unnecessary retrieval operations and improving system efficiency. [#17744](https://github.com/open-webui/open-webui/pull/17744)
- 💾 Memory leaks in file handling and HTTP connections are prevented through proper resource cleanup, ensuring stable memory usage during large file downloads and processing operations. [#17608](https://github.com/open-webui/open-webui/pull/17608)
- 🔐 OAuth access token refresh errors are resolved by properly implementing async/await patterns, preventing "coroutine object has no attribute get" failures during token expiry. [#17585](https://github.com/open-webui/open-webui/issues/17585), [#17678](https://github.com/open-webui/open-webui/issues/17678)
- ⚙️ Valve behavior was improved to properly handle default values and array types, ensuring only explicitly set values are persisted while maintaining consistent distinction between custom and default valve states. [#17664](https://github.com/open-webui/open-webui/pull/17664)
- 🔍 Hybrid search functionality was enhanced to handle inconsistent parameter types and prevent failures when collection results are None, empty, or in unexpected formats. [#17617](https://github.com/open-webui/open-webui/pull/17617)
- 📁 Empty folder deletion is now allowed regardless of chat deletion permission restrictions, resolving cases where users couldn't remove folders after deleting all contained chats. [#17683](https://github.com/open-webui/open-webui/pull/17683)
- 📝 Rich text editor console errors were resolved by adding proper error handling when the TipTap editor view is not available or not yet mounted. [#17697](https://github.com/open-webui/open-webui/issues/17697)
- 🗒️ Hidden models are now properly excluded from the notes section dropdown and default model selection, preventing users from accessing models they shouldn't see. [#17722](https://github.com/open-webui/open-webui/pull/17722)
- 🖼️ AI-generated image download filenames now use a clean, translatable "Generated Image" format instead of potentially problematic response text, improving file management and compatibility. [#17721](https://github.com/open-webui/open-webui/pull/17721)
- 🎨 Toggle switch display issues in the Integrations interface are fixed, preventing background highlighting and obscuring on hover. [#17564](https://github.com/open-webui/open-webui/issues/17564)
### Changed
- 👥 Channel permissions now require write access for message posting, editing, and deletion, with existing user groups defaulting to read-only access requiring manual admin migration to write permissions for full participation.
- ☁️ OneDrive environment variable configuration was updated to use separate ONEDRIVE_CLIENT_ID_PERSONAL and ONEDRIVE_CLIENT_ID_BUSINESS variables for better client ID separation, while maintaining backward compatibility with the legacy ONEDRIVE_CLIENT_ID variable. [Docs](https://docs.openwebui.com/tutorials/integrations/onedrive-sharepoint), [Docs](https://docs.openwebui.com/getting-started/env-configuration/#onedrive)
## [0.6.30] - 2025-09-17
### Added
- 🔑 Microsoft Entra ID authentication type support was added for Azure OpenAI connections, enabling enhanced security and streamlined authentication workflows.
### Fixed
- ☁️ OneDrive integration was fixed after recent breakage, restoring reliable account connectivity and file access.
## [0.6.29] - 2025-09-17
### Added
- 🎨 The chat input menu has been completely overhauled with a revolutionary new design, consolidating attachments under a unified '+' button, organizing integrations into a streamlined options menu, and introducing powerful, interactive selectors for attaching chats, notes, and knowledge base items. [Commit](https://github.com/open-webui/open-webui/commit/a68342d5a887e36695e21f8c2aec593b159654ff), [Commit](https://github.com/open-webui/open-webui/commit/96b8aaf83ff341fef432649366bc5155bac6cf20), [Commit](https://github.com/open-webui/open-webui/commit/4977e6d50f7b931372c96dd5979ca635d58aeb78), [Commit](https://github.com/open-webui/open-webui/commit/d973db829f7ec98b8f8fe7d3b2822d588e79f94e), [Commit](https://github.com/open-webui/open-webui/commit/d4c628de09654df76653ad9bce9cb3263e2f27c8), [Commit](https://github.com/open-webui/open-webui/commit/cd740f436db4ea308dbede14ef7ff56e8126f51b), [Commit](https://github.com/open-webui/open-webui/commit/5c2db102d06b5c18beb248d795682ff422e9b6d1), [Commit](https://github.com/open-webui/open-webui/commit/031cf38655a1a2973194d2eaa0fbbd17aca8ee92), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/3ed0a6d11fea1a054e0bc8aa8dfbe417c7c53e51), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/eadec9e86e01bc8f9fb90dfe7a7ae4fc3bfa6420), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/c03ca7270e64e3a002d321237160c0ddaf2bb129), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/b53ddfbd19aa94e9cbf7210acb31c3cfafafa5fe), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/c923461882fcde30ae297a95e91176c95b9b72e1)
- 🤖 AI models can now be mentioned in channels to automatically generate responses, enabling multi-model conversations where mentioned models participate directly in threaded discussions with full context awareness. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/4fe97d8794ee18e087790caab9e5d82886006145)
- 💬 The Channels feature now utilizes the modern rich text editor, including support for '/', '@', and '#' command suggestions. [Commit](https://github.com/open-webui/open-webui/commit/06c1426e14ac0dfaf723485dbbc9723a4d89aba9), [Commit](https://github.com/open-webui/open-webui/commit/02f7c3258b62970ce79716f75d15467a96565054)
- 📎 Channel message input now supports direct paste functionality for images and files from the clipboard, streamlining content sharing workflows. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/6549fc839f86c40c26c2ef4dedcaf763a9304418)
- ⚙️ Models can now be configured with default features (Web Search, Image Generation) and filters that automatically activate when a user selects the model. [Commit](https://github.com/open-webui/open-webui/commit/9a555478273355a5177bfc7f7211c64778e4c8de), [Commit](https://github.com/open-webui/open-webui/commit/384a53b339820068e92f7eaea0d9f3e0536c19c2), [Commit](https://github.com/open-webui/open-webui/commit/d7f43bfc1a30c065def8c50d77c2579c1a3c5c67), [Commit](https://github.com/open-webui/open-webui/commit/6a67a2217cc5946ad771e479e3a37ac213210748)
- 💬 The ability to reference other chats as context within a conversation was added via the attachment menu. [Commit](https://github.com/open-webui/open-webui/commit/e097bbdf11ae4975c622e086df00d054291cdeb3), [Commit](https://github.com/open-webui/open-webui/commit/f3cd2ffb18e7dedbe88430f9ae7caa6b3cfd79d0), [Commit](https://github.com/open-webui/open-webui/commit/74263c872c5d574a9bb0944d7984f748dc772dba), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/aa8ab349ed2fcb46d1cf994b9c0de2ec2ea35d0d), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/025eef754f0d46789981defd473d001e3b1d0ca2)
- 🎨 The command suggestion UI for prompts ('/'), models ('@'), and knowledge ('#') was completely overhauled with a more responsive and keyboard-navigable interface. [Commit](https://github.com/open-webui/open-webui/commit/6b69c4da0fb9329ccf7024483960e070cf52ccab), [Commit](https://github.com/open-webui/open-webui/commit/06a6855f844456eceaa4d410c93379460e208202), [Commit](https://github.com/open-webui/open-webui/commit/c55f5578280b936cf581a743df3703e3db1afd54), [Commit](https://github.com/open-webui/open-webui/commit/f68d1ba394d4423d369f827894cde99d760b2402)
- 👥 User and channel suggestions were added to the mention system, enabling '@' mentions for users and models, and '#' mentions for channels with searchable user lookup and clickable navigation. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/bbd1d2b58c89b35daea234f1fc9208f2af840899), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/aef1e06f0bb72065a25579c982dd49157e320268), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/779db74d7e9b7b00d099b7d65cfbc8a831e74690)
- 📁 Folder functionality was enhanced with custom background image support, improved drag-and-drop capabilities for moving folders to root level, and better menu interactions. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/2a234829f5dfdfde27fdfd30591caa908340efb4), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/2b1ee8b0dc5f7c0caaafdd218f20705059fa72e2), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/b1e5bc8e490745f701909c19b6a444b67c04660e), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/3e584132686372dfeef187596a7c557aa5f48308)
- ☁️ OneDrive integration configuration now supports selecting between personal and work/school account types via ENABLE_ONEDRIVE_PERSONAL and ENABLE_ONEDRIVE_BUSINESS environment variables. [#17354](https://github.com/open-webui/open-webui/pull/17354), [Commit](https://github.com/open-webui/open-webui/commit/e1e3009a30f9808ce06582d81a60e391f5ca09ec), [Docs:#697](https://github.com/open-webui/docs/pull/697)
- ⚡ Mermaid.js is now dynamically loaded on demand, significantly reducing first-screen loading time and improving initial page performance. [#17476](https://github.com/open-webui/open-webui/issues/17476), [#17477](https://github.com/open-webui/open-webui/pull/17477)
- ⚡ Azure MSAL browser library is now dynamically loaded on demand, reducing initial bundle size by 730KB and improving first-screen loading speed. [#17479](https://github.com/open-webui/open-webui/pull/17479)
- ⚡ CodeEditor component is now dynamically loaded on demand, reducing initial bundle size by 1MB and improving first-screen loading speed. [#17498](https://github.com/open-webui/open-webui/pull/17498)
- ⚡ Hugging Face Transformers library is now dynamically loaded on demand, reducing initial bundle size by 1.9MB and improving first-screen loading speed. [#17499](https://github.com/open-webui/open-webui/pull/17499)
- ⚡ jsPDF and html2canvas-pro libraries are now dynamically loaded on demand, reducing initial bundle size by 980KB and improving first-screen loading speed. [#17502](https://github.com/open-webui/open-webui/pull/17502)
- ⚡ Leaflet mapping library is now dynamically loaded on demand, reducing initial bundle size by 454KB and improving first-screen loading speed. [#17503](https://github.com/open-webui/open-webui/pull/17503)
- 📊 OpenTelemetry metrics collection was enhanced to properly handle HTTP 500 errors and ensure metrics are recorded even during exceptions. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/b14617a653c6bdcfd3102c12f971924fd1faf572)
- 🔒 OAuth token retrieval logic was refactored, improving the reliability and consistency of authentication handling across the backend. [Commit](https://github.com/open-webui/open-webui/commit/6c0a5fa91cdbf6ffb74667ee61ca96bebfdfbc50)
- 💻 Code block output processing was improved to handle Python execution results more reliably, along with refined visual styling and button layouts. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/0e5320c39e308ff97f2ca9e289618af12479eb6e)
- ⚡ Message input processing was optimized to skip unnecessary text variable handling when input is empty, improving performance. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/e1386fe80b77126a12dabc4ad058abe9b024b275)
- 📄 Individual chat PDF export was added to the sidebar chat menu, allowing users to export single conversations as PDF documents with both stylized and plain text options. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/d041d58bb619689cd04a391b4f8191b23941ca62)
- 🛠️ Function validation was enhanced with improved valve validation and better error handling during function loading and synchronization. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/e66e0526ed6a116323285f79f44237538b6c75e6), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/8edfd29102e0a61777b23d3575eaa30be37b59a5)
- 🔔 Notification toast interaction was enhanced with drag detection to prevent accidental clicks and added keyboard support for accessibility. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/621e7679c427b6f0efa85f95235319238bf171ad)
- 🗓️ Improved date and time formatting dynamically adapts to the selected language, ensuring consistent localization across the UI. [#17409](https://github.com/open-webui/open-webui/pull/17409), [Commit](https://github.com/open-webui/open-webui/commit/2227f24bd6d861b1fad8d2cabacf7d62ce137d0c)
- 🔒 Feishu SSO integration was added, allowing users to authenticate via Feishu. [#17284](https://github.com/open-webui/open-webui/pull/17284), [Docs:#685](https://github.com/open-webui/docs/pull/685)
- 🔠 Toggle filters in the chat input options menu are now sorted alphabetically for easier navigation. [Commit](https://github.com/open-webui/open-webui/commit/ca853ca4656180487afcd84230d214f91db52533)
- 🎨 Long chat titles in the sidebar are now truncated to prevent text overflow and maintain a clean layout. [#17356](https://github.com/open-webui/open-webui/pull/17356)
- 🎨 Temporary chat interface design was refined with improved layout and visual consistency. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/67549dcadd670285d491bd41daf3d081a70fd094), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/2ca34217e68f3b439899c75881dfb050f49c9eb2), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/fb02ec52a5df3f58b53db4ab3a995c15f83503cd)
- 🎨 Download icon consistency was improved across the entire interface by standardizing the icon component used in menus, functions, tools, and export features. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/596be451ece7e11b5cd25465d49670c27a1cb33f)
- 🎨 Settings interface was enhanced with improved iconography and reorganized the 'Chats' section into 'Data Controls' for better clarity. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/8bf0b40fdd978b5af6548a6e1fb3aabd90bcd5cd)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for Finnish, German, Kabyle, Portuguese (Brazil), Simplified Chinese, Spanish (Spain), and Traditional Chinese (Taiwan) were enhanced and expanded.
### Fixed
- 📚 Knowledge base permission logic was corrected to ensure private collection owners can access their own content when embedding bypass is enabled. [#17432](https://github.com/open-webui/open-webui/issues/17432), [Commit](https://github.com/open-webui/open-webui/commit/a51f0c30ec1472d71487eab3e15d0351a2716b12)
- ⚙️ Connection URL editing in Admin Settings now properly saves changes instead of reverting to original values, fixing issues with both Ollama and OpenAI-compatible endpoints. [#17435](https://github.com/open-webui/open-webui/issues/17435), [Commit](https://github.com/open-webui/open-webui/commit/e4c864de7eb0d577843a80688677ce3659d1f81f)
- 📊 Usage information collection from Google models was corrected to handle providers that send usage data alongside content chunks instead of separately. [#17421](https://github.com/open-webui/open-webui/pull/17421), [Commit](https://github.com/open-webui/open-webui/commit/c2f98a4cd29ed738f395fef09c42ab8e73cd46a0)
- ⚙️ Settings modal scrolling issue was resolved by moving image compression controls to a dedicated modal, preventing the main settings from becoming scrollable out of view. [#17474](https://github.com/open-webui/open-webui/issues/17474), [Commit](https://github.com/open-webui/open-webui/commit/fed5615c19b0045a55b0be426b468a57bfda4b66)
- 📁 Folder click behavior was improved to prevent accidental actions by implementing proper double-click detection and timing delays for folder expansion and selection. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/19e3214997170eea6ee92452e8c778e04a28e396)
- 🔐 Access control component reliability was improved with better null checking and error handling for group permissions and private access scenarios. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/c8780a7f934c5e49a21b438f2f30232f83cf75d2), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/32015c392dbc6b7367a6a91d9e173e675ea3402c)
- 🔗 The citation modal now correctly displays and links to external web page sources in addition to internal documents. [Commit](https://github.com/open-webui/open-webui/commit/9208a84185a7e59524f00a7576667d493c3ac7d4)
- 🔗 Web and YouTube attachment handling was fixed, ensuring their content is now reliably processed and included in the chat context for retrieval. [Commit](https://github.com/open-webui/open-webui/commit/210197fd438b52080cda5d6ce3d47b92cdc264c8)
- 📂 Large file upload failures are resolved by correcting the processing logic for scenarios where document embedding is bypassed. [Commit](https://github.com/open-webui/open-webui/commit/051b6daa8299fd332503bd584563556e2ae6adab)
- 🌐 Rich text input placeholder text now correctly updates when the interface language is switched, ensuring proper localization. [#17473](https://github.com/open-webui/open-webui/pull/17473), [Commit](https://github.com/open-webui/open-webui/commit/77358031f5077e6efe5cc08d8d4e5831c7cd1cd9)
- 📊 Llama.cpp server timing metrics are now correctly parsed and displayed by fixing a typo in the response handling. [#17350](https://github.com/open-webui/open-webui/issues/17350), [Commit](https://github.com/open-webui/open-webui/commit/cf72f5503f39834b9da44ebbb426a3674dad0caa)
- 🛠️ Filter functions with file_handler configuration now properly handle messages without file attachments, preventing runtime errors. [#17423](https://github.com/open-webui/open-webui/pull/17423)
- 🔔 Channel notification delivery was fixed to properly handle background task execution and user access checking. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/1077b2ac8b96e49c2ad2620e76eb65bbb2a3a1f3)
### Changed
- 📝 Prompt template variables are now optional by default instead of being forced as required, allowing flexible workflows with optional metadata fields. [#17447](https://github.com/open-webui/open-webui/issues/17447), [Commit](https://github.com/open-webui/open-webui/commit/d5824b1b495fcf86e57171769bcec2a0f698b070), [Docs:#696](https://github.com/open-webui/docs/pull/696)
- 🛠️ Direct external tool servers now require explicit user selection from the input interface instead of being automatically included in conversations, providing better control over tool usage. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/0f04227c34ca32746c43a9323e2df32299fcb6af), [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/99bba12de279dd55c55ded35b2e4f819af1c9ab5)
- 📺 Widescreen mode option was removed from Channels interface, with all channel layouts now using full-width display. [Commit](https://github.com/open-webui/open-webui/pull/17420/commits/d46b7b8f1b99a8054b55031fe935c8a16d5ec956)
- 🎛️ The plain textarea input option was deprecated, and the custom text editor is now the standard for all chat inputs. [Commit](https://github.com/open-webui/open-webui/commit/153afd832ccd12a1e5fd99b085008d080872c161)
## [0.6.28] - 2025-09-10
### Added

LICENSE_NOTICE (new file, 11 lines)

@ -0,0 +1,11 @@
# Open WebUI Multi-License Notice
This repository contains code governed by multiple licenses based on the date and origin of contribution:
1. All code committed prior to commit a76068d69cd59568b920dfab85dc573dbbb8f131 is licensed under the MIT License (see LICENSE_HISTORY).
2. All code committed from commit a76068d69cd59568b920dfab85dc573dbbb8f131 up to and including commit 60d84a3aae9802339705826e9095e272e3c83623 is licensed under the BSD 3-Clause License (see LICENSE_HISTORY).
3. All code contributed or modified after commit 60d84a3aae9802339705826e9095e272e3c83623 is licensed under the Open WebUI License (see LICENSE).
For details on which commits are covered by which license, refer to LICENSE_HISTORY.


@ -248,7 +248,7 @@ Discover upcoming features on our roadmap in the [Open WebUI Documentation](http
## License 📜

This project contains code under multiple licenses. The current codebase includes components licensed under the Open WebUI License, with an additional requirement to preserve the "Open WebUI" branding, as well as prior contributions under their respective original licenses. For a detailed record of license changes and the applicable terms for each section of the code, please refer to [LICENSE_HISTORY](./LICENSE_HISTORY). For complete and up-to-date licensing details, please see the [LICENSE](./LICENSE) and [LICENSE_HISTORY](./LICENSE_HISTORY) files.

## Support 💬


@ -222,10 +222,11 @@ class PersistentConfig(Generic[T]):
class AppConfig:
_state: dict[str, PersistentConfig]
_redis: Union[redis.Redis, redis.cluster.RedisCluster] = None
_redis_key_prefix: str
_state: dict[str, PersistentConfig]
def __init__(
self,
redis_url: Optional[str] = None,
@ -233,9 +234,8 @@ class AppConfig:
redis_cluster: Optional[bool] = False,
redis_key_prefix: str = "open-webui",
):
super().__setattr__("_state", {})
super().__setattr__("_redis_key_prefix", redis_key_prefix)
if redis_url:
super().__setattr__("_redis_key_prefix", redis_key_prefix)
super().__setattr__(
"_redis",
get_redis_connection(
@ -246,6 +246,8 @@ class AppConfig:
),
)
super().__setattr__("_state", {})
def __setattr__(self, key, value):
if isinstance(value, PersistentConfig):
self._state[key] = value
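The ordering of these `super().__setattr__` calls matters because `AppConfig` overrides `__setattr__`: ordinary assignments are routed into the `_state` dict, so the internal bookkeeping attributes have to be seeded through the base-class setter before any normal assignment happens. A minimal sketch of that pattern, with the Redis wiring omitted and an illustrative stand-in `PersistentConfig`:

```python
from typing import Any, Generic, TypeVar

T = TypeVar("T")


class PersistentConfig(Generic[T]):
    # Stand-in for the real PersistentConfig: just a named, persisted value.
    def __init__(self, env_name: str, config_path: str, value: T):
        self.env_name = env_name
        self.config_path = config_path
        self.value = value


class AppConfig:
    _state: dict[str, PersistentConfig]

    def __init__(self, redis_key_prefix: str = "open-webui"):
        # Bypass the custom __setattr__ below; otherwise it would try to
        # read self._state before it exists.
        super().__setattr__("_state", {})
        super().__setattr__("_redis_key_prefix", redis_key_prefix)

    def __setattr__(self, key: str, value: Any) -> None:
        if isinstance(value, PersistentConfig):
            # Registering a config entry: keep the wrapper in _state.
            self._state[key] = value
        else:
            # Plain assignment updates the wrapped value.
            self._state[key].value = value

    def __getattr__(self, key: str) -> Any:
        return self._state[key].value


config = AppConfig()
config.WEBUI_NAME = PersistentConfig("WEBUI_NAME", "ui.name", "Open WebUI")
config.WEBUI_NAME = "My WebUI"   # routed through the custom __setattr__
print(config.WEBUI_NAME)         # -> "My WebUI"
```

The real class also wires up Redis persistence and a key prefix; this sketch keeps only the attribute-routing behaviour, and names like `WEBUI_NAME` are illustrative.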
@ -603,8 +605,8 @@ def load_oauth_providers():
OAUTH_PROVIDERS.clear()
if GOOGLE_CLIENT_ID.value and GOOGLE_CLIENT_SECRET.value:
def google_oauth_register(client: OAuth):
def google_oauth_register(oauth: OAuth):
client.register(
return oauth.register(
name="google",
client_id=GOOGLE_CLIENT_ID.value,
client_secret=GOOGLE_CLIENT_SECRET.value,
@ -631,8 +633,8 @@ def load_oauth_providers():
and MICROSOFT_CLIENT_TENANT_ID.value
):
def microsoft_oauth_register(client: OAuth):
def microsoft_oauth_register(oauth: OAuth):
client.register(
return oauth.register(
name="microsoft",
client_id=MICROSOFT_CLIENT_ID.value,
client_secret=MICROSOFT_CLIENT_SECRET.value,
@ -656,8 +658,8 @@ def load_oauth_providers():
if GITHUB_CLIENT_ID.value and GITHUB_CLIENT_SECRET.value:
def github_oauth_register(client: OAuth):
def github_oauth_register(oauth: OAuth):
client.register(
return oauth.register(
name="github",
client_id=GITHUB_CLIENT_ID.value,
client_secret=GITHUB_CLIENT_SECRET.value,
@ -688,7 +690,7 @@ def load_oauth_providers():
and OPENID_PROVIDER_URL.value
):
def oidc_oauth_register(client: OAuth):
def oidc_oauth_register(oauth: OAuth):
client_kwargs = {
"scope": OAUTH_SCOPES.value,
**(
@ -714,7 +716,7 @@ def load_oauth_providers():
% ("S256", OAUTH_CODE_CHALLENGE_METHOD.value)
)
client.register(
return oauth.register(
name="oidc",
client_id=OAUTH_CLIENT_ID.value,
client_secret=OAUTH_CLIENT_SECRET.value,
@ -731,8 +733,8 @@ def load_oauth_providers():
if FEISHU_CLIENT_ID.value and FEISHU_CLIENT_SECRET.value:
def feishu_oauth_register(client: OAuth):
def feishu_oauth_register(oauth: OAuth):
client.register(
return oauth.register(
name="feishu",
client_id=FEISHU_CLIENT_ID.value,
client_secret=FEISHU_CLIENT_SECRET.value,
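Each provider block defines a small registration callback that now takes the shared Authlib `OAuth` registry and returns the registered client, rather than registering on a passed-in `client`. A hedged sketch of how such callbacks can be collected and later applied to a single `OAuth` instance; the exact shape of the `OAUTH_PROVIDERS` entries here is an assumption for illustration:

```python
# Minimal sketch, assuming Authlib's Starlette OAuth registry; the entry
# shape ("register" callback keyed by provider name) is illustrative, not
# the exact upstream structure.
from authlib.integrations.starlette_client import OAuth

OAUTH_PROVIDERS: dict[str, dict] = {}


def load_oauth_providers(google_client_id: str = "", google_client_secret: str = ""):
    OAUTH_PROVIDERS.clear()

    if google_client_id and google_client_secret:

        def google_oauth_register(oauth: OAuth):
            # Returning the registered client lets callers use it directly.
            return oauth.register(
                name="google",
                client_id=google_client_id,
                client_secret=google_client_secret,
                server_metadata_url="https://accounts.google.com/.well-known/openid-configuration",
                client_kwargs={"scope": "openid email profile"},
            )

        OAUTH_PROVIDERS["google"] = {"register": google_oauth_register}


# Later, a single OAuth instance is populated from the collected callbacks:
oauth = OAuth()
load_oauth_providers("my-client-id", "my-client-secret")  # placeholder credentials
for name, provider in OAUTH_PROVIDERS.items():
    provider["register"](oauth)
```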
@ -1215,6 +1217,11 @@ USER_PERMISSIONS_WORKSPACE_MODELS_ALLOW_PUBLIC_SHARING = (
== "true"
)
USER_PERMISSIONS_NOTES_ALLOW_PUBLIC_SHARING = (
os.environ.get("USER_PERMISSIONS_NOTES_ALLOW_PUBLIC_SHARING", "False").lower()
== "true"
)
USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_PUBLIC_SHARING = (
os.environ.get(
"USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_PUBLIC_SHARING", "False"
@ -1352,6 +1359,7 @@ DEFAULT_USER_PERMISSIONS = {
"public_knowledge": USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_PUBLIC_SHARING,
"public_prompts": USER_PERMISSIONS_WORKSPACE_PROMPTS_ALLOW_PUBLIC_SHARING,
"public_tools": USER_PERMISSIONS_WORKSPACE_TOOLS_ALLOW_PUBLIC_SHARING,
"public_notes": USER_PERMISSIONS_NOTES_ALLOW_PUBLIC_SHARING,
},
"chat": {
"controls": USER_PERMISSIONS_CHAT_CONTROLS,
@ -1997,16 +2005,23 @@ if VECTOR_DB == "chroma":
# this uses the model defined in the Dockerfile ENV variable. If you dont use docker or docker based deployments such as k8s, the default embedding model will be used (sentence-transformers/all-MiniLM-L6-v2)
# Milvus
MILVUS_URI = os.environ.get("MILVUS_URI", f"{DATA_DIR}/vector_db/milvus.db")
MILVUS_DB = os.environ.get("MILVUS_DB", "default")
MILVUS_TOKEN = os.environ.get("MILVUS_TOKEN", None)
MILVUS_INDEX_TYPE = os.environ.get("MILVUS_INDEX_TYPE", "HNSW")
MILVUS_METRIC_TYPE = os.environ.get("MILVUS_METRIC_TYPE", "COSINE")
MILVUS_HNSW_M = int(os.environ.get("MILVUS_HNSW_M", "16"))
MILVUS_HNSW_EFCONSTRUCTION = int(os.environ.get("MILVUS_HNSW_EFCONSTRUCTION", "100"))
MILVUS_IVF_FLAT_NLIST = int(os.environ.get("MILVUS_IVF_FLAT_NLIST", "128"))
MILVUS_DISKANN_MAX_DEGREE = int(os.environ.get("MILVUS_DISKANN_MAX_DEGREE", "56"))
MILVUS_DISKANN_SEARCH_LIST_SIZE = int(
os.environ.get("MILVUS_DISKANN_SEARCH_LIST_SIZE", "100")
)
ENABLE_MILVUS_MULTITENANCY_MODE = (
os.environ.get("ENABLE_MILVUS_MULTITENANCY_MODE", "false").lower() == "true"
)
# Hyphens not allowed, need to use underscores in collection names
MILVUS_COLLECTION_PREFIX = os.environ.get("MILVUS_COLLECTION_PREFIX", "open_webui")
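The new DISKANN settings sit alongside the existing HNSW and IVF_FLAT knobs, and which set applies depends on `MILVUS_INDEX_TYPE`. A hedged sketch of how these environment values could be assembled into a Milvus index configuration; the exact wiring inside Open WebUI's Milvus client may differ, and the parameter keys follow Milvus' documented index options:

```python
import os

# Read the same environment variables defined above and build a single
# index-configuration dict from them (illustrative assembly only).
MILVUS_INDEX_TYPE = os.environ.get("MILVUS_INDEX_TYPE", "HNSW")
MILVUS_METRIC_TYPE = os.environ.get("MILVUS_METRIC_TYPE", "COSINE")

if MILVUS_INDEX_TYPE == "DISKANN":
    index_params = {
        "max_degree": int(os.environ.get("MILVUS_DISKANN_MAX_DEGREE", "56")),
        "search_list_size": int(os.environ.get("MILVUS_DISKANN_SEARCH_LIST_SIZE", "100")),
    }
elif MILVUS_INDEX_TYPE == "IVF_FLAT":
    index_params = {"nlist": int(os.environ.get("MILVUS_IVF_FLAT_NLIST", "128"))}
else:  # HNSW (default)
    index_params = {
        "M": int(os.environ.get("MILVUS_HNSW_M", "16")),
        "efConstruction": int(os.environ.get("MILVUS_HNSW_EFCONSTRUCTION", "100")),
    }

index_config = {
    "index_type": MILVUS_INDEX_TYPE,
    "metric_type": MILVUS_METRIC_TYPE,
    "params": index_params,
}
print(index_config)
```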
# Qdrant
QDRANT_URI = os.environ.get("QDRANT_URI", None)
@ -2169,10 +2184,20 @@ ENABLE_ONEDRIVE_INTEGRATION = PersistentConfig(
os.getenv("ENABLE_ONEDRIVE_INTEGRATION", "False").lower() == "true",
)
ONEDRIVE_CLIENT_ID = PersistentConfig(
"ONEDRIVE_CLIENT_ID",
"onedrive.client_id",
os.environ.get("ONEDRIVE_CLIENT_ID", ""),
)
ENABLE_ONEDRIVE_PERSONAL = (
os.environ.get("ENABLE_ONEDRIVE_PERSONAL", "True").lower() == "true"
)
ENABLE_ONEDRIVE_BUSINESS = (
os.environ.get("ENABLE_ONEDRIVE_BUSINESS", "True").lower() == "true"
)
ONEDRIVE_CLIENT_ID = os.environ.get("ONEDRIVE_CLIENT_ID", "")
ONEDRIVE_CLIENT_ID_PERSONAL = os.environ.get(
"ONEDRIVE_CLIENT_ID_PERSONAL", ONEDRIVE_CLIENT_ID
)
ONEDRIVE_CLIENT_ID_BUSINESS = os.environ.get(
"ONEDRIVE_CLIENT_ID_BUSINESS", ONEDRIVE_CLIENT_ID
)
ONEDRIVE_SHAREPOINT_URL = PersistentConfig(
@ -2285,6 +2310,18 @@ DOCLING_SERVER_URL = PersistentConfig(
os.getenv("DOCLING_SERVER_URL", "http://docling:5001"),
)
docling_params = os.getenv("DOCLING_PARAMS", "")
try:
docling_params = json.loads(docling_params)
except json.JSONDecodeError:
docling_params = {}
DOCLING_PARAMS = PersistentConfig(
"DOCLING_PARAMS",
"rag.docling_params",
docling_params,
)
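`DOCLING_PARAMS` is read as a JSON object and silently falls back to an empty dict when the value is missing or malformed; `AUDIO_TTS_OPENAI_PARAMS` further down follows the same pattern. A small standalone illustration of that parsing behaviour (the helper name and the example key are ours, not part of the codebase):

```python
import json
import os


def json_params_from_env(name: str) -> dict:
    # Mirrors the pattern above: treat the env var as a JSON object,
    # fall back to {} on empty or invalid input.
    raw = os.environ.get(name, "")
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return {}


os.environ["DOCLING_PARAMS"] = '{"do_table_structure": true}'   # illustrative key
print(json_params_from_env("DOCLING_PARAMS"))   # {'do_table_structure': True}

os.environ["DOCLING_PARAMS"] = "not-json"
print(json_params_from_env("DOCLING_PARAMS"))   # {}
```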
DOCLING_DO_OCR = PersistentConfig(
"DOCLING_DO_OCR",
"rag.docling_do_ocr",
@ -2778,6 +2815,12 @@ WEB_SEARCH_TRUST_ENV = PersistentConfig(
)
OLLAMA_CLOUD_WEB_SEARCH_API_KEY = PersistentConfig(
"OLLAMA_CLOUD_WEB_SEARCH_API_KEY",
"rag.web.search.ollama_cloud_api_key",
os.getenv("OLLAMA_CLOUD_API_KEY", ""),
)
SEARXNG_QUERY_URL = PersistentConfig(
"SEARXNG_QUERY_URL",
"rag.web.search.searxng_query_url",
@ -3353,6 +3396,19 @@ AUDIO_TTS_OPENAI_API_KEY = PersistentConfig(
os.getenv("AUDIO_TTS_OPENAI_API_KEY", OPENAI_API_KEY),
)
audio_tts_openai_params = os.getenv("AUDIO_TTS_OPENAI_PARAMS", "")
try:
audio_tts_openai_params = json.loads(audio_tts_openai_params)
except json.JSONDecodeError:
audio_tts_openai_params = {}
AUDIO_TTS_OPENAI_PARAMS = PersistentConfig(
"AUDIO_TTS_OPENAI_PARAMS",
"audio.tts.openai.params",
audio_tts_openai_params,
)
AUDIO_TTS_API_KEY = PersistentConfig(
"AUDIO_TTS_API_KEY",
"audio.tts.api_key",


@ -474,6 +474,10 @@ ENABLE_OAUTH_ID_TOKEN_COOKIE = (
os.environ.get("ENABLE_OAUTH_ID_TOKEN_COOKIE", "True").lower() == "true"
)
OAUTH_CLIENT_INFO_ENCRYPTION_KEY = os.environ.get(
"OAUTH_CLIENT_INFO_ENCRYPTION_KEY", WEBUI_SECRET_KEY
)
OAUTH_SESSION_TOKEN_ENCRYPTION_KEY = os.environ.get(
"OAUTH_SESSION_TOKEN_ENCRYPTION_KEY", WEBUI_SECRET_KEY
)
@ -547,16 +551,16 @@ else:
CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES = os.environ.get(
"CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES", "10"
"CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES", "30"
)
if CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES == "":
CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES = 10
CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES = 30
else:
try:
CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES = int(CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES)
except Exception:
CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES = 10
CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES = 30
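The default retry budget moves from 10 to 30 here, with the same fallback applied to empty and unparsable values. The same behaviour can be expressed as a small helper (the helper name is ours, not part of the codebase):

```python
import os


def int_from_env(name: str, default: int) -> int:
    # Empty or unparsable values fall back to the default, matching the
    # inline logic above.
    raw = os.environ.get(name, str(default))
    if raw == "":
        return default
    try:
        return int(raw)
    except Exception:
        return default


CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES = int_from_env(
    "CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES", 30
)
```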
####################################


@ -19,6 +19,7 @@ from fastapi import (
from starlette.responses import Response, StreamingResponse
from open_webui.constants import ERROR_MESSAGES
from open_webui.socket.main import (
get_event_call,
get_event_emitter,
@ -60,8 +61,20 @@ def get_function_module_by_id(request: Request, pipe_id: str):
function_module, _, _ = get_function_module_from_cache(request, pipe_id)
if hasattr(function_module, "valves") and hasattr(function_module, "Valves"):
Valves = function_module.Valves
valves = Functions.get_function_valves_by_id(pipe_id)
function_module.valves = function_module.Valves(**(valves if valves else {}))
if valves:
try:
function_module.valves = Valves(
**{k: v for k, v in valves.items() if v is not None}
)
except Exception as e:
log.exception(f"Error loading valves for function {pipe_id}: {e}")
raise e
else:
function_module.valves = Valves()
return function_module
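The new valve-loading path drops `None` entries from the stored valve dict before instantiating `Valves`, so unset fields fall back to the defaults declared on the model instead of being overridden with `None`. A hedged sketch with a pydantic-style `Valves` model (the field names are illustrative):

```python
from pydantic import BaseModel


class Valves(BaseModel):
    # Illustrative valve fields; real tools/functions define their own.
    api_url: str = "http://localhost:8080"
    timeout: int = 30


def load_valves(stored: dict | None) -> Valves:
    if stored:
        # Filter out None values so the declared defaults apply instead of
        # failing validation or persisting explicit Nones.
        return Valves(**{k: v for k, v in stored.items() if v is not None})
    return Valves()


print(load_valves({"api_url": "https://example.com", "timeout": None}))
# api_url='https://example.com' timeout=30
```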
@ -70,8 +83,13 @@ async def get_function_models(request):
pipe_models = []
for pipe in pipes:
try:
function_module = get_function_module_by_id(request, pipe.id)
has_user_valves = False
if hasattr(function_module, "UserValves"):
has_user_valves = True
# Check if function is a manifold
if hasattr(function_module, "pipes"):
sub_pipes = []
@ -110,6 +128,7 @@ async def get_function_models(request):
"created": pipe.created_at, "created": pipe.created_at,
"owned_by": "openai", "owned_by": "openai",
"pipe": pipe_flag, "pipe": pipe_flag,
"has_user_valves": has_user_valves,
} }
) )
else: else:
@ -127,8 +146,12 @@ async def get_function_models(request):
"created": pipe.created_at, "created": pipe.created_at,
"owned_by": "openai", "owned_by": "openai",
"pipe": pipe_flag, "pipe": pipe_flag,
"has_user_valves": has_user_valves,
} }
) )
except Exception as e:
log.exception(e)
continue
return pipe_models return pipe_models
@ -222,7 +245,7 @@ async def generate_function_chat_completion(
oauth_token = None
try:
if request.cookies.get("oauth_session_id", None):
oauth_token = request.app.state.oauth_manager.get_oauth_token(
oauth_token = await request.app.state.oauth_manager.get_oauth_token(
user.id,
request.cookies.get("oauth_session_id", None),
)
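The added `await` matters because `get_oauth_token` is a coroutine function: calling it without `await` returns a coroutine object, and any later `.get(...)` on that object fails with the "'coroutine' object has no attribute 'get'" error referenced in the changelog above. A minimal reproduction with simplified names:

```python
import asyncio


class OAuthManager:
    async def get_oauth_token(self, user_id: str, session_id: str) -> dict:
        # Simplified stand-in for the real async token lookup.
        return {"access_token": "abc123", "token_type": "Bearer"}


async def main():
    manager = OAuthManager()

    token = manager.get_oauth_token("user-1", "session-1")      # missing await
    print(type(token))                                          # <class 'coroutine'>
    # token.get("access_token") -> AttributeError: 'coroutine' object has no attribute 'get'
    token.close()  # silence the "coroutine was never awaited" warning

    token = await manager.get_oauth_token("user-1", "session-1")
    print(token.get("access_token"))                            # abc123


asyncio.run(main())
```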


@ -8,6 +8,7 @@ import shutil
import sys
import time
import random
import re
from uuid import uuid4
@ -50,6 +51,11 @@ from starlette.middleware.sessions import SessionMiddleware
from starlette.responses import Response, StreamingResponse
from starlette.datastructures import Headers
from starsessions import (
SessionMiddleware as StarSessionsMiddleware,
SessionAutoloadMiddleware,
)
from starsessions.stores.redis import RedisStore
from open_webui.utils import logger from open_webui.utils import logger
from open_webui.utils.audit import AuditLevel, AuditLoggingMiddleware from open_webui.utils.audit import AuditLevel, AuditLoggingMiddleware
@ -110,9 +116,6 @@ from open_webui.config import (
OLLAMA_API_CONFIGS, OLLAMA_API_CONFIGS,
# OpenAI # OpenAI
ENABLE_OPENAI_API, ENABLE_OPENAI_API,
ONEDRIVE_CLIENT_ID,
ONEDRIVE_SHAREPOINT_URL,
ONEDRIVE_SHAREPOINT_TENANT_ID,
OPENAI_API_BASE_URLS, OPENAI_API_BASE_URLS,
OPENAI_API_KEYS, OPENAI_API_KEYS,
OPENAI_API_CONFIGS, OPENAI_API_CONFIGS,
@ -172,13 +175,14 @@ from open_webui.config import (
AUDIO_STT_AZURE_LOCALES, AUDIO_STT_AZURE_LOCALES,
AUDIO_STT_AZURE_BASE_URL, AUDIO_STT_AZURE_BASE_URL,
AUDIO_STT_AZURE_MAX_SPEAKERS, AUDIO_STT_AZURE_MAX_SPEAKERS,
AUDIO_TTS_API_KEY,
AUDIO_TTS_ENGINE, AUDIO_TTS_ENGINE,
AUDIO_TTS_MODEL, AUDIO_TTS_MODEL,
AUDIO_TTS_VOICE,
AUDIO_TTS_OPENAI_API_BASE_URL, AUDIO_TTS_OPENAI_API_BASE_URL,
AUDIO_TTS_OPENAI_API_KEY, AUDIO_TTS_OPENAI_API_KEY,
AUDIO_TTS_OPENAI_PARAMS,
AUDIO_TTS_API_KEY,
AUDIO_TTS_SPLIT_ON, AUDIO_TTS_SPLIT_ON,
AUDIO_TTS_VOICE,
AUDIO_TTS_AZURE_SPEECH_REGION, AUDIO_TTS_AZURE_SPEECH_REGION,
AUDIO_TTS_AZURE_SPEECH_BASE_URL, AUDIO_TTS_AZURE_SPEECH_BASE_URL,
AUDIO_TTS_AZURE_SPEECH_OUTPUT_FORMAT, AUDIO_TTS_AZURE_SPEECH_OUTPUT_FORMAT,
@ -244,6 +248,7 @@ from open_webui.config import (
EXTERNAL_DOCUMENT_LOADER_API_KEY, EXTERNAL_DOCUMENT_LOADER_API_KEY,
TIKA_SERVER_URL, TIKA_SERVER_URL,
DOCLING_SERVER_URL, DOCLING_SERVER_URL,
DOCLING_PARAMS,
DOCLING_DO_OCR, DOCLING_DO_OCR,
DOCLING_FORCE_OCR, DOCLING_FORCE_OCR,
DOCLING_OCR_ENGINE, DOCLING_OCR_ENGINE,
@ -272,6 +277,7 @@ from open_webui.config import (
WEB_SEARCH_CONCURRENT_REQUESTS, WEB_SEARCH_CONCURRENT_REQUESTS,
WEB_SEARCH_TRUST_ENV, WEB_SEARCH_TRUST_ENV,
WEB_SEARCH_DOMAIN_FILTER_LIST, WEB_SEARCH_DOMAIN_FILTER_LIST,
OLLAMA_CLOUD_WEB_SEARCH_API_KEY,
JINA_API_KEY, JINA_API_KEY,
SEARCHAPI_API_KEY, SEARCHAPI_API_KEY,
SEARCHAPI_ENGINE, SEARCHAPI_ENGINE,
@ -303,14 +309,17 @@ from open_webui.config import (
GOOGLE_PSE_ENGINE_ID, GOOGLE_PSE_ENGINE_ID,
GOOGLE_DRIVE_CLIENT_ID, GOOGLE_DRIVE_CLIENT_ID,
GOOGLE_DRIVE_API_KEY, GOOGLE_DRIVE_API_KEY,
ONEDRIVE_CLIENT_ID, ENABLE_ONEDRIVE_INTEGRATION,
ONEDRIVE_CLIENT_ID_PERSONAL,
ONEDRIVE_CLIENT_ID_BUSINESS,
ONEDRIVE_SHAREPOINT_URL, ONEDRIVE_SHAREPOINT_URL,
ONEDRIVE_SHAREPOINT_TENANT_ID, ONEDRIVE_SHAREPOINT_TENANT_ID,
ENABLE_ONEDRIVE_PERSONAL,
ENABLE_ONEDRIVE_BUSINESS,
ENABLE_RAG_HYBRID_SEARCH, ENABLE_RAG_HYBRID_SEARCH,
ENABLE_RAG_LOCAL_WEB_FETCH, ENABLE_RAG_LOCAL_WEB_FETCH,
ENABLE_WEB_LOADER_SSL_VERIFICATION, ENABLE_WEB_LOADER_SSL_VERIFICATION,
ENABLE_GOOGLE_DRIVE_INTEGRATION, ENABLE_GOOGLE_DRIVE_INTEGRATION,
ENABLE_ONEDRIVE_INTEGRATION,
UPLOAD_DIR, UPLOAD_DIR,
EXTERNAL_WEB_SEARCH_URL, EXTERNAL_WEB_SEARCH_URL,
EXTERNAL_WEB_SEARCH_API_KEY, EXTERNAL_WEB_SEARCH_API_KEY,
@ -448,6 +457,7 @@ from open_webui.utils.models import (
get_all_models, get_all_models,
get_all_base_models, get_all_base_models,
check_model_access, check_model_access,
get_filtered_models,
) )
from open_webui.utils.chat import ( from open_webui.utils.chat import (
generate_chat_completion as chat_completion_handler, generate_chat_completion as chat_completion_handler,
@ -466,7 +476,12 @@ from open_webui.utils.auth import (
get_verified_user, get_verified_user,
) )
from open_webui.utils.plugin import install_tool_and_function_dependencies from open_webui.utils.plugin import install_tool_and_function_dependencies
from open_webui.utils.oauth import OAuthManager from open_webui.utils.oauth import (
OAuthManager,
OAuthClientManager,
decrypt_data,
OAuthClientInformationFull,
)
from open_webui.utils.security_headers import SecurityHeadersMiddleware from open_webui.utils.security_headers import SecurityHeadersMiddleware
from open_webui.utils.redis import get_redis_connection from open_webui.utils.redis import get_redis_connection
@ -596,9 +611,14 @@ app = FastAPI(
lifespan=lifespan, lifespan=lifespan,
) )
# For Open WebUI OIDC/OAuth2
oauth_manager = OAuthManager(app) oauth_manager = OAuthManager(app)
app.state.oauth_manager = oauth_manager app.state.oauth_manager = oauth_manager
# For Integrations
oauth_client_manager = OAuthClientManager(app)
app.state.oauth_client_manager = oauth_client_manager
app.state.instance_id = None app.state.instance_id = None
app.state.config = AppConfig( app.state.config = AppConfig(
redis_url=REDIS_URL, redis_url=REDIS_URL,
@ -817,6 +837,7 @@ app.state.config.EXTERNAL_DOCUMENT_LOADER_URL = EXTERNAL_DOCUMENT_LOADER_URL
app.state.config.EXTERNAL_DOCUMENT_LOADER_API_KEY = EXTERNAL_DOCUMENT_LOADER_API_KEY app.state.config.EXTERNAL_DOCUMENT_LOADER_API_KEY = EXTERNAL_DOCUMENT_LOADER_API_KEY
app.state.config.TIKA_SERVER_URL = TIKA_SERVER_URL app.state.config.TIKA_SERVER_URL = TIKA_SERVER_URL
app.state.config.DOCLING_SERVER_URL = DOCLING_SERVER_URL app.state.config.DOCLING_SERVER_URL = DOCLING_SERVER_URL
app.state.config.DOCLING_PARAMS = DOCLING_PARAMS
app.state.config.DOCLING_DO_OCR = DOCLING_DO_OCR app.state.config.DOCLING_DO_OCR = DOCLING_DO_OCR
app.state.config.DOCLING_FORCE_OCR = DOCLING_FORCE_OCR app.state.config.DOCLING_FORCE_OCR = DOCLING_FORCE_OCR
app.state.config.DOCLING_OCR_ENGINE = DOCLING_OCR_ENGINE app.state.config.DOCLING_OCR_ENGINE = DOCLING_OCR_ENGINE
@ -882,6 +903,8 @@ app.state.config.BYPASS_WEB_SEARCH_WEB_LOADER = BYPASS_WEB_SEARCH_WEB_LOADER
app.state.config.ENABLE_GOOGLE_DRIVE_INTEGRATION = ENABLE_GOOGLE_DRIVE_INTEGRATION app.state.config.ENABLE_GOOGLE_DRIVE_INTEGRATION = ENABLE_GOOGLE_DRIVE_INTEGRATION
app.state.config.ENABLE_ONEDRIVE_INTEGRATION = ENABLE_ONEDRIVE_INTEGRATION app.state.config.ENABLE_ONEDRIVE_INTEGRATION = ENABLE_ONEDRIVE_INTEGRATION
app.state.config.OLLAMA_CLOUD_WEB_SEARCH_API_KEY = OLLAMA_CLOUD_WEB_SEARCH_API_KEY
app.state.config.SEARXNG_QUERY_URL = SEARXNG_QUERY_URL app.state.config.SEARXNG_QUERY_URL = SEARXNG_QUERY_URL
app.state.config.YACY_QUERY_URL = YACY_QUERY_URL app.state.config.YACY_QUERY_URL = YACY_QUERY_URL
app.state.config.YACY_USERNAME = YACY_USERNAME app.state.config.YACY_USERNAME = YACY_USERNAME
@ -1076,11 +1099,15 @@ app.state.config.AUDIO_STT_AZURE_LOCALES = AUDIO_STT_AZURE_LOCALES
app.state.config.AUDIO_STT_AZURE_BASE_URL = AUDIO_STT_AZURE_BASE_URL app.state.config.AUDIO_STT_AZURE_BASE_URL = AUDIO_STT_AZURE_BASE_URL
app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS = AUDIO_STT_AZURE_MAX_SPEAKERS app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS = AUDIO_STT_AZURE_MAX_SPEAKERS
app.state.config.TTS_OPENAI_API_BASE_URL = AUDIO_TTS_OPENAI_API_BASE_URL
app.state.config.TTS_OPENAI_API_KEY = AUDIO_TTS_OPENAI_API_KEY
app.state.config.TTS_ENGINE = AUDIO_TTS_ENGINE app.state.config.TTS_ENGINE = AUDIO_TTS_ENGINE
app.state.config.TTS_MODEL = AUDIO_TTS_MODEL app.state.config.TTS_MODEL = AUDIO_TTS_MODEL
app.state.config.TTS_VOICE = AUDIO_TTS_VOICE app.state.config.TTS_VOICE = AUDIO_TTS_VOICE
app.state.config.TTS_OPENAI_API_BASE_URL = AUDIO_TTS_OPENAI_API_BASE_URL
app.state.config.TTS_OPENAI_API_KEY = AUDIO_TTS_OPENAI_API_KEY
app.state.config.TTS_OPENAI_PARAMS = AUDIO_TTS_OPENAI_PARAMS
app.state.config.TTS_API_KEY = AUDIO_TTS_API_KEY app.state.config.TTS_API_KEY = AUDIO_TTS_API_KEY
app.state.config.TTS_SPLIT_ON = AUDIO_TTS_SPLIT_ON app.state.config.TTS_SPLIT_ON = AUDIO_TTS_SPLIT_ON
@ -1151,12 +1178,32 @@ class RedirectMiddleware(BaseHTTPMiddleware):
path = request.url.path path = request.url.path
query_params = dict(parse_qs(urlparse(str(request.url)).query)) query_params = dict(parse_qs(urlparse(str(request.url)).query))
redirect_params = {}
# Check for the specific watch path and the presence of 'v' parameter # Check for the specific watch path and the presence of 'v' parameter
if path.endswith("/watch") and "v" in query_params: if path.endswith("/watch") and "v" in query_params:
# Extract the first 'v' parameter # Extract the first 'v' parameter
video_id = query_params["v"][0] youtube_video_id = query_params["v"][0]
encoded_video_id = urlencode({"youtube": video_id}) redirect_params["youtube"] = youtube_video_id
redirect_url = f"/?{encoded_video_id}"
if "shared" in query_params and len(query_params["shared"]) > 0:
# PWA share_target support
text = query_params["shared"][0]
if text:
urls = re.match(r"https://\S+", text)
if urls:
from open_webui.retrieval.loaders.youtube import _parse_video_id
if youtube_video_id := _parse_video_id(urls[0]):
redirect_params["youtube"] = youtube_video_id
else:
redirect_params["load-url"] = urls[0]
else:
redirect_params["q"] = text
if redirect_params:
redirect_url = f"/?{urlencode(redirect_params)}"
return RedirectResponse(url=redirect_url) return RedirectResponse(url=redirect_url)
# Proceed with the normal flow of other requests # Proceed with the normal flow of other requests
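The new share_target branch above chooses between three redirects; a condensed sketch of the same decision as a pure function (function name and sample inputs are made up; the video-id parser is passed in, standing in for the `_parse_video_id` helper imported above):

import re
from urllib.parse import urlencode

def build_share_redirect(text: str, parse_video_id) -> str:
    # Mirrors the PWA share handling: YouTube links open as a video,
    # other URLs are loaded directly, and plain text becomes a search query.
    params = {}
    match = re.match(r"https://\S+", text)
    if match:
        if video_id := parse_video_id(match[0]):
            params["youtube"] = video_id
        else:
            params["load-url"] = match[0]
    else:
        params["q"] = text
    return f"/?{urlencode(params)}"

# Example with a stand-in parser that only recognizes watch URLs:
fake_parse = lambda url: "dQw4w9WgXcQ" if "youtube.com/watch" in url else None
print(build_share_redirect("https://docs.python.org/3/", fake_parse))  # /?load-url=...
print(build_share_redirect("hello world", fake_parse))                 # /?q=hello+world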
@ -1291,33 +1338,6 @@ if audit_level != AuditLevel.NONE:
async def get_models( async def get_models(
request: Request, refresh: bool = False, user=Depends(get_verified_user) request: Request, refresh: bool = False, user=Depends(get_verified_user)
): ):
def get_filtered_models(models, user):
filtered_models = []
for model in models:
if model.get("arena"):
if has_access(
user.id,
type="read",
access_control=model.get("info", {})
.get("meta", {})
.get("access_control", {}),
):
filtered_models.append(model)
continue
model_info = Models.get_model_by_id(model["id"])
if model_info:
if (
(user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL)
or user.id == model_info.user_id
or has_access(
user.id, type="read", access_control=model_info.access_control
)
):
filtered_models.append(model)
return filtered_models
all_models = await get_all_models(request, refresh=refresh, user=user) all_models = await get_all_models(request, refresh=refresh, user=user)
models = [] models = []
@ -1353,11 +1373,6 @@ async def get_models(
) )
) )
# Filter out models that the user does not have access to
if (
user.role == "user"
or (user.role == "admin" and not BYPASS_ADMIN_ACCESS_CONTROL)
) and not BYPASS_MODEL_ACCESS_CONTROL:
models = get_filtered_models(models, user) models = get_filtered_models(models, user)
log.debug( log.debug(
@ -1487,7 +1502,7 @@ async def chat_completion(
} }
if metadata.get("chat_id") and (user and user.role != "admin"): if metadata.get("chat_id") and (user and user.role != "admin"):
if metadata["chat_id"] != "local": if not metadata["chat_id"].startswith("local:"):
chat = Chats.get_chat_by_id_and_user_id(metadata["chat_id"], user.id) chat = Chats.get_chat_by_id_and_user_id(metadata["chat_id"], user.id)
if chat is None: if chat is None:
raise HTTPException( raise HTTPException(
@ -1514,6 +1529,7 @@ async def chat_completion(
response = await chat_completion_handler(request, form_data, user) response = await chat_completion_handler(request, form_data, user)
if metadata.get("chat_id") and metadata.get("message_id"): if metadata.get("chat_id") and metadata.get("message_id"):
try: try:
if not metadata["chat_id"].startswith("local:"):
Chats.upsert_message_to_chat_by_id_and_message_id( Chats.upsert_message_to_chat_by_id_and_message_id(
metadata["chat_id"], metadata["chat_id"],
metadata["message_id"], metadata["message_id"],
@ -1541,6 +1557,7 @@ async def chat_completion(
if metadata.get("chat_id") and metadata.get("message_id"): if metadata.get("chat_id") and metadata.get("message_id"):
# Update the chat message with the error # Update the chat message with the error
try: try:
if not metadata["chat_id"].startswith("local:"):
Chats.upsert_message_to_chat_by_id_and_message_id( Chats.upsert_message_to_chat_by_id_and_message_id(
metadata["chat_id"], metadata["chat_id"],
metadata["message_id"], metadata["message_id"],
@ -1562,6 +1579,14 @@ async def chat_completion(
except: except:
pass pass
finally:
try:
if mcp_clients := metadata.get("mcp_clients"):
for client in mcp_clients.values():
await client.disconnect()
except Exception as e:
log.debug(f"Error cleaning up: {e}")
pass
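The repeated startswith("local:") checks in this route encode a single convention: chat IDs prefixed with "local:" appear to belong to temporary, client-side chats, so the ownership check and message persistence are skipped for them. A tiny helper expressing the same rule (hypothetical name, not part of the diff):

def is_temporary_chat(chat_id: str) -> bool:
    # Chats with IDs like "local:<uuid>" are not persisted through
    # Chats.upsert_message_to_chat_by_id_and_message_id above.
    return chat_id.startswith("local:")

assert is_temporary_chat("local:9b1c-example")
assert not is_temporary_chat("b1946ac9-example")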
if ( if (
metadata.get("session_id") metadata.get("session_id")
@ -1730,6 +1755,14 @@ async def get_app_config(request: Request):
"enable_admin_chat_access": ENABLE_ADMIN_CHAT_ACCESS, "enable_admin_chat_access": ENABLE_ADMIN_CHAT_ACCESS,
"enable_google_drive_integration": app.state.config.ENABLE_GOOGLE_DRIVE_INTEGRATION, "enable_google_drive_integration": app.state.config.ENABLE_GOOGLE_DRIVE_INTEGRATION,
"enable_onedrive_integration": app.state.config.ENABLE_ONEDRIVE_INTEGRATION, "enable_onedrive_integration": app.state.config.ENABLE_ONEDRIVE_INTEGRATION,
**(
{
"enable_onedrive_personal": ENABLE_ONEDRIVE_PERSONAL,
"enable_onedrive_business": ENABLE_ONEDRIVE_BUSINESS,
}
if app.state.config.ENABLE_ONEDRIVE_INTEGRATION
else {}
),
} }
if user is not None if user is not None
else {} else {}
@ -1767,7 +1800,8 @@ async def get_app_config(request: Request):
"api_key": GOOGLE_DRIVE_API_KEY.value, "api_key": GOOGLE_DRIVE_API_KEY.value,
}, },
"onedrive": { "onedrive": {
"client_id": ONEDRIVE_CLIENT_ID.value, "client_id_personal": ONEDRIVE_CLIENT_ID_PERSONAL,
"client_id_business": ONEDRIVE_CLIENT_ID_BUSINESS,
"sharepoint_url": ONEDRIVE_SHAREPOINT_URL.value, "sharepoint_url": ONEDRIVE_SHAREPOINT_URL.value,
"sharepoint_tenant_id": ONEDRIVE_SHAREPOINT_TENANT_ID.value, "sharepoint_tenant_id": ONEDRIVE_SHAREPOINT_TENANT_ID.value,
}, },
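The **({...} if flag else {}) spread used in the config payload above adds keys only when the parent feature is enabled; a standalone illustration with placeholder values:

enable_onedrive = True

config = {
    "enable_onedrive_integration": enable_onedrive,
    # Per-account flags are only exposed when the integration itself is on.
    **(
        {"enable_onedrive_personal": True, "enable_onedrive_business": False}
        if enable_onedrive
        else {}
    ),
}
print(config)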
@ -1887,17 +1921,76 @@ async def get_current_usage(user=Depends(get_verified_user)):
# OAuth Login & Callback # OAuth Login & Callback
############################ ############################
# SessionMiddleware is used by authlib for oauth
if len(OAUTH_PROVIDERS) > 0: # Initialize OAuth client manager with any MCP tool servers using OAuth 2.1
if len(app.state.config.TOOL_SERVER_CONNECTIONS) > 0:
for tool_server_connection in app.state.config.TOOL_SERVER_CONNECTIONS:
if tool_server_connection.get("type", "openapi") == "mcp":
server_id = tool_server_connection.get("info", {}).get("id")
auth_type = tool_server_connection.get("auth_type", "none")
if server_id and auth_type == "oauth_2.1":
oauth_client_info = tool_server_connection.get("info", {}).get(
"oauth_client_info", ""
)
oauth_client_info = decrypt_data(oauth_client_info)
app.state.oauth_client_manager.add_client(
f"mcp:{server_id}", OAuthClientInformationFull(**oauth_client_info)
)
try:
if REDIS_URL:
redis_session_store = RedisStore(
url=REDIS_URL,
prefix=(f"{REDIS_KEY_PREFIX}:session:" if REDIS_KEY_PREFIX else "session:"),
)
app.add_middleware(SessionAutoloadMiddleware)
app.add_middleware(
StarSessionsMiddleware,
store=redis_session_store,
cookie_name="owui-session",
cookie_same_site=WEBUI_SESSION_COOKIE_SAME_SITE,
cookie_https_only=WEBUI_SESSION_COOKIE_SECURE,
)
log.info("Using Redis for session")
else:
raise ValueError("No Redis URL provided")
except Exception as e:
app.add_middleware( app.add_middleware(
SessionMiddleware, SessionMiddleware,
secret_key=WEBUI_SECRET_KEY, secret_key=WEBUI_SECRET_KEY,
session_cookie="oui-session", session_cookie="owui-session",
same_site=WEBUI_SESSION_COOKIE_SAME_SITE, same_site=WEBUI_SESSION_COOKIE_SAME_SITE,
https_only=WEBUI_SESSION_COOKIE_SECURE, https_only=WEBUI_SESSION_COOKIE_SECURE,
) )
@app.get("/oauth/clients/{client_id}/authorize")
async def oauth_client_authorize(
client_id: str,
request: Request,
response: Response,
user=Depends(get_verified_user),
):
return await oauth_client_manager.handle_authorize(request, client_id=client_id)
@app.get("/oauth/clients/{client_id}/callback")
async def oauth_client_callback(
client_id: str,
request: Request,
response: Response,
user=Depends(get_verified_user),
):
return await oauth_client_manager.handle_callback(
request,
client_id=client_id,
user_id=user.id if user else None,
response=response,
)
@app.get("/oauth/{provider}/login") @app.get("/oauth/{provider}/login")
async def oauth_login(provider: str, request: Request): async def oauth_login(provider: str, request: Request):
return await oauth_manager.handle_login(request, provider) return await oauth_manager.handle_login(request, provider)
@ -1909,8 +2002,9 @@ async def oauth_login(provider: str, request: Request):
# - This is considered insecure in general, as OAuth providers do not always verify email addresses # - This is considered insecure in general, as OAuth providers do not always verify email addresses
# 3. If there is no user, and ENABLE_OAUTH_SIGNUP is true, create a user # 3. If there is no user, and ENABLE_OAUTH_SIGNUP is true, create a user
# - Email addresses are considered unique, so we fail registration if the email address is already taken # - Email addresses are considered unique, so we fail registration if the email address is already taken
@app.get("/oauth/{provider}/callback") @app.get("/oauth/{provider}/login/callback")
async def oauth_callback(provider: str, request: Request, response: Response): @app.get("/oauth/{provider}/callback") # Legacy endpoint
async def oauth_login_callback(provider: str, request: Request, response: Response):
return await oauth_manager.handle_callback(request, provider, response) return await oauth_manager.handle_callback(request, provider, response)
@ -1940,6 +2034,11 @@ async def get_manifest_json():
"purpose": "maskable", "purpose": "maskable",
}, },
], ],
"share_target": {
"action": "/",
"method": "GET",
"params": {"text": "shared"},
},
} }

View file

@ -0,0 +1,34 @@
"""Add reply_to_id column to message
Revision ID: a5c220713937
Revises: 38d63c18f30f
Create Date: 2025-09-27 02:24:18.058455
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision: str = "a5c220713937"
down_revision: Union[str, None] = "38d63c18f30f"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
# Add 'reply_to_id' column to the 'message' table for replying to messages
op.add_column(
"message",
sa.Column("reply_to_id", sa.Text(), nullable=True),
)
pass
def downgrade() -> None:
# Remove 'reply_to_id' column from the 'message' table
op.drop_column("message", "reply_to_id")
pass
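For reference, the upgrade step reduces to adding one nullable text column; a minimal sketch of the equivalent statement issued directly through SQLAlchemy (the database URL is a placeholder):

import sqlalchemy as sa

engine = sa.create_engine("sqlite:///webui.db")  # placeholder URL
with engine.begin() as conn:
    conn.execute(sa.text("ALTER TABLE message ADD COLUMN reply_to_id TEXT"))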

View file

@ -57,6 +57,10 @@ class ChannelModel(BaseModel):
#################### ####################
class ChannelResponse(ChannelModel):
write_access: bool = False
class ChannelForm(BaseModel): class ChannelForm(BaseModel):
name: str name: str
description: Optional[str] = None description: Optional[str] = None

View file

@ -366,6 +366,15 @@ class ChatTable:
except Exception: except Exception:
return False return False
def unarchive_all_chats_by_user_id(self, user_id: str) -> bool:
try:
with get_db() as db:
db.query(Chat).filter_by(user_id=user_id).update({"archived": False})
db.commit()
return True
except Exception:
return False
def update_chat_share_id_by_id( def update_chat_share_id_by_id(
self, id: str, share_id: Optional[str] self, id: str, share_id: Optional[str]
) -> Optional[ChatModel]: ) -> Optional[ChatModel]:
@ -492,11 +501,16 @@ class ChatTable:
self, self,
user_id: str, user_id: str,
include_archived: bool = False, include_archived: bool = False,
include_folders: bool = False,
skip: Optional[int] = None, skip: Optional[int] = None,
limit: Optional[int] = None, limit: Optional[int] = None,
) -> list[ChatTitleIdResponse]: ) -> list[ChatTitleIdResponse]:
with get_db() as db: with get_db() as db:
query = db.query(Chat).filter_by(user_id=user_id).filter_by(folder_id=None) query = db.query(Chat).filter_by(user_id=user_id)
if not include_folders:
query = query.filter_by(folder_id=None)
query = query.filter(or_(Chat.pinned == False, Chat.pinned == None)) query = query.filter(or_(Chat.pinned == False, Chat.pinned == None))
if not include_archived: if not include_archived:
@ -805,7 +819,7 @@ class ChatTable:
return [ChatModel.model_validate(chat) for chat in all_chats] return [ChatModel.model_validate(chat) for chat in all_chats]
def get_chats_by_folder_id_and_user_id( def get_chats_by_folder_id_and_user_id(
self, folder_id: str, user_id: str self, folder_id: str, user_id: str, skip: int = 0, limit: int = 60
) -> list[ChatModel]: ) -> list[ChatModel]:
with get_db() as db: with get_db() as db:
query = db.query(Chat).filter_by(folder_id=folder_id, user_id=user_id) query = db.query(Chat).filter_by(folder_id=folder_id, user_id=user_id)
@ -814,6 +828,11 @@ class ChatTable:
query = query.order_by(Chat.updated_at.desc()) query = query.order_by(Chat.updated_at.desc())
if skip:
query = query.offset(skip)
if limit:
query = query.limit(limit)
all_chats = query.all() all_chats = query.all()
return [ChatModel.model_validate(chat) for chat in all_chats] return [ChatModel.model_validate(chat) for chat in all_chats]
@ -943,6 +962,16 @@ class ChatTable:
return count return count
def count_chats_by_folder_id_and_user_id(self, folder_id: str, user_id: str) -> int:
with get_db() as db:
query = db.query(Chat).filter_by(user_id=user_id)
query = query.filter_by(folder_id=folder_id)
count = query.count()
log.info(f"Count of chats for folder '{folder_id}': {count}")
return count
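Together, the new skip/limit parameters and the folder count make per-folder lazy loading straightforward. A hedged usage sketch, assuming the module's usual Chats table singleton and the 60-chat default page size shown above:

from open_webui.models.chats import Chats

def iter_folder_chat_pages(folder_id: str, user_id: str, page_size: int = 60):
    total = Chats.count_chats_by_folder_id_and_user_id(folder_id, user_id)
    for offset in range(0, total, page_size):
        yield Chats.get_chats_by_folder_id_and_user_id(
            folder_id, user_id, skip=offset, limit=page_size
        )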
def delete_tag_by_id_and_user_id_and_tag_name( def delete_tag_by_id_and_user_id_and_tag_name(
self, id: str, user_id: str, tag_name: str self, id: str, user_id: str, tag_name: str
) -> bool: ) -> bool:

View file

@ -130,6 +130,17 @@ class FilesTable:
except Exception: except Exception:
return None return None
def get_file_by_id_and_user_id(self, id: str, user_id: str) -> Optional[FileModel]:
with get_db() as db:
try:
file = db.query(File).filter_by(id=id, user_id=user_id).first()
if file:
return FileModel.model_validate(file)
else:
return None
except Exception:
return None
def get_file_metadata_by_id(self, id: str) -> Optional[FileMetadataResponse]: def get_file_metadata_by_id(self, id: str) -> Optional[FileMetadataResponse]:
with get_db() as db: with get_db() as db:
try: try:

View file

@ -50,6 +50,20 @@ class FolderModel(BaseModel):
model_config = ConfigDict(from_attributes=True) model_config = ConfigDict(from_attributes=True)
class FolderMetadataResponse(BaseModel):
icon: Optional[str] = None
class FolderNameIdResponse(BaseModel):
id: str
name: str
meta: Optional[FolderMetadataResponse] = None
parent_id: Optional[str] = None
is_expanded: bool = False
created_at: int
updated_at: int
#################### ####################
# Forms # Forms
#################### ####################

View file

@ -5,6 +5,7 @@ from typing import Optional
from open_webui.internal.db import Base, get_db from open_webui.internal.db import Base, get_db
from open_webui.models.tags import TagModel, Tag, Tags from open_webui.models.tags import TagModel, Tag, Tags
from open_webui.models.users import Users, UserNameResponse
from pydantic import BaseModel, ConfigDict from pydantic import BaseModel, ConfigDict
@ -43,6 +44,7 @@ class Message(Base):
user_id = Column(Text) user_id = Column(Text)
channel_id = Column(Text, nullable=True) channel_id = Column(Text, nullable=True)
reply_to_id = Column(Text, nullable=True)
parent_id = Column(Text, nullable=True) parent_id = Column(Text, nullable=True)
content = Column(Text) content = Column(Text)
@ -60,6 +62,7 @@ class MessageModel(BaseModel):
user_id: str user_id: str
channel_id: Optional[str] = None channel_id: Optional[str] = None
reply_to_id: Optional[str] = None
parent_id: Optional[str] = None parent_id: Optional[str] = None
content: str content: str
@ -77,6 +80,7 @@ class MessageModel(BaseModel):
class MessageForm(BaseModel): class MessageForm(BaseModel):
content: str content: str
reply_to_id: Optional[str] = None
parent_id: Optional[str] = None parent_id: Optional[str] = None
data: Optional[dict] = None data: Optional[dict] = None
meta: Optional[dict] = None meta: Optional[dict] = None
@ -88,7 +92,15 @@ class Reactions(BaseModel):
count: int count: int
class MessageResponse(MessageModel): class MessageUserResponse(MessageModel):
user: Optional[UserNameResponse] = None
class MessageReplyToResponse(MessageUserResponse):
reply_to_message: Optional[MessageUserResponse] = None
class MessageResponse(MessageReplyToResponse):
latest_reply_at: Optional[int] latest_reply_at: Optional[int]
reply_count: int reply_count: int
reactions: list[Reactions] reactions: list[Reactions]
@ -107,6 +119,7 @@ class MessageTable:
"id": id, "id": id,
"user_id": user_id, "user_id": user_id,
"channel_id": channel_id, "channel_id": channel_id,
"reply_to_id": form_data.reply_to_id,
"parent_id": form_data.parent_id, "parent_id": form_data.parent_id,
"content": form_data.content, "content": form_data.content,
"data": form_data.data, "data": form_data.data,
@ -128,19 +141,32 @@ class MessageTable:
if not message: if not message:
return None return None
reactions = self.get_reactions_by_message_id(id) reply_to_message = (
replies = self.get_replies_by_message_id(id) self.get_message_by_id(message.reply_to_id)
if message.reply_to_id
else None
)
return MessageResponse( reactions = self.get_reactions_by_message_id(id)
**{ thread_replies = self.get_thread_replies_by_message_id(id)
user = Users.get_user_by_id(message.user_id)
return MessageResponse.model_validate(
{
**MessageModel.model_validate(message).model_dump(), **MessageModel.model_validate(message).model_dump(),
"latest_reply_at": replies[0].created_at if replies else None, "user": user.model_dump() if user else None,
"reply_count": len(replies), "reply_to_message": (
reply_to_message.model_dump() if reply_to_message else None
),
"latest_reply_at": (
thread_replies[0].created_at if thread_replies else None
),
"reply_count": len(thread_replies),
"reactions": reactions, "reactions": reactions,
} }
) )
def get_replies_by_message_id(self, id: str) -> list[MessageModel]: def get_thread_replies_by_message_id(self, id: str) -> list[MessageReplyToResponse]:
with get_db() as db: with get_db() as db:
all_messages = ( all_messages = (
db.query(Message) db.query(Message)
@ -148,7 +174,27 @@ class MessageTable:
.order_by(Message.created_at.desc()) .order_by(Message.created_at.desc())
.all() .all()
) )
return [MessageModel.model_validate(message) for message in all_messages]
messages = []
for message in all_messages:
reply_to_message = (
self.get_message_by_id(message.reply_to_id)
if message.reply_to_id
else None
)
messages.append(
MessageReplyToResponse.model_validate(
{
**MessageModel.model_validate(message).model_dump(),
"reply_to_message": (
reply_to_message.model_dump()
if reply_to_message
else None
),
}
)
)
return messages
def get_reply_user_ids_by_message_id(self, id: str) -> list[str]: def get_reply_user_ids_by_message_id(self, id: str) -> list[str]:
with get_db() as db: with get_db() as db:
@ -159,7 +205,7 @@ class MessageTable:
def get_messages_by_channel_id( def get_messages_by_channel_id(
self, channel_id: str, skip: int = 0, limit: int = 50 self, channel_id: str, skip: int = 0, limit: int = 50
) -> list[MessageModel]: ) -> list[MessageReplyToResponse]:
with get_db() as db: with get_db() as db:
all_messages = ( all_messages = (
db.query(Message) db.query(Message)
@ -169,11 +215,31 @@ class MessageTable:
.limit(limit) .limit(limit)
.all() .all()
) )
return [MessageModel.model_validate(message) for message in all_messages]
messages = []
for message in all_messages:
reply_to_message = (
self.get_message_by_id(message.reply_to_id)
if message.reply_to_id
else None
)
messages.append(
MessageReplyToResponse.model_validate(
{
**MessageModel.model_validate(message).model_dump(),
"reply_to_message": (
reply_to_message.model_dump()
if reply_to_message
else None
),
}
)
)
return messages
def get_messages_by_parent_id( def get_messages_by_parent_id(
self, channel_id: str, parent_id: str, skip: int = 0, limit: int = 50 self, channel_id: str, parent_id: str, skip: int = 0, limit: int = 50
) -> list[MessageModel]: ) -> list[MessageReplyToResponse]:
with get_db() as db: with get_db() as db:
message = db.get(Message, parent_id) message = db.get(Message, parent_id)
@ -193,7 +259,26 @@ class MessageTable:
if len(all_messages) < limit: if len(all_messages) < limit:
all_messages.append(message) all_messages.append(message)
return [MessageModel.model_validate(message) for message in all_messages] messages = []
for message in all_messages:
reply_to_message = (
self.get_message_by_id(message.reply_to_id)
if message.reply_to_id
else None
)
messages.append(
MessageReplyToResponse.model_validate(
{
**MessageModel.model_validate(message).model_dump(),
"reply_to_message": (
reply_to_message.model_dump()
if reply_to_message
else None
),
}
)
)
return messages
def update_message_by_id( def update_message_by_id(
self, id: str, form_data: MessageForm self, id: str, form_data: MessageForm
@ -201,8 +286,14 @@ class MessageTable:
with get_db() as db: with get_db() as db:
message = db.get(Message, id) message = db.get(Message, id)
message.content = form_data.content message.content = form_data.content
message.data = form_data.data message.data = {
message.meta = form_data.meta **(message.data if message.data else {}),
**(form_data.data if form_data.data else {}),
}
message.meta = {
**(message.meta if message.meta else {}),
**(form_data.meta if form_data.meta else {}),
}
message.updated_at = int(time.time_ns()) message.updated_at = int(time.time_ns())
db.commit() db.commit()
db.refresh(message) db.refresh(message)
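Note the changed update semantics above: data and meta are now merged key-by-key with the stored values instead of replaced wholesale, so a partial update no longer wipes existing fields. A standalone illustration of the merge rule:

stored_meta = {"pinned": True, "color": "blue"}
incoming_meta = {"color": "red"}  # partial update, e.g. MessageForm.meta

merged = {
    **(stored_meta if stored_meta else {}),
    **(incoming_meta if incoming_meta else {}),
}
print(merged)  # {'pinned': True, 'color': 'red'} -- existing keys survive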

View file

@ -2,6 +2,7 @@ import json
import time import time
import uuid import uuid
from typing import Optional from typing import Optional
from functools import lru_cache
from open_webui.internal.db import Base, get_db from open_webui.internal.db import Base, get_db
from open_webui.models.groups import Groups from open_webui.models.groups import Groups
@ -110,20 +111,72 @@ class NoteTable:
return [NoteModel.model_validate(note) for note in notes] return [NoteModel.model_validate(note) for note in notes]
def get_notes_by_user_id( def get_notes_by_user_id(
self,
user_id: str,
skip: Optional[int] = None,
limit: Optional[int] = None,
) -> list[NoteModel]:
with get_db() as db:
query = db.query(Note).filter(Note.user_id == user_id)
query = query.order_by(Note.updated_at.desc())
if skip is not None:
query = query.offset(skip)
if limit is not None:
query = query.limit(limit)
notes = query.all()
return [NoteModel.model_validate(note) for note in notes]
def get_notes_by_permission(
self, self,
user_id: str, user_id: str,
permission: str = "write", permission: str = "write",
skip: Optional[int] = None, skip: Optional[int] = None,
limit: Optional[int] = None, limit: Optional[int] = None,
) -> list[NoteModel]: ) -> list[NoteModel]:
notes = self.get_notes(skip=skip, limit=limit) with get_db() as db:
user_group_ids = {group.id for group in Groups.get_groups_by_member_id(user_id)} user_groups = Groups.get_groups_by_member_id(user_id)
return [ user_group_ids = {group.id for group in user_groups}
note
for note in notes # Order newest-first. We stream to keep memory usage low.
if note.user_id == user_id query = (
or has_access(user_id, permission, note.access_control, user_group_ids) db.query(Note)
] .order_by(Note.updated_at.desc())
.execution_options(stream_results=True)
.yield_per(256)
)
results: list[NoteModel] = []
n_skipped = 0
for note in query:
# Fast-pass #1: owner
if note.user_id == user_id:
permitted = True
# Fast-pass #2: public/open
elif note.access_control is None:
# Technically this should mean public access for both read and write, but we'll only do read for now
# We might want to change this behavior later
permitted = permission == "read"
else:
permitted = has_access(
user_id, permission, note.access_control, user_group_ids
)
if not permitted:
continue
# Apply skip AFTER permission filtering so it counts only accessible notes
if skip and n_skipped < skip:
n_skipped += 1
continue
results.append(NoteModel.model_validate(note))
if limit is not None and len(results) >= limit:
break
return results
def get_note_by_id(self, id: str) -> Optional[NoteModel]: def get_note_by_id(self, id: str) -> Optional[NoteModel]:
with get_db() as db: with get_db() as db:
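A hedged usage sketch of the new permission-aware listing, assuming the module's Notes table singleton; skip and limit are applied after the permission filter, so they count only notes the caller can actually access:

from open_webui.models.notes import Notes

page = Notes.get_notes_by_permission(
    user_id="user-123",  # placeholder ID
    permission="read",
    skip=0,
    limit=30,
)
for note in page:
    print(note.id, note.updated_at)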

View file

@ -176,6 +176,26 @@ class OAuthSessionTable:
log.error(f"Error getting OAuth session by ID: {e}") log.error(f"Error getting OAuth session by ID: {e}")
return None return None
def get_session_by_provider_and_user_id(
self, provider: str, user_id: str
) -> Optional[OAuthSessionModel]:
"""Get OAuth session by provider and user ID"""
try:
with get_db() as db:
session = (
db.query(OAuthSession)
.filter_by(provider=provider, user_id=user_id)
.first()
)
if session:
session.token = self._decrypt_token(session.token)
return OAuthSessionModel.model_validate(session)
return None
except Exception as e:
log.error(f"Error getting OAuth session by provider and user ID: {e}")
return None
def get_sessions_by_user_id(self, user_id: str) -> List[OAuthSessionModel]: def get_sessions_by_user_id(self, user_id: str) -> List[OAuthSessionModel]:
"""Get all OAuth sessions for a user""" """Get all OAuth sessions for a user"""
try: try:

View file

@ -95,6 +95,8 @@ class ToolResponse(BaseModel):
class ToolUserResponse(ToolResponse): class ToolUserResponse(ToolResponse):
user: Optional[UserResponse] = None user: Optional[UserResponse] = None
model_config = ConfigDict(extra="allow")
class ToolForm(BaseModel): class ToolForm(BaseModel):
id: str id: str

View file

@ -107,11 +107,21 @@ class UserInfoResponse(BaseModel):
role: str role: str
class UserIdNameResponse(BaseModel):
id: str
name: str
class UserInfoListResponse(BaseModel): class UserInfoListResponse(BaseModel):
users: list[UserInfoResponse] users: list[UserInfoResponse]
total: int total: int
class UserIdNameListResponse(BaseModel):
users: list[UserIdNameResponse]
total: int
class UserResponse(BaseModel): class UserResponse(BaseModel):
id: str id: str
name: str name: str
@ -210,7 +220,7 @@ class UsersTable:
filter: Optional[dict] = None, filter: Optional[dict] = None,
skip: Optional[int] = None, skip: Optional[int] = None,
limit: Optional[int] = None, limit: Optional[int] = None,
) -> UserListResponse: ) -> dict:
with get_db() as db: with get_db() as db:
query = db.query(User) query = db.query(User)

View file

@ -346,11 +346,9 @@ class Loader:
self.engine == "document_intelligence" self.engine == "document_intelligence"
and self.kwargs.get("DOCUMENT_INTELLIGENCE_ENDPOINT") != "" and self.kwargs.get("DOCUMENT_INTELLIGENCE_ENDPOINT") != ""
and ( and (
file_ext in ["pdf", "xls", "xlsx", "docx", "ppt", "pptx"] file_ext in ["pdf", "docx", "ppt", "pptx"]
or file_content_type or file_content_type
in [ in [
"application/vnd.ms-excel",
"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
"application/vnd.openxmlformats-officedocument.wordprocessingml.document", "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
"application/vnd.ms-powerpoint", "application/vnd.ms-powerpoint",
"application/vnd.openxmlformats-officedocument.presentationml.presentation", "application/vnd.openxmlformats-officedocument.presentationml.presentation",

View file

@ -127,7 +127,13 @@ def query_doc_with_hybrid_search(
hybrid_bm25_weight: float, hybrid_bm25_weight: float,
) -> dict: ) -> dict:
try: try:
if not collection_result.documents[0]: if (
not collection_result
or not hasattr(collection_result, "documents")
or not collection_result.documents
or len(collection_result.documents) == 0
or not collection_result.documents[0]
):
log.warning(f"query_doc_with_hybrid_search:no_docs {collection_name}") log.warning(f"query_doc_with_hybrid_search:no_docs {collection_name}")
return {"documents": [], "metadatas": [], "distances": []} return {"documents": [], "metadatas": [], "distances": []}
@ -621,6 +627,7 @@ def get_sources_from_items(
if knowledge_base and ( if knowledge_base and (
user.role == "admin" user.role == "admin"
or knowledge_base.user_id == user.id
or has_access(user.id, "read", knowledge_base.access_control) or has_access(user.id, "read", knowledge_base.access_control)
): ):

View file

@ -11,7 +11,7 @@ from open_webui.retrieval.vector.main import (
SearchResult, SearchResult,
GetResult, GetResult,
) )
from open_webui.retrieval.vector.utils import stringify_metadata from open_webui.retrieval.vector.utils import process_metadata
from open_webui.config import ( from open_webui.config import (
CHROMA_DATA_PATH, CHROMA_DATA_PATH,
@ -146,7 +146,7 @@ class ChromaClient(VectorDBBase):
ids = [item["id"] for item in items] ids = [item["id"] for item in items]
documents = [item["text"] for item in items] documents = [item["text"] for item in items]
embeddings = [item["vector"] for item in items] embeddings = [item["vector"] for item in items]
metadatas = [stringify_metadata(item["metadata"]) for item in items] metadatas = [process_metadata(item["metadata"]) for item in items]
for batch in create_batches( for batch in create_batches(
api=self.client, api=self.client,
@ -166,7 +166,7 @@ class ChromaClient(VectorDBBase):
ids = [item["id"] for item in items] ids = [item["id"] for item in items]
documents = [item["text"] for item in items] documents = [item["text"] for item in items]
embeddings = [item["vector"] for item in items] embeddings = [item["vector"] for item in items]
metadatas = [stringify_metadata(item["metadata"]) for item in items] metadatas = [process_metadata(item["metadata"]) for item in items]
collection.upsert( collection.upsert(
ids=ids, documents=documents, embeddings=embeddings, metadatas=metadatas ids=ids, documents=documents, embeddings=embeddings, metadatas=metadatas

View file

@ -3,7 +3,7 @@ from typing import Optional
import ssl import ssl
from elasticsearch.helpers import bulk, scan from elasticsearch.helpers import bulk, scan
from open_webui.retrieval.vector.utils import stringify_metadata from open_webui.retrieval.vector.utils import process_metadata
from open_webui.retrieval.vector.main import ( from open_webui.retrieval.vector.main import (
VectorDBBase, VectorDBBase,
VectorItem, VectorItem,
@ -245,7 +245,7 @@ class ElasticsearchClient(VectorDBBase):
"collection": collection_name, "collection": collection_name,
"vector": item["vector"], "vector": item["vector"],
"text": item["text"], "text": item["text"],
"metadata": stringify_metadata(item["metadata"]), "metadata": process_metadata(item["metadata"]),
}, },
} }
for item in batch for item in batch
@ -266,7 +266,7 @@ class ElasticsearchClient(VectorDBBase):
"collection": collection_name, "collection": collection_name,
"vector": item["vector"], "vector": item["vector"],
"text": item["text"], "text": item["text"],
"metadata": stringify_metadata(item["metadata"]), "metadata": process_metadata(item["metadata"]),
}, },
"doc_as_upsert": True, "doc_as_upsert": True,
} }

View file

@ -6,7 +6,7 @@ import json
import logging import logging
from typing import Optional from typing import Optional
from open_webui.retrieval.vector.utils import stringify_metadata from open_webui.retrieval.vector.utils import process_metadata
from open_webui.retrieval.vector.main import ( from open_webui.retrieval.vector.main import (
VectorDBBase, VectorDBBase,
VectorItem, VectorItem,
@ -22,6 +22,8 @@ from open_webui.config import (
MILVUS_HNSW_M, MILVUS_HNSW_M,
MILVUS_HNSW_EFCONSTRUCTION, MILVUS_HNSW_EFCONSTRUCTION,
MILVUS_IVF_FLAT_NLIST, MILVUS_IVF_FLAT_NLIST,
MILVUS_DISKANN_MAX_DEGREE,
MILVUS_DISKANN_SEARCH_LIST_SIZE,
) )
from open_webui.env import SRC_LOG_LEVELS from open_webui.env import SRC_LOG_LEVELS
@ -131,12 +133,18 @@ class MilvusClient(VectorDBBase):
elif index_type == "IVF_FLAT": elif index_type == "IVF_FLAT":
index_creation_params = {"nlist": MILVUS_IVF_FLAT_NLIST} index_creation_params = {"nlist": MILVUS_IVF_FLAT_NLIST}
log.info(f"IVF_FLAT params: {index_creation_params}") log.info(f"IVF_FLAT params: {index_creation_params}")
elif index_type == "DISKANN":
index_creation_params = {
"max_degree": MILVUS_DISKANN_MAX_DEGREE,
"search_list_size": MILVUS_DISKANN_SEARCH_LIST_SIZE,
}
log.info(f"DISKANN params: {index_creation_params}")
elif index_type in ["FLAT", "AUTOINDEX"]: elif index_type in ["FLAT", "AUTOINDEX"]:
log.info(f"Using {index_type} index with no specific build-time params.") log.info(f"Using {index_type} index with no specific build-time params.")
else: else:
log.warning( log.warning(
f"Unsupported MILVUS_INDEX_TYPE: '{index_type}'. " f"Unsupported MILVUS_INDEX_TYPE: '{index_type}'. "
f"Supported types: HNSW, IVF_FLAT, FLAT, AUTOINDEX. " f"Supported types: HNSW, IVF_FLAT, DISKANN, FLAT, AUTOINDEX. "
f"Milvus will use its default for the collection if this type is not directly supported for index creation." f"Milvus will use its default for the collection if this type is not directly supported for index creation."
) )
# For unsupported types, pass the type directly to Milvus; it might handle it or use a default. # For unsupported types, pass the type directly to Milvus; it might handle it or use a default.
@ -189,7 +197,7 @@ class MilvusClient(VectorDBBase):
) )
return self._result_to_search_result(result) return self._result_to_search_result(result)
def query(self, collection_name: str, filter: dict, limit: Optional[int] = None): def query(self, collection_name: str, filter: dict, limit: int = -1):
connections.connect(uri=MILVUS_URI, token=MILVUS_TOKEN, db_name=MILVUS_DB) connections.connect(uri=MILVUS_URI, token=MILVUS_TOKEN, db_name=MILVUS_DB)
# Construct the filter string for querying # Construct the filter string for querying
@ -222,7 +230,7 @@ class MilvusClient(VectorDBBase):
"data", "data",
"metadata", "metadata",
], ],
limit=limit, # Pass the limit directly; None means no limit. limit=limit, # Pass the limit directly; -1 means no limit.
) )
while True: while True:
@ -249,7 +257,7 @@ class MilvusClient(VectorDBBase):
) )
# Using query with a trivial filter to get all items. # Using query with a trivial filter to get all items.
# This will use the paginated query logic. # This will use the paginated query logic.
return self.query(collection_name=collection_name, filter={}, limit=None) return self.query(collection_name=collection_name, filter={}, limit=-1)
def insert(self, collection_name: str, items: list[VectorItem]): def insert(self, collection_name: str, items: list[VectorItem]):
# Insert the items into the collection, if the collection does not exist, it will be created. # Insert the items into the collection, if the collection does not exist, it will be created.
@ -281,7 +289,7 @@ class MilvusClient(VectorDBBase):
"id": item["id"], "id": item["id"],
"vector": item["vector"], "vector": item["vector"],
"data": {"text": item["text"]}, "data": {"text": item["text"]},
"metadata": stringify_metadata(item["metadata"]), "metadata": process_metadata(item["metadata"]),
} }
for item in items for item in items
], ],
@ -317,7 +325,7 @@ class MilvusClient(VectorDBBase):
"id": item["id"], "id": item["id"],
"vector": item["vector"], "vector": item["vector"],
"data": {"text": item["text"]}, "data": {"text": item["text"]},
"metadata": stringify_metadata(item["metadata"]), "metadata": process_metadata(item["metadata"]),
} }
for item in items for item in items
], ],

View file

@ -0,0 +1,282 @@
import logging
from typing import Optional, Tuple, List, Dict, Any
from open_webui.config import (
MILVUS_URI,
MILVUS_TOKEN,
MILVUS_DB,
MILVUS_COLLECTION_PREFIX,
MILVUS_INDEX_TYPE,
MILVUS_METRIC_TYPE,
MILVUS_HNSW_M,
MILVUS_HNSW_EFCONSTRUCTION,
MILVUS_IVF_FLAT_NLIST,
)
from open_webui.env import SRC_LOG_LEVELS
from open_webui.retrieval.vector.main import (
GetResult,
SearchResult,
VectorDBBase,
VectorItem,
)
from pymilvus import (
connections,
utility,
Collection,
CollectionSchema,
FieldSchema,
DataType,
)
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])
RESOURCE_ID_FIELD = "resource_id"
class MilvusClient(VectorDBBase):
def __init__(self):
# Milvus collection names can only contain numbers, letters, and underscores.
self.collection_prefix = MILVUS_COLLECTION_PREFIX.replace("-", "_")
connections.connect(
alias="default",
uri=MILVUS_URI,
token=MILVUS_TOKEN,
db_name=MILVUS_DB,
)
# Main collection types for multi-tenancy
self.MEMORY_COLLECTION = f"{self.collection_prefix}_memories"
self.KNOWLEDGE_COLLECTION = f"{self.collection_prefix}_knowledge"
self.FILE_COLLECTION = f"{self.collection_prefix}_files"
self.WEB_SEARCH_COLLECTION = f"{self.collection_prefix}_web_search"
self.HASH_BASED_COLLECTION = f"{self.collection_prefix}_hash_based"
self.shared_collections = [
self.MEMORY_COLLECTION,
self.KNOWLEDGE_COLLECTION,
self.FILE_COLLECTION,
self.WEB_SEARCH_COLLECTION,
self.HASH_BASED_COLLECTION,
]
def _get_collection_and_resource_id(self, collection_name: str) -> Tuple[str, str]:
"""
Maps a traditional collection name to its shared multi-tenant collection and resource ID.
WARNING: This mapping relies on current Open WebUI naming conventions for
collection names. If Open WebUI changes how it generates collection names
(e.g., "user-memory-" prefix, "file-" prefix, web search patterns, or hash
formats), this mapping will break and route data to incorrect collections,
potentially causing severe data corruption, consistency issues, and incorrect
data mapping inside the database.
"""
resource_id = collection_name
if collection_name.startswith("user-memory-"):
return self.MEMORY_COLLECTION, resource_id
elif collection_name.startswith("file-"):
return self.FILE_COLLECTION, resource_id
elif collection_name.startswith("web-search-"):
return self.WEB_SEARCH_COLLECTION, resource_id
elif len(collection_name) == 63 and all(
c in "0123456789abcdef" for c in collection_name
):
return self.HASH_BASED_COLLECTION, resource_id
else:
return self.KNOWLEDGE_COLLECTION, resource_id
def _create_shared_collection(self, mt_collection_name: str, dimension: int):
fields = [
FieldSchema(
name="id",
dtype=DataType.VARCHAR,
is_primary=True,
auto_id=False,
max_length=36,
),
FieldSchema(name="vector", dtype=DataType.FLOAT_VECTOR, dim=dimension),
FieldSchema(name="text", dtype=DataType.VARCHAR, max_length=65535),
FieldSchema(name="metadata", dtype=DataType.JSON),
FieldSchema(name=RESOURCE_ID_FIELD, dtype=DataType.VARCHAR, max_length=255),
]
schema = CollectionSchema(fields, "Shared collection for multi-tenancy")
collection = Collection(mt_collection_name, schema)
index_params = {
"metric_type": MILVUS_METRIC_TYPE,
"index_type": MILVUS_INDEX_TYPE,
"params": {},
}
if MILVUS_INDEX_TYPE == "HNSW":
index_params["params"] = {
"M": MILVUS_HNSW_M,
"efConstruction": MILVUS_HNSW_EFCONSTRUCTION,
}
elif MILVUS_INDEX_TYPE == "IVF_FLAT":
index_params["params"] = {"nlist": MILVUS_IVF_FLAT_NLIST}
collection.create_index("vector", index_params)
collection.create_index(RESOURCE_ID_FIELD)
log.info(f"Created shared collection: {mt_collection_name}")
return collection
def _ensure_collection(self, mt_collection_name: str, dimension: int):
if not utility.has_collection(mt_collection_name):
self._create_shared_collection(mt_collection_name, dimension)
def has_collection(self, collection_name: str) -> bool:
mt_collection, resource_id = self._get_collection_and_resource_id(
collection_name
)
if not utility.has_collection(mt_collection):
return False
collection = Collection(mt_collection)
collection.load()
res = collection.query(expr=f"{RESOURCE_ID_FIELD} == '{resource_id}'", limit=1)
return len(res) > 0
def upsert(self, collection_name: str, items: List[VectorItem]):
if not items:
return
mt_collection, resource_id = self._get_collection_and_resource_id(
collection_name
)
dimension = len(items[0]["vector"])
self._ensure_collection(mt_collection, dimension)
collection = Collection(mt_collection)
entities = [
{
"id": item["id"],
"vector": item["vector"],
"text": item["text"],
"metadata": item["metadata"],
RESOURCE_ID_FIELD: resource_id,
}
for item in items
]
collection.insert(entities)
collection.flush()
def search(
self, collection_name: str, vectors: List[List[float]], limit: int
) -> Optional[SearchResult]:
if not vectors:
return None
mt_collection, resource_id = self._get_collection_and_resource_id(
collection_name
)
if not utility.has_collection(mt_collection):
return None
collection = Collection(mt_collection)
collection.load()
search_params = {"metric_type": MILVUS_METRIC_TYPE, "params": {}}
results = collection.search(
data=vectors,
anns_field="vector",
param=search_params,
limit=limit,
expr=f"{RESOURCE_ID_FIELD} == '{resource_id}'",
output_fields=["id", "text", "metadata"],
)
ids, documents, metadatas, distances = [], [], [], []
for hits in results:
batch_ids, batch_docs, batch_metadatas, batch_dists = [], [], [], []
for hit in hits:
batch_ids.append(hit.entity.get("id"))
batch_docs.append(hit.entity.get("text"))
batch_metadatas.append(hit.entity.get("metadata"))
batch_dists.append(hit.distance)
ids.append(batch_ids)
documents.append(batch_docs)
metadatas.append(batch_metadatas)
distances.append(batch_dists)
return SearchResult(
ids=ids, documents=documents, metadatas=metadatas, distances=distances
)
def delete(
self,
collection_name: str,
ids: Optional[List[str]] = None,
filter: Optional[Dict[str, Any]] = None,
):
mt_collection, resource_id = self._get_collection_and_resource_id(
collection_name
)
if not utility.has_collection(mt_collection):
return
collection = Collection(mt_collection)
# Build expression
expr = [f"{RESOURCE_ID_FIELD} == '{resource_id}'"]
if ids:
# Milvus expects a string list for 'in' operator
id_list_str = ", ".join([f"'{id_val}'" for id_val in ids])
expr.append(f"id in [{id_list_str}]")
if filter:
for key, value in filter.items():
expr.append(f"metadata['{key}'] == '{value}'")
collection.delete(" and ".join(expr))
def reset(self):
for collection_name in self.shared_collections:
if utility.has_collection(collection_name):
utility.drop_collection(collection_name)
def delete_collection(self, collection_name: str):
mt_collection, resource_id = self._get_collection_and_resource_id(
collection_name
)
if not utility.has_collection(mt_collection):
return
collection = Collection(mt_collection)
collection.delete(f"{RESOURCE_ID_FIELD} == '{resource_id}'")
def query(
self, collection_name: str, filter: Dict[str, Any], limit: Optional[int] = None
) -> Optional[GetResult]:
mt_collection, resource_id = self._get_collection_and_resource_id(
collection_name
)
if not utility.has_collection(mt_collection):
return None
collection = Collection(mt_collection)
collection.load()
expr = [f"{RESOURCE_ID_FIELD} == '{resource_id}'"]
if filter:
for key, value in filter.items():
if isinstance(value, str):
expr.append(f"metadata['{key}'] == '{value}'")
else:
expr.append(f"metadata['{key}'] == {value}")
results = collection.query(
expr=" and ".join(expr),
output_fields=["id", "text", "metadata"],
limit=limit,
)
ids = [res["id"] for res in results]
documents = [res["text"] for res in results]
metadatas = [res["metadata"] for res in results]
return GetResult(ids=[ids], documents=[documents], metadatas=[metadatas])
def get(self, collection_name: str) -> Optional[GetResult]:
return self.query(collection_name, filter={}, limit=None)
def insert(self, collection_name: str, items: List[VectorItem]):
return self.upsert(collection_name, items)
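How a logical collection name gets routed is easier to see outside the class; a condensed, standalone version of the _get_collection_and_resource_id mapping (the prefix is a placeholder):

def route_collection(collection_name: str, prefix: str = "open_webui") -> tuple[str, str]:
    # Returns (shared Milvus collection, resource_id) for a logical collection name.
    if collection_name.startswith("user-memory-"):
        shared = f"{prefix}_memories"
    elif collection_name.startswith("file-"):
        shared = f"{prefix}_files"
    elif collection_name.startswith("web-search-"):
        shared = f"{prefix}_web_search"
    elif len(collection_name) == 63 and all(c in "0123456789abcdef" for c in collection_name):
        shared = f"{prefix}_hash_based"
    else:
        shared = f"{prefix}_knowledge"
    return shared, collection_name

print(route_collection("file-3f2a-example"))   # ('open_webui_files', 'file-3f2a-example')
print(route_collection("my-knowledge-base"))   # ('open_webui_knowledge', 'my-knowledge-base')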

View file

@ -2,7 +2,7 @@ from opensearchpy import OpenSearch
from opensearchpy.helpers import bulk from opensearchpy.helpers import bulk
from typing import Optional from typing import Optional
from open_webui.retrieval.vector.utils import stringify_metadata from open_webui.retrieval.vector.utils import process_metadata
from open_webui.retrieval.vector.main import ( from open_webui.retrieval.vector.main import (
VectorDBBase, VectorDBBase,
VectorItem, VectorItem,
@ -201,7 +201,7 @@ class OpenSearchClient(VectorDBBase):
"_source": { "_source": {
"vector": item["vector"], "vector": item["vector"],
"text": item["text"], "text": item["text"],
"metadata": stringify_metadata(item["metadata"]), "metadata": process_metadata(item["metadata"]),
}, },
} }
for item in batch for item in batch
@ -223,7 +223,7 @@ class OpenSearchClient(VectorDBBase):
"doc": { "doc": {
"vector": item["vector"], "vector": item["vector"],
"text": item["text"], "text": item["text"],
"metadata": stringify_metadata(item["metadata"]), "metadata": process_metadata(item["metadata"]),
}, },
"doc_as_upsert": True, "doc_as_upsert": True,
} }

View file

@ -27,7 +27,7 @@ from sqlalchemy.ext.mutable import MutableDict
from sqlalchemy.exc import NoSuchTableError from sqlalchemy.exc import NoSuchTableError
from open_webui.retrieval.vector.utils import stringify_metadata from open_webui.retrieval.vector.utils import process_metadata
from open_webui.retrieval.vector.main import ( from open_webui.retrieval.vector.main import (
VectorDBBase, VectorDBBase,
VectorItem, VectorItem,
@ -265,7 +265,7 @@ class PgvectorClient(VectorDBBase):
vector=vector, vector=vector,
collection_name=collection_name, collection_name=collection_name,
text=item["text"], text=item["text"],
vmetadata=stringify_metadata(item["metadata"]), vmetadata=process_metadata(item["metadata"]),
) )
new_items.append(new_chunk) new_items.append(new_chunk)
self.session.bulk_save_objects(new_items) self.session.bulk_save_objects(new_items)
@ -323,7 +323,7 @@ class PgvectorClient(VectorDBBase):
if existing: if existing:
existing.vector = vector existing.vector = vector
existing.text = item["text"] existing.text = item["text"]
existing.vmetadata = stringify_metadata(item["metadata"]) existing.vmetadata = process_metadata(item["metadata"])
existing.collection_name = ( existing.collection_name = (
collection_name # Update collection_name if necessary collection_name # Update collection_name if necessary
) )
@ -333,7 +333,7 @@ class PgvectorClient(VectorDBBase):
vector=vector, vector=vector,
collection_name=collection_name, collection_name=collection_name,
text=item["text"], text=item["text"],
vmetadata=stringify_metadata(item["metadata"]), vmetadata=process_metadata(item["metadata"]),
) )
self.session.add(new_chunk) self.session.add(new_chunk)
self.session.commit() self.session.commit()

View file

@ -32,7 +32,7 @@ from open_webui.config import (
PINECONE_CLOUD, PINECONE_CLOUD,
) )
from open_webui.env import SRC_LOG_LEVELS from open_webui.env import SRC_LOG_LEVELS
from open_webui.retrieval.vector.utils import stringify_metadata from open_webui.retrieval.vector.utils import process_metadata
NO_LIMIT = 10000 # Reasonable limit to avoid overwhelming the system NO_LIMIT = 10000 # Reasonable limit to avoid overwhelming the system
@ -185,7 +185,7 @@ class PineconeClient(VectorDBBase):
point = { point = {
"id": item["id"], "id": item["id"],
"values": item["vector"], "values": item["vector"],
"metadata": stringify_metadata(metadata), "metadata": process_metadata(metadata),
} }
points.append(point) points.append(point)
return points return points

View file

@ -105,6 +105,13 @@ class QdrantClient(VectorDBBase):
Returns: Returns:
tuple: (collection_name, tenant_id) tuple: (collection_name, tenant_id)
WARNING: This mapping relies on current Open WebUI naming conventions for
collection names. If Open WebUI changes how it generates collection names
(e.g., "user-memory-" prefix, "file-" prefix, web search patterns, or hash
formats), this mapping will break and route data to incorrect collections,
potentially causing severe data corruption, consistency issues, and incorrect
data mapping inside the database.
""" """
# Check for user memory collections # Check for user memory collections
tenant_id = collection_name tenant_id = collection_name

View file

@ -1,4 +1,4 @@
from open_webui.retrieval.vector.utils import stringify_metadata from open_webui.retrieval.vector.utils import process_metadata
from open_webui.retrieval.vector.main import ( from open_webui.retrieval.vector.main import (
VectorDBBase, VectorDBBase,
VectorItem, VectorItem,
@ -185,7 +185,7 @@ class S3VectorClient(VectorDBBase):
metadata["text"] = item["text"] metadata["text"] = item["text"]
# Convert metadata to string format for consistency # Convert metadata to string format for consistency
metadata = stringify_metadata(metadata) metadata = process_metadata(metadata)
# Filter metadata to comply with S3 Vector API limit of 10 keys # Filter metadata to comply with S3 Vector API limit of 10 keys
metadata = self._filter_metadata(metadata, item["id"]) metadata = self._filter_metadata(metadata, item["id"])
@ -256,7 +256,7 @@ class S3VectorClient(VectorDBBase):
metadata["text"] = item["text"] metadata["text"] = item["text"]
# Convert metadata to string format for consistency # Convert metadata to string format for consistency
metadata = stringify_metadata(metadata) metadata = process_metadata(metadata)
# Filter metadata to comply with S3 Vector API limit of 10 keys # Filter metadata to comply with S3 Vector API limit of 10 keys
metadata = self._filter_metadata(metadata, item["id"]) metadata = self._filter_metadata(metadata, item["id"])

View file

@ -1,6 +1,10 @@
from open_webui.retrieval.vector.main import VectorDBBase from open_webui.retrieval.vector.main import VectorDBBase
from open_webui.retrieval.vector.type import VectorType from open_webui.retrieval.vector.type import VectorType
from open_webui.config import VECTOR_DB, ENABLE_QDRANT_MULTITENANCY_MODE from open_webui.config import (
VECTOR_DB,
ENABLE_QDRANT_MULTITENANCY_MODE,
ENABLE_MILVUS_MULTITENANCY_MODE,
)
class Vector: class Vector:
@ -12,6 +16,13 @@ class Vector:
""" """
match vector_type: match vector_type:
case VectorType.MILVUS: case VectorType.MILVUS:
if ENABLE_MILVUS_MULTITENANCY_MODE:
from open_webui.retrieval.vector.dbs.milvus_multitenancy import (
MilvusClient,
)
return MilvusClient()
else:
from open_webui.retrieval.vector.dbs.milvus import MilvusClient
return MilvusClient()

View file

@ -1,10 +1,24 @@
from datetime import datetime
+KEYS_TO_EXCLUDE = ["content", "pages", "tables", "paragraphs", "sections", "figures"]
-def stringify_metadata(
+def filter_metadata(metadata: dict[str, any]) -> dict[str, any]:
+metadata = {
+key: value for key, value in metadata.items() if key not in KEYS_TO_EXCLUDE
+}
+return metadata
+def process_metadata(
metadata: dict[str, any],
) -> dict[str, any]:
for key, value in metadata.items():
+# Remove large fields
+if key in KEYS_TO_EXCLUDE:
+del metadata[key]
+# Convert non-serializable fields to strings
if (
isinstance(value, datetime)
or isinstance(value, list)
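
A minimal usage sketch of the reworked helpers, assuming they behave as the hunk above suggests: filter_metadata drops the keys listed in KEYS_TO_EXCLUDE, while process_metadata additionally converts non-serializable values to strings. The sample metadata values are illustrative.

```python
from datetime import datetime

from open_webui.retrieval.vector.utils import filter_metadata, process_metadata

metadata = {
    "source": "report.pdf",
    "created_at": datetime(2025, 10, 2),  # non-serializable, expected to be stringified
    "pages": [1, 2, 3],                   # listed in KEYS_TO_EXCLUDE, expected to be dropped
}

slim = filter_metadata(dict(metadata))     # large keys removed, values left untouched
stored = process_metadata(dict(metadata))  # large keys removed, remaining values made storable
```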

View file

@ -0,0 +1,51 @@
import logging
from dataclasses import dataclass
from typing import Optional
import requests
from open_webui.env import SRC_LOG_LEVELS
from open_webui.retrieval.web.main import SearchResult
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])
def search_ollama_cloud(
url: str,
api_key: str,
query: str,
count: int,
filter_list: Optional[list[str]] = None,
) -> list[SearchResult]:
"""Search using Ollama Search API and return the results as a list of SearchResult objects.
Args:
api_key (str): An Ollama Search API key
query (str): The query to search for
count (int): Number of results to return
filter_list (Optional[list[str]]): List of domains to filter results by
"""
log.info(f"Searching with Ollama for query: {query}")
headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
payload = {"query": query, "max_results": count}
try:
response = requests.post(f"{url}/api/web_search", headers=headers, json=payload)
response.raise_for_status()
data = response.json()
results = data.get("results", [])
log.info(f"Found {len(results)} results")
return [
SearchResult(
link=result.get("url", ""),
title=result.get("title", ""),
snippet=result.get("content", ""),
)
for result in results
]
except Exception as e:
log.error(f"Error searching Ollama: {e}")
return []
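
An illustrative call to the new Ollama web search helper; the base URL and API key are placeholders, and the endpoint is assumed to be reachable from the deployment.

```python
from open_webui.retrieval.web.ollama import search_ollama_cloud

results = search_ollama_cloud(
    url="https://ollama.com",       # placeholder base URL; "/api/web_search" is appended
    api_key="YOUR_OLLAMA_API_KEY",  # placeholder key
    query="open webui rate limiting",
    count=3,
)
for result in results:
    print(result.link, "-", result.title)
```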

View file

@ -0,0 +1,64 @@
import logging
from typing import Optional, Literal
import requests
from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.env import SRC_LOG_LEVELS
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])
def search_perplexity_search(
api_key: str,
query: str,
count: int,
filter_list: Optional[list[str]] = None,
) -> list[SearchResult]:
"""Search using Perplexity API and return the results as a list of SearchResult objects.
Args:
api_key (str): A Perplexity API key
query (str): The query to search for
count (int): Maximum number of results to return
filter_list (Optional[list[str]]): List of domains to filter results
"""
# Handle PersistentConfig object
if hasattr(api_key, "__str__"):
api_key = str(api_key)
try:
url = "https://api.perplexity.ai/search"
# Create payload for the API call
payload = {
"query": query,
"max_results": count,
}
headers = {
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json",
}
# Make the API request
response = requests.request("POST", url, json=payload, headers=headers)
# Parse the JSON response
json_response = response.json()
# Extract citations from the response
results = json_response.get("results", [])
return [
SearchResult(
link=result["url"], title=result["title"], snippet=result["snippet"]
)
for result in results
]
except Exception as e:
log.error(f"Error searching with Perplexity Search API: {e}")
return []

View file

@ -3,6 +3,7 @@ import json
import logging
import os
import uuid
+import html
from functools import lru_cache
from pydub import AudioSegment
from pydub.silence import split_on_silence
@ -153,6 +154,7 @@ def set_faster_whisper_model(model: str, auto_update: bool = False):
class TTSConfigForm(BaseModel):
OPENAI_API_BASE_URL: str
OPENAI_API_KEY: str
+OPENAI_PARAMS: Optional[dict] = None
API_KEY: str
ENGINE: str
MODEL: str
@ -189,6 +191,7 @@ async def get_audio_config(request: Request, user=Depends(get_admin_user)):
"tts": { "tts": {
"OPENAI_API_BASE_URL": request.app.state.config.TTS_OPENAI_API_BASE_URL, "OPENAI_API_BASE_URL": request.app.state.config.TTS_OPENAI_API_BASE_URL,
"OPENAI_API_KEY": request.app.state.config.TTS_OPENAI_API_KEY, "OPENAI_API_KEY": request.app.state.config.TTS_OPENAI_API_KEY,
"OPENAI_PARAMS": request.app.state.config.TTS_OPENAI_PARAMS,
"API_KEY": request.app.state.config.TTS_API_KEY, "API_KEY": request.app.state.config.TTS_API_KEY,
"ENGINE": request.app.state.config.TTS_ENGINE, "ENGINE": request.app.state.config.TTS_ENGINE,
"MODEL": request.app.state.config.TTS_MODEL, "MODEL": request.app.state.config.TTS_MODEL,
@ -221,6 +224,7 @@ async def update_audio_config(
):
request.app.state.config.TTS_OPENAI_API_BASE_URL = form_data.tts.OPENAI_API_BASE_URL
request.app.state.config.TTS_OPENAI_API_KEY = form_data.tts.OPENAI_API_KEY
+request.app.state.config.TTS_OPENAI_PARAMS = form_data.tts.OPENAI_PARAMS
request.app.state.config.TTS_API_KEY = form_data.tts.API_KEY
request.app.state.config.TTS_ENGINE = form_data.tts.ENGINE
request.app.state.config.TTS_MODEL = form_data.tts.MODEL
@ -261,12 +265,13 @@ async def update_audio_config(
return {
"tts": {
-"OPENAI_API_BASE_URL": request.app.state.config.TTS_OPENAI_API_BASE_URL,
-"OPENAI_API_KEY": request.app.state.config.TTS_OPENAI_API_KEY,
-"API_KEY": request.app.state.config.TTS_API_KEY,
"ENGINE": request.app.state.config.TTS_ENGINE,
"MODEL": request.app.state.config.TTS_MODEL,
"VOICE": request.app.state.config.TTS_VOICE,
+"OPENAI_API_BASE_URL": request.app.state.config.TTS_OPENAI_API_BASE_URL,
+"OPENAI_API_KEY": request.app.state.config.TTS_OPENAI_API_KEY,
+"OPENAI_PARAMS": request.app.state.config.TTS_OPENAI_PARAMS,
+"API_KEY": request.app.state.config.TTS_API_KEY,
"SPLIT_ON": request.app.state.config.TTS_SPLIT_ON,
"AZURE_SPEECH_REGION": request.app.state.config.TTS_AZURE_SPEECH_REGION,
"AZURE_SPEECH_BASE_URL": request.app.state.config.TTS_AZURE_SPEECH_BASE_URL,
@ -336,6 +341,11 @@ async def speech(request: Request, user=Depends(get_verified_user)):
async with aiohttp.ClientSession(
timeout=timeout, trust_env=True
) as session:
+payload = {
+**payload,
+**(request.app.state.config.TTS_OPENAI_PARAMS or {}),
+}
r = await session.post(
url=f"{request.app.state.config.TTS_OPENAI_API_BASE_URL}/audio/speech",
json=payload,
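
A hypothetical value for the new TTS_OPENAI_PARAMS setting. The dict is merged over the default payload verbatim before the /audio/speech request, so any field the upstream OpenAI-compatible endpoint accepts could be supplied; the keys below are only examples.

```python
TTS_OPENAI_PARAMS = {
    "speed": 1.15,             # assumed to be supported by the target endpoint
    "response_format": "mp3",  # assumed to be supported by the target endpoint
}
```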
@ -458,7 +468,7 @@ async def speech(request: Request, user=Depends(get_verified_user)):
try:
data = f"""<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="{locale}">
-<voice name="{language}">{payload["input"]}</voice>
+<voice name="{language}">{html.escape(payload["input"])}</voice>
</speak>"""
timeout = aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT)
async with aiohttp.ClientSession(
@ -550,7 +560,7 @@ def transcription_handler(request, file_path, metadata):
metadata = metadata or {}
languages = [
-metadata.get("language", None) if WHISPER_LANGUAGE == "" else WHISPER_LANGUAGE,
+metadata.get("language", None) if not WHISPER_LANGUAGE else WHISPER_LANGUAGE,
None, # Always fallback to None in case transcription fails
]

View file

@ -10,7 +10,13 @@ from pydantic import BaseModel
from open_webui.socket.main import sio, get_user_ids_from_room
from open_webui.models.users import Users, UserNameResponse
-from open_webui.models.channels import Channels, ChannelModel, ChannelForm
+from open_webui.models.groups import Groups
+from open_webui.models.channels import (
+Channels,
+ChannelModel,
+ChannelForm,
+ChannelResponse,
+)
from open_webui.models.messages import (
Messages,
MessageModel,
@ -24,9 +30,17 @@ from open_webui.constants import ERROR_MESSAGES
from open_webui.env import SRC_LOG_LEVELS
+from open_webui.utils.models import (
+get_all_models,
+get_filtered_models,
+)
+from open_webui.utils.chat import generate_chat_completion
from open_webui.utils.auth import get_admin_user, get_verified_user
from open_webui.utils.access_control import has_access, get_users_with_access
from open_webui.utils.webhook import post_webhook
+from open_webui.utils.channels import extract_mentions, replace_mentions
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MODELS"])
@ -72,7 +86,7 @@ async def create_new_channel(form_data: ChannelForm, user=Depends(get_admin_user
############################
-@router.get("/{id}", response_model=Optional[ChannelModel])
+@router.get("/{id}", response_model=Optional[ChannelResponse])
async def get_channel_by_id(id: str, user=Depends(get_verified_user)):
channel = Channels.get_channel_by_id(id)
if not channel:
@ -87,7 +101,16 @@ async def get_channel_by_id(id: str, user=Depends(get_verified_user)):
status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
)
-return ChannelModel(**channel.model_dump())
+write_access = has_access(
+user.id, type="write", access_control=channel.access_control, strict=False
+)
+return ChannelResponse(
+**{
+**channel.model_dump(),
+"write_access": write_access or user.role == "admin",
+}
+)
############################
@ -144,7 +167,7 @@ async def delete_channel_by_id(id: str, user=Depends(get_admin_user)):
class MessageUserResponse(MessageResponse):
-user: UserNameResponse
+pass
@router.get("/{id}/messages", response_model=list[MessageUserResponse])
@ -173,15 +196,17 @@ async def get_channel_messages(
user = Users.get_user_by_id(message.user_id)
users[message.user_id] = user
-replies = Messages.get_replies_by_message_id(message.id)
-latest_reply_at = replies[0].created_at if replies else None
+thread_replies = Messages.get_thread_replies_by_message_id(message.id)
+latest_thread_reply_at = (
+thread_replies[0].created_at if thread_replies else None
+)
messages.append(
MessageUserResponse(
**{
**message.model_dump(),
-"reply_count": len(replies),
-"latest_reply_at": latest_reply_at,
+"reply_count": len(thread_replies),
+"latest_reply_at": latest_thread_reply_at,
"reactions": Messages.get_reactions_by_message_id(message.id),
"user": UserNameResponse(**users[message.user_id].model_dump()),
}
@ -200,14 +225,11 @@ async def send_notification(name, webui_url, channel, message, active_user_ids):
users = get_users_with_access("read", channel.access_control) users = get_users_with_access("read", channel.access_control)
for user in users: for user in users:
if user.id in active_user_ids: if user.id not in active_user_ids:
continue
else:
if user.settings: if user.settings:
webhook_url = user.settings.ui.get("notifications", {}).get( webhook_url = user.settings.ui.get("notifications", {}).get(
"webhook_url", None "webhook_url", None
) )
if webhook_url: if webhook_url:
await post_webhook( await post_webhook(
name, name,
@ -221,14 +243,169 @@ async def send_notification(name, webui_url, channel, message, active_user_ids):
},
)
+return True
-@router.post("/{id}/messages/post", response_model=Optional[MessageModel])
-async def post_new_message(
-request: Request,
-id: str,
-form_data: MessageForm,
-background_tasks: BackgroundTasks,
-user=Depends(get_verified_user),
+async def model_response_handler(request, channel, message, user):
+MODELS = {
+model["id"]: model
+for model in get_filtered_models(await get_all_models(request, user=user), user)
+}
mentions = extract_mentions(message.content)
message_content = replace_mentions(message.content)
model_mentions = {}
# check if the message is a reply to a message sent by a model
if (
message.reply_to_message
and message.reply_to_message.meta
and message.reply_to_message.meta.get("model_id", None)
):
model_id = message.reply_to_message.meta.get("model_id", None)
model_mentions[model_id] = {"id": model_id, "id_type": "M"}
# check if any of the mentions are models
for mention in mentions:
if mention["id_type"] == "M" and mention["id"] not in model_mentions:
model_mentions[mention["id"]] = mention
if not model_mentions:
return False
for mention in model_mentions.values():
model_id = mention["id"]
model = MODELS.get(model_id, None)
if model:
try:
# reverse to get in chronological order
thread_messages = Messages.get_messages_by_parent_id(
channel.id,
message.parent_id if message.parent_id else message.id,
)[::-1]
response_message, channel = await new_message_handler(
request,
channel.id,
MessageForm(
**{
"parent_id": (
message.parent_id if message.parent_id else message.id
),
"content": f"",
"data": {},
"meta": {
"model_id": model_id,
"model_name": model.get("name", model_id),
},
}
),
user,
)
thread_history = []
images = []
message_users = {}
for thread_message in thread_messages:
message_user = None
if thread_message.user_id not in message_users:
message_user = Users.get_user_by_id(thread_message.user_id)
message_users[thread_message.user_id] = message_user
else:
message_user = message_users[thread_message.user_id]
if thread_message.meta and thread_message.meta.get(
"model_id", None
):
# If the message was sent by a model, use the model name
message_model_id = thread_message.meta.get("model_id", None)
message_model = MODELS.get(message_model_id, None)
username = (
message_model.get("name", message_model_id)
if message_model
else message_model_id
)
else:
username = message_user.name if message_user else "Unknown"
thread_history.append(
f"{username}: {replace_mentions(thread_message.content)}"
)
thread_message_files = thread_message.data.get("files", [])
for file in thread_message_files:
if file.get("type", "") == "image":
images.append(file.get("url", ""))
system_message = {
"role": "system",
"content": f"You are {model.get('name', model_id)}, participating in a threaded conversation. Be concise and conversational."
+ (
f"Here's the thread history:\n\n{''.join([f'{msg}' for msg in thread_history])}\n\nContinue the conversation naturally as {model.get('name', model_id)}, addressing the most recent message while being aware of the full context."
if thread_history
else ""
),
}
content = f"{user.name if user else 'User'}: {message_content}"
if images:
content = [
{
"type": "text",
"text": content,
},
*[
{
"type": "image_url",
"image_url": {
"url": image,
},
}
for image in images
],
]
form_data = {
"model": model_id,
"messages": [
system_message,
{"role": "user", "content": content},
],
"stream": False,
}
res = await generate_chat_completion(
request,
form_data=form_data,
user=user,
)
if res:
await update_message_by_id(
channel.id,
response_message.id,
MessageForm(
**{
"content": res["choices"][0]["message"]["content"],
"meta": {
"done": True,
},
}
),
user,
)
except Exception as e:
log.info(e)
pass
return True
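
A sketch of the mention records the handler above consumes; the shape is inferred from how extract_mentions results are used here (only the "id" and "id_type" fields are read, and only id_type "M" triggers a model reply), not from the helper itself.

```python
mentions = [
    {"id": "gpt-4o", "id_type": "M"},        # "M" marks a model mention and triggers a reply
    {"id": "some-user-id", "id_type": "U"},  # any other id_type is ignored by this handler
]
```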
async def new_message_handler(
request: Request, id: str, form_data: MessageForm, user=Depends(get_verified_user)
):
channel = Channels.get_channel_by_id(id)
if not channel:
@ -237,7 +414,7 @@ async def post_new_message(
)
if user.role != "admin" and not has_access(
-user.id, type="read", access_control=channel.access_control
+user.id, type="write", access_control=channel.access_control, strict=False
):
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
@ -245,31 +422,21 @@ async def post_new_message(
try:
message = Messages.insert_new_message(form_data, channel.id, user.id)
if message:
+message = Messages.get_message_by_id(message.id)
event_data = {
"channel_id": channel.id,
"message_id": message.id,
"data": {
"type": "message",
-"data": MessageUserResponse(
-**{
-**message.model_dump(),
-"reply_count": 0,
-"latest_reply_at": None,
-"reactions": Messages.get_reactions_by_message_id(
-message.id
-),
-"user": UserNameResponse(**user.model_dump()),
-}
-).model_dump(),
+"data": message.model_dump(),
},
"user": UserNameResponse(**user.model_dump()).model_dump(),
"channel": channel.model_dump(),
}
await sio.emit(
-"channel-events",
+"events:channel",
event_data,
to=f"channel:{channel.id}",
)
@ -280,33 +447,45 @@ async def post_new_message(
if parent_message:
await sio.emit(
-"channel-events",
+"events:channel",
{
"channel_id": channel.id,
"message_id": parent_message.id,
"data": {
"type": "message:reply",
-"data": MessageUserResponse(
-**{
-**parent_message.model_dump(),
-"user": UserNameResponse(
-**Users.get_user_by_id(
-parent_message.user_id
-).model_dump()
-),
-}
-).model_dump(),
+"data": parent_message.model_dump(),
},
"user": UserNameResponse(**user.model_dump()).model_dump(),
"channel": channel.model_dump(),
},
to=f"channel:{channel.id}",
)
return message, channel
else:
raise Exception("Error creating message")
except Exception as e:
log.exception(e)
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST, detail=ERROR_MESSAGES.DEFAULT()
)
@router.post("/{id}/messages/post", response_model=Optional[MessageModel])
async def post_new_message(
request: Request,
id: str,
form_data: MessageForm,
background_tasks: BackgroundTasks,
user=Depends(get_verified_user),
):
try:
message, channel = await new_message_handler(request, id, form_data, user)
active_user_ids = get_user_ids_from_room(f"channel:{channel.id}")
-background_tasks.add_task(
-send_notification,
+async def background_handler():
+await model_response_handler(request, channel, message, user)
+await send_notification(
request.app.state.WEBUI_NAME,
request.app.state.config.WEBUI_URL,
channel,
@ -314,7 +493,12 @@ async def post_new_message(
active_user_ids,
)
-return MessageModel(**message.model_dump())
+background_tasks.add_task(background_handler)
+return message
+except HTTPException as e:
+raise e
except Exception as e:
log.exception(e)
raise HTTPException(
@ -460,20 +644,13 @@ async def update_message_by_id(
if message:
await sio.emit(
-"channel-events",
+"events:channel",
{
"channel_id": channel.id,
"message_id": message.id,
"data": {
"type": "message:update",
-"data": MessageUserResponse(
-**{
-**message.model_dump(),
-"user": UserNameResponse(
-**user.model_dump()
-).model_dump(),
-}
-).model_dump(),
+"data": message.model_dump(),
},
"user": UserNameResponse(**user.model_dump()).model_dump(),
"channel": channel.model_dump(),
@ -509,7 +686,7 @@ async def add_reaction_to_message(
)
if user.role != "admin" and not has_access(
-user.id, type="read", access_control=channel.access_control
+user.id, type="write", access_control=channel.access_control, strict=False
):
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
@ -531,7 +708,7 @@ async def add_reaction_to_message(
message = Messages.get_message_by_id(message_id)
await sio.emit(
-"channel-events",
+"events:channel",
{
"channel_id": channel.id,
"message_id": message.id,
@ -539,9 +716,6 @@ async def add_reaction_to_message(
"type": "message:reaction:add", "type": "message:reaction:add",
"data": { "data": {
**message.model_dump(), **message.model_dump(),
"user": UserNameResponse(
**Users.get_user_by_id(message.user_id).model_dump()
).model_dump(),
"name": form_data.name, "name": form_data.name,
}, },
}, },
@ -575,7 +749,7 @@ async def remove_reaction_by_id_and_user_id_and_name(
)
if user.role != "admin" and not has_access(
-user.id, type="read", access_control=channel.access_control
+user.id, type="write", access_control=channel.access_control, strict=False
):
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
@ -600,7 +774,7 @@ async def remove_reaction_by_id_and_user_id_and_name(
message = Messages.get_message_by_id(message_id)
await sio.emit(
-"channel-events",
+"events:channel",
{
"channel_id": channel.id,
"message_id": message.id,
@ -608,9 +782,6 @@ async def remove_reaction_by_id_and_user_id_and_name(
"type": "message:reaction:remove", "type": "message:reaction:remove",
"data": { "data": {
**message.model_dump(), **message.model_dump(),
"user": UserNameResponse(
**Users.get_user_by_id(message.user_id).model_dump()
).model_dump(),
"name": form_data.name, "name": form_data.name,
}, },
}, },
@ -657,7 +828,9 @@ async def delete_message_by_id(
if (
user.role != "admin"
and message.user_id != user.id
-and not has_access(user.id, type="read", access_control=channel.access_control)
+and not has_access(
+user.id, type="write", access_control=channel.access_control, strict=False
+)
):
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
@ -666,7 +839,7 @@ async def delete_message_by_id(
try:
Messages.delete_message_by_id(message_id)
await sio.emit(
-"channel-events",
+"events:channel",
{
"channel_id": channel.id,
"message_id": message.id,
@ -689,22 +862,13 @@ async def delete_message_by_id(
if parent_message:
await sio.emit(
-"channel-events",
+"events:channel",
{
"channel_id": channel.id,
"message_id": parent_message.id,
"data": {
"type": "message:reply",
-"data": MessageUserResponse(
-**{
-**parent_message.model_dump(),
-"user": UserNameResponse(
-**Users.get_user_by_id(
-parent_message.user_id
-).model_dump()
-),
-}
-).model_dump(),
+"data": parent_message.model_dump(),
},
"user": UserNameResponse(**user.model_dump()).model_dump(),
"channel": channel.model_dump(),

View file

@ -37,7 +37,9 @@ router = APIRouter()
@router.get("/", response_model=list[ChatTitleIdResponse]) @router.get("/", response_model=list[ChatTitleIdResponse])
@router.get("/list", response_model=list[ChatTitleIdResponse]) @router.get("/list", response_model=list[ChatTitleIdResponse])
def get_session_user_chat_list( def get_session_user_chat_list(
user=Depends(get_verified_user), page: Optional[int] = None user=Depends(get_verified_user),
page: Optional[int] = None,
include_folders: Optional[bool] = False,
): ):
try: try:
if page is not None: if page is not None:
@ -45,10 +47,12 @@ def get_session_user_chat_list(
skip = (page - 1) * limit
return Chats.get_chat_title_id_list_by_user_id(
-user.id, skip=skip, limit=limit
+user.id, include_folders=include_folders, skip=skip, limit=limit
)
else:
-return Chats.get_chat_title_id_list_by_user_id(user.id)
+return Chats.get_chat_title_id_list_by_user_id(
+user.id, include_folders=include_folders
+)
except Exception as e:
log.exception(e)
raise HTTPException(
@ -166,7 +170,7 @@ async def import_chat(form_data: ChatImportForm, user=Depends(get_verified_user)
@router.get("/search", response_model=list[ChatTitleIdResponse]) @router.get("/search", response_model=list[ChatTitleIdResponse])
async def search_user_chats( def search_user_chats(
text: str, page: Optional[int] = None, user=Depends(get_verified_user) text: str, page: Optional[int] = None, user=Depends(get_verified_user)
): ):
if page is None: if page is None:
@ -214,6 +218,28 @@ async def get_chats_by_folder_id(folder_id: str, user=Depends(get_verified_user)
]
@router.get("/folder/{folder_id}/list")
async def get_chat_list_by_folder_id(
folder_id: str, page: Optional[int] = 1, user=Depends(get_verified_user)
):
try:
limit = 60
skip = (page - 1) * limit
return [
{"title": chat.title, "id": chat.id, "updated_at": chat.updated_at}
for chat in Chats.get_chats_by_folder_id_and_user_id(
folder_id, user.id, skip=skip, limit=limit
)
]
except Exception as e:
log.exception(e)
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST, detail=ERROR_MESSAGES.DEFAULT()
)
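
An illustrative client request against the new folder listing route; the host, folder id, and token are placeholders, and the /api/v1/chats prefix is assumed from how the chats router is typically mounted.

```python
import requests

response = requests.get(
    "http://localhost:8080/api/v1/chats/folder/FOLDER_ID/list",  # placeholder host and folder id
    params={"page": 1},                                          # 60 chats per page, per the handler above
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},          # placeholder token
)
print(response.json())  # [{"title": ..., "id": ..., "updated_at": ...}, ...]
```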
############################
# GetPinnedChats
############################
@ -335,6 +361,16 @@ async def archive_all_chats(user=Depends(get_verified_user)):
return Chats.archive_all_chats_by_user_id(user.id)
############################
# UnarchiveAllChats
############################
@router.post("/unarchive/all", response_model=bool)
async def unarchive_all_chats(user=Depends(get_verified_user)):
return Chats.unarchive_all_chats_by_user_id(user.id)
############################
# GetSharedChatById
############################

View file

@ -1,5 +1,7 @@
import logging
from fastapi import APIRouter, Depends, Request, HTTPException
from pydantic import BaseModel, ConfigDict
import aiohttp
from typing import Optional
@ -12,10 +14,24 @@ from open_webui.utils.tools import (
get_tool_server_url,
set_tool_servers,
)
from open_webui.utils.mcp.client import MCPClient
from open_webui.env import SRC_LOG_LEVELS
from open_webui.utils.oauth import (
get_discovery_urls,
get_oauth_client_info_with_dynamic_client_registration,
encrypt_data,
decrypt_data,
OAuthClientInformationFull,
)
from mcp.shared.auth import OAuthMetadata
router = APIRouter()
log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MAIN"])
############################
# ImportConfig
@ -79,6 +95,43 @@ async def set_connections_config(
}
class OAuthClientRegistrationForm(BaseModel):
url: str
client_id: str
client_name: Optional[str] = None
@router.post("/oauth/clients/register")
async def register_oauth_client(
request: Request,
form_data: OAuthClientRegistrationForm,
type: Optional[str] = None,
user=Depends(get_admin_user),
):
try:
oauth_client_id = form_data.client_id
if type:
oauth_client_id = f"{type}:{form_data.client_id}"
oauth_client_info = (
await get_oauth_client_info_with_dynamic_client_registration(
request, oauth_client_id, form_data.url
)
)
return {
"status": True,
"oauth_client_info": encrypt_data(
oauth_client_info.model_dump(mode="json")
),
}
except Exception as e:
log.debug(f"Failed to register OAuth client: {e}")
raise HTTPException(
status_code=400,
detail=f"Failed to register OAuth client",
)
############################
# ToolServers Config
############################
@ -87,6 +140,7 @@ async def set_connections_config(
class ToolServerConnection(BaseModel):
url: str
path: str
+type: Optional[str] = "openapi" # openapi, mcp
auth_type: Optional[str]
key: Optional[str]
config: Optional[dict]
@ -114,8 +168,29 @@ async def set_tool_servers_config(
request.app.state.config.TOOL_SERVER_CONNECTIONS = [
connection.model_dump() for connection in form_data.TOOL_SERVER_CONNECTIONS
]
await set_tool_servers(request)
for connection in request.app.state.config.TOOL_SERVER_CONNECTIONS:
server_type = connection.get("type", "openapi")
if server_type == "mcp":
server_id = connection.get("info", {}).get("id")
auth_type = connection.get("auth_type", "none")
if auth_type == "oauth_2.1" and server_id:
try:
oauth_client_info = connection.get("info", {}).get(
"oauth_client_info", ""
)
oauth_client_info = decrypt_data(oauth_client_info)
await request.app.state.oauth_client_manager.add_client(
f"{server_type}:{server_id}",
OAuthClientInformationFull(**oauth_client_info),
)
except Exception as e:
log.debug(f"Failed to add OAuth client for MCP tool server: {e}")
continue
return {
"TOOL_SERVER_CONNECTIONS": request.app.state.config.TOOL_SERVER_CONNECTIONS,
}
@ -129,19 +204,106 @@ async def verify_tool_servers_config(
Verify the connection to the tool server.
"""
try:
if form_data.type == "mcp":
if form_data.auth_type == "oauth_2.1":
discovery_urls = get_discovery_urls(form_data.url)
for discovery_url in discovery_urls:
log.debug(
f"Trying to fetch OAuth 2.1 discovery document from {discovery_url}"
)
async with aiohttp.ClientSession() as session:
async with session.get(
discovery_url
) as oauth_server_metadata_response:
if oauth_server_metadata_response.status == 200:
try:
oauth_server_metadata = (
OAuthMetadata.model_validate(
await oauth_server_metadata_response.json()
)
)
return {
"status": True,
"oauth_server_metadata": oauth_server_metadata.model_dump(
mode="json"
),
}
except Exception as e:
log.info(
f"Failed to parse OAuth 2.1 discovery document: {e}"
)
raise HTTPException(
status_code=400,
detail=f"Failed to parse OAuth 2.1 discovery document from {discovery_urls[0]}",
)
raise HTTPException(
status_code=400,
detail=f"Failed to fetch OAuth 2.1 discovery document from {discovery_urls}",
)
else:
try:
client = MCPClient()
headers = None
token = None
if form_data.auth_type == "bearer":
token = form_data.key
elif form_data.auth_type == "session":
token = request.state.token.credentials
elif form_data.auth_type == "system_oauth":
try:
if request.cookies.get("oauth_session_id", None):
token = await request.app.state.oauth_manager.get_oauth_token(
user.id,
request.cookies.get("oauth_session_id", None),
)
except Exception as e:
pass
if token:
headers = {"Authorization": f"Bearer {token}"}
await client.connect(form_data.url, headers=headers)
specs = await client.list_tool_specs()
return {
"status": True,
"specs": specs,
}
except Exception as e:
log.debug(f"Failed to create MCP client: {e}")
raise HTTPException(
status_code=400,
detail=f"Failed to create MCP client",
)
finally:
if client:
await client.disconnect()
else: # openapi
token = None
if form_data.auth_type == "bearer":
token = form_data.key
elif form_data.auth_type == "session":
token = request.state.token.credentials
elif form_data.auth_type == "system_oauth":
try:
if request.cookies.get("oauth_session_id", None):
token = await request.app.state.oauth_manager.get_oauth_token(
user.id,
request.cookies.get("oauth_session_id", None),
)
except Exception as e:
pass
url = get_tool_server_url(form_data.url, form_data.path)
return await get_tool_server_data(token, url)
except HTTPException as e:
raise e
except Exception as e:
+log.debug(f"Failed to connect to the tool server: {e}")
raise HTTPException(
status_code=400,
-detail=f"Failed to connect to the tool server: {str(e)}",
+detail=f"Failed to connect to the tool server",
)

View file

@ -120,11 +120,6 @@ def process_uploaded_file(request, file, file_path, file_item, file_metadata, us
f"File type {file.content_type} is not provided, but trying to process anyway" f"File type {file.content_type} is not provided, but trying to process anyway"
) )
process_file(request, ProcessFileForm(file_id=file_item.id), user=user) process_file(request, ProcessFileForm(file_id=file_item.id), user=user)
Files.update_file_data_by_id(
file_item.id,
{"status": "completed"},
)
except Exception as e: except Exception as e:
log.error(f"Error processing file: {file_item.id}") log.error(f"Error processing file: {file_item.id}")
Files.update_file_data_by_id( Files.update_file_data_by_id(

View file

@ -12,6 +12,7 @@ from open_webui.models.folders import (
FolderForm,
FolderUpdateForm,
FolderModel,
+FolderNameIdResponse,
Folders,
)
from open_webui.models.chats import Chats
@ -44,7 +45,7 @@ router = APIRouter()
############################
-@router.get("/", response_model=list[FolderModel])
+@router.get("/", response_model=list[FolderNameIdResponse])
async def get_folders(user=Depends(get_verified_user)):
folders = Folders.get_folders_by_user_id(user.id)
@ -76,14 +77,6 @@ async def get_folders(user=Depends(get_verified_user)):
return [
{
**folder.model_dump(),
-"items": {
-"chats": [
-{"title": chat.title, "id": chat.id, "updated_at": chat.updated_at}
-for chat in Chats.get_chats_by_folder_id_and_user_id(
-folder.id, user.id
-)
-]
-},
}
for folder in folders
]
@ -262,10 +255,10 @@ async def update_folder_is_expanded_by_id(
async def delete_folder_by_id(
request: Request, id: str, user=Depends(get_verified_user)
):
+if Chats.count_chats_by_folder_id_and_user_id(id, user.id):
chat_delete_permission = has_permission(
user.id, "chat.delete", request.app.state.config.USER_PERMISSIONS
)
if user.role != "admin" and not chat_delete_permission:
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,

View file

@ -148,6 +148,18 @@ async def sync_functions(
content=function.content,
)
if hasattr(function_module, "Valves") and function.valves:
Valves = function_module.Valves
try:
Valves(
**{k: v for k, v in function.valves.items() if v is not None}
)
except Exception as e:
log.exception(
f"Error validating valves for function {function.id}: {e}"
)
raise e
return Functions.sync_functions(user.id, form_data.functions)
except Exception as e:
log.exception(f"Failed to load a function: {e}")
@ -419,8 +431,10 @@ async def update_function_valves_by_id(
try:
form_data = {k: v for k, v in form_data.items() if v is not None}
valves = Valves(**form_data)
-Functions.update_function_valves_by_id(id, valves.model_dump())
-return valves.model_dump()
+valves_dict = valves.model_dump(exclude_unset=True)
+Functions.update_function_valves_by_id(id, valves_dict)
+return valves_dict
except Exception as e:
log.exception(f"Error updating function values by id {id}: {e}")
raise HTTPException(
@ -502,10 +516,11 @@ async def update_function_user_valves_by_id(
try:
form_data = {k: v for k, v in form_data.items() if v is not None}
user_valves = UserValves(**form_data)
+user_valves_dict = user_valves.model_dump(exclude_unset=True)
Functions.update_user_valves_by_id_and_user_id(
-id, user.id, user_valves.model_dump()
+id, user.id, user_valves_dict
)
-return user_valves.model_dump()
+return user_valves_dict
except Exception as e:
log.exception(f"Error updating function user valves by id {id}: {e}")
raise HTTPException(

View file

@ -514,6 +514,7 @@ async def image_generations(
size = form_data.size
width, height = tuple(map(int, size.split("x")))
+model = get_image_model(request)
r = None
try:
@ -531,11 +532,7 @@ async def image_generations(
headers["X-OpenWebUI-User-Role"] = user.role headers["X-OpenWebUI-User-Role"] = user.role
data = { data = {
"model": ( "model": model,
request.app.state.config.IMAGE_GENERATION_MODEL
if request.app.state.config.IMAGE_GENERATION_MODEL != ""
else "dall-e-2"
),
"prompt": form_data.prompt, "prompt": form_data.prompt,
"n": form_data.n, "n": form_data.n,
"size": ( "size": (
@ -584,7 +581,6 @@ async def image_generations(
headers["Content-Type"] = "application/json" headers["Content-Type"] = "application/json"
headers["x-goog-api-key"] = request.app.state.config.IMAGES_GEMINI_API_KEY headers["x-goog-api-key"] = request.app.state.config.IMAGES_GEMINI_API_KEY
model = get_image_model(request)
data = { data = {
"instances": {"prompt": form_data.prompt}, "instances": {"prompt": form_data.prompt},
"parameters": { "parameters": {
@ -640,7 +636,7 @@ async def image_generations(
}
)
res = await comfyui_generate_image(
-request.app.state.config.IMAGE_GENERATION_MODEL,
+model,
form_data,
user.id,
request.app.state.config.COMFYUI_BASE_URL,

View file

@ -1,4 +1,9 @@
from typing import Optional
import io
import base64
import json
import asyncio
import logging
from open_webui.models.models import (
ModelForm,
@ -10,12 +15,22 @@ from open_webui.models.models import (
from pydantic import BaseModel
from open_webui.constants import ERROR_MESSAGES
-from fastapi import APIRouter, Depends, HTTPException, Request, status
+from fastapi import (
APIRouter,
Depends,
HTTPException,
Request,
status,
Response,
)
from fastapi.responses import FileResponse, StreamingResponse
from open_webui.utils.auth import get_admin_user, get_verified_user
from open_webui.utils.access_control import has_access, has_permission
-from open_webui.config import BYPASS_ADMIN_ACCESS_CONTROL
+from open_webui.config import BYPASS_ADMIN_ACCESS_CONTROL, STATIC_DIR
log = logging.getLogger(__name__)
router = APIRouter()
@ -90,6 +105,50 @@ async def export_models(user=Depends(get_admin_user)):
return Models.get_models()
############################
# ImportModels
############################
class ModelsImportForm(BaseModel):
models: list[dict]
@router.post("/import", response_model=bool)
async def import_models(
user: str = Depends(get_admin_user), form_data: ModelsImportForm = (...)
):
try:
data = form_data.models
if isinstance(data, list):
for model_data in data:
# Here, you can add logic to validate model_data if needed
model_id = model_data.get("id")
if model_id:
existing_model = Models.get_model_by_id(model_id)
if existing_model:
# Update existing model
model_data["meta"] = model_data.get("meta", {})
model_data["params"] = model_data.get("params", {})
updated_model = ModelForm(
**{**existing_model.model_dump(), **model_data}
)
Models.update_model_by_id(model_id, updated_model)
else:
# Insert new model
model_data["meta"] = model_data.get("meta", {})
model_data["params"] = model_data.get("params", {})
new_model = ModelForm(**model_data)
Models.insert_new_model(user_id=user.id, form_data=new_model)
return True
else:
raise HTTPException(status_code=400, detail="Invalid JSON format")
except Exception as e:
log.exception(e)
raise HTTPException(status_code=500, detail=str(e))
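
An example request body for the new import endpoint; the model fields are illustrative, and missing "meta"/"params" fall back to empty dicts as in the handler above.

```python
payload = {
    "models": [
        {
            "id": "my-custom-model",                    # existing ids are updated, new ones inserted
            "name": "My Custom Model",                  # illustrative field
            "meta": {"description": "Imported model"},  # defaults to {} if omitted
            "params": {},                               # defaults to {} if omitted
        }
    ]
}
```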
############################
# SyncModels
############################
@ -129,6 +188,39 @@ async def get_model_by_id(id: str, user=Depends(get_verified_user)):
)
###########################
# GetModelById
###########################
@router.get("/model/profile/image")
async def get_model_profile_image(id: str, user=Depends(get_verified_user)):
model = Models.get_model_by_id(id)
if model:
if model.meta.profile_image_url:
if model.meta.profile_image_url.startswith("http"):
return Response(
status_code=status.HTTP_302_FOUND,
headers={"Location": model.meta.profile_image_url},
)
elif model.meta.profile_image_url.startswith("data:image"):
try:
header, base64_data = model.meta.profile_image_url.split(",", 1)
image_data = base64.b64decode(base64_data)
image_buffer = io.BytesIO(image_data)
return StreamingResponse(
image_buffer,
media_type="image/png",
headers={"Content-Disposition": "inline; filename=image.png"},
)
except Exception as e:
pass
return FileResponse(f"{STATIC_DIR}/favicon.png")
else:
return FileResponse(f"{STATIC_DIR}/favicon.png")
############################
# ToggleModelById
############################

View file

@ -48,7 +48,7 @@ async def get_notes(request: Request, user=Depends(get_verified_user)):
"user": UserResponse(**Users.get_user_by_id(note.user_id).model_dump()), "user": UserResponse(**Users.get_user_by_id(note.user_id).model_dump()),
} }
) )
for note in Notes.get_notes_by_user_id(user.id, "write") for note in Notes.get_notes_by_permission(user.id, "write")
] ]
return notes return notes
@ -81,7 +81,9 @@ async def get_note_list(
notes = [
NoteTitleIdResponse(**note.model_dump())
-for note in Notes.get_notes_by_user_id(user.id, "write", skip=skip, limit=limit)
+for note in Notes.get_notes_by_permission(
+user.id, "write", skip=skip, limit=limit
+)
]
return notes
@ -178,6 +180,18 @@ async def update_note_by_id(
status_code=status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.DEFAULT()
)
# Check if user can share publicly
if (
user.role != "admin"
and form_data.access_control == None
and not has_permission(
user.id,
"sharing.public_notes",
request.app.state.config.USER_PERMISSIONS,
)
):
form_data.access_control = {}
try:
note = Notes.update_note_by_id(id, form_data)
await sio.emit(

View file

@ -1020,6 +1020,10 @@ class GenerateEmbedForm(BaseModel):
options: Optional[dict] = None
keep_alive: Optional[Union[int, str]] = None
model_config = ConfigDict(
extra="allow",
)
@router.post("/api/embed") @router.post("/api/embed")
@router.post("/api/embed/{url_idx}") @router.post("/api/embed/{url_idx}")
@ -1694,13 +1698,15 @@ async def download_file_stream(
yield f'data: {{"progress": {progress}, "completed": {current_size}, "total": {total_size}}}\n\n'
if done:
-file.seek(0)
+file.close()
+with open(file_path, "rb") as file:
chunk_size = 1024 * 1024 * 2
hashed = calculate_sha256(file, chunk_size)
file.seek(0)
url = f"{ollama_url}/api/blobs/sha256:{hashed}"
-response = requests.post(url, data=file)
+with requests.Session() as session:
+response = session.post(url, data=file, timeout=30)
if response.ok:
res = {

View file

@ -9,6 +9,8 @@ from aiocache import cached
import requests
from urllib.parse import quote
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from fastapi import Depends, HTTPException, Request, APIRouter
from fastapi.responses import (
FileResponse,
@ -119,7 +121,7 @@ def openai_reasoning_model_handler(payload):
return payload
-def get_headers_and_cookies(
+async def get_headers_and_cookies(
request: Request,
url,
key=None,
@ -172,7 +174,7 @@ def get_headers_and_cookies(
oauth_token = None
try:
if request.cookies.get("oauth_session_id", None):
-oauth_token = request.app.state.oauth_manager.get_oauth_token(
+oauth_token = await request.app.state.oauth_manager.get_oauth_token(
user.id,
request.cookies.get("oauth_session_id", None),
)
@ -182,12 +184,30 @@ def get_headers_and_cookies(
if oauth_token:
token = f"{oauth_token.get('access_token', '')}"
+elif auth_type in ("azure_ad", "microsoft_entra_id"):
+token = get_microsoft_entra_id_access_token()
if token:
headers["Authorization"] = f"Bearer {token}"
return headers, cookies
def get_microsoft_entra_id_access_token():
"""
Get Microsoft Entra ID access token using DefaultAzureCredential for Azure OpenAI.
Returns the token string or None if authentication fails.
"""
try:
token_provider = get_bearer_token_provider(
DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)
return token_provider()
except Exception as e:
log.error(f"Error getting Microsoft Entra ID access token: {e}")
return None
##########################################
#
# API routes
@ -285,7 +305,7 @@ async def speech(request: Request, user=Depends(get_verified_user)):
request.app.state.config.OPENAI_API_CONFIGS.get(url, {}),  # Legacy support
)
-headers, cookies = get_headers_and_cookies(
+headers, cookies = await get_headers_and_cookies(
request, url, key, api_config, user=user
)
@ -550,7 +570,7 @@ async def get_models(
timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST),
) as session:
try:
-headers, cookies = get_headers_and_cookies(
+headers, cookies = await get_headers_and_cookies(
request, url, key, api_config, user=user
)
@ -636,14 +656,17 @@ async def verify_connection(
timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST),
) as session:
try:
-headers, cookies = get_headers_and_cookies(
+headers, cookies = await get_headers_and_cookies(
request, url, key, api_config, user=user
)
if api_config.get("azure", False):
+# Only set api-key header if not using Azure Entra ID authentication
+auth_type = api_config.get("auth_type", "bearer")
+if auth_type not in ("azure_ad", "microsoft_entra_id"):
headers["api-key"] = key
api_version = api_config.get("api_version", "") or "2023-03-15-preview"
async with session.get(
url=f"{url}/openai/models?api-version={api_version}",
headers=headers,
@ -878,14 +901,19 @@ async def generate_chat_completion(
convert_logit_bias_input_to_json(payload["logit_bias"]) convert_logit_bias_input_to_json(payload["logit_bias"])
) )
headers, cookies = get_headers_and_cookies( headers, cookies = await get_headers_and_cookies(
request, url, key, api_config, metadata, user=user request, url, key, api_config, metadata, user=user
) )
if api_config.get("azure", False): if api_config.get("azure", False):
api_version = api_config.get("api_version", "2023-03-15-preview") api_version = api_config.get("api_version", "2023-03-15-preview")
request_url, payload = convert_to_azure_payload(url, payload, api_version) request_url, payload = convert_to_azure_payload(url, payload, api_version)
# Only set api-key header if not using Azure Entra ID authentication
auth_type = api_config.get("auth_type", "bearer")
if auth_type not in ("azure_ad", "microsoft_entra_id"):
headers["api-key"] = key headers["api-key"] = key
headers["api-version"] = api_version headers["api-version"] = api_version
request_url = f"{request_url}/chat/completions?api-version={api_version}" request_url = f"{request_url}/chat/completions?api-version={api_version}"
else: else:
@ -982,7 +1010,9 @@ async def embeddings(request: Request, form_data: dict, user):
session = None
streaming = False
-headers, cookies = get_headers_and_cookies(request, url, key, api_config, user=user)
+headers, cookies = await get_headers_and_cookies(
+request, url, key, api_config, user=user
+)
try:
session = aiohttp.ClientSession(trust_env=True)
r = await session.request(
@ -1052,13 +1082,18 @@ async def proxy(path: str, request: Request, user=Depends(get_verified_user)):
streaming = False
try:
-headers, cookies = get_headers_and_cookies(
+headers, cookies = await get_headers_and_cookies(
request, url, key, api_config, user=user
)
if api_config.get("azure", False):
api_version = api_config.get("api_version", "2023-03-15-preview")
+# Only set api-key header if not using Azure Entra ID authentication
+auth_type = api_config.get("auth_type", "bearer")
+if auth_type not in ("azure_ad", "microsoft_entra_id"):
headers["api-key"] = key
headers["api-version"] = api_version
payload = json.loads(body)
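
A hypothetical per-connection entry illustrating the new auth_type handling above; "azure", "api_version", and "auth_type" are the keys these hunks actually read, and everything else about the entry is an assumption.

```python
OPENAI_API_CONFIGS = {
    "https://my-resource.openai.azure.com": {  # placeholder Azure OpenAI endpoint
        "azure": True,
        "api_version": "2023-03-15-preview",
        "auth_type": "microsoft_entra_id",     # skips the api-key header and uses an Entra ID token
    }
}
```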

View file

@ -50,6 +50,8 @@ from open_webui.retrieval.loaders.youtube import YoutubeLoader
# Web search engines
from open_webui.retrieval.web.main import SearchResult
from open_webui.retrieval.web.utils import get_web_loader
+from open_webui.retrieval.web.ollama import search_ollama_cloud
+from open_webui.retrieval.web.perplexity_search import search_perplexity_search
from open_webui.retrieval.web.brave import search_brave
from open_webui.retrieval.web.kagi import search_kagi
from open_webui.retrieval.web.mojeek import search_mojeek
@ -81,6 +83,7 @@ from open_webui.retrieval.utils import (
query_doc,
query_doc_with_hybrid_search,
)
+from open_webui.retrieval.vector.utils import filter_metadata
from open_webui.utils.misc import (
calculate_sha256_string,
)
@ -431,6 +434,7 @@ async def get_rag_config(request: Request, user=Depends(get_admin_user)):
"EXTERNAL_DOCUMENT_LOADER_API_KEY": request.app.state.config.EXTERNAL_DOCUMENT_LOADER_API_KEY, "EXTERNAL_DOCUMENT_LOADER_API_KEY": request.app.state.config.EXTERNAL_DOCUMENT_LOADER_API_KEY,
"TIKA_SERVER_URL": request.app.state.config.TIKA_SERVER_URL, "TIKA_SERVER_URL": request.app.state.config.TIKA_SERVER_URL,
"DOCLING_SERVER_URL": request.app.state.config.DOCLING_SERVER_URL, "DOCLING_SERVER_URL": request.app.state.config.DOCLING_SERVER_URL,
"DOCLING_PARAMS": request.app.state.config.DOCLING_PARAMS,
"DOCLING_DO_OCR": request.app.state.config.DOCLING_DO_OCR, "DOCLING_DO_OCR": request.app.state.config.DOCLING_DO_OCR,
"DOCLING_FORCE_OCR": request.app.state.config.DOCLING_FORCE_OCR, "DOCLING_FORCE_OCR": request.app.state.config.DOCLING_FORCE_OCR,
"DOCLING_OCR_ENGINE": request.app.state.config.DOCLING_OCR_ENGINE, "DOCLING_OCR_ENGINE": request.app.state.config.DOCLING_OCR_ENGINE,
@ -474,6 +478,7 @@ async def get_rag_config(request: Request, user=Depends(get_admin_user)):
"WEB_SEARCH_DOMAIN_FILTER_LIST": request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST, "WEB_SEARCH_DOMAIN_FILTER_LIST": request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST,
"BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL": request.app.state.config.BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL, "BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL": request.app.state.config.BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL,
"BYPASS_WEB_SEARCH_WEB_LOADER": request.app.state.config.BYPASS_WEB_SEARCH_WEB_LOADER, "BYPASS_WEB_SEARCH_WEB_LOADER": request.app.state.config.BYPASS_WEB_SEARCH_WEB_LOADER,
"OLLAMA_CLOUD_WEB_SEARCH_API_KEY": request.app.state.config.OLLAMA_CLOUD_WEB_SEARCH_API_KEY,
"SEARXNG_QUERY_URL": request.app.state.config.SEARXNG_QUERY_URL, "SEARXNG_QUERY_URL": request.app.state.config.SEARXNG_QUERY_URL,
"YACY_QUERY_URL": request.app.state.config.YACY_QUERY_URL, "YACY_QUERY_URL": request.app.state.config.YACY_QUERY_URL,
"YACY_USERNAME": request.app.state.config.YACY_USERNAME, "YACY_USERNAME": request.app.state.config.YACY_USERNAME,
@ -530,6 +535,7 @@ class WebConfig(BaseModel):
WEB_SEARCH_DOMAIN_FILTER_LIST: Optional[List[str]] = [] WEB_SEARCH_DOMAIN_FILTER_LIST: Optional[List[str]] = []
BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL: Optional[bool] = None BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL: Optional[bool] = None
BYPASS_WEB_SEARCH_WEB_LOADER: Optional[bool] = None BYPASS_WEB_SEARCH_WEB_LOADER: Optional[bool] = None
OLLAMA_CLOUD_WEB_SEARCH_API_KEY: Optional[str] = None
SEARXNG_QUERY_URL: Optional[str] = None SEARXNG_QUERY_URL: Optional[str] = None
YACY_QUERY_URL: Optional[str] = None YACY_QUERY_URL: Optional[str] = None
YACY_USERNAME: Optional[str] = None YACY_USERNAME: Optional[str] = None
@ -590,6 +596,7 @@ class ConfigForm(BaseModel):
# Content extraction settings # Content extraction settings
CONTENT_EXTRACTION_ENGINE: Optional[str] = None CONTENT_EXTRACTION_ENGINE: Optional[str] = None
PDF_EXTRACT_IMAGES: Optional[bool] = None PDF_EXTRACT_IMAGES: Optional[bool] = None
DATALAB_MARKER_API_KEY: Optional[str] = None DATALAB_MARKER_API_KEY: Optional[str] = None
DATALAB_MARKER_API_BASE_URL: Optional[str] = None DATALAB_MARKER_API_BASE_URL: Optional[str] = None
DATALAB_MARKER_ADDITIONAL_CONFIG: Optional[str] = None DATALAB_MARKER_ADDITIONAL_CONFIG: Optional[str] = None
@ -601,11 +608,13 @@ class ConfigForm(BaseModel):
DATALAB_MARKER_FORMAT_LINES: Optional[bool] = None DATALAB_MARKER_FORMAT_LINES: Optional[bool] = None
DATALAB_MARKER_USE_LLM: Optional[bool] = None DATALAB_MARKER_USE_LLM: Optional[bool] = None
DATALAB_MARKER_OUTPUT_FORMAT: Optional[str] = None DATALAB_MARKER_OUTPUT_FORMAT: Optional[str] = None
EXTERNAL_DOCUMENT_LOADER_URL: Optional[str] = None EXTERNAL_DOCUMENT_LOADER_URL: Optional[str] = None
EXTERNAL_DOCUMENT_LOADER_API_KEY: Optional[str] = None EXTERNAL_DOCUMENT_LOADER_API_KEY: Optional[str] = None
TIKA_SERVER_URL: Optional[str] = None TIKA_SERVER_URL: Optional[str] = None
DOCLING_SERVER_URL: Optional[str] = None DOCLING_SERVER_URL: Optional[str] = None
DOCLING_PARAMS: Optional[dict] = None
DOCLING_DO_OCR: Optional[bool] = None DOCLING_DO_OCR: Optional[bool] = None
DOCLING_FORCE_OCR: Optional[bool] = None DOCLING_FORCE_OCR: Optional[bool] = None
DOCLING_OCR_ENGINE: Optional[str] = None DOCLING_OCR_ENGINE: Optional[str] = None
@ -782,6 +791,11 @@ async def update_rag_config(
if form_data.DOCLING_SERVER_URL is not None if form_data.DOCLING_SERVER_URL is not None
else request.app.state.config.DOCLING_SERVER_URL else request.app.state.config.DOCLING_SERVER_URL
) )
request.app.state.config.DOCLING_PARAMS = (
form_data.DOCLING_PARAMS
if form_data.DOCLING_PARAMS is not None
else request.app.state.config.DOCLING_PARAMS
)
request.app.state.config.DOCLING_DO_OCR = ( request.app.state.config.DOCLING_DO_OCR = (
form_data.DOCLING_DO_OCR form_data.DOCLING_DO_OCR
if form_data.DOCLING_DO_OCR is not None if form_data.DOCLING_DO_OCR is not None
@ -993,6 +1007,9 @@ async def update_rag_config(
request.app.state.config.BYPASS_WEB_SEARCH_WEB_LOADER = ( request.app.state.config.BYPASS_WEB_SEARCH_WEB_LOADER = (
form_data.web.BYPASS_WEB_SEARCH_WEB_LOADER form_data.web.BYPASS_WEB_SEARCH_WEB_LOADER
) )
request.app.state.config.OLLAMA_CLOUD_WEB_SEARCH_API_KEY = (
form_data.web.OLLAMA_CLOUD_WEB_SEARCH_API_KEY
)
request.app.state.config.SEARXNG_QUERY_URL = form_data.web.SEARXNG_QUERY_URL request.app.state.config.SEARXNG_QUERY_URL = form_data.web.SEARXNG_QUERY_URL
request.app.state.config.YACY_QUERY_URL = form_data.web.YACY_QUERY_URL request.app.state.config.YACY_QUERY_URL = form_data.web.YACY_QUERY_URL
request.app.state.config.YACY_USERNAME = form_data.web.YACY_USERNAME request.app.state.config.YACY_USERNAME = form_data.web.YACY_USERNAME
@ -1101,6 +1118,7 @@ async def update_rag_config(
"EXTERNAL_DOCUMENT_LOADER_API_KEY": request.app.state.config.EXTERNAL_DOCUMENT_LOADER_API_KEY, "EXTERNAL_DOCUMENT_LOADER_API_KEY": request.app.state.config.EXTERNAL_DOCUMENT_LOADER_API_KEY,
"TIKA_SERVER_URL": request.app.state.config.TIKA_SERVER_URL, "TIKA_SERVER_URL": request.app.state.config.TIKA_SERVER_URL,
"DOCLING_SERVER_URL": request.app.state.config.DOCLING_SERVER_URL, "DOCLING_SERVER_URL": request.app.state.config.DOCLING_SERVER_URL,
"DOCLING_PARAMS": request.app.state.config.DOCLING_PARAMS,
"DOCLING_DO_OCR": request.app.state.config.DOCLING_DO_OCR, "DOCLING_DO_OCR": request.app.state.config.DOCLING_DO_OCR,
"DOCLING_FORCE_OCR": request.app.state.config.DOCLING_FORCE_OCR, "DOCLING_FORCE_OCR": request.app.state.config.DOCLING_FORCE_OCR,
"DOCLING_OCR_ENGINE": request.app.state.config.DOCLING_OCR_ENGINE, "DOCLING_OCR_ENGINE": request.app.state.config.DOCLING_OCR_ENGINE,
@ -1144,6 +1162,7 @@ async def update_rag_config(
"WEB_SEARCH_DOMAIN_FILTER_LIST": request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST, "WEB_SEARCH_DOMAIN_FILTER_LIST": request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST,
"BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL": request.app.state.config.BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL, "BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL": request.app.state.config.BYPASS_WEB_SEARCH_EMBEDDING_AND_RETRIEVAL,
"BYPASS_WEB_SEARCH_WEB_LOADER": request.app.state.config.BYPASS_WEB_SEARCH_WEB_LOADER, "BYPASS_WEB_SEARCH_WEB_LOADER": request.app.state.config.BYPASS_WEB_SEARCH_WEB_LOADER,
"OLLAMA_CLOUD_WEB_SEARCH_API_KEY": request.app.state.config.OLLAMA_CLOUD_WEB_SEARCH_API_KEY,
"SEARXNG_QUERY_URL": request.app.state.config.SEARXNG_QUERY_URL, "SEARXNG_QUERY_URL": request.app.state.config.SEARXNG_QUERY_URL,
"YACY_QUERY_URL": request.app.state.config.YACY_QUERY_URL, "YACY_QUERY_URL": request.app.state.config.YACY_QUERY_URL,
"YACY_USERNAME": request.app.state.config.YACY_USERNAME, "YACY_USERNAME": request.app.state.config.YACY_USERNAME,
@ -1412,8 +1431,13 @@ def process_file(
form_data: ProcessFileForm, form_data: ProcessFileForm,
user=Depends(get_verified_user), user=Depends(get_verified_user),
): ):
try: if user.role == "admin":
file = Files.get_file_by_id(form_data.file_id) file = Files.get_file_by_id(form_data.file_id)
else:
file = Files.get_file_by_id_and_user_id(form_data.file_id, user.id)
if file:
try:
collection_name = form_data.collection_name collection_name = form_data.collection_name
@ -1426,7 +1450,9 @@ def process_file(
try: try:
# /files/{file_id}/data/content/update # /files/{file_id}/data/content/update
VECTOR_DB_CLIENT.delete_collection(collection_name=f"file-{file.id}") VECTOR_DB_CLIENT.delete_collection(
collection_name=f"file-{file.id}"
)
except: except:
# Audio file upload pipeline # Audio file upload pipeline
pass pass
@ -1511,6 +1537,7 @@ def process_file(
"picture_description_mode": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_MODE, "picture_description_mode": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_MODE,
"picture_description_local": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL, "picture_description_local": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL,
"picture_description_api": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API, "picture_description_api": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API,
**request.app.state.config.DOCLING_PARAMS,
}, },
PDF_EXTRACT_IMAGES=request.app.state.config.PDF_EXTRACT_IMAGES, PDF_EXTRACT_IMAGES=request.app.state.config.PDF_EXTRACT_IMAGES,
DOCUMENT_INTELLIGENCE_ENDPOINT=request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT, DOCUMENT_INTELLIGENCE_ENDPOINT=request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT,
@ -1525,7 +1552,7 @@ def process_file(
Document( Document(
page_content=doc.page_content, page_content=doc.page_content,
metadata={ metadata={
**doc.metadata, **filter_metadata(doc.metadata),
"name": file.filename, "name": file.filename,
"created_by": file.user_id, "created_by": file.user_id,
"file_id": file.id, "file_id": file.id,
@ -1589,17 +1616,29 @@ def process_file(
}, },
) )
Files.update_file_data_by_id(
file.id,
{"status": "completed"},
)
return { return {
"status": True, "status": True,
"collection_name": collection_name, "collection_name": collection_name,
"filename": file.filename, "filename": file.filename,
"content": text_content, "content": text_content,
} }
else:
raise Exception("Error saving document to vector database")
except Exception as e: except Exception as e:
raise e raise e
except Exception as e: except Exception as e:
log.exception(e) log.exception(e)
Files.update_file_data_by_id(
file.id,
{"status": "failed"},
)
if "No pandoc was found" in str(e): if "No pandoc was found" in str(e):
raise HTTPException( raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST, status_code=status.HTTP_400_BAD_REQUEST,
@ -1611,6 +1650,11 @@ def process_file(
detail=str(e), detail=str(e),
) )
else:
raise HTTPException(
status_code=status.HTTP_404_NOT_FOUND, detail=ERROR_MESSAGES.NOT_FOUND
)
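The reworked `process_file` branches on role before touching the file: admins may process any file, other users only their own, and a missing record becomes a 404. A small sketch of that lookup pattern with stand-in data structures (hypothetical names; the real code goes through `Files.get_file_by_id*`):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class FileRecord:
    id: str
    user_id: str


FILES = {"f1": FileRecord(id="f1", user_id="u1")}  # stand-in for the Files table


def lookup_file(file_id: str, user_id: str, role: str) -> Optional[FileRecord]:
    file = FILES.get(file_id)
    if file is None:
        return None
    # Admins may process any file; other users only their own.
    if role == "admin" or file.user_id == user_id:
        return file
    return None


assert lookup_file("f1", "u2", "admin") is not None
assert lookup_file("f1", "u2", "user") is None  # translates to the 404 above
```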
class ProcessTextForm(BaseModel): class ProcessTextForm(BaseModel):
name: str name: str
@ -1825,7 +1869,25 @@ async def search_web(request: Request, engine: str, query: str) -> list[SearchRe
logging.info(f"search_web: {engine} query: {query}") logging.info(f"search_web: {engine} query: {query}")
# TODO: add playwright to search the web # TODO: add playwright to search the web
if engine == "searxng": if engine == "ollama_cloud":
return search_ollama_cloud(
"https://ollama.com",
request.app.state.config.OLLAMA_CLOUD_WEB_SEARCH_API_KEY,
query,
request.app.state.config.WEB_SEARCH_RESULT_COUNT,
request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST,
)
elif engine == "perplexity_search":
if request.app.state.config.PERPLEXITY_API_KEY:
return search_perplexity_search(
request.app.state.config.PERPLEXITY_API_KEY,
query,
request.app.state.config.WEB_SEARCH_RESULT_COUNT,
request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST,
)
else:
raise Exception("No PERPLEXITY_API_KEY found in environment variables")
elif engine == "searxng":
if request.app.state.config.SEARXNG_QUERY_URL: if request.app.state.config.SEARXNG_QUERY_URL:
return search_searxng( return search_searxng(
request.app.state.config.SEARXNG_QUERY_URL, request.app.state.config.SEARXNG_QUERY_URL,
@ -2059,7 +2121,7 @@ async def process_web_search(
result_items = [] result_items = []
try: try:
logging.info( logging.debug(
f"trying to web search with {request.app.state.config.WEB_SEARCH_ENGINE, form_data.queries}" f"trying to web search with {request.app.state.config.WEB_SEARCH_ENGINE, form_data.queries}"
) )
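The `search_web` dispatcher above grows by one `elif` per engine (`ollama_cloud` and `perplexity_search` being the new ones). A compact sketch of an equivalent table-driven dispatch, using hypothetical local stand-ins for the engine functions rather than the real handlers, which each take engine-specific arguments as the diff shows:

```python
from typing import Callable


def fake_ollama_cloud(query: str, count: int) -> list[dict]:
    return [{"engine": "ollama_cloud", "query": query}][:count]


def fake_searxng(query: str, count: int) -> list[dict]:
    return [{"engine": "searxng", "query": query}][:count]


ENGINES: dict[str, Callable[[str, int], list[dict]]] = {
    "ollama_cloud": fake_ollama_cloud,
    "searxng": fake_searxng,
}


def search_web(engine: str, query: str, count: int = 3) -> list[dict]:
    handler = ENGINES.get(engine)
    if handler is None:
        raise Exception(f"Unsupported web search engine: {engine}")
    return handler(query, count)


print(search_web("ollama_cloud", "open webui rate limit"))
```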

View file

@ -9,6 +9,7 @@ from pydantic import BaseModel, HttpUrl
from fastapi import APIRouter, Depends, HTTPException, Request, status from fastapi import APIRouter, Depends, HTTPException, Request, status
from open_webui.models.oauth_sessions import OAuthSessions
from open_webui.models.tools import ( from open_webui.models.tools import (
ToolForm, ToolForm,
ToolModel, ToolModel,
@ -16,7 +17,11 @@ from open_webui.models.tools import (
ToolUserResponse, ToolUserResponse,
Tools, Tools,
) )
from open_webui.utils.plugin import load_tool_module_by_id, replace_imports from open_webui.utils.plugin import (
load_tool_module_by_id,
replace_imports,
get_tool_module_from_cache,
)
from open_webui.utils.tools import get_tool_specs from open_webui.utils.tools import get_tool_specs
from open_webui.utils.auth import get_admin_user, get_verified_user from open_webui.utils.auth import get_admin_user, get_verified_user
from open_webui.utils.access_control import has_access, has_permission from open_webui.utils.access_control import has_access, has_permission
@ -34,6 +39,14 @@ log.setLevel(SRC_LOG_LEVELS["MAIN"])
router = APIRouter() router = APIRouter()
def get_tool_module(request, tool_id, load_from_db=True):
"""
Get the tool module by its ID.
"""
tool_module, _ = get_tool_module_from_cache(request, tool_id, load_from_db)
return tool_module
############################ ############################
# GetTools # GetTools
############################ ############################
@ -41,8 +54,21 @@ router = APIRouter()
@router.get("/", response_model=list[ToolUserResponse]) @router.get("/", response_model=list[ToolUserResponse])
async def get_tools(request: Request, user=Depends(get_verified_user)): async def get_tools(request: Request, user=Depends(get_verified_user)):
tools = Tools.get_tools() tools = []
# Local Tools
for tool in Tools.get_tools():
tool_module = get_tool_module(request, tool.id)
tools.append(
ToolUserResponse(
**{
**tool.model_dump(),
"has_user_valves": hasattr(tool_module, "UserValves"),
}
)
)
# OpenAPI Tool Servers
for server in await get_tool_servers(request): for server in await get_tool_servers(request):
tools.append( tools.append(
ToolUserResponse( ToolUserResponse(
@ -68,6 +94,50 @@ async def get_tools(request: Request, user=Depends(get_verified_user)):
) )
) )
# MCP Tool Servers
for server in request.app.state.config.TOOL_SERVER_CONNECTIONS:
if server.get("type", "openapi") == "mcp":
server_id = server.get("info", {}).get("id")
auth_type = server.get("auth_type", "none")
session_token = None
if auth_type == "oauth_2.1":
splits = server_id.split(":")
server_id = splits[-1] if len(splits) > 1 else server_id
session_token = (
await request.app.state.oauth_client_manager.get_oauth_token(
user.id, f"mcp:{server_id}"
)
)
tools.append(
ToolUserResponse(
**{
"id": f"server:mcp:{server.get('info', {}).get('id')}",
"user_id": f"server:mcp:{server.get('info', {}).get('id')}",
"name": server.get("info", {}).get("name", "MCP Tool Server"),
"meta": {
"description": server.get("info", {}).get(
"description", ""
),
},
"access_control": server.get("config", {}).get(
"access_control", None
),
"updated_at": int(time.time()),
"created_at": int(time.time()),
**(
{
"authenticated": session_token is not None,
}
if auth_type == "oauth_2.1"
else {}
),
}
)
)
if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL: if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
# Admin can see all tools # Admin can see all tools
return tools return tools
@ -462,8 +532,9 @@ async def update_tools_valves_by_id(
try: try:
form_data = {k: v for k, v in form_data.items() if v is not None} form_data = {k: v for k, v in form_data.items() if v is not None}
valves = Valves(**form_data) valves = Valves(**form_data)
Tools.update_tool_valves_by_id(id, valves.model_dump()) valves_dict = valves.model_dump(exclude_unset=True)
return valves.model_dump() Tools.update_tool_valves_by_id(id, valves_dict)
return valves_dict
except Exception as e: except Exception as e:
log.exception(f"Failed to update tool valves by id {id}: {e}") log.exception(f"Failed to update tool valves by id {id}: {e}")
raise HTTPException( raise HTTPException(
@ -538,10 +609,11 @@ async def update_tools_user_valves_by_id(
try: try:
form_data = {k: v for k, v in form_data.items() if v is not None} form_data = {k: v for k, v in form_data.items() if v is not None}
user_valves = UserValves(**form_data) user_valves = UserValves(**form_data)
user_valves_dict = user_valves.model_dump(exclude_unset=True)
Tools.update_user_valves_by_id_and_user_id( Tools.update_user_valves_by_id_and_user_id(
id, user.id, user_valves.model_dump() id, user.id, user_valves_dict
) )
return user_valves.model_dump() return user_valves_dict
except Exception as e: except Exception as e:
log.exception(f"Failed to update user valves by id {id}: {e}") log.exception(f"Failed to update user valves by id {id}: {e}")
raise HTTPException( raise HTTPException(

View file

@ -18,6 +18,7 @@ from open_webui.models.users import (
UserModel, UserModel,
UserListResponse, UserListResponse,
UserInfoListResponse, UserInfoListResponse,
UserIdNameListResponse,
UserRoleUpdateForm, UserRoleUpdateForm,
Users, Users,
UserSettings, UserSettings,
@ -100,6 +101,23 @@ async def get_all_users(
return Users.get_users() return Users.get_users()
@router.get("/search", response_model=UserIdNameListResponse)
async def search_users(
query: Optional[str] = None,
user=Depends(get_verified_user),
):
limit = PAGE_ITEM_COUNT
page = 1 # Always return the first page for search
skip = (page - 1) * limit
filter = {}
if query:
filter["query"] = query
return Users.get_users(filter=filter, skip=skip, limit=limit)
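A sketch of calling the new user search endpoint with `requests`; the base URL, mount path, and auth header are assumptions to adapt to your deployment. The endpoint always returns the first page, capped at `PAGE_ITEM_COUNT` results:

```python
import requests

BASE_URL = "http://localhost:8080"   # assumed Open WebUI address
TOKEN = "YOUR_API_TOKEN"             # token of any verified user

resp = requests.get(
    f"{BASE_URL}/api/v1/users/search",           # assumed mount path for this router
    params={"query": "alice"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # UserIdNameListResponse: ids and names of matching users
```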
############################ ############################
# User Groups # User Groups
############################ ############################
@ -139,6 +157,7 @@ class SharingPermissions(BaseModel):
public_knowledge: bool = True public_knowledge: bool = True
public_prompts: bool = True public_prompts: bool = True
public_tools: bool = True public_tools: bool = True
public_notes: bool = True
class ChatPermissions(BaseModel): class ChatPermissions(BaseModel):

View file

@ -356,7 +356,7 @@ async def join_note(sid, data):
await sio.enter_room(sid, f"note:{note.id}") await sio.enter_room(sid, f"note:{note.id}")
@sio.on("channel-events") @sio.on("events:channel")
async def channel_events(sid, data): async def channel_events(sid, data):
room = f"channel:{data['channel_id']}" room = f"channel:{data['channel_id']}"
participants = sio.manager.get_participants( participants = sio.manager.get_participants(
@ -373,7 +373,7 @@ async def channel_events(sid, data):
if event_type == "typing": if event_type == "typing":
await sio.emit( await sio.emit(
"channel-events", "events:channel",
{ {
"channel_id": data["channel_id"], "channel_id": data["channel_id"],
"message_id": data.get("message_id", None), "message_id": data.get("message_id", None),
@ -653,12 +653,15 @@ def get_event_emitter(request_info, update_db=True):
) )
) )
chat_id = request_info.get("chat_id", None)
message_id = request_info.get("message_id", None)
emit_tasks = [ emit_tasks = [
sio.emit( sio.emit(
"chat-events", "events",
{ {
"chat_id": request_info.get("chat_id", None), "chat_id": chat_id,
"message_id": request_info.get("message_id", None), "message_id": message_id,
"data": event_data, "data": event_data,
}, },
to=session_id, to=session_id,
@ -667,8 +670,11 @@ def get_event_emitter(request_info, update_db=True):
] ]
await asyncio.gather(*emit_tasks) await asyncio.gather(*emit_tasks)
if (
if update_db: update_db
and message_id
and not request_info.get("chat_id", "").startswith("local:")
):
if "type" in event_data and event_data["type"] == "status": if "type" in event_data and event_data["type"] == "status":
Chats.add_message_status_to_chat_by_id_and_message_id( Chats.add_message_status_to_chat_by_id_and_message_id(
request_info["chat_id"], request_info["chat_id"],
@ -705,6 +711,23 @@ def get_event_emitter(request_info, update_db=True):
}, },
) )
if "type" in event_data and event_data["type"] == "embeds":
message = Chats.get_message_by_id_and_message_id(
request_info["chat_id"],
request_info["message_id"],
)
embeds = event_data.get("data", {}).get("embeds", [])
embeds.extend(message.get("embeds", []))
Chats.upsert_message_to_chat_by_id_and_message_id(
request_info["chat_id"],
request_info["message_id"],
{
"embeds": embeds,
},
)
if "type" in event_data and event_data["type"] == "files": if "type" in event_data and event_data["type"] == "files":
message = Chats.get_message_by_id_and_message_id( message = Chats.get_message_by_id_and_message_id(
request_info["chat_id"], request_info["chat_id"],
@ -747,7 +770,7 @@ def get_event_emitter(request_info, update_db=True):
def get_event_call(request_info): def get_event_call(request_info):
async def __event_caller__(event_data): async def __event_caller__(event_data):
response = await sio.call( response = await sio.call(
"chat-events", "events",
{ {
"chat_id": request_info.get("chat_id", None), "chat_id": request_info.get("chat_id", None),
"message_id": request_info.get("message_id", None), "message_id": request_info.get("message_id", None),

View file

@ -164,7 +164,10 @@ async def stop_task(redis, task_id: str):
# Task successfully canceled # Task successfully canceled
return {"status": True, "message": f"Task {task_id} successfully stopped."} return {"status": True, "message": f"Task {task_id} successfully stopped."}
return {"status": False, "message": f"Failed to stop task {task_id}."} if task.cancelled() or task.done():
return {"status": True, "message": f"Task {task_id} successfully cancelled."}
return {"status": True, "message": f"Cancellation requested for {task_id}."}
async def stop_item_tasks(redis: Redis, item_id: str): async def stop_item_tasks(redis: Redis, item_id: str):

View file

@ -110,9 +110,13 @@ def has_access(
type: str = "write", type: str = "write",
access_control: Optional[dict] = None, access_control: Optional[dict] = None,
user_group_ids: Optional[Set[str]] = None, user_group_ids: Optional[Set[str]] = None,
strict: bool = True,
) -> bool: ) -> bool:
if access_control is None: if access_control is None:
if strict:
return type == "read" return type == "read"
else:
return True
if user_group_ids is None: if user_group_ids is None:
user_groups = Groups.get_groups_by_member_id(user_id) user_groups = Groups.get_groups_by_member_id(user_id)
@ -130,9 +134,10 @@ def has_access(
# Get all users with access to a resource # Get all users with access to a resource
def get_users_with_access( def get_users_with_access(
type: str = "write", access_control: Optional[dict] = None type: str = "write", access_control: Optional[dict] = None
) -> List[UserModel]: ) -> list[UserModel]:
if access_control is None: if access_control is None:
return Users.get_users() result = Users.get_users()
return result.get("users", [])
permission_access = access_control.get(type, {}) permission_access = access_control.get(type, {})
permitted_group_ids = permission_access.get("group_ids", []) permitted_group_ids = permission_access.get("group_ids", [])
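The new `strict` flag only matters when a resource has no `access_control` dict: `strict=True` keeps the previous behaviour (readable by everyone, writable by no one), while `strict=False` treats the missing dict as fully open. A stand-in sketch of just that branch:

```python
from typing import Optional


def has_access_sketch(
    permission_type: str, access_control: Optional[dict], strict: bool = True
) -> bool:
    if access_control is None:
        return permission_type == "read" if strict else True
    # The real implementation consults group and user id lists here.
    return permission_type in access_control


assert has_access_sketch("read", None) is True
assert has_access_sketch("write", None) is False           # strict default
assert has_access_sketch("write", None, strict=False) is True
```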

View file

@ -6,7 +6,7 @@ import hmac
import hashlib import hashlib
import requests import requests
import os import os
import bcrypt
from cryptography.hazmat.primitives.ciphers.aead import AESGCM from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.asymmetric import ed25519 from cryptography.hazmat.primitives.asymmetric import ed25519
@ -38,11 +38,8 @@ from open_webui.env import (
from fastapi import BackgroundTasks, Depends, HTTPException, Request, Response, status from fastapi import BackgroundTasks, Depends, HTTPException, Request, Response, status
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer
from passlib.context import CryptContext
logging.getLogger("passlib").setLevel(logging.ERROR)
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["OAUTH"]) log.setLevel(SRC_LOG_LEVELS["OAUTH"])
@ -155,17 +152,23 @@ def get_license_data(app, key):
bearer_security = HTTPBearer(auto_error=False) bearer_security = HTTPBearer(auto_error=False)
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
def verify_password(plain_password, hashed_password): def get_password_hash(password: str) -> str:
"""Hash a password using bcrypt"""
return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt()).decode("utf-8")
def verify_password(plain_password: str, hashed_password: str) -> bool:
"""Verify a password against its hash"""
return ( return (
pwd_context.verify(plain_password, hashed_password) if hashed_password else None bcrypt.checkpw(
plain_password.encode("utf-8"),
hashed_password.encode("utf-8"),
)
if hashed_password
else None
) )
def get_password_hash(password):
return pwd_context.hash(password)
def create_token(data: dict, expires_delta: Union[timedelta, None] = None) -> str: def create_token(data: dict, expires_delta: Union[timedelta, None] = None) -> str:
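The passlib `CryptContext` is replaced with direct `bcrypt` calls; the round trip looks like this sketch, which mirrors the hash/verify pair above and requires only the `bcrypt` package:

```python
import bcrypt


def get_password_hash(password: str) -> str:
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt()).decode("utf-8")


def verify_password(plain_password: str, hashed_password: str) -> bool:
    return bcrypt.checkpw(
        plain_password.encode("utf-8"), hashed_password.encode("utf-8")
    )


hashed = get_password_hash("s3cret")
assert verify_password("s3cret", hashed)
assert not verify_password("wrong", hashed)
```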

View file

@ -0,0 +1,31 @@
import re
def extract_mentions(message: str, triggerChar: str = "@"):
# Escape triggerChar in case it's a regex special character
triggerChar = re.escape(triggerChar)
pattern = rf"<{triggerChar}([A-Z]):([^|>]+)"
matches = re.findall(pattern, message)
return [{"id_type": id_type, "id": id_value} for id_type, id_value in matches]
def replace_mentions(message: str, triggerChar: str = "@", use_label: bool = True):
"""
Replace mentions in the message with either their label (after the pipe `|`)
or their id if no label exists.
Example:
"<@M:gpt-4.1|GPT-4>" -> "GPT-4" (if use_label=True)
"<@M:gpt-4.1|GPT-4>" -> "gpt-4.1" (if use_label=False)
"""
# Escape triggerChar
triggerChar = re.escape(triggerChar)
def replacer(match):
id_type, id_value, label = match.groups()
return label if use_label and label else id_value
# Regex captures: idType, id, optional label
pattern = rf"<{triggerChar}([A-Z]):([^|>]+)(?:\|([^>]+))?>"
return re.sub(pattern, replacer, message)
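Usage example for the two mention helpers above; the import path is an assumption (adjust to wherever the new module lives in the tree), and the mention syntax is `<@TYPE:id|optional label>`:

```python
from open_webui.utils.mentions import extract_mentions, replace_mentions  # assumed path

message = "Ping <@M:gpt-4.1|GPT-4> and <@U:alice> about the release."

print(extract_mentions(message))
# [{'id_type': 'M', 'id': 'gpt-4.1'}, {'id_type': 'U', 'id': 'alice'}]

print(replace_mentions(message))                   # "Ping GPT-4 and alice about the release."
print(replace_mentions(message, use_label=False))  # "Ping gpt-4.1 and alice about the release."
```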

View file

@ -80,6 +80,7 @@ async def generate_direct_chat_completion(
event_caller = get_event_call(metadata) event_caller = get_event_call(metadata)
channel = f"{user_id}:{session_id}:{request_id}" channel = f"{user_id}:{session_id}:{request_id}"
logging.info(f"WebSocket channel: {channel}")
if form_data.get("stream"): if form_data.get("stream"):
q = asyncio.Queue() q = asyncio.Queue()
@ -121,7 +122,10 @@ async def generate_direct_chat_completion(
yield f"data: {json.dumps(data)}\n\n" yield f"data: {json.dumps(data)}\n\n"
elif isinstance(data, str): elif isinstance(data, str):
yield data if "data:" in data:
yield f"{data}\n\n"
else:
yield f"data: {data}\n\n"
except Exception as e: except Exception as e:
log.debug(f"Error in event generator: {e}") log.debug(f"Error in event generator: {e}")
pass pass

View file

@ -0,0 +1,97 @@
from open_webui.routers.images import (
load_b64_image_data,
upload_image,
)
from fastapi import (
APIRouter,
Depends,
HTTPException,
Request,
UploadFile,
)
from open_webui.routers.files import upload_file_handler
import mimetypes
import base64
import io
def get_image_url_from_base64(request, base64_image_string, metadata, user):
if "data:image/png;base64" in base64_image_string:
image_url = ""
# Extract base64 image data from the line
image_data, content_type = load_b64_image_data(base64_image_string)
if image_data is not None:
image_url = upload_image(
request,
image_data,
content_type,
metadata,
user,
)
return image_url
return None
def load_b64_audio_data(b64_str):
try:
if "," in b64_str:
header, b64_data = b64_str.split(",", 1)
else:
b64_data = b64_str
header = "data:audio/wav;base64"
audio_data = base64.b64decode(b64_data)
content_type = (
header.split(";")[0].split(":")[1] if ";" in header else "audio/wav"
)
return audio_data, content_type
except Exception as e:
print(f"Error decoding base64 audio data: {e}")
return None, None
def upload_audio(request, audio_data, content_type, metadata, user):
audio_format = mimetypes.guess_extension(content_type)
file = UploadFile(
file=io.BytesIO(audio_data),
filename=f"generated-{audio_format}", # will be converted to a unique ID on upload_file
headers={
"content-type": content_type,
},
)
file_item = upload_file_handler(
request,
file=file,
metadata=metadata,
process=False,
user=user,
)
url = request.app.url_path_for("get_file_content_by_id", id=file_item.id)
return url
def get_audio_url_from_base64(request, base64_audio_string, metadata, user):
if "data:audio/wav;base64" in base64_audio_string:
audio_url = ""
# Extract base64 audio data from the line
audio_data, content_type = load_b64_audio_data(base64_audio_string)
if audio_data is not None:
audio_url = upload_audio(
request,
audio_data,
content_type,
metadata,
user,
)
return audio_url
return None
def get_file_url_from_base64(request, base64_file_string, metadata, user):
if "data:image/png;base64" in base64_file_string:
return get_image_url_from_base64(request, base64_file_string, metadata, user)
elif "data:audio/wav;base64" in base64_file_string:
return get_audio_url_from_base64(request, base64_file_string, metadata, user)
return None
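A standalone sketch of the data-URL decoding step these helpers rely on; the real functions additionally upload the decoded bytes and return a file URL, which needs a live request and user:

```python
import base64


def split_data_url(b64_str: str) -> tuple[bytes, str]:
    if "," in b64_str:
        header, b64_data = b64_str.split(",", 1)
    else:
        header, b64_data = "data:audio/wav;base64", b64_str
    content_type = header.split(";")[0].split(":")[1] if ";" in header else "audio/wav"
    return base64.b64decode(b64_data), content_type


data, content_type = split_data_url(
    "data:audio/wav;base64," + base64.b64encode(b"RIFF").decode()
)
assert content_type == "audio/wav" and data == b"RIFF"
```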

View file

@ -127,8 +127,10 @@ async def process_filter_functions(
raise e raise e
# Handle file cleanup for inlet # Handle file cleanup for inlet
if skip_files and "files" in form_data.get("metadata", {}): if skip_files:
del form_data["files"] if "files" in form_data.get("metadata", {}):
del form_data["metadata"]["files"] del form_data["metadata"]["files"]
if "files" in form_data:
del form_data["files"]
return form_data, {} return form_data, {}

View file

@ -0,0 +1,110 @@
import asyncio
from typing import Optional
from contextlib import AsyncExitStack
from mcp import ClientSession
from mcp.client.auth import OAuthClientProvider, TokenStorage
from mcp.client.streamable_http import streamablehttp_client
from mcp.shared.auth import OAuthClientInformationFull, OAuthClientMetadata, OAuthToken
class MCPClient:
def __init__(self):
self.session: Optional[ClientSession] = None
self.exit_stack = AsyncExitStack()
async def connect(self, url: str, headers: Optional[dict] = None):
try:
self._streams_context = streamablehttp_client(url, headers=headers)
transport = await self.exit_stack.enter_async_context(self._streams_context)
read_stream, write_stream, _ = transport
self._session_context = ClientSession(
read_stream, write_stream
) # pylint: disable=W0201
self.session = await self.exit_stack.enter_async_context(
self._session_context
)
await self.session.initialize()
except Exception as e:
await self.disconnect()
raise e
async def list_tool_specs(self) -> Optional[dict]:
if not self.session:
raise RuntimeError("MCP client is not connected.")
result = await self.session.list_tools()
tools = result.tools
tool_specs = []
for tool in tools:
name = tool.name
description = tool.description
inputSchema = tool.inputSchema
# TODO: handle outputSchema if needed
outputSchema = getattr(tool, "outputSchema", None)
tool_specs.append(
{"name": name, "description": description, "parameters": inputSchema}
)
return tool_specs
async def call_tool(
self, function_name: str, function_args: dict
) -> Optional[dict]:
if not self.session:
raise RuntimeError("MCP client is not connected.")
result = await self.session.call_tool(function_name, function_args)
if not result:
raise Exception("No result returned from MCP tool call.")
result_dict = result.model_dump(mode="json")
result_content = result_dict.get("content", {})
if result.isError:
raise Exception(result_content)
else:
return result_content
async def list_resources(self, cursor: Optional[str] = None) -> Optional[dict]:
if not self.session:
raise RuntimeError("MCP client is not connected.")
result = await self.session.list_resources(cursor=cursor)
if not result:
raise Exception("No result returned from MCP list_resources call.")
result_dict = result.model_dump()
resources = result_dict.get("resources", [])
return resources
async def read_resource(self, uri: str) -> Optional[dict]:
if not self.session:
raise RuntimeError("MCP client is not connected.")
result = await self.session.read_resource(uri)
if not result:
raise Exception("No result returned from MCP read_resource call.")
result_dict = result.model_dump()
return result_dict
async def disconnect(self):
# Clean up and close the session
await self.exit_stack.aclose()
async def __aenter__(self):
await self.exit_stack.__aenter__()
return self
async def __aexit__(self, exc_type, exc_value, traceback):
await self.exit_stack.__aexit__(exc_type, exc_value, traceback)
await self.disconnect()
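A sketch of driving the `MCPClient` above as an async context manager; the import path matches the import that appears later in this diff, while the server URL, bearer token, and the assumption that the server exposes at least one tool are placeholders:

```python
import asyncio
from open_webui.utils.mcp.client import MCPClient  # path as imported later in the diff


async def main():
    async with MCPClient() as client:
        await client.connect(
            "https://example.com/mcp",                      # placeholder MCP server URL
            headers={"Authorization": "Bearer YOUR_TOKEN"},  # optional
        )
        specs = await client.list_tool_specs()
        print([spec["name"] for spec in specs])
        # Call the first advertised tool with empty arguments (assumes one exists).
        result = await client.call_tool(specs[0]["name"], {})
        print(result)


asyncio.run(main())
```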

View file

@ -20,9 +20,11 @@ from concurrent.futures import ThreadPoolExecutor
from fastapi import Request, HTTPException from fastapi import Request, HTTPException
from fastapi.responses import HTMLResponse
from starlette.responses import Response, StreamingResponse, JSONResponse from starlette.responses import Response, StreamingResponse, JSONResponse
from open_webui.models.oauth_sessions import OAuthSessions
from open_webui.models.chats import Chats from open_webui.models.chats import Chats
from open_webui.models.folders import Folders from open_webui.models.folders import Folders
from open_webui.models.users import Users from open_webui.models.users import Users
@ -52,6 +54,11 @@ from open_webui.routers.pipelines import (
from open_webui.routers.memories import query_memory, QueryMemoryForm from open_webui.routers.memories import query_memory, QueryMemoryForm
from open_webui.utils.webhook import post_webhook from open_webui.utils.webhook import post_webhook
from open_webui.utils.files import (
get_audio_url_from_base64,
get_file_url_from_base64,
get_image_url_from_base64,
)
from open_webui.models.users import UserModel from open_webui.models.users import UserModel
@ -73,10 +80,12 @@ from open_webui.utils.misc import (
add_or_update_system_message, add_or_update_system_message,
add_or_update_user_message, add_or_update_user_message,
get_last_user_message, get_last_user_message,
get_last_user_message_item,
get_last_assistant_message, get_last_assistant_message,
get_system_message, get_system_message,
prepend_to_first_user_message_content, prepend_to_first_user_message_content,
convert_logit_bias_input_to_json, convert_logit_bias_input_to_json,
get_content_from_message,
) )
from open_webui.utils.tools import get_tools from open_webui.utils.tools import get_tools
from open_webui.utils.plugin import load_function_module_by_id from open_webui.utils.plugin import load_function_module_by_id
@ -86,6 +95,7 @@ from open_webui.utils.filter import (
) )
from open_webui.utils.code_interpreter import execute_code_jupyter from open_webui.utils.code_interpreter import execute_code_jupyter
from open_webui.utils.payload import apply_system_prompt_to_body from open_webui.utils.payload import apply_system_prompt_to_body
from open_webui.utils.mcp.client import MCPClient
from open_webui.config import ( from open_webui.config import (
@ -125,6 +135,149 @@ DEFAULT_SOLUTION_TAGS = [("<|begin_of_solution|>", "<|end_of_solution|>")]
DEFAULT_CODE_INTERPRETER_TAGS = [("<code_interpreter>", "</code_interpreter>")] DEFAULT_CODE_INTERPRETER_TAGS = [("<code_interpreter>", "</code_interpreter>")]
def process_tool_result(
request,
tool_function_name,
tool_result,
tool_type,
direct_tool=False,
metadata=None,
user=None,
):
tool_result_embeds = []
if isinstance(tool_result, HTMLResponse):
content_disposition = tool_result.headers.get("Content-Disposition", "")
if "inline" in content_disposition:
content = tool_result.body.decode("utf-8", "replace")
tool_result_embeds.append(content)
if 200 <= tool_result.status_code < 300:
tool_result = {
"status": "success",
"code": "ui_component",
"message": f"{tool_function_name}: Embedded UI result is active and visible to the user.",
}
elif 400 <= tool_result.status_code < 500:
tool_result = {
"status": "error",
"code": "ui_component",
"message": f"{tool_function_name}: Client error {tool_result.status_code} from embedded UI result.",
}
elif 500 <= tool_result.status_code < 600:
tool_result = {
"status": "error",
"code": "ui_component",
"message": f"{tool_function_name}: Server error {tool_result.status_code} from embedded UI result.",
}
else:
tool_result = {
"status": "error",
"code": "ui_component",
"message": f"{tool_function_name}: Unexpected status code {tool_result.status_code} from embedded UI result.",
}
else:
tool_result = tool_result.body.decode("utf-8", "replace")
elif (tool_type == "external" and isinstance(tool_result, tuple)) or (
direct_tool and isinstance(tool_result, list) and len(tool_result) == 2
):
tool_result, tool_response_headers = tool_result
try:
if not isinstance(tool_response_headers, dict):
tool_response_headers = dict(tool_response_headers)
except Exception as e:
tool_response_headers = {}
log.debug(e)
if tool_response_headers and isinstance(tool_response_headers, dict):
content_disposition = tool_response_headers.get(
"Content-Disposition",
tool_response_headers.get("content-disposition", ""),
)
if "inline" in content_disposition:
content_type = tool_response_headers.get(
"Content-Type",
tool_response_headers.get("content-type", ""),
)
location = tool_response_headers.get(
"Location",
tool_response_headers.get("location", ""),
)
if "text/html" in content_type:
# Display as iframe embed
tool_result_embeds.append(tool_result)
tool_result = {
"status": "success",
"code": "ui_component",
"message": f"{tool_function_name}: Embedded UI result is active and visible to the user.",
}
elif location:
tool_result_embeds.append(location)
tool_result = {
"status": "success",
"code": "ui_component",
"message": f"{tool_function_name}: Embedded UI result is active and visible to the user.",
}
tool_result_files = []
if isinstance(tool_result, list):
if tool_type == "mcp": # MCP
tool_response = []
for item in tool_result:
if isinstance(item, dict):
if item.get("type") == "text":
text = item.get("text", "")
if isinstance(text, str):
try:
text = json.loads(text)
except json.JSONDecodeError:
pass
tool_response.append(text)
elif item.get("type") in ["image", "audio"]:
file_url = get_file_url_from_base64(
request,
f"data:{item.get('mimeType')};base64,{item.get('data', item.get('blob', ''))}",
{
"chat_id": metadata.get("chat_id", None),
"message_id": metadata.get("message_id", None),
"session_id": metadata.get("session_id", None),
"result": item,
},
user,
)
tool_result_files.append(
{
"type": item.get("type", "data"),
"url": file_url,
}
)
tool_result = tool_response[0] if len(tool_response) == 1 else tool_response
else: # OpenAPI
for item in tool_result:
if isinstance(item, str) and item.startswith("data:"):
tool_result_files.append(
{
"type": "data",
"content": item,
}
)
tool_result.remove(item)
if isinstance(tool_result, list):
tool_result = {"results": tool_result}
if isinstance(tool_result, dict) or isinstance(tool_result, list):
tool_result = json.dumps(tool_result, indent=2, ensure_ascii=False)
return tool_result, tool_result_files, tool_result_embeds
async def chat_completion_tools_handler( async def chat_completion_tools_handler(
request: Request, body: dict, extra_params: dict, user: UserModel, models, tools request: Request, body: dict, extra_params: dict, user: UserModel, models, tools
) -> tuple[dict, dict]: ) -> tuple[dict, dict]:
@ -132,7 +285,7 @@ async def chat_completion_tools_handler(
content = None content = None
if hasattr(response, "body_iterator"): if hasattr(response, "body_iterator"):
async for chunk in response.body_iterator: async for chunk in response.body_iterator:
data = json.loads(chunk.decode("utf-8")) data = json.loads(chunk.decode("utf-8", "replace"))
content = data["choices"][0]["message"]["content"] content = data["choices"][0]["message"]["content"]
# Cleanup any remaining background tasks if necessary # Cleanup any remaining background tasks if necessary
@ -144,12 +297,14 @@ async def chat_completion_tools_handler(
def get_tools_function_calling_payload(messages, task_model_id, content): def get_tools_function_calling_payload(messages, task_model_id, content):
user_message = get_last_user_message(messages) user_message = get_last_user_message(messages)
history = "\n".join(
f"{message['role'].upper()}: \"\"\"{message['content']}\"\"\"" recent_messages = messages[-4:] if len(messages) > 4 else messages
for message in messages[::-1][:4] chat_history = "\n".join(
f"{message['role'].upper()}: \"\"\"{get_content_from_message(message)}\"\"\""
for message in recent_messages
) )
prompt = f"History:\n{history}\nQuery: {user_message}" prompt = f"History:\n{chat_history}\nQuery: {user_message}"
return { return {
"model": task_model_id, "model": task_model_id,
@ -162,6 +317,7 @@ async def chat_completion_tools_handler(
} }
event_caller = extra_params["__event_call__"] event_caller = extra_params["__event_call__"]
event_emitter = extra_params["__event_emitter__"]
metadata = extra_params["__metadata__"] metadata = extra_params["__metadata__"]
task_model_id = get_task_model_id( task_model_id = get_task_model_id(
@ -216,8 +372,14 @@ async def chat_completion_tools_handler(
tool_function_params = tool_call.get("parameters", {}) tool_function_params = tool_call.get("parameters", {})
tool = None
tool_type = ""
direct_tool = False
try: try:
tool = tools[tool_function_name] tool = tools[tool_function_name]
tool_type = tool.get("type", "")
direct_tool = tool.get("direct", False)
spec = tool.get("spec", {}) spec = tool.get("spec", {})
allowed_params = ( allowed_params = (
@ -249,18 +411,46 @@ async def chat_completion_tools_handler(
except Exception as e: except Exception as e:
tool_result = str(e) tool_result = str(e)
tool_result_files = [] tool_result, tool_result_files, tool_result_embeds = (
if isinstance(tool_result, list): process_tool_result(
for item in tool_result: request,
# check if string tool_function_name,
if isinstance(item, str) and item.startswith("data:"): tool_result,
tool_result_files.append(item) tool_type,
tool_result.remove(item) direct_tool,
metadata,
user,
)
)
if isinstance(tool_result, dict) or isinstance(tool_result, list): if event_emitter:
tool_result = json.dumps(tool_result, indent=2) if tool_result_files:
await event_emitter(
{
"type": "files",
"data": {
"files": tool_result_files,
},
}
)
if isinstance(tool_result, str): if tool_result_embeds:
await event_emitter(
{
"type": "embeds",
"data": {
"embeds": tool_result_embeds,
},
}
)
print(
f"Tool {tool_function_name} result: {tool_result}",
tool_result_files,
tool_result_embeds,
)
if tool_result:
tool = tools[tool_function_name] tool = tools[tool_function_name]
tool_id = tool.get("tool_id", "") tool_id = tool.get("tool_id", "")
@ -274,18 +464,19 @@ async def chat_completion_tools_handler(
sources.append( sources.append(
{ {
"source": { "source": {
"name": (f"TOOL:{tool_name}"), "name": (f"{tool_name}"),
}, },
"document": [tool_result], "document": [str(tool_result)],
"metadata": [ "metadata": [
{ {
"source": (f"TOOL:{tool_name}"), "source": (f"{tool_name}"),
"parameters": tool_function_params, "parameters": tool_function_params,
} }
], ],
"tool_result": True, "tool_result": True,
} }
) )
# Citation is not enabled for this tool # Citation is not enabled for this tool
body["messages"] = add_or_update_user_message( body["messages"] = add_or_update_user_message(
f"\nTool `{tool_name}` Output: {tool_result}", f"\nTool `{tool_name}` Output: {tool_result}",
@ -631,7 +822,11 @@ async def chat_completion_files_handler(
sources = [] sources = []
if files := body.get("metadata", {}).get("files", None): if files := body.get("metadata", {}).get("files", None):
# Check if all files are in full context mode
all_full_context = all(item.get("context") == "full" for item in files)
queries = [] queries = []
if not all_full_context:
try: try:
queries_response = await generate_queries( queries_response = await generate_queries(
request, request,
@ -663,6 +858,7 @@ async def chat_completion_files_handler(
if len(queries) == 0: if len(queries) == 0:
queries = [get_last_user_message(body["messages"])] queries = [get_last_user_message(body["messages"])]
if not all_full_context:
await __event_emitter__( await __event_emitter__(
{ {
"type": "status", "type": "status",
@ -701,7 +897,8 @@ async def chat_completion_files_handler(
r=request.app.state.config.RELEVANCE_THRESHOLD, r=request.app.state.config.RELEVANCE_THRESHOLD,
hybrid_bm25_weight=request.app.state.config.HYBRID_BM25_WEIGHT, hybrid_bm25_weight=request.app.state.config.HYBRID_BM25_WEIGHT,
hybrid_search=request.app.state.config.ENABLE_RAG_HYBRID_SEARCH, hybrid_search=request.app.state.config.ENABLE_RAG_HYBRID_SEARCH,
full_context=request.app.state.config.RAG_FULL_CONTEXT, full_context=all_full_context
or request.app.state.config.RAG_FULL_CONTEXT,
user=user, user=user,
), ),
) )
@ -807,7 +1004,7 @@ async def process_chat_payload(request, form_data, user, metadata, model):
if system_message: if system_message:
try: try:
form_data = apply_system_prompt_to_body( form_data = apply_system_prompt_to_body(
system_message.get("content"), form_data, metadata, user system_message.get("content"), form_data, metadata, user, replace=True
) )
except: except:
pass pass
@ -818,7 +1015,7 @@ async def process_chat_payload(request, form_data, user, metadata, model):
oauth_token = None oauth_token = None
try: try:
if request.cookies.get("oauth_session_id", None): if request.cookies.get("oauth_session_id", None):
oauth_token = request.app.state.oauth_manager.get_oauth_token( oauth_token = await request.app.state.oauth_manager.get_oauth_token(
user.id, user.id,
request.cookies.get("oauth_session_id", None), request.cookies.get("oauth_session_id", None),
) )
@ -987,14 +1184,113 @@ async def process_chat_payload(request, form_data, user, metadata, model):
# Server side tools # Server side tools
tool_ids = metadata.get("tool_ids", None) tool_ids = metadata.get("tool_ids", None)
# Client side tools # Client side tools
tool_servers = metadata.get("tool_servers", None) direct_tool_servers = metadata.get("tool_servers", None)
log.debug(f"{tool_ids=}") log.debug(f"{tool_ids=}")
log.debug(f"{tool_servers=}") log.debug(f"{direct_tool_servers=}")
tools_dict = {} tools_dict = {}
mcp_clients = {}
mcp_tools_dict = {}
if tool_ids: if tool_ids:
for tool_id in tool_ids:
if tool_id.startswith("server:mcp:"):
try:
server_id = tool_id[len("server:mcp:") :]
mcp_server_connection = None
for (
server_connection
) in request.app.state.config.TOOL_SERVER_CONNECTIONS:
if (
server_connection.get("type", "") == "mcp"
and server_connection.get("info", {}).get("id") == server_id
):
mcp_server_connection = server_connection
break
if not mcp_server_connection:
log.error(f"MCP server with id {server_id} not found")
continue
auth_type = mcp_server_connection.get("auth_type", "")
headers = {}
if auth_type == "bearer":
headers["Authorization"] = (
f"Bearer {mcp_server_connection.get('key', '')}"
)
elif auth_type == "none":
# No authentication
pass
elif auth_type == "session":
headers["Authorization"] = (
f"Bearer {request.state.token.credentials}"
)
elif auth_type == "system_oauth":
oauth_token = extra_params.get("__oauth_token__", None)
if oauth_token:
headers["Authorization"] = (
f"Bearer {oauth_token.get('access_token', '')}"
)
elif auth_type == "oauth_2.1":
try:
splits = server_id.split(":")
server_id = splits[-1] if len(splits) > 1 else server_id
oauth_token = await request.app.state.oauth_client_manager.get_oauth_token(
user.id, f"mcp:{server_id}"
)
if oauth_token:
headers["Authorization"] = (
f"Bearer {oauth_token.get('access_token', '')}"
)
except Exception as e:
log.error(f"Error getting OAuth token: {e}")
oauth_token = None
mcp_clients[server_id] = MCPClient()
await mcp_clients[server_id].connect(
url=mcp_server_connection.get("url", ""),
headers=headers if headers else None,
)
tool_specs = await mcp_clients[server_id].list_tool_specs()
for tool_spec in tool_specs:
def make_tool_function(client, function_name):
async def tool_function(**kwargs):
print(kwargs)
print(client)
print(await client.list_tool_specs())
return await client.call_tool(
function_name,
function_args=kwargs,
)
return tool_function
tool_function = make_tool_function(
mcp_clients[server_id], tool_spec["name"]
)
mcp_tools_dict[f"{server_id}_{tool_spec['name']}"] = {
"spec": {
**tool_spec,
"name": f"{server_id}_{tool_spec['name']}",
},
"callable": tool_function,
"type": "mcp",
"client": mcp_clients[server_id],
"direct": False,
}
except Exception as e:
log.debug(e)
continue
tools_dict = await get_tools( tools_dict = await get_tools(
request, request,
tool_ids, tool_ids,
@ -1006,9 +1302,11 @@ async def process_chat_payload(request, form_data, user, metadata, model):
"__files__": metadata.get("files", []), "__files__": metadata.get("files", []),
}, },
) )
if mcp_tools_dict:
tools_dict = {**tools_dict, **mcp_tools_dict}
if tool_servers: if direct_tool_servers:
for tool_server in tool_servers: for tool_server in direct_tool_servers:
tool_specs = tool_server.pop("specs", []) tool_specs = tool_server.pop("specs", [])
for tool in tool_specs: for tool in tool_specs:
@ -1018,6 +1316,9 @@ async def process_chat_payload(request, form_data, user, metadata, model):
"server": tool_server, "server": tool_server,
} }
if mcp_clients:
metadata["mcp_clients"] = mcp_clients
if tools_dict: if tools_dict:
if metadata.get("params", {}).get("function_calling") == "native": if metadata.get("params", {}).get("function_calling") == "native":
# If the function calling is native, then call the tools function calling handler # If the function calling is native, then call the tools function calling handler
@ -1050,9 +1351,7 @@ async def process_chat_payload(request, form_data, user, metadata, model):
citation_idx_map = {} citation_idx_map = {}
for source in sources: for source in sources:
is_tool_result = source.get("tool_result", False) if "document" in source:
if "document" in source and not is_tool_result:
for document_text, document_metadata in zip( for document_text, document_metadata in zip(
source["document"], source["metadata"] source["document"], source["metadata"]
): ):
@ -1079,25 +1378,14 @@ async def process_chat_payload(request, form_data, user, metadata, model):
raise Exception("No user message found") raise Exception("No user message found")
if context_string != "": if context_string != "":
# Workaround for Ollama 2.0+ system prompt issue form_data["messages"] = add_or_update_user_message(
# TODO: replace with add_or_update_system_message
if model.get("owned_by") == "ollama":
form_data["messages"] = prepend_to_first_user_message_content(
rag_template(
request.app.state.config.RAG_TEMPLATE,
context_string,
prompt,
),
form_data["messages"],
)
else:
form_data["messages"] = add_or_update_system_message(
rag_template( rag_template(
request.app.state.config.RAG_TEMPLATE, request.app.state.config.RAG_TEMPLATE,
context_string, context_string,
prompt, prompt,
), ),
form_data["messages"], form_data["messages"],
append=False,
) )
# If there are citations, add them to the data_items # If there are citations, add them to the data_items
@ -1131,10 +1419,13 @@ async def process_chat_response(
request, response, form_data, user, metadata, model, events, tasks request, response, form_data, user, metadata, model, events, tasks
): ):
async def background_tasks_handler(): async def background_tasks_handler():
message = None
messages = []
if "chat_id" in metadata and not metadata["chat_id"].startswith("local:"):
messages_map = Chats.get_messages_map_by_chat_id(metadata["chat_id"]) messages_map = Chats.get_messages_map_by_chat_id(metadata["chat_id"])
message = messages_map.get(metadata["message_id"]) if messages_map else None message = messages_map.get(metadata["message_id"]) if messages_map else None
if message:
message_list = get_message_list(messages_map, metadata["message_id"]) message_list = get_message_list(messages_map, metadata["message_id"])
# Remove details tags and files from the messages. # Remove details tags and files from the messages.
@ -1167,12 +1458,21 @@ async def process_chat_response(
"content": content, "content": content,
} }
) )
else:
# Local temp chat, get the model and message from the form_data
message = get_last_user_message_item(form_data.get("messages", []))
messages = form_data.get("messages", [])
if message:
message["model"] = form_data.get("model")
if message and "model" in message:
if tasks and messages: if tasks and messages:
if ( if (
TASKS.FOLLOW_UP_GENERATION in tasks TASKS.FOLLOW_UP_GENERATION in tasks
and tasks[TASKS.FOLLOW_UP_GENERATION] and tasks[TASKS.FOLLOW_UP_GENERATION]
): ):
print("Generating follow ups")
res = await generate_follow_ups( res = await generate_follow_ups(
request, request,
{ {
@ -1203,15 +1503,6 @@ async def process_chat_response(
follow_ups = json.loads(follow_ups_string).get( follow_ups = json.loads(follow_ups_string).get(
"follow_ups", [] "follow_ups", []
) )
Chats.upsert_message_to_chat_by_id_and_message_id(
metadata["chat_id"],
metadata["message_id"],
{
"followUps": follow_ups,
},
)
await event_emitter( await event_emitter(
{ {
"type": "chat:message:follow_ups", "type": "chat:message:follow_ups",
@ -1220,10 +1511,26 @@ async def process_chat_response(
}, },
} }
) )
if not metadata.get("chat_id", "").startswith("local:"):
Chats.upsert_message_to_chat_by_id_and_message_id(
metadata["chat_id"],
metadata["message_id"],
{
"followUps": follow_ups,
},
)
except Exception as e: except Exception as e:
pass pass
if TASKS.TITLE_GENERATION in tasks: if not metadata.get("chat_id", "").startswith(
"local:"
): # Only update titles and tags for non-temp chats
if (
TASKS.TITLE_GENERATION in tasks
and tasks[TASKS.TITLE_GENERATION]
):
user_message = get_last_user_message(messages) user_message = get_last_user_message(messages)
if user_message and len(user_message) > 100: if user_message and len(user_message) > 100:
user_message = user_message[:100] + "..." user_message = user_message[:100] + "..."
@ -1246,7 +1553,8 @@ async def process_chat_response(
res.get("choices", [])[0] res.get("choices", [])[0]
.get("message", {}) .get("message", {})
.get( .get(
"content", message.get("content", user_message) "content",
message.get("content", user_message),
) )
) )
else: else:
@ -1266,7 +1574,9 @@ async def process_chat_response(
if not title: if not title:
title = messages[0].get("content", user_message) title = messages[0].get("content", user_message)
Chats.update_chat_title_by_id(metadata["chat_id"], title) Chats.update_chat_title_by_id(
metadata["chat_id"], title
)
await event_emitter( await event_emitter(
{ {
@ -1352,7 +1662,9 @@ async def process_chat_response(
response.body, bytes response.body, bytes
): ):
try: try:
response_data = json.loads(response.body.decode("utf-8")) response_data = json.loads(
response.body.decode("utf-8", "replace")
)
except json.JSONDecodeError: except json.JSONDecodeError:
response_data = { response_data = {
"error": {"detail": "Invalid JSON response"} "error": {"detail": "Invalid JSON response"}
@ -1498,7 +1810,7 @@ async def process_chat_response(
oauth_token = None oauth_token = None
try: try:
if request.cookies.get("oauth_session_id", None): if request.cookies.get("oauth_session_id", None):
oauth_token = request.app.state.oauth_manager.get_oauth_token( oauth_token = await request.app.state.oauth_manager.get_oauth_token(
user.id, user.id,
request.cookies.get("oauth_session_id", None), request.cookies.get("oauth_session_id", None),
) )
@ -1581,7 +1893,8 @@ async def process_chat_response(
break break
if tool_result is not None: if tool_result is not None:
tool_calls_display_content = f'{tool_calls_display_content}<details type="tool_calls" done="true" id="{tool_call_id}" name="{tool_name}" arguments="{html.escape(json.dumps(tool_arguments))}" result="{html.escape(json.dumps(tool_result, ensure_ascii=False))}" files="{html.escape(json.dumps(tool_result_files)) if tool_result_files else ""}">\n<summary>Tool Executed</summary>\n</details>\n' tool_result_embeds = result.get("embeds", "")
tool_calls_display_content = f'{tool_calls_display_content}<details type="tool_calls" done="true" id="{tool_call_id}" name="{tool_name}" arguments="{html.escape(json.dumps(tool_arguments))}" result="{html.escape(json.dumps(tool_result, ensure_ascii=False))}" files="{html.escape(json.dumps(tool_result_files)) if tool_result_files else ""}" embeds="{html.escape(json.dumps(tool_result_embeds))}">\n<summary>Tool Executed</summary>\n</details>\n'
else: else:
tool_calls_display_content = f'{tool_calls_display_content}<details type="tool_calls" done="false" id="{tool_call_id}" name="{tool_name}" arguments="{html.escape(json.dumps(tool_arguments))}">\n<summary>Executing...</summary>\n</details>\n' tool_calls_display_content = f'{tool_calls_display_content}<details type="tool_calls" done="false" id="{tool_call_id}" name="{tool_name}" arguments="{html.escape(json.dumps(tool_arguments))}">\n<summary>Executing...</summary>\n</details>\n'
@ -1985,7 +2298,11 @@ async def process_chat_response(
last_delta_data = None last_delta_data = None
async for line in response.body_iterator: async for line in response.body_iterator:
line = line.decode("utf-8") if isinstance(line, bytes) else line line = (
line.decode("utf-8", "replace")
if isinstance(line, bytes)
else line
)
data = line data = line
# Skip empty lines # Skip empty lines
@ -2031,6 +2348,20 @@ async def process_chat_response(
) )
else: else:
choices = data.get("choices", []) choices = data.get("choices", [])
# 17421
usage = data.get("usage", {}) or {}
usage.update(data.get("timings", {})) # llama.cpp
if usage:
await event_emitter(
{
"type": "chat:completion",
"data": {
"usage": usage,
},
}
)
if not choices: if not choices:
error = data.get("error", {}) error = data.get("error", {})
if error: if error:
@ -2042,20 +2373,6 @@ async def process_chat_response(
}, },
} }
) )
usage = data.get("usage", {})
usage.update(
data.get("timings", {})
) # llama.cpp
if usage:
await event_emitter(
{
"type": "chat:completion",
"data": {
"usage": usage,
},
}
)
continue continue
delta = choices[0].get("delta", {}) delta = choices[0].get("delta", {})
@ -2328,8 +2645,12 @@ async def process_chat_response(
results = [] results = []
for tool_call in response_tool_calls: for tool_call in response_tool_calls:
print("tool_call", tool_call)
tool_call_id = tool_call.get("id", "") tool_call_id = tool_call.get("id", "")
tool_name = tool_call.get("function", {}).get("name", "") tool_function_name = tool_call.get("function", {}).get(
"name", ""
)
tool_args = tool_call.get("function", {}).get("arguments", "{}") tool_args = tool_call.get("function", {}).get("arguments", "{}")
tool_function_params = {} tool_function_params = {}
@ -2359,11 +2680,17 @@ async def process_chat_response(
) )
tool_result = None tool_result = None
tool = None
tool_type = None
direct_tool = False
if tool_name in tools: if tool_function_name in tools:
tool = tools[tool_name] tool = tools[tool_function_name]
spec = tool.get("spec", {}) spec = tool.get("spec", {})
tool_type = tool.get("type", "")
direct_tool = tool.get("direct", False)
try: try:
allowed_params = ( allowed_params = (
spec.get("parameters", {}) spec.get("parameters", {})
@ -2377,13 +2704,13 @@ async def process_chat_response(
if k in allowed_params if k in allowed_params
} }
if tool.get("direct", False): if direct_tool:
tool_result = await event_caller( tool_result = await event_caller(
{ {
"type": "execute:tool", "type": "execute:tool",
"data": { "data": {
"id": str(uuid4()), "id": str(uuid4()),
"name": tool_name, "name": tool_function_name,
"params": tool_function_params, "params": tool_function_params,
"server": tool.get("server", {}), "server": tool.get("server", {}),
"session_id": metadata.get( "session_id": metadata.get(
@ -2402,19 +2729,16 @@ async def process_chat_response(
except Exception as e: except Exception as e:
tool_result = str(e) tool_result = str(e)
tool_result_files = [] tool_result, tool_result_files, tool_result_embeds = (
if isinstance(tool_result, list): process_tool_result(
for item in tool_result: request,
# check if string tool_function_name,
if isinstance(item, str) and item.startswith("data:"): tool_result,
tool_result_files.append(item) tool_type,
tool_result.remove(item) direct_tool,
metadata,
if isinstance(tool_result, dict) or isinstance( user,
tool_result, list )
):
tool_result = json.dumps(
tool_result, indent=2, ensure_ascii=False
) )
results.append( results.append(
@ -2426,11 +2750,15 @@ async def process_chat_response(
if tool_result_files if tool_result_files
else {} else {}
), ),
**(
{"embeds": tool_result_embeds}
if tool_result_embeds
else {}
),
} }
) )
content_blocks[-1]["results"] = results content_blocks[-1]["results"] = results
content_blocks.append( content_blocks.append(
{ {
"type": "text", "type": "text",
@ -2571,20 +2899,15 @@ async def process_chat_response(
if isinstance(stdout, str): if isinstance(stdout, str):
stdoutLines = stdout.split("\n") stdoutLines = stdout.split("\n")
for idx, line in enumerate(stdoutLines): for idx, line in enumerate(stdoutLines):
if "data:image/png;base64" in line: if "data:image/png;base64" in line:
image_url = "" image_url = get_image_url_from_base64(
# Extract base64 image data from the line
image_data, content_type = (
load_b64_image_data(line)
)
if image_data is not None:
image_url = upload_image(
request, request,
image_data, line,
content_type,
metadata, metadata,
user, user,
) )
if image_url:
stdoutLines[idx] = ( stdoutLines[idx] = (
f"![Output Image]({image_url})" f"![Output Image]({image_url})"
) )
@ -2597,16 +2920,9 @@ async def process_chat_response(
resultLines = result.split("\n") resultLines = result.split("\n")
for idx, line in enumerate(resultLines): for idx, line in enumerate(resultLines):
if "data:image/png;base64" in line: if "data:image/png;base64" in line:
image_url = "" image_url = get_image_url_from_base64(
# Extract base64 image data from the line
image_data, content_type = (
load_b64_image_data(line)
)
if image_data is not None:
image_url = upload_image(
request, request,
image_data, line,
content_type,
metadata, metadata,
user, user,
) )

View file

@ -120,17 +120,26 @@ def pop_system_message(messages: list[dict]) -> tuple[Optional[dict], list[dict]
return get_system_message(messages), remove_system_message(messages) return get_system_message(messages), remove_system_message(messages)
def prepend_to_first_user_message_content( def update_message_content(message: dict, content: str, append: bool = True) -> dict:
content: str, messages: list[dict]
) -> list[dict]:
for message in messages:
if message["role"] == "user":
if isinstance(message["content"], list): if isinstance(message["content"], list):
for item in message["content"]: for item in message["content"]:
if item["type"] == "text": if item["type"] == "text":
if append:
item["text"] = f"{item['text']}\n{content}"
else:
item["text"] = f"{content}\n{item['text']}" item["text"] = f"{content}\n{item['text']}"
else:
if append:
message["content"] = f"{message['content']}\n{content}"
else: else:
message["content"] = f"{content}\n{message['content']}" message["content"] = f"{content}\n{message['content']}"
return message
def replace_system_message_content(content: str, messages: list[dict]) -> dict:
for message in messages:
if message["role"] == "system":
message["content"] = content
break break
return messages return messages
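
A minimal sketch of the two helpers above (the message contents are made up): `update_message_content` handles both string and multi-part content and can append or prepend, while `replace_system_message_content` overwrites the first system message it finds.

```python
msg = {"role": "user", "content": [{"type": "text", "text": "hello"}]}
update_message_content(msg, "context", append=False)
# msg["content"][0]["text"] == "context\nhello"

msgs = [{"role": "system", "content": "old"}, {"role": "user", "content": "hi"}]
replace_system_message_content("new system prompt", msgs)
# msgs[0]["content"] == "new system prompt"
```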
@ -148,10 +157,7 @@ def add_or_update_system_message(
""" """
if messages and messages[0].get("role") == "system": if messages and messages[0].get("role") == "system":
if append: messages[0] = update_message_content(messages[0], content, append)
messages[0]["content"] = f"{messages[0]['content']}\n{content}"
else:
messages[0]["content"] = f"{content}\n{messages[0]['content']}"
else: else:
# Insert at the beginning # Insert at the beginning
messages.insert(0, {"role": "system", "content": content}) messages.insert(0, {"role": "system", "content": content})
@ -159,7 +165,7 @@ def add_or_update_system_message(
return messages return messages
def add_or_update_user_message(content: str, messages: list[dict]): def add_or_update_user_message(content: str, messages: list[dict], append: bool = True):
""" """
Adds a new user message at the end of the messages list Adds a new user message at the end of the messages list
or updates the existing user message at the end. or updates the existing user message at the end.
@ -170,7 +176,7 @@ def add_or_update_user_message(content: str, messages: list[dict]):
""" """
if messages and messages[-1].get("role") == "user": if messages and messages[-1].get("role") == "user":
messages[-1]["content"] = f"{messages[-1]['content']}\n{content}" messages[-1] = update_message_content(messages[-1], content, append)
else: else:
# Insert at the end # Insert at the end
messages.append({"role": "user", "content": content}) messages.append({"role": "user", "content": content})
@ -178,6 +184,16 @@ def add_or_update_user_message(content: str, messages: list[dict]):
return messages return messages
def prepend_to_first_user_message_content(
content: str, messages: list[dict]
) -> list[dict]:
for message in messages:
if message["role"] == "user":
message = update_message_content(message, content, append=False)
break
return messages
def append_or_update_assistant_message(content: str, messages: list[dict]): def append_or_update_assistant_message(content: str, messages: list[dict]):
""" """
Adds a new assistant message at the end of the messages list Adds a new assistant message at the end of the messages list
@ -383,17 +399,10 @@ def parse_ollama_modelfile(model_text):
"top_k": int, "top_k": int,
"top_p": float, "top_p": float,
"num_keep": int, "num_keep": int,
"typical_p": float,
"presence_penalty": float, "presence_penalty": float,
"frequency_penalty": float, "frequency_penalty": float,
"penalize_newline": bool,
"numa": bool,
"num_batch": int, "num_batch": int,
"num_gpu": int, "num_gpu": int,
"main_gpu": int,
"low_vram": bool,
"f16_kv": bool,
"vocab_only": bool,
"use_mmap": bool, "use_mmap": bool,
"use_mlock": bool, "use_mlock": bool,
"num_thread": int, "num_thread": int,

View file

@ -22,10 +22,11 @@ from open_webui.utils.access_control import has_access
from open_webui.config import ( from open_webui.config import (
BYPASS_ADMIN_ACCESS_CONTROL,
DEFAULT_ARENA_MODEL, DEFAULT_ARENA_MODEL,
) )
from open_webui.env import SRC_LOG_LEVELS, GLOBAL_LOG_LEVEL from open_webui.env import BYPASS_MODEL_ACCESS_CONTROL, SRC_LOG_LEVELS, GLOBAL_LOG_LEVEL
from open_webui.models.users import UserModel from open_webui.models.users import UserModel
@ -262,6 +263,7 @@ async def get_all_models(request, refresh: bool = False, user: UserModel = None)
"icon": function.meta.manifest.get("icon_url", None) "icon": function.meta.manifest.get("icon_url", None)
or getattr(module, "icon_url", None) or getattr(module, "icon_url", None)
or getattr(module, "icon", None), or getattr(module, "icon", None),
"has_user_valves": hasattr(module, "UserValves"),
} }
] ]
@ -332,3 +334,40 @@ def check_model_access(user, model):
) )
): ):
raise Exception("Model not found") raise Exception("Model not found")
def get_filtered_models(models, user):
# Filter out models that the user does not have access to
if (
user.role == "user"
or (user.role == "admin" and not BYPASS_ADMIN_ACCESS_CONTROL)
) and not BYPASS_MODEL_ACCESS_CONTROL:
filtered_models = []
for model in models:
if model.get("arena"):
if has_access(
user.id,
type="read",
access_control=model.get("info", {})
.get("meta", {})
.get("access_control", {}),
):
filtered_models.append(model)
continue
model_info = Models.get_model_by_id(model["id"])
if model_info:
if (
(user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL)
or user.id == model_info.user_id
or has_access(
user.id,
type="read",
access_control=model_info.access_control,
)
):
filtered_models.append(model)
return filtered_models
else:
return models
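
Hedged usage sketch of `get_filtered_models`: regular users (and admins unless `BYPASS_ADMIN_ACCESS_CONTROL` is set) only see arena models whose meta `access_control` grants read access and registered models they own or can read; the variables below are illustrative call-site names.

```python
models = await get_all_models(request, user=user)
visible = get_filtered_models(models, user)
```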

View file

@ -1,7 +1,9 @@
import base64 import base64
import hashlib
import logging import logging
import mimetypes import mimetypes
import sys import sys
import urllib
import uuid import uuid
import json import json
from datetime import datetime, timedelta from datetime import datetime, timedelta
@ -9,6 +11,9 @@ from datetime import datetime, timedelta
import re import re
import fnmatch import fnmatch
import time import time
import secrets
from cryptography.fernet import Fernet
import aiohttp import aiohttp
from authlib.integrations.starlette_client import OAuth from authlib.integrations.starlette_client import OAuth
@ -18,6 +23,7 @@ from fastapi import (
status, status,
) )
from starlette.responses import RedirectResponse from starlette.responses import RedirectResponse
from typing import Optional
from open_webui.models.auths import Auths from open_webui.models.auths import Auths
@ -56,11 +62,27 @@ from open_webui.env import (
WEBUI_AUTH_COOKIE_SAME_SITE, WEBUI_AUTH_COOKIE_SAME_SITE,
WEBUI_AUTH_COOKIE_SECURE, WEBUI_AUTH_COOKIE_SECURE,
ENABLE_OAUTH_ID_TOKEN_COOKIE, ENABLE_OAUTH_ID_TOKEN_COOKIE,
OAUTH_CLIENT_INFO_ENCRYPTION_KEY,
) )
from open_webui.utils.misc import parse_duration from open_webui.utils.misc import parse_duration
from open_webui.utils.auth import get_password_hash, create_token from open_webui.utils.auth import get_password_hash, create_token
from open_webui.utils.webhook import post_webhook from open_webui.utils.webhook import post_webhook
from mcp.shared.auth import (
OAuthClientMetadata,
OAuthMetadata,
)
class OAuthClientInformationFull(OAuthClientMetadata):
issuer: Optional[str] = None # URL of the OAuth server that issued this client
client_id: str
client_secret: str | None = None
client_id_issued_at: int | None = None
client_secret_expires_at: int | None = None
from open_webui.env import SRC_LOG_LEVELS, GLOBAL_LOG_LEVEL from open_webui.env import SRC_LOG_LEVELS, GLOBAL_LOG_LEVEL
logging.basicConfig(stream=sys.stdout, level=GLOBAL_LOG_LEVEL) logging.basicConfig(stream=sys.stdout, level=GLOBAL_LOG_LEVEL)
@ -89,6 +111,42 @@ auth_manager_config.JWT_EXPIRES_IN = JWT_EXPIRES_IN
auth_manager_config.OAUTH_UPDATE_PICTURE_ON_LOGIN = OAUTH_UPDATE_PICTURE_ON_LOGIN auth_manager_config.OAUTH_UPDATE_PICTURE_ON_LOGIN = OAUTH_UPDATE_PICTURE_ON_LOGIN
FERNET = None
if len(OAUTH_CLIENT_INFO_ENCRYPTION_KEY) != 44:
key_bytes = hashlib.sha256(OAUTH_CLIENT_INFO_ENCRYPTION_KEY.encode()).digest()
OAUTH_CLIENT_INFO_ENCRYPTION_KEY = base64.urlsafe_b64encode(key_bytes)
else:
OAUTH_CLIENT_INFO_ENCRYPTION_KEY = OAUTH_CLIENT_INFO_ENCRYPTION_KEY.encode()
try:
FERNET = Fernet(OAUTH_CLIENT_INFO_ENCRYPTION_KEY)
except Exception as e:
log.error(f"Error initializing Fernet with provided key: {e}")
raise
def encrypt_data(data) -> str:
"""Encrypt data for storage"""
try:
data_json = json.dumps(data)
encrypted = FERNET.encrypt(data_json.encode()).decode()
return encrypted
except Exception as e:
log.error(f"Error encrypting data: {e}")
raise
def decrypt_data(data: str):
"""Decrypt data from storage"""
try:
decrypted = FERNET.decrypt(data.encode()).decode()
return json.loads(decrypted)
except Exception as e:
log.error(f"Error decrypting data: {e}")
raise
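
A hedged round-trip sketch of the helpers above (the payload values are made up): `encrypt_data` stores the JSON-serialized value as a Fernet token and `decrypt_data` restores it, using the key derived from `OAUTH_CLIENT_INFO_ENCRYPTION_KEY`.

```python
token = encrypt_data({"client_id": "abc", "client_secret": "s3cret"})
assert decrypt_data(token) == {"client_id": "abc", "client_secret": "s3cret"}
```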
def is_in_blocked_groups(group_name: str, groups: list) -> bool: def is_in_blocked_groups(group_name: str, groups: list) -> bool:
""" """
Check if a group name matches any blocked pattern. Check if a group name matches any blocked pattern.
@ -133,14 +191,438 @@ def is_in_blocked_groups(group_name: str, groups: list) -> bool:
return False return False
def get_parsed_and_base_url(server_url) -> tuple[urllib.parse.ParseResult, str]:
parsed = urllib.parse.urlparse(server_url)
base_url = f"{parsed.scheme}://{parsed.netloc}"
return parsed, base_url
def get_discovery_urls(server_url) -> list[str]:
parsed, base_url = get_parsed_and_base_url(server_url)
urls = [
urllib.parse.urljoin(base_url, "/.well-known/oauth-authorization-server"),
urllib.parse.urljoin(base_url, "/.well-known/openid-configuration"),
]
if parsed.path and parsed.path != "/":
urls.append(
urllib.parse.urljoin(
base_url,
f"/.well-known/oauth-authorization-server{parsed.path.rstrip('/')}",
)
)
urls.append(
urllib.parse.urljoin(
base_url, f"/.well-known/openid-configuration{parsed.path.rstrip('/')}"
)
)
return urls
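
Illustrative output of `get_discovery_urls` for a server hosted under a sub-path (the hostname and path are made up): the root well-known endpoints are always tried first, then the path-suffixed variants.

```python
get_discovery_urls("https://auth.example.com/tenant-a")
# [
#   "https://auth.example.com/.well-known/oauth-authorization-server",
#   "https://auth.example.com/.well-known/openid-configuration",
#   "https://auth.example.com/.well-known/oauth-authorization-server/tenant-a",
#   "https://auth.example.com/.well-known/openid-configuration/tenant-a",
# ]
```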
# TODO: Some OAuth providers require Initial Access Tokens (IATs) for dynamic client registration.
# This is not currently supported.
async def get_oauth_client_info_with_dynamic_client_registration(
request,
client_id: str,
oauth_server_url: str,
oauth_server_key: Optional[str] = None,
) -> OAuthClientInformationFull:
try:
oauth_server_metadata = None
oauth_server_metadata_url = None
redirect_base_url = (
str(request.app.state.config.WEBUI_URL or request.base_url)
).rstrip("/")
oauth_client_metadata = OAuthClientMetadata(
client_name="Open WebUI",
redirect_uris=[f"{redirect_base_url}/oauth/clients/{client_id}/callback"],
grant_types=["authorization_code", "refresh_token"],
response_types=["code"],
token_endpoint_auth_method="client_secret_post",
)
# Attempt to fetch OAuth server metadata to get registration endpoint & scopes
discovery_urls = get_discovery_urls(oauth_server_url)
for url in discovery_urls:
async with aiohttp.ClientSession() as session:
async with session.get(
url, ssl=AIOHTTP_CLIENT_SESSION_SSL
) as oauth_server_metadata_response:
if oauth_server_metadata_response.status == 200:
try:
oauth_server_metadata = OAuthMetadata.model_validate(
await oauth_server_metadata_response.json()
)
oauth_server_metadata_url = url
if (
oauth_client_metadata.scope is None
and oauth_server_metadata.scopes_supported is not None
):
oauth_client_metadata.scope = " ".join(
oauth_server_metadata.scopes_supported
)
break
except Exception as e:
log.error(f"Error parsing OAuth metadata from {url}: {e}")
continue
registration_url = None
if oauth_server_metadata and oauth_server_metadata.registration_endpoint:
registration_url = str(oauth_server_metadata.registration_endpoint)
else:
_, base_url = get_parsed_and_base_url(oauth_server_url)
registration_url = urllib.parse.urljoin(base_url, "/register")
registration_data = oauth_client_metadata.model_dump(
exclude_none=True,
mode="json",
by_alias=True,
)
# Perform dynamic client registration and return client info
async with aiohttp.ClientSession() as session:
async with session.post(
registration_url, json=registration_data, ssl=AIOHTTP_CLIENT_SESSION_SSL
) as oauth_client_registration_response:
try:
registration_response_json = (
await oauth_client_registration_response.json()
)
oauth_client_info = OAuthClientInformationFull.model_validate(
{
**registration_response_json,
**{"issuer": oauth_server_metadata_url},
}
)
log.info(
f"Dynamic client registration successful at {registration_url}, client_id: {oauth_client_info.client_id}"
)
return oauth_client_info
except Exception as e:
error_text = None
try:
error_text = await oauth_client_registration_response.text()
log.error(
f"Dynamic client registration failed at {registration_url}: {oauth_client_registration_response.status} - {error_text}"
)
except Exception:  # avoid shadowing the outer exception bound to `e`
pass
log.error(f"Error parsing client registration response: {e}")
raise Exception(
f"Dynamic client registration failed: {error_text}"
if error_text
else "Error parsing client registration response"
)
raise Exception("Dynamic client registration failed")
except Exception as e:
log.error(f"Exception during dynamic client registration: {e}")
raise e
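
A hedged usage sketch for the registration helper above; the server URL and client_id are illustrative, and `oauth_client_manager` is an assumed app-state attribute name, not taken from this diff.

```python
client_info = await get_oauth_client_info_with_dynamic_client_registration(
    request,
    client_id="my-tool-server",
    oauth_server_url="https://auth.example.com/mcp",
)
request.app.state.oauth_client_manager.add_client("my-tool-server", client_info)
```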
class OAuthClientManager:
def __init__(self, app):
self.oauth = OAuth()
self.app = app
self.clients = {}
def add_client(self, client_id, oauth_client_info: OAuthClientInformationFull):
self.clients[client_id] = {
"client": self.oauth.register(
name=client_id,
client_id=oauth_client_info.client_id,
client_secret=oauth_client_info.client_secret,
client_kwargs=(
{"scope": oauth_client_info.scope}
if oauth_client_info.scope
else {}
),
server_metadata_url=(
oauth_client_info.issuer if oauth_client_info.issuer else None
),
),
"client_info": oauth_client_info,
}
return self.clients[client_id]
def remove_client(self, client_id):
if client_id in self.clients:
del self.clients[client_id]
log.info(f"Removed OAuth client {client_id}")
return True
def get_client(self, client_id):
client = self.clients.get(client_id)
return client["client"] if client else None
def get_client_info(self, client_id):
client = self.clients.get(client_id)
return client["client_info"] if client else None
def get_server_metadata_url(self, client_id):
if client_id in self.clients:
client = self.clients[client_id]
return (
client.server_metadata_url
if hasattr(client, "server_metadata_url")
else None
)
return None
async def get_oauth_token(
self, user_id: str, client_id: str, force_refresh: bool = False
):
"""
Get a valid OAuth token for the user, automatically refreshing if needed.
Args:
user_id: The user ID
client_id: The OAuth client ID (provider)
force_refresh: Force token refresh even if current token appears valid
Returns:
dict: OAuth token data with access_token, or None if no valid token available
"""
try:
# Get the OAuth session
session = OAuthSessions.get_session_by_provider_and_user_id(
client_id, user_id
)
if not session:
log.warning(
f"No OAuth session found for user {user_id}, client_id {client_id}"
)
return None
if force_refresh or datetime.now() + timedelta(
minutes=5
) >= datetime.fromtimestamp(session.expires_at):
log.debug(
f"Token refresh needed for user {user_id}, client_id {session.provider}"
)
refreshed_token = await self._refresh_token(session)
if refreshed_token:
return refreshed_token
else:
log.warning(
f"Token refresh failed for user {user_id}, client_id {session.provider}, deleting session {session.id}"
)
OAuthSessions.delete_session_by_id(session.id)
return None
return session.token
except Exception as e:
log.error(f"Error getting OAuth token for user {user_id}: {e}")
return None
async def _refresh_token(self, session) -> dict:
"""
Refresh an OAuth token if needed, with concurrency protection.
Args:
session: The OAuth session object
Returns:
dict: Refreshed token data, or None if refresh failed
"""
try:
# Perform the actual refresh
refreshed_token = await self._perform_token_refresh(session)
if refreshed_token:
# Update the session with new token data
session = OAuthSessions.update_session_by_id(
session.id, refreshed_token
)
log.info(f"Successfully refreshed token for session {session.id}")
return session.token
else:
log.error(f"Failed to refresh token for session {session.id}")
return None
except Exception as e:
log.error(f"Error refreshing token for session {session.id}: {e}")
return None
async def _perform_token_refresh(self, session) -> dict:
"""
Perform the actual OAuth token refresh.
Args:
session: The OAuth session object
Returns:
dict: New token data, or None if refresh failed
"""
client_id = session.provider
token_data = session.token
if not token_data.get("refresh_token"):
log.warning(f"No refresh token available for session {session.id}")
return None
try:
client = self.get_client(client_id)
if not client:
log.error(f"No OAuth client found for provider {client_id}")
return None
token_endpoint = None
async with aiohttp.ClientSession(trust_env=True) as session_http:
async with session_http.get(
self.get_server_metadata_url(client_id)
) as r:
if r.status == 200:
openid_data = await r.json()
token_endpoint = openid_data.get("token_endpoint")
else:
log.error(
f"Failed to fetch OpenID configuration for client_id {client_id}"
)
if not token_endpoint:
log.error(f"No token endpoint found for client_id {client_id}")
return None
# Prepare refresh request
refresh_data = {
"grant_type": "refresh_token",
"refresh_token": token_data["refresh_token"],
"client_id": client.client_id,
}
if hasattr(client, "client_secret") and client.client_secret:
refresh_data["client_secret"] = client.client_secret
# Make refresh request
async with aiohttp.ClientSession(trust_env=True) as session_http:
async with session_http.post(
token_endpoint,
data=refresh_data,
headers={"Content-Type": "application/x-www-form-urlencoded"},
ssl=AIOHTTP_CLIENT_SESSION_SSL,
) as r:
if r.status == 200:
new_token_data = await r.json()
# Merge with existing token data (preserve refresh_token if not provided)
if "refresh_token" not in new_token_data:
new_token_data["refresh_token"] = token_data[
"refresh_token"
]
# Add timestamp for tracking
new_token_data["issued_at"] = datetime.now().timestamp()
# Calculate expires_at if we have expires_in
if (
"expires_in" in new_token_data
and "expires_at" not in new_token_data
):
new_token_data["expires_at"] = int(
datetime.now().timestamp()
+ new_token_data["expires_in"]
)
log.debug(f"Token refresh successful for client_id {client_id}")
return new_token_data
else:
error_text = await r.text()
log.error(
f"Token refresh failed for client_id {client_id}: {r.status} - {error_text}"
)
return None
except Exception as e:
log.error(f"Exception during token refresh for client_id {client_id}: {e}")
return None
async def handle_authorize(self, request, client_id: str) -> RedirectResponse:
client = self.get_client(client_id)
if client is None:
raise HTTPException(404)
client_info = self.get_client_info(client_id)
if client_info is None:
raise HTTPException(404)
redirect_uri = (
client_info.redirect_uris[0] if client_info.redirect_uris else None
)
return await client.authorize_redirect(request, str(redirect_uri))
async def handle_callback(self, request, client_id: str, user_id: str, response):
client = self.get_client(client_id)
if client is None:
raise HTTPException(404)
error_message = None
try:
token = await client.authorize_access_token(request)
if token:
try:
# Add timestamp for tracking
token["issued_at"] = datetime.now().timestamp()
# Calculate expires_at if we have expires_in
if "expires_in" in token and "expires_at" not in token:
token["expires_at"] = (
datetime.now().timestamp() + token["expires_in"]
)
# Clean up any existing sessions for this user/client_id first
sessions = OAuthSessions.get_sessions_by_user_id(user_id)
for session in sessions:
if session.provider == client_id:
OAuthSessions.delete_session_by_id(session.id)
session = OAuthSessions.create_session(
user_id=user_id,
provider=client_id,
token=token,
)
log.info(
f"Stored OAuth session server-side for user {user_id}, client_id {client_id}"
)
except Exception as e:
error_message = "Failed to store OAuth session server-side"
log.error(f"Failed to store OAuth session server-side: {e}")
else:
error_message = "Failed to obtain OAuth token"
log.warning(error_message)
except Exception as e:
error_message = "OAuth callback error"
log.warning(f"OAuth callback error: {e}")
redirect_url = (
str(request.app.state.config.WEBUI_URL or request.base_url)
).rstrip("/")
if error_message:
log.debug(error_message)
redirect_url = f"{redirect_url}/?error={error_message}"
return RedirectResponse(url=redirect_url, headers=response.headers)
response = RedirectResponse(url=redirect_url, headers=response.headers)
return response
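
A hedged sketch of `OAuthClientManager.get_oauth_token` defined above: it returns the stored token, refreshing it when it expires within five minutes, or None when no session exists or the refresh fails. The client_id value and the `oauth_client_manager` attribute name are assumptions for illustration.

```python
token = await request.app.state.oauth_client_manager.get_oauth_token(
    user.id, "my-tool-server"
)
access_token = token.get("access_token") if token else None
```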
class OAuthManager: class OAuthManager:
def __init__(self, app): def __init__(self, app):
self.oauth = OAuth() self.oauth = OAuth()
self.app = app self.app = app
self._clients = {} self._clients = {}
for _, provider_config in OAUTH_PROVIDERS.items():
provider_config["register"](self.oauth) for name, provider_config in OAUTH_PROVIDERS.items():
if "register" not in provider_config:
log.error(f"OAuth provider {name} missing register function")
continue
client = provider_config["register"](self.oauth)
self._clients[name] = client
def get_client(self, provider_name): def get_client(self, provider_name):
if provider_name not in self._clients: if provider_name not in self._clients:
@ -157,7 +639,7 @@ class OAuthManager:
) )
return None return None
def get_oauth_token( async def get_oauth_token(
self, user_id: str, session_id: str, force_refresh: bool = False self, user_id: str, session_id: str, force_refresh: bool = False
): ):
""" """
@ -186,13 +668,15 @@ class OAuthManager:
log.debug( log.debug(
f"Token refresh needed for user {user_id}, provider {session.provider}" f"Token refresh needed for user {user_id}, provider {session.provider}"
) )
refreshed_token = self._refresh_token(session) refreshed_token = await self._refresh_token(session)
if refreshed_token: if refreshed_token:
return refreshed_token return refreshed_token
else: else:
log.warning( log.warning(
f"Token refresh failed for user {user_id}, provider {session.provider}" f"Token refresh failed for user {user_id}, provider {session.provider}, deleting session {session.id}"
) )
OAuthSessions.delete_session_by_id(session.id)
return None return None
return session.token return session.token
@ -252,9 +736,10 @@ class OAuthManager:
log.error(f"No OAuth client found for provider {provider}") log.error(f"No OAuth client found for provider {provider}")
return None return None
server_metadata_url = self.get_server_metadata_url(provider)
token_endpoint = None token_endpoint = None
async with aiohttp.ClientSession(trust_env=True) as session_http: async with aiohttp.ClientSession(trust_env=True) as session_http:
async with session_http.get(client.gserver_metadata_url) as r: async with session_http.get(server_metadata_url) as r:
if r.status == 200: if r.status == 200:
openid_data = await r.json() openid_data = await r.json()
token_endpoint = openid_data.get("token_endpoint") token_endpoint = openid_data.get("token_endpoint")
@ -301,7 +786,7 @@ class OAuthManager:
"expires_in" in new_token_data "expires_in" in new_token_data
and "expires_at" not in new_token_data and "expires_at" not in new_token_data
): ):
new_token_data["expires_at"] = ( new_token_data["expires_at"] = int(
datetime.now().timestamp() datetime.now().timestamp()
+ new_token_data["expires_in"] + new_token_data["expires_in"]
) )
@ -574,7 +1059,7 @@ class OAuthManager:
raise HTTPException(404) raise HTTPException(404)
# If the provider has a custom redirect URL, use that, otherwise automatically generate one # If the provider has a custom redirect URL, use that, otherwise automatically generate one
redirect_uri = OAUTH_PROVIDERS[provider].get("redirect_uri") or request.url_for( redirect_uri = OAUTH_PROVIDERS[provider].get("redirect_uri") or request.url_for(
"oauth_callback", provider=provider "oauth_login_callback", provider=provider
) )
client = self.get_client(provider) client = self.get_client(provider)
if client is None: if client is None:
@ -791,9 +1276,9 @@ class OAuthManager:
else ERROR_MESSAGES.DEFAULT("Error during OAuth process") else ERROR_MESSAGES.DEFAULT("Error during OAuth process")
) )
redirect_base_url = str(request.app.state.config.WEBUI_URL or request.base_url) redirect_base_url = (
if redirect_base_url.endswith("/"): str(request.app.state.config.WEBUI_URL or request.base_url)
redirect_base_url = redirect_base_url[:-1] ).rstrip("/")
redirect_url = f"{redirect_base_url}/auth" redirect_url = f"{redirect_base_url}/auth"
if error_message: if error_message:

View file

@ -2,6 +2,7 @@ from open_webui.utils.task import prompt_template, prompt_variables_template
from open_webui.utils.misc import ( from open_webui.utils.misc import (
deep_update, deep_update,
add_or_update_system_message, add_or_update_system_message,
replace_system_message_content,
) )
from typing import Callable, Optional from typing import Callable, Optional
@ -10,7 +11,11 @@ import json
# inplace function: form_data is modified # inplace function: form_data is modified
def apply_system_prompt_to_body( def apply_system_prompt_to_body(
system: Optional[str], form_data: dict, metadata: Optional[dict] = None, user=None system: Optional[str],
form_data: dict,
metadata: Optional[dict] = None,
user=None,
replace: bool = False,
) -> dict: ) -> dict:
if not system: if not system:
return form_data return form_data
@ -24,9 +29,15 @@ def apply_system_prompt_to_body(
# Legacy (API Usage) # Legacy (API Usage)
system = prompt_template(system, user) system = prompt_template(system, user)
if replace:
form_data["messages"] = replace_system_message_content(
system, form_data.get("messages", [])
)
else:
form_data["messages"] = add_or_update_system_message( form_data["messages"] = add_or_update_system_message(
system, form_data.get("messages", []) system, form_data.get("messages", [])
) )
return form_data return form_data
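
Illustrative behavior of the new `replace` flag (assuming `prompt_template` leaves the string unchanged when no template variables or user are supplied): `replace=True` overwrites an existing system message instead of merging into it.

```python
form_data = {
    "messages": [
        {"role": "system", "content": "old prompt"},
        {"role": "user", "content": "hi"},
    ]
}
apply_system_prompt_to_body("You are concise.", form_data, replace=True)
# form_data["messages"][0]["content"] == "You are concise."
```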
@ -153,17 +164,11 @@ def apply_model_params_to_body_ollama(params: dict, form_data: dict) -> dict:
"repeat_last_n": int, "repeat_last_n": int,
"top_k": int, "top_k": int,
"min_p": float, "min_p": float,
"typical_p": float,
"repeat_penalty": float, "repeat_penalty": float,
"presence_penalty": float, "presence_penalty": float,
"frequency_penalty": float, "frequency_penalty": float,
"penalize_newline": bool,
"stop": lambda x: [bytes(s, "utf-8").decode("unicode_escape") for s in x], "stop": lambda x: [bytes(s, "utf-8").decode("unicode_escape") for s in x],
"numa": bool,
"num_gpu": int, "num_gpu": int,
"main_gpu": int,
"low_vram": bool,
"vocab_only": bool,
"use_mmap": bool, "use_mmap": bool,
"use_mlock": bool, "use_mlock": bool,
"num_thread": int, "num_thread": int,

View file

@ -166,6 +166,48 @@ def load_function_module_by_id(function_id: str, content: str | None = None):
os.unlink(temp_file.name) os.unlink(temp_file.name)
def get_tool_module_from_cache(request, tool_id, load_from_db=True):
if load_from_db:
# Always load from the database by default
tool = Tools.get_tool_by_id(tool_id)
if not tool:
raise Exception(f"Tool not found: {tool_id}")
content = tool.content
new_content = replace_imports(content)
if new_content != content:
content = new_content
# Update the tool content in the database
Tools.update_tool_by_id(tool_id, {"content": content})
if (
hasattr(request.app.state, "TOOL_CONTENTS")
and tool_id in request.app.state.TOOL_CONTENTS
) and (
hasattr(request.app.state, "TOOLS") and tool_id in request.app.state.TOOLS
):
if request.app.state.TOOL_CONTENTS[tool_id] == content:
return request.app.state.TOOLS[tool_id], None
tool_module, frontmatter = load_tool_module_by_id(tool_id, content)
else:
if hasattr(request.app.state, "TOOLS") and tool_id in request.app.state.TOOLS:
return request.app.state.TOOLS[tool_id], None
tool_module, frontmatter = load_tool_module_by_id(tool_id)
if not hasattr(request.app.state, "TOOLS"):
request.app.state.TOOLS = {}
if not hasattr(request.app.state, "TOOL_CONTENTS"):
request.app.state.TOOL_CONTENTS = {}
request.app.state.TOOLS[tool_id] = tool_module
request.app.state.TOOL_CONTENTS[tool_id] = content
return tool_module, frontmatter
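
A hedged sketch of the caching contract implemented above (the tool id is made up): the module is rebuilt only when the tool content stored in the database has changed since it was cached in app state.

```python
module, _ = get_tool_module_from_cache(request, "web_search")        # loads and caches
module_again, _ = get_tool_module_from_cache(request, "web_search")  # content unchanged
assert module is module_again  # cached module reused
```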
def get_function_module_from_cache(request, function_id, load_from_db=True): def get_function_module_from_cache(request, function_id, load_from_db=True):
if load_from_db: if load_from_db:
# Always load from the database by default # Always load from the database by default

View file

@ -163,7 +163,16 @@ def setup_metrics(app: FastAPI, resource: Resource) -> None:
@app.middleware("http") @app.middleware("http")
async def _metrics_middleware(request: Request, call_next): async def _metrics_middleware(request: Request, call_next):
start_time = time.perf_counter() start_time = time.perf_counter()
status_code = None
try:
response = await call_next(request) response = await call_next(request)
status_code = getattr(response, "status_code", 500)
return response
except Exception:
status_code = 500
raise
finally:
elapsed_ms = (time.perf_counter() - start_time) * 1000.0 elapsed_ms = (time.perf_counter() - start_time) * 1000.0
# Route template e.g. "/items/{item_id}" instead of real path. # Route template e.g. "/items/{item_id}" instead of real path.
@ -173,10 +182,8 @@ def setup_metrics(app: FastAPI, resource: Resource) -> None:
attrs: Dict[str, str | int] = { attrs: Dict[str, str | int] = {
"http.method": request.method, "http.method": request.method,
"http.route": route_path, "http.route": route_path,
"http.status_code": response.status_code, "http.status_code": status_code,
} }
request_counter.add(1, attrs) request_counter.add(1, attrs)
duration_histogram.record(elapsed_ms, attrs) duration_histogram.record(elapsed_ms, attrs)
return response

View file

@ -96,8 +96,23 @@ async def get_tools(
for tool_id in tool_ids: for tool_id in tool_ids:
tool = Tools.get_tool_by_id(tool_id) tool = Tools.get_tool_by_id(tool_id)
if tool is None: if tool is None:
if tool_id.startswith("server:"): if tool_id.startswith("server:"):
server_id = tool_id.split(":")[1] splits = tool_id.split(":")
if len(splits) == 2:
type = "openapi"
server_id = splits[1]
elif len(splits) == 3:
type = splits[1]
server_id = splits[2]
server_id_splits = server_id.split("|")
if len(server_id_splits) == 2:
server_id = server_id_splits[0]
function_names = server_id_splits[1].split(",")
if type == "openapi":
tool_server_data = None tool_server_data = None
for server in await get_tool_servers(request): for server in await get_tool_servers(request):
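
Illustrative parses of the `server:` tool_id formats handled above (the server ids, the "mcp" type, and the function names are made up):

```python
# "server:github"                    -> type "openapi", server_id "github"
# "server:openapi:github"            -> type "openapi", server_id "github"
# "server:mcp:github|search,get_pr"  -> type "mcp", server_id "github",
#                                       function_names ["search", "get_pr"]
# Only "openapi" servers are resolved in this branch; other types fall through.
```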
@ -111,7 +126,9 @@ async def get_tools(
tool_server_idx = tool_server_data.get("idx", 0) tool_server_idx = tool_server_data.get("idx", 0)
tool_server_connection = ( tool_server_connection = (
request.app.state.config.TOOL_SERVER_CONNECTIONS[tool_server_idx] request.app.state.config.TOOL_SERVER_CONNECTIONS[
tool_server_idx
]
) )
specs = tool_server_data.get("specs", []) specs = tool_server_data.get("specs", [])
@ -145,7 +162,9 @@ async def get_tools(
headers["Content-Type"] = "application/json" headers["Content-Type"] = "application/json"
def make_tool_function(function_name, tool_server_data, headers): def make_tool_function(
function_name, tool_server_data, headers
):
async def tool_function(**kwargs): async def tool_function(**kwargs):
return await execute_tool_server( return await execute_tool_server(
url=tool_server_data["url"], url=tool_server_data["url"],
@ -171,6 +190,8 @@ async def get_tools(
"tool_id": tool_id, "tool_id": tool_id,
"callable": callable, "callable": callable,
"spec": spec, "spec": spec,
# Misc info
"type": "external",
} }
# Handle function name collisions # Handle function name collisions
@ -182,6 +203,10 @@ async def get_tools(
function_name = f"{server_id}_{function_name}" function_name = f"{server_id}_{function_name}"
tools_dict[function_name] = tool_dict tools_dict[function_name] = tool_dict
else:
continue
else: else:
continue continue
else: else:
@ -563,25 +588,20 @@ async def get_tool_server_data(token: str, url: str) -> Dict[str, Any]:
error = str(err) error = str(err)
raise Exception(error) raise Exception(error)
data = { log.debug(f"Fetched data: {res}")
"openapi": res, return res
"info": res.get("info", {}),
"specs": convert_openapi_to_tool_payload(res),
}
log.info(f"Fetched data: {data}")
return data
async def get_tool_servers_data(servers: List[Dict[str, Any]]) -> List[Dict[str, Any]]: async def get_tool_servers_data(servers: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
# Prepare list of enabled servers along with their original index # Prepare list of enabled servers along with their original index
tasks = []
server_entries = [] server_entries = []
for idx, server in enumerate(servers): for idx, server in enumerate(servers):
if server.get("config", {}).get("enable"): if (
# Path (to OpenAPI spec URL) can be either a full URL or a path to append to the base URL server.get("config", {}).get("enable")
openapi_path = server.get("path", "openapi.json") and server.get("type", "openapi") == "openapi"
full_url = get_tool_server_url(server.get("url"), openapi_path) ):
info = server.get("info", {}) info = server.get("info", {})
auth_type = server.get("auth_type", "bearer") auth_type = server.get("auth_type", "bearer")
@ -597,12 +617,34 @@ async def get_tool_servers_data(servers: List[Dict[str, Any]]) -> List[Dict[str,
if not id: if not id:
id = str(idx) id = str(idx)
server_entries.append((id, idx, server, full_url, info, token)) server_url = server.get("url")
spec_type = server.get("spec_type", "url")
# Create async tasks to fetch data # Create async tasks to fetch data
tasks = [ task = None
get_tool_server_data(token, url) for (_, _, _, url, _, token) in server_entries if spec_type == "url":
] # Path (to OpenAPI spec URL) can be either a full URL or a path to append to the base URL
openapi_path = server.get("path", "openapi.json")
spec_url = get_tool_server_url(server_url, openapi_path)
# Fetch from URL
task = get_tool_server_data(token, spec_url)
elif spec_type == "json" and server.get("spec", ""):
# Use provided JSON spec
spec_json = None
try:
spec_json = json.loads(server.get("spec", ""))
except Exception as e:
log.error(f"Error parsing JSON spec for tool server {id}: {e}")
if spec_json:
task = asyncio.sleep(
0,
result=spec_json,
)
if task:
tasks.append(task)
server_entries.append((id, idx, server, server_url, info, token))
# Execute tasks concurrently # Execute tasks concurrently
responses = await asyncio.gather(*tasks, return_exceptions=True) responses = await asyncio.gather(*tasks, return_exceptions=True)
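
A hedged sketch of the two `spec_type` variants handled above (the URLs and the inline spec are made up): `"url"` fetches the OpenAPI document from the server, while `"json"` parses the spec stored on the connection itself.

```python
servers = [
    {"url": "https://tools.example.com", "path": "openapi.json",
     "spec_type": "url", "config": {"enable": True}},
    {"url": "https://internal.example.com", "spec_type": "json",
     "spec": '{"openapi": "3.1.0", "info": {"title": "Internal"}, "paths": {}}',
     "config": {"enable": True}},
]
servers_data = await get_tool_servers_data(servers)
```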
@ -614,8 +656,13 @@ async def get_tool_servers_data(servers: List[Dict[str, Any]]) -> List[Dict[str,
log.error(f"Failed to connect to {url} OpenAPI tool server") log.error(f"Failed to connect to {url} OpenAPI tool server")
continue continue
openapi_data = response.get("openapi", {}) response = {
"openapi": response,
"info": response.get("info", {}),
"specs": convert_openapi_to_tool_payload(response),
}
openapi_data = response.get("openapi", {})
if info and isinstance(openapi_data, dict): if info and isinstance(openapi_data, dict):
openapi_data["info"] = openapi_data.get("info", {}) openapi_data["info"] = openapi_data.get("info", {})
@ -646,7 +693,7 @@ async def execute_tool_server(
name: str, name: str,
params: Dict[str, Any], params: Dict[str, Any],
server_data: Dict[str, Any], server_data: Dict[str, Any],
) -> Any: ) -> Tuple[Dict[str, Any], Optional[Dict[str, Any]]]:
error = None error = None
try: try:
openapi = server_data.get("openapi", {}) openapi = server_data.get("openapi", {})
@ -718,6 +765,7 @@ async def execute_tool_server(
headers=headers, headers=headers,
cookies=cookies, cookies=cookies,
ssl=AIOHTTP_CLIENT_SESSION_TOOL_SERVER_SSL, ssl=AIOHTTP_CLIENT_SESSION_TOOL_SERVER_SSL,
allow_redirects=False,
) as response: ) as response:
if response.status >= 400: if response.status >= 400:
text = await response.text() text = await response.text()
@ -728,13 +776,15 @@ async def execute_tool_server(
except Exception: except Exception:
response_data = await response.text() response_data = await response.text()
return response_data response_headers = response.headers
return (response_data, response_headers)
else: else:
async with request_method( async with request_method(
final_url, final_url,
headers=headers, headers=headers,
cookies=cookies, cookies=cookies,
ssl=AIOHTTP_CLIENT_SESSION_TOOL_SERVER_SSL, ssl=AIOHTTP_CLIENT_SESSION_TOOL_SERVER_SSL,
allow_redirects=False,
) as response: ) as response:
if response.status >= 400: if response.status >= 400:
text = await response.text() text = await response.text()
@ -745,12 +795,13 @@ async def execute_tool_server(
except Exception: except Exception:
response_data = await response.text() response_data = await response.text()
return response_data response_headers = response.headers
return (response_data, response_headers)
except Exception as err: except Exception as err:
error = str(err) error = str(err)
log.exception(f"API Request Error: {error}") log.exception(f"API Request Error: {error}")
return {"error": error} return ({"error": error}, None)
def get_tool_server_url(url: Optional[str], path: str) -> str: def get_tool_server_url(url: Optional[str], path: str) -> str:

View file

@ -1,59 +1,66 @@
fastapi==0.115.7 fastapi==0.118.0
uvicorn[standard]==0.35.0 uvicorn[standard]==0.37.0
pydantic==2.11.7 pydantic==2.11.9
python-multipart==0.0.20 python-multipart==0.0.20
itsdangerous==2.2.0
python-socketio==5.13.0 python-socketio==5.13.0
python-jose==3.4.0 python-jose==3.4.0
passlib[bcrypt]==1.7.4
cryptography cryptography
bcrypt==5.0.0
argon2-cffi==25.1.0
PyJWT[crypto]==2.10.1
authlib==1.6.3
requests==2.32.4 requests==2.32.5
aiohttp==3.12.15 aiohttp==3.12.15
async-timeout async-timeout
aiocache aiocache
aiofiles aiofiles
starlette-compress==1.6.0 starlette-compress==1.6.0
httpx[socks,http2,zstd,cli,brotli]==0.28.1 httpx[socks,http2,zstd,cli,brotli]==0.28.1
starsessions[redis]==2.2.1
sqlalchemy==2.0.38 sqlalchemy==2.0.38
alembic==1.14.0 alembic==1.14.0
peewee==3.18.1 peewee==3.18.1
peewee-migrate==1.12.2 peewee-migrate==1.12.2
psycopg2-binary==2.9.10
pgvector==0.4.1
PyMySQL==1.1.1
bcrypt==4.3.0
pymongo
redis
boto3==1.40.5
argon2-cffi==25.1.0
APScheduler==3.10.4
pycrdt==0.12.25 pycrdt==0.12.25
redis
pymongo
psycopg2-binary==2.9.10
pgvector==0.4.1
PyMySQL==1.1.1
boto3==1.40.5
APScheduler==3.10.4
RestrictedPython==8.0 RestrictedPython==8.0
loguru==0.7.3 loguru==0.7.3
asgiref==3.8.1 asgiref==3.8.1
# AI libraries # AI libraries
tiktoken
mcp==1.14.1
openai openai
anthropic anthropic
google-genai==1.32.0 google-genai==1.38.0
google-generativeai==0.8.5 google-generativeai==0.8.5
tiktoken
langchain==0.3.26 langchain==0.3.27
langchain-community==0.3.27 langchain-community==0.3.29
fake-useragent==2.2.0 fake-useragent==2.2.0
chromadb==1.0.20 chromadb==1.1.0
opensearch-py==2.8.0
pymilvus==2.5.0 pymilvus==2.5.0
qdrant-client==1.14.3 qdrant-client==1.14.3
opensearch-py==2.8.0
playwright==1.49.1 # Caution: version must match docker-compose.playwright.yaml playwright==1.49.1 # Caution: version must match docker-compose.playwright.yaml
elasticsearch==9.1.0 elasticsearch==9.1.0
pinecone==6.0.2 pinecone==6.0.2
@ -61,12 +68,12 @@ oracledb==3.2.0
av==14.0.1 # Caution: Set due to FATAL FIPS SELFTEST FAILURE, see discussion https://github.com/open-webui/open-webui/discussions/15720 av==14.0.1 # Caution: Set due to FATAL FIPS SELFTEST FAILURE, see discussion https://github.com/open-webui/open-webui/discussions/15720
transformers transformers
sentence-transformers==4.1.0 sentence-transformers==5.1.1
accelerate accelerate
colbert-ai==0.2.21
pyarrow==20.0.0 # fix: pin pyarrow version to 20 for rpi compatibility #15897 pyarrow==20.0.0 # fix: pin pyarrow version to 20 for rpi compatibility #15897
einops==0.8.1 einops==0.8.1
colbert-ai==0.2.21
ftfy==6.2.3 ftfy==6.2.3
pypdf==6.0.0 pypdf==6.0.0
@ -76,7 +83,7 @@ docx2txt==0.8
python-pptx==1.0.2 python-pptx==1.0.2
unstructured==0.16.17 unstructured==0.16.17
nltk==3.9.1 nltk==3.9.1
Markdown==3.8.2 Markdown==3.9
pypandoc==1.15 pypandoc==1.15
pandas==2.2.3 pandas==2.2.3
openpyxl==3.1.5 openpyxl==3.1.5
@ -94,13 +101,10 @@ rapidocr-onnxruntime==1.4.4
rank-bm25==0.2.2 rank-bm25==0.2.2
onnxruntime==1.20.1 onnxruntime==1.20.1
faster-whisper==1.1.1 faster-whisper==1.1.1
PyJWT[crypto]==2.10.1
authlib==1.6.3
black==25.1.0 black==25.9.0
youtube-transcript-api==1.2.2 youtube-transcript-api==1.2.2
pytube==15.0.0 pytube==15.0.0
@ -120,7 +124,7 @@ pytest-docker~=3.1.1
googleapis-common-protos==1.70.0 googleapis-common-protos==1.70.0
google-cloud-storage==2.19.0 google-cloud-storage==2.19.0
azure-identity==1.23.0 azure-identity==1.25.0
azure-storage-blob==12.24.1 azure-storage-blob==12.24.1
@ -134,14 +138,14 @@ firecrawl-py==1.12.0
tencentcloud-sdk-python==3.0.1336 tencentcloud-sdk-python==3.0.1336
## Trace ## Trace
opentelemetry-api==1.36.0 opentelemetry-api==1.37.0
opentelemetry-sdk==1.36.0 opentelemetry-sdk==1.37.0
opentelemetry-exporter-otlp==1.36.0 opentelemetry-exporter-otlp==1.37.0
opentelemetry-instrumentation==0.57b0 opentelemetry-instrumentation==0.58b0
opentelemetry-instrumentation-fastapi==0.57b0 opentelemetry-instrumentation-fastapi==0.58b0
opentelemetry-instrumentation-sqlalchemy==0.57b0 opentelemetry-instrumentation-sqlalchemy==0.58b0
opentelemetry-instrumentation-redis==0.57b0 opentelemetry-instrumentation-redis==0.58b0
opentelemetry-instrumentation-requests==0.57b0 opentelemetry-instrumentation-requests==0.58b0
opentelemetry-instrumentation-logging==0.57b0 opentelemetry-instrumentation-logging==0.58b0
opentelemetry-instrumentation-httpx==0.57b0 opentelemetry-instrumentation-httpx==0.58b0
opentelemetry-instrumentation-aiohttp-client==0.57b0 opentelemetry-instrumentation-aiohttp-client==0.58b0

View file

@ -70,5 +70,18 @@ if [ -n "$SPACE_ID" ]; then
fi fi
PYTHON_CMD=$(command -v python3 || command -v python) PYTHON_CMD=$(command -v python3 || command -v python)
UVICORN_WORKERS="${UVICORN_WORKERS:-1}"
WEBUI_SECRET_KEY="$WEBUI_SECRET_KEY" exec "$PYTHON_CMD" -m uvicorn open_webui.main:app --host "$HOST" --port "$PORT" --forwarded-allow-ips '*' --workers "${UVICORN_WORKERS:-1}" # If the script is called with arguments, use them; otherwise fall back to the default worker count
if [ "$#" -gt 0 ]; then
ARGS=("$@")
else
ARGS=(--workers "$UVICORN_WORKERS")
fi
# Run uvicorn
WEBUI_SECRET_KEY="$WEBUI_SECRET_KEY" exec "$PYTHON_CMD" -m uvicorn open_webui.main:app \
--host "$HOST" \
--port "$PORT" \
--forwarded-allow-ips '*' \
"${ARGS[@]}"

package-lock.json generated
View file

@ -1,12 +1,12 @@
{ {
"name": "open-webui", "name": "open-webui",
"version": "0.6.29", "version": "0.6.32",
"lockfileVersion": 3, "lockfileVersion": 3,
"requires": true, "requires": true,
"packages": { "packages": {
"": { "": {
"name": "open-webui", "name": "open-webui",
"version": "0.6.29", "version": "0.6.32",
"dependencies": { "dependencies": {
"@azure/msal-browser": "^4.5.0", "@azure/msal-browser": "^4.5.0",
"@codemirror/lang-javascript": "^6.2.2", "@codemirror/lang-javascript": "^6.2.2",
@ -23,7 +23,7 @@
"@tiptap/core": "^3.0.7", "@tiptap/core": "^3.0.7",
"@tiptap/extension-bubble-menu": "^2.26.1", "@tiptap/extension-bubble-menu": "^2.26.1",
"@tiptap/extension-code-block-lowlight": "^3.0.7", "@tiptap/extension-code-block-lowlight": "^3.0.7",
"@tiptap/extension-drag-handle": "^3.0.7", "@tiptap/extension-drag-handle": "^3.4.5",
"@tiptap/extension-file-handler": "^3.0.7", "@tiptap/extension-file-handler": "^3.0.7",
"@tiptap/extension-floating-menu": "^2.26.1", "@tiptap/extension-floating-menu": "^2.26.1",
"@tiptap/extension-highlight": "^3.3.0", "@tiptap/extension-highlight": "^3.3.0",
@ -39,6 +39,7 @@
"@tiptap/starter-kit": "^3.0.7", "@tiptap/starter-kit": "^3.0.7",
"@tiptap/suggestion": "^3.4.2", "@tiptap/suggestion": "^3.4.2",
"@xyflow/svelte": "^0.1.19", "@xyflow/svelte": "^0.1.19",
"alpinejs": "^3.15.0",
"async": "^3.2.5", "async": "^3.2.5",
"bits-ui": "^0.21.15", "bits-ui": "^0.21.15",
"chart.js": "^4.5.0", "chart.js": "^4.5.0",
@ -3383,9 +3384,9 @@
} }
}, },
"node_modules/@tiptap/extension-collaboration": { "node_modules/@tiptap/extension-collaboration": {
"version": "3.0.7", "version": "3.4.5",
"resolved": "https://registry.npmjs.org/@tiptap/extension-collaboration/-/extension-collaboration-3.0.7.tgz", "resolved": "https://registry.npmjs.org/@tiptap/extension-collaboration/-/extension-collaboration-3.4.5.tgz",
"integrity": "sha512-so59vQCAS1vy6k86byk96fYvAPM5w8u8/Yp3jKF1LPi9LH4wzS4hGnOP/dEbedxPU48an9WB1lSOczSKPECJaQ==", "integrity": "sha512-JyPXTYkYi2XzUWsmObv2cogMrs7huAvfq6l7d5hAwsU2FnA1vMycaa48N4uekogySP6VBkiQNDf9B4T09AwwqA==",
"license": "MIT", "license": "MIT",
"peer": true, "peer": true,
"funding": { "funding": {
@ -3393,8 +3394,8 @@
"url": "https://github.com/sponsors/ueberdosis" "url": "https://github.com/sponsors/ueberdosis"
}, },
"peerDependencies": { "peerDependencies": {
"@tiptap/core": "^3.0.7", "@tiptap/core": "^3.4.5",
"@tiptap/pm": "^3.0.7", "@tiptap/pm": "^3.4.5",
"@tiptap/y-tiptap": "^3.0.0-beta.3", "@tiptap/y-tiptap": "^3.0.0-beta.3",
"yjs": "^13" "yjs": "^13"
} }
@ -3413,9 +3414,9 @@
} }
}, },
"node_modules/@tiptap/extension-drag-handle": { "node_modules/@tiptap/extension-drag-handle": {
"version": "3.0.7", "version": "3.4.5",
"resolved": "https://registry.npmjs.org/@tiptap/extension-drag-handle/-/extension-drag-handle-3.0.7.tgz", "resolved": "https://registry.npmjs.org/@tiptap/extension-drag-handle/-/extension-drag-handle-3.4.5.tgz",
"integrity": "sha512-rm8+0kPz5C5JTp4f1QY61Qd5d7zlJAxLeJtOvgC9RCnrNG1F7LCsmOkvy5fsU6Qk2YCCYOiSSMC4S4HKPrUJhw==", "integrity": "sha512-177hQ9lMQYJz+SuCg8eA47MB2tn3G3MGBJ5+3PNl5Bs4WQukR9uHpxdR+bH00/LedwxrlNlglMa5Hirrx9odMQ==",
"license": "MIT", "license": "MIT",
"dependencies": { "dependencies": {
"@floating-ui/dom": "^1.6.13" "@floating-ui/dom": "^1.6.13"
@ -3425,10 +3426,10 @@
"url": "https://github.com/sponsors/ueberdosis" "url": "https://github.com/sponsors/ueberdosis"
}, },
"peerDependencies": { "peerDependencies": {
"@tiptap/core": "^3.0.7", "@tiptap/core": "^3.4.5",
"@tiptap/extension-collaboration": "^3.0.7", "@tiptap/extension-collaboration": "^3.4.5",
"@tiptap/extension-node-range": "^3.0.7", "@tiptap/extension-node-range": "^3.4.5",
"@tiptap/pm": "^3.0.7", "@tiptap/pm": "^3.4.5",
"@tiptap/y-tiptap": "^3.0.0-beta.3" "@tiptap/y-tiptap": "^3.0.0-beta.3"
} }
}, },
@ -3642,9 +3643,9 @@
} }
}, },
"node_modules/@tiptap/extension-node-range": { "node_modules/@tiptap/extension-node-range": {
"version": "3.0.7", "version": "3.4.5",
"resolved": "https://registry.npmjs.org/@tiptap/extension-node-range/-/extension-node-range-3.0.7.tgz", "resolved": "https://registry.npmjs.org/@tiptap/extension-node-range/-/extension-node-range-3.4.5.tgz",
"integrity": "sha512-cHViNqtOUD9CLJxEj28rcj8tb8RYQZ7kwmtSvIye84Y3MJIzigRm4IUBNNOYnZfq5YAZIR97WKcJeFz3EU1VPg==", "integrity": "sha512-mHCjdJZX8DZCpnw9wBqioanANy6tRoy20/OcJxMW1T7naeRCuCU4sFjwO37yb/tmYk1BQA2/L1/H2r0fVoZwtA==",
"license": "MIT", "license": "MIT",
"peer": true, "peer": true,
"funding": { "funding": {
@ -3652,8 +3653,8 @@
"url": "https://github.com/sponsors/ueberdosis" "url": "https://github.com/sponsors/ueberdosis"
}, },
"peerDependencies": { "peerDependencies": {
"@tiptap/core": "^3.0.7", "@tiptap/core": "^3.4.5",
"@tiptap/pm": "^3.0.7" "@tiptap/pm": "^3.4.5"
} }
}, },
"node_modules/@tiptap/extension-ordered-list": { "node_modules/@tiptap/extension-ordered-list": {
@ -4569,6 +4570,21 @@
"@types/estree": "^1.0.0" "@types/estree": "^1.0.0"
} }
}, },
"node_modules/@vue/reactivity": {
"version": "3.1.5",
"resolved": "https://registry.npmjs.org/@vue/reactivity/-/reactivity-3.1.5.tgz",
"integrity": "sha512-1tdfLmNjWG6t/CsPldh+foumYFo3cpyCHgBYQ34ylaMsJ+SNHQ1kApMIa8jN+i593zQuaw3AdWH0nJTARzCFhg==",
"license": "MIT",
"dependencies": {
"@vue/shared": "3.1.5"
}
},
"node_modules/@vue/shared": {
"version": "3.1.5",
"resolved": "https://registry.npmjs.org/@vue/shared/-/shared-3.1.5.tgz",
"integrity": "sha512-oJ4F3TnvpXaQwZJNF3ZK+kLPHKarDmJjJ6jyzVNDKH9md1dptjC7lWR//jrGuLdek/U6iltWxqAnYOu8gCiOvA==",
"license": "MIT"
},
"node_modules/@webreflection/fetch": { "node_modules/@webreflection/fetch": {
"version": "0.1.5", "version": "0.1.5",
"resolved": "https://registry.npmjs.org/@webreflection/fetch/-/fetch-0.1.5.tgz", "resolved": "https://registry.npmjs.org/@webreflection/fetch/-/fetch-0.1.5.tgz",
@ -4672,6 +4688,15 @@
"url": "https://github.com/sponsors/epoberezkin" "url": "https://github.com/sponsors/epoberezkin"
} }
}, },
"node_modules/alpinejs": {
"version": "3.15.0",
"resolved": "https://registry.npmjs.org/alpinejs/-/alpinejs-3.15.0.tgz",
"integrity": "sha512-lpokA5okCF1BKh10LG8YjqhfpxyHBk4gE7boIgVHltJzYoM7O9nK3M7VlntLEJGsVmu7U/RzUWajmHREGT38Eg==",
"license": "MIT",
"dependencies": {
"@vue/reactivity": "~3.1.1"
}
},
"node_modules/amator": { "node_modules/amator": {
"version": "1.1.0", "version": "1.1.0",
"resolved": "https://registry.npmjs.org/amator/-/amator-1.1.0.tgz", "resolved": "https://registry.npmjs.org/amator/-/amator-1.1.0.tgz",

View file

@ -1,6 +1,6 @@
{ {
"name": "open-webui", "name": "open-webui",
"version": "0.6.29", "version": "0.6.32",
"private": true, "private": true,
"scripts": { "scripts": {
"dev": "npm run pyodide:fetch && vite dev --host", "dev": "npm run pyodide:fetch && vite dev --host",
@ -67,7 +67,7 @@
"@tiptap/core": "^3.0.7", "@tiptap/core": "^3.0.7",
"@tiptap/extension-bubble-menu": "^2.26.1", "@tiptap/extension-bubble-menu": "^2.26.1",
"@tiptap/extension-code-block-lowlight": "^3.0.7", "@tiptap/extension-code-block-lowlight": "^3.0.7",
"@tiptap/extension-drag-handle": "^3.0.7", "@tiptap/extension-drag-handle": "^3.4.5",
"@tiptap/extension-file-handler": "^3.0.7", "@tiptap/extension-file-handler": "^3.0.7",
"@tiptap/extension-floating-menu": "^2.26.1", "@tiptap/extension-floating-menu": "^2.26.1",
"@tiptap/extension-highlight": "^3.3.0", "@tiptap/extension-highlight": "^3.3.0",
@ -83,6 +83,7 @@
"@tiptap/starter-kit": "^3.0.7", "@tiptap/starter-kit": "^3.0.7",
"@tiptap/suggestion": "^3.4.2", "@tiptap/suggestion": "^3.4.2",
"@xyflow/svelte": "^0.1.19", "@xyflow/svelte": "^0.1.19",
"alpinejs": "^3.15.0",
"async": "^3.2.5", "async": "^3.2.5",
"bits-ui": "^0.21.15", "bits-ui": "^0.21.15",
"chart.js": "^4.5.0", "chart.js": "^4.5.0",

View file

@ -10,23 +10,24 @@ dependencies = [
"uvicorn[standard]==0.35.0", "uvicorn[standard]==0.35.0",
"pydantic==2.11.7", "pydantic==2.11.7",
"python-multipart==0.0.20", "python-multipart==0.0.20",
"itsdangerous==2.2.0",
"python-socketio==5.13.0", "python-socketio==5.13.0",
"python-jose==3.4.0", "python-jose==3.4.0",
"passlib[bcrypt]==1.7.4",
"cryptography", "cryptography",
"bcrypt==4.3.0", "bcrypt==5.0.0",
"argon2-cffi==23.1.0", "argon2-cffi==25.1.0",
"PyJWT[crypto]==2.10.1", "PyJWT[crypto]==2.10.1",
"authlib==1.6.3", "authlib==1.6.3",
"requests==2.32.4", "requests==2.32.5",
"aiohttp==3.12.15", "aiohttp==3.12.15",
"async-timeout", "async-timeout",
"aiocache", "aiocache",
"aiofiles", "aiofiles",
"starlette-compress==1.6.0", "starlette-compress==1.6.0",
"httpx[socks,http2,zstd,cli,brotli]==0.28.1", "httpx[socks,http2,zstd,cli,brotli]==0.28.1",
"starsessions[redis]==2.2.1",
"sqlalchemy==2.0.38", "sqlalchemy==2.0.38",
"alembic==1.14.0", "alembic==1.14.0",
@ -46,20 +47,22 @@ dependencies = [
"asgiref==3.8.1", "asgiref==3.8.1",
"tiktoken", "tiktoken",
"mcp==1.14.1",
"openai", "openai",
"anthropic", "anthropic",
"google-genai==1.32.0", "google-genai==1.38.0",
"google-generativeai==0.8.5", "google-generativeai==0.8.5",
"langchain==0.3.26", "langchain==0.3.27",
"langchain-community==0.3.27", "langchain-community==0.3.29",
"fake-useragent==2.2.0", "fake-useragent==2.2.0",
"chromadb==1.0.20", "chromadb==1.0.20",
"opensearch-py==2.8.0", "opensearch-py==2.8.0",
"transformers", "transformers",
"sentence-transformers==4.1.0", "sentence-transformers==5.1.1",
"accelerate", "accelerate",
"pyarrow==20.0.0", "pyarrow==20.0.0",
"einops==0.8.1", "einops==0.8.1",
@ -108,7 +111,7 @@ dependencies = [
"googleapis-common-protos==1.70.0", "googleapis-common-protos==1.70.0",
"google-cloud-storage==2.19.0", "google-cloud-storage==2.19.0",
"azure-identity==1.20.0", "azure-identity==1.25.0",
"azure-storage-blob==12.24.1", "azure-storage-blob==12.24.1",
"ldap3==2.9.1", "ldap3==2.9.1",

View file

@ -70,23 +70,23 @@ textarea::placeholder {
} }
.input-prose { .input-prose {
@apply prose dark:prose-invert prose-headings:font-semibold prose-hr:my-4 prose-hr:border-gray-100 prose-hr:dark:border-gray-800 prose-p:my-1 prose-img:my-1 prose-headings:my-2 prose-pre:my-0 prose-table:my-1 prose-blockquote:my-0 prose-ul:my-1 prose-ol:my-1 prose-li:my-0.5 whitespace-pre-line; @apply prose dark:prose-invert prose-headings:font-semibold prose-hr:my-4 prose-hr:border-gray-50 prose-hr:dark:border-gray-850 prose-p:my-1 prose-img:my-1 prose-headings:my-2 prose-pre:my-0 prose-table:my-1 prose-blockquote:my-0 prose-ul:my-1 prose-ol:my-1 prose-li:my-0.5 whitespace-pre-line;
} }
.input-prose-sm { .input-prose-sm {
@apply prose dark:prose-invert prose-headings:font-medium prose-h1:text-2xl prose-h2:text-xl prose-h3:text-lg prose-hr:my-4 prose-hr:border-gray-100 prose-hr:dark:border-gray-800 prose-p:my-1 prose-img:my-1 prose-headings:my-2 prose-pre:my-0 prose-table:my-1 prose-blockquote:my-0 prose-ul:my-1 prose-ol:my-1 prose-li:my-1 whitespace-pre-line text-sm; @apply prose dark:prose-invert prose-headings:font-medium prose-h1:text-2xl prose-h2:text-xl prose-h3:text-lg prose-hr:my-4 prose-hr:border-gray-50 prose-hr:dark:border-gray-850 prose-p:my-1 prose-img:my-1 prose-headings:my-2 prose-pre:my-0 prose-table:my-1 prose-blockquote:my-0 prose-ul:my-1 prose-ol:my-1 prose-li:my-1 whitespace-pre-line text-sm;
} }
.markdown-prose { .markdown-prose {
@apply prose dark:prose-invert prose-blockquote:border-s-gray-100 prose-blockquote:dark:border-gray-800 prose-blockquote:border-s-2 prose-blockquote:not-italic prose-blockquote:font-normal prose-headings:font-semibold prose-hr:my-4 prose-hr:border-gray-100 prose-hr:dark:border-gray-800 prose-p:my-0 prose-img:my-1 prose-headings:my-1 prose-pre:my-0 prose-table:my-0 prose-blockquote:my-0 prose-ul:-my-0 prose-ol:-my-0 prose-li:-my-0 whitespace-pre-line; @apply prose dark:prose-invert prose-blockquote:border-s-gray-100 prose-blockquote:dark:border-gray-800 prose-blockquote:border-s-2 prose-blockquote:not-italic prose-blockquote:font-normal prose-headings:font-semibold prose-hr:my-4 prose-hr:border-gray-50 prose-hr:dark:border-gray-850 prose-p:my-0 prose-img:my-1 prose-headings:my-1 prose-pre:my-0 prose-table:my-0 prose-blockquote:my-0 prose-ul:-my-0 prose-ol:-my-0 prose-li:-my-0 whitespace-pre-line;
} }
.markdown-prose-sm { .markdown-prose-sm {
@apply text-sm prose dark:prose-invert prose-blockquote:border-s-gray-100 prose-blockquote:dark:border-gray-800 prose-blockquote:border-s-2 prose-blockquote:not-italic prose-blockquote:font-normal prose-headings:font-semibold prose-hr:my-2 prose-hr:border-gray-100 prose-hr:dark:border-gray-800 prose-p:my-0 prose-img:my-1 prose-headings:my-1 prose-pre:my-0 prose-table:my-0 prose-blockquote:my-0 prose-ul:-my-0 prose-ol:-my-0 prose-li:-my-0 whitespace-pre-line; @apply text-sm prose dark:prose-invert prose-blockquote:border-s-gray-100 prose-blockquote:dark:border-gray-800 prose-blockquote:border-s-2 prose-blockquote:not-italic prose-blockquote:font-normal prose-headings:font-semibold prose-hr:my-2 prose-hr:border-gray-50 prose-hr:dark:border-gray-850 prose-p:my-0 prose-img:my-1 prose-headings:my-1 prose-pre:my-0 prose-table:my-0 prose-blockquote:my-0 prose-ul:-my-0 prose-ol:-my-0 prose-li:-my-0 whitespace-pre-line;
} }
.markdown-prose-xs { .markdown-prose-xs {
@apply text-xs prose dark:prose-invert prose-blockquote:border-s-gray-100 prose-blockquote:dark:border-gray-800 prose-blockquote:border-s-2 prose-blockquote:not-italic prose-blockquote:font-normal prose-headings:font-semibold prose-hr:my-0.5 prose-hr:border-gray-100 prose-hr:dark:border-gray-800 prose-p:my-0 prose-img:my-1 prose-headings:my-1 prose-pre:my-0 prose-table:my-0 prose-blockquote:my-0 prose-ul:-my-0 prose-ol:-my-0 prose-li:-my-0 whitespace-pre-line; @apply text-xs prose dark:prose-invert prose-blockquote:border-s-gray-100 prose-blockquote:dark:border-gray-800 prose-blockquote:border-s-2 prose-blockquote:not-italic prose-blockquote:font-normal prose-headings:font-semibold prose-hr:my-0.5 prose-hr:border-gray-50 prose-hr:dark:border-gray-850 prose-p:my-0 prose-img:my-1 prose-headings:my-1 prose-pre:my-0 prose-table:my-0 prose-blockquote:my-0 prose-ul:-my-0 prose-ol:-my-0 prose-li:-my-0 whitespace-pre-line;
} }
.markdown a { .markdown a {
@ -116,7 +116,7 @@ li p {
::-webkit-scrollbar-thumb { ::-webkit-scrollbar-thumb {
--tw-border-opacity: 1; --tw-border-opacity: 1;
background-color: rgba(215, 215, 215, 0.8); background-color: rgba(215, 215, 215, 0.6);
border-color: rgba(255, 255, 255, var(--tw-border-opacity)); border-color: rgba(255, 255, 255, var(--tw-border-opacity));
border-radius: 9999px; border-radius: 9999px;
border-width: 1px; border-width: 1px;
@ -124,12 +124,12 @@ li p {
/* Dark theme scrollbar styles */ /* Dark theme scrollbar styles */
.dark ::-webkit-scrollbar-thumb { .dark ::-webkit-scrollbar-thumb {
background-color: rgba(67, 67, 67, 0.8); /* Darker color for dark theme */ background-color: rgba(67, 67, 67, 0.6); /* Darker color for dark theme */
border-color: rgba(0, 0, 0, var(--tw-border-opacity)); border-color: rgba(0, 0, 0, var(--tw-border-opacity));
} }
::-webkit-scrollbar { ::-webkit-scrollbar {
height: 0.6rem; height: 0.4rem;
width: 0.4rem; width: 0.4rem;
} }
@ -409,17 +409,33 @@ input[type='number'] {
} }
} }
.tiptap .mention { .mention {
border-radius: 0.4rem; border-radius: 0.4rem;
box-decoration-break: clone; box-decoration-break: clone;
padding: 0.1rem 0.3rem; padding: 0.1rem 0.3rem;
@apply text-blue-900 dark:text-blue-100 bg-blue-300/20 dark:bg-blue-500/20; @apply text-sky-800 dark:text-sky-200 bg-sky-300/15 dark:bg-sky-500/15;
} }
.tiptap .mention::after { .mention::after {
content: '\200B'; content: '\200B';
} }
.tiptap .suggestion {
border-radius: 0.4rem;
box-decoration-break: clone;
padding: 0.1rem 0.3rem;
@apply text-sky-800 dark:text-sky-200 bg-sky-300/15 dark:bg-sky-500/15;
}
.tiptap .suggestion::after {
content: '\200B';
}
.tiptap .suggestion.is-empty::after {
content: '\00A0';
border-bottom: 1px dotted rgba(31, 41, 55, 0.12);
}
.input-prose .tiptap ul[data-type='taskList'] { .input-prose .tiptap ul[data-type='taskList'] {
list-style: none; list-style: none;
margin-left: 0; margin-left: 0;
@ -645,3 +661,112 @@ body {
background: #171717; background: #171717;
color: #eee; color: #eee;
} }
/* Position the handle relative to each LI */
.pm-li--with-handle {
position: relative;
margin-left: 12px; /* make space for the handle */
}
.tiptap ul[data-type='taskList'] .pm-list-drag-handle {
margin-left: 0px;
}
/* The drag handle itself */
.pm-list-drag-handle {
position: absolute;
left: -36px; /* pull into the left gutter */
top: 1px;
width: 18px;
height: 18px;
display: inline-flex;
align-items: center;
justify-content: center;
font-size: 12px;
line-height: 1;
border-radius: 4px;
cursor: grab;
user-select: none;
opacity: 0.35;
transition:
opacity 120ms ease,
background 120ms ease;
}
.tiptap ul[data-type='taskList'] .pm-list-drag-handle {
left: -16px; /* pull into the left gutter more to avoid the checkbox */
}
.pm-list-drag-handle:active {
cursor: grabbing;
}
.pm-li--with-handle:hover > .pm-list-drag-handle {
opacity: 1;
}
.pm-list-drag-handle:hover {
background: rgba(0, 0, 0, 0.06);
}
:root {
--pm-accent: color-mix(in oklab, Highlight 70%, transparent);
--pm-fill-target: color-mix(in oklab, Highlight 26%, transparent);
--pm-fill-ancestor: color-mix(in oklab, Highlight 16%, transparent);
}
.pm-li-drop-before,
.pm-li-drop-after,
.pm-li-drop-into,
.pm-li-drop-outdent {
position: relative;
}
/* BEFORE/AFTER lines */
.pm-li-drop-before::before,
.pm-li-drop-after::after {
content: '';
position: absolute;
left: 0;
right: 0;
height: 3px;
background: var(--pm-accent);
pointer-events: none;
}
.pm-li-drop-before::before {
top: -2px;
}
.pm-li-drop-after::after {
bottom: -2px;
}
.pm-li-drop-before,
.pm-li-drop-after,
.pm-li-drop-into,
.pm-li-drop-outdent {
background: var(--pm-fill-target);
border-radius: 6px;
}
.pm-li-drop-outdent::before {
content: '';
position: absolute;
inset-block: 0;
inset-inline-start: 0;
width: 3px;
background: color-mix(in oklab, Highlight 35%, transparent);
}
.pm-li--with-handle:has(.pm-li-drop-before),
.pm-li--with-handle:has(.pm-li-drop-after),
.pm-li--with-handle:has(.pm-li-drop-into),
.pm-li--with-handle:has(.pm-li-drop-outdent) {
background: var(--pm-fill-ancestor);
border-radius: 6px;
}
.pm-li-drop-before,
.pm-li-drop-after,
.pm-li-drop-into,
.pm-li-drop-outdent {
position: relative;
z-index: 0;
}

View file

@ -23,8 +23,6 @@
href="/static/apple-touch-icon.png" href="/static/apple-touch-icon.png"
crossorigin="use-credentials" crossorigin="use-credentials"
/> />
<meta name="apple-mobile-web-app-title" content="Open WebUI" />
<link <link
rel="manifest" rel="manifest"
href="/manifest.json" href="/manifest.json"
@ -37,14 +35,7 @@
/> />
<meta name="theme-color" content="#171717" /> <meta name="theme-color" content="#171717" />
<meta name="robots" content="noindex,nofollow" /> <meta name="robots" content="noindex,nofollow" />
<meta name="description" content="Open WebUI" />
<link
rel="search"
type="application/opensearchdescription+xml"
title="Open WebUI"
href="/opensearch.xml"
crossorigin="use-credentials"
/>
<script src="/static/loader.js" defer crossorigin="use-credentials"></script> <script src="/static/loader.js" defer crossorigin="use-credentials"></script>
<link rel="stylesheet" href="/static/custom.css" crossorigin="use-credentials" /> <link rel="stylesheet" href="/static/custom.css" crossorigin="use-credentials" />

View file

@ -248,6 +248,7 @@ export const getChannelThreadMessages = async (
}; };
type MessageForm = { type MessageForm = {
reply_to_id?: string;
parent_id?: string; parent_id?: string;
content: string; content: string;
data?: object; data?: object;

View file

@ -33,6 +33,38 @@ export const createNewChat = async (token: string, chat: object, folderId: strin
return res; return res;
}; };
export const unarchiveAllChats = async (token: string) => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/chats/unarchive/all`, {
method: 'POST',
headers: {
Accept: 'application/json',
'Content-Type': 'application/json',
...(token && { authorization: `Bearer ${token}` })
}
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.then((json) => {
return json;
})
.catch((err) => {
error = err.detail;
console.error(err);
return null;
});
if (error) {
throw error;
}
return res;
};
export const importChat = async ( export const importChat = async (
token: string, token: string,
chat: object, chat: object,
@ -77,7 +109,11 @@ export const importChat = async (
return res; return res;
}; };
export const getChatList = async (token: string = '', page: number | null = null) => { export const getChatList = async (
token: string = '',
page: number | null = null,
include_folders: boolean = false
) => {
let error = null; let error = null;
const searchParams = new URLSearchParams(); const searchParams = new URLSearchParams();
@ -85,6 +121,10 @@ export const getChatList = async (token: string = '', page: number | null = null
searchParams.append('page', `${page}`); searchParams.append('page', `${page}`);
} }
if (include_folders) {
searchParams.append('include_folders', 'true');
}
const res = await fetch(`${WEBUI_API_BASE_URL}/chats/?${searchParams.toString()}`, { const res = await fetch(`${WEBUI_API_BASE_URL}/chats/?${searchParams.toString()}`, {
method: 'GET', method: 'GET',
headers: { headers: {
@ -319,6 +359,45 @@ export const getChatsByFolderId = async (token: string, folderId: string) => {
return res; return res;
}; };
export const getChatListByFolderId = async (token: string, folderId: string, page: number = 1) => {
let error = null;
const searchParams = new URLSearchParams();
if (page !== null) {
searchParams.append('page', `${page}`);
}
const res = await fetch(
`${WEBUI_API_BASE_URL}/chats/folder/${folderId}/list?${searchParams.toString()}`,
{
method: 'GET',
headers: {
Accept: 'application/json',
'Content-Type': 'application/json',
...(token && { authorization: `Bearer ${token}` })
}
}
)
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.then((json) => {
return json;
})
.catch((err) => {
error = err;
console.error(err);
return null;
});
if (error) {
throw error;
}
return res;
};
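
A hedged usage sketch for the extended chat listing helpers — the new `include_folders` flag on `getChatList` and the paginated `getChatListByFolderId`; the module path and the `folderId` value are placeholders:

import { getChatList, getChatListByFolderId } from '$lib/apis/chats'; // assumed module path

const folderId = 'example-folder-id'; // placeholder

// Page 1 of the chat list, including folder entries alongside chats.
const chatsWithFolders = await getChatList(localStorage.token, 1, true);

// Page 2 of the chats inside a specific folder (page defaults to 1).
const folderChats = await getChatListByFolderId(localStorage.token, folderId, 2);
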
export const getAllArchivedChats = async (token: string) => { export const getAllArchivedChats = async (token: string) => {
let error = null; let error = null;

View file

@ -1,4 +1,4 @@
import { WEBUI_API_BASE_URL } from '$lib/constants'; import { WEBUI_API_BASE_URL, WEBUI_BASE_URL } from '$lib/constants';
import type { Banner } from '$lib/types'; import type { Banner } from '$lib/types';
export const importConfig = async (token: string, config) => { export const importConfig = async (token: string, config) => {
@ -202,6 +202,52 @@ export const verifyToolServerConnection = async (token: string, connection: obje
return res; return res;
}; };
type RegisterOAuthClientForm = {
url: string;
client_id: string;
client_name?: string;
};
export const registerOAuthClient = async (
token: string,
formData: RegisterOAuthClientForm,
type: null | string = null
) => {
let error = null;
const searchParams = type ? `?type=${type}` : '';
const res = await fetch(`${WEBUI_API_BASE_URL}/configs/oauth/clients/register${searchParams}`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${token}`
},
body: JSON.stringify({
...formData
})
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.catch((err) => {
console.error(err);
error = err.detail;
return null;
});
if (error) {
throw error;
}
return res;
};
export const getOAuthClientAuthorizationUrl = (clientId: string, type: null | string = null) => {
const oauthClientId = type ? `${type}:${clientId}` : clientId;
return `${WEBUI_BASE_URL}/oauth/clients/${oauthClientId}/authorize`;
};
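
A sketch of how the two new OAuth client helpers compose for an MCP tool server; the URL and client ID are illustrative, and redirecting to the authorize URL is an assumption about the intended flow:

import { registerOAuthClient, getOAuthClientAuthorizationUrl } from '$lib/apis/configs';

// Register a dynamic OAuth client, then send the user to /oauth/clients/mcp:<id>/authorize.
const client = await registerOAuthClient(
	localStorage.token,
	{ url: 'https://mcp.example.com', client_id: 'my-mcp-server' }, // illustrative values
	'mcp'
);

if (client) {
	window.location.href = getOAuthClientAuthorizationUrl('my-mcp-server', 'mcp');
}
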
export const getCodeExecutionConfig = async (token: string) => { export const getCodeExecutionConfig = async (token: string) => {
let error = null; let error = null;

View file

@ -23,7 +23,7 @@ export const uploadFile = async (token: string, file: File, metadata?: object |
return res.json(); return res.json();
}) })
.catch((err) => { .catch((err) => {
error = err.detail; error = err.detail || err.message;
console.error(err); console.error(err);
return null; return null;
}); });

View file

@ -337,14 +337,8 @@ export const getToolServerData = async (token: string, url: string) => {
throw error; throw error;
} }
const data = { console.log(res);
openapi: res, return res;
info: res.info,
specs: convertOpenApiToToolPayload(res)
};
console.log(data);
return data;
}; };
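
With this change `getToolServerData` resolves to the raw OpenAPI document rather than a pre-built `{ openapi, info, specs }` payload, so callers that still need the tool payload convert it themselves — roughly as below (the `convertOpenApiToToolPayload` import path and the `serverUrl` value are assumptions):

import { getToolServerData } from '$lib/apis';
import { convertOpenApiToToolPayload } from '$lib/utils'; // assumed import path

const serverUrl = 'https://tools.example.com'; // placeholder

const openapi = await getToolServerData(localStorage.token, `${serverUrl}/openapi.json`); // raw spec
const specs = convertOpenApiToToolPayload(openapi); // conversion now happens at the call site, as in getToolServersData below
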
export const getToolServersData = async (servers: object[]) => { export const getToolServersData = async (servers: object[]) => {
@ -356,6 +350,7 @@ export const getToolServersData = async (servers: object[]) => {
let error = null; let error = null;
let toolServerToken = null; let toolServerToken = null;
const auth_type = server?.auth_type ?? 'bearer'; const auth_type = server?.auth_type ?? 'bearer';
if (auth_type === 'bearer') { if (auth_type === 'bearer') {
toolServerToken = server?.key; toolServerToken = server?.key;
@ -365,7 +360,11 @@ export const getToolServersData = async (servers: object[]) => {
toolServerToken = localStorage.token; toolServerToken = localStorage.token;
} }
const data = await getToolServerData( let res = null;
const specType = server?.spec_type ?? 'url';
if (specType === 'url') {
res = await getToolServerData(
toolServerToken, toolServerToken,
(server?.path ?? '').includes('://') (server?.path ?? '').includes('://')
? server?.path ? server?.path
@ -374,9 +373,21 @@ export const getToolServersData = async (servers: object[]) => {
error = err; error = err;
return null; return null;
}); });
} else if ((specType === 'json' && server?.spec) ?? null) {
try {
res = JSON.parse(server?.spec);
} catch (e) {
error = 'Failed to parse JSON spec';
}
}
if (res) {
const { openapi, info, specs } = {
openapi: res,
info: res.info,
specs: convertOpenApiToToolPayload(res)
};
if (data) {
const { openapi, info, specs } = data;
return { return {
url: server?.url, url: server?.url,
openapi: openapi, openapi: openapi,
@ -493,18 +504,25 @@ export const executeToolServer = async (
throw new Error(`HTTP error! Status: ${res.status}. Message: ${resText}`); throw new Error(`HTTP error! Status: ${res.status}. Message: ${resText}`);
} }
let responseData; // make a clone of res and extract headers
try { const responseHeaders = {};
responseData = await res.json(); res.headers.forEach((value, key) => {
} catch (err) { responseHeaders[key] = value;
responseData = await res.text(); });
}
return responseData; const text = await res.text();
let responseData;
try {
responseData = JSON.parse(text);
} catch {
responseData = text;
}
return [responseData, responseHeaders];
} catch (err: any) { } catch (err: any) {
error = err.message; error = err.message;
console.error('API Request Error:', error); console.error('API Request Error:', error);
return { error }; return [{ error }, null];
} }
}; };
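
Note for callers: `executeToolServer` now resolves to a `[responseData, responseHeaders]` pair (and `[{ error }, null]` on failure) instead of the bare response body, so existing call sites need destructuring along these lines (a sketch only; the argument list is elided because it is unchanged):

// Previously: const data = await executeToolServer(...);
const [data, headers] = await executeToolServer(/* same arguments as before */);

if (headers === null) {
	// error path: `data` is `{ error }`
	console.error('Tool call failed:', data.error);
} else {
	console.log('content-type:', headers['content-type'], data);
}
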

View file

@ -31,6 +31,34 @@ export const getModels = async (token: string = '') => {
return res; return res;
}; };
export const importModels = async (token: string, models: object[]) => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/models/import`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
authorization: `Bearer ${token}`
},
body: JSON.stringify({ models: models })
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.catch((err) => {
error = err;
console.error(err);
return null;
});
if (error) {
throw error;
}
return res;
};
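
A minimal sketch for the new bulk model import endpoint; the module path and the source of the `models` array (e.g. a previously exported JSON file) are assumptions:

import { importModels } from '$lib/apis/models'; // assumed module path

declare const file: File; // placeholder for an uploaded JSON export containing an array of model objects

const models = JSON.parse(await file.text());

await importModels(localStorage.token, models).catch((error) => {
	console.error('Model import failed:', error);
});
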
export const getBaseModels = async (token: string = '') => { export const getBaseModels = async (token: string = '') => {
let error = null; let error = null;

View file

@ -194,6 +194,34 @@ export const getAllUsers = async (token: string) => {
return res; return res;
}; };
export const searchUsers = async (token: string, query: string) => {
let error = null;
let res = null;
res = await fetch(`${WEBUI_API_BASE_URL}/users/search?query=${encodeURIComponent(query)}`, {
method: 'GET',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${token}`
}
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.catch((err) => {
console.error(err);
error = err.detail;
return null;
});
if (error) {
throw error;
}
return res;
};
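
Usage sketch for the new user search helper; the module path and query string are illustrative:

import { searchUsers } from '$lib/apis/users'; // assumed module path

// Calls `${WEBUI_API_BASE_URL}/users/search?query=ada` and resolves with the matching users.
const results = await searchUsers(localStorage.token, 'ada');
console.log(results);
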
export const getUserSettings = async (token: string) => { export const getUserSettings = async (token: string) => {
let error = null; let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/users/user/settings`, { const res = await fetch(`${WEBUI_API_BASE_URL}/users/user/settings`, {

View file

@ -122,7 +122,7 @@
return; return;
} }
if (!key) { if (!key && !['azure_ad', 'microsoft_entra_id'].includes(auth_type)) {
loading = false; loading = false;
toast.error($i18n.t('Key is required')); toast.error($i18n.t('Key is required'));
@ -331,6 +331,9 @@
<option value="session">{$i18n.t('Session')}</option> <option value="session">{$i18n.t('Session')}</option>
{#if !direct} {#if !direct}
<option value="system_oauth">{$i18n.t('OAuth')}</option> <option value="system_oauth">{$i18n.t('OAuth')}</option>
{#if azure}
<option value="microsoft_entra_id">{$i18n.t('Entra ID')}</option>
{/if}
{/if} {/if}
{/if} {/if}
</select> </select>
@ -361,6 +364,12 @@
> >
{$i18n.t('Forwards system user OAuth access token to authenticate')} {$i18n.t('Forwards system user OAuth access token to authenticate')}
</div> </div>
{:else if ['azure_ad', 'microsoft_entra_id'].includes(auth_type)}
<div
class={`text-xs self-center translate-y-[1px] ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
>
{$i18n.t('Uses DefaultAzureCredential to authenticate')}
</div>
{/if} {/if}
</div> </div>
</div> </div>
@ -443,7 +452,7 @@
</div> </div>
{/if} {/if}
<div class="flex flex-col w-full"> <div class="flex flex-col w-full mt-2">
<div class="mb-1 flex justify-between"> <div class="mb-1 flex justify-between">
<div <div
class={`mb-0.5 text-xs text-gray-500 class={`mb-0.5 text-xs text-gray-500
@ -499,8 +508,6 @@
{/if} {/if}
</div> </div>
<hr class=" border-gray-100 dark:border-gray-700/10 my-1.5 w-full" />
<div class="flex items-center"> <div class="flex items-center">
<label class="sr-only" for="add-model-id-input">{$i18n.t('Add a model ID')}</label> <label class="sr-only" for="add-model-id-input">{$i18n.t('Add a model ID')}</label>
<input <input
@ -528,9 +535,7 @@
</div> </div>
</div> </div>
<hr class=" border-gray-50 dark:border-gray-850 my-2.5 w-full" /> <div class="flex gap-2 mt-2">
<div class="flex gap-2">
<div class="flex flex-col w-full"> <div class="flex flex-col w-full">
<div <div
class={`mb-0.5 text-xs text-gray-500 class={`mb-0.5 text-xs text-gray-500

View file

@ -1,440 +0,0 @@
<script lang="ts">
import { toast } from 'svelte-sonner';
import { getContext, onMount } from 'svelte';
const i18n = getContext('i18n');
import { settings } from '$lib/stores';
import Modal from '$lib/components/common/Modal.svelte';
import Plus from '$lib/components/icons/Plus.svelte';
import Minus from '$lib/components/icons/Minus.svelte';
import PencilSolid from '$lib/components/icons/PencilSolid.svelte';
import SensitiveInput from '$lib/components/common/SensitiveInput.svelte';
import Tooltip from '$lib/components/common/Tooltip.svelte';
import Switch from '$lib/components/common/Switch.svelte';
import Tags from './common/Tags.svelte';
import { getToolServerData } from '$lib/apis';
import { verifyToolServerConnection } from '$lib/apis/configs';
import AccessControl from './workspace/common/AccessControl.svelte';
import Spinner from '$lib/components/common/Spinner.svelte';
import XMark from '$lib/components/icons/XMark.svelte';
export let onSubmit: Function = () => {};
export let onDelete: Function = () => {};
export let show = false;
export let edit = false;
export let direct = false;
export let connection = null;
let url = '';
let path = 'openapi.json';
let auth_type = 'bearer';
let key = '';
let accessControl = {};
let id = '';
let name = '';
let description = '';
let enable = true;
let loading = false;
const verifyHandler = async () => {
if (url === '') {
toast.error($i18n.t('Please enter a valid URL'));
return;
}
if (path === '') {
toast.error($i18n.t('Please enter a valid path'));
return;
}
if (direct) {
const res = await getToolServerData(
auth_type === 'bearer' ? key : localStorage.token,
path.includes('://') ? path : `${url}${path.startsWith('/') ? '' : '/'}${path}`
).catch((err) => {
toast.error($i18n.t('Connection failed'));
});
if (res) {
toast.success($i18n.t('Connection successful'));
console.debug('Connection successful', res);
}
} else {
const res = await verifyToolServerConnection(localStorage.token, {
url,
path,
auth_type,
key,
config: {
enable: enable,
access_control: accessControl
},
info: {
id,
name,
description
}
}).catch((err) => {
toast.error($i18n.t('Connection failed'));
});
if (res) {
toast.success($i18n.t('Connection successful'));
console.debug('Connection successful', res);
}
}
};
const submitHandler = async () => {
loading = true;
// remove trailing slash from url
url = url.replace(/\/$/, '');
const connection = {
url,
path,
auth_type,
key,
config: {
enable: enable,
access_control: accessControl
},
info: {
id: id,
name: name,
description: description
}
};
await onSubmit(connection);
loading = false;
show = false;
url = '';
path = 'openapi.json';
key = '';
auth_type = 'bearer';
id = '';
name = '';
description = '';
enable = true;
accessControl = null;
};
const init = () => {
if (connection) {
url = connection.url;
path = connection?.path ?? 'openapi.json';
auth_type = connection?.auth_type ?? 'bearer';
key = connection?.key ?? '';
id = connection.info?.id ?? '';
name = connection.info?.name ?? '';
description = connection.info?.description ?? '';
enable = connection.config?.enable ?? true;
accessControl = connection.config?.access_control ?? null;
}
};
$: if (show) {
init();
}
onMount(() => {
init();
});
</script>
<Modal size="sm" bind:show>
<div>
<div class=" flex justify-between dark:text-gray-100 px-5 pt-4 pb-2">
<h1 class=" text-lg font-medium self-center font-primary">
{#if edit}
{$i18n.t('Edit Connection')}
{:else}
{$i18n.t('Add Connection')}
{/if}
</h1>
<button
class="self-center"
aria-label={$i18n.t('Close Configure Connection Modal')}
on:click={() => {
show = false;
}}
>
<XMark className={'size-5'} />
</button>
</div>
<div class="flex flex-col md:flex-row w-full px-4 pb-4 md:space-x-4 dark:text-gray-200">
<div class=" flex flex-col w-full sm:flex-row sm:justify-center sm:space-x-6">
<form
class="flex flex-col w-full"
on:submit={(e) => {
e.preventDefault();
submitHandler();
}}
>
<div class="px-1">
<div class="flex gap-2">
<div class="flex flex-col w-full">
<div class="flex justify-between mb-0.5">
<label
for="api-base-url"
class={`text-xs ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
>{$i18n.t('URL')}</label
>
</div>
<div class="flex flex-1 items-center">
<input
id="api-base-url"
class={`w-full flex-1 text-sm bg-transparent ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
type="text"
bind:value={url}
placeholder={$i18n.t('API Base URL')}
autocomplete="off"
required
/>
<Tooltip
content={$i18n.t('Verify Connection')}
className="shrink-0 flex items-center mr-1"
>
<button
class="self-center p-1 bg-transparent hover:bg-gray-100 dark:bg-gray-900 dark:hover:bg-gray-850 rounded-lg transition"
on:click={() => {
verifyHandler();
}}
aria-label={$i18n.t('Verify Connection')}
type="button"
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 20 20"
fill="currentColor"
class="w-4 h-4"
aria-hidden="true"
>
<path
fill-rule="evenodd"
d="M15.312 11.424a5.5 5.5 0 01-9.201 2.466l-.312-.311h2.433a.75.75 0 000-1.5H3.989a.75.75 0 00-.75.75v4.242a.75.75 0 001.5 0v-2.43l.31.31a7 7 0 0011.712-3.138.75.75 0 00-1.449-.39zm1.23-3.723a.75.75 0 00.219-.53V2.929a.75.75 0 00-1.5 0V5.36l-.31-.31A7 7 0 003.239 8.188a.75.75 0 101.448.389A5.5 5.5 0 0113.89 6.11l.311.31h-2.432a.75.75 0 000 1.5h4.243a.75.75 0 00.53-.219z"
clip-rule="evenodd"
/>
</svg>
</button>
</Tooltip>
<Tooltip content={enable ? $i18n.t('Enabled') : $i18n.t('Disabled')}>
<Switch bind:state={enable} />
</Tooltip>
</div>
<div class="flex-1 flex items-center">
<label for="url-or-path" class="sr-only"
>{$i18n.t('openapi.json URL or Path')}</label
>
<input
class={`w-full text-sm bg-transparent ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
type="text"
id="url-or-path"
bind:value={path}
placeholder={$i18n.t('openapi.json URL or Path')}
autocomplete="off"
required
/>
</div>
</div>
</div>
<div
class={`text-xs mt-1 ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
>
{$i18n.t(`WebUI will make requests to "{{url}}"`, {
url: path.includes('://') ? path : `${url}${path.startsWith('/') ? '' : '/'}${path}`
})}
</div>
<div class="flex gap-2 mt-2">
<div class="flex flex-col w-full">
<label
for="select-bearer-or-session"
class={`text-xs ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
>{$i18n.t('Auth')}</label
>
<div class="flex gap-2">
<div class="flex-shrink-0 self-start">
<select
id="select-bearer-or-session"
class={`w-full text-sm bg-transparent pr-5 ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
bind:value={auth_type}
>
<option value="none">{$i18n.t('None')}</option>
<option value="bearer">{$i18n.t('Bearer')}</option>
<option value="session">{$i18n.t('Session')}</option>
{#if !direct}
<option value="system_oauth">{$i18n.t('OAuth')}</option>
{/if}
</select>
</div>
<div class="flex flex-1 items-center">
{#if auth_type === 'bearer'}
<SensitiveInput
bind:value={key}
placeholder={$i18n.t('API Key')}
required={false}
/>
{:else if auth_type === 'none'}
<div
class={`text-xs self-center translate-y-[1px] ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
>
{$i18n.t('No authentication')}
</div>
{:else if auth_type === 'session'}
<div
class={`text-xs self-center translate-y-[1px] ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
>
{$i18n.t('Forwards system user session credentials to authenticate')}
</div>
{:else if auth_type === 'system_oauth'}
<div
class={`text-xs self-center translate-y-[1px] ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
>
{$i18n.t('Forwards system user OAuth access token to authenticate')}
</div>
{/if}
</div>
</div>
</div>
</div>
{#if !direct}
<hr class=" border-gray-100 dark:border-gray-700/10 my-2.5 w-full" />
<div class="flex gap-2">
<div class="flex flex-col w-full">
<label
for="enter-id"
class={`mb-0.5 text-xs ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
>{$i18n.t('ID')}
<span class="text-xs text-gray-200 dark:text-gray-800 ml-0.5"
>{$i18n.t('Optional')}</span
>
</label>
<div class="flex-1">
<input
id="enter-id"
class={`w-full text-sm bg-transparent ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
type="text"
bind:value={id}
placeholder={$i18n.t('Enter ID')}
autocomplete="off"
/>
</div>
</div>
</div>
<div class="flex gap-2 mt-2">
<div class="flex flex-col w-full">
<label
for="enter-name"
class={`mb-0.5 text-xs ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
>{$i18n.t('Name')}
</label>
<div class="flex-1">
<input
id="enter-name"
class={`w-full text-sm bg-transparent ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
type="text"
bind:value={name}
placeholder={$i18n.t('Enter name')}
autocomplete="off"
required
/>
</div>
</div>
</div>
<div class="flex flex-col w-full mt-2">
<label
for="description"
class={`mb-1 text-xs ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100 placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700 text-gray-500'}`}
>{$i18n.t('Description')}</label
>
<div class="flex-1">
<input
id="description"
class={`w-full text-sm bg-transparent ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
type="text"
bind:value={description}
placeholder={$i18n.t('Enter description')}
autocomplete="off"
/>
</div>
</div>
<hr class=" border-gray-100 dark:border-gray-700/10 my-2.5 w-full" />
<div class="my-2 -mx-2">
<div class="px-3 py-2 bg-gray-50 dark:bg-gray-950 rounded-lg">
<AccessControl bind:accessControl />
</div>
</div>
{/if}
</div>
<div class="flex justify-end pt-3 text-sm font-medium gap-1.5">
{#if edit}
<button
class="px-3.5 py-1.5 text-sm font-medium dark:bg-black dark:hover:bg-gray-900 dark:text-white bg-white text-black hover:bg-gray-100 transition rounded-full flex flex-row space-x-1 items-center"
type="button"
on:click={() => {
onDelete();
show = false;
}}
>
{$i18n.t('Delete')}
</button>
{/if}
<button
class="px-3.5 py-1.5 text-sm font-medium bg-black hover:bg-gray-900 text-white dark:bg-white dark:text-black dark:hover:bg-gray-100 transition rounded-full flex flex-row space-x-1 items-center {loading
? ' cursor-not-allowed'
: ''}"
type="submit"
disabled={loading}
>
{$i18n.t('Save')}
{#if loading}
<div class="ml-2 self-center">
<Spinner />
</div>
{/if}
</button>
</div>
</form>
</div>
</div>
</div>
</Modal>

View file

@ -0,0 +1,797 @@
<script lang="ts">
import { v4 as uuidv4 } from 'uuid';
import fileSaver from 'file-saver';
const { saveAs } = fileSaver;
import { toast } from 'svelte-sonner';
import { getContext, onMount } from 'svelte';
const i18n = getContext('i18n');
import { settings } from '$lib/stores';
import Modal from '$lib/components/common/Modal.svelte';
import Plus from '$lib/components/icons/Plus.svelte';
import Minus from '$lib/components/icons/Minus.svelte';
import PencilSolid from '$lib/components/icons/PencilSolid.svelte';
import SensitiveInput from '$lib/components/common/SensitiveInput.svelte';
import Tooltip from '$lib/components/common/Tooltip.svelte';
import Switch from '$lib/components/common/Switch.svelte';
import Tags from './common/Tags.svelte';
import { getToolServerData } from '$lib/apis';
import { verifyToolServerConnection, registerOAuthClient } from '$lib/apis/configs';
import AccessControl from './workspace/common/AccessControl.svelte';
import Spinner from '$lib/components/common/Spinner.svelte';
import XMark from '$lib/components/icons/XMark.svelte';
export let onSubmit: Function = () => {};
export let onDelete: Function = () => {};
export let show = false;
export let edit = false;
export let direct = false;
export let connection = null;
let inputElement = null;
let type = 'openapi'; // 'openapi', 'mcp'
let url = '';
let spec_type = 'url'; // 'url', 'json'
let spec = ''; // used when spec_type is 'json'
let path = 'openapi.json';
let auth_type = 'bearer';
let key = '';
let accessControl = {};
let id = '';
let name = '';
let description = '';
let oauthClientInfo = null;
let enable = true;
let loading = false;
const registerOAuthClientHandler = async () => {
if (url === '') {
toast.error($i18n.t('Please enter a valid URL'));
return;
}
if (id === '') {
toast.error($i18n.t('Please enter a valid ID'));
return;
}
const res = await registerOAuthClient(
localStorage.token,
{
url: url,
client_id: id
},
'mcp'
).catch((err) => {
toast.error($i18n.t('Registration failed'));
return null;
});
if (res) {
toast.warning(
$i18n.t(
'Please save the connection to persist the OAuth client information and do not change the ID'
)
);
toast.success($i18n.t('Registration successful'));
console.debug('Registration successful', res);
oauthClientInfo = res?.oauth_client_info ?? null;
}
};
const verifyHandler = async () => {
if (url === '') {
toast.error($i18n.t('Please enter a valid URL'));
return;
}
if (['openapi', ''].includes(type)) {
if (spec_type === 'json' && spec === '') {
toast.error($i18n.t('Please enter a valid JSON spec'));
return;
}
if (spec_type === 'url' && path === '') {
toast.error($i18n.t('Please enter a valid path'));
return;
}
}
if (direct) {
const res = await getToolServerData(
auth_type === 'bearer' ? key : localStorage.token,
path.includes('://') ? path : `${url}${path.startsWith('/') ? '' : '/'}${path}`
).catch((err) => {
toast.error($i18n.t('Connection failed'));
});
if (res) {
toast.success($i18n.t('Connection successful'));
console.debug('Connection successful', res);
}
} else {
const res = await verifyToolServerConnection(localStorage.token, {
url,
path,
type,
auth_type,
key,
config: {
enable: enable,
access_control: accessControl
},
info: {
id,
name,
description
}
}).catch((err) => {
toast.error($i18n.t('Connection failed'));
});
if (res) {
toast.success($i18n.t('Connection successful'));
console.debug('Connection successful', res);
}
}
};
const importHandler = async (e) => {
const file = e.target.files[0];
if (!file) return;
const reader = new FileReader();
reader.onload = (event) => {
const json = event.target.result;
console.log('importHandler', json);
try {
let data = JSON.parse(json);
// validate data
if (Array.isArray(data)) {
if (data.length === 0) {
toast.error($i18n.t('Please select a valid JSON file'));
return;
}
data = data[0];
}
if (data.type) type = data.type;
if (data.url) url = data.url;
if (data.spec_type) spec_type = data.spec_type;
if (data.spec) spec = data.spec;
if (data.path) path = data.path;
if (data.auth_type) auth_type = data.auth_type;
if (data.key) key = data.key;
if (data.info) {
id = data.info.id ?? '';
name = data.info.name ?? '';
description = data.info.description ?? '';
}
if (data.config) {
enable = data.config.enable ?? true;
accessControl = data.config.access_control ?? {};
}
toast.success($i18n.t('Import successful'));
} catch (error) {
toast.error($i18n.t('Please select a valid JSON file'));
}
};
reader.readAsText(file);
};
const exportHandler = async () => {
// export current connection as json file
const json = JSON.stringify([
{
type,
url,
spec_type,
spec,
path,
auth_type,
key,
info: {
id: id,
name: name,
description: description
}
}
]);
const blob = new Blob([json], {
type: 'application/json'
});
saveAs(blob, `tool-server-${id || name || 'export'}.json`);
};
const submitHandler = async () => {
loading = true;
// remove trailing slash from url
url = url.replace(/\/$/, '');
if (id.includes(':') || id.includes('|')) {
toast.error($i18n.t('ID cannot contain ":" or "|" characters'));
loading = false;
return;
}
if (type === 'mcp' && auth_type === 'oauth_2.1' && !oauthClientInfo) {
toast.error($i18n.t('Please register the OAuth client'));
loading = false;
return;
}
// validate spec
if (spec_type === 'json') {
try {
const specJSON = JSON.parse(spec);
spec = JSON.stringify(specJSON, null, 2);
} catch (e) {
toast.error($i18n.t('Please enter a valid JSON spec'));
loading = false;
return;
}
}
const connection = {
type,
url,
spec_type,
spec,
path,
auth_type,
key,
config: {
enable: enable,
access_control: accessControl
},
info: {
id: id,
name: name,
description: description,
...(oauthClientInfo ? { oauth_client_info: oauthClientInfo } : {})
}
};
await onSubmit(connection);
loading = false;
show = false;
// reset form
type = 'openapi';
url = '';
spec_type = 'url';
spec = '';
path = 'openapi.json';
key = '';
auth_type = 'bearer';
id = '';
name = '';
description = '';
oauthClientInfo = null;
enable = true;
accessControl = null;
};
const init = () => {
if (connection) {
type = connection?.type ?? 'openapi';
url = connection.url;
spec_type = connection?.spec_type ?? 'url';
spec = connection?.spec ?? '';
path = connection?.path ?? 'openapi.json';
auth_type = connection?.auth_type ?? 'bearer';
key = connection?.key ?? '';
id = connection.info?.id ?? '';
name = connection.info?.name ?? '';
description = connection.info?.description ?? '';
oauthClientInfo = connection.info?.oauth_client_info ?? null;
enable = connection.config?.enable ?? true;
accessControl = connection.config?.access_control ?? null;
}
};
$: if (show) {
init();
}
onMount(() => {
init();
});
</script>
<Modal size="sm" bind:show>
<div>
<div class=" flex justify-between dark:text-gray-100 px-5 pt-4 pb-2">
<h1 class=" text-lg font-medium self-center font-primary">
{#if edit}
{$i18n.t('Edit Connection')}
{:else}
{$i18n.t('Add Connection')}
{/if}
</h1>
<div class="flex items-center gap-3">
<div class="flex gap-1.5 text-xs justify-end">
<button
class=" hover:underline"
type="button"
on:click={() => {
inputElement?.click();
}}
>
{$i18n.t('Import')}
</button>
<button class=" hover:underline" type="button" on:click={exportHandler}>
{$i18n.t('Export')}
</button>
</div>
<button
class="self-center"
aria-label={$i18n.t('Close Configure Connection Modal')}
on:click={() => {
show = false;
}}
>
<XMark className={'size-5'} />
</button>
</div>
</div>
<div class="flex flex-col md:flex-row w-full px-4 pb-4 md:space-x-4 dark:text-gray-200">
<div class=" flex flex-col w-full sm:flex-row sm:justify-center sm:space-x-6">
<input
bind:this={inputElement}
type="file"
hidden
accept=".json"
on:change={(e) => {
importHandler(e);
}}
/>
<form
class="flex flex-col w-full"
on:submit={(e) => {
e.preventDefault();
submitHandler();
}}
>
<div class="px-1">
{#if !direct}
<div class="flex gap-2 mb-1.5">
<div class="flex w-full justify-between items-center">
<div class=" text-xs text-gray-500">{$i18n.t('Type')}</div>
<div class="">
<button
on:click={() => {
type = ['', 'openapi'].includes(type) ? 'mcp' : 'openapi';
}}
type="button"
class=" text-xs text-gray-700 dark:text-gray-300"
>
{#if ['', 'openapi'].includes(type)}
{$i18n.t('OpenAPI')}
{:else if type === 'mcp'}
{$i18n.t('MCP')}
<span class="text-gray-500">{$i18n.t('Streamable HTTP')}</span>
{/if}
</button>
</div>
</div>
</div>
{/if}
<div class="flex gap-2">
<div class="flex flex-col w-full">
<div class="flex justify-between mb-0.5">
<label
for="api-base-url"
class={`text-xs ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
>{$i18n.t('URL')}</label
>
</div>
<div class="flex flex-1 items-center">
<input
id="api-base-url"
class={`w-full flex-1 text-sm bg-transparent ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
type="text"
bind:value={url}
placeholder={$i18n.t('API Base URL')}
autocomplete="off"
required
/>
<Tooltip
content={$i18n.t('Verify Connection')}
className="shrink-0 flex items-center mr-1"
>
<button
class="self-center p-1 bg-transparent hover:bg-gray-100 dark:bg-gray-900 dark:hover:bg-gray-850 rounded-lg transition"
on:click={() => {
verifyHandler();
}}
aria-label={$i18n.t('Verify Connection')}
type="button"
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 20 20"
fill="currentColor"
class="w-4 h-4"
aria-hidden="true"
>
<path
fill-rule="evenodd"
d="M15.312 11.424a5.5 5.5 0 01-9.201 2.466l-.312-.311h2.433a.75.75 0 000-1.5H3.989a.75.75 0 00-.75.75v4.242a.75.75 0 001.5 0v-2.43l.31.31a7 7 0 0011.712-3.138.75.75 0 00-1.449-.39zm1.23-3.723a.75.75 0 00.219-.53V2.929a.75.75 0 00-1.5 0V5.36l-.31-.31A7 7 0 003.239 8.188a.75.75 0 101.448.389A5.5 5.5 0 0113.89 6.11l.311.31h-2.432a.75.75 0 000 1.5h4.243a.75.75 0 00.53-.219z"
clip-rule="evenodd"
/>
</svg>
</button>
</Tooltip>
<Tooltip content={enable ? $i18n.t('Enabled') : $i18n.t('Disabled')}>
<Switch bind:state={enable} />
</Tooltip>
</div>
</div>
</div>
{#if ['', 'openapi'].includes(type)}
<div class="flex gap-2 mt-2">
<div class="flex flex-col w-full">
<div class="flex justify-between items-center mb-0.5">
<div class="flex gap-2 items-center">
<div
for="select-bearer-or-session"
class={`text-xs ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
>
{$i18n.t('OpenAPI Spec')}
</div>
</div>
</div>
<div class="flex gap-2">
<div class="flex-shrink-0 self-start">
<select
id="select-bearer-or-session"
class={`w-full text-sm bg-transparent pr-5 ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
bind:value={spec_type}
>
<option value="url">{$i18n.t('URL')}</option>
<option value="json">{$i18n.t('JSON')}</option>
</select>
</div>
<div class="flex flex-1 items-center">
{#if spec_type === 'url'}
<div class="flex-1 flex items-center">
<label for="url-or-path" class="sr-only"
>{$i18n.t('openapi.json URL or Path')}</label
>
<input
class={`w-full text-sm bg-transparent ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
type="text"
id="url-or-path"
bind:value={path}
placeholder={$i18n.t('openapi.json URL or Path')}
autocomplete="off"
required
/>
</div>
{:else if spec_type === 'json'}
<div
class={`text-xs w-full self-center translate-y-[1px] ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
>
<label for="url-or-path" class="sr-only">{$i18n.t('JSON Spec')}</label>
<textarea
class={`w-full text-sm bg-transparent ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700 text-black dark:text-white'}`}
bind:value={spec}
placeholder={$i18n.t('JSON Spec')}
autocomplete="off"
required
rows="5"
/>
</div>
{/if}
</div>
</div>
{#if ['', 'url'].includes(spec_type)}
<div
class={`text-xs mt-1 ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
>
{$i18n.t(`WebUI will make requests to "{{url}}"`, {
url: path.includes('://')
? path
: `${url}${path.startsWith('/') ? '' : '/'}${path}`
})}
</div>
{/if}
</div>
</div>
{/if}
<div class="flex gap-2 mt-2">
<div class="flex flex-col w-full">
<div class="flex justify-between items-center">
<div class="flex gap-2 items-center">
<div
for="select-bearer-or-session"
class={`text-xs ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
>
{$i18n.t('Auth')}
</div>
</div>
{#if auth_type === 'oauth_2.1'}
<div class="flex items-center gap-2">
<div class="flex flex-col justify-end items-center shrink-0">
<Tooltip
content={oauthClientInfo
? $i18n.t('Register Again')
: $i18n.t('Register Client')}
>
<button
class=" text-xs underline dark:text-gray-500 dark:hover:text-gray-200 text-gray-700 hover:text-gray-900 transition"
type="button"
on:click={() => {
registerOAuthClientHandler();
}}
>
{$i18n.t('Register Client')}
</button>
</Tooltip>
</div>
{#if !oauthClientInfo}
<div
class="text-xs font-medium px-1.5 rounded-md bg-yellow-500/20 text-yellow-700 dark:text-yellow-200"
>
{$i18n.t('Not Registered')}
</div>
{:else}
<div
class="text-xs font-medium px-1.5 rounded-md bg-green-500/20 text-green-700 dark:text-green-200"
>
{$i18n.t('Registered')}
</div>
{/if}
</div>
{/if}
</div>
<div class="flex gap-2">
<div class="flex-shrink-0 self-start">
<select
id="select-bearer-or-session"
class={`w-full text-sm bg-transparent pr-5 ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
bind:value={auth_type}
>
<option value="none">{$i18n.t('None')}</option>
<option value="bearer">{$i18n.t('Bearer')}</option>
<option value="session">{$i18n.t('Session')}</option>
{#if !direct}
<option value="system_oauth">{$i18n.t('OAuth')}</option>
{#if type === 'mcp'}
<option value="oauth_2.1">{$i18n.t('OAuth 2.1')}</option>
{/if}
{/if}
</select>
</div>
<div class="flex flex-1 items-center">
{#if auth_type === 'bearer'}
<SensitiveInput
bind:value={key}
placeholder={$i18n.t('API Key')}
required={false}
/>
{:else if auth_type === 'none'}
<div
class={`text-xs self-center translate-y-[1px] ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
>
{$i18n.t('No authentication')}
</div>
{:else if auth_type === 'session'}
<div
class={`text-xs self-center translate-y-[1px] ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
>
{$i18n.t('Forwards system user session credentials to authenticate')}
</div>
{:else if auth_type === 'system_oauth'}
<div
class={`text-xs self-center translate-y-[1px] ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
>
{$i18n.t('Forwards system user OAuth access token to authenticate')}
</div>
{:else if auth_type === 'oauth_2.1'}
<div
class={`flex items-center text-xs self-center translate-y-[1px] ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
>
{$i18n.t('Uses OAuth 2.1 Dynamic Client Registration')}
</div>
{/if}
</div>
</div>
</div>
</div>
{#if !direct}
<hr class=" border-gray-100 dark:border-gray-700/10 my-2.5 w-full" />
<div class="flex gap-2">
<div class="flex flex-col w-full">
<label
for="enter-id"
class={`mb-0.5 text-xs ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
>{$i18n.t('ID')}
{#if type !== 'mcp'}
<span class="text-xs text-gray-200 dark:text-gray-800 ml-0.5"
>{$i18n.t('Optional')}</span
>
{/if}
</label>
<div class="flex-1">
<input
id="enter-id"
class={`w-full text-sm bg-transparent ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
type="text"
bind:value={id}
placeholder={$i18n.t('Enter ID')}
autocomplete="off"
required={type === 'mcp'}
/>
</div>
</div>
</div>
<div class="flex gap-2 mt-2">
<div class="flex flex-col w-full">
<label
for="enter-name"
class={`mb-0.5 text-xs ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : 'text-gray-500'}`}
>{$i18n.t('Name')}
</label>
<div class="flex-1">
<input
id="enter-name"
class={`w-full text-sm bg-transparent ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
type="text"
bind:value={name}
placeholder={$i18n.t('Enter name')}
autocomplete="off"
required
/>
</div>
</div>
</div>
<div class="flex flex-col w-full mt-2">
<label
for="description"
class={`mb-1 text-xs ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100 placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700 text-gray-500'}`}
>{$i18n.t('Description')}</label
>
<div class="flex-1">
<input
id="description"
class={`w-full text-sm bg-transparent ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
type="text"
bind:value={description}
placeholder={$i18n.t('Enter description')}
autocomplete="off"
/>
</div>
</div>
<hr class=" border-gray-100 dark:border-gray-700/10 my-2.5 w-full" />
<div class="my-2 -mx-2">
<div class="px-4 py-3 bg-gray-50 dark:bg-gray-950 rounded-3xl">
<AccessControl bind:accessControl />
</div>
</div>
{/if}
</div>
{#if type === 'mcp'}
<div
class=" bg-yellow-500/20 text-yellow-700 dark:text-yellow-200 rounded-2xl text-xs px-4 py-3 mb-2"
>
<span class="font-medium">
{$i18n.t('Warning')}:
</span>
{$i18n.t(
'MCP support is experimental and its specification changes often, which can lead to incompatibilities. OpenAPI specification support is directly maintained by the Open WebUI team, making it the more reliable option for compatibility.'
)}
<a
class="font-medium underline"
href="https://docs.openwebui.com/features/mcp"
target="_blank">{$i18n.t('Read more →')}</a
>
</div>
{/if}
<div class="flex justify-between pt-3 text-sm font-medium gap-1.5">
<div></div>
<div class="flex gap-1.5">
{#if edit}
<button
class="px-3.5 py-1.5 text-sm font-medium dark:bg-black dark:hover:bg-gray-900 dark:text-white bg-white text-black hover:bg-gray-100 transition rounded-full flex flex-row space-x-1 items-center"
type="button"
on:click={() => {
onDelete();
show = false;
}}
>
{$i18n.t('Delete')}
</button>
{/if}
<button
class="px-3.5 py-1.5 text-sm font-medium bg-black hover:bg-gray-900 text-white dark:bg-white dark:text-black dark:hover:bg-gray-100 transition rounded-full flex flex-row space-x-1 items-center {loading
? ' cursor-not-allowed'
: ''}"
type="submit"
disabled={loading}
>
{$i18n.t('Save')}
{#if loading}
<div class="ml-2 self-center">
<Spinner />
</div>
{/if}
</button>
</div>
</div>
</form>
</div>
</div>
</div>
</Modal>

View file

@ -19,16 +19,19 @@
let changelog = null; let changelog = null;
onMount(async () => { const init = async () => {
const res = await getChangelog(); changelog = await getChangelog();
changelog = res; };
});
$: if (show) {
init();
}
</script> </script>
<Modal bind:show size="xl"> <Modal bind:show size="xl">
<div class="px-5 pt-4 dark:text-gray-300 text-gray-700"> <div class="px-6 pt-5 dark:text-white text-black">
<div class="flex justify-between items-start"> <div class="flex justify-between items-start">
<div class="text-xl font-semibold"> <div class="text-xl font-medium">
{$i18n.t("What's New in")} {$i18n.t("What's New in")}
{$WEBUI_NAME} {$WEBUI_NAME}
<Confetti x={[-1, -0.25]} y={[0, 0.5]} /> <Confetti x={[-1, -0.25]} y={[0, 0.5]} />
@ -48,7 +51,7 @@
</div> </div>
<div class="flex items-center mt-1"> <div class="flex items-center mt-1">
<div class="text-sm dark:text-gray-200">{$i18n.t('Release Notes')}</div> <div class="text-sm dark:text-gray-200">{$i18n.t('Release Notes')}</div>
<div class="flex self-center w-[1px] h-6 mx-2.5 bg-gray-200 dark:bg-gray-700" /> <div class="flex self-center w-[1px] h-6 mx-2.5 bg-gray-50/50 dark:bg-gray-850/50" />
<div class="text-sm dark:text-gray-200"> <div class="text-sm dark:text-gray-200">
v{WEBUI_VERSION} v{WEBUI_VERSION}
</div> </div>
@ -56,7 +59,7 @@
</div> </div>
<div class=" w-full p-4 px-5 text-gray-700 dark:text-gray-100"> <div class=" w-full p-4 px-5 text-gray-700 dark:text-gray-100">
<div class=" overflow-y-scroll max-h-[32rem] scrollbar-hidden"> <div class=" overflow-y-scroll max-h-[30rem] scrollbar-hidden">
<div class="mb-3"> <div class="mb-3">
{#if changelog} {#if changelog}
{#each Object.keys(changelog) as version} {#each Object.keys(changelog) as version}
@ -65,20 +68,20 @@
v{version} - {changelog[version].date} v{version} - {changelog[version].date}
</div> </div>
<hr class="border-gray-100 dark:border-gray-850 my-2" /> <hr class="border-gray-50/50 dark:border-gray-850/50 my-2" />
{#each Object.keys(changelog[version]).filter((section) => section !== 'date') as section} {#each Object.keys(changelog[version]).filter((section) => section !== 'date') as section}
<div class="w-full"> <div class="w-full">
<div <div
class="font-semibold uppercase text-xs {section === 'added' class="font-semibold uppercase text-xs {section === 'added'
? 'text-white bg-blue-600' ? 'bg-blue-500/20 text-blue-700 dark:text-blue-200'
: section === 'fixed' : section === 'fixed'
? 'text-white bg-green-600' ? 'bg-green-500/20 text-green-700 dark:text-green-200'
: section === 'changed' : section === 'changed'
? 'text-white bg-yellow-600' ? 'bg-yellow-500/20 text-yellow-700 dark:text-yellow-200'
: section === 'removed' : section === 'removed'
? 'text-white bg-red-600' ? 'bg-red-500/20 text-red-700 dark:text-red-200'
: ''} w-fit px-3 rounded-full my-2.5" : ''} w-fit rounded-xl px-2 my-2.5"
> >
{section} {section}
</div> </div>

View file

@ -12,6 +12,43 @@
export let title: string = 'HI'; export let title: string = 'HI';
export let content: string; export let content: string;
let startX = 0,
startY = 0;
let moved = false;
const DRAG_THRESHOLD_PX = 6;
const clickHandler = () => {
onClick();
dispatch('closeToast');
};
function onPointerDown(e: PointerEvent) {
startX = e.clientX;
startY = e.clientY;
moved = false;
// Ensure we continue to get events even if the toast moves under the pointer.
(e.currentTarget as HTMLElement).setPointerCapture?.(e.pointerId);
}
function onPointerMove(e: PointerEvent) {
if (moved) return;
const dx = e.clientX - startX;
const dy = e.clientY - startY;
if (dx * dx + dy * dy > DRAG_THRESHOLD_PX * DRAG_THRESHOLD_PX) {
moved = true;
}
}
function onPointerUp(e: PointerEvent) {
// Release capture if taken
(e.currentTarget as HTMLElement).releasePointerCapture?.(e.pointerId);
// Only treat as a click if there wasn't a drag
if (!moved) {
clickHandler();
}
}
onMount(() => { onMount(() => {
if (!navigator.userActivation.hasBeenActive) { if (!navigator.userActivation.hasBeenActive) {
return; return;
@ -31,24 +68,33 @@
}); });
</script> </script>
<button <!-- svelte-ignore a11y-click-events-have-key-events -->
class="flex gap-2.5 text-left min-w-[var(--width)] w-full dark:bg-gray-850 dark:text-white bg-white text-black border border-gray-100 dark:border-gray-850 rounded-xl px-3.5 py-3.5" <!-- svelte-ignore a11y-no-static-element-interactions -->
on:click={() => { <div
onClick(); class="flex gap-2.5 text-left min-w-[var(--width)] w-full dark:bg-gray-850 dark:text-white bg-white text-black border border-gray-100 dark:border-gray-800 rounded-3xl px-4 py-3.5 cursor-pointer select-none"
dispatch('closeToast'); on:dragstart|preventDefault
on:pointerdown={onPointerDown}
on:pointermove={onPointerMove}
on:pointerup={onPointerUp}
on:pointercancel={() => (moved = true)}
on:keydown={(e) => {
if (e.key === 'Enter' || e.key === ' ') {
e.preventDefault();
clickHandler();
}
}} }}
> >
<div class="shrink-0 self-top -translate-y-0.5"> <div class="shrink-0 self-top -translate-y-0.5">
<img src="{WEBUI_BASE_URL}/static/favicon.png" alt="favicon" class="size-7 rounded-full" /> <img src="{WEBUI_BASE_URL}/static/favicon.png" alt="favicon" class="size-6 rounded-full" />
</div> </div>
<div> <div>
{#if title} {#if title}
<div class=" text-[13px] font-medium mb-0.5 line-clamp-1 capitalize">{title}</div> <div class=" text-[13px] font-medium mb-0.5 line-clamp-1">{title}</div>
{/if} {/if}
<div class=" line-clamp-2 text-xs self-center dark:text-gray-300 font-normal"> <div class=" line-clamp-2 text-xs self-center dark:text-gray-300 font-normal">
{@html DOMPurify.sanitize(marked(content))} {@html DOMPurify.sanitize(marked(content))}
</div> </div>
</div> </div>
</button> </div>

View file

@ -56,7 +56,7 @@
<div class="flex flex-col lg:flex-row w-full h-full pb-2 lg:space-x-4"> <div class="flex flex-col lg:flex-row w-full h-full pb-2 lg:space-x-4">
<div <div
id="users-tabs-container" id="users-tabs-container"
class="tabs flex flex-row overflow-x-auto gap-2.5 max-w-full lg:gap-1 lg:flex-col lg:flex-none lg:w-40 dark:text-gray-200 text-sm font-medium text-left scrollbar-none" class="tabs mx-[16px] lg:mx-0 lg:px-[16px] flex flex-row overflow-x-auto gap-2.5 max-w-full lg:gap-1 lg:flex-col lg:flex-none lg:w-50 dark:text-gray-200 text-sm font-medium text-left scrollbar-none"
> >
<button <button
id="leaderboard" id="leaderboard"
@ -113,7 +113,7 @@
</button> </button>
</div> </div>
<div class="flex-1 mt-1 lg:mt-0 overflow-y-scroll"> <div class="flex-1 mt-1 lg:mt-0 px-[16px] lg:pr-[16px] lg:pl-0 overflow-y-scroll">
{#if selectedTab === 'leaderboard'} {#if selectedTab === 'leaderboard'}
<Leaderboard {feedbacks} /> <Leaderboard {feedbacks} />
{:else if selectedTab === 'feedbacks'} {:else if selectedTab === 'feedbacks'}

View file

@ -13,7 +13,7 @@
import GarbageBin from '$lib/components/icons/GarbageBin.svelte'; import GarbageBin from '$lib/components/icons/GarbageBin.svelte';
import Pencil from '$lib/components/icons/Pencil.svelte'; import Pencil from '$lib/components/icons/Pencil.svelte';
import Tooltip from '$lib/components/common/Tooltip.svelte'; import Tooltip from '$lib/components/common/Tooltip.svelte';
import Download from '$lib/components/icons/ArrowDownTray.svelte'; import Download from '$lib/components/icons/Download.svelte';
let show = false; let show = false;
</script> </script>
@ -25,7 +25,7 @@
<div slot="content"> <div slot="content">
<DropdownMenu.Content <DropdownMenu.Content
class="w-full max-w-[150px] rounded-xl px-1 py-1.5 z-50 bg-white dark:bg-gray-850 dark:text-white shadow-lg" class="w-full max-w-[150px] rounded-xl p-1 z-50 bg-white dark:bg-gray-850 dark:text-white shadow-lg"
sideOffset={-2} sideOffset={-2}
side="bottom" side="bottom"
align="start" align="start"

View file

@ -13,7 +13,7 @@
import { deleteFeedbackById, exportAllFeedbacks, getAllFeedbacks } from '$lib/apis/evaluations'; import { deleteFeedbackById, exportAllFeedbacks, getAllFeedbacks } from '$lib/apis/evaluations';
import Tooltip from '$lib/components/common/Tooltip.svelte'; import Tooltip from '$lib/components/common/Tooltip.svelte';
import ArrowDownTray from '$lib/components/icons/ArrowDownTray.svelte'; import Download from '$lib/components/icons/Download.svelte';
import Badge from '$lib/components/common/Badge.svelte'; import Badge from '$lib/components/common/Badge.svelte';
import CloudArrowUp from '$lib/components/icons/CloudArrowUp.svelte'; import CloudArrowUp from '$lib/components/icons/CloudArrowUp.svelte';
import Pagination from '$lib/components/common/Pagination.svelte'; import Pagination from '$lib/components/common/Pagination.svelte';
@ -169,7 +169,7 @@
<FeedbackModal bind:show={showFeedbackModal} {selectedFeedback} onClose={closeFeedbackModal} /> <FeedbackModal bind:show={showFeedbackModal} {selectedFeedback} onClose={closeFeedbackModal} />
<div class="mt-0.5 mb-2 gap-1 flex flex-row justify-between"> <div class="mt-0.5 mb-1 gap-1 flex flex-row justify-between">
<div class="flex md:self-center text-lg font-medium px-0.5"> <div class="flex md:self-center text-lg font-medium px-0.5">
{$i18n.t('Feedback History')} {$i18n.t('Feedback History')}
@ -187,31 +187,25 @@
exportHandler(); exportHandler();
}} }}
> >
<ArrowDownTray className="size-3" /> <Download className="size-3" />
</button> </button>
</Tooltip> </Tooltip>
</div> </div>
{/if} {/if}
</div> </div>
<div <div class="scrollbar-hidden relative whitespace-nowrap overflow-x-auto max-w-full">
class="scrollbar-hidden relative whitespace-nowrap overflow-x-auto max-w-full rounded-sm pt-0.5"
>
{#if (feedbacks ?? []).length === 0} {#if (feedbacks ?? []).length === 0}
<div class="text-center text-xs text-gray-500 dark:text-gray-400 py-1"> <div class="text-center text-xs text-gray-500 dark:text-gray-400 py-1">
{$i18n.t('No feedbacks found')} {$i18n.t('No feedbacks found')}
</div> </div>
{:else} {:else}
<table <table class="w-full text-sm text-left text-gray-500 dark:text-gray-400 table-auto max-w-full">
class="w-full text-sm text-left text-gray-500 dark:text-gray-400 table-auto max-w-full rounded-sm" <thead class="text-xs text-gray-800 uppercase bg-transparent dark:text-gray-200">
> <tr class=" border-b-[1.5px] border-gray-50 dark:border-gray-850">
<thead
class="text-xs text-gray-700 uppercase bg-gray-50 dark:bg-gray-850 dark:text-gray-400 -translate-y-0.5"
>
<tr class="">
<th <th
scope="col" scope="col"
class="px-3 py-1.5 cursor-pointer select-none w-3" class="px-2.5 py-2 cursor-pointer select-none w-3"
on:click={() => setSortKey('user')} on:click={() => setSortKey('user')}
> >
<div class="flex gap-1.5 items-center justify-end"> <div class="flex gap-1.5 items-center justify-end">
@ -234,7 +228,7 @@
<th <th
scope="col" scope="col"
class="px-3 pr-1.5 cursor-pointer select-none" class="px-2.5 py-2 cursor-pointer select-none"
on:click={() => setSortKey('model_id')} on:click={() => setSortKey('model_id')}
> >
<div class="flex gap-1.5 items-center"> <div class="flex gap-1.5 items-center">
@ -257,7 +251,7 @@
<th <th
scope="col" scope="col"
class="px-3 py-1.5 text-right cursor-pointer select-none w-fit" class="px-2.5 py-2 text-right cursor-pointer select-none w-fit"
on:click={() => setSortKey('rating')} on:click={() => setSortKey('rating')}
> >
<div class="flex gap-1.5 items-center justify-end"> <div class="flex gap-1.5 items-center justify-end">
@ -280,7 +274,7 @@
<th <th
scope="col" scope="col"
class="px-3 py-1.5 text-right cursor-pointer select-none w-0" class="px-2.5 py-2 text-right cursor-pointer select-none w-0"
on:click={() => setSortKey('updated_at')} on:click={() => setSortKey('updated_at')}
> >
<div class="flex gap-1.5 items-center justify-end"> <div class="flex gap-1.5 items-center justify-end">
@ -301,7 +295,7 @@
</div> </div>
</th> </th>
<th scope="col" class="px-3 py-1.5 text-right cursor-pointer select-none w-0"> </th> <th scope="col" class="px-2.5 py-2 text-right cursor-pointer select-none w-0"> </th>
</tr> </tr>
</thead> </thead>
<tbody class=""> <tbody class="">

View file

@ -1,9 +1,4 @@
<script lang="ts"> <script lang="ts">
import * as ort from 'onnxruntime-web';
import { env, AutoModel, AutoTokenizer } from '@huggingface/transformers';
env.backends.onnx.wasm.wasmPaths = '/wasm/';
import { onMount, getContext } from 'svelte'; import { onMount, getContext } from 'svelte';
import { models } from '$lib/stores'; import { models } from '$lib/stores';
@ -237,6 +232,11 @@
////////////////////// //////////////////////
const loadEmbeddingModel = async () => { const loadEmbeddingModel = async () => {
const { env, AutoModel, AutoTokenizer } = await import('@huggingface/transformers');
if (env.backends.onnx.wasm) {
env.backends.onnx.wasm.wasmPaths = '/wasm/';
}
// Check if the tokenizer and model are already loaded and stored in the window object // Check if the tokenizer and model are already loaded and stored in the window object
if (!window.tokenizer) { if (!window.tokenizer) {
window.tokenizer = await AutoTokenizer.from_pretrained(EMBEDDING_MODEL); window.tokenizer = await AutoTokenizer.from_pretrained(EMBEDDING_MODEL);
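This hunk removes the module-level `onnxruntime-web` and `@huggingface/transformers` imports and instead imports the library inside `loadEmbeddingModel`, so the heavy ONNX/WASM bundle is only downloaded when the evaluation screen actually needs the embedding model. A hedged sketch of that lazy-loading-and-caching pattern (the model id below is a placeholder, not necessarily the one the app uses):

```ts
// Illustrative sketch of lazy-loading the transformers.js dependency on first use.
const EMBEDDING_MODEL = 'Xenova/all-MiniLM-L6-v2'; // placeholder model id

let tokenizer: any;
let model: any;

export const loadEmbeddingModel = async () => {
	// Import the heavy dependency only when it is first needed, not at module load time.
	const { env, AutoModel, AutoTokenizer } = await import('@huggingface/transformers');

	// Point the ONNX runtime at locally hosted WASM binaries when that backend is available.
	if (env.backends.onnx.wasm) {
		env.backends.onnx.wasm.wasmPaths = '/wasm/';
	}

	// Cache the tokenizer and model so repeated calls do not re-download them.
	tokenizer ??= await AutoTokenizer.from_pretrained(EMBEDDING_MODEL);
	model ??= await AutoModel.from_pretrained(EMBEDDING_MODEL);

	return { tokenizer, model };
};
```

Caching on first use means the download cost is paid at most once per page load, while pages that never open the leaderboard never fetch the bundle at all.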
@ -337,7 +337,7 @@
/> />
<div <div
class="pt-0.5 pb-2 gap-1 flex flex-col md:flex-row justify-between sticky top-0 z-10 bg-white dark:bg-gray-900" class="pt-0.5 pb-1 gap-1 flex flex-col md:flex-row justify-between sticky top-0 z-10 bg-white dark:bg-gray-900"
> >
<div class="flex md:self-center text-lg font-medium px-0.5 shrink-0 items-center"> <div class="flex md:self-center text-lg font-medium px-0.5 shrink-0 items-center">
<div class=" gap-1"> <div class=" gap-1">
@ -370,9 +370,7 @@
</div> </div>
</div> </div>
<div <div class="scrollbar-hidden relative whitespace-nowrap overflow-x-auto max-w-full rounded-sm">
class="scrollbar-hidden relative whitespace-nowrap overflow-x-auto max-w-full rounded-sm pt-0.5"
>
{#if loadingLeaderboard} {#if loadingLeaderboard}
<div class=" absolute top-0 bottom-0 left-0 right-0 flex"> <div class=" absolute top-0 bottom-0 left-0 right-0 flex">
<div class="m-auto"> <div class="m-auto">
@ -386,17 +384,15 @@
</div> </div>
{:else} {:else}
<table <table
class="w-full text-sm text-left text-gray-500 dark:text-gray-400 table-auto max-w-full rounded {loadingLeaderboard class="w-full text-sm text-left text-gray-500 dark:text-gray-400 table-auto max-w-full {loadingLeaderboard
? 'opacity-20' ? 'opacity-20'
: ''}" : ''}"
> >
<thead <thead class="text-xs text-gray-800 uppercase bg-transparent dark:text-gray-200">
class="text-xs text-gray-700 uppercase bg-gray-50 dark:bg-gray-850 dark:text-gray-400 -translate-y-0.5" <tr class=" border-b-[1.5px] border-gray-50 dark:border-gray-850">
>
<tr class="">
<th <th
scope="col" scope="col"
class="px-3 py-1.5 cursor-pointer select-none w-3" class="px-2.5 py-2 cursor-pointer select-none w-3"
on:click={() => setSortKey('rating')} on:click={() => setSortKey('rating')}
> >
<div class="flex gap-1.5 items-center"> <div class="flex gap-1.5 items-center">
@ -418,7 +414,7 @@
</th> </th>
<th <th
scope="col" scope="col"
class="px-3 py-1.5 cursor-pointer select-none" class="px-2.5 py-2 cursor-pointer select-none"
on:click={() => setSortKey('name')} on:click={() => setSortKey('name')}
> >
<div class="flex gap-1.5 items-center"> <div class="flex gap-1.5 items-center">
@ -440,7 +436,7 @@
</th> </th>
<th <th
scope="col" scope="col"
class="px-3 py-1.5 text-right cursor-pointer select-none w-fit" class="px-2.5 py-2 text-right cursor-pointer select-none w-fit"
on:click={() => setSortKey('rating')} on:click={() => setSortKey('rating')}
> >
<div class="flex gap-1.5 items-center justify-end"> <div class="flex gap-1.5 items-center justify-end">
@ -462,7 +458,7 @@
</th> </th>
<th <th
scope="col" scope="col"
class="px-3 py-1.5 text-right cursor-pointer select-none w-5" class="px-2.5 py-2 text-right cursor-pointer select-none w-5"
on:click={() => setSortKey('won')} on:click={() => setSortKey('won')}
> >
<div class="flex gap-1.5 items-center justify-end"> <div class="flex gap-1.5 items-center justify-end">
@ -484,7 +480,7 @@
</th> </th>
<th <th
scope="col" scope="col"
class="px-3 py-1.5 text-right cursor-pointer select-none w-5" class="px-2.5 py-2 text-right cursor-pointer select-none w-5"
on:click={() => setSortKey('lost')} on:click={() => setSortKey('lost')}
> >
<div class="flex gap-1.5 items-center justify-end"> <div class="flex gap-1.5 items-center justify-end">

View file

@ -18,7 +18,7 @@
toggleGlobalById toggleGlobalById
} from '$lib/apis/functions'; } from '$lib/apis/functions';
import ArrowDownTray from '../icons/ArrowDownTray.svelte'; import Download from '../icons/Download.svelte';
import Tooltip from '../common/Tooltip.svelte'; import Tooltip from '../common/Tooltip.svelte';
import ConfirmDialog from '../common/ConfirmDialog.svelte'; import ConfirmDialog from '../common/ConfirmDialog.svelte';
import { getModels } from '$lib/apis'; import { getModels } from '$lib/apis';
@ -222,7 +222,7 @@
}} }}
/> />
<div class="flex flex-col mt-1.5 mb-0.5"> <div class="flex flex-col mt-1.5 mb-0.5 px-[16px]">
<div class="flex justify-between items-center mb-1"> <div class="flex justify-between items-center mb-1">
<div class="flex md:self-center text-xl items-center font-medium px-0.5"> <div class="flex md:self-center text-xl items-center font-medium px-0.5">
{$i18n.t('Functions')} {$i18n.t('Functions')}
@ -317,7 +317,7 @@
</div> </div>
</div> </div>
<div class="mb-5"> <div class="mb-5 px-[16px]">
{#each filteredItems as func (func.id)} {#each filteredItems as func (func.id)}
<div <div
class=" flex space-x-4 cursor-pointer w-full px-2 py-2 dark:hover:bg-white/5 hover:bg-black/5 rounded-xl" class=" flex space-x-4 cursor-pointer w-full px-2 py-2 dark:hover:bg-white/5 hover:bg-black/5 rounded-xl"
@ -330,14 +330,14 @@
<div class=" flex-1 self-center pl-1"> <div class=" flex-1 self-center pl-1">
<div class=" font-semibold flex items-center gap-1.5"> <div class=" font-semibold flex items-center gap-1.5">
<div <div
class=" text-xs font-bold px-1 rounded-sm uppercase line-clamp-1 bg-gray-500/20 text-gray-700 dark:text-gray-200" class=" text-xs font-semibold px-1 rounded-sm uppercase line-clamp-1 bg-gray-500/20 text-gray-700 dark:text-gray-200"
> >
{func.type} {func.type}
</div> </div>
{#if func?.meta?.manifest?.version} {#if func?.meta?.manifest?.version}
<div <div
class="text-xs font-bold px-1 rounded-sm line-clamp-1 bg-gray-500/20 text-gray-700 dark:text-gray-200" class="text-xs font-semibold px-1 rounded-sm line-clamp-1 bg-gray-500/20 text-gray-700 dark:text-gray-200"
> >
v{func?.meta?.manifest?.version ?? ''} v{func?.meta?.manifest?.version ?? ''}
</div> </div>
@ -482,7 +482,7 @@
)} )}
</div> --> </div> -->
<div class=" flex justify-end w-full mb-2"> <div class=" flex justify-end w-full mb-2 px-[16px]">
<div class="flex space-x-2"> <div class="flex space-x-2">
<input <input
id="documents-import-input" id="documents-import-input"
@ -562,7 +562,7 @@
</div> </div>
{#if $config?.features.enable_community_sharing} {#if $config?.features.enable_community_sharing}
<div class=" my-16"> <div class=" my-16 px-[16px]">
<div class=" text-xl font-medium mb-1 line-clamp-1"> <div class=" text-xl font-medium mb-1 line-clamp-1">
{$i18n.t('Made by Open WebUI Community')} {$i18n.t('Made by Open WebUI Community')}
</div> </div>
@ -595,7 +595,7 @@
deleteHandler(selectedFunction); deleteHandler(selectedFunction);
}} }}
> >
<div class=" text-sm text-gray-500"> <div class=" text-sm text-gray-500 truncate">
{$i18n.t('This will delete')} <span class=" font-semibold">{selectedFunction.name}</span>. {$i18n.t('This will delete')} <span class=" font-semibold">{selectedFunction.name}</span>.
</div> </div>
</DeleteConfirmDialog> </DeleteConfirmDialog>

View file

@ -8,7 +8,7 @@
import Tooltip from '$lib/components/common/Tooltip.svelte'; import Tooltip from '$lib/components/common/Tooltip.svelte';
import Share from '$lib/components/icons/Share.svelte'; import Share from '$lib/components/icons/Share.svelte';
import DocumentDuplicate from '$lib/components/icons/DocumentDuplicate.svelte'; import DocumentDuplicate from '$lib/components/icons/DocumentDuplicate.svelte';
import ArrowDownTray from '$lib/components/icons/ArrowDownTray.svelte'; import Download from '$lib/components/icons/Download.svelte';
import Switch from '$lib/components/common/Switch.svelte'; import Switch from '$lib/components/common/Switch.svelte';
import GlobeAlt from '$lib/components/icons/GlobeAlt.svelte'; import GlobeAlt from '$lib/components/icons/GlobeAlt.svelte';
import Github from '$lib/components/icons/Github.svelte'; import Github from '$lib/components/icons/Github.svelte';
@ -41,7 +41,7 @@
<div slot="content"> <div slot="content">
<DropdownMenu.Content <DropdownMenu.Content
class="w-full max-w-[190px] text-sm rounded-xl px-1 py-1.5 z-50 bg-white dark:bg-gray-850 dark:text-white shadow-lg font-primary" class="w-full max-w-[190px] text-sm rounded-xl p-1 z-50 bg-white dark:bg-gray-850 dark:text-white shadow-lg font-primary"
sideOffset={-2} sideOffset={-2}
side="bottom" side="bottom"
align="start" align="start"

View file

@ -8,7 +8,7 @@
import Tooltip from '$lib/components/common/Tooltip.svelte'; import Tooltip from '$lib/components/common/Tooltip.svelte';
import Share from '$lib/components/icons/Share.svelte'; import Share from '$lib/components/icons/Share.svelte';
import DocumentDuplicate from '$lib/components/icons/DocumentDuplicate.svelte'; import DocumentDuplicate from '$lib/components/icons/DocumentDuplicate.svelte';
import ArrowDownTray from '$lib/components/icons/ArrowDownTray.svelte'; import Download from '$lib/components/icons/Download.svelte';
import Switch from '$lib/components/common/Switch.svelte'; import Switch from '$lib/components/common/Switch.svelte';
import GlobeAlt from '$lib/components/icons/GlobeAlt.svelte'; import GlobeAlt from '$lib/components/icons/GlobeAlt.svelte';
@ -42,7 +42,7 @@
<div slot="content"> <div slot="content">
<DropdownMenu.Content <DropdownMenu.Content
class="w-full max-w-[180px] rounded-xl px-1 py-1.5 border border-gray-100 dark:border-gray-800 z-50 bg-white dark:bg-gray-850 dark:text-white shadow-sm" class="w-full max-w-[180px] rounded-xl p-1 border border-gray-100 dark:border-gray-800 z-50 bg-white dark:bg-gray-850 dark:text-white shadow-sm"
sideOffset={-2} sideOffset={-2}
side="bottom" side="bottom"
align="start" align="start"
@ -63,7 +63,7 @@
</div> </div>
</div> </div>
<hr class="border-gray-100 dark:border-gray-850 my-1" /> <hr class="border-gray-50 dark:border-gray-850 my-1" />
{/if} {/if}
<DropdownMenu.Item <DropdownMenu.Item
@ -117,12 +117,12 @@
exportHandler(); exportHandler();
}} }}
> >
<ArrowDownTray /> <Download />
<div class="flex items-center">{$i18n.t('Export')}</div> <div class="flex items-center">{$i18n.t('Export')}</div>
</DropdownMenu.Item> </DropdownMenu.Item>
<hr class="border-gray-100 dark:border-gray-850 my-1" /> <hr class="border-gray-50 dark:border-gray-850 my-1" />
<DropdownMenu.Item <DropdownMenu.Item
class="flex gap-2 items-center px-3 py-1.5 text-sm font-medium cursor-pointer hover:bg-gray-50 dark:hover:bg-gray-800 rounded-md" class="flex gap-2 items-center px-3 py-1.5 text-sm font-medium cursor-pointer hover:bg-gray-50 dark:hover:bg-gray-800 rounded-md"

View file

@ -83,7 +83,7 @@
<div class="flex flex-col lg:flex-row w-full h-full pb-2 lg:space-x-4"> <div class="flex flex-col lg:flex-row w-full h-full pb-2 lg:space-x-4">
<div <div
id="admin-settings-tabs-container" id="admin-settings-tabs-container"
class="tabs flex flex-row overflow-x-auto gap-2.5 max-w-full lg:gap-1 lg:flex-col lg:flex-none lg:w-40 dark:text-gray-200 text-sm font-medium text-left scrollbar-none" class="tabs mx-[16px] lg:mx-0 lg:px-[16px] flex flex-row overflow-x-auto gap-2.5 max-w-full lg:gap-1 lg:flex-col lg:flex-none lg:w-50 dark:text-gray-200 text-sm font-medium text-left scrollbar-none"
> >
<button <button
id="general" id="general"
@ -433,7 +433,9 @@
</button> </button>
</div> </div>
<div class="flex-1 mt-3 lg:mt-0 overflow-y-scroll pr-1 scrollbar-hidden"> <div
class="flex-1 mt-3 lg:mt-0 px-[16px] lg:pr-[16px] lg:pl-0 overflow-y-scroll scrollbar-hidden"
>
{#if selectedTab === 'general'} {#if selectedTab === 'general'}
<General <General
saveHandler={async () => { saveHandler={async () => {

View file

@ -19,6 +19,7 @@
import type { Writable } from 'svelte/store'; import type { Writable } from 'svelte/store';
import type { i18n as i18nType } from 'i18next'; import type { i18n as i18nType } from 'i18next';
import Textarea from '$lib/components/common/Textarea.svelte';
const i18n = getContext<Writable<i18nType>>('i18n'); const i18n = getContext<Writable<i18nType>>('i18n');
@ -31,6 +32,7 @@
let TTS_ENGINE = ''; let TTS_ENGINE = '';
let TTS_MODEL = ''; let TTS_MODEL = '';
let TTS_VOICE = ''; let TTS_VOICE = '';
let TTS_OPENAI_PARAMS = '';
let TTS_SPLIT_ON: TTS_RESPONSE_SPLIT = TTS_RESPONSE_SPLIT.PUNCTUATION; let TTS_SPLIT_ON: TTS_RESPONSE_SPLIT = TTS_RESPONSE_SPLIT.PUNCTUATION;
let TTS_AZURE_SPEECH_REGION = ''; let TTS_AZURE_SPEECH_REGION = '';
let TTS_AZURE_SPEECH_BASE_URL = ''; let TTS_AZURE_SPEECH_BASE_URL = '';
@ -98,18 +100,28 @@
}; };
const updateConfigHandler = async () => { const updateConfigHandler = async () => {
let openaiParams = {};
try {
openaiParams = TTS_OPENAI_PARAMS ? JSON.parse(TTS_OPENAI_PARAMS) : {};
TTS_OPENAI_PARAMS = JSON.stringify(openaiParams, null, 2);
} catch (e) {
toast.error($i18n.t('Invalid JSON format for Parameters'));
return;
}
const res = await updateAudioConfig(localStorage.token, { const res = await updateAudioConfig(localStorage.token, {
tts: { tts: {
OPENAI_API_BASE_URL: TTS_OPENAI_API_BASE_URL, OPENAI_API_BASE_URL: TTS_OPENAI_API_BASE_URL,
OPENAI_API_KEY: TTS_OPENAI_API_KEY, OPENAI_API_KEY: TTS_OPENAI_API_KEY,
OPENAI_PARAMS: openaiParams,
API_KEY: TTS_API_KEY, API_KEY: TTS_API_KEY,
ENGINE: TTS_ENGINE, ENGINE: TTS_ENGINE,
MODEL: TTS_MODEL, MODEL: TTS_MODEL,
VOICE: TTS_VOICE, VOICE: TTS_VOICE,
SPLIT_ON: TTS_SPLIT_ON,
AZURE_SPEECH_REGION: TTS_AZURE_SPEECH_REGION, AZURE_SPEECH_REGION: TTS_AZURE_SPEECH_REGION,
AZURE_SPEECH_BASE_URL: TTS_AZURE_SPEECH_BASE_URL, AZURE_SPEECH_BASE_URL: TTS_AZURE_SPEECH_BASE_URL,
AZURE_SPEECH_OUTPUT_FORMAT: TTS_AZURE_SPEECH_OUTPUT_FORMAT AZURE_SPEECH_OUTPUT_FORMAT: TTS_AZURE_SPEECH_OUTPUT_FORMAT,
SPLIT_ON: TTS_SPLIT_ON
}, },
stt: { stt: {
OPENAI_API_BASE_URL: STT_OPENAI_API_BASE_URL, OPENAI_API_BASE_URL: STT_OPENAI_API_BASE_URL,
@ -146,6 +158,7 @@
console.log(res); console.log(res);
TTS_OPENAI_API_BASE_URL = res.tts.OPENAI_API_BASE_URL; TTS_OPENAI_API_BASE_URL = res.tts.OPENAI_API_BASE_URL;
TTS_OPENAI_API_KEY = res.tts.OPENAI_API_KEY; TTS_OPENAI_API_KEY = res.tts.OPENAI_API_KEY;
TTS_OPENAI_PARAMS = JSON.stringify(res?.tts?.OPENAI_PARAMS ?? '', null, 2);
TTS_API_KEY = res.tts.API_KEY; TTS_API_KEY = res.tts.API_KEY;
TTS_ENGINE = res.tts.ENGINE; TTS_ENGINE = res.tts.ENGINE;
@ -612,6 +625,22 @@
</div> </div>
</div> </div>
</div> </div>
<div class="mt-2 mb-1 text-xs text-gray-400 dark:text-gray-500">
<div class="w-full">
<div class=" mb-1.5 text-xs font-medium">{$i18n.t('Additional Parameters')}</div>
<div class="flex w-full">
<div class="flex-1">
<Textarea
className="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden"
bind:value={TTS_OPENAI_PARAMS}
placeholder={$i18n.t('Enter additional parameters in JSON format')}
minSize={100}
/>
</div>
</div>
</div>
</div>
{:else if TTS_ENGINE === 'elevenlabs'} {:else if TTS_ENGINE === 'elevenlabs'}
<div class=" flex gap-2"> <div class=" flex gap-2">
<div class="w-full"> <div class="w-full">

View file

@ -261,7 +261,7 @@
<div class="flex flex-col gap-1.5 mt-1.5"> <div class="flex flex-col gap-1.5 mt-1.5">
{#each OPENAI_API_BASE_URLS as url, idx} {#each OPENAI_API_BASE_URLS as url, idx}
<OpenAIConnection <OpenAIConnection
{url} bind:url={OPENAI_API_BASE_URLS[idx]}
bind:key={OPENAI_API_KEYS[idx]} bind:key={OPENAI_API_KEYS[idx]}
bind:config={OPENAI_API_CONFIGS[idx]} bind:config={OPENAI_API_CONFIGS[idx]}
pipeline={pipelineUrls[url] ? true : false} pipeline={pipelineUrls[url] ? true : false}
@ -326,7 +326,7 @@
<div class="flex-1 flex flex-col gap-1.5 mt-1.5"> <div class="flex-1 flex flex-col gap-1.5 mt-1.5">
{#each OLLAMA_BASE_URLS as url, idx} {#each OLLAMA_BASE_URLS as url, idx}
<OllamaConnection <OllamaConnection
{url} bind:url={OLLAMA_BASE_URLS[idx]}
bind:config={OLLAMA_API_CONFIGS[idx]} bind:config={OLLAMA_API_CONFIGS[idx]}
{idx} {idx}
onSubmit={() => { onSubmit={() => {
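Both connection lists above switch from passing `{url}` down as a one-way prop to `bind:url={...[idx]}`, so edits made inside the child connection component are written back into the parent's URL array instead of being silently discarded. A minimal sketch of the underlying mechanism, using a plain `<input>` in place of the real connection component:

```svelte
<!-- Minimal sketch of two-way binding to an array element inside an {#each} block. -->
<script lang="ts">
	let OPENAI_API_BASE_URLS: string[] = ['https://api.openai.com/v1'];
</script>

{#each OPENAI_API_BASE_URLS as url, idx}
	<!-- `value={url}` would only read from the array; edits in the field would be lost. -->
	<!-- `bind:value={OPENAI_API_BASE_URLS[idx]}` writes changes back into the parent state. -->
	<input bind:value={OPENAI_API_BASE_URLS[idx]} placeholder="API base URL" />
{/each}
```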

View file

@ -10,7 +10,7 @@
import Cog6 from '$lib/components/icons/Cog6.svelte'; import Cog6 from '$lib/components/icons/Cog6.svelte';
import Wrench from '$lib/components/icons/Wrench.svelte'; import Wrench from '$lib/components/icons/Wrench.svelte';
import ManageOllamaModal from './ManageOllamaModal.svelte'; import ManageOllamaModal from './ManageOllamaModal.svelte';
import ArrowDownTray from '$lib/components/icons/ArrowDownTray.svelte'; import Download from '$lib/components/icons/Download.svelte';
export let onDelete = () => {}; export let onDelete = () => {};
export let onSubmit = () => {}; export let onSubmit = () => {};
@ -84,7 +84,7 @@
}} }}
type="button" type="button"
> >
<ArrowDownTray /> <Download />
</button> </button>
</Tooltip> </Tooltip>

View file

@ -143,7 +143,7 @@
</div> </div>
</button> </button>
<hr class="border-gray-100 dark:border-gray-850 my-1" /> <hr class="border-gray-50 dark:border-gray-850 my-1" />
{#if $config?.features.enable_admin_export ?? true} {#if $config?.features.enable_admin_export ?? true}
<div class=" flex w-full justify-between"> <div class=" flex w-full justify-between">

View file

@ -714,6 +714,21 @@
</div> </div>
{/if} {/if}
{/if} {/if}
<div class="flex justify-between w-full mt-2">
<div class="self-center text-xs font-medium">
<Tooltip content={''} placement="top-start">
{$i18n.t('Parameters')}
</Tooltip>
</div>
<div class="">
<Textarea
bind:value={RAGConfig.DOCLING_PARAMETERS}
placeholder={$i18n.t('Enter additional parameters in JSON format')}
minSize={100}
/>
</div>
</div>
{:else if RAGConfig.CONTENT_EXTRACTION_ENGINE === 'document_intelligence'} {:else if RAGConfig.CONTENT_EXTRACTION_ENGINE === 'document_intelligence'}
<div class="my-0.5 flex gap-2 pr-2"> <div class="my-0.5 flex gap-2 pr-2">
<input <input
@ -1143,7 +1158,7 @@
<div class=" mb-2.5 py-0.5 w-full justify-between"> <div class=" mb-2.5 py-0.5 w-full justify-between">
<Tooltip <Tooltip
content={$i18n.t( content={$i18n.t(
'The Weight of BM25 Hybrid Search. 0 more lexical, 1 more semantic. Default 0.5' 'The Weight of BM25 Hybrid Search. 0 more semantic, 1 more lexical. Default 0.5'
)} )}
placement="top-start" placement="top-start"
className="inline-tooltip" className="inline-tooltip"

View file

@ -293,7 +293,7 @@
<hr class=" border-gray-100 dark:border-gray-700/10 my-2.5 w-full" /> <hr class=" border-gray-100 dark:border-gray-700/10 my-2.5 w-full" />
<div class="my-2 -mx-2"> <div class="my-2 -mx-2">
<div class="px-3 py-2 bg-gray-50 dark:bg-gray-950 rounded-lg"> <div class="px-4 py-3 bg-gray-50 dark:bg-gray-950 rounded-3xl">
<AccessControl bind:accessControl /> <AccessControl bind:accessControl />
</div> </div>
</div> </div>

View file

@ -34,7 +34,7 @@
<div class="w-full flex flex-col"> <div class="w-full flex flex-col">
<div class="flex items-center gap-1"> <div class="flex items-center gap-1">
<div class="shrink-0 line-clamp-1"> <div class=" line-clamp-1">
{model.name} {model.name}
</div> </div>
</div> </div>

View file

@ -12,7 +12,8 @@
deleteAllModels, deleteAllModels,
getBaseModels, getBaseModels,
toggleModelById, toggleModelById,
updateModelById updateModelById,
importModels
} from '$lib/apis/models'; } from '$lib/apis/models';
import { copyToClipboard } from '$lib/utils'; import { copyToClipboard } from '$lib/utils';
import { page } from '$app/stores'; import { page } from '$app/stores';
@ -30,7 +31,7 @@
import Cog6 from '$lib/components/icons/Cog6.svelte'; import Cog6 from '$lib/components/icons/Cog6.svelte';
import ConfigureModelsModal from './Models/ConfigureModelsModal.svelte'; import ConfigureModelsModal from './Models/ConfigureModelsModal.svelte';
import Wrench from '$lib/components/icons/Wrench.svelte'; import Wrench from '$lib/components/icons/Wrench.svelte';
import ArrowDownTray from '$lib/components/icons/ArrowDownTray.svelte'; import Download from '$lib/components/icons/Download.svelte';
import ManageModelsModal from './Models/ManageModelsModal.svelte'; import ManageModelsModal from './Models/ManageModelsModal.svelte';
import ModelMenu from '$lib/components/admin/Settings/Models/ModelMenu.svelte'; import ModelMenu from '$lib/components/admin/Settings/Models/ModelMenu.svelte';
import EllipsisHorizontal from '$lib/components/icons/EllipsisHorizontal.svelte'; import EllipsisHorizontal from '$lib/components/icons/EllipsisHorizontal.svelte';
@ -40,6 +41,7 @@
let shiftKey = false; let shiftKey = false;
let modelsImportInProgress = false;
let importFiles; let importFiles;
let modelsImportInputElement: HTMLInputElement; let modelsImportInputElement: HTMLInputElement;
@ -265,7 +267,7 @@
showManageModal = true; showManageModal = true;
}} }}
> >
<ArrowDownTray /> <Download />
</button> </button>
</Tooltip> </Tooltip>
@ -464,47 +466,41 @@
accept=".json" accept=".json"
hidden hidden
on:change={() => { on:change={() => {
console.log(importFiles); if (importFiles.length > 0) {
const reader = new FileReader();
let reader = new FileReader();
reader.onload = async (event) => { reader.onload = async (event) => {
let savedModels = JSON.parse(event.target.result); try {
console.log(savedModels); const models = JSON.parse(String(event.target.result));
modelsImportInProgress = true;
const res = await importModels(localStorage.token, models);
modelsImportInProgress = false;
for (const model of savedModels) { if (res) {
if (Object.keys(model).includes('base_model_id')) { toast.success($i18n.t('Models imported successfully'));
if (model.base_model_id === null) { await init();
upsertModelHandler(model);
}
} else { } else {
if (model?.info ?? false) { toast.error($i18n.t('Failed to import models'));
if (model.info.base_model_id === null) {
upsertModelHandler(model.info);
} }
} catch (e) {
toast.error($i18n.t('Invalid JSON file'));
console.error(e);
} }
}
}
await _models.set(
await getModels(
localStorage.token,
$config?.features?.enable_direct_connections &&
($settings?.directConnections ?? null)
)
);
init();
}; };
reader.readAsText(importFiles[0]); reader.readAsText(importFiles[0]);
}
}} }}
/> />
<button <button
class="flex text-xs items-center space-x-1 px-3 py-1.5 rounded-xl bg-gray-50 hover:bg-gray-100 dark:bg-gray-800 dark:hover:bg-gray-700 dark:text-gray-200 transition" class="flex text-xs items-center space-x-1 px-3 py-1.5 rounded-xl bg-gray-50 hover:bg-gray-100 dark:bg-gray-800 dark:hover:bg-gray-700 dark:text-gray-200 transition"
disabled={modelsImportInProgress}
on:click={() => { on:click={() => {
modelsImportInputElement.click(); modelsImportInputElement.click();
}} }}
> >
{#if modelsImportInProgress}
<Spinner className="size-3" />
{/if}
<div class=" self-center mr-2 font-medium line-clamp-1"> <div class=" self-center mr-2 font-medium line-clamp-1">
{$i18n.t('Import Presets')} {$i18n.t('Import Presets')}
</div> </div>
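The preset-import handler above is rewritten to parse the selected JSON file inside a try/catch, send the parsed models to a single `importModels` API call, report the outcome via toasts, and disable the import button with a spinner while the request is in flight, replacing the old per-model upsert loop. A hedged sketch of that flow, with the API call and toast helpers standing in for the app's real modules:

```ts
// Illustrative sketch of the import flow; `importModels`, the toast helpers and
// `setInProgress` are stand-ins for the app's real API client and UI state.
async function importPresetsFromFile(
	file: File,
	importModels: (models: unknown[]) => Promise<boolean>,
	toast: { success: (msg: string) => void; error: (msg: string) => void },
	setInProgress: (busy: boolean) => void
) {
	const reader = new FileReader();
	reader.onload = async (event) => {
		try {
			// Anything that is not valid JSON is rejected before the API is touched.
			const parsed = JSON.parse(String(event.target?.result));
			const models = Array.isArray(parsed) ? parsed : [parsed];
			setInProgress(true);
			const ok = await importModels(models);
			setInProgress(false);
			if (ok) {
				toast.success('Models imported successfully');
			} else {
				toast.error('Failed to import models');
			}
		} catch (e) {
			setInProgress(false);
			toast.error('Invalid JSON file');
			console.error(e);
		}
	};
	reader.readAsText(file);
}
```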

Some files were not shown because too many files have changed in this diff.