Mirror of https://github.com/open-webui/open-webui.git, synced 2025-12-13 04:45:19 +00:00

Merge branch 'open-webui:main' into feature/knowledge-sync-button

Commit 9b8dfa095a: 285 changed files with 19177 additions and 9886 deletions
.github/workflows/docker-build.yaml (vendored): 6 changes
```diff
@@ -141,6 +141,9 @@ jobs:
           platform=${{ matrix.platform }}
           echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV

+      - name: Delete huge unnecessary tools folder
+        run: rm -rf /opt/hostedtoolcache
+
       - name: Checkout repository
         uses: actions/checkout@v5
```
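A note on the `${platform//\//-}` expression the hunk writes into `$GITHUB_ENV`: it is bash pattern substitution, where `${var//pattern/replacement}` replaces every occurrence of the pattern. Here it turns a Docker platform string into a value safe for artifact names. A minimal sketch (the sample value below is illustrative, standing in for what `matrix.platform` would supply):

```shell
# ${var//pattern/replacement} replaces ALL occurrences of pattern (bash syntax);
# the forward slash being matched must itself be escaped as \/ here.
platform="linux/arm/v7"                 # illustrative sample value
echo "PLATFORM_PAIR=${platform//\//-}"  # prints PLATFORM_PAIR=linux-arm-v7
```

With a single slash in the pattern position, `${platform/\//-}` would replace only the first occurrence; the doubled slash is what makes the result fully filesystem-safe.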
```diff
@@ -243,6 +246,9 @@ jobs:
           platform=${{ matrix.platform }}
           echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV

+      - name: Delete huge unnecessary tools folder
+        run: rm -rf /opt/hostedtoolcache
+
       - name: Checkout repository
         uses: actions/checkout@v5
```
CHANGELOG.md: 198 changes
@@ -5,6 +5,204 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.6.38] - 2025-11-24

### Fixed
- 🔍 Hybrid search now works reliably after recent changes.
- 🛠️ Tool server saving now handles errors gracefully, preventing failed saves from impacting the UI.
- 🔐 SSO/OIDC code fixed to improve login reliability and better handle edge cases.

## [0.6.37] - 2025-11-24

### Added
- 🔐 Granular sharing permissions are now available with two-tiered control separating group sharing from public sharing, allowing administrators to independently configure whether users can share workspace items with groups or make them publicly accessible, with separate permission toggles for models, knowledge bases, prompts, tools, and notes, configurable via "USER_PERMISSIONS_WORKSPACE_MODELS_ALLOW_SHARING", "USER_PERMISSIONS_WORKSPACE_MODELS_ALLOW_PUBLIC_SHARING", and corresponding environment variables for other workspace item types, while groups can now be configured to opt-out of sharing via the "Allow Group Sharing" setting. [Commit](https://github.com/open-webui/open-webui/commit/7be750bcbb40da91912a0a66b7ab791effdcc3b6), [Commit](https://github.com/open-webui/open-webui/commit/f69e37a8507d6d57382d6670641b367f3127f90a)
- 🔐 Password policy enforcement is now available with configurable validation rules, allowing administrators to require specific password complexity requirements via "ENABLE_PASSWORD_VALIDATION" and "PASSWORD_VALIDATION_REGEX_PATTERN" environment variables, with default pattern requiring minimum 8 characters including uppercase, lowercase, digit, and special character. [#17794](https://github.com/open-webui/open-webui/pull/17794)
- 🔐 Granular import and export permissions are now available for workspace items, introducing six separate permission toggles for models, prompts, and tools that are disabled by default for enhanced security. [#19242](https://github.com/open-webui/open-webui/pull/19242)
- 👥 Default group assignment is now available for new users, allowing administrators to automatically assign newly registered users to a specified group for streamlined access control to models, prompts, and tools, particularly useful for organizations with group-based model access policies. [#19325](https://github.com/open-webui/open-webui/pull/19325), [#17842](https://github.com/open-webui/open-webui/issues/17842)
- 🔒 Password-based authentication can now be fully disabled via "ENABLE_PASSWORD_AUTH" environment variable, enforcing SSO-only authentication and preventing password login fallback when SSO is configured. [#19113](https://github.com/open-webui/open-webui/pull/19113)
- 🖼️ Large stream chunk handling was implemented to support models that generate images directly in their output responses, with configurable buffer size via "CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE" environment variable, resolving compatibility issues with models like Gemini 2.5 Flash Image. [#18884](https://github.com/open-webui/open-webui/pull/18884), [#17626](https://github.com/open-webui/open-webui/issues/17626)
- 🖼️ Streaming response middleware now handles images in delta updates with automatic base64 conversion, enabling proper display of images from models using the "choices[0].delta.images.image_url" format such as Gemini 2.5 Flash Image Preview on OpenRouter. [#19073](https://github.com/open-webui/open-webui/pull/19073), [#19019](https://github.com/open-webui/open-webui/issues/19019)
- 📈 Model list API performance was optimized by pre-fetching user group memberships and removing profile image URLs from response payloads, significantly reducing both database queries and payload size for instances with large model lists, with profile images now served dynamically via dedicated endpoints. [#19097](https://github.com/open-webui/open-webui/pull/19097), [#18950](https://github.com/open-webui/open-webui/issues/18950)
- ⏩ Batch file processing performance was improved by reducing database queries by 67% while ensuring data consistency between vector and relational databases. [#18953](https://github.com/open-webui/open-webui/pull/18953)
- 🚀 Chat import performance was dramatically improved by replacing individual per-chat API requests with a bulk import endpoint, reducing import time by up to 95% for large chat collections and providing user feedback via toast notifications displaying the number of successfully imported chats. [#17861](https://github.com/open-webui/open-webui/pull/17861)
- ⚡ Socket event broadcasting performance was optimized by implementing user-specific rooms, significantly reducing server overhead particularly for users with multiple concurrent sessions. [#18996](https://github.com/open-webui/open-webui/pull/18996)
- 🗄️ Weaviate is now supported as a vector database option, providing an additional choice for RAG document storage alongside existing ChromaDB, Milvus, Qdrant, and OpenSearch integrations. [#14747](https://github.com/open-webui/open-webui/pull/14747)
- 🗄️ PostgreSQL pgvector now supports HNSW index types and large dimensional embeddings exceeding 2000 dimensions through automatic halfvec type selection, with configurable index methods via "PGVECTOR_INDEX_METHOD", "PGVECTOR_HNSW_M", "PGVECTOR_HNSW_EF_CONSTRUCTION", and "PGVECTOR_IVFFLAT_LISTS" environment variables. [#19158](https://github.com/open-webui/open-webui/pull/19158), [#16890](https://github.com/open-webui/open-webui/issues/16890)
- 🔍 Azure AI Search is now supported as a web search provider, enabling integration with Azure's cognitive search services via "AZURE_AI_SEARCH_API_KEY", "AZURE_AI_SEARCH_ENDPOINT", and "AZURE_AI_SEARCH_INDEX_NAME" configuration. [#19104](https://github.com/open-webui/open-webui/pull/19104)
- ⚡ External embedding generation now processes API requests in parallel instead of sequential batches, reducing document processing time by 10-50x when using OpenAI, Azure OpenAI, or Ollama embedding providers, with large PDFs now processing in seconds instead of minutes. [#19296](https://github.com/open-webui/open-webui/pull/19296)
- 💨 Base64 image conversion is now available for markdown content in chat responses, automatically uploading embedded images exceeding 1KB and replacing them with file URLs to reduce payload size and resource consumption, configurable via "REPLACE_IMAGE_URLS_IN_CHAT_RESPONSE" environment variable. [#19076](https://github.com/open-webui/open-webui/pull/19076)
- 🎨 OpenAI image generation now supports additional API parameters including quality settings for GPT Image 1, configurable via "IMAGES_OPENAI_API_PARAMS" environment variable or through the admin interface, enabling cost-effective image generation with low, medium, or high quality options. [#19228](https://github.com/open-webui/open-webui/issues/19228)
- 🖼️ Image editing can now be independently enabled or disabled via admin settings, allowing administrators to control whether sequential image prompts trigger image editing or new image generation, configurable via "ENABLE_IMAGE_EDIT" environment variable. [#19284](https://github.com/open-webui/open-webui/issues/19284)
- 🔐 SSRF protection was implemented with a configurable URL blocklist that prevents access to cloud metadata endpoints and private networks, with default protections for AWS, Google Cloud, Azure, and Alibaba Cloud metadata services, customizable via "WEB_FETCH_FILTER_LIST" environment variable. [#19201](https://github.com/open-webui/open-webui/pull/19201)
- ⚡ Workspace models page now supports server-side pagination, dramatically improving load times and usability for instances with large numbers of workspace models.
- 🔍 Hybrid search now indexes file metadata including filenames, titles, headings, sources, and snippets alongside document content, enabling keyword queries to surface documents where search terms appear only in metadata, configurable via "ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS" environment variable. [#19095](https://github.com/open-webui/open-webui/pull/19095)
- 📂 Knowledge base upload page now supports folder drag-and-drop with recursive directory handling, enabling batch uploads of entire directory structures instead of requiring individual file selection. [#19320](https://github.com/open-webui/open-webui/pull/19320)
- 🤖 Model cloning is now available in admin settings, allowing administrators to quickly create workspace models based on existing base models through a "Clone" option in the model dropdown menu. [#17937](https://github.com/open-webui/open-webui/pull/17937)
- 🎨 UI scale adjustment is now available in interface settings, allowing users to increase the size of the entire interface from 1.0x to 1.5x for improved accessibility and readability, particularly beneficial for users with visual impairments. [#19186](https://github.com/open-webui/open-webui/pull/19186)
- 📌 Default pinned models can now be configured by administrators for all new users, mirroring the behavior of default models where admin-configured defaults apply only to users who haven't customized their pinned models, configurable via "DEFAULT_PINNED_MODELS" environment variable. [#19273](https://github.com/open-webui/open-webui/pull/19273)
- 🎙️ Text-to-Speech and Speech-to-Text services now receive user information headers when "ENABLE_FORWARD_USER_INFO_HEADERS" is enabled, allowing external TTS and STT providers to implement user-specific personalization, rate limiting, and usage tracking. [#19323](https://github.com/open-webui/open-webui/pull/19323), [#19312](https://github.com/open-webui/open-webui/issues/19312)
- 🎙️ Voice mode now supports custom system prompts via "VOICE_MODE_PROMPT_TEMPLATE" configuration, allowing administrators to control response style and behavior for voice interactions. [#18607](https://github.com/open-webui/open-webui/pull/18607)
- 🔧 WebSocket and Redis configuration options are now available including debug logging controls, custom ping timeout and interval settings, and arbitrary Redis connection options via "WEBSOCKET_SERVER_LOGGING", "WEBSOCKET_SERVER_ENGINEIO_LOGGING", "WEBSOCKET_SERVER_PING_TIMEOUT", "WEBSOCKET_SERVER_PING_INTERVAL", and "WEBSOCKET_REDIS_OPTIONS" environment variables. [#19091](https://github.com/open-webui/open-webui/pull/19091)
- 🔧 MCP OAuth dynamic client registration now automatically detects and uses the appropriate token endpoint authentication method from server-supported options, enabling compatibility with OAuth servers that only support "client_secret_basic" instead of "client_secret_post". [#19193](https://github.com/open-webui/open-webui/issues/19193)
- 🔧 Custom headers can now be configured for remote MCP and OpenAPI tool server connections, enabling integration with services that require additional authentication headers. [#18918](https://github.com/open-webui/open-webui/issues/18918)
- 🔍 Perplexity Search now supports custom API endpoints via "PERPLEXITY_SEARCH_API_URL" configuration and automatically forwards user information headers to enable personalized search experiences. [#19147](https://github.com/open-webui/open-webui/pull/19147)
- 🔍 User information headers can now be optionally forwarded to external web search engines when "ENABLE_FORWARD_USER_INFO_HEADERS" is enabled. [#19043](https://github.com/open-webui/open-webui/pull/19043)
- 📊 Daily active user metric is now available for monitoring, tracking unique users active since midnight UTC via the "webui.users.active.today" Prometheus gauge. [#19236](https://github.com/open-webui/open-webui/pull/19236), [#19234](https://github.com/open-webui/open-webui/issues/19234)
- 📊 Audit log file path is now configurable via "AUDIT_LOGS_FILE_PATH" environment variable, enabling storage in separate volumes or custom locations. [#19173](https://github.com/open-webui/open-webui/pull/19173)
- 🎨 Sidebar collapse states for model lists and group information are now persistent across page refreshes, remembering user preferences through browser-based storage. [#19159](https://github.com/open-webui/open-webui/issues/19159)
- 🎨 Background image display was enhanced with semi-transparent overlays for navbar and sidebar, creating a seamless and visually cohesive design across the entire interface. [#19157](https://github.com/open-webui/open-webui/issues/19157)
- 📋 Tables in chat messages now include a copy button that appears on hover, enabling quick copying of table content alongside the existing CSV export functionality. [#19162](https://github.com/open-webui/open-webui/issues/19162)
- 📝 Notes can now be created directly via the "/notes/new" URL endpoint with optional title and content query parameters, enabling faster note creation through bookmarks and shortcuts. [#19195](https://github.com/open-webui/open-webui/issues/19195)
- 🏷️ Tag suggestions are now context-aware, displaying only relevant tags when creating or editing models versus chat conversations, preventing confusion between model and chat tags. [#19135](https://github.com/open-webui/open-webui/issues/19135)
- ✍️ Prompt autocompletion is now available independently of the rich text input setting, improving accessibility to the feature. [#19150](https://github.com/open-webui/open-webui/issues/19150)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for Simplified Chinese, Traditional Chinese, Portuguese (Brazil), Catalan, Spanish (Spain), Finnish, Irish, Farsi, Swedish, Danish, German, Korean, and Thai were improved and expanded.
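
The password-policy entry above states the required complexity but does not quote the default pattern. A minimal sketch of a regex meeting those stated requirements, checked with `grep -P` (PCRE lookaheads, GNU grep); the pattern is illustrative only, not necessarily the project's actual `PASSWORD_VALIDATION_REGEX_PATTERN`:

```shell
# Illustrative pattern only: >= 8 characters, with at least one uppercase
# letter, one lowercase letter, one digit, and one special character.
pattern='^(?=.*[A-Z])(?=.*[a-z])(?=.*[0-9])(?=.*[^A-Za-z0-9]).{8,}$'
check() { printf '%s' "$1" | grep -Pq "$pattern" && echo valid || echo invalid; }
check 'Str0ng!pass'   # valid
check 'weakpass'      # invalid
```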

### Fixed
- 🤖 Model update functionality now works correctly, resolving a database parameter binding error that prevented saving changes to model configurations via the Save & Update button. [#19335](https://github.com/open-webui/open-webui/issues/19335)
- 🖼️ Multiple input images for image editing and generation are now correctly passed as an array using the "image[]" parameter syntax, enabling proper multi-image reference functionality with models like GPT Image 1. [#19339](https://github.com/open-webui/open-webui/issues/19339)
- 📱 PWA installations on iOS now properly refresh after server container restarts, resolving freezing issues by automatically unregistering service workers when version or deployment changes are detected. [#19316](https://github.com/open-webui/open-webui/pull/19316)
- 🗄️ S3 Vectors collection detection now correctly handles buckets with more than 2000 indexes by using direct index lookup instead of paginated list scanning, improving performance by approximately 8x and enabling RAG queries to work reliably at scale. [#19238](https://github.com/open-webui/open-webui/pull/19238), [#19233](https://github.com/open-webui/open-webui/issues/19233)
- 📈 Feedback retrieval performance was optimized by eliminating N+1 query patterns through database joins, adding server-side pagination and sorting, significantly reducing database load for instances with large feedback datasets. [#17976](https://github.com/open-webui/open-webui/pull/17976)
- 🔍 Chat search now works correctly with PostgreSQL when chat data contains null bytes, with comprehensive sanitization preventing null bytes during data writes, cleaning existing data on read, and stripping null bytes during search queries to ensure reliable search functionality. [#15616](https://github.com/open-webui/open-webui/issues/15616)
- 🔍 Hybrid search with reranking now correctly handles attribute validation, preventing errors when collection results lack expected structure. [#19025](https://github.com/open-webui/open-webui/pull/19025), [#17046](https://github.com/open-webui/open-webui/issues/17046)
- 🔎 Reranking functionality now works correctly after recent refactoring, resolving crashes caused by incorrect function argument handling. [#19270](https://github.com/open-webui/open-webui/pull/19270)
- 🤖 Azure OpenAI models now support the "reasoning_effort" parameter, enabling proper configuration of reasoning capabilities for models like GPT-5.1 which default to no reasoning without this setting. [#19290](https://github.com/open-webui/open-webui/issues/19290)
- 🤖 Models with very long IDs can now be deleted correctly, resolving URL length limitations that previously prevented management operations on such models. [#18230](https://github.com/open-webui/open-webui/pull/18230)
- 🤖 Model-level streaming settings now correctly apply to API requests, ensuring "Stream Chat Response" toggle properly controls the streaming parameter. [#19154](https://github.com/open-webui/open-webui/issues/19154)
- 🖼️ Image editing configuration now correctly preserves independent OpenAI API endpoints and keys, preventing them from being overwritten by image generation settings. [#19003](https://github.com/open-webui/open-webui/issues/19003)
- 🎨 Gemini image edit settings now display correctly in the admin panel, fixing an incorrect configuration key reference that prevented proper rendering of edit options. [#19200](https://github.com/open-webui/open-webui/pull/19200)
- 🖌️ Image generation settings menu now loads correctly, resolving validation errors with AUTOMATIC1111 API authentication parameters. [#19187](https://github.com/open-webui/open-webui/issues/19187), [#19246](https://github.com/open-webui/open-webui/issues/19246)
- 📅 Date formatting in chat search and admin user chat search now correctly respects the "DEFAULT_LOCALE" environment variable, displaying dates according to the configured locale instead of always using MM/DD/YYYY format. [#19305](https://github.com/open-webui/open-webui/pull/19305), [#19020](https://github.com/open-webui/open-webui/issues/19020)
- 📝 RAG template query placeholder escaping logic was corrected to prevent unintended replacements of context values when query placeholders appear in retrieved content. [#19102](https://github.com/open-webui/open-webui/pull/19102), [#19101](https://github.com/open-webui/open-webui/issues/19101)
- 📄 RAG template prompt duplication was eliminated by removing redundant user query section from the default template. [#19099](https://github.com/open-webui/open-webui/pull/19099), [#19098](https://github.com/open-webui/open-webui/issues/19098)
- 📋 MinerU local mode configuration no longer incorrectly requires an API key, allowing proper use of local content extraction without external API credentials. [#19258](https://github.com/open-webui/open-webui/issues/19258)
- 📊 Excel file uploads now work correctly with the addition of the missing msoffcrypto-tool dependency, resolving import errors introduced by the unstructured package upgrade. [#19153](https://github.com/open-webui/open-webui/issues/19153)
- 📑 Docling parameters now properly handle JSON serialization, preventing exceptions and ensuring configuration changes are saved correctly. [#19072](https://github.com/open-webui/open-webui/pull/19072)
- 🛠️ UserValves configuration now correctly isolates settings per tool, preventing configuration contamination when multiple tools with UserValves are used simultaneously. [#19185](https://github.com/open-webui/open-webui/pull/19185), [#15569](https://github.com/open-webui/open-webui/issues/15569)
- 🔧 Tool selection prompt now correctly handles user messages without duplication, removing redundant query prefixes and improving prompt clarity. [#19122](https://github.com/open-webui/open-webui/pull/19122), [#19121](https://github.com/open-webui/open-webui/issues/19121)
- 📝 Notes chat feature now correctly submits messages to the completions endpoint, resolving errors that prevented AI model interactions. [#19079](https://github.com/open-webui/open-webui/pull/19079)
- 📝 Note PDF downloads now sanitize HTML content using DOMPurify before rendering, preventing potential DOM-based XSS attacks from malicious content in notes. [Commit](https://github.com/open-webui/open-webui/commit/03cc6ce8eb5c055115406e2304fbf7e3338b8dce)
- 📁 Archived chats now have their folder associations automatically removed to prevent unintended deletion when their previous folder is deleted. [#14578](https://github.com/open-webui/open-webui/issues/14578)
- 🔐 ElevenLabs API key is now properly obfuscated in the admin settings page, preventing plain text exposure of sensitive credentials. [#19262](https://github.com/open-webui/open-webui/pull/19262), [#19260](https://github.com/open-webui/open-webui/issues/19260)
- 🔧 MCP OAuth server metadata discovery now follows the correct specification order, ensuring proper authentication flow compliance. [#19244](https://github.com/open-webui/open-webui/pull/19244)
- 🔒 API key endpoint restrictions now properly enforce access controls for all endpoints including SCIM, preventing unintended access when "API_KEY_ALLOWED_ENDPOINTS" is configured. [#19168](https://github.com/open-webui/open-webui/issues/19168)
- 🔓 OAuth role claim parsing now supports both flat and nested claim structures, enabling compatibility with OAuth providers that deliver claims as direct properties on the user object rather than nested structures. [#19286](https://github.com/open-webui/open-webui/pull/19286)
- 🔑 OAuth MCP server verification now correctly extracts the access token value for authorization headers instead of sending the entire token dictionary. [#19149](https://github.com/open-webui/open-webui/pull/19149), [#19148](https://github.com/open-webui/open-webui/issues/19148)
- ⚙️ OAuth dynamic client registration now correctly converts empty strings to None for optional fields, preventing validation failures in MCP package integration. [#19144](https://github.com/open-webui/open-webui/pull/19144), [#19129](https://github.com/open-webui/open-webui/issues/19129)
- 🔐 OIDC authentication now correctly passes client credentials in access token requests, ensuring compatibility with providers that require these parameters per RFC 6749. [#19132](https://github.com/open-webui/open-webui/pull/19132), [#19131](https://github.com/open-webui/open-webui/issues/19131)
- 🔗 OAuth client creation now respects configured token endpoint authentication methods instead of defaulting to basic authentication, preventing failures with servers that don't support basic auth. [#19165](https://github.com/open-webui/open-webui/pull/19165)
- 📋 Text copied from chat responses in Chrome now pastes without background formatting, improving readability when pasting into word processors. [#19083](https://github.com/open-webui/open-webui/issues/19083)

### Changed
- 🗄️ Group membership data storage was refactored from JSON arrays to a dedicated relational database table, significantly improving query performance and scalability for instances with large numbers of users and groups, while API responses now return member counts instead of full user ID arrays. [#19239](https://github.com/open-webui/open-webui/pull/19239)
- 📄 MinerU parameter handling was refactored to pass parameters directly to the API, improving flexibility and fixing VLM backend configuration. [#19105](https://github.com/open-webui/open-webui/pull/19105), [#18446](https://github.com/open-webui/open-webui/discussions/18446)
- 🔐 API key creation is now controlled by granular user and group permissions, with the "ENABLE_API_KEY" environment variable renamed to "ENABLE_API_KEYS" and disabled by default, requiring explicit configuration at both the global and user permission levels, while related environment variables "ENABLE_API_KEY_ENDPOINT_RESTRICTIONS" and "API_KEY_ALLOWED_ENDPOINTS" were renamed to "ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS" and "API_KEYS_ALLOWED_ENDPOINTS" respectively. [#18336](https://github.com/open-webui/open-webui/pull/18336)

## [0.6.36] - 2025-11-07

### Added
- 🔐 OAuth group parsing now supports configurable separators via the "OAUTH_GROUPS_SEPARATOR" environment variable, enabling proper handling of semicolon-separated group claims from providers like CILogon. [#18987](https://github.com/open-webui/open-webui/pull/18987), [#18979](https://github.com/open-webui/open-webui/issues/18979)

### Fixed
- 🛠️ Tool calling functionality is restored by correcting asynchronous function handling in tool parameter updates. [#18981](https://github.com/open-webui/open-webui/issues/18981)
- 🖼️ The ComfyUI image edit workflow editor modal now opens correctly when clicking the Edit button. [#18978](https://github.com/open-webui/open-webui/issues/18978)
- 🔥 Firecrawl import errors are resolved by implementing lazy loading and using the correct class name. [#18973](https://github.com/open-webui/open-webui/issues/18973)
- 🔌 Socket.IO CORS warning is resolved by properly configuring CORS origins for Socket.IO connections. [Commit](https://github.com/open-webui/open-webui/commit/639d26252e528c9c37a5f553b11eb94376d8792d)

## [0.6.35] - 2025-11-06

### Added
- 🖼️ Image generation system received a comprehensive overhaul with major new capabilities including full image editing support allowing users to modify existing images using text prompts with OpenAI, Gemini, or ComfyUI engines, adding Gemini 2.5 Flash Image (Nano Banana) support, Qwen Image Edit integration, resolution of base64-encoded image display issues, streamlined AUTOMATIC1111 configuration by consolidating parameters into a flexible JSON parameters field, and enhanced UI with a code editor modal for ComfyUI workflow management. [#17434](https://github.com/open-webui/open-webui/pull/17434), [#16976](https://github.com/open-webui/open-webui/issues/16976), [Commit](https://github.com/open-webui/open-webui/commit/8e5690aab4f632a57027e2acf880b8f89a8717c0), [Commit](https://github.com/open-webui/open-webui/commit/72f8539fd2e679fec0762945f22f4b8a6920afa0), [Commit](https://github.com/open-webui/open-webui/commit/8d34fcb586eeee1fac6da2f991518b8a68b00b72), [Commit](https://github.com/open-webui/open-webui/commit/72900cd686de1fa6be84b5a8a2fc857cff7b91b8)
- 🔒 CORS origin validation was added to WebSocket connections as a defense-in-depth security measure against cross-site WebSocket hijacking attacks. [#18411](https://github.com/open-webui/open-webui/pull/18411), [#18410](https://github.com/open-webui/open-webui/issues/18410)
- 🔄 Automatic page refresh now occurs when a version update is detected via WebSocket connection, ensuring users always run the latest version without cache issues. [Commit](https://github.com/open-webui/open-webui/commit/989f192c92d2fe55daa31336e7971e21798b96ae)
- 🐍 Initial experimental preparations were made for Python 3.13 compatibility by updating dependencies with security enhancements and cryptographic improvements. [#18430](https://github.com/open-webui/open-webui/pull/18430), [#18424](https://github.com/open-webui/open-webui/pull/18424)
- ⚡ Image compression now preserves the original image format instead of converting to PNG, significantly reducing file sizes and improving chat loading performance. [#18506](https://github.com/open-webui/open-webui/pull/18506)
- 🎤 Mistral Voxtral model support was added for speech-to-text, including voxtral-small and voxtral-mini models with both transcription and chat completion API support. [#18934](https://github.com/open-webui/open-webui/pull/18934)
- 🔊 Text-to-speech now uses a global audio queue system to prevent overlapping playback, ensuring only one TTS instance plays at a time with proper stop/start controls and automatic cleanup when switching between messages. [#16152](https://github.com/open-webui/open-webui/pull/16152), [#18744](https://github.com/open-webui/open-webui/pull/18744), [#16150](https://github.com/open-webui/open-webui/issues/16150)
- 🔊 ELEVENLABS_API_BASE_URL environment variable now allows configuration of custom ElevenLabs API endpoints, enabling support for EU residency API requirements. [#18402](https://github.com/open-webui/open-webui/issues/18402)
- 🔐 OAUTH_ROLES_SEPARATOR environment variable now allows custom role separators for OAuth roles that contain commas, useful for roles specified in LDAP syntax. [#18572](https://github.com/open-webui/open-webui/pull/18572)
- 📄 External document loaders can now optionally forward user information headers when ENABLE_FORWARD_USER_INFO_HEADERS is enabled, enabling cost tracking, audit logs, and usage analytics for external services. [#18731](https://github.com/open-webui/open-webui/pull/18731)
- 📄 MISTRAL_OCR_API_BASE_URL environment variable now allows configuration of custom Mistral OCR API endpoints for flexible deployment options. [Commit](https://github.com/open-webui/open-webui/commit/415b93c7c35c2e2db4425e6da1b88b3750f496b0)
- ⌨️ Keyboard shortcut hints are now displayed on sidebar buttons with a refactored shortcuts modal that accurately reflects all available hotkeys across different keyboard layouts. [#18473](https://github.com/open-webui/open-webui/pull/18473)
- 🛠️ Tooltips now display tool descriptions when hovering over tool names on the model edit page, improving usability and providing immediate context. [#18707](https://github.com/open-webui/open-webui/pull/18707)
- 📝 "Create a new note" from the search modal now immediately creates a new private note and opens it in the editor instead of navigating to the generic notes page. [#18255](https://github.com/open-webui/open-webui/pull/18255)
- 🖨️ Code block output now preserves whitespace formatting with monospace font to accurately reflect terminal behavior. [#18352](https://github.com/open-webui/open-webui/pull/18352)
- ✏️ Edit button is now available in the three-dot menu of models in the workspace section for quick access to model editing, with the menu reorganized for better user experience and Edit, Clone, Copy Link, and Share options logically grouped. [#18574](https://github.com/open-webui/open-webui/pull/18574)
- 📌 Sidebar models section is now collapsible, allowing users to expand and collapse the pinned models list for better sidebar organization. [Commit](https://github.com/open-webui/open-webui/commit/82c08a3b5d189f81c96b6548cc872198771015b0)
- 🌙 Dark mode styles for select elements were added using Tailwind CSS classes, improving consistency across the interface. [#18636](https://github.com/open-webui/open-webui/pull/18636)
|
||||
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
|
||||
- 🌐 Translations for Portuguese (Brazil), Greek, German, Traditional Chinese, Simplified Chinese, Spanish, Georgian, Danish, and Estonian were enhanced and expanded.
|
||||
|
||||
### Fixed

- 🔒 Server-Sent Event (SSE) code injection vulnerability in Direct Connections is resolved by blocking event emission from untrusted external model servers; event emitters from direct connected model servers are no longer supported, preventing arbitrary JavaScript execution in user browsers. [Commit](https://github.com/open-webui/open-webui/commit/8af6a4cf21b756a66cd58378a01c60f74c39b7ca)
- 🛡️ DOM XSS vulnerability in "Insert Prompt as Rich Text" is resolved by sanitizing HTML content with DOMPurify before rendering. [Commit](https://github.com/open-webui/open-webui/commit/eb9c4c0e358c274aea35f21c2856c0a20051e5f1)
- ⚙️ MCP server cancellation scope corruption is prevented by reversing disconnection order to follow LIFO and properly handling exceptions, resolving 100% CPU usage when resuming chats with expired tokens or using multiple streamable MCP servers. [#18537](https://github.com/open-webui/open-webui/pull/18537)
- 🔧 UI freeze when querying models with knowledge bases containing inconsistent distance metrics is resolved by properly initializing the distances array in citations. [#18585](https://github.com/open-webui/open-webui/pull/18585)
- 🤖 Duplicate model IDs from multiple OpenAI endpoints are now automatically deduplicated server-side, preventing frontend crashes for users with unified gateway proxies that aggregate multiple providers. [Commit](https://github.com/open-webui/open-webui/commit/fdf7ca11d4f3cc8fe63e81c98dc0d1e48e52ba36)
- 🔐 Login failures with passwords longer than 72 bytes are resolved by safely truncating oversized passwords for bcrypt compatibility. [#18157](https://github.com/open-webui/open-webui/issues/18157)
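The bcrypt fix above hinges on capping the password's UTF-8 bytes at 72 before hashing, since bcrypt only consumes the first 72 bytes of input (and some implementations reject longer inputs outright). A minimal sketch of the truncation step; `truncate_for_bcrypt` is a hypothetical helper name, not the function used in the codebase:

```python
def truncate_for_bcrypt(password: str, limit: int = 72) -> bytes:
    """Encode to UTF-8 and cap at bcrypt's 72-byte input limit.

    Truncation may split a multi-byte character, which is harmless here
    because bcrypt operates on raw bytes, not on decoded text.
    """
    return password.encode("utf-8")[:limit]
```

The same truncated bytes must be used for both hashing and verification, so logins with oversized passwords remain consistent.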
- 🔐 OAuth 2.1 MCP tool connections now automatically re-register clients when stored client IDs become stale, preventing unauthorized_client errors after editing tool endpoints and providing detailed error messages for callback failures. [#18415](https://github.com/open-webui/open-webui/pull/18415), [#18309](https://github.com/open-webui/open-webui/issues/18309)
- 🔓 OAuth 2.1 discovery, metadata fetching, and dynamic client registration now correctly use HTTP proxy environment variables when trust_env is enabled. [Commit](https://github.com/open-webui/open-webui/commit/bafeb76c411483bd6b135f0edbcdce048120f264)
- 🔌 MCP server connection failures now display clear error messages in the chat interface instead of silently failing. [#18892](https://github.com/open-webui/open-webui/pull/18892), [#18889](https://github.com/open-webui/open-webui/issues/18889)
- 💬 Chat titles are now properly generated even when title auto-generation is disabled in interface settings, fixing an issue where chats would remain labeled as "New chat". [#18761](https://github.com/open-webui/open-webui/pull/18761), [#18717](https://github.com/open-webui/open-webui/issues/18717), [#6478](https://github.com/open-webui/open-webui/issues/6478)
- 🔍 Chat query errors are prevented by properly validating and handling the "order_by" parameter to ensure requested columns exist. [#18400](https://github.com/open-webui/open-webui/pull/18400), [#18452](https://github.com/open-webui/open-webui/pull/18452)
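The order_by fix above amounts to validating the requested sort column against an allowlist before it ever reaches the query builder. A minimal sketch of the idea; the column names are assumptions for illustration, not the actual chat schema:

```python
# Assumed sortable columns; a real implementation would derive these
# from the table's model definition.
ALLOWED_ORDER_COLUMNS = {"updated_at", "created_at", "title"}


def safe_order_by(requested: str, default: str = "updated_at") -> str:
    """Return the requested column only if it exists; otherwise fall back."""
    return requested if requested in ALLOWED_ORDER_COLUMNS else default
```

Falling back silently (rather than raising) keeps stale or malformed client requests from turning into 500 errors.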
- 🔧 Root-level max_tokens parameter is no longer dropped when proxying to Ollama, properly converting to num_predict to limit output token length as intended. [#18618](https://github.com/open-webui/open-webui/issues/18618)
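The max_tokens fix above maps the OpenAI-style root parameter onto Ollama's num_predict option. A rough sketch of that conversion; the helper name and merge behavior are illustrative assumptions, not the project's actual proxy code:

```python
def openai_to_ollama_options(payload: dict) -> dict:
    """Copy a root-level max_tokens into Ollama's options.num_predict.

    An explicit num_predict in the payload's options takes precedence,
    so callers who already speak Ollama's dialect are left untouched.
    """
    options = dict(payload.get("options") or {})
    if "max_tokens" in payload and "num_predict" not in options:
        options["num_predict"] = payload["max_tokens"]
    return options
```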
- 🔑 Self-hosted Marker instances can now be used without requiring an API key, while keeping it optional for datalab Marker service users. [#18617](https://github.com/open-webui/open-webui/issues/18617)
- 🔧 OpenAPI specification endpoint conflict between "/api/v1/models" and "/api/v1/models/" is resolved by changing the models router endpoint to "/list", preventing duplicate operationId errors when generating TypeScript API clients. [#18758](https://github.com/open-webui/open-webui/issues/18758)
- 🏷️ Model tags are now de-duplicated case-insensitively in both the model selector and workspace models page, preventing duplicate entries with different capitalization from appearing in filter dropdowns. [#18716](https://github.com/open-webui/open-webui/pull/18716), [#18711](https://github.com/open-webui/open-webui/issues/18711)
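The tag de-duplication above boils down to comparing case-folded keys while keeping the first-seen spelling. A minimal sketch of that pattern (the frontend does this in TypeScript; Python is used here for illustration):

```python
def dedupe_tags(tags: list[str]) -> list[str]:
    """Keep the first occurrence of each tag, comparing case-insensitively."""
    seen: set[str] = set()
    result: list[str] = []
    for tag in tags:
        key = tag.casefold()  # casefold handles non-ASCII better than lower()
        if key not in seen:
            seen.add(key)
            result.append(tag)
    return result
```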
- 📄 Docling RAG parameter configuration is now correctly saved in the admin UI by fixing the typo in the "DOCLING_PARAMS" parameter name. [#18390](https://github.com/open-webui/open-webui/pull/18390)
- 📃 Tika document processing now automatically detects content types instead of relying on potentially incorrect browser-provided mime-types, improving file handling accuracy for formats like RTF. [#18765](https://github.com/open-webui/open-webui/pull/18765), [#18683](https://github.com/open-webui/open-webui/issues/18683)
- 🖼️ Image and video uploads to knowledge bases now display proper error messages instead of showing an infinite spinner when the content extraction engine does not support these file types. [#18514](https://github.com/open-webui/open-webui/issues/18514)
- 📝 Notes PDF export now properly detects and applies dark mode styling consistently across both the notes list and individual note pages, with a shared utility function to eliminate code duplication. [#18526](https://github.com/open-webui/open-webui/issues/18526)
- 💭 Details tags for reasoning content are now correctly identified and rendered even when the same tag is present in user messages. [#18840](https://github.com/open-webui/open-webui/pull/18840), [#18294](https://github.com/open-webui/open-webui/issues/18294)
- 📊 Mermaid and Vega rendering errors now display inline with the code instead of showing repetitive toast notifications, improving user experience when models generate invalid diagram syntax. [Commit](https://github.com/open-webui/open-webui/commit/fdc0f04a8b7dd0bc9f9dc0e7e30854f7a0eea3e9)
- 📈 Mermaid diagram rendering errors no longer cause UI unavailability or display error messages below the input box. [#18493](https://github.com/open-webui/open-webui/pull/18493), [#18340](https://github.com/open-webui/open-webui/issues/18340)
- 🔗 Web search SSL verification is now asynchronous, preventing the website from hanging during web search operations. [#18714](https://github.com/open-webui/open-webui/pull/18714), [#18699](https://github.com/open-webui/open-webui/issues/18699)
- 🌍 Web search results now correctly use HTTP proxy environment variables when WEB_SEARCH_TRUST_ENV is enabled. [#18667](https://github.com/open-webui/open-webui/pull/18667), [#7008](https://github.com/open-webui/open-webui/discussions/7008)
- 🔍 Google Programmable Search Engine now properly includes referer headers, enabling API keys with HTTP referrer restrictions configured in Google Cloud Console. [#18871](https://github.com/open-webui/open-webui/pull/18871), [#18870](https://github.com/open-webui/open-webui/issues/18870)
- ⚡ YouTube video transcript fetching now works correctly when using a proxy connection. [#18419](https://github.com/open-webui/open-webui/pull/18419)
- 🎙️ Speech-to-text transcription no longer deletes or replaces existing text in the prompt input field, properly preserving any previously entered content. [#18540](https://github.com/open-webui/open-webui/issues/18540)
- 🎙️ The "Instant Auto-Send After Voice Transcription" setting now functions correctly and automatically sends transcribed text when enabled. [#18466](https://github.com/open-webui/open-webui/issues/18466)
- ⚙️ Chat settings now load properly when reopening a tab or starting a new session by initializing defaults when sessionStorage is empty. [#18438](https://github.com/open-webui/open-webui/pull/18438)
- 🔎 Folder tag search in the sidebar now correctly handles folder names with multiple spaces by replacing all spaces with underscores. [Commit](https://github.com/open-webui/open-webui/commit/a8fe979af68e47e4e4bb3eb76e48d93d60cd2a45)
- 🛠️ Functions page now updates immediately after deleting a function, removing the need for a manual page reload. [#18912](https://github.com/open-webui/open-webui/pull/18912), [#18908](https://github.com/open-webui/open-webui/issues/18908)
- 🛠️ Native tool calling now properly supports sequential tool calls with shared context, allowing tools to access images and data from previous tool executions in the same conversation. [#18664](https://github.com/open-webui/open-webui/pull/18664)
- 🎯 Globally enabled actions in the model editor now correctly apply as global instead of being treated as disabled. [#18577](https://github.com/open-webui/open-webui/pull/18577)
- 📋 Clipboard images pasted via the "{{CLIPBOARD}}" prompt variable are now correctly converted to base64 format before being sent to the backend, resolving base64 encoding errors. [#18432](https://github.com/open-webui/open-webui/pull/18432), [#18425](https://github.com/open-webui/open-webui/issues/18425)
- 📋 File list is now cleared when switching to models that do not support file uploads, preventing files from being sent to incompatible models. [#18496](https://github.com/open-webui/open-webui/pull/18496)
- 📂 Move menu no longer displays when folders are empty. [#18484](https://github.com/open-webui/open-webui/pull/18484)
- 📁 Folder and channel creation now validates that names are not empty, preventing creation of folders or channels with no name and showing an error toast if attempted. [#18564](https://github.com/open-webui/open-webui/pull/18564)
- 🖊️ Rich text input no longer removes text between equals signs when pasting code with comparison operators. [#18551](https://github.com/open-webui/open-webui/issues/18551)
- ⌨️ Keyboard shortcuts now display the correct keys for international and non-QWERTY keyboard layouts by detecting the user's layout using the Keyboard API. [#18533](https://github.com/open-webui/open-webui/pull/18533)
- 🌐 "Attach Webpage" button now displays with correct disabled styling when a model does not support file uploads. [#18483](https://github.com/open-webui/open-webui/pull/18483)
- 🎚️ Divider no longer displays in the integrations menu when no integrations are enabled. [#18487](https://github.com/open-webui/open-webui/pull/18487)
- 📱 Chat controls button is now properly hidden on mobile for users without admin or explicit chat control permissions. [#18641](https://github.com/open-webui/open-webui/pull/18641)
- 📍 User menu, download submenu, and move submenu are now repositioned to prevent overlap with the Chat Controls sidebar when it is open. [Commit](https://github.com/open-webui/open-webui/commit/414ab51cb6df1ab0d6c85ac6c1f2c5c9a5f8e2aa)
- 🎯 Artifacts button no longer appears in the chat menu when there are no artifacts to display. [Commit](https://github.com/open-webui/open-webui/commit/ed6449d35f84f68dc75ee5c6b3f4748a3fda0096)
- 🎨 Artifacts view now automatically displays when opening an existing conversation containing artifacts, improving user experience. [#18215](https://github.com/open-webui/open-webui/pull/18215)
- 🖌️ Formatting toolbar is no longer hidden under images or code blocks in chat and now displays correctly above all message content.
- 🎨 Layout shift near system instructions is prevented by properly rendering the chat component when system prompts are empty. [#18594](https://github.com/open-webui/open-webui/pull/18594)
- 📐 Modal layout shift caused by scrollbar appearance is prevented by adding a stable scrollbar gutter. [#18591](https://github.com/open-webui/open-webui/pull/18591)
- ✨ Spacing between icon and label in the user menu dropdown items is now consistent. [#18595](https://github.com/open-webui/open-webui/pull/18595)
- 💬 Duplicate prompt suggestions no longer cause the webpage to freeze or throw JavaScript errors by implementing proper key management with composite keys. [#18841](https://github.com/open-webui/open-webui/pull/18841), [#18566](https://github.com/open-webui/open-webui/issues/18566)
- 🔍 Chat preview loading in the search modal now works correctly for all search results by fixing an index boundary check that previously caused out-of-bounds errors. [#18911](https://github.com/open-webui/open-webui/pull/18911)
- ♿ Screen reader support was enhanced by wrapping messages in semantic elements with descriptive aria-labels, adding "Assistant is typing" and "Response complete" announcements for improved accessibility. [#18735](https://github.com/open-webui/open-webui/pull/18735)
- 🔒 Incorrect await call in the OAuth 2.1 flow is removed, eliminating a logged exception during authentication. [#18236](https://github.com/open-webui/open-webui/pull/18236)
- 🛡️ Duplicate crossorigin attribute in the manifest file was removed. [#18413](https://github.com/open-webui/open-webui/pull/18413)
### Changed

- 🔄 Firecrawl integration was refactored to use the official Firecrawl SDK instead of direct HTTP requests and langchain_community FireCrawlLoader, improving reliability and performance with batch scraping support and enhanced error handling. [#18635](https://github.com/open-webui/open-webui/pull/18635)
- 📄 MinerU content extraction engine now only supports PDF files following the upstream removal of LibreOffice document conversion in version 2.0.0; users needing to process office documents should convert them to PDF format first. [#18448](https://github.com/open-webui/open-webui/issues/18448)
## [0.6.34] - 2025-10-16

### Added

24
README.md

@@ -31,32 +31,44 @@ For more information, be sure to check out our [Open WebUI Documentation](https:
- 🛡️ **Granular Permissions and User Groups**: By allowing administrators to create detailed user roles and permissions, we ensure a secure user environment. This granularity not only enhances security but also allows for customized user experiences, fostering a sense of ownership and responsibility amongst users.

- 🔄 **SCIM 2.0 Support**: Enterprise-grade user and group provisioning through SCIM 2.0 protocol, enabling seamless integration with identity providers like Okta, Azure AD, and Google Workspace for automated user lifecycle management.

- 📱 **Responsive Design**: Enjoy a seamless experience across Desktop PC, Laptop, and Mobile devices.

- 📱 **Progressive Web App (PWA) for Mobile**: Enjoy a native app-like experience on your mobile device with our PWA, providing offline access on localhost and a seamless user interface.

- ✒️🔢 **Full Markdown and LaTeX Support**: Elevate your LLM experience with comprehensive Markdown and LaTeX capabilities for enriched interaction.

- 🎤📹 **Hands-Free Voice/Video Call**: Experience seamless communication with integrated hands-free voice and video call features using multiple Speech-to-Text providers (Local Whisper, OpenAI, Deepgram, Azure) and Text-to-Speech engines (Azure, ElevenLabs, OpenAI, Transformers, WebAPI), allowing for dynamic and interactive chat environments.

- 🛠️ **Model Builder**: Easily create Ollama models via the Web UI. Create and add custom characters/agents, customize chat elements, and import models effortlessly through [Open WebUI Community](https://openwebui.com/) integration.

- 🐍 **Native Python Function Calling Tool**: Enhance your LLMs with built-in code editor support in the tools workspace. Bring Your Own Function (BYOF) by simply adding your pure Python functions, enabling seamless integration with LLMs.

- 💾 **Persistent Artifact Storage**: Built-in key-value storage API for artifacts, enabling features like journals, trackers, leaderboards, and collaborative tools with both personal and shared data scopes across sessions.

- 📚 **Local RAG Integration**: Dive into the future of chat interactions with groundbreaking Retrieval Augmented Generation (RAG) support using your choice of 9 vector databases and multiple content extraction engines (Tika, Docling, Document Intelligence, Mistral OCR, External loaders). Load documents directly into chat or add files to your document library, effortlessly accessing them using the `#` command before a query.

- 🔍 **Web Search for RAG**: Perform web searches using 15+ providers including `SearXNG`, `Google PSE`, `Brave Search`, `Kagi`, `Mojeek`, `Tavily`, `Perplexity`, `serpstack`, `serper`, `Serply`, `DuckDuckGo`, `SearchApi`, `SerpApi`, `Bing`, `Jina`, `Exa`, `Sougou`, `Azure AI Search`, and `Ollama Cloud`, injecting results directly into your chat experience.

- 🌐 **Web Browsing Capability**: Seamlessly integrate websites into your chat experience using the `#` command followed by a URL. This feature allows you to incorporate web content directly into your conversations, enhancing the richness and depth of your interactions.

- 🎨 **Image Generation & Editing Integration**: Create and edit images using multiple engines including OpenAI's DALL-E, Gemini, ComfyUI (local), and AUTOMATIC1111 (local), with support for both generation and prompt-based editing workflows.

- ⚙️ **Many Models Conversations**: Effortlessly engage with various models simultaneously, harnessing their unique strengths for optimal responses. Enhance your experience by leveraging a diverse set of models in parallel.

- 🔐 **Role-Based Access Control (RBAC)**: Ensure secure access with restricted permissions; only authorized individuals can access your Ollama, and exclusive model creation/pulling rights are reserved for administrators.

- 🗄️ **Flexible Database & Storage Options**: Choose from SQLite (with optional encryption), PostgreSQL, or configure cloud storage backends (S3, Google Cloud Storage, Azure Blob Storage) for scalable deployments.

- 🔍 **Advanced Vector Database Support**: Select from 9 vector database options including ChromaDB, PGVector, Qdrant, Milvus, Elasticsearch, OpenSearch, Pinecone, S3Vector, and Oracle 23ai for optimal RAG performance.

- 🔐 **Enterprise Authentication**: Full support for LDAP/Active Directory integration, SCIM 2.0 automated provisioning, and SSO via trusted headers alongside OAuth providers.

- ☁️ **Cloud-Native Integration**: Native support for Google Drive and OneDrive/SharePoint file picking, enabling seamless document import from enterprise cloud storage.

- 📊 **Production Observability**: Built-in OpenTelemetry support for traces, metrics, and logs, enabling comprehensive monitoring with your existing observability stack.

- ⚖️ **Horizontal Scalability**: Redis-backed session management and WebSocket support for multi-worker and multi-node deployments behind load balancers.

- 🌐🌍 **Multilingual Support**: Experience Open WebUI in your preferred language with our internationalization (i18n) support. Join us in expanding our supported languages! We're actively seeking contributors!

- 🧩 **Pipelines, Open WebUI Plugin Support**: Seamlessly integrate custom logic and Python libraries into Open WebUI using [Pipelines Plugin Framework](https://github.com/open-webui/pipelines). Launch your Pipelines instance, set the OpenAI URL to the Pipelines URL, and explore endless possibilities. [Examples](https://github.com/open-webui/pipelines/tree/main/examples) include **Function Calling**, User **Rate Limiting** to control access, **Usage Monitoring** with tools like Langfuse, **Live Translation with LibreTranslate** for multilingual support, **Toxic Message Filtering** and much more.
@@ -287,25 +287,30 @@ class AppConfig:
# WEBUI_AUTH (Required for security)
####################################
ENABLE_API_KEYS = PersistentConfig(
    "ENABLE_API_KEYS",
    "auth.enable_api_keys",
    os.environ.get("ENABLE_API_KEYS", "False").lower() == "true",
)

ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS = PersistentConfig(
    "ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS",
    "auth.api_key.endpoint_restrictions",
    os.environ.get(
        "ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS",
        os.environ.get("ENABLE_API_KEY_ENDPOINT_RESTRICTIONS", "False"),
    ).lower()
    == "true",
)

API_KEYS_ALLOWED_ENDPOINTS = PersistentConfig(
    "API_KEYS_ALLOWED_ENDPOINTS",
    "auth.api_key.allowed_endpoints",
    os.environ.get(
        "API_KEYS_ALLOWED_ENDPOINTS", os.environ.get("API_KEY_ALLOWED_ENDPOINTS", "")
    ),
)


JWT_EXPIRES_IN = PersistentConfig(
    "JWT_EXPIRES_IN", "auth.jwt_expiry", os.environ.get("JWT_EXPIRES_IN", "4w")
)
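The renamed API-key settings above stay backward compatible by nesting `os.environ.get` lookups: the new variable wins, the legacy name is the fallback, and only then does the hard-coded default apply. The pattern in isolation, using the endpoint-restrictions pair as the example:

```python
import os


def env_with_fallback(new_name: str, old_name: str, default: str) -> str:
    """Prefer the new variable, fall back to the legacy name, then the default."""
    return os.environ.get(new_name, os.environ.get(old_name, default))


# Simulate a deployment that still sets only the legacy variable.
os.environ.pop("ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS", None)
os.environ["ENABLE_API_KEY_ENDPOINT_RESTRICTIONS"] = "True"

restricted = (
    env_with_fallback(
        "ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS",
        "ENABLE_API_KEY_ENDPOINT_RESTRICTIONS",
        "False",
    ).lower()
    == "true"
)
```

Existing deployments that export only the old variable names keep working unchanged.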
@@ -570,6 +575,8 @@ OAUTH_BLOCKED_GROUPS = PersistentConfig(
    os.environ.get("OAUTH_BLOCKED_GROUPS", "[]"),
)

OAUTH_GROUPS_SEPARATOR = os.environ.get("OAUTH_GROUPS_SEPARATOR", ";")

OAUTH_ROLES_CLAIM = PersistentConfig(
    "OAUTH_ROLES_CLAIM",
    "oauth.roles_claim",
@@ -613,6 +620,11 @@ OAUTH_UPDATE_PICTURE_ON_LOGIN = PersistentConfig(
    os.environ.get("OAUTH_UPDATE_PICTURE_ON_LOGIN", "False").lower() == "true",
)

OAUTH_ACCESS_TOKEN_REQUEST_INCLUDE_CLIENT_ID = (
    os.environ.get("OAUTH_ACCESS_TOKEN_REQUEST_INCLUDE_CLIENT_ID", "False").lower()
    == "true"
)


def load_oauth_providers():
    OAUTH_PROVIDERS.clear()
@@ -1122,6 +1134,7 @@ ENABLE_LOGIN_FORM = PersistentConfig(
    os.environ.get("ENABLE_LOGIN_FORM", "True").lower() == "true",
)

ENABLE_PASSWORD_AUTH = os.environ.get("ENABLE_PASSWORD_AUTH", "True").lower() == "true"

DEFAULT_LOCALE = PersistentConfig(
    "DEFAULT_LOCALE",
@@ -1133,6 +1146,12 @@ DEFAULT_MODELS = PersistentConfig(
    "DEFAULT_MODELS", "ui.default_models", os.environ.get("DEFAULT_MODELS", None)
)

DEFAULT_PINNED_MODELS = PersistentConfig(
    "DEFAULT_PINNED_MODELS",
    "ui.default_pinned_models",
    os.environ.get("DEFAULT_PINNED_MODELS", None),
)

try:
    default_prompt_suggestions = json.loads(
        os.environ.get("DEFAULT_PROMPT_SUGGESTIONS", "[]")
@@ -1189,6 +1208,12 @@ DEFAULT_USER_ROLE = PersistentConfig(
    os.getenv("DEFAULT_USER_ROLE", "pending"),
)

DEFAULT_GROUP_ID = PersistentConfig(
    "DEFAULT_GROUP_ID",
    "ui.default_group_id",
    os.environ.get("DEFAULT_GROUP_ID", ""),
)

PENDING_USER_OVERLAY_TITLE = PersistentConfig(
    "PENDING_USER_OVERLAY_TITLE",
    "ui.pending_user_overlay_title",
@@ -1228,6 +1253,40 @@ USER_PERMISSIONS_WORKSPACE_TOOLS_ACCESS = (
    os.environ.get("USER_PERMISSIONS_WORKSPACE_TOOLS_ACCESS", "False").lower() == "true"
)

USER_PERMISSIONS_WORKSPACE_MODELS_IMPORT = (
    os.environ.get("USER_PERMISSIONS_WORKSPACE_MODELS_IMPORT", "False").lower()
    == "true"
)

USER_PERMISSIONS_WORKSPACE_MODELS_EXPORT = (
    os.environ.get("USER_PERMISSIONS_WORKSPACE_MODELS_EXPORT", "False").lower()
    == "true"
)

USER_PERMISSIONS_WORKSPACE_PROMPTS_IMPORT = (
    os.environ.get("USER_PERMISSIONS_WORKSPACE_PROMPTS_IMPORT", "False").lower()
    == "true"
)

USER_PERMISSIONS_WORKSPACE_PROMPTS_EXPORT = (
    os.environ.get("USER_PERMISSIONS_WORKSPACE_PROMPTS_EXPORT", "False").lower()
    == "true"
)

USER_PERMISSIONS_WORKSPACE_TOOLS_IMPORT = (
    os.environ.get("USER_PERMISSIONS_WORKSPACE_TOOLS_IMPORT", "False").lower() == "true"
)

USER_PERMISSIONS_WORKSPACE_TOOLS_EXPORT = (
    os.environ.get("USER_PERMISSIONS_WORKSPACE_TOOLS_EXPORT", "False").lower() == "true"
)


USER_PERMISSIONS_WORKSPACE_MODELS_ALLOW_SHARING = (
    os.environ.get("USER_PERMISSIONS_WORKSPACE_MODELS_ALLOW_SHARING", "False").lower()
    == "true"
)

USER_PERMISSIONS_WORKSPACE_MODELS_ALLOW_PUBLIC_SHARING = (
    os.environ.get(
        "USER_PERMISSIONS_WORKSPACE_MODELS_ALLOW_PUBLIC_SHARING", "False"
@@ -1235,8 +1294,10 @@ USER_PERMISSIONS_WORKSPACE_MODELS_ALLOW_PUBLIC_SHARING = (
    == "true"
)

USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_SHARING = (
    os.environ.get(
        "USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_SHARING", "False"
    ).lower()
    == "true"
)
@@ -1247,6 +1308,11 @@ USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_PUBLIC_SHARING = (
    == "true"
)

USER_PERMISSIONS_WORKSPACE_PROMPTS_ALLOW_SHARING = (
    os.environ.get("USER_PERMISSIONS_WORKSPACE_PROMPTS_ALLOW_SHARING", "False").lower()
    == "true"
)

USER_PERMISSIONS_WORKSPACE_PROMPTS_ALLOW_PUBLIC_SHARING = (
    os.environ.get(
        "USER_PERMISSIONS_WORKSPACE_PROMPTS_ALLOW_PUBLIC_SHARING", "False"
@@ -1254,6 +1320,12 @@ USER_PERMISSIONS_WORKSPACE_PROMPTS_ALLOW_PUBLIC_SHARING = (
    == "true"
)


USER_PERMISSIONS_WORKSPACE_TOOLS_ALLOW_SHARING = (
    os.environ.get("USER_PERMISSIONS_WORKSPACE_TOOLS_ALLOW_SHARING", "False").lower()
    == "true"
)

USER_PERMISSIONS_WORKSPACE_TOOLS_ALLOW_PUBLIC_SHARING = (
    os.environ.get(
        "USER_PERMISSIONS_WORKSPACE_TOOLS_ALLOW_PUBLIC_SHARING", "False"
@@ -1262,6 +1334,17 @@ USER_PERMISSIONS_WORKSPACE_TOOLS_ALLOW_PUBLIC_SHARING = (
)


USER_PERMISSIONS_NOTES_ALLOW_SHARING = (
    os.environ.get("USER_PERMISSIONS_NOTES_ALLOW_SHARING", "False").lower()
    == "true"
)

USER_PERMISSIONS_NOTES_ALLOW_PUBLIC_SHARING = (
    os.environ.get("USER_PERMISSIONS_NOTES_ALLOW_PUBLIC_SHARING", "False").lower()
    == "true"
)


USER_PERMISSIONS_CHAT_CONTROLS = (
    os.environ.get("USER_PERMISSIONS_CHAT_CONTROLS", "True").lower() == "true"
)
@@ -1364,6 +1447,10 @@ USER_PERMISSIONS_FEATURES_NOTES = (
    os.environ.get("USER_PERMISSIONS_FEATURES_NOTES", "True").lower() == "true"
)

USER_PERMISSIONS_FEATURES_API_KEYS = (
    os.environ.get("USER_PERMISSIONS_FEATURES_API_KEYS", "False").lower() == "true"
)


DEFAULT_USER_PERMISSIONS = {
    "workspace": {
@@ -1371,12 +1458,23 @@ DEFAULT_USER_PERMISSIONS = {
        "knowledge": USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ACCESS,
        "prompts": USER_PERMISSIONS_WORKSPACE_PROMPTS_ACCESS,
        "tools": USER_PERMISSIONS_WORKSPACE_TOOLS_ACCESS,
        "models_import": USER_PERMISSIONS_WORKSPACE_MODELS_IMPORT,
        "models_export": USER_PERMISSIONS_WORKSPACE_MODELS_EXPORT,
        "prompts_import": USER_PERMISSIONS_WORKSPACE_PROMPTS_IMPORT,
        "prompts_export": USER_PERMISSIONS_WORKSPACE_PROMPTS_EXPORT,
        "tools_import": USER_PERMISSIONS_WORKSPACE_TOOLS_IMPORT,
        "tools_export": USER_PERMISSIONS_WORKSPACE_TOOLS_EXPORT,
    },
    "sharing": {
        "models": USER_PERMISSIONS_WORKSPACE_MODELS_ALLOW_SHARING,
        "public_models": USER_PERMISSIONS_WORKSPACE_MODELS_ALLOW_PUBLIC_SHARING,
        "knowledge": USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_SHARING,
        "public_knowledge": USER_PERMISSIONS_WORKSPACE_KNOWLEDGE_ALLOW_PUBLIC_SHARING,
        "prompts": USER_PERMISSIONS_WORKSPACE_PROMPTS_ALLOW_SHARING,
        "public_prompts": USER_PERMISSIONS_WORKSPACE_PROMPTS_ALLOW_PUBLIC_SHARING,
        "tools": USER_PERMISSIONS_WORKSPACE_TOOLS_ALLOW_SHARING,
        "public_tools": USER_PERMISSIONS_WORKSPACE_TOOLS_ALLOW_PUBLIC_SHARING,
        "notes": USER_PERMISSIONS_NOTES_ALLOW_SHARING,
        "public_notes": USER_PERMISSIONS_NOTES_ALLOW_PUBLIC_SHARING,
    },
    "chat": {
@@ -1401,6 +1499,7 @@ DEFAULT_USER_PERMISSIONS = {
        "temporary_enforced": USER_PERMISSIONS_CHAT_TEMPORARY_ENFORCED,
    },
    "features": {
        "api_keys": USER_PERMISSIONS_FEATURES_API_KEYS,
        "direct_tool_servers": USER_PERMISSIONS_FEATURES_DIRECT_TOOL_SERVERS,
        "web_search": USER_PERMISSIONS_FEATURES_WEB_SEARCH,
        "image_generation": USER_PERMISSIONS_FEATURES_IMAGE_GENERATION,
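Given the nested DEFAULT_USER_PERMISSIONS structure above, an individual permission check reduces to a two-level dictionary lookup with the defaults as fallback. A minimal sketch; the helper name, the trimmed-down defaults, and the merge behavior are assumptions for illustration, not the project's actual implementation:

```python
# A trimmed-down stand-in for DEFAULT_USER_PERMISSIONS.
DEFAULTS = {
    "sharing": {"models": False, "public_models": False},
    "features": {"api_keys": False},
}


def has_permission(user_permissions: dict, category: str, key: str) -> bool:
    """Look up a permission, falling back to the defaults when unset."""
    default = DEFAULTS.get(category, {}).get(key, False)
    return bool(user_permissions.get(category, {}).get(key, default))
```

Keeping defaults separate from per-group overrides means newly introduced permission keys pick up a sane value for existing groups automatically.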
@@ -1814,6 +1913,38 @@ Output:
#### Output:
"""


VOICE_MODE_PROMPT_TEMPLATE = PersistentConfig(
    "VOICE_MODE_PROMPT_TEMPLATE",
    "task.voice.prompt_template",
    os.environ.get("VOICE_MODE_PROMPT_TEMPLATE", ""),
)

DEFAULT_VOICE_MODE_PROMPT_TEMPLATE = """You are a friendly, concise voice assistant.

Everything you say will be spoken aloud.
Keep responses short, clear, and natural.

STYLE:
- Use simple words and short sentences.
- Sound warm and conversational.
- Avoid long explanations, lists, or complex phrasing.

BEHAVIOR:
- Give the quickest helpful answer first.
- Offer extra detail only if needed.
- Ask for clarification only when necessary.

VOICE OPTIMIZATION:
- Break information into small, easy-to-hear chunks.
- Avoid dense wording or anything that sounds like reading text.

ERROR HANDLING:
- If unsure, say so briefly and offer options.
- If something is unsafe or impossible, decline kindly and suggest a safe alternative.

Stay consistent, helpful, and easy to listen to."""

TOOLS_FUNCTION_CALLING_PROMPT_TEMPLATE = PersistentConfig(
    "TOOLS_FUNCTION_CALLING_PROMPT_TEMPLATE",
    "task.tools.prompt_template",
@@ -2054,6 +2185,11 @@ ENABLE_QDRANT_MULTITENANCY_MODE = (
)
QDRANT_COLLECTION_PREFIX = os.environ.get("QDRANT_COLLECTION_PREFIX", "open-webui")

WEAVIATE_HTTP_HOST = os.environ.get("WEAVIATE_HTTP_HOST", "")
WEAVIATE_HTTP_PORT = int(os.environ.get("WEAVIATE_HTTP_PORT", "8080"))
WEAVIATE_GRPC_PORT = int(os.environ.get("WEAVIATE_GRPC_PORT", "50051"))
WEAVIATE_API_KEY = os.environ.get("WEAVIATE_API_KEY")

# OpenSearch
OPENSEARCH_URI = os.environ.get("OPENSEARCH_URI", "https://localhost:9200")
OPENSEARCH_SSL = os.environ.get("OPENSEARCH_SSL", "true").lower() == "true"

@@ -2084,6 +2220,16 @@ PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH = int(
    os.environ.get("PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH", "1536")
)

PGVECTOR_USE_HALFVEC = os.getenv("PGVECTOR_USE_HALFVEC", "false").lower() == "true"

if PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH > 2000 and not PGVECTOR_USE_HALFVEC:
    raise ValueError(
        "PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH is set to "
        f"{PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH}, which exceeds the 2000 dimension limit of the "
        "'vector' type. Set PGVECTOR_USE_HALFVEC=true to enable the 'halfvec' "
        "type required for high-dimensional embeddings."
    )

PGVECTOR_CREATE_EXTENSION = (
    os.getenv("PGVECTOR_CREATE_EXTENSION", "true").lower() == "true"
)

@@ -2133,6 +2279,40 @@ else:
    except Exception:
        PGVECTOR_POOL_RECYCLE = 3600

PGVECTOR_INDEX_METHOD = os.getenv("PGVECTOR_INDEX_METHOD", "").strip().lower()
if PGVECTOR_INDEX_METHOD not in ("ivfflat", "hnsw", ""):
    PGVECTOR_INDEX_METHOD = ""

PGVECTOR_HNSW_M = os.environ.get("PGVECTOR_HNSW_M", 16)

if PGVECTOR_HNSW_M == "":
    PGVECTOR_HNSW_M = 16
else:
    try:
        PGVECTOR_HNSW_M = int(PGVECTOR_HNSW_M)
    except Exception:
        PGVECTOR_HNSW_M = 16

PGVECTOR_HNSW_EF_CONSTRUCTION = os.environ.get("PGVECTOR_HNSW_EF_CONSTRUCTION", 64)

if PGVECTOR_HNSW_EF_CONSTRUCTION == "":
    PGVECTOR_HNSW_EF_CONSTRUCTION = 64
else:
    try:
        PGVECTOR_HNSW_EF_CONSTRUCTION = int(PGVECTOR_HNSW_EF_CONSTRUCTION)
    except Exception:
        PGVECTOR_HNSW_EF_CONSTRUCTION = 64

PGVECTOR_IVFFLAT_LISTS = os.environ.get("PGVECTOR_IVFFLAT_LISTS", 100)

if PGVECTOR_IVFFLAT_LISTS == "":
    PGVECTOR_IVFFLAT_LISTS = 100
else:
    try:
        PGVECTOR_IVFFLAT_LISTS = int(PGVECTOR_IVFFLAT_LISTS)
    except Exception:
        PGVECTOR_IVFFLAT_LISTS = 100
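The PGVECTOR_* settings above repeat the same parse-with-fallback shape for each integer variable. As a minimal sketch of that pattern, factored into a reusable helper (the `env_int` name is hypothetical and not part of this diff):

```python
import os


def env_int(name: str, default: int) -> int:
    """Read an integer environment variable, falling back to a default
    on empty or non-numeric values (mirrors the inline parsing above)."""
    raw = os.environ.get(name, "")
    if raw == "":
        return default
    try:
        return int(raw)
    except ValueError:
        return default


# Equivalent to the PGVECTOR_HNSW_M / PGVECTOR_IVFFLAT_LISTS handling:
hnsw_m = env_int("PGVECTOR_HNSW_M", 16)
ivfflat_lists = env_int("PGVECTOR_IVFFLAT_LISTS", 100)
```

The inline version in the diff additionally keeps a non-string default (`16`) when the variable is unset; the helper normalizes that by always returning an `int`.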
# Pinecone
PINECONE_API_KEY = os.environ.get("PINECONE_API_KEY", None)
PINECONE_ENVIRONMENT = os.environ.get("PINECONE_ENVIRONMENT", None)

@@ -2464,6 +2644,12 @@ DOCUMENT_INTELLIGENCE_KEY = PersistentConfig(
    os.getenv("DOCUMENT_INTELLIGENCE_KEY", ""),
)

MISTRAL_OCR_API_BASE_URL = PersistentConfig(
    "MISTRAL_OCR_API_BASE_URL",
    "rag.MISTRAL_OCR_API_BASE_URL",
    os.getenv("MISTRAL_OCR_API_BASE_URL", "https://api.mistral.ai/v1"),
)

MISTRAL_OCR_API_KEY = PersistentConfig(
    "MISTRAL_OCR_API_KEY",
    "rag.mistral_ocr_api_key",

@@ -2502,6 +2688,13 @@ ENABLE_RAG_HYBRID_SEARCH = PersistentConfig(
    os.environ.get("ENABLE_RAG_HYBRID_SEARCH", "").lower() == "true",
)

ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS = PersistentConfig(
    "ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS",
    "rag.enable_hybrid_search_enriched_texts",
    os.environ.get("ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS", "False").lower()
    == "true",
)

RAG_FULL_CONTEXT = PersistentConfig(
    "RAG_FULL_CONTEXT",
    "rag.full_context",

@@ -2689,10 +2882,6 @@ Provide a clear and direct response to the user's query, including inline citati
<context>
{{CONTEXT}}
</context>

<user_query>
{{QUERY}}
</user_query>
"""

RAG_TEMPLATE = PersistentConfig(

@@ -2745,6 +2934,26 @@ ENABLE_RAG_LOCAL_WEB_FETCH = (
    os.getenv("ENABLE_RAG_LOCAL_WEB_FETCH", "False").lower() == "true"
)


DEFAULT_WEB_FETCH_FILTER_LIST = [
    "!169.254.169.254",
    "!fd00:ec2::254",
    "!metadata.google.internal",
    "!metadata.azure.com",
    "!100.100.100.200",
]

web_fetch_filter_list = os.getenv("WEB_FETCH_FILTER_LIST", "")
if web_fetch_filter_list == "":
    web_fetch_filter_list = []
else:
    web_fetch_filter_list = [
        item.strip() for item in web_fetch_filter_list.split(",") if item.strip()
    ]

WEB_FETCH_FILTER_LIST = list(set(DEFAULT_WEB_FETCH_FILTER_LIST + web_fetch_filter_list))
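The web fetch filter logic above always merges a built-in blocklist of cloud metadata endpoints with the comma-separated `WEB_FETCH_FILTER_LIST` value. A minimal standalone sketch of that merge (the `build_filter_list` helper is hypothetical, not part of this diff):

```python
import os

# Built-in blocklist from the diff: cloud metadata endpoints that
# fetched web content must never be allowed to reach.
DEFAULT_WEB_FETCH_FILTER_LIST = [
    "!169.254.169.254",
    "!fd00:ec2::254",
    "!metadata.google.internal",
    "!metadata.azure.com",
    "!100.100.100.200",
]


def build_filter_list(raw: str) -> list:
    """Combine the default blocklist with a comma-separated env value,
    trimming whitespace and deduplicating the result."""
    extra = [item.strip() for item in raw.split(",") if item.strip()]
    return list(set(DEFAULT_WEB_FETCH_FILTER_LIST + extra))


filters = build_filter_list(os.environ.get("WEB_FETCH_FILTER_LIST", ""))
```

Because the defaults are concatenated before deduplication, user-supplied entries extend the blocklist but can never remove the built-in metadata-endpoint protections.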
YOUTUBE_LOADER_LANGUAGE = PersistentConfig(
    "YOUTUBE_LOADER_LANGUAGE",
    "rag.youtube_loader_language",

@@ -2803,6 +3012,7 @@ WEB_SEARCH_DOMAIN_FILTER_LIST = PersistentConfig(
        # "wikipedia.com",
        # "wikimedia.org",
        # "wikidata.org",
        # "!stackoverflow.com",
    ],
)

@@ -2974,6 +3184,24 @@ BING_SEARCH_V7_SUBSCRIPTION_KEY = PersistentConfig(
    os.environ.get("BING_SEARCH_V7_SUBSCRIPTION_KEY", ""),
)

AZURE_AI_SEARCH_API_KEY = PersistentConfig(
    "AZURE_AI_SEARCH_API_KEY",
    "rag.web.search.azure_ai_search_api_key",
    os.environ.get("AZURE_AI_SEARCH_API_KEY", ""),
)

AZURE_AI_SEARCH_ENDPOINT = PersistentConfig(
    "AZURE_AI_SEARCH_ENDPOINT",
    "rag.web.search.azure_ai_search_endpoint",
    os.environ.get("AZURE_AI_SEARCH_ENDPOINT", ""),
)

AZURE_AI_SEARCH_INDEX_NAME = PersistentConfig(
    "AZURE_AI_SEARCH_INDEX_NAME",
    "rag.web.search.azure_ai_search_index_name",
    os.environ.get("AZURE_AI_SEARCH_INDEX_NAME", ""),
)

EXA_API_KEY = PersistentConfig(
    "EXA_API_KEY",
    "rag.web.search.exa_api_key",

@@ -2998,6 +3226,12 @@ PERPLEXITY_SEARCH_CONTEXT_USAGE = PersistentConfig(
    os.getenv("PERPLEXITY_SEARCH_CONTEXT_USAGE", "medium"),
)

PERPLEXITY_SEARCH_API_URL = PersistentConfig(
    "PERPLEXITY_SEARCH_API_URL",
    "rag.web.search.perplexity_search_api_url",
    os.getenv("PERPLEXITY_SEARCH_API_URL", "https://api.perplexity.ai/search"),
)

SOUGOU_API_SID = PersistentConfig(
    "SOUGOU_API_SID",
    "rag.web.search.sougou_api_sid",

@@ -3074,16 +3308,30 @@ EXTERNAL_WEB_LOADER_API_KEY = PersistentConfig(
# Images
####################################

ENABLE_IMAGE_GENERATION = PersistentConfig(
    "ENABLE_IMAGE_GENERATION",
    "image_generation.enable",
    os.environ.get("ENABLE_IMAGE_GENERATION", "").lower() == "true",
)

IMAGE_GENERATION_ENGINE = PersistentConfig(
    "IMAGE_GENERATION_ENGINE",
    "image_generation.engine",
    os.getenv("IMAGE_GENERATION_ENGINE", "openai"),
)

ENABLE_IMAGE_GENERATION = PersistentConfig(
    "ENABLE_IMAGE_GENERATION",
    "image_generation.enable",
    os.environ.get("ENABLE_IMAGE_GENERATION", "").lower() == "true",
IMAGE_GENERATION_MODEL = PersistentConfig(
    "IMAGE_GENERATION_MODEL",
    "image_generation.model",
    os.getenv("IMAGE_GENERATION_MODEL", ""),
)

IMAGE_SIZE = PersistentConfig(
    "IMAGE_SIZE", "image_generation.size", os.getenv("IMAGE_SIZE", "512x512")
)

IMAGE_STEPS = PersistentConfig(
    "IMAGE_STEPS", "image_generation.steps", int(os.getenv("IMAGE_STEPS", 50))
)

ENABLE_IMAGE_PROMPT_GENERATION = PersistentConfig(

@@ -3103,35 +3351,16 @@ AUTOMATIC1111_API_AUTH = PersistentConfig(
    os.getenv("AUTOMATIC1111_API_AUTH", ""),
)

AUTOMATIC1111_CFG_SCALE = PersistentConfig(
    "AUTOMATIC1111_CFG_SCALE",
    "image_generation.automatic1111.cfg_scale",
    (
        float(os.environ.get("AUTOMATIC1111_CFG_SCALE"))
        if os.environ.get("AUTOMATIC1111_CFG_SCALE")
        else None
    ),
)
automatic1111_params = os.getenv("AUTOMATIC1111_PARAMS", "")
try:
    automatic1111_params = json.loads(automatic1111_params)
except json.JSONDecodeError:
    automatic1111_params = {}


AUTOMATIC1111_SAMPLER = PersistentConfig(
    "AUTOMATIC1111_SAMPLER",
    "image_generation.automatic1111.sampler",
    (
        os.environ.get("AUTOMATIC1111_SAMPLER")
        if os.environ.get("AUTOMATIC1111_SAMPLER")
        else None
    ),
)

AUTOMATIC1111_SCHEDULER = PersistentConfig(
    "AUTOMATIC1111_SCHEDULER",
    "image_generation.automatic1111.scheduler",
    (
        os.environ.get("AUTOMATIC1111_SCHEDULER")
        if os.environ.get("AUTOMATIC1111_SCHEDULER")
        else None
    ),
AUTOMATIC1111_PARAMS = PersistentConfig(
    "AUTOMATIC1111_PARAMS",
    "image_generation.automatic1111.api_params",
    automatic1111_params,
)

COMFYUI_BASE_URL = PersistentConfig(

@@ -3286,6 +3515,18 @@ IMAGES_OPENAI_API_KEY = PersistentConfig(
    os.getenv("IMAGES_OPENAI_API_KEY", OPENAI_API_KEY),
)

images_openai_params = os.getenv("IMAGES_OPENAI_PARAMS", "")
try:
    images_openai_params = json.loads(images_openai_params)
except json.JSONDecodeError:
    images_openai_params = {}


IMAGES_OPENAI_API_PARAMS = PersistentConfig(
    "IMAGES_OPENAI_API_PARAMS", "image_generation.openai.params", images_openai_params
)


IMAGES_GEMINI_API_BASE_URL = PersistentConfig(
    "IMAGES_GEMINI_API_BASE_URL",
    "image_generation.gemini.api_base_url",

@@ -3297,18 +3538,84 @@ IMAGES_GEMINI_API_KEY = PersistentConfig(
    os.getenv("IMAGES_GEMINI_API_KEY", GEMINI_API_KEY),
)

IMAGE_SIZE = PersistentConfig(
    "IMAGE_SIZE", "image_generation.size", os.getenv("IMAGE_SIZE", "512x512")
IMAGES_GEMINI_ENDPOINT_METHOD = PersistentConfig(
    "IMAGES_GEMINI_ENDPOINT_METHOD",
    "image_generation.gemini.endpoint_method",
    os.getenv("IMAGES_GEMINI_ENDPOINT_METHOD", ""),
)

IMAGE_STEPS = PersistentConfig(
    "IMAGE_STEPS", "image_generation.steps", int(os.getenv("IMAGE_STEPS", 50))
ENABLE_IMAGE_EDIT = PersistentConfig(
    "ENABLE_IMAGE_EDIT",
    "images.edit.enable",
    os.environ.get("ENABLE_IMAGE_EDIT", "").lower() == "true",
)

IMAGE_GENERATION_MODEL = PersistentConfig(
    "IMAGE_GENERATION_MODEL",
    "image_generation.model",
    os.getenv("IMAGE_GENERATION_MODEL", ""),
IMAGE_EDIT_ENGINE = PersistentConfig(
    "IMAGE_EDIT_ENGINE",
    "images.edit.engine",
    os.getenv("IMAGE_EDIT_ENGINE", "openai"),
)

IMAGE_EDIT_MODEL = PersistentConfig(
    "IMAGE_EDIT_MODEL",
    "images.edit.model",
    os.getenv("IMAGE_EDIT_MODEL", ""),
)

IMAGE_EDIT_SIZE = PersistentConfig(
    "IMAGE_EDIT_SIZE", "images.edit.size", os.getenv("IMAGE_EDIT_SIZE", "")
)

IMAGES_EDIT_OPENAI_API_BASE_URL = PersistentConfig(
    "IMAGES_EDIT_OPENAI_API_BASE_URL",
    "images.edit.openai.api_base_url",
    os.getenv("IMAGES_EDIT_OPENAI_API_BASE_URL", OPENAI_API_BASE_URL),
)
IMAGES_EDIT_OPENAI_API_VERSION = PersistentConfig(
    "IMAGES_EDIT_OPENAI_API_VERSION",
    "images.edit.openai.api_version",
    os.getenv("IMAGES_EDIT_OPENAI_API_VERSION", ""),
)

IMAGES_EDIT_OPENAI_API_KEY = PersistentConfig(
    "IMAGES_EDIT_OPENAI_API_KEY",
    "images.edit.openai.api_key",
    os.getenv("IMAGES_EDIT_OPENAI_API_KEY", OPENAI_API_KEY),
)

IMAGES_EDIT_GEMINI_API_BASE_URL = PersistentConfig(
    "IMAGES_EDIT_GEMINI_API_BASE_URL",
    "images.edit.gemini.api_base_url",
    os.getenv("IMAGES_EDIT_GEMINI_API_BASE_URL", GEMINI_API_BASE_URL),
)
IMAGES_EDIT_GEMINI_API_KEY = PersistentConfig(
    "IMAGES_EDIT_GEMINI_API_KEY",
    "images.edit.gemini.api_key",
    os.getenv("IMAGES_EDIT_GEMINI_API_KEY", GEMINI_API_KEY),
)


IMAGES_EDIT_COMFYUI_BASE_URL = PersistentConfig(
    "IMAGES_EDIT_COMFYUI_BASE_URL",
    "images.edit.comfyui.base_url",
    os.getenv("IMAGES_EDIT_COMFYUI_BASE_URL", ""),
)
IMAGES_EDIT_COMFYUI_API_KEY = PersistentConfig(
    "IMAGES_EDIT_COMFYUI_API_KEY",
    "images.edit.comfyui.api_key",
    os.getenv("IMAGES_EDIT_COMFYUI_API_KEY", ""),
)

IMAGES_EDIT_COMFYUI_WORKFLOW = PersistentConfig(
    "IMAGES_EDIT_COMFYUI_WORKFLOW",
    "images.edit.comfyui.workflow",
    os.getenv("IMAGES_EDIT_COMFYUI_WORKFLOW", ""),
)

IMAGES_EDIT_COMFYUI_WORKFLOW_NODES = PersistentConfig(
    "IMAGES_EDIT_COMFYUI_WORKFLOW_NODES",
    "images.edit.comfyui.nodes",
    [],
)

####################################

@@ -3414,6 +3721,24 @@ AUDIO_STT_AZURE_MAX_SPEAKERS = PersistentConfig(
    os.getenv("AUDIO_STT_AZURE_MAX_SPEAKERS", ""),
)

AUDIO_STT_MISTRAL_API_KEY = PersistentConfig(
    "AUDIO_STT_MISTRAL_API_KEY",
    "audio.stt.mistral.api_key",
    os.getenv("AUDIO_STT_MISTRAL_API_KEY", ""),
)

AUDIO_STT_MISTRAL_API_BASE_URL = PersistentConfig(
    "AUDIO_STT_MISTRAL_API_BASE_URL",
    "audio.stt.mistral.api_base_url",
    os.getenv("AUDIO_STT_MISTRAL_API_BASE_URL", "https://api.mistral.ai/v1"),
)

AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS = PersistentConfig(
    "AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS",
    "audio.stt.mistral.use_chat_completions",
    os.getenv("AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS", "false").lower() == "true",
)

AUDIO_TTS_OPENAI_API_BASE_URL = PersistentConfig(
    "AUDIO_TTS_OPENAI_API_BASE_URL",
    "audio.tts.openai.api_base_url",
@@ -45,7 +45,7 @@ class ERROR_MESSAGES(str, Enum):
    )
    INVALID_CRED = "The email or password provided is incorrect. Please check for typos and try logging in again."
    INVALID_EMAIL_FORMAT = "The email format you entered is invalid. Please double-check and make sure you're using a valid email address (e.g., yourname@example.com)."
    INVALID_PASSWORD = (
    INCORRECT_PASSWORD = (
        "The password provided is incorrect. Please check for typos and try again."
    )
    INVALID_TRUSTED_HEADER = "Your provider has not provided a trusted header. Please contact your administrator for assistance."

@@ -105,6 +105,10 @@ class ERROR_MESSAGES(str, Enum):
    )
    FILE_NOT_PROCESSED = "Extracted content is not available for this file. Please ensure that the file is processed before proceeding."

    INVALID_PASSWORD = lambda err="": (
        err if err else "The password does not meet the required validation criteria."
    )


class TASKS(str, Enum):
    def __str__(self) -> str:
@@ -8,6 +8,8 @@ import shutil
from uuid import uuid4
from pathlib import Path
from cryptography.hazmat.primitives import serialization
import re


import markdown
from bs4 import BeautifulSoup

@@ -135,6 +137,9 @@ else:
    PACKAGE_DATA = {"version": "0.0.0"}

VERSION = PACKAGE_DATA["version"]

DEPLOYMENT_ID = os.environ.get("DEPLOYMENT_ID", "")
INSTANCE_ID = os.environ.get("INSTANCE_ID", str(uuid4()))

@@ -426,6 +431,17 @@ WEBUI_AUTH_TRUSTED_GROUPS_HEADER = os.environ.get(
)


ENABLE_PASSWORD_VALIDATION = (
    os.environ.get("ENABLE_PASSWORD_VALIDATION", "False").lower() == "true"
)
PASSWORD_VALIDATION_REGEX_PATTERN = os.environ.get(
    "PASSWORD_VALIDATION_REGEX_PATTERN",
    r"^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[^\w\s]).{8,}$",
)

PASSWORD_VALIDATION_REGEX_PATTERN = re.compile(PASSWORD_VALIDATION_REGEX_PATTERN)
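The default pattern above uses four lookaheads to require one lowercase letter, one uppercase letter, one digit, and one symbol, with a minimum length of eight. A minimal sketch of how it behaves (the `is_valid_password` helper is illustrative, not part of this diff):

```python
import re

# Default pattern from the diff: at least one lowercase letter,
# one uppercase letter, one digit, one non-word non-space symbol,
# and a total length of 8 or more.
PASSWORD_VALIDATION_REGEX_PATTERN = re.compile(
    r"^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[^\w\s]).{8,}$"
)


def is_valid_password(password: str) -> bool:
    """Return True when the password satisfies the default policy."""
    return bool(PASSWORD_VALIDATION_REGEX_PATTERN.match(password))
```

Because the pattern comes from an environment variable, deployments can relax or tighten the policy without code changes; the lookahead structure keeps each requirement independent of character order.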
BYPASS_MODEL_ACCESS_CONTROL = (
    os.environ.get("BYPASS_MODEL_ACCESS_CONTROL", "False").lower() == "true"
)

@@ -493,7 +509,10 @@ OAUTH_SESSION_TOKEN_ENCRYPTION_KEY = os.environ.get(
# SCIM Configuration
####################################

SCIM_ENABLED = os.environ.get("SCIM_ENABLED", "False").lower() == "true"
ENABLE_SCIM = (
    os.environ.get("ENABLE_SCIM", os.environ.get("SCIM_ENABLED", "False")).lower()
    == "true"
)
SCIM_TOKEN = os.environ.get("SCIM_TOKEN", "")

####################################
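The SCIM change above renames `SCIM_ENABLED` to `ENABLE_SCIM` while keeping the old variable as a fallback, so existing deployments keep working. The nested `os.environ.get` trick generalizes; as a sketch (the `env_bool_with_alias` helper is hypothetical, not part of this diff):

```python
import os


def env_bool_with_alias(new_name: str, old_name: str, default: str = "False") -> bool:
    """Read a boolean env var under its new name, falling back to the
    deprecated name (as ENABLE_SCIM keeps honoring SCIM_ENABLED)."""
    return os.environ.get(new_name, os.environ.get(old_name, default)).lower() == "true"
```

The new name always wins when both are set, which gives operators a clean migration path: set `ENABLE_SCIM` and delete `SCIM_ENABLED` at leisure.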
@@ -541,6 +560,10 @@ else:
# CHAT
####################################

ENABLE_CHAT_RESPONSE_BASE64_IMAGE_URL_CONVERSION = (
    os.environ.get("REPLACE_IMAGE_URLS_IN_CHAT_RESPONSE", "False").lower() == "true"
)

CHAT_RESPONSE_STREAM_DELTA_CHUNK_SIZE = os.environ.get(
    "CHAT_RESPONSE_STREAM_DELTA_CHUNK_SIZE", "1"
)

@@ -569,6 +592,21 @@ else:
        CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES = 30


CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE = os.environ.get(
    "CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE", ""
)

if CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE == "":
    CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE = None
else:
    try:
        CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE = int(
            CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE
        )
    except Exception:
        CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE = None


####################################
# WEBSOCKET SUPPORT
####################################

@@ -580,6 +618,17 @@ ENABLE_WEBSOCKET_SUPPORT = (

WEBSOCKET_MANAGER = os.environ.get("WEBSOCKET_MANAGER", "")

WEBSOCKET_REDIS_OPTIONS = os.environ.get("WEBSOCKET_REDIS_OPTIONS", "")
if WEBSOCKET_REDIS_OPTIONS == "":
    log.debug("No WEBSOCKET_REDIS_OPTIONS provided, defaulting to None")
    WEBSOCKET_REDIS_OPTIONS = None
else:
    try:
        WEBSOCKET_REDIS_OPTIONS = json.loads(WEBSOCKET_REDIS_OPTIONS)
    except Exception:
        log.warning("Invalid WEBSOCKET_REDIS_OPTIONS, defaulting to None")
        WEBSOCKET_REDIS_OPTIONS = None
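`WEBSOCKET_REDIS_OPTIONS` above is a JSON-valued environment variable that degrades to `None` on empty or malformed input. A minimal sketch of that parse-or-None pattern (the `load_json_env` name is hypothetical, not part of this diff):

```python
import json
import os


def load_json_env(name: str):
    """Parse a JSON-valued environment variable, returning None for
    empty or invalid values (as done for WEBSOCKET_REDIS_OPTIONS)."""
    raw = os.environ.get(name, "")
    if raw == "":
        return None
    try:
        return json.loads(raw)
    except Exception:
        return None
```

A deployment might then set something like `WEBSOCKET_REDIS_OPTIONS='{"socket_timeout": 5}'` to pass extra keyword options through to the Redis client without any code change.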
WEBSOCKET_REDIS_URL = os.environ.get("WEBSOCKET_REDIS_URL", REDIS_URL)
WEBSOCKET_REDIS_CLUSTER = (
    os.environ.get("WEBSOCKET_REDIS_CLUSTER", str(REDIS_CLUSTER)).lower() == "true"

@@ -594,6 +643,23 @@ except ValueError:

WEBSOCKET_SENTINEL_HOSTS = os.environ.get("WEBSOCKET_SENTINEL_HOSTS", "")
WEBSOCKET_SENTINEL_PORT = os.environ.get("WEBSOCKET_SENTINEL_PORT", "26379")
WEBSOCKET_SERVER_LOGGING = (
    os.environ.get("WEBSOCKET_SERVER_LOGGING", "False").lower() == "true"
)
WEBSOCKET_SERVER_ENGINEIO_LOGGING = (
    os.environ.get("WEBSOCKET_SERVER_LOGGING", "False").lower() == "true"
)
WEBSOCKET_SERVER_PING_TIMEOUT = os.environ.get("WEBSOCKET_SERVER_PING_TIMEOUT", "20")
try:
    WEBSOCKET_SERVER_PING_TIMEOUT = int(WEBSOCKET_SERVER_PING_TIMEOUT)
except ValueError:
    WEBSOCKET_SERVER_PING_TIMEOUT = 20

WEBSOCKET_SERVER_PING_INTERVAL = os.environ.get("WEBSOCKET_SERVER_PING_INTERVAL", "25")
try:
    WEBSOCKET_SERVER_PING_INTERVAL = int(WEBSOCKET_SERVER_PING_INTERVAL)
except ValueError:
    WEBSOCKET_SERVER_PING_INTERVAL = 25


AIOHTTP_CLIENT_TIMEOUT = os.environ.get("AIOHTTP_CLIENT_TIMEOUT", "")

@@ -706,7 +772,9 @@ if OFFLINE_MODE:
# AUDIT LOGGING
####################################
# Where to store log file
AUDIT_LOGS_FILE_PATH = f"{DATA_DIR}/audit.log"
# Defaults to the DATA_DIR/audit.log. To set AUDIT_LOGS_FILE_PATH you need to
# provide the whole path, like: /app/audit.log
AUDIT_LOGS_FILE_PATH = os.getenv("AUDIT_LOGS_FILE_PATH", f"{DATA_DIR}/audit.log")
# Maximum size of a file before rotating into a new log file
AUDIT_LOG_FILE_ROTATION_SIZE = os.getenv("AUDIT_LOG_FILE_ROTATION_SIZE", "10MB")
@@ -146,9 +146,7 @@ from open_webui.config import (
    # Image
    AUTOMATIC1111_API_AUTH,
    AUTOMATIC1111_BASE_URL,
    AUTOMATIC1111_CFG_SCALE,
    AUTOMATIC1111_SAMPLER,
    AUTOMATIC1111_SCHEDULER,
    AUTOMATIC1111_PARAMS,
    COMFYUI_BASE_URL,
    COMFYUI_API_KEY,
    COMFYUI_WORKFLOW,

@@ -162,8 +160,23 @@ from open_webui.config import (
    IMAGES_OPENAI_API_BASE_URL,
    IMAGES_OPENAI_API_VERSION,
    IMAGES_OPENAI_API_KEY,
    IMAGES_OPENAI_API_PARAMS,
    IMAGES_GEMINI_API_BASE_URL,
    IMAGES_GEMINI_API_KEY,
    IMAGES_GEMINI_ENDPOINT_METHOD,
    ENABLE_IMAGE_EDIT,
    IMAGE_EDIT_ENGINE,
    IMAGE_EDIT_MODEL,
    IMAGE_EDIT_SIZE,
    IMAGES_EDIT_OPENAI_API_BASE_URL,
    IMAGES_EDIT_OPENAI_API_KEY,
    IMAGES_EDIT_OPENAI_API_VERSION,
    IMAGES_EDIT_GEMINI_API_BASE_URL,
    IMAGES_EDIT_GEMINI_API_KEY,
    IMAGES_EDIT_COMFYUI_BASE_URL,
    IMAGES_EDIT_COMFYUI_API_KEY,
    IMAGES_EDIT_COMFYUI_WORKFLOW,
    IMAGES_EDIT_COMFYUI_WORKFLOW_NODES,
    # Audio
    AUDIO_STT_ENGINE,
    AUDIO_STT_MODEL,

@@ -175,6 +188,9 @@ from open_webui.config import (
    AUDIO_STT_AZURE_LOCALES,
    AUDIO_STT_AZURE_BASE_URL,
    AUDIO_STT_AZURE_MAX_SPEAKERS,
    AUDIO_STT_MISTRAL_API_KEY,
    AUDIO_STT_MISTRAL_API_BASE_URL,
    AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS,
    AUDIO_TTS_ENGINE,
    AUDIO_TTS_MODEL,
    AUDIO_TTS_VOICE,

@@ -266,6 +282,7 @@ from open_webui.config import (
    DOCLING_PICTURE_DESCRIPTION_API,
    DOCUMENT_INTELLIGENCE_ENDPOINT,
    DOCUMENT_INTELLIGENCE_KEY,
    MISTRAL_OCR_API_BASE_URL,
    MISTRAL_OCR_API_KEY,
    RAG_TEXT_SPLITTER,
    TIKTOKEN_ENCODING_NAME,

@@ -304,6 +321,7 @@ from open_webui.config import (
    PERPLEXITY_API_KEY,
    PERPLEXITY_MODEL,
    PERPLEXITY_SEARCH_CONTEXT_USAGE,
    PERPLEXITY_SEARCH_API_URL,
    SOUGOU_API_SID,
    SOUGOU_API_SK,
    KAGI_SEARCH_API_KEY,

@@ -321,6 +339,7 @@ from open_webui.config import (
    ENABLE_ONEDRIVE_PERSONAL,
    ENABLE_ONEDRIVE_BUSINESS,
    ENABLE_RAG_HYBRID_SEARCH,
    ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS,
    ENABLE_RAG_LOCAL_WEB_FETCH,
    ENABLE_WEB_LOADER_SSL_VERIFICATION,
    ENABLE_GOOGLE_DRIVE_INTEGRATION,

@@ -339,9 +358,9 @@ from open_webui.config import (
    JWT_EXPIRES_IN,
    ENABLE_SIGNUP,
    ENABLE_LOGIN_FORM,
    ENABLE_API_KEY,
    ENABLE_API_KEY_ENDPOINT_RESTRICTIONS,
    API_KEY_ALLOWED_ENDPOINTS,
    ENABLE_API_KEYS,
    ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS,
    API_KEYS_ALLOWED_ENDPOINTS,
    ENABLE_CHANNELS,
    ENABLE_NOTES,
    ENABLE_COMMUNITY_SHARING,

@@ -351,10 +370,12 @@ from open_webui.config import (
    BYPASS_ADMIN_ACCESS_CONTROL,
    USER_PERMISSIONS,
    DEFAULT_USER_ROLE,
    DEFAULT_GROUP_ID,
    PENDING_USER_OVERLAY_CONTENT,
    PENDING_USER_OVERLAY_TITLE,
    DEFAULT_PROMPT_SUGGESTIONS,
    DEFAULT_MODELS,
    DEFAULT_PINNED_MODELS,
    DEFAULT_ARENA_MODEL,
    MODEL_ORDER_LIST,
    EVALUATION_ARENA_MODELS,

@@ -413,6 +434,7 @@ from open_webui.config import (
    TAGS_GENERATION_PROMPT_TEMPLATE,
    IMAGE_PROMPT_GENERATION_PROMPT_TEMPLATE,
    TOOLS_FUNCTION_CALLING_PROMPT_TEMPLATE,
    VOICE_MODE_PROMPT_TEMPLATE,
    QUERY_GENERATION_PROMPT_TEMPLATE,
    AUTOCOMPLETE_GENERATION_PROMPT_TEMPLATE,
    AUTOCOMPLETE_GENERATION_INPUT_MAX_LENGTH,

@@ -434,6 +456,7 @@ from open_webui.env import (
    SAFE_MODE,
    SRC_LOG_LEVELS,
    VERSION,
    DEPLOYMENT_ID,
    INSTANCE_ID,
    WEBUI_BUILD_HASH,
    WEBUI_SECRET_KEY,

@@ -444,7 +467,7 @@ from open_webui.env import (
    WEBUI_AUTH_TRUSTED_NAME_HEADER,
    WEBUI_AUTH_SIGNOUT_REDIRECT_URL,
    # SCIM
    SCIM_ENABLED,
    ENABLE_SCIM,
    SCIM_TOKEN,
    ENABLE_COMPRESSION_MIDDLEWARE,
    ENABLE_WEBSOCKET_SUPPORT,

@@ -700,7 +723,7 @@ app.state.config.ENABLE_DIRECT_CONNECTIONS = ENABLE_DIRECT_CONNECTIONS
#
########################################

app.state.SCIM_ENABLED = SCIM_ENABLED
app.state.ENABLE_SCIM = ENABLE_SCIM
app.state.SCIM_TOKEN = SCIM_TOKEN

########################################

@@ -722,11 +745,11 @@ app.state.config.WEBUI_URL = WEBUI_URL
app.state.config.ENABLE_SIGNUP = ENABLE_SIGNUP
app.state.config.ENABLE_LOGIN_FORM = ENABLE_LOGIN_FORM

app.state.config.ENABLE_API_KEY = ENABLE_API_KEY
app.state.config.ENABLE_API_KEY_ENDPOINT_RESTRICTIONS = (
    ENABLE_API_KEY_ENDPOINT_RESTRICTIONS
app.state.config.ENABLE_API_KEYS = ENABLE_API_KEYS
app.state.config.ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS = (
    ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS
)
app.state.config.API_KEY_ALLOWED_ENDPOINTS = API_KEY_ALLOWED_ENDPOINTS
app.state.config.API_KEYS_ALLOWED_ENDPOINTS = API_KEYS_ALLOWED_ENDPOINTS

app.state.config.JWT_EXPIRES_IN = JWT_EXPIRES_IN

@@ -735,8 +758,13 @@ app.state.config.ADMIN_EMAIL = ADMIN_EMAIL

app.state.config.DEFAULT_MODELS = DEFAULT_MODELS
app.state.config.DEFAULT_PINNED_MODELS = DEFAULT_PINNED_MODELS
app.state.config.MODEL_ORDER_LIST = MODEL_ORDER_LIST

app.state.config.DEFAULT_PROMPT_SUGGESTIONS = DEFAULT_PROMPT_SUGGESTIONS
app.state.config.DEFAULT_USER_ROLE = DEFAULT_USER_ROLE
app.state.config.DEFAULT_GROUP_ID = DEFAULT_GROUP_ID

app.state.config.PENDING_USER_OVERLAY_CONTENT = PENDING_USER_OVERLAY_CONTENT
app.state.config.PENDING_USER_OVERLAY_TITLE = PENDING_USER_OVERLAY_TITLE

@@ -746,7 +774,6 @@ app.state.config.RESPONSE_WATERMARK = RESPONSE_WATERMARK
app.state.config.USER_PERMISSIONS = USER_PERMISSIONS
app.state.config.WEBHOOK_URL = WEBHOOK_URL
app.state.config.BANNERS = WEBUI_BANNERS
app.state.config.MODEL_ORDER_LIST = MODEL_ORDER_LIST

app.state.config.ENABLE_CHANNELS = ENABLE_CHANNELS

@@ -824,6 +851,9 @@ app.state.config.FILE_IMAGE_COMPRESSION_HEIGHT = FILE_IMAGE_COMPRESSION_HEIGHT
app.state.config.RAG_FULL_CONTEXT = RAG_FULL_CONTEXT
app.state.config.BYPASS_EMBEDDING_AND_RETRIEVAL = BYPASS_EMBEDDING_AND_RETRIEVAL
app.state.config.ENABLE_RAG_HYBRID_SEARCH = ENABLE_RAG_HYBRID_SEARCH
app.state.config.ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS = (
    ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS
)
app.state.config.ENABLE_WEB_LOADER_SSL_VERIFICATION = ENABLE_WEB_LOADER_SSL_VERIFICATION

app.state.config.CONTENT_EXTRACTION_ENGINE = CONTENT_EXTRACTION_ENGINE

@@ -858,6 +888,7 @@ app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL = DOCLING_PICTURE_DESCRIPTION
app.state.config.DOCLING_PICTURE_DESCRIPTION_API = DOCLING_PICTURE_DESCRIPTION_API
app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT = DOCUMENT_INTELLIGENCE_ENDPOINT
app.state.config.DOCUMENT_INTELLIGENCE_KEY = DOCUMENT_INTELLIGENCE_KEY
app.state.config.MISTRAL_OCR_API_BASE_URL = MISTRAL_OCR_API_BASE_URL
app.state.config.MISTRAL_OCR_API_KEY = MISTRAL_OCR_API_KEY
app.state.config.MINERU_API_MODE = MINERU_API_MODE
app.state.config.MINERU_API_URL = MINERU_API_URL

@@ -942,6 +973,7 @@ app.state.config.EXA_API_KEY = EXA_API_KEY
app.state.config.PERPLEXITY_API_KEY = PERPLEXITY_API_KEY
app.state.config.PERPLEXITY_MODEL = PERPLEXITY_MODEL
app.state.config.PERPLEXITY_SEARCH_CONTEXT_USAGE = PERPLEXITY_SEARCH_CONTEXT_USAGE
app.state.config.PERPLEXITY_SEARCH_API_URL = PERPLEXITY_SEARCH_API_URL
app.state.config.SOUGOU_API_SID = SOUGOU_API_SID
app.state.config.SOUGOU_API_SK = SOUGOU_API_SK
app.state.config.EXTERNAL_WEB_SEARCH_URL = EXTERNAL_WEB_SEARCH_URL

@@ -1064,27 +1096,42 @@ app.state.config.IMAGE_GENERATION_ENGINE = IMAGE_GENERATION_ENGINE
app.state.config.ENABLE_IMAGE_GENERATION = ENABLE_IMAGE_GENERATION
app.state.config.ENABLE_IMAGE_PROMPT_GENERATION = ENABLE_IMAGE_PROMPT_GENERATION

app.state.config.IMAGE_GENERATION_MODEL = IMAGE_GENERATION_MODEL
app.state.config.IMAGE_SIZE = IMAGE_SIZE
app.state.config.IMAGE_STEPS = IMAGE_STEPS

app.state.config.IMAGES_OPENAI_API_BASE_URL = IMAGES_OPENAI_API_BASE_URL
app.state.config.IMAGES_OPENAI_API_VERSION = IMAGES_OPENAI_API_VERSION
app.state.config.IMAGES_OPENAI_API_KEY = IMAGES_OPENAI_API_KEY
app.state.config.IMAGES_OPENAI_API_PARAMS = IMAGES_OPENAI_API_PARAMS

app.state.config.IMAGES_GEMINI_API_BASE_URL = IMAGES_GEMINI_API_BASE_URL
app.state.config.IMAGES_GEMINI_API_KEY = IMAGES_GEMINI_API_KEY

app.state.config.IMAGE_GENERATION_MODEL = IMAGE_GENERATION_MODEL
app.state.config.IMAGES_GEMINI_ENDPOINT_METHOD = IMAGES_GEMINI_ENDPOINT_METHOD

app.state.config.AUTOMATIC1111_BASE_URL = AUTOMATIC1111_BASE_URL
app.state.config.AUTOMATIC1111_API_AUTH = AUTOMATIC1111_API_AUTH
app.state.config.AUTOMATIC1111_CFG_SCALE = AUTOMATIC1111_CFG_SCALE
app.state.config.AUTOMATIC1111_SAMPLER = AUTOMATIC1111_SAMPLER
app.state.config.AUTOMATIC1111_SCHEDULER = AUTOMATIC1111_SCHEDULER
app.state.config.AUTOMATIC1111_PARAMS = AUTOMATIC1111_PARAMS

app.state.config.COMFYUI_BASE_URL = COMFYUI_BASE_URL
app.state.config.COMFYUI_API_KEY = COMFYUI_API_KEY
app.state.config.COMFYUI_WORKFLOW = COMFYUI_WORKFLOW
app.state.config.COMFYUI_WORKFLOW_NODES = COMFYUI_WORKFLOW_NODES

app.state.config.IMAGE_SIZE = IMAGE_SIZE
app.state.config.IMAGE_STEPS = IMAGE_STEPS

app.state.config.ENABLE_IMAGE_EDIT = ENABLE_IMAGE_EDIT
app.state.config.IMAGE_EDIT_ENGINE = IMAGE_EDIT_ENGINE
app.state.config.IMAGE_EDIT_MODEL = IMAGE_EDIT_MODEL
app.state.config.IMAGE_EDIT_SIZE = IMAGE_EDIT_SIZE
app.state.config.IMAGES_EDIT_OPENAI_API_BASE_URL = IMAGES_EDIT_OPENAI_API_BASE_URL
app.state.config.IMAGES_EDIT_OPENAI_API_KEY = IMAGES_EDIT_OPENAI_API_KEY
app.state.config.IMAGES_EDIT_OPENAI_API_VERSION = IMAGES_EDIT_OPENAI_API_VERSION
app.state.config.IMAGES_EDIT_GEMINI_API_BASE_URL = IMAGES_EDIT_GEMINI_API_BASE_URL
app.state.config.IMAGES_EDIT_GEMINI_API_KEY = IMAGES_EDIT_GEMINI_API_KEY
app.state.config.IMAGES_EDIT_COMFYUI_BASE_URL = IMAGES_EDIT_COMFYUI_BASE_URL
app.state.config.IMAGES_EDIT_COMFYUI_API_KEY = IMAGES_EDIT_COMFYUI_API_KEY
app.state.config.IMAGES_EDIT_COMFYUI_WORKFLOW = IMAGES_EDIT_COMFYUI_WORKFLOW
app.state.config.IMAGES_EDIT_COMFYUI_WORKFLOW_NODES = IMAGES_EDIT_COMFYUI_WORKFLOW_NODES


########################################

@@ -1110,6 +1157,12 @@ app.state.config.AUDIO_STT_AZURE_LOCALES = AUDIO_STT_AZURE_LOCALES
app.state.config.AUDIO_STT_AZURE_BASE_URL = AUDIO_STT_AZURE_BASE_URL
app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS = AUDIO_STT_AZURE_MAX_SPEAKERS

app.state.config.AUDIO_STT_MISTRAL_API_KEY = AUDIO_STT_MISTRAL_API_KEY
app.state.config.AUDIO_STT_MISTRAL_API_BASE_URL = AUDIO_STT_MISTRAL_API_BASE_URL
app.state.config.AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS = (
    AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS
)

app.state.config.TTS_ENGINE = AUDIO_TTS_ENGINE
app.state.config.TTS_MODEL = AUDIO_TTS_MODEL

@@ -1171,6 +1224,7 @@ app.state.config.AUTOCOMPLETE_GENERATION_PROMPT_TEMPLATE = (
app.state.config.AUTOCOMPLETE_GENERATION_INPUT_MAX_LENGTH = (
    AUTOCOMPLETE_GENERATION_INPUT_MAX_LENGTH
)
app.state.config.VOICE_MODE_PROMPT_TEMPLATE = VOICE_MODE_PROMPT_TEMPLATE


########################################

@@ -1181,6 +1235,10 @@ app.state.config.AUTOCOMPLETE_GENERATION_INPUT_MAX_LENGTH = (

app.state.MODELS = {}

# Add the middleware to the app
if ENABLE_COMPRESSION_MIDDLEWARE:
    app.add_middleware(CompressMiddleware)


class RedirectMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next):

@@ -1222,14 +1280,53 @@ class RedirectMiddleware(BaseHTTPMiddleware):
        return response


# Add the middleware to the app
if ENABLE_COMPRESSION_MIDDLEWARE:
    app.add_middleware(CompressMiddleware)

app.add_middleware(RedirectMiddleware)
app.add_middleware(SecurityHeadersMiddleware)


class APIKeyRestrictionMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next):
        auth_header = request.headers.get("Authorization")
        token = None

        if auth_header:
            scheme, token = auth_header.split(" ")

        # Only apply restrictions if an sk- API key is used
        if token and token.startswith("sk-"):
|
||||
# Check if restrictions are enabled
|
||||
if request.app.state.config.ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS:
|
||||
allowed_paths = [
|
||||
path.strip()
|
||||
for path in str(
|
||||
request.app.state.config.API_KEYS_ALLOWED_ENDPOINTS
|
||||
).split(",")
|
||||
if path.strip()
|
||||
]
|
||||
|
||||
request_path = request.url.path
|
||||
|
||||
# Match exact path or prefix path
|
||||
is_allowed = any(
|
||||
request_path == allowed or request_path.startswith(allowed + "/")
|
||||
for allowed in allowed_paths
|
||||
)
|
||||
|
||||
if not is_allowed:
|
||||
return JSONResponse(
|
||||
status_code=status.HTTP_403_FORBIDDEN,
|
||||
content={
|
||||
"detail": "API key not allowed to access this endpoint."
|
||||
},
|
||||
)
|
||||
|
||||
response = await call_next(request)
|
||||
return response
|
||||
|
||||
|
||||
app.add_middleware(APIKeyRestrictionMiddleware)
|
||||
|
||||
|
||||
@app.middleware("http")
|
||||
async def commit_session_after_request(request: Request, call_next):
|
||||
response = await call_next(request)
|
||||
|
|
@ -1245,7 +1342,7 @@ async def check_url(request: Request, call_next):
|
|||
request.headers.get("Authorization")
|
||||
)
|
||||
|
||||
request.state.enable_api_key = app.state.config.ENABLE_API_KEY
|
||||
request.state.enable_api_keys = app.state.config.ENABLE_API_KEYS
|
||||
response = await call_next(request)
|
||||
process_time = int(time.time()) - start_time
|
||||
response.headers["X-Process-Time"] = str(process_time)
|
||||
|
|
@ -1320,7 +1417,7 @@ app.include_router(
|
|||
app.include_router(utils.router, prefix="/api/v1/utils", tags=["utils"])
|
||||
|
||||
# SCIM 2.0 API for identity management
|
||||
if SCIM_ENABLED:
|
||||
if ENABLE_SCIM:
|
||||
app.include_router(scim.router, prefix="/api/v1/scim/v2", tags=["scim"])
|
||||
|
||||
|
||||
|
|
@ -1357,6 +1454,10 @@ async def get_models(
|
|||
if "pipeline" in model and model["pipeline"].get("type", None) == "filter":
|
||||
continue
|
||||
|
||||
# Remove profile image URL to reduce payload size
|
||||
if model.get("info", {}).get("meta", {}).get("profile_image_url"):
|
||||
model["info"]["meta"].pop("profile_image_url", None)
|
||||
|
||||
try:
|
||||
model_tags = [
|
||||
tag.get("name")
|
||||
|
|
@ -1479,6 +1580,9 @@ async def chat_completion(
|
|||
reasoning_tags = form_data.get("params", {}).get("reasoning_tags")
|
||||
|
||||
# Model Params
|
||||
if model_info_params.get("stream_response") is not None:
|
||||
form_data["stream"] = model_info_params.get("stream_response")
|
||||
|
||||
if model_info_params.get("stream_delta_chunk_size"):
|
||||
stream_delta_chunk_size = model_info_params.get("stream_delta_chunk_size")
|
||||
|
||||
|
|
@ -1748,7 +1852,7 @@ async def get_app_config(request: Request):
|
|||
"auth_trusted_header": bool(app.state.AUTH_TRUSTED_EMAIL_HEADER),
|
||||
"enable_signup_password_confirmation": ENABLE_SIGNUP_PASSWORD_CONFIRMATION,
|
||||
"enable_ldap": app.state.config.ENABLE_LDAP,
|
||||
"enable_api_key": app.state.config.ENABLE_API_KEY,
|
||||
"enable_api_keys": app.state.config.ENABLE_API_KEYS,
|
||||
"enable_signup": app.state.config.ENABLE_SIGNUP,
|
||||
"enable_login_form": app.state.config.ENABLE_LOGIN_FORM,
|
||||
"enable_websocket": ENABLE_WEBSOCKET_SUPPORT,
|
||||
|
|
@ -1786,6 +1890,7 @@ async def get_app_config(request: Request):
|
|||
**(
|
||||
{
|
||||
"default_models": app.state.config.DEFAULT_MODELS,
|
||||
"default_pinned_models": app.state.config.DEFAULT_PINNED_MODELS,
|
||||
"default_prompt_suggestions": app.state.config.DEFAULT_PROMPT_SUGGESTIONS,
|
||||
"user_count": user_count,
|
||||
"code": {
|
||||
|
|
@ -1887,6 +1992,7 @@ async def update_webhook_url(form_data: UrlForm, user=Depends(get_admin_user)):
|
|||
async def get_app_version():
|
||||
return {
|
||||
"version": VERSION,
|
||||
"deployment_id": DEPLOYMENT_ID,
|
||||
}
|
||||
|
||||
|
||||
|
|
|
|||
|
|
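The endpoint allow-list check in `APIKeyRestrictionMiddleware` above can be exercised in isolation. The sketch below is hypothetical (the `is_path_allowed` helper does not exist in the codebase); it reproduces the same rule: an `sk-` key may reach an endpoint only on an exact match or a prefix match at a `/` segment boundary.

```python
def is_path_allowed(request_path: str, allowed_endpoints: str) -> bool:
    """Mirror the middleware's matching rule against a comma-separated
    allow-list: exact match, or prefix match on a path-segment boundary
    (so "/api/v1" allows "/api/v1/models" but not "/api/v10")."""
    allowed_paths = [p.strip() for p in allowed_endpoints.split(",") if p.strip()]
    return any(
        request_path == allowed or request_path.startswith(allowed + "/")
        for allowed in allowed_paths
    )


print(is_path_allowed("/api/v1/models", "/api/v1, /health"))  # True
print(is_path_allowed("/api/v10", "/api/v1"))  # False
```

The `+ "/"` in the prefix check is what prevents `/api/v1` from accidentally allowing `/api/v10`.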
@@ -0,0 +1,146 @@
"""add_group_member_table

Revision ID: 37f288994c47
Revises: a5c220713937
Create Date: 2025-11-17 03:45:25.123939

"""

import uuid
import time
import json
from typing import Sequence, Union

from alembic import op
import sqlalchemy as sa


# revision identifiers, used by Alembic.
revision: str = "37f288994c47"
down_revision: Union[str, None] = "a5c220713937"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None


def upgrade() -> None:
    # 1. Create new table
    op.create_table(
        "group_member",
        sa.Column("id", sa.Text(), primary_key=True, unique=True, nullable=False),
        sa.Column(
            "group_id",
            sa.Text(),
            sa.ForeignKey("group.id", ondelete="CASCADE"),
            nullable=False,
        ),
        sa.Column(
            "user_id",
            sa.Text(),
            sa.ForeignKey("user.id", ondelete="CASCADE"),
            nullable=False,
        ),
        sa.Column("created_at", sa.BigInteger(), nullable=True),
        sa.Column("updated_at", sa.BigInteger(), nullable=True),
        sa.UniqueConstraint("group_id", "user_id", name="uq_group_member_group_user"),
    )

    connection = op.get_bind()

    # 2. Read existing group with user_ids JSON column
    group_table = sa.Table(
        "group",
        sa.MetaData(),
        sa.Column("id", sa.Text()),
        sa.Column("user_ids", sa.JSON()),  # JSON stored as text in SQLite + PG
    )

    results = connection.execute(
        sa.select(group_table.c.id, group_table.c.user_ids)
    ).fetchall()

    print(results)

    # 3. Insert members into group_member table
    gm_table = sa.Table(
        "group_member",
        sa.MetaData(),
        sa.Column("id", sa.Text()),
        sa.Column("group_id", sa.Text()),
        sa.Column("user_id", sa.Text()),
        sa.Column("created_at", sa.BigInteger()),
        sa.Column("updated_at", sa.BigInteger()),
    )

    now = int(time.time())
    for group_id, user_ids in results:
        if not user_ids:
            continue

        if isinstance(user_ids, str):
            try:
                user_ids = json.loads(user_ids)
            except Exception:
                continue  # skip invalid JSON

        if not isinstance(user_ids, list):
            continue

        rows = [
            {
                "id": str(uuid.uuid4()),
                "group_id": group_id,
                "user_id": uid,
                "created_at": now,
                "updated_at": now,
            }
            for uid in user_ids
        ]

        if rows:
            connection.execute(gm_table.insert(), rows)

    # 4. Optionally drop the old column
    with op.batch_alter_table("group") as batch:
        batch.drop_column("user_ids")


def downgrade():
    # Reverse: restore user_ids column
    with op.batch_alter_table("group") as batch:
        batch.add_column(sa.Column("user_ids", sa.JSON()))

    connection = op.get_bind()
    gm_table = sa.Table(
        "group_member",
        sa.MetaData(),
        sa.Column("group_id", sa.Text()),
        sa.Column("user_id", sa.Text()),
        sa.Column("created_at", sa.BigInteger()),
        sa.Column("updated_at", sa.BigInteger()),
    )

    group_table = sa.Table(
        "group",
        sa.MetaData(),
        sa.Column("id", sa.Text()),
        sa.Column("user_ids", sa.JSON()),
    )

    # Build JSON arrays again
    results = connection.execute(sa.select(group_table.c.id)).fetchall()

    for (group_id,) in results:
        members = connection.execute(
            sa.select(gm_table.c.user_id).where(gm_table.c.group_id == group_id)
        ).fetchall()

        member_ids = [m[0] for m in members]

        connection.execute(
            group_table.update()
            .where(group_table.c.id == group_id)
            .values(user_ids=member_ids)
        )

    # Drop the new table
    op.drop_table("group_member")
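The normalization step of the migration's `upgrade()` loop (a `user_ids` JSON value that may arrive as a Python list or as a JSON-encoded string, converted into `group_member` rows) can be sketched standalone. The `user_ids_to_rows` helper below is hypothetical, extracted for illustration only:

```python
import json
import time
import uuid


def user_ids_to_rows(group_id, user_ids, now=None):
    """Mirror the migration loop: normalize a user_ids JSON value
    (list, or JSON-encoded string) into group_member row dicts."""
    now = now if now is not None else int(time.time())

    if isinstance(user_ids, str):
        try:
            user_ids = json.loads(user_ids)
        except Exception:
            return []  # skip invalid JSON, as the migration does

    if not isinstance(user_ids, list):
        return []

    return [
        {
            "id": str(uuid.uuid4()),
            "group_id": group_id,
            "user_id": uid,
            "created_at": now,
            "updated_at": now,
        }
        for uid in user_ids
    ]


rows = user_ids_to_rows("g1", '["u1", "u2"]', now=0)
print([r["user_id"] for r in rows])  # ['u1', 'u2']
```

Handling both the already-decoded list and the string form matters because SQLite and PostgreSQL drivers may return the JSON column differently.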
@@ -7,7 +7,6 @@ from open_webui.models.users import UserModel, Users
from open_webui.env import SRC_LOG_LEVELS
from pydantic import BaseModel
from sqlalchemy import Boolean, Column, String, Text
from open_webui.utils.auth import verify_password

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["MODELS"])

@@ -20,7 +19,7 @@ log.setLevel(SRC_LOG_LEVELS["MODELS"])
class Auth(Base):
    __tablename__ = "auth"

    id = Column(String, primary_key=True)
    id = Column(String, primary_key=True, unique=True)
    email = Column(String)
    password = Column(Text)
    active = Column(Boolean)

@@ -122,7 +121,9 @@ class AuthsTable:
        else:
            return None

    def authenticate_user(self, email: str, password: str) -> Optional[UserModel]:
    def authenticate_user(
        self, email: str, verify_password: callable
    ) -> Optional[UserModel]:
        log.info(f"authenticate_user: {email}")

        user = Users.get_user_by_email(email)

@@ -133,7 +134,7 @@ class AuthsTable:
        with get_db() as db:
            auth = db.query(Auth).filter_by(id=user.id, active=True).first()
            if auth:
                if verify_password(password, auth.password):
                if verify_password(auth.password):
                    return user
            else:
                return None
@@ -19,7 +19,7 @@ from sqlalchemy.sql import exists
class Channel(Base):
    __tablename__ = "channel"

    id = Column(Text, primary_key=True)
    id = Column(Text, primary_key=True, unique=True)
    user_id = Column(Text)
    type = Column(Text, nullable=True)
@@ -26,7 +26,7 @@ log.setLevel(SRC_LOG_LEVELS["MODELS"])
class Chat(Base):
    __tablename__ = "chat"

    id = Column(String, primary_key=True)
    id = Column(String, primary_key=True, unique=True)
    user_id = Column(String)
    title = Column(Text)
    chat = Column(JSON)

@@ -92,6 +92,10 @@ class ChatImportForm(ChatForm):
    updated_at: Optional[int] = None


class ChatsImportForm(BaseModel):
    chats: list[ChatImportForm]


class ChatTitleMessagesForm(BaseModel):
    title: str
    messages: list[dict]

@@ -123,6 +127,43 @@ class ChatTitleIdResponse(BaseModel):


class ChatTable:
    def _clean_null_bytes(self, obj):
        """
        Recursively remove actual null bytes (\x00) and unicode escape \\u0000
        from strings inside dict/list structures.
        Safe for JSON objects.
        """
        if isinstance(obj, str):
            return obj.replace("\x00", "").replace("\u0000", "")
        elif isinstance(obj, dict):
            return {k: self._clean_null_bytes(v) for k, v in obj.items()}
        elif isinstance(obj, list):
            return [self._clean_null_bytes(v) for v in obj]
        return obj

    def _sanitize_chat_row(self, chat_item):
        """
        Clean a Chat SQLAlchemy model's title + chat JSON,
        and return True if anything changed.
        """
        changed = False

        # Clean title
        if chat_item.title:
            cleaned = self._clean_null_bytes(chat_item.title)
            if cleaned != chat_item.title:
                chat_item.title = cleaned
                changed = True

        # Clean JSON
        if chat_item.chat:
            cleaned = self._clean_null_bytes(chat_item.chat)
            if cleaned != chat_item.chat:
                chat_item.chat = cleaned
                changed = True

        return changed

    def insert_new_chat(self, user_id: str, form_data: ChatForm) -> Optional[ChatModel]:
        with get_db() as db:
            id = str(uuid.uuid4())

@@ -130,68 +171,76 @@ class ChatTable:
                **{
                    "id": id,
                    "user_id": user_id,
                    "title": (
                    "title": self._clean_null_bytes(
                        form_data.chat["title"]
                        if "title" in form_data.chat
                        else "New Chat"
                    ),
                    "chat": form_data.chat,
                    "chat": self._clean_null_bytes(form_data.chat),
                    "folder_id": form_data.folder_id,
                    "created_at": int(time.time()),
                    "updated_at": int(time.time()),
                }
            )

            result = Chat(**chat.model_dump())
            db.add(result)
            chat_item = Chat(**chat.model_dump())
            db.add(chat_item)
            db.commit()
            db.refresh(result)
            return ChatModel.model_validate(result) if result else None
            db.refresh(chat_item)
            return ChatModel.model_validate(chat_item) if chat_item else None

    def import_chat(
    def _chat_import_form_to_chat_model(
        self, user_id: str, form_data: ChatImportForm
    ) -> Optional[ChatModel]:
        with get_db() as db:
            id = str(uuid.uuid4())
            chat = ChatModel(
                **{
                    "id": id,
                    "user_id": user_id,
                    "title": (
                        form_data.chat["title"]
                        if "title" in form_data.chat
                        else "New Chat"
                    ),
                    "chat": form_data.chat,
                    "meta": form_data.meta,
                    "pinned": form_data.pinned,
                    "folder_id": form_data.folder_id,
                    "created_at": (
                        form_data.created_at
                        if form_data.created_at
                        else int(time.time())
                    ),
                    "updated_at": (
                        form_data.updated_at
                        if form_data.updated_at
                        else int(time.time())
                    ),
                }
            )
    ) -> ChatModel:
        id = str(uuid.uuid4())
        chat = ChatModel(
            **{
                "id": id,
                "user_id": user_id,
                "title": self._clean_null_bytes(
                    form_data.chat["title"] if "title" in form_data.chat else "New Chat"
                ),
                "chat": self._clean_null_bytes(form_data.chat),
                "meta": form_data.meta,
                "pinned": form_data.pinned,
                "folder_id": form_data.folder_id,
                "created_at": (
                    form_data.created_at if form_data.created_at else int(time.time())
                ),
                "updated_at": (
                    form_data.updated_at if form_data.updated_at else int(time.time())
                ),
            }
        )
        return chat

            result = Chat(**chat.model_dump())
            db.add(result)
    def import_chats(
        self, user_id: str, chat_import_forms: list[ChatImportForm]
    ) -> list[ChatModel]:
        with get_db() as db:
            chats = []

            for form_data in chat_import_forms:
                chat = self._chat_import_form_to_chat_model(user_id, form_data)
                chats.append(Chat(**chat.model_dump()))

            db.add_all(chats)
            db.commit()
            db.refresh(result)
            return ChatModel.model_validate(result) if result else None
            return [ChatModel.model_validate(chat) for chat in chats]

    def update_chat_by_id(self, id: str, chat: dict) -> Optional[ChatModel]:
        try:
            with get_db() as db:
                chat_item = db.get(Chat, id)
                chat_item.chat = chat
                chat_item.title = chat["title"] if "title" in chat else "New Chat"
                chat_item.chat = self._clean_null_bytes(chat)
                chat_item.title = (
                    self._clean_null_bytes(chat["title"])
                    if "title" in chat
                    else "New Chat"
                )

                chat_item.updated_at = int(time.time())

                db.commit()
                db.refresh(chat_item)

@@ -297,6 +346,27 @@ class ChatTable:
        chat["history"] = history
        return self.update_chat_by_id(id, chat)

    def add_message_files_by_id_and_message_id(
        self, id: str, message_id: str, files: list[dict]
    ) -> list[dict]:
        chat = self.get_chat_by_id(id)
        if chat is None:
            return None

        chat = chat.chat
        history = chat.get("history", {})

        message_files = []

        if message_id in history.get("messages", {}):
            message_files = history["messages"][message_id].get("files", [])
            message_files = message_files + files
            history["messages"][message_id]["files"] = message_files

        chat["history"] = history
        self.update_chat_by_id(id, chat)
        return message_files

    def insert_shared_chat_by_chat_id(self, chat_id: str) -> Optional[ChatModel]:
        with get_db() as db:
            # Get the existing chat to share

@@ -405,6 +475,7 @@ class ChatTable:
        with get_db() as db:
            chat = db.get(Chat, id)
            chat.archived = not chat.archived
            chat.folder_id = None
            chat.updated_at = int(time.time())
            db.commit()
            db.refresh(chat)

@@ -561,8 +632,15 @@ class ChatTable:
    def get_chat_by_id(self, id: str) -> Optional[ChatModel]:
        try:
            with get_db() as db:
                chat = db.get(Chat, id)
                return ChatModel.model_validate(chat)
                chat_item = db.get(Chat, id)
                if chat_item is None:
                    return None

                if self._sanitize_chat_row(chat_item):
                    db.commit()
                    db.refresh(chat_item)

                return ChatModel.model_validate(chat_item)
        except Exception:
            return None

@@ -765,21 +843,32 @@ class ChatTable:
            )

        elif dialect_name == "postgresql":
            # PostgreSQL relies on proper JSON query for search
            postgres_content_sql = (
                "EXISTS ("
                " SELECT 1 "
                " FROM json_array_elements(Chat.chat->'messages') AS message "
                " WHERE LOWER(message->>'content') LIKE '%' || :content_key || '%'"
                ")"
            # PostgreSQL doesn't allow null bytes in text. We filter those out by checking
            # the JSON representation for \u0000 before attempting text extraction

            # Safety filter: JSON field must not contain \u0000
            query = query.filter(text("Chat.chat::text NOT LIKE '%\\\\u0000%'"))

            # Safety filter: title must not contain actual null bytes
            query = query.filter(text("Chat.title::text NOT LIKE '%\\x00%'"))

            postgres_content_sql = """
                EXISTS (
                    SELECT 1
                    FROM json_array_elements(Chat.chat->'messages') AS message
                    WHERE json_typeof(message->'content') = 'string'
                    AND LOWER(message->>'content') LIKE '%' || :content_key || '%'
                )
            """

            postgres_content_clause = text(postgres_content_sql)

            query = query.filter(
                or_(
                    Chat.title.ilike(bindparam("title_key")),
                    postgres_content_clause,
                ).params(title_key=f"%{search_text}%", content_key=search_text)
                )
            )
            ).params(title_key=f"%{search_text}%", content_key=search_text.lower())

        # Check if there are any tags to filter, it should have all the tags
        if "none" in tag_ids:

@@ -1054,6 +1143,20 @@ class ChatTable:
        except Exception:
            return False

    def move_chats_by_user_id_and_folder_id(
        self, user_id: str, folder_id: str, new_folder_id: Optional[str]
    ) -> bool:
        try:
            with get_db() as db:
                db.query(Chat).filter_by(user_id=user_id, folder_id=folder_id).update(
                    {"folder_id": new_folder_id}
                )
                db.commit()

                return True
        except Exception:
            return False

    def delete_shared_chats_by_user_id(self, user_id: str) -> bool:
        try:
            with get_db() as db:
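The `_clean_null_bytes` helper introduced in `ChatTable` above can be demonstrated standalone. A minor note: in Python source, `"\x00"` and `"\u0000"` denote the same character, so the sketch below (a hypothetical module-level version, not the codebase's method) uses a single `replace`:

```python
def clean_null_bytes(obj):
    """Standalone version of ChatTable._clean_null_bytes: recursively strip
    null bytes from strings nested inside dict/list structures.
    Non-string leaves (ints, None, ...) pass through unchanged."""
    if isinstance(obj, str):
        return obj.replace("\x00", "")
    elif isinstance(obj, dict):
        return {k: clean_null_bytes(v) for k, v in obj.items()}
    elif isinstance(obj, list):
        return [clean_null_bytes(v) for v in obj]
    return obj


chat = {"title": "hi\x00there", "messages": [{"content": "ok\x00"}]}
print(clean_null_bytes(chat))
# {'title': 'hithere', 'messages': [{'content': 'ok'}]}
```

Scrubbing before insert/update matters because PostgreSQL rejects `\u0000` inside `text` and JSON columns, which is also why the search path above adds the `NOT LIKE '%\u0000%'` safety filters.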
@@ -4,7 +4,7 @@ import uuid
from typing import Optional

from open_webui.internal.db import Base, get_db
from open_webui.models.chats import Chats
from open_webui.models.users import User

from open_webui.env import SRC_LOG_LEVELS
from pydantic import BaseModel, ConfigDict

@@ -21,7 +21,7 @@ log.setLevel(SRC_LOG_LEVELS["MODELS"])

class Feedback(Base):
    __tablename__ = "feedback"
    id = Column(Text, primary_key=True)
    id = Column(Text, primary_key=True, unique=True)
    user_id = Column(Text)
    version = Column(BigInteger, default=0)
    type = Column(Text)

@@ -92,6 +92,28 @@ class FeedbackForm(BaseModel):
    model_config = ConfigDict(extra="allow")


class UserResponse(BaseModel):
    id: str
    name: str
    email: str
    role: str = "pending"

    last_active_at: int  # timestamp in epoch
    updated_at: int  # timestamp in epoch
    created_at: int  # timestamp in epoch

    model_config = ConfigDict(from_attributes=True)


class FeedbackUserResponse(FeedbackResponse):
    user: Optional[UserResponse] = None


class FeedbackListResponse(BaseModel):
    items: list[FeedbackUserResponse]
    total: int


class FeedbackTable:
    def insert_new_feedback(
        self, user_id: str, form_data: FeedbackForm

@@ -143,6 +165,70 @@ class FeedbackTable:
        except Exception:
            return None

    def get_feedback_items(
        self, filter: dict = {}, skip: int = 0, limit: int = 30
    ) -> FeedbackListResponse:
        with get_db() as db:
            query = db.query(Feedback, User).join(User, Feedback.user_id == User.id)

            if filter:
                order_by = filter.get("order_by")
                direction = filter.get("direction")

                if order_by == "username":
                    if direction == "asc":
                        query = query.order_by(User.name.asc())
                    else:
                        query = query.order_by(User.name.desc())
                elif order_by == "model_id":
                    # it's stored in feedback.data['model_id']
                    if direction == "asc":
                        query = query.order_by(
                            Feedback.data["model_id"].as_string().asc()
                        )
                    else:
                        query = query.order_by(
                            Feedback.data["model_id"].as_string().desc()
                        )
                elif order_by == "rating":
                    # it's stored in feedback.data['rating']
                    if direction == "asc":
                        query = query.order_by(
                            Feedback.data["rating"].as_string().asc()
                        )
                    else:
                        query = query.order_by(
                            Feedback.data["rating"].as_string().desc()
                        )
                elif order_by == "updated_at":
                    if direction == "asc":
                        query = query.order_by(Feedback.updated_at.asc())
                    else:
                        query = query.order_by(Feedback.updated_at.desc())

            else:
                query = query.order_by(Feedback.created_at.desc())

            # Count BEFORE pagination
            total = query.count()

            if skip:
                query = query.offset(skip)
            if limit:
                query = query.limit(limit)

            items = query.all()

            feedbacks = []
            for feedback, user in items:
                feedback_model = FeedbackModel.model_validate(feedback)
                user_model = UserResponse.model_validate(user)
                feedbacks.append(
                    FeedbackUserResponse(**feedback_model.model_dump(), user=user_model)
                )

            return FeedbackListResponse(items=feedbacks, total=total)

    def get_all_feedbacks(self) -> list[FeedbackModel]:
        with get_db() as db:
            return [
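The ordering logic of `get_feedback_items` above (sort by a top-level column, or by a key nested in the `data` JSON, in either direction) can be approximated without a database. The `order_feedback` helper below is a hypothetical plain-Python analogue, not code from the repository; like the SQL's `.as_string()`, it compares the nested values as strings:

```python
def order_feedback(items, order_by="updated_at", direction="desc"):
    """Plain-Python analogue of the get_feedback_items ordering:
    sort feedback dicts by a top-level field, or by a key nested
    under "data" (compared as strings, like .as_string() in SQL)."""
    if order_by in ("model_id", "rating"):
        key = lambda item: str(item.get("data", {}).get(order_by, ""))
    else:
        key = lambda item: item.get(order_by, 0)
    return sorted(items, key=key, reverse=(direction == "desc"))


items = [
    {"updated_at": 1, "data": {"model_id": "b", "rating": 1}},
    {"updated_at": 2, "data": {"model_id": "a", "rating": -1}},
]
print([i["data"]["model_id"] for i in order_feedback(items, "model_id", "asc")])
# ['a', 'b']
```

Note that string comparison on ratings sorts lexicographically, which mirrors the `.as_string()` cast in the SQLAlchemy query rather than numeric order.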
@@ -17,7 +17,7 @@ log.setLevel(SRC_LOG_LEVELS["MODELS"])

class File(Base):
    __tablename__ = "file"
    id = Column(String, primary_key=True)
    id = Column(String, primary_key=True, unique=True)
    user_id = Column(String)
    hash = Column(Text, nullable=True)


@@ -98,6 +98,12 @@ class FileForm(BaseModel):
    access_control: Optional[dict] = None


class FileUpdateForm(BaseModel):
    hash: Optional[str] = None
    data: Optional[dict] = None
    meta: Optional[dict] = None


class FilesTable:
    def insert_new_file(self, user_id: str, form_data: FileForm) -> Optional[FileModel]:
        with get_db() as db:

@@ -204,6 +210,29 @@ class FilesTable:
            for file in db.query(File).filter_by(user_id=user_id).all()
        ]

    def update_file_by_id(
        self, id: str, form_data: FileUpdateForm
    ) -> Optional[FileModel]:
        with get_db() as db:
            try:
                file = db.query(File).filter_by(id=id).first()

                if form_data.hash is not None:
                    file.hash = form_data.hash

                if form_data.data is not None:
                    file.data = {**(file.data if file.data else {}), **form_data.data}

                if form_data.meta is not None:
                    file.meta = {**(file.meta if file.meta else {}), **form_data.meta}

                file.updated_at = int(time.time())
                db.commit()
                return FileModel.model_validate(file)
            except Exception as e:
                log.exception(f"Error updating file completely by id: {e}")
                return None

    def update_file_hash_by_id(self, id: str, hash: str) -> Optional[FileModel]:
        with get_db() as db:
            try:
@@ -23,7 +23,7 @@ log.setLevel(SRC_LOG_LEVELS["MODELS"])

class Folder(Base):
    __tablename__ = "folder"
    id = Column(Text, primary_key=True)
    id = Column(Text, primary_key=True, unique=True)
    parent_id = Column(Text, nullable=True)
    user_id = Column(Text)
    name = Column(Text)
@@ -19,7 +19,7 @@ log.setLevel(SRC_LOG_LEVELS["MODELS"])
class Function(Base):
    __tablename__ = "function"

    id = Column(String, primary_key=True)
    id = Column(String, primary_key=True, unique=True)
    user_id = Column(String)
    name = Column(Text)
    type = Column(Text)
@@ -11,7 +11,7 @@ from open_webui.models.files import FileMetadataResponse


from pydantic import BaseModel, ConfigDict
from sqlalchemy import BigInteger, Column, String, Text, JSON, func
from sqlalchemy import BigInteger, Column, String, Text, JSON, func, ForeignKey


log = logging.getLogger(__name__)

@@ -35,7 +35,6 @@ class Group(Base):
    meta = Column(JSON, nullable=True)

    permissions = Column(JSON, nullable=True)
    user_ids = Column(JSON, nullable=True)

    created_at = Column(BigInteger)
    updated_at = Column(BigInteger)

@@ -53,12 +52,33 @@ class GroupModel(BaseModel):
    meta: Optional[dict] = None

    permissions: Optional[dict] = None
    user_ids: list[str] = []

    created_at: int  # timestamp in epoch
    updated_at: int  # timestamp in epoch


class GroupMember(Base):
    __tablename__ = "group_member"

    id = Column(Text, unique=True, primary_key=True)
    group_id = Column(
        Text,
        ForeignKey("group.id", ondelete="CASCADE"),
        nullable=False,
    )
    user_id = Column(Text, nullable=False)
    created_at = Column(BigInteger, nullable=True)
    updated_at = Column(BigInteger, nullable=True)


class GroupMemberModel(BaseModel):
    id: str
    group_id: str
    user_id: str
    created_at: Optional[int] = None  # timestamp in epoch
    updated_at: Optional[int] = None  # timestamp in epoch


####################
# Forms
####################

@@ -72,7 +92,7 @@ class GroupResponse(BaseModel):
    permissions: Optional[dict] = None
    data: Optional[dict] = None
    meta: Optional[dict] = None
    user_ids: list[str] = []
    member_count: Optional[int] = None
    created_at: int  # timestamp in epoch
    updated_at: int  # timestamp in epoch


@@ -81,13 +101,14 @@ class GroupForm(BaseModel):
    name: str
    description: str
    permissions: Optional[dict] = None
    data: Optional[dict] = None


class UserIdsForm(BaseModel):
    user_ids: Optional[list[str]] = None


class GroupUpdateForm(GroupForm, UserIdsForm):
class GroupUpdateForm(GroupForm):
    pass


@@ -131,12 +152,8 @@ class GroupTable:
        return [
            GroupModel.model_validate(group)
            for group in db.query(Group)
            .filter(
                func.json_array_length(Group.user_ids) > 0
            )  # Ensure array exists
            .filter(
                Group.user_ids.cast(String).like(f'%"{user_id}"%')
            )  # String-based check
            .join(GroupMember, GroupMember.group_id == Group.id)
            .filter(GroupMember.user_id == user_id)
            .order_by(Group.updated_at.desc())
            .all()
        ]

@@ -149,12 +166,46 @@ class GroupTable:
        except Exception:
            return None

    def get_group_user_ids_by_id(self, id: str) -> Optional[str]:
        group = self.get_group_by_id(id)
        if group:
            return group.user_ids
        else:
            return None
    def get_group_user_ids_by_id(self, id: str) -> Optional[list[str]]:
        with get_db() as db:
            members = (
                db.query(GroupMember.user_id).filter(GroupMember.group_id == id).all()
            )

            if not members:
                return None

            return [m[0] for m in members]

    def set_group_user_ids_by_id(self, group_id: str, user_ids: list[str]) -> None:
        with get_db() as db:
            # Delete existing members
            db.query(GroupMember).filter(GroupMember.group_id == group_id).delete()

            # Insert new members
            now = int(time.time())
            new_members = [
                GroupMember(
                    id=str(uuid.uuid4()),
                    group_id=group_id,
                    user_id=user_id,
                    created_at=now,
                    updated_at=now,
                )
                for user_id in user_ids
            ]

            db.add_all(new_members)
            db.commit()

    def get_group_member_count_by_id(self, id: str) -> int:
        with get_db() as db:
            count = (
                db.query(func.count(GroupMember.user_id))
                .filter(GroupMember.group_id == id)
                .scalar()
            )
            return count if count else 0

    def update_group_by_id(
        self, id: str, form_data: GroupUpdateForm, overwrite: bool = False

@@ -195,20 +246,29 @@ class GroupTable:
    def remove_user_from_all_groups(self, user_id: str) -> bool:
        with get_db() as db:
            try:
                groups = self.get_groups_by_member_id(user_id)
                # Find all groups the user belongs to
                groups = (
                    db.query(Group)
                    .join(GroupMember, GroupMember.group_id == Group.id)
                    .filter(GroupMember.user_id == user_id)
                    .all()
                )

                # Remove the user from each group
                for group in groups:
                    group.user_ids.remove(user_id)
                    db.query(Group).filter_by(id=group.id).update(
                        {
                            "user_ids": group.user_ids,
                            "updated_at": int(time.time()),
                        }
                    )
                    db.commit()
                    db.query(GroupMember).filter(
                        GroupMember.group_id == group.id, GroupMember.user_id == user_id
                    ).delete()

                    db.query(Group).filter_by(id=group.id).update(
                        {"updated_at": int(time.time())}
                    )

                db.commit()
                return True

            except Exception:
                db.rollback()
                return False

    def create_groups_by_group_names(

@@ -246,37 +306,61 @@ class GroupTable:
    def sync_groups_by_group_names(self, user_id: str, group_names: list[str]) -> bool:
        with get_db() as db:
            try:
                groups = db.query(Group).filter(Group.name.in_(group_names)).all()
                group_ids = [group.id for group in groups]
                now = int(time.time())

                # Remove user from groups not in the new list
                existing_groups = self.get_groups_by_member_id(user_id)
                # 1. Groups that SHOULD contain the user
                target_groups = (
                    db.query(Group).filter(Group.name.in_(group_names)).all()
                )
                target_group_ids = {g.id for g in target_groups}

                for group in existing_groups:
                    if group.id not in group_ids:
                        group.user_ids.remove(user_id)
                        db.query(Group).filter_by(id=group.id).update(
                            {
                                "user_ids": group.user_ids,
                                "updated_at": int(time.time()),
                            }
                # 2. Groups the user is CURRENTLY in
                existing_group_ids = {
                    g.id
                    for g in db.query(Group)
                    .join(GroupMember, GroupMember.group_id == Group.id)
|
||||
.filter(GroupMember.user_id == user_id)
|
||||
.all()
|
||||
}
|
||||
|
||||
# 3. Determine adds + removals
|
||||
groups_to_add = target_group_ids - existing_group_ids
|
||||
groups_to_remove = existing_group_ids - target_group_ids
|
||||
|
||||
# 4. Remove in one bulk delete
|
||||
if groups_to_remove:
|
||||
db.query(GroupMember).filter(
|
||||
GroupMember.user_id == user_id,
|
||||
GroupMember.group_id.in_(groups_to_remove),
|
||||
).delete(synchronize_session=False)
|
||||
|
||||
db.query(Group).filter(Group.id.in_(groups_to_remove)).update(
|
||||
{"updated_at": now}, synchronize_session=False
|
||||
)
|
||||
|
||||
# 5. Bulk insert missing memberships
|
||||
for group_id in groups_to_add:
|
||||
db.add(
|
||||
GroupMember(
|
||||
id=str(uuid.uuid4()),
|
||||
group_id=group_id,
|
||||
user_id=user_id,
|
||||
created_at=now,
|
||||
updated_at=now,
|
||||
)
|
||||
)
|
||||
|
||||
# Add user to new groups
|
||||
for group in groups:
|
||||
if user_id not in group.user_ids:
|
||||
group.user_ids.append(user_id)
|
||||
db.query(Group).filter_by(id=group.id).update(
|
||||
{
|
||||
"user_ids": group.user_ids,
|
||||
"updated_at": int(time.time()),
|
||||
}
|
||||
)
|
||||
if groups_to_add:
|
||||
db.query(Group).filter(Group.id.in_(groups_to_add)).update(
|
||||
{"updated_at": now}, synchronize_session=False
|
||||
)
|
||||
|
||||
db.commit()
|
||||
return True
|
||||
|
||||
except Exception as e:
|
||||
log.exception(e)
|
||||
db.rollback()
|
||||
return False
|
||||
|
||||
def add_users_to_group(
|
||||
|
|
@ -288,21 +372,31 @@ class GroupTable:
|
|||
if not group:
|
||||
return None
|
||||
|
||||
group_user_ids = group.user_ids
|
||||
if not group_user_ids or not isinstance(group_user_ids, list):
|
||||
group_user_ids = []
|
||||
now = int(time.time())
|
||||
|
||||
group_user_ids = list(set(group_user_ids)) # Deduplicate
|
||||
for user_id in user_ids or []:
|
||||
try:
|
||||
db.add(
|
||||
GroupMember(
|
||||
id=str(uuid.uuid4()),
|
||||
group_id=id,
|
||||
user_id=user_id,
|
||||
created_at=now,
|
||||
updated_at=now,
|
||||
)
|
||||
)
|
||||
db.flush() # Detect unique constraint violation early
|
||||
except Exception:
|
||||
db.rollback() # Clear failed INSERT
|
||||
db.begin() # Start a new transaction
|
||||
continue # Duplicate → ignore
|
||||
|
||||
for user_id in user_ids:
|
||||
if user_id not in group_user_ids:
|
||||
group_user_ids.append(user_id)
|
||||
|
||||
group.user_ids = group_user_ids
|
||||
group.updated_at = int(time.time())
|
||||
group.updated_at = now
|
||||
db.commit()
|
||||
db.refresh(group)
|
||||
|
||||
return GroupModel.model_validate(group)
|
||||
|
||||
except Exception as e:
|
||||
log.exception(e)
|
||||
return None
|
||||
|
|
@ -316,23 +410,22 @@ class GroupTable:
|
|||
if not group:
|
||||
return None
|
||||
|
||||
group_user_ids = group.user_ids
|
||||
|
||||
if not group_user_ids or not isinstance(group_user_ids, list):
|
||||
if not user_ids:
|
||||
return GroupModel.model_validate(group)
|
||||
|
||||
group_user_ids = list(set(group_user_ids)) # Deduplicate
|
||||
|
||||
# Remove each user from group_member
|
||||
for user_id in user_ids:
|
||||
if user_id in group_user_ids:
|
||||
group_user_ids.remove(user_id)
|
||||
db.query(GroupMember).filter(
|
||||
GroupMember.group_id == id, GroupMember.user_id == user_id
|
||||
).delete()
|
||||
|
||||
group.user_ids = group_user_ids
|
||||
# Update group timestamp
|
||||
group.updated_at = int(time.time())
|
||||
|
||||
db.commit()
|
||||
db.refresh(group)
|
||||
return GroupModel.model_validate(group)
|
||||
|
||||
except Exception as e:
|
||||
log.exception(e)
|
||||
return None
|
||||
|
|
|
|||
|
|
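The membership sync above replaces per-row JSON mutation with a set difference over group ids: compute which groups should gain the user and which should lose them, then apply each side in bulk. A minimal sketch of that diffing step, with plain Python sets standing in for the SQLAlchemy rows (names here are illustrative, not from the codebase):

```python
def diff_memberships(
    existing: set[str], target: set[str]
) -> tuple[set[str], set[str]]:
    """Return (groups_to_add, groups_to_remove) for one user.

    `existing` is the set of group ids the user is currently in;
    `target` is the set the user should end up in.
    """
    groups_to_add = target - existing       # missing memberships to insert
    groups_to_remove = existing - target    # stale memberships to delete
    return groups_to_add, groups_to_remove


# Example: user is in {"a", "b"} but should be in {"b", "c"}
adds, removals = diff_memberships({"a", "b"}, {"b", "c"})
# adds == {"c"}, removals == {"a"}; "b" is untouched
```

Groups in both sets are left alone, so unchanged memberships generate no writes at all.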
@@ -14,7 +14,7 @@ from sqlalchemy import BigInteger, Column, String, Text
 class Memory(Base):
     __tablename__ = "memory"

-    id = Column(String, primary_key=True)
+    id = Column(String, primary_key=True, unique=True)
     user_id = Column(String)
     content = Column(Text)
     updated_at = Column(BigInteger)
@@ -20,7 +20,7 @@ from sqlalchemy.sql import exists

 class MessageReaction(Base):
     __tablename__ = "message_reaction"
-    id = Column(Text, primary_key=True)
+    id = Column(Text, primary_key=True, unique=True)
     user_id = Column(Text)
     message_id = Column(Text)
     name = Column(Text)
@@ -6,12 +6,12 @@ from open_webui.internal.db import Base, JSONField, get_db
 from open_webui.env import SRC_LOG_LEVELS

 from open_webui.models.groups import Groups
-from open_webui.models.users import Users, UserResponse
+from open_webui.models.users import User, UserModel, Users, UserResponse


 from pydantic import BaseModel, ConfigDict

-from sqlalchemy import or_, and_, func
+from sqlalchemy import String, cast, or_, and_, func
 from sqlalchemy.dialects import postgresql, sqlite
 from sqlalchemy import BigInteger, Column, Text, JSON, Boolean

@@ -133,6 +133,11 @@ class ModelResponse(ModelModel):
     pass


+class ModelListResponse(BaseModel):
+    items: list[ModelUserResponse]
+    total: int
+
+
 class ModelForm(BaseModel):
     id: str
     base_model_id: Optional[str] = None

@@ -215,6 +220,84 @@ class ModelsTable:
             or has_access(user_id, permission, model.access_control, user_group_ids)
         ]

+    def search_models(
+        self, user_id: str, filter: dict = {}, skip: int = 0, limit: int = 30
+    ) -> ModelListResponse:
+        with get_db() as db:
+            # Join GroupMember so we can order by group_id when requested
+            query = db.query(Model, User).outerjoin(User, User.id == Model.user_id)
+            query = query.filter(Model.base_model_id != None)
+
+            if filter:
+                query_key = filter.get("query")
+                if query_key:
+                    query = query.filter(
+                        or_(
+                            Model.name.ilike(f"%{query_key}%"),
+                            Model.base_model_id.ilike(f"%{query_key}%"),
+                        )
+                    )
+
+                if filter.get("user_id"):
+                    query = query.filter(Model.user_id == filter.get("user_id"))
+
+                view_option = filter.get("view_option")
+
+                if view_option == "created":
+                    query = query.filter(Model.user_id == user_id)
+                elif view_option == "shared":
+                    query = query.filter(Model.user_id != user_id)
+
+                tag = filter.get("tag")
+                if tag:
+                    # TODO: This is a simple implementation and should be improved for performance
+                    like_pattern = f'%"{tag.lower()}"%'  # `"tag"` inside JSON array
+                    meta_text = func.lower(cast(Model.meta, String))
+
+                    query = query.filter(meta_text.like(like_pattern))
+
+                order_by = filter.get("order_by")
+                direction = filter.get("direction")
+
+                if order_by == "name":
+                    if direction == "asc":
+                        query = query.order_by(Model.name.asc())
+                    else:
+                        query = query.order_by(Model.name.desc())
+                elif order_by == "created_at":
+                    if direction == "asc":
+                        query = query.order_by(Model.created_at.asc())
+                    else:
+                        query = query.order_by(Model.created_at.desc())
+                elif order_by == "updated_at":
+                    if direction == "asc":
+                        query = query.order_by(Model.updated_at.asc())
+                    else:
+                        query = query.order_by(Model.updated_at.desc())
+
+            else:
+                query = query.order_by(Model.created_at.desc())
+
+            # Count BEFORE pagination
+            total = query.count()
+
+            if skip:
+                query = query.offset(skip)
+            if limit:
+                query = query.limit(limit)
+
+            items = query.all()
+
+            models = []
+            for model, user in items:
+                model_model = ModelModel.model_validate(model)
+                user_model = UserResponse(**UserModel.model_validate(user).model_dump())
+                models.append(
+                    ModelUserResponse(**model_model.model_dump(), user=user_model)
+                )
+
+            return ModelListResponse(items=models, total=total)
+
     def get_model_by_id(self, id: str) -> Optional[ModelModel]:
         try:
             with get_db() as db:

@@ -244,11 +327,9 @@ class ModelsTable:
         try:
             with get_db() as db:
                 # update only the fields that are present in the model
-                result = (
-                    db.query(Model)
-                    .filter_by(id=id)
-                    .update(model.model_dump(exclude={"id"}))
-                )
+                data = model.model_dump(exclude={"id"})
+                result = db.query(Model).filter_by(id=id).update(data)
+
                 db.commit()

                 model = db.get(Model, id)
@@ -6,7 +6,7 @@ from open_webui.internal.db import Base, JSONField, get_db

 from open_webui.env import DATABASE_USER_ACTIVE_STATUS_UPDATE_INTERVAL
 from open_webui.models.chats import Chats
-from open_webui.models.groups import Groups
+from open_webui.models.groups import Groups, GroupMember
 from open_webui.utils.misc import throttle

@@ -95,8 +95,12 @@ class UpdateProfileForm(BaseModel):
     date_of_birth: Optional[datetime.date] = None


+class UserGroupIdsModel(UserModel):
+    group_ids: list[str] = []
+
+
 class UserListResponse(BaseModel):
-    users: list[UserModel]
+    users: list[UserGroupIdsModel]
     total: int

@@ -222,7 +226,10 @@ class UsersTable:
         limit: Optional[int] = None,
     ) -> dict:
         with get_db() as db:
-            query = db.query(User)
+            # Join GroupMember so we can order by group_id when requested
+            query = db.query(User).outerjoin(
+                GroupMember, GroupMember.user_id == User.id
+            )

             if filter:
                 query_key = filter.get("query")

@@ -237,7 +244,16 @@ class UsersTable:
             order_by = filter.get("order_by")
             direction = filter.get("direction")

-            if order_by == "name":
+            if order_by and order_by.startswith("group_id:"):
+                group_id = order_by.split(":", 1)[1]
+
+                if direction == "asc":
+                    query = query.order_by((GroupMember.group_id == group_id).asc())
+                else:
+                    query = query.order_by(
+                        (GroupMember.group_id == group_id).desc()
+                    )
+            elif order_by == "name":
                 if direction == "asc":
                     query = query.order_by(User.name.asc())
                 else:

@@ -274,6 +290,9 @@ class UsersTable:
             else:
                 query = query.order_by(User.created_at.desc())

+            # Count BEFORE pagination
+            total = query.count()
+
             if skip:
                 query = query.offset(skip)
             if limit:

@@ -282,7 +301,7 @@ class UsersTable:
             users = query.all()
             return {
                 "users": [UserModel.model_validate(user) for user in users],
-                "total": db.query(User).count(),
+                "total": total,
             }

     def get_users_by_user_ids(self, user_ids: list[str]) -> list[UserModel]:

@@ -322,6 +341,15 @@ class UsersTable:
         except Exception:
             return None

+    def get_num_users_active_today(self) -> Optional[int]:
+        with get_db() as db:
+            current_timestamp = int(datetime.datetime.now().timestamp())
+            today_midnight_timestamp = current_timestamp - (current_timestamp % 86400)
+            query = db.query(User).filter(
+                User.last_active_at > today_midnight_timestamp
+            )
+            return query.count()
+
     def update_user_role_by_id(self, id: str, role: str) -> Optional[UserModel]:
         try:
             with get_db() as db:
@@ -5,6 +5,7 @@ from urllib.parse import quote

 from langchain_core.document_loaders import BaseLoader
 from langchain_core.documents import Document
+from open_webui.utils.headers import include_user_info_headers
 from open_webui.env import SRC_LOG_LEVELS

 log = logging.getLogger(__name__)

@@ -18,6 +19,7 @@ class ExternalDocumentLoader(BaseLoader):
         url: str,
         api_key: str,
         mime_type=None,
+        user=None,
         **kwargs,
     ) -> None:
         self.url = url

@@ -26,6 +28,8 @@ class ExternalDocumentLoader(BaseLoader):
         self.file_path = file_path
         self.mime_type = mime_type

+        self.user = user
+
     def load(self) -> List[Document]:
         with open(self.file_path, "rb") as f:
             data = f.read()

@@ -42,6 +46,9 @@ class ExternalDocumentLoader(BaseLoader):
         except:
             pass

+        if self.user is not None:
+            headers = include_user_info_headers(headers, self.user)
+
         url = self.url
         if url.endswith("/"):
             url = url[:-1]
@@ -228,6 +228,7 @@ class DoclingLoader:
 class Loader:
     def __init__(self, engine: str = "", **kwargs):
         self.engine = engine
+        self.user = kwargs.get("user", None)
         self.kwargs = kwargs

     def load(

@@ -264,6 +265,7 @@ class Loader:
                 url=self.kwargs.get("EXTERNAL_DOCUMENT_LOADER_URL"),
                 api_key=self.kwargs.get("EXTERNAL_DOCUMENT_LOADER_API_KEY"),
                 mime_type=file_content_type,
+                user=self.user,
             )
         elif self.engine == "tika" and self.kwargs.get("TIKA_SERVER_URL"):
             if self._is_text_file(file_ext, file_content_type):

@@ -272,7 +274,6 @@ class Loader:
             loader = TikaLoader(
                 url=self.kwargs.get("TIKA_SERVER_URL"),
                 file_path=file_path,
                 mime_type=file_content_type,
                 extract_images=self.kwargs.get("PDF_EXTRACT_IMAGES"),
             )
         elif (

@@ -369,14 +370,8 @@ class Loader:
                 azure_credential=DefaultAzureCredential(),
             )
         elif self.engine == "mineru" and file_ext in [
-            "pdf",
-            "doc",
-            "docx",
-            "ppt",
-            "pptx",
-            "xls",
-            "xlsx",
-        ]:
+            "pdf"
+        ]:  # MinerU currently only supports PDF
             loader = MinerULoader(
                 file_path=file_path,
                 api_mode=self.kwargs.get("MINERU_API_MODE", "local"),

@@ -391,16 +386,9 @@ class Loader:
             in ["pdf"]  # Mistral OCR currently only supports PDF and images
         ):
-            loader = MistralLoader(
-                api_key=self.kwargs.get("MISTRAL_OCR_API_KEY"), file_path=file_path
-            )
-        elif (
-            self.engine == "external"
-            and self.kwargs.get("MISTRAL_OCR_API_KEY") != ""
-            and file_ext
-            in ["pdf"]  # Mistral OCR currently only supports PDF and images
-        ):
             loader = MistralLoader(
-                api_key=self.kwargs.get("MISTRAL_OCR_API_KEY"), file_path=file_path
+                base_url=self.kwargs.get("MISTRAL_OCR_API_BASE_URL"),
+                api_key=self.kwargs.get("MISTRAL_OCR_API_KEY"),
+                file_path=file_path,
             )
         else:
             if file_ext == "pdf":
@@ -33,13 +33,14 @@ class MinerULoader:
         self.api_key = api_key

         # Parse params dict with defaults
-        params = params or {}
+        self.params = params or {}
         self.enable_ocr = params.get("enable_ocr", False)
         self.enable_formula = params.get("enable_formula", True)
         self.enable_table = params.get("enable_table", True)
         self.language = params.get("language", "en")
         self.model_version = params.get("model_version", "pipeline")
-        self.page_ranges = params.get("page_ranges", "")
+
+        self.page_ranges = self.params.pop("page_ranges", "")

         # Validate API mode
         if self.api_mode not in ["local", "cloud"]:

@@ -76,27 +77,10 @@ class MinerULoader:

         # Build form data for Local API
         form_data = {
+            **self.params,
             "return_md": "true",
-            "formula_enable": str(self.enable_formula).lower(),
-            "table_enable": str(self.enable_table).lower(),
         }

-        # Parse method based on OCR setting
-        if self.enable_ocr:
-            form_data["parse_method"] = "ocr"
-        else:
-            form_data["parse_method"] = "auto"
-
-        # Language configuration (Local API uses lang_list array)
-        if self.language:
-            form_data["lang_list"] = self.language
-
-        # Backend/model version (Local API uses "backend" parameter)
-        if self.model_version == "vlm":
-            form_data["backend"] = "vlm-vllm-engine"
-        else:
-            form_data["backend"] = "pipeline"
-
         # Page ranges (Local API uses start_page_id and end_page_id)
         if self.page_ranges:
             # For simplicity, if page_ranges is specified, log a warning

@@ -236,10 +220,7 @@ class MinerULoader:

         # Build request body
         request_body = {
-            "enable_formula": self.enable_formula,
-            "enable_table": self.enable_table,
-            "language": self.language,
-            "model_version": self.model_version,
+            **self.params,
             "files": [
                 {
                     "name": filename,
@@ -30,10 +30,9 @@ class MistralLoader:
     - Enhanced error handling with retryable error classification
     """

-    BASE_API_URL = "https://api.mistral.ai/v1"
-
     def __init__(
         self,
+        base_url: str,
         api_key: str,
         file_path: str,
         timeout: int = 300,  # 5 minutes default

@@ -55,6 +54,9 @@ class MistralLoader:
         if not os.path.exists(file_path):
            raise FileNotFoundError(f"File not found at {file_path}")

+        self.base_url = (
+            base_url.rstrip("/") if base_url else "https://api.mistral.ai/v1"
+        )
         self.api_key = api_key
         self.file_path = file_path
         self.timeout = timeout

@@ -240,7 +242,7 @@ class MistralLoader:
         in a context manager to minimize memory usage duration.
         """
         log.info("Uploading file to Mistral API")
-        url = f"{self.BASE_API_URL}/files"
+        url = f"{self.base_url}/files"

         def upload_request():
             # MEMORY OPTIMIZATION: Use context manager to minimize file handle lifetime

@@ -275,7 +277,7 @@ class MistralLoader:

     async def _upload_file_async(self, session: aiohttp.ClientSession) -> str:
         """Async file upload with streaming for better memory efficiency."""
-        url = f"{self.BASE_API_URL}/files"
+        url = f"{self.base_url}/files"

         async def upload_request():
             # Create multipart writer for streaming upload

@@ -321,7 +323,7 @@ class MistralLoader:
     def _get_signed_url(self, file_id: str) -> str:
         """Retrieves a temporary signed URL for the uploaded file (sync version)."""
         log.info(f"Getting signed URL for file ID: {file_id}")
-        url = f"{self.BASE_API_URL}/files/{file_id}/url"
+        url = f"{self.base_url}/files/{file_id}/url"
         params = {"expiry": 1}
         signed_url_headers = {**self.headers, "Accept": "application/json"}

@@ -346,7 +348,7 @@ class MistralLoader:
         self, session: aiohttp.ClientSession, file_id: str
     ) -> str:
         """Async signed URL retrieval."""
-        url = f"{self.BASE_API_URL}/files/{file_id}/url"
+        url = f"{self.base_url}/files/{file_id}/url"
         params = {"expiry": 1}

         headers = {**self.headers, "Accept": "application/json"}

@@ -373,7 +375,7 @@ class MistralLoader:
     def _process_ocr(self, signed_url: str) -> Dict[str, Any]:
         """Sends the signed URL to the OCR endpoint for processing (sync version)."""
         log.info("Processing OCR via Mistral API")
-        url = f"{self.BASE_API_URL}/ocr"
+        url = f"{self.base_url}/ocr"
         ocr_headers = {
             **self.headers,
             "Content-Type": "application/json",

@@ -407,7 +409,7 @@ class MistralLoader:
         self, session: aiohttp.ClientSession, signed_url: str
     ) -> Dict[str, Any]:
         """Async OCR processing with timing metrics."""
-        url = f"{self.BASE_API_URL}/ocr"
+        url = f"{self.base_url}/ocr"

         headers = {
             **self.headers,

@@ -446,7 +448,7 @@ class MistralLoader:
     def _delete_file(self, file_id: str) -> None:
         """Deletes the file from Mistral storage (sync version)."""
         log.info(f"Deleting uploaded file ID: {file_id}")
-        url = f"{self.BASE_API_URL}/files/{file_id}"
+        url = f"{self.base_url}/files/{file_id}"

         try:
             response = requests.delete(

@@ -467,7 +469,7 @@ class MistralLoader:
         async def delete_request():
             self._debug_log(f"Deleting file ID: {file_id}")
             async with session.delete(
-                url=f"{self.BASE_API_URL}/files/{file_id}",
+                url=f"{self.base_url}/files/{file_id}",
                 headers=self.headers,
                 timeout=aiohttp.ClientTimeout(
                     total=self.cleanup_timeout
@@ -6,6 +6,7 @@ from urllib.parse import quote

 from open_webui.env import ENABLE_FORWARD_USER_INFO_HEADERS, SRC_LOG_LEVELS
 from open_webui.retrieval.models.base_reranker import BaseReranker
+from open_webui.utils.headers import include_user_info_headers

 log = logging.getLogger(__name__)

@@ -40,22 +41,17 @@ class ExternalReranker(BaseReranker):
         log.info(f"ExternalReranker:predict:model {self.model}")
         log.info(f"ExternalReranker:predict:query {query}")

+        headers = {
+            "Content-Type": "application/json",
+            "Authorization": f"Bearer {self.api_key}",
+        }
+
+        if ENABLE_FORWARD_USER_INFO_HEADERS and user:
+            headers = include_user_info_headers(headers, user)
+
         r = requests.post(
             f"{self.url}",
-            headers={
-                "Content-Type": "application/json",
-                "Authorization": f"Bearer {self.api_key}",
-                **(
-                    {
-                        "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
-                        "X-OpenWebUI-User-Id": user.id,
-                        "X-OpenWebUI-User-Email": user.email,
-                        "X-OpenWebUI-User-Role": user.role,
-                    }
-                    if ENABLE_FORWARD_USER_INFO_HEADERS and user
-                    else {}
-                ),
-            },
+            headers=headers,
             json=payload,
         )
@@ -1,8 +1,10 @@
 import logging
 import os
-from typing import Optional, Union
+from typing import Awaitable, Optional, Union

 import requests
+import aiohttp
+import asyncio
 import hashlib
 from concurrent.futures import ThreadPoolExecutor
 import time

@@ -27,6 +29,7 @@ from open_webui.models.notes import Notes

 from open_webui.retrieval.vector.main import GetResult
 from open_webui.utils.access_control import has_access
+from open_webui.utils.headers import include_user_info_headers
 from open_webui.utils.misc import get_message_list

 from open_webui.retrieval.web.utils import get_web_loader

@@ -88,14 +91,29 @@ class VectorSearchRetriever(BaseRetriever):
     top_k: int

     def _get_relevant_documents(
         self, query: str, *, run_manager: CallbackManagerForRetrieverRun
     ) -> list[Document]:
         """Get documents relevant to a query.

         Args:
             query: String to find relevant documents for.
             run_manager: The callback handler to use.

         Returns:
             List of relevant documents.
         """
+        return []
+
+    async def _aget_relevant_documents(
+        self,
+        query: str,
+        *,
+        run_manager: CallbackManagerForRetrieverRun,
+    ) -> list[Document]:
+        embedding = await self.embedding_function(query, RAG_EMBEDDING_QUERY_PREFIX)
         result = VECTOR_DB_CLIENT.search(
             collection_name=self.collection_name,
-            vectors=[self.embedding_function(query, RAG_EMBEDDING_QUERY_PREFIX)],
+            vectors=[embedding],
             limit=self.top_k,
         )
@@ -148,7 +166,45 @@ def get_doc(collection_name: str, user: UserModel = None):
         raise e


-def query_doc_with_hybrid_search(
+def get_enriched_texts(collection_result: GetResult) -> list[str]:
+    enriched_texts = []
+    for idx, text in enumerate(collection_result.documents[0]):
+        metadata = collection_result.metadatas[0][idx]
+        metadata_parts = [text]
+
+        # Add filename (repeat twice for extra weight in BM25 scoring)
+        if metadata.get("name"):
+            filename = metadata["name"]
+            filename_tokens = (
+                filename.replace("_", " ").replace("-", " ").replace(".", " ")
+            )
+            metadata_parts.append(
+                f"Filename: {filename} {filename_tokens} {filename_tokens}"
+            )
+
+        # Add title if available
+        if metadata.get("title"):
+            metadata_parts.append(f"Title: {metadata['title']}")
+
+        # Add document section headings if available (from markdown splitter)
+        if metadata.get("headings") and isinstance(metadata["headings"], list):
+            headings = " > ".join(str(h) for h in metadata["headings"])
+            metadata_parts.append(f"Section: {headings}")
+
+        # Add source URL/path if available
+        if metadata.get("source"):
+            metadata_parts.append(f"Source: {metadata['source']}")
+
+        # Add snippet for web search results
+        if metadata.get("snippet"):
+            metadata_parts.append(f"Snippet: {metadata['snippet']}")
+
+        enriched_texts.append(" ".join(metadata_parts))
+
+    return enriched_texts
+
+
+async def query_doc_with_hybrid_search(
     collection_name: str,
     collection_result: GetResult,
     query: str,
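The enrichment above concatenates selected metadata fields onto each chunk so BM25 can match on filenames, titles, and headings as well as body text, repeating the tokenized filename for extra weight. A minimal single-document sketch of the same idea (the function name and the reduced field set are illustrative, not the codebase's API):

```python
def enrich_text(text: str, metadata: dict) -> str:
    """Append searchable metadata fields to a chunk's text for BM25 scoring."""
    parts = [text]

    if metadata.get("name"):
        filename = metadata["name"]
        # Split on common separators so "report_v1.pdf" also matches "report"
        tokens = filename.replace("_", " ").replace("-", " ").replace(".", " ")
        # Repeat the tokens to boost filename matches in BM25 term frequency
        parts.append(f"Filename: {filename} {tokens} {tokens}")

    if metadata.get("title"):
        parts.append(f"Title: {metadata['title']}")

    return " ".join(parts)
```

A query for "report" now scores against the filename tokens even when the chunk body never contains that word.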
@@ -158,12 +214,21 @@ def query_doc_with_hybrid_search(
     k_reranker: int,
     r: float,
     hybrid_bm25_weight: float,
+    enable_enriched_texts: bool = False,
 ) -> dict:
     try:
         # First check if collection_result has the required attributes
         if (
             not collection_result
             or not hasattr(collection_result, "documents")
             or not collection_result.documents
             or not hasattr(collection_result, "metadatas")
         ):
             log.warning(f"query_doc_with_hybrid_search:no_docs {collection_name}")
             return {"documents": [], "metadatas": [], "distances": []}

         # Now safely check the documents content after confirming attributes exist
         if (
             not collection_result.documents
             or len(collection_result.documents) == 0
             or not collection_result.documents[0]
         ):

@@ -172,8 +237,14 @@ def query_doc_with_hybrid_search(

         log.debug(f"query_doc_with_hybrid_search:doc {collection_name}")

+        bm25_texts = (
+            get_enriched_texts(collection_result)
+            if enable_enriched_texts
+            else collection_result.documents[0]
+        )
+
         bm25_retriever = BM25Retriever.from_texts(
-            texts=collection_result.documents[0],
+            texts=bm25_texts,
             metadatas=collection_result.metadatas[0],
         )
         bm25_retriever.k = k

@@ -209,7 +280,7 @@ def query_doc_with_hybrid_search(
             base_compressor=compressor, base_retriever=ensemble_retriever
         )

-        result = compression_retriever.invoke(query)
+        result = await compression_retriever.ainvoke(query)

         distances = [d.metadata.get("score") for d in result]
         documents = [d.page_content for d in result]
@@ -328,7 +399,7 @@ def get_all_items_from_collections(collection_names: list[str]) -> dict:
     return merge_get_results(results)


-def query_collection(
+async def query_collection(
     collection_names: list[str],
     queries: list[str],
     embedding_function,

@@ -353,7 +424,9 @@ def query_collection(
         return None, e

     # Generate all query embeddings (in one call)
-    query_embeddings = embedding_function(queries, prefix=RAG_EMBEDDING_QUERY_PREFIX)
+    query_embeddings = await embedding_function(
+        queries, prefix=RAG_EMBEDDING_QUERY_PREFIX
+    )
     log.debug(
         f"query_collection: processing {len(queries)} queries across {len(collection_names)} collections"
     )

@@ -380,7 +453,7 @@ def query_collection(
     return merge_and_sort_query_results(results, k=k)


-def query_collection_with_hybrid_search(
+async def query_collection_with_hybrid_search(
     collection_names: list[str],
     queries: list[str],
     embedding_function,

@@ -389,6 +462,7 @@ def query_collection_with_hybrid_search(
     k_reranker: int,
     r: float,
     hybrid_bm25_weight: float,
+    enable_enriched_texts: bool = False,
 ) -> dict:
     results = []
     error = False

@@ -411,9 +485,9 @@ def query_collection_with_hybrid_search(
         f"Starting hybrid search for {len(queries)} queries in {len(collection_names)} collections..."
     )

-    def process_query(collection_name, query):
+    async def process_query(collection_name, query):
         try:
-            result = query_doc_with_hybrid_search(
+            result = await query_doc_with_hybrid_search(
                 collection_name=collection_name,
                 collection_result=collection_results[collection_name],
                 query=query,

@@ -423,6 +497,7 @@ def query_collection_with_hybrid_search(
                 k_reranker=k_reranker,
                 r=r,
                 hybrid_bm25_weight=hybrid_bm25_weight,
+                enable_enriched_texts=enable_enriched_texts,
             )
             return result, None
         except Exception as e:

@@ -432,15 +507,16 @@ def query_collection_with_hybrid_search(
     # Prepare tasks for all collections and queries
     # Avoid running any tasks for collections that failed to fetch data (have assigned None)
     tasks = [
-        (cn, q)
-        for cn in collection_names
-        if collection_results[cn] is not None
-        for q in queries
+        (collection_name, query)
+        for collection_name in collection_names
+        if collection_results[collection_name] is not None
+        for query in queries
     ]

-    with ThreadPoolExecutor() as executor:
-        future_results = [executor.submit(process_query, cn, q) for cn, q in tasks]
-        task_results = [future.result() for future in future_results]
+    # Run all queries in parallel using asyncio.gather
+    task_results = await asyncio.gather(
+        *[process_query(collection_name, query) for collection_name, query in tasks]
+    )

     for result, err in task_results:
         if err is not None:
@ -456,6 +532,248 @@ def query_collection_with_hybrid_search(
|
|||
return merge_and_sort_query_results(results, k=k)
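The hunk above replaces a `ThreadPoolExecutor` fan-out with `asyncio.gather` over `(collection, query)` pairs, keeping the `(result, error)` tuple convention. A minimal standalone sketch of that pattern (names like `run_all` and the dict payload are illustrative, not the project's):

```python
import asyncio

async def process_query(collection_name: str, query: str):
    # Stand-in for the real hybrid-search call; returns (result, error).
    try:
        return {"collection": collection_name, "query": query}, None
    except Exception as e:
        return None, e

async def run_all(collection_names, queries):
    # Skip collections whose data failed to fetch (None), as the diff does.
    tasks = [
        (cn, q)
        for cn in collection_names
        if cn is not None
        for q in queries
    ]
    # Every (collection, query) pair runs concurrently; gather preserves order.
    return await asyncio.gather(*(process_query(cn, q) for cn, q in tasks))

results = asyncio.run(run_all(["a", None, "b"], ["q1", "q2"]))
```

Unlike the thread-pool version, failures surface through the returned tuple rather than through `future.result()` raising.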


+def generate_openai_batch_embeddings(
+    model: str,
+    texts: list[str],
+    url: str = "https://api.openai.com/v1",
+    key: str = "",
+    prefix: str = None,
+    user: UserModel = None,
+) -> Optional[list[list[float]]]:
+    try:
+        log.debug(
+            f"generate_openai_batch_embeddings:model {model} batch size: {len(texts)}"
+        )
+        json_data = {"input": texts, "model": model}
+        if isinstance(RAG_EMBEDDING_PREFIX_FIELD_NAME, str) and isinstance(prefix, str):
+            json_data[RAG_EMBEDDING_PREFIX_FIELD_NAME] = prefix
+
+        headers = {
+            "Content-Type": "application/json",
+            "Authorization": f"Bearer {key}",
+        }
+        if ENABLE_FORWARD_USER_INFO_HEADERS and user:
+            headers = include_user_info_headers(headers, user)
+
+        r = requests.post(
+            f"{url}/embeddings",
+            headers=headers,
+            json=json_data,
+        )
+        r.raise_for_status()
+        data = r.json()
+        if "data" in data:
+            return [elem["embedding"] for elem in data["data"]]
+        else:
+            raise "Something went wrong :/"
+    except Exception as e:
+        log.exception(f"Error generating openai batch embeddings: {e}")
+        return None
+
+
+async def agenerate_openai_batch_embeddings(
+    model: str,
+    texts: list[str],
+    url: str = "https://api.openai.com/v1",
+    key: str = "",
+    prefix: str = None,
+    user: UserModel = None,
+) -> Optional[list[list[float]]]:
+    try:
+        log.debug(
+            f"agenerate_openai_batch_embeddings:model {model} batch size: {len(texts)}"
+        )
+        form_data = {"input": texts, "model": model}
+        if isinstance(RAG_EMBEDDING_PREFIX_FIELD_NAME, str) and isinstance(prefix, str):
+            form_data[RAG_EMBEDDING_PREFIX_FIELD_NAME] = prefix
+
+        headers = {
+            "Content-Type": "application/json",
+            "Authorization": f"Bearer {key}",
+        }
+        if ENABLE_FORWARD_USER_INFO_HEADERS and user:
+            headers = include_user_info_headers(headers, user)
+
+        async with aiohttp.ClientSession(trust_env=True) as session:
+            async with session.post(
+                f"{url}/embeddings", headers=headers, json=form_data
+            ) as r:
+                r.raise_for_status()
+                data = await r.json()
+                if "data" in data:
+                    return [item["embedding"] for item in data["data"]]
+                else:
+                    raise Exception("Something went wrong :/")
+    except Exception as e:
+        log.exception(f"Error generating openai batch embeddings: {e}")
+        return None
+
+
+def generate_azure_openai_batch_embeddings(
+    model: str,
+    texts: list[str],
+    url: str,
+    key: str = "",
+    version: str = "",
+    prefix: str = None,
+    user: UserModel = None,
+) -> Optional[list[list[float]]]:
+    try:
+        log.debug(
+            f"generate_azure_openai_batch_embeddings:deployment {model} batch size: {len(texts)}"
+        )
+        json_data = {"input": texts}
+        if isinstance(RAG_EMBEDDING_PREFIX_FIELD_NAME, str) and isinstance(prefix, str):
+            json_data[RAG_EMBEDDING_PREFIX_FIELD_NAME] = prefix
+
+        url = f"{url}/openai/deployments/{model}/embeddings?api-version={version}"
+
+        for _ in range(5):
+            headers = {
+                "Content-Type": "application/json",
+                "api-key": key,
+            }
+            if ENABLE_FORWARD_USER_INFO_HEADERS and user:
+                headers = include_user_info_headers(headers, user)
+
+            r = requests.post(
+                url,
+                headers=headers,
+                json=json_data,
+            )
+            if r.status_code == 429:
+                retry = float(r.headers.get("Retry-After", "1"))
+                time.sleep(retry)
+                continue
+            r.raise_for_status()
+            data = r.json()
+            if "data" in data:
+                return [elem["embedding"] for elem in data["data"]]
+            else:
+                raise Exception("Something went wrong :/")
+        return None
+    except Exception as e:
+        log.exception(f"Error generating azure openai batch embeddings: {e}")
+        return None
+
+
+async def agenerate_azure_openai_batch_embeddings(
+    model: str,
+    texts: list[str],
+    url: str,
+    key: str = "",
+    version: str = "",
+    prefix: str = None,
+    user: UserModel = None,
+) -> Optional[list[list[float]]]:
+    try:
+        log.debug(
+            f"agenerate_azure_openai_batch_embeddings:deployment {model} batch size: {len(texts)}"
+        )
+        form_data = {"input": texts}
+        if isinstance(RAG_EMBEDDING_PREFIX_FIELD_NAME, str) and isinstance(prefix, str):
+            form_data[RAG_EMBEDDING_PREFIX_FIELD_NAME] = prefix
+
+        full_url = f"{url}/openai/deployments/{model}/embeddings?api-version={version}"
+
+        headers = {
+            "Content-Type": "application/json",
+            "api-key": key,
+        }
+        if ENABLE_FORWARD_USER_INFO_HEADERS and user:
+            headers = include_user_info_headers(headers, user)
+
+        async with aiohttp.ClientSession(trust_env=True) as session:
+            async with session.post(full_url, headers=headers, json=form_data) as r:
+                r.raise_for_status()
+                data = await r.json()
+                if "data" in data:
+                    return [item["embedding"] for item in data["data"]]
+                else:
+                    raise Exception("Something went wrong :/")
+    except Exception as e:
+        log.exception(f"Error generating azure openai batch embeddings: {e}")
+        return None
+
+
+def generate_ollama_batch_embeddings(
+    model: str,
+    texts: list[str],
+    url: str,
+    key: str = "",
+    prefix: str = None,
+    user: UserModel = None,
+) -> Optional[list[list[float]]]:
+    try:
+        log.debug(
+            f"generate_ollama_batch_embeddings:model {model} batch size: {len(texts)}"
+        )
+        json_data = {"input": texts, "model": model}
+        if isinstance(RAG_EMBEDDING_PREFIX_FIELD_NAME, str) and isinstance(prefix, str):
+            json_data[RAG_EMBEDDING_PREFIX_FIELD_NAME] = prefix
+
+        headers = {
+            "Content-Type": "application/json",
+            "Authorization": f"Bearer {key}",
+        }
+        if ENABLE_FORWARD_USER_INFO_HEADERS and user:
+            headers = include_user_info_headers(headers, user)
+
+        r = requests.post(
+            f"{url}/api/embed",
+            headers=headers,
+            json=json_data,
+        )
+        r.raise_for_status()
+        data = r.json()
+
+        if "embeddings" in data:
+            return data["embeddings"]
+        else:
+            raise "Something went wrong :/"
+    except Exception as e:
+        log.exception(f"Error generating ollama batch embeddings: {e}")
+        return None
+
+
+async def agenerate_ollama_batch_embeddings(
+    model: str,
+    texts: list[str],
+    url: str,
+    key: str = "",
+    prefix: str = None,
+    user: UserModel = None,
+) -> Optional[list[list[float]]]:
+    try:
+        log.debug(
+            f"agenerate_ollama_batch_embeddings:model {model} batch size: {len(texts)}"
+        )
+        form_data = {"input": texts, "model": model}
+        if isinstance(RAG_EMBEDDING_PREFIX_FIELD_NAME, str) and isinstance(prefix, str):
+            form_data[RAG_EMBEDDING_PREFIX_FIELD_NAME] = prefix
+
+        headers = {
+            "Content-Type": "application/json",
+            "Authorization": f"Bearer {key}",
+        }
+        if ENABLE_FORWARD_USER_INFO_HEADERS and user:
+            headers = include_user_info_headers(headers, user)
+
+        async with aiohttp.ClientSession(trust_env=True) as session:
+            async with session.post(
+                f"{url}/api/embed", headers=headers, json=form_data
+            ) as r:
+                r.raise_for_status()
+                data = await r.json()
+                if "embeddings" in data:
+                    return data["embeddings"]
+                else:
+                    raise Exception("Something went wrong :/")
+    except Exception as e:
+        log.exception(f"Error generating ollama batch embeddings: {e}")
+        return None
+
+
 def get_embedding_function(
     embedding_engine,
     embedding_model,
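The Azure helpers above wrap the POST in a bounded loop that sleeps for the `Retry-After` header on HTTP 429 before retrying. A runnable sketch of that rate-limit pattern, decoupled from `requests` (the `post_with_retry` and `fake_send` names are hypothetical, for illustration only):

```python
import time

def post_with_retry(send, max_attempts=5):
    """Retry a callable returning (status_code, headers, body) on HTTP 429,
    honouring the Retry-After header, as the Azure embeddings helper does."""
    for _ in range(max_attempts):
        status, headers, body = send()
        if status == 429:
            # Default to a 1-second wait when the server omits Retry-After.
            time.sleep(float(headers.get("Retry-After", "1")))
            continue
        return body
    return None  # budget exhausted, mirrors the helper's trailing `return None`

calls = {"n": 0}

def fake_send():
    # Simulated endpoint: rate-limited twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        return 429, {"Retry-After": "0"}, None
    return 200, {}, "ok"
```

Bounding the loop at five attempts keeps a persistently throttled deployment from blocking a request indefinitely.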
@@ -464,13 +782,23 @@ def get_embedding_function(
     key,
     embedding_batch_size,
     azure_api_version=None,
-):
+) -> Awaitable:
     if embedding_engine == "":
-        return lambda query, prefix=None, user=None: embedding_function.encode(
-            query, **({"prompt": prefix} if prefix else {})
-        ).tolist()
+        # Sentence transformers: CPU-bound sync operation
+        async def async_embedding_function(query, prefix=None, user=None):
+            return await asyncio.to_thread(
+                (
+                    lambda query, prefix=None: embedding_function.encode(
+                        query, **({"prompt": prefix} if prefix else {})
+                    ).tolist()
+                ),
+                query,
+                prefix,
+            )
+
+        return async_embedding_function
     elif embedding_engine in ["ollama", "openai", "azure_openai"]:
-        func = lambda query, prefix=None, user=None: generate_embeddings(
+        embedding_function = lambda query, prefix=None, user=None: generate_embeddings(
             engine=embedding_engine,
             model=embedding_model,
             text=query,
@@ -481,41 +809,104 @@ def get_embedding_function(
             azure_api_version=azure_api_version,
         )

-        def generate_multiple(query, prefix, user, func):
+        async def async_embedding_function(query, prefix=None, user=None):
             if isinstance(query, list):
-                embeddings = []
-                for i in range(0, len(query), embedding_batch_size):
-                    batch_embeddings = func(
-                        query[i : i + embedding_batch_size],
-                        prefix=prefix,
-                        user=user,
-                    )
+                # Create batches
+                batches = [
+                    query[i : i + embedding_batch_size]
+                    for i in range(0, len(query), embedding_batch_size)
+                ]
+                log.debug(
+                    f"generate_multiple_async: Processing {len(batches)} batches in parallel"
+                )
+
+                # Execute all batches in parallel
+                tasks = [
+                    embedding_function(batch, prefix=prefix, user=user)
+                    for batch in batches
+                ]
+                batch_results = await asyncio.gather(*tasks)
+
+                # Flatten results
+                embeddings = []
+                for batch_embeddings in batch_results:
+                    if isinstance(batch_embeddings, list):
+                        embeddings.extend(batch_embeddings)
+
+                log.debug(
+                    f"generate_multiple_async: Generated {len(embeddings)} embeddings from {len(batches)} parallel batches"
+                )
                 return embeddings
             else:
-                return func(query, prefix, user)
+                return await embedding_function(query, prefix, user)

-        return lambda query, prefix=None, user=None: generate_multiple(
-            query, prefix, user, func
-        )
+        return async_embedding_function
     else:
         raise ValueError(f"Unknown embedding engine: {embedding_engine}")


+async def generate_embeddings(
+    engine: str,
+    model: str,
+    text: Union[str, list[str]],
+    prefix: Union[str, None] = None,
+    **kwargs,
+):
+    url = kwargs.get("url", "")
+    key = kwargs.get("key", "")
+    user = kwargs.get("user")
+
+    if prefix is not None and RAG_EMBEDDING_PREFIX_FIELD_NAME is None:
+        if isinstance(text, list):
+            text = [f"{prefix}{text_element}" for text_element in text]
+        else:
+            text = f"{prefix}{text}"
+
+    if engine == "ollama":
+        embeddings = await agenerate_ollama_batch_embeddings(
+            **{
+                "model": model,
+                "texts": text if isinstance(text, list) else [text],
+                "url": url,
+                "key": key,
+                "prefix": prefix,
+                "user": user,
+            }
+        )
+        return embeddings[0] if isinstance(text, str) else embeddings
+    elif engine == "openai":
+        embeddings = await agenerate_openai_batch_embeddings(
+            model, text if isinstance(text, list) else [text], url, key, prefix, user
+        )
+        return embeddings[0] if isinstance(text, str) else embeddings
+    elif engine == "azure_openai":
+        azure_api_version = kwargs.get("azure_api_version", "")
+        embeddings = await agenerate_azure_openai_batch_embeddings(
+            model,
+            text if isinstance(text, list) else [text],
+            url,
+            key,
+            azure_api_version,
+            prefix,
+            user,
+        )
+        return embeddings[0] if isinstance(text, str) else embeddings
+
+
 def get_reranking_function(reranking_engine, reranking_model, reranking_function):
     if reranking_function is None:
         return None
     if reranking_engine == "external":
-        return lambda sentences, user=None: reranking_function.predict(
-            sentences, user=user
-        )
+        return lambda query, documents, user=None: reranking_function.predict(
+            [(query, doc.page_content) for doc in documents], user=user
+        )
     else:
-        return lambda sentences, user=None: reranking_function.predict(sentences)
+        return lambda query, documents, user=None: reranking_function.predict(
+            [(query, doc.page_content) for doc in documents]
+        )


-def get_sources_from_items(
+async def get_sources_from_items(
     request,
     items,
     queries,
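The batching logic above splits a list of texts into fixed-size batches, embeds all batches concurrently with `asyncio.gather`, and flattens the results in order. A self-contained sketch of that flow, with a fake embedding backend standing in for the real provider call (`embed_batch` and `embed_all` are illustrative names):

```python
import asyncio

async def embed_batch(texts):
    # Hypothetical embedding backend: one vector per input text.
    return [[float(len(t))] for t in texts]

async def embed_all(texts, batch_size):
    # Fixed-size batches, as in the diff's `batches` comprehension.
    batches = [texts[i : i + batch_size] for i in range(0, len(texts), batch_size)]
    # All batches run concurrently; gather preserves batch order,
    # so the flattened list lines up with the input order.
    results = await asyncio.gather(*(embed_batch(b) for b in batches))
    return [vec for batch in results for vec in batch]

vectors = asyncio.run(embed_all(["a", "bb", "ccc", "dddd", "e"], batch_size=2))
```

Because `asyncio.gather` returns results in submission order, the flattened output stays aligned with the input texts even though the HTTP calls overlap.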
@@ -743,7 +1134,7 @@ def get_sources_from_items(
         query_result = None  # Initialize to None
         if hybrid_search:
             try:
-                query_result = query_collection_with_hybrid_search(
+                query_result = await query_collection_with_hybrid_search(
                     collection_names=collection_names,
                     queries=queries,
                     embedding_function=embedding_function,
@@ -752,6 +1143,7 @@ def get_sources_from_items(
                     k_reranker=k_reranker,
                     r=r,
                     hybrid_bm25_weight=hybrid_bm25_weight,
+                    enable_enriched_texts=request.app.state.config.ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS,
                 )
             except Exception as e:
                 log.debug(
@@ -760,7 +1152,7 @@ def get_sources_from_items(

         # fallback to non-hybrid search
         if not hybrid_search and query_result is None:
-            query_result = query_collection(
+            query_result = await query_collection(
                 collection_names=collection_names,
                 queries=queries,
                 embedding_function=embedding_function,
@@ -836,199 +1228,6 @@ def get_model_path(model: str, update_model: bool = False):
     return model


-def generate_openai_batch_embeddings(
-    model: str,
-    texts: list[str],
-    url: str = "https://api.openai.com/v1",
-    key: str = "",
-    prefix: str = None,
-    user: UserModel = None,
-) -> Optional[list[list[float]]]:
-    try:
-        log.debug(
-            f"generate_openai_batch_embeddings:model {model} batch size: {len(texts)}"
-        )
-        json_data = {"input": texts, "model": model}
-        if isinstance(RAG_EMBEDDING_PREFIX_FIELD_NAME, str) and isinstance(prefix, str):
-            json_data[RAG_EMBEDDING_PREFIX_FIELD_NAME] = prefix
-
-        r = requests.post(
-            f"{url}/embeddings",
-            headers={
-                "Content-Type": "application/json",
-                "Authorization": f"Bearer {key}",
-                **(
-                    {
-                        "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
-                        "X-OpenWebUI-User-Id": user.id,
-                        "X-OpenWebUI-User-Email": user.email,
-                        "X-OpenWebUI-User-Role": user.role,
-                    }
-                    if ENABLE_FORWARD_USER_INFO_HEADERS and user
-                    else {}
-                ),
-            },
-            json=json_data,
-        )
-        r.raise_for_status()
-        data = r.json()
-        if "data" in data:
-            return [elem["embedding"] for elem in data["data"]]
-        else:
-            raise "Something went wrong :/"
-    except Exception as e:
-        log.exception(f"Error generating openai batch embeddings: {e}")
-        return None
-
-
-def generate_azure_openai_batch_embeddings(
-    model: str,
-    texts: list[str],
-    url: str,
-    key: str = "",
-    version: str = "",
-    prefix: str = None,
-    user: UserModel = None,
-) -> Optional[list[list[float]]]:
-    try:
-        log.debug(
-            f"generate_azure_openai_batch_embeddings:deployment {model} batch size: {len(texts)}"
-        )
-        json_data = {"input": texts}
-        if isinstance(RAG_EMBEDDING_PREFIX_FIELD_NAME, str) and isinstance(prefix, str):
-            json_data[RAG_EMBEDDING_PREFIX_FIELD_NAME] = prefix
-
-        url = f"{url}/openai/deployments/{model}/embeddings?api-version={version}"
-
-        for _ in range(5):
-            r = requests.post(
-                url,
-                headers={
-                    "Content-Type": "application/json",
-                    "api-key": key,
-                    **(
-                        {
-                            "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
-                            "X-OpenWebUI-User-Id": user.id,
-                            "X-OpenWebUI-User-Email": user.email,
-                            "X-OpenWebUI-User-Role": user.role,
-                        }
-                        if ENABLE_FORWARD_USER_INFO_HEADERS and user
-                        else {}
-                    ),
-                },
-                json=json_data,
-            )
-            if r.status_code == 429:
-                retry = float(r.headers.get("Retry-After", "1"))
-                time.sleep(retry)
-                continue
-            r.raise_for_status()
-            data = r.json()
-            if "data" in data:
-                return [elem["embedding"] for elem in data["data"]]
-            else:
-                raise Exception("Something went wrong :/")
-        return None
-    except Exception as e:
-        log.exception(f"Error generating azure openai batch embeddings: {e}")
-        return None
-
-
-def generate_ollama_batch_embeddings(
-    model: str,
-    texts: list[str],
-    url: str,
-    key: str = "",
-    prefix: str = None,
-    user: UserModel = None,
-) -> Optional[list[list[float]]]:
-    try:
-        log.debug(
-            f"generate_ollama_batch_embeddings:model {model} batch size: {len(texts)}"
-        )
-        json_data = {"input": texts, "model": model}
-        if isinstance(RAG_EMBEDDING_PREFIX_FIELD_NAME, str) and isinstance(prefix, str):
-            json_data[RAG_EMBEDDING_PREFIX_FIELD_NAME] = prefix
-
-        r = requests.post(
-            f"{url}/api/embed",
-            headers={
-                "Content-Type": "application/json",
-                "Authorization": f"Bearer {key}",
-                **(
-                    {
-                        "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
-                        "X-OpenWebUI-User-Id": user.id,
-                        "X-OpenWebUI-User-Email": user.email,
-                        "X-OpenWebUI-User-Role": user.role,
-                    }
-                    if ENABLE_FORWARD_USER_INFO_HEADERS
-                    else {}
-                ),
-            },
-            json=json_data,
-        )
-        r.raise_for_status()
-        data = r.json()
-
-        if "embeddings" in data:
-            return data["embeddings"]
-        else:
-            raise "Something went wrong :/"
-    except Exception as e:
-        log.exception(f"Error generating ollama batch embeddings: {e}")
-        return None
-
-
-def generate_embeddings(
-    engine: str,
-    model: str,
-    text: Union[str, list[str]],
-    prefix: Union[str, None] = None,
-    **kwargs,
-):
-    url = kwargs.get("url", "")
-    key = kwargs.get("key", "")
-    user = kwargs.get("user")
-
-    if prefix is not None and RAG_EMBEDDING_PREFIX_FIELD_NAME is None:
-        if isinstance(text, list):
-            text = [f"{prefix}{text_element}" for text_element in text]
-        else:
-            text = f"{prefix}{text}"
-
-    if engine == "ollama":
-        embeddings = generate_ollama_batch_embeddings(
-            **{
-                "model": model,
-                "texts": text if isinstance(text, list) else [text],
-                "url": url,
-                "key": key,
-                "prefix": prefix,
-                "user": user,
-            }
-        )
-        return embeddings[0] if isinstance(text, str) else embeddings
-    elif engine == "openai":
-        embeddings = generate_openai_batch_embeddings(
-            model, text if isinstance(text, list) else [text], url, key, prefix, user
-        )
-        return embeddings[0] if isinstance(text, str) else embeddings
-    elif engine == "azure_openai":
-        azure_api_version = kwargs.get("azure_api_version", "")
-        embeddings = generate_azure_openai_batch_embeddings(
-            model,
-            text if isinstance(text, list) else [text],
-            url,
-            key,
-            azure_api_version,
-            prefix,
-            user,
-        )
-        return embeddings[0] if isinstance(text, str) else embeddings
-
-
 import operator
 from typing import Optional, Sequence

@@ -1051,19 +1250,38 @@ class RerankCompressor(BaseDocumentCompressor):
         documents: Sequence[Document],
         query: str,
         callbacks: Optional[Callbacks] = None,
     ) -> Sequence[Document]:
+        """Compress retrieved documents given the query context.
+
+        Args:
+            documents: The retrieved documents.
+            query: The query context.
+            callbacks: Optional callbacks to run during compression.
+
+        Returns:
+            The compressed documents.
+
+        """
+        return []
+
+    async def acompress_documents(
+        self,
+        documents: Sequence[Document],
+        query: str,
+        callbacks: Optional[Callbacks] = None,
+    ) -> Sequence[Document]:
         reranking = self.reranking_function is not None

         scores = None
         if reranking:
-            scores = self.reranking_function(
-                [(query, doc.page_content) for doc in documents]
-            )
+            scores = self.reranking_function(query, documents)
         else:
             from sentence_transformers import util

-            query_embedding = self.embedding_function(query, RAG_EMBEDDING_QUERY_PREFIX)
-            document_embedding = self.embedding_function(
+            query_embedding = await self.embedding_function(
+                query, RAG_EMBEDDING_QUERY_PREFIX
+            )
+            document_embedding = await self.embedding_function(
                 [doc.page_content for doc in documents], RAG_EMBEDDING_CONTENT_PREFIX
             )
             scores = util.cos_sim(query_embedding, document_embedding)[0]

backend/open_webui/retrieval/vector/dbs/pgvector.py
@@ -1,4 +1,4 @@
-from typing import Optional, List, Dict, Any
+from typing import Optional, List, Dict, Any, Tuple
 import logging
 import json
 from sqlalchemy import (
@@ -22,7 +22,7 @@ from sqlalchemy.pool import NullPool, QueuePool

 from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker
 from sqlalchemy.dialects.postgresql import JSONB, array
-from pgvector.sqlalchemy import Vector
+from pgvector.sqlalchemy import Vector, HALFVEC
 from sqlalchemy.ext.mutable import MutableDict
 from sqlalchemy.exc import NoSuchTableError

@@ -44,11 +44,20 @@ from open_webui.config import (
     PGVECTOR_POOL_MAX_OVERFLOW,
     PGVECTOR_POOL_TIMEOUT,
     PGVECTOR_POOL_RECYCLE,
+    PGVECTOR_INDEX_METHOD,
+    PGVECTOR_HNSW_M,
+    PGVECTOR_HNSW_EF_CONSTRUCTION,
+    PGVECTOR_IVFFLAT_LISTS,
+    PGVECTOR_USE_HALFVEC,
 )

 from open_webui.env import SRC_LOG_LEVELS

 VECTOR_LENGTH = PGVECTOR_INITIALIZE_MAX_VECTOR_LENGTH
+USE_HALFVEC = PGVECTOR_USE_HALFVEC
+
+VECTOR_TYPE_FACTORY = HALFVEC if USE_HALFVEC else Vector
+VECTOR_OPCLASS = "halfvec_cosine_ops" if USE_HALFVEC else "vector_cosine_ops"
 Base = declarative_base()

 log = logging.getLogger(__name__)
@@ -67,7 +76,7 @@ class DocumentChunk(Base):
     __tablename__ = "document_chunk"

     id = Column(Text, primary_key=True)
-    vector = Column(Vector(dim=VECTOR_LENGTH), nullable=True)
+    vector = Column(VECTOR_TYPE_FACTORY(dim=VECTOR_LENGTH), nullable=True)
     collection_name = Column(Text, nullable=False)

     if PGVECTOR_PGCRYPTO:
@@ -157,13 +166,9 @@ class PgvectorClient(VectorDBBase):
            connection = self.session.connection()
            Base.metadata.create_all(bind=connection)

-            # Create an index on the vector column if it doesn't exist
-            self.session.execute(
-                text(
-                    "CREATE INDEX IF NOT EXISTS idx_document_chunk_vector "
-                    "ON document_chunk USING ivfflat (vector vector_cosine_ops) WITH (lists = 100);"
-                )
-            )
+            index_method, index_options = self._vector_index_configuration()
+            self._ensure_vector_index(index_method, index_options)

            self.session.execute(
                text(
                    "CREATE INDEX IF NOT EXISTS idx_document_chunk_collection_name "
@@ -177,6 +182,78 @@ class PgvectorClient(VectorDBBase):
            log.exception(f"Error during initialization: {e}")
            raise

+    @staticmethod
+    def _extract_index_method(index_def: Optional[str]) -> Optional[str]:
+        if not index_def:
+            return None
+        try:
+            after_using = index_def.lower().split("using ", 1)[1]
+            return after_using.split()[0]
+        except (IndexError, AttributeError):
+            return None
+
+    def _vector_index_configuration(self) -> Tuple[str, str]:
+        if PGVECTOR_INDEX_METHOD:
+            index_method = PGVECTOR_INDEX_METHOD
+            log.info(
+                "Using vector index method '%s' from PGVECTOR_INDEX_METHOD.",
+                index_method,
+            )
+        elif USE_HALFVEC:
+            index_method = "hnsw"
+            log.info(
+                "VECTOR_LENGTH=%s exceeds 2000; using halfvec column type with hnsw index.",
+                VECTOR_LENGTH,
+            )
+        else:
+            index_method = "ivfflat"
+
+        if index_method == "hnsw":
+            index_options = f"WITH (m = {PGVECTOR_HNSW_M}, ef_construction = {PGVECTOR_HNSW_EF_CONSTRUCTION})"
+        else:
+            index_options = f"WITH (lists = {PGVECTOR_IVFFLAT_LISTS})"
+
+        return index_method, index_options
+
+    def _ensure_vector_index(self, index_method: str, index_options: str) -> None:
+        index_name = "idx_document_chunk_vector"
+        existing_index_def = self.session.execute(
+            text(
+                """
+                SELECT indexdef
+                FROM pg_indexes
+                WHERE schemaname = current_schema()
+                  AND tablename = 'document_chunk'
+                  AND indexname = :index_name
+                """
+            ),
+            {"index_name": index_name},
+        ).scalar()
+
+        existing_method = self._extract_index_method(existing_index_def)
+        if existing_method and existing_method != index_method:
+            raise RuntimeError(
+                f"Existing pgvector index '{index_name}' uses method '{existing_method}' but configuration now "
+                f"requires '{index_method}'. Automatic rebuild is disabled to prevent long-running maintenance. "
+                "Drop the index manually (optionally after tuning maintenance_work_mem/max_parallel_maintenance_workers) "
+                "and recreate it with the new method before restarting Open WebUI."
+            )
+
+        if not existing_index_def:
+            index_sql = (
+                f"CREATE INDEX IF NOT EXISTS {index_name} "
+                f"ON document_chunk USING {index_method} (vector {VECTOR_OPCLASS})"
+            )
+            if index_options:
+                index_sql = f"{index_sql} {index_options}"
+            self.session.execute(text(index_sql))
+            log.info(
+                "Ensured vector index '%s' using %s%s.",
+                index_name,
+                index_method,
+                f" {index_options}" if index_options else "",
+            )
+
     def check_vector_length(self) -> None:
         """
         Check if the VECTOR_LENGTH matches the existing vector column dimension in the database.
@@ -196,16 +273,19 @@ class PgvectorClient(VectorDBBase):
             if "vector" in document_chunk_table.columns:
                 vector_column = document_chunk_table.columns["vector"]
                 vector_type = vector_column.type
-                if isinstance(vector_type, Vector):
-                    db_vector_length = vector_type.dim
-                    if db_vector_length != VECTOR_LENGTH:
-                        raise Exception(
-                            f"VECTOR_LENGTH {VECTOR_LENGTH} does not match existing vector column dimension {db_vector_length}. "
-                            "Cannot change vector size after initialization without migrating the data."
-                        )
-                else:
-                    raise Exception(
-                        "The 'vector' column exists but is not of type 'Vector'."
-                    )
+                expected_type = HALFVEC if USE_HALFVEC else Vector
+
+                if not isinstance(vector_type, expected_type):
+                    raise Exception(
+                        "The 'vector' column type does not match the expected type "
+                        f"('{expected_type.__name__}') for VECTOR_LENGTH {VECTOR_LENGTH}."
+                    )
+
+                db_vector_length = getattr(vector_type, "dim", None)
+                if db_vector_length is not None and db_vector_length != VECTOR_LENGTH:
+                    raise Exception(
+                        f"VECTOR_LENGTH {VECTOR_LENGTH} does not match existing vector column dimension {db_vector_length}. "
+                        "Cannot change vector size after initialization without migrating the data."
+                    )
             else:
                 raise Exception(
@@ -360,11 +440,11 @@ class PgvectorClient(VectorDBBase):
         num_queries = len(vectors)

         def vector_expr(vector):
-            return cast(array(vector), Vector(VECTOR_LENGTH))
+            return cast(array(vector), VECTOR_TYPE_FACTORY(VECTOR_LENGTH))

         # Create the values for query vectors
         qid_col = column("qid", Integer)
-        q_vector_col = column("q_vector", Vector(VECTOR_LENGTH))
+        q_vector_col = column("q_vector", VECTOR_TYPE_FACTORY(VECTOR_LENGTH))
         query_vectors = (
             values(qid_col, q_vector_col)
             .data(
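The pgvector changes above pick an index method (explicit config wins, otherwise hnsw for halfvec columns, ivfflat for plain vectors) and build the matching `CREATE INDEX` options. A sketch of that DDL-selection logic as a pure function (the function name and the parameter defaults here are illustrative, not the project's configured values):

```python
def vector_index_sql(use_halfvec, index_method=None,
                     hnsw_m=16, hnsw_ef_construction=64, ivfflat_lists=100):
    """Assemble the CREATE INDEX statement the diff builds: halfvec columns
    default to hnsw, plain vector columns to ivfflat, and an explicitly
    configured method overrides both."""
    method = index_method or ("hnsw" if use_halfvec else "ivfflat")
    opclass = "halfvec_cosine_ops" if use_halfvec else "vector_cosine_ops"
    if method == "hnsw":
        options = f"WITH (m = {hnsw_m}, ef_construction = {hnsw_ef_construction})"
    else:
        options = f"WITH (lists = {ivfflat_lists})"
    return (
        "CREATE INDEX IF NOT EXISTS idx_document_chunk_vector "
        f"ON document_chunk USING {method} (vector {opclass}) {options}"
    )
```

Keeping the method choice in one place makes the "existing index uses a different method" guard straightforward: the stored `indexdef` only has to be compared against the single method this function would pick.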
@@ -117,15 +117,16 @@ class S3VectorClient(VectorDBBase):

     def has_collection(self, collection_name: str) -> bool:
         """
-        Check if a vector index (collection) exists in the S3 vector bucket.
+        Check if a vector index exists using direct lookup.
+        This avoids pagination issues with list_indexes() and is significantly faster.
         """

         try:
-            response = self.client.list_indexes(vectorBucketName=self.bucket_name)
-            indexes = response.get("indexes", [])
-            return any(idx.get("indexName") == collection_name for idx in indexes)
+            self.client.get_index(
+                vectorBucketName=self.bucket_name, indexName=collection_name
+            )
+            return True
         except Exception as e:
-            log.error(f"Error listing indexes: {e}")
+            log.error(f"Error checking if index '{collection_name}' exists: {e}")
             return False

     def delete_collection(self, collection_name: str) -> None:

340 backend/open_webui/retrieval/vector/dbs/weaviate.py Normal file
@ -0,0 +1,340 @@
|
|||
import weaviate
import re
import uuid
from typing import Any, Dict, List, Optional, Union

from open_webui.retrieval.vector.main import (
    VectorDBBase,
    VectorItem,
    SearchResult,
    GetResult,
)
from open_webui.retrieval.vector.utils import process_metadata
from open_webui.config import (
    WEAVIATE_HTTP_HOST,
    WEAVIATE_HTTP_PORT,
    WEAVIATE_GRPC_PORT,
    WEAVIATE_API_KEY,
)


def _convert_uuids_to_strings(obj: Any) -> Any:
    """
    Recursively convert UUID objects to strings in nested data structures.

    This function handles:
    - UUID objects -> string
    - Dictionaries with UUID values
    - Lists/Tuples with UUID values
    - Nested combinations of the above

    Args:
        obj: Any object that might contain UUIDs

    Returns:
        The same object structure with UUIDs converted to strings
    """
    if isinstance(obj, uuid.UUID):
        return str(obj)
    elif isinstance(obj, dict):
        return {key: _convert_uuids_to_strings(value) for key, value in obj.items()}
    elif isinstance(obj, (list, tuple)):
        return type(obj)(_convert_uuids_to_strings(item) for item in obj)
    elif isinstance(obj, (str, int, float, bool, type(None))):
        return obj
    else:
        return obj


class WeaviateClient(VectorDBBase):
    def __init__(self):
        self.url = WEAVIATE_HTTP_HOST
        try:
            # Build connection parameters
            connection_params = {
                "host": WEAVIATE_HTTP_HOST,
                "port": WEAVIATE_HTTP_PORT,
                "grpc_port": WEAVIATE_GRPC_PORT,
            }

            # Only add auth_credentials if WEAVIATE_API_KEY exists and is not empty
            if WEAVIATE_API_KEY:
                connection_params["auth_credentials"] = (
                    weaviate.classes.init.Auth.api_key(WEAVIATE_API_KEY)
                )

            self.client = weaviate.connect_to_local(**connection_params)
            self.client.connect()
        except Exception as e:
            raise ConnectionError(f"Failed to connect to Weaviate: {e}") from e

    def _sanitize_collection_name(self, collection_name: str) -> str:
        """Sanitize collection name to be a valid Weaviate class name."""
        if not isinstance(collection_name, str) or not collection_name.strip():
            raise ValueError("Collection name must be a non-empty string")

        # Requirements for a valid Weaviate class name:
        # The collection name must begin with a capital letter.
        # The name can only contain letters, numbers, and the underscore (_) character. Spaces are not allowed.

        # Replace hyphens with underscores and keep only alphanumeric characters
        name = re.sub(r"[^a-zA-Z0-9_]", "", collection_name.replace("-", "_"))
        name = name.strip("_")

        if not name:
            raise ValueError(
                "Could not sanitize collection name to be a valid Weaviate class name"
            )

        # Ensure it starts with a letter and is capitalized
        if not name[0].isalpha():
            name = "C" + name

        return name[0].upper() + name[1:]

    def has_collection(self, collection_name: str) -> bool:
        sane_collection_name = self._sanitize_collection_name(collection_name)
        return self.client.collections.exists(sane_collection_name)

    def delete_collection(self, collection_name: str) -> None:
        sane_collection_name = self._sanitize_collection_name(collection_name)
        if self.client.collections.exists(sane_collection_name):
            self.client.collections.delete(sane_collection_name)

    def _create_collection(self, collection_name: str) -> None:
        self.client.collections.create(
            name=collection_name,
            vector_config=weaviate.classes.config.Configure.Vectors.self_provided(),
            properties=[
                weaviate.classes.config.Property(
                    name="text", data_type=weaviate.classes.config.DataType.TEXT
                ),
            ],
        )

    def insert(self, collection_name: str, items: List[VectorItem]) -> None:
        sane_collection_name = self._sanitize_collection_name(collection_name)
        if not self.client.collections.exists(sane_collection_name):
            self._create_collection(sane_collection_name)

        collection = self.client.collections.get(sane_collection_name)

        with collection.batch.fixed_size(batch_size=100) as batch:
            for item in items:
                item_uuid = str(uuid.uuid4()) if not item["id"] else str(item["id"])

                properties = {"text": item["text"]}
                if item["metadata"]:
                    clean_metadata = _convert_uuids_to_strings(
                        process_metadata(item["metadata"])
                    )
                    clean_metadata.pop("text", None)
                    properties.update(clean_metadata)

                batch.add_object(
                    properties=properties, uuid=item_uuid, vector=item["vector"]
                )

    def upsert(self, collection_name: str, items: List[VectorItem]) -> None:
        sane_collection_name = self._sanitize_collection_name(collection_name)
        if not self.client.collections.exists(sane_collection_name):
            self._create_collection(sane_collection_name)

        collection = self.client.collections.get(sane_collection_name)

        with collection.batch.fixed_size(batch_size=100) as batch:
            for item in items:
                item_uuid = str(item["id"]) if item["id"] else None

                properties = {"text": item["text"]}
                if item["metadata"]:
                    clean_metadata = _convert_uuids_to_strings(
                        process_metadata(item["metadata"])
                    )
                    clean_metadata.pop("text", None)
                    properties.update(clean_metadata)

                batch.add_object(
                    properties=properties, uuid=item_uuid, vector=item["vector"]
                )

    def search(
        self, collection_name: str, vectors: List[List[Union[float, int]]], limit: int
    ) -> Optional[SearchResult]:
        sane_collection_name = self._sanitize_collection_name(collection_name)
        if not self.client.collections.exists(sane_collection_name):
            return None

        collection = self.client.collections.get(sane_collection_name)

        result_ids, result_documents, result_metadatas, result_distances = (
            [],
            [],
            [],
            [],
        )

        for vector_embedding in vectors:
            try:
                response = collection.query.near_vector(
                    near_vector=vector_embedding,
                    limit=limit,
                    return_metadata=weaviate.classes.query.MetadataQuery(distance=True),
                )

                ids = [str(obj.uuid) for obj in response.objects]
                documents = []
                metadatas = []
                distances = []

                for obj in response.objects:
                    properties = dict(obj.properties) if obj.properties else {}
                    documents.append(properties.pop("text", ""))
                    metadatas.append(_convert_uuids_to_strings(properties))

                # Weaviate has cosine distance, 2 (worst) -> 0 (best). Re-ordering to 0 -> 1
                raw_distances = [
                    (
                        obj.metadata.distance
                        if obj.metadata and obj.metadata.distance
                        else 2.0
                    )
                    for obj in response.objects
                ]
                distances = [(2 - dist) / 2 for dist in raw_distances]

                result_ids.append(ids)
                result_documents.append(documents)
                result_metadatas.append(metadatas)
                result_distances.append(distances)
            except Exception:
                result_ids.append([])
                result_documents.append([])
                result_metadatas.append([])
                result_distances.append([])

        return SearchResult(
            **{
                "ids": result_ids,
                "documents": result_documents,
                "metadatas": result_metadatas,
                "distances": result_distances,
            }
        )

    def query(
        self, collection_name: str, filter: Dict, limit: Optional[int] = None
    ) -> Optional[GetResult]:
        sane_collection_name = self._sanitize_collection_name(collection_name)
        if not self.client.collections.exists(sane_collection_name):
            return None

        collection = self.client.collections.get(sane_collection_name)

        weaviate_filter = None
        if filter:
            for key, value in filter.items():
                prop_filter = weaviate.classes.query.Filter.by_property(name=key).equal(
                    value
                )
                weaviate_filter = (
                    prop_filter
                    if weaviate_filter is None
                    else weaviate.classes.query.Filter.all_of(
                        [weaviate_filter, prop_filter]
                    )
                )

        try:
            response = collection.query.fetch_objects(
                filters=weaviate_filter, limit=limit
            )

            ids = [str(obj.uuid) for obj in response.objects]
            documents = []
            metadatas = []

            for obj in response.objects:
                properties = dict(obj.properties) if obj.properties else {}
                documents.append(properties.pop("text", ""))
                metadatas.append(_convert_uuids_to_strings(properties))

            return GetResult(
                **{
                    "ids": [ids],
                    "documents": [documents],
                    "metadatas": [metadatas],
                }
            )
        except Exception:
            return None

    def get(self, collection_name: str) -> Optional[GetResult]:
        sane_collection_name = self._sanitize_collection_name(collection_name)
        if not self.client.collections.exists(sane_collection_name):
            return None

        collection = self.client.collections.get(sane_collection_name)
        ids, documents, metadatas = [], [], []

        try:
            for item in collection.iterator():
                ids.append(str(item.uuid))
                properties = dict(item.properties) if item.properties else {}
                documents.append(properties.pop("text", ""))
                metadatas.append(_convert_uuids_to_strings(properties))

            if not ids:
                return None

            return GetResult(
                **{
                    "ids": [ids],
                    "documents": [documents],
                    "metadatas": [metadatas],
                }
            )
        except Exception:
            return None

    def delete(
        self,
        collection_name: str,
        ids: Optional[List[str]] = None,
        filter: Optional[Dict] = None,
    ) -> None:
        sane_collection_name = self._sanitize_collection_name(collection_name)
        if not self.client.collections.exists(sane_collection_name):
            return

        collection = self.client.collections.get(sane_collection_name)

        try:
            if ids:
                for item_id in ids:
                    collection.data.delete_by_id(uuid=item_id)
            elif filter:
                weaviate_filter = None
                for key, value in filter.items():
                    prop_filter = weaviate.classes.query.Filter.by_property(
                        name=key
                    ).equal(value)
                    weaviate_filter = (
                        prop_filter
                        if weaviate_filter is None
                        else weaviate.classes.query.Filter.all_of(
                            [weaviate_filter, prop_filter]
                        )
                    )

                if weaviate_filter:
                    collection.data.delete_many(where=weaviate_filter)
        except Exception:
            pass

    def reset(self) -> None:
        try:
            for collection_name in self.client.collections.list_all().keys():
                self.client.collections.delete(collection_name)
        except Exception:
            pass
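The sanitization rules in `_sanitize_collection_name` are self-contained and easy to check in isolation. A minimal standalone copy of the same logic (the helper name `sanitize_weaviate_name` is just for illustration) looks like this:

```python
import re


def sanitize_weaviate_name(collection_name: str) -> str:
    # Replace hyphens with underscores, drop any other non-alphanumeric characters
    name = re.sub(r"[^a-zA-Z0-9_]", "", collection_name.replace("-", "_"))
    name = name.strip("_")
    if not name:
        raise ValueError("Could not sanitize collection name")
    # Weaviate class names must start with a letter, so prepend one if needed
    if not name[0].isalpha():
        name = "C" + name
    # ... and must begin with a capital letter
    return name[0].upper() + name[1:]
```

For example, `"open-webui_docs"` becomes `"Open_webui_docs"`, and `"123-chunks"` becomes `"C123_chunks"` because a leading `C` is prepended when the name does not start with a letter.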
@@ -67,6 +67,10 @@ class Vector:
                from open_webui.retrieval.vector.dbs.oracle23ai import Oracle23aiClient

                return Oracle23aiClient()
+            case VectorType.WEAVIATE:
+                from open_webui.retrieval.vector.dbs.weaviate import WeaviateClient
+
+                return WeaviateClient()
            case _:
                raise ValueError(f"Unsupported vector type: {vector_type}")
@@ -11,3 +11,4 @@ class VectorType(StrEnum):
     PGVECTOR = "pgvector"
     ORACLE23AI = "oracle23ai"
     S3VECTOR = "s3vector"
+    WEAVIATE = "weaviate"
128  backend/open_webui/retrieval/web/azure.py  Normal file
@@ -0,0 +1,128 @@
import logging
from typing import Optional
from open_webui.retrieval.web.main import SearchResult, get_filtered_results
from open_webui.env import SRC_LOG_LEVELS

log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"])

"""
Azure AI Search integration for Open WebUI.
Documentation: https://learn.microsoft.com/en-us/python/api/overview/azure/search-documents-readme?view=azure-python

Required package: azure-search-documents
Install: pip install azure-search-documents
"""


def search_azure(
    api_key: str,
    endpoint: str,
    index_name: str,
    query: str,
    count: int,
    filter_list: Optional[list[str]] = None,
) -> list[SearchResult]:
    """
    Search using Azure AI Search.

    Args:
        api_key: Azure Search API key (query key or admin key)
        endpoint: Azure Search service endpoint (e.g., https://myservice.search.windows.net)
        index_name: Name of the search index to query
        query: Search query string
        count: Number of results to return
        filter_list: Optional list of domains to filter results

    Returns:
        List of SearchResult objects with link, title, and snippet
    """
    try:
        from azure.core.credentials import AzureKeyCredential
        from azure.search.documents import SearchClient
    except ImportError:
        log.error(
            "azure-search-documents package is not installed. "
            "Install it with: pip install azure-search-documents"
        )
        raise ImportError(
            "azure-search-documents is required for Azure AI Search. "
            "Install it with: pip install azure-search-documents"
        )

    try:
        # Create search client with API key authentication
        credential = AzureKeyCredential(api_key)
        search_client = SearchClient(
            endpoint=endpoint, index_name=index_name, credential=credential
        )

        # Perform the search
        results = search_client.search(search_text=query, top=count)

        # Convert results to list and extract fields
        search_results = []
        for result in results:
            # Azure AI Search returns documents with custom schemas
            # We need to extract common fields that might represent URL, title, and content
            # Common field names to look for:
            result_dict = dict(result)

            # Try to find URL field (common names)
            link = (
                result_dict.get("url")
                or result_dict.get("link")
                or result_dict.get("uri")
                or result_dict.get("metadata_storage_path")
                or ""
            )

            # Try to find title field (common names)
            title = (
                result_dict.get("title")
                or result_dict.get("name")
                or result_dict.get("metadata_title")
                or result_dict.get("metadata_storage_name")
                or None
            )

            # Try to find content/snippet field (common names)
            snippet = (
                result_dict.get("content")
                or result_dict.get("snippet")
                or result_dict.get("description")
                or result_dict.get("summary")
                or result_dict.get("text")
                or None
            )

            # Truncate snippet if too long
            if snippet and len(snippet) > 500:
                snippet = snippet[:497] + "..."

            if link:  # Only add if we found a valid link
                search_results.append(
                    {
                        "link": link,
                        "title": title,
                        "snippet": snippet,
                    }
                )

        # Apply domain filtering if specified
        if filter_list:
            search_results = get_filtered_results(search_results, filter_list)

        # Convert to SearchResult objects
        return [
            SearchResult(
                link=result["link"],
                title=result.get("title"),
                snippet=result.get("snippet"),
            )
            for result in search_results
        ]

    except Exception as ex:
        log.error(f"Azure AI Search error: {ex}")
        raise ex
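Because Azure AI Search indexes carry arbitrary schemas, the integration above probes a chain of common field names for each returned document. A small standalone sketch of that fallback chain on a hypothetical document (the field names shown are among the fallbacks probed above; the values are invented for illustration):

```python
# Hypothetical document from an Azure AI Search index
doc = {
    "metadata_storage_path": "https://example.com/report.pdf",
    "metadata_title": "Quarterly Report",
    "content": "Revenue grew 12% year over year.",
}

# URL: first non-empty candidate wins
link = (
    doc.get("url")
    or doc.get("link")
    or doc.get("uri")
    or doc.get("metadata_storage_path")
    or ""
)
# Title: same pattern over title-like fields
title = (
    doc.get("title")
    or doc.get("name")
    or doc.get("metadata_title")
    or doc.get("metadata_storage_name")
)
# Snippet: content-like fields, truncated to 500 characters
snippet = (
    doc.get("content")
    or doc.get("snippet")
    or doc.get("description")
    or doc.get("summary")
    or doc.get("text")
)
if snippet and len(snippet) > 500:
    snippet = snippet[:497] + "..."
```

Here the blob-indexer fields `metadata_storage_path` and `metadata_title` are picked up even though the index has no `url` or `title` field.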
@@ -2,27 +2,42 @@ import logging
 from typing import Optional, List

 import requests
-from open_webui.retrieval.web.main import SearchResult, get_filtered_results
+
+from fastapi import Request

 from open_webui.env import SRC_LOG_LEVELS

+from open_webui.retrieval.web.main import SearchResult, get_filtered_results
+from open_webui.utils.headers import include_user_info_headers
+

 log = logging.getLogger(__name__)
 log.setLevel(SRC_LOG_LEVELS["RAG"])


 def search_external(
+    request: Request,
     external_url: str,
     external_api_key: str,
     query: str,
     count: int,
     filter_list: Optional[List[str]] = None,
+    user=None,
 ) -> List[SearchResult]:
     try:
+        headers = {
+            "User-Agent": "Open WebUI (https://github.com/open-webui/open-webui) RAG Bot",
+            "Authorization": f"Bearer {external_api_key}",
+        }
+        headers = include_user_info_headers(headers, user)
+
+        chat_id = getattr(request.state, "chat_id", None)
+        if chat_id:
+            headers["X-OpenWebUI-Chat-Id"] = str(chat_id)
+
         response = requests.post(
             external_url,
-            headers={
-                "User-Agent": "Open WebUI (https://github.com/open-webui/open-webui) RAG Bot",
-                "Authorization": f"Bearer {external_api_key}",
-            },
+            headers=headers,
             json={
                 "query": query,
                 "count": count,
@@ -4,7 +4,6 @@ from typing import Optional, List
 from open_webui.retrieval.web.main import SearchResult, get_filtered_results
 from open_webui.env import SRC_LOG_LEVELS

-from firecrawl import Firecrawl

 log = logging.getLogger(__name__)
 log.setLevel(SRC_LOG_LEVELS["RAG"])

@@ -18,7 +17,9 @@ def search_firecrawl(
     filter_list: Optional[List[str]] = None,
 ) -> List[SearchResult]:
     try:
-        firecrawl = Firecrawl(api_key=firecrawl_api_key, api_url=firecrawl_url)
+        from firecrawl import FirecrawlApp
+
+        firecrawl = FirecrawlApp(api_key=firecrawl_api_key, api_url=firecrawl_url)
         response = firecrawl.search(
             query=query, limit=count, ignore_invalid_urls=True, timeout=count * 3
         )
@@ -15,6 +15,7 @@ def search_google_pse(
     query: str,
     count: int,
     filter_list: Optional[list[str]] = None,
+    referer: Optional[str] = None,
 ) -> list[SearchResult]:
     """Search using Google's Programmable Search Engine API and return the results as a list of SearchResult objects.
     Handles pagination for counts greater than 10.

@@ -30,7 +31,11 @@ def search_google_pse(
         list[SearchResult]: A list of SearchResult objects.
     """
     url = "https://www.googleapis.com/customsearch/v1"
+
+    headers = {"Content-Type": "application/json"}
+    if referer:
+        headers["Referer"] = referer

     all_results = []
     start_index = 1  # Google PSE start parameter is 1-based
@@ -5,18 +5,37 @@ from urllib.parse import urlparse

 from pydantic import BaseModel

+from open_webui.retrieval.web.utils import is_string_allowed, resolve_hostname
+

 def get_filtered_results(results, filter_list):
     if not filter_list:
         return results
+
     filtered_results = []
+
     for result in results:
         url = result.get("url") or result.get("link", "") or result.get("href", "")
         if not validators.url(url):
             continue
+
         domain = urlparse(url).netloc
-        if any(domain.endswith(filtered_domain) for filtered_domain in filter_list):
+        if not domain:
+            continue
+
+        hostnames = [domain]
+
+        try:
+            ipv4_addresses, ipv6_addresses = resolve_hostname(domain)
+            hostnames.extend(ipv4_addresses)
+            hostnames.extend(ipv6_addresses)
+        except Exception:
+            pass
+
+        if any(is_string_allowed(hostname, filter_list) for hostname in hostnames):
             filtered_results.append(result)
+            continue
+
     return filtered_results
@@ -3,6 +3,7 @@ from typing import Optional, Literal
 import requests

 from open_webui.retrieval.web.main import SearchResult, get_filtered_results
+from open_webui.utils.headers import include_user_info_headers
 from open_webui.env import SRC_LOG_LEVELS


@@ -15,6 +16,8 @@ def search_perplexity_search(
     query: str,
     count: int,
     filter_list: Optional[list[str]] = None,
+    api_url: str = "https://api.perplexity.ai/search",
+    user=None,
 ) -> list[SearchResult]:
     """Search using Perplexity API and return the results as a list of SearchResult objects.

@@ -23,6 +26,8 @@ def search_perplexity_search(
         query (str): The query to search for
         count (int): Maximum number of results to return
         filter_list (Optional[list[str]]): List of domains to filter results
+        api_url (str): Custom API URL (defaults to https://api.perplexity.ai/search)
+        user: Optional user object for forwarding user info headers

     """

@@ -30,8 +35,11 @@ def search_perplexity_search(
     if hasattr(api_key, "__str__"):
         api_key = str(api_key)

+    if hasattr(api_url, "__str__"):
+        api_url = str(api_url)
+
     try:
-        url = "https://api.perplexity.ai/search"
+        url = api_url

         # Create payload for the API call
         payload = {

@@ -44,6 +52,10 @@ def search_perplexity_search(
            "Content-Type": "application/json",
        }

+        # Forward user info headers if user is provided
+        if user is not None:
+            headers = include_user_info_headers(headers, user)
+
        # Make the API request
        response = requests.request("POST", url, json=payload, headers=headers)
        # Parse the JSON response
@@ -16,12 +16,15 @@ from typing import (
     Union,
     Literal,
 )

+from fastapi.concurrency import run_in_threadpool
 import aiohttp
 import certifi
 import validators
 from langchain_community.document_loaders import PlaywrightURLLoader, WebBaseLoader
 from langchain_community.document_loaders.base import BaseLoader
 from langchain_core.documents import Document

 from open_webui.retrieval.loaders.tavily import TavilyLoader
 from open_webui.retrieval.loaders.external_web import ExternalWebLoader
 from open_webui.constants import ERROR_MESSAGES

@@ -36,19 +39,79 @@ from open_webui.config import (
     TAVILY_EXTRACT_DEPTH,
     EXTERNAL_WEB_LOADER_URL,
     EXTERNAL_WEB_LOADER_API_KEY,
+    WEB_FETCH_FILTER_LIST,
 )
 from open_webui.env import SRC_LOG_LEVELS

-from firecrawl import Firecrawl

 log = logging.getLogger(__name__)
 log.setLevel(SRC_LOG_LEVELS["RAG"])


+def resolve_hostname(hostname):
+    # Get address information
+    addr_info = socket.getaddrinfo(hostname, None)
+
+    # Extract IP addresses from address information
+    ipv4_addresses = [info[4][0] for info in addr_info if info[0] == socket.AF_INET]
+    ipv6_addresses = [info[4][0] for info in addr_info if info[0] == socket.AF_INET6]
+
+    return ipv4_addresses, ipv6_addresses
+
+
+def get_allow_block_lists(filter_list):
+    allow_list = []
+    block_list = []
+
+    if filter_list:
+        for d in filter_list:
+            if d.startswith("!"):
+                # Domains starting with "!" → blocked
+                block_list.append(d[1:])
+            else:
+                # Domains starting without "!" → allowed
+                allow_list.append(d)
+
+    return allow_list, block_list
+
+
+def is_string_allowed(string: str, filter_list: Optional[list[str]] = None) -> bool:
+    if not filter_list:
+        return True
+
+    allow_list, block_list = get_allow_block_lists(filter_list)
+    # If allow list is non-empty, require domain to match one of them
+    if allow_list:
+        if not any(string.endswith(allowed) for allowed in allow_list):
+            return False
+
+    # Block list always removes matches
+    if any(string.endswith(blocked) for blocked in block_list):
+        return False
+
+    return True
+
+
 def validate_url(url: Union[str, Sequence[str]]):
     if isinstance(url, str):
         if isinstance(validators.url(url), validators.ValidationError):
             raise ValueError(ERROR_MESSAGES.INVALID_URL)

         parsed_url = urllib.parse.urlparse(url)

+        # Protocol validation - only allow http/https
+        if parsed_url.scheme not in ["http", "https"]:
+            log.warning(
+                f"Blocked non-HTTP(S) protocol: {parsed_url.scheme} in URL: {url}"
+            )
+            raise ValueError(ERROR_MESSAGES.INVALID_URL)
+
+        # Blocklist check using unified filtering logic
+        if WEB_FETCH_FILTER_LIST:
+            if not is_string_allowed(url, WEB_FETCH_FILTER_LIST):
+                log.warning(f"URL blocked by filter list: {url}")
+                raise ValueError(ERROR_MESSAGES.INVALID_URL)
+
         if not ENABLE_RAG_LOCAL_WEB_FETCH:
             # Local web fetch is disabled, filter out any URLs that resolve to private IP addresses
             parsed_url = urllib.parse.urlparse(url)

@@ -81,17 +144,6 @@ def safe_validate_urls(url: Sequence[str]) -> Sequence[str]:
     return valid_urls


-def resolve_hostname(hostname):
-    # Get address information
-    addr_info = socket.getaddrinfo(hostname, None)
-
-    # Extract IP addresses from address information
-    ipv4_addresses = [info[4][0] for info in addr_info if info[0] == socket.AF_INET]
-    ipv6_addresses = [info[4][0] for info in addr_info if info[0] == socket.AF_INET6]
-
-    return ipv4_addresses, ipv6_addresses
-
-
 def extract_metadata(soup, url):
     metadata = {"source": url}
     if title := soup.find("title"):

@@ -142,13 +194,13 @@ class RateLimitMixin:


 class URLProcessingMixin:
-    def _verify_ssl_cert(self, url: str) -> bool:
+    async def _verify_ssl_cert(self, url: str) -> bool:
         """Verify SSL certificate for a URL."""
-        return verify_ssl_cert(url)
+        return await run_in_threadpool(verify_ssl_cert, url)

     async def _safe_process_url(self, url: str) -> bool:
         """Perform safety checks before processing a URL."""
-        if self.verify_ssl and not self._verify_ssl_cert(url):
+        if self.verify_ssl and not await self._verify_ssl_cert(url):
             raise ValueError(f"SSL certificate verification failed for {url}")
         await self._wait_for_rate_limit()
         return True

@@ -225,7 +277,9 @@ class SafeFireCrawlLoader(BaseLoader, RateLimitMixin, URLProcessingMixin):
             self.params,
         )
         try:
-            firecrawl = Firecrawl(api_key=self.api_key, api_url=self.api_url)
+            from firecrawl import FirecrawlApp
+
+            firecrawl = FirecrawlApp(api_key=self.api_key, api_url=self.api_url)
             result = firecrawl.batch_scrape(
                 self.web_paths,
                 formats=["markdown"],

@@ -264,7 +318,9 @@ class SafeFireCrawlLoader(BaseLoader, RateLimitMixin, URLProcessingMixin):
             self.params,
         )
         try:
-            firecrawl = Firecrawl(api_key=self.api_key, api_url=self.api_url)
+            from firecrawl import FirecrawlApp
+
+            firecrawl = FirecrawlApp(api_key=self.api_key, api_url=self.api_url)
             result = firecrawl.batch_scrape(
                 self.web_paths,
                 formats=["markdown"],

@@ -637,6 +693,10 @@ def get_web_loader(
     # Check if the URLs are valid
     safe_urls = safe_validate_urls([urls] if isinstance(urls, str) else urls)

+    if not safe_urls:
+        log.warning(f"All provided URLs were blocked or invalid: {urls}")
+        raise ValueError(ERROR_MESSAGES.INVALID_URL)
+
     web_loader_args = {
         "web_paths": safe_urls,
         "verify_ssl": verify_ssl,
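The `!`-prefix convention above splits one filter list into an allow list and a block list. A condensed standalone sketch of the same semantics (same function names, trimmed bodies) behaves like this:

```python
from typing import Optional


def get_allow_block_lists(filter_list):
    allow_list, block_list = [], []
    for d in filter_list or []:
        if d.startswith("!"):
            block_list.append(d[1:])  # "!" prefix means: block this domain
        else:
            allow_list.append(d)  # bare entries form the allow list
    return allow_list, block_list


def is_string_allowed(string: str, filter_list: Optional[list] = None) -> bool:
    if not filter_list:
        return True  # no filters configured: everything passes
    allow_list, block_list = get_allow_block_lists(filter_list)
    # A non-empty allow list is exclusive: the hostname must match some entry
    if allow_list and not any(string.endswith(a) for a in allow_list):
        return False
    # The block list always wins, even over an allow-list match
    if any(string.endswith(b) for b in block_list):
        return False
    return True
```

With `["example.com", "!docs.example.com"]`, `api.example.com` passes, `docs.example.com` is blocked despite matching the allow entry, and `evil.org` fails because the non-empty allow list is exclusive.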
@@ -4,6 +4,7 @@ import logging
 import os
 import uuid
+import html
 import base64
 from functools import lru_cache
 from pydub import AudioSegment
 from pydub.silence import split_on_silence

@@ -15,7 +16,6 @@ import aiohttp
 import aiofiles
 import requests
 import mimetypes
-from urllib.parse import urljoin, quote

 from fastapi import (
     Depends,

@@ -34,6 +34,7 @@ from pydantic import BaseModel


 from open_webui.utils.auth import get_admin_user, get_verified_user
+from open_webui.utils.headers import include_user_info_headers
 from open_webui.config import (
     WHISPER_MODEL_AUTO_UPDATE,
     WHISPER_MODEL_DIR,

@@ -179,6 +180,9 @@ class STTConfigForm(BaseModel):
     AZURE_LOCALES: str
     AZURE_BASE_URL: str
     AZURE_MAX_SPEAKERS: str
+    MISTRAL_API_KEY: str
+    MISTRAL_API_BASE_URL: str
+    MISTRAL_USE_CHAT_COMPLETIONS: bool


 class AudioConfigUpdateForm(BaseModel):

@@ -215,6 +219,9 @@ async def get_audio_config(request: Request, user=Depends(get_admin_user)):
             "AZURE_LOCALES": request.app.state.config.AUDIO_STT_AZURE_LOCALES,
             "AZURE_BASE_URL": request.app.state.config.AUDIO_STT_AZURE_BASE_URL,
             "AZURE_MAX_SPEAKERS": request.app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS,
+            "MISTRAL_API_KEY": request.app.state.config.AUDIO_STT_MISTRAL_API_KEY,
+            "MISTRAL_API_BASE_URL": request.app.state.config.AUDIO_STT_MISTRAL_API_BASE_URL,
+            "MISTRAL_USE_CHAT_COMPLETIONS": request.app.state.config.AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS,
         },
     }

@@ -256,6 +263,13 @@ async def update_audio_config(
     request.app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS = (
         form_data.stt.AZURE_MAX_SPEAKERS
     )
+    request.app.state.config.AUDIO_STT_MISTRAL_API_KEY = form_data.stt.MISTRAL_API_KEY
+    request.app.state.config.AUDIO_STT_MISTRAL_API_BASE_URL = (
+        form_data.stt.MISTRAL_API_BASE_URL
+    )
+    request.app.state.config.AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS = (
+        form_data.stt.MISTRAL_USE_CHAT_COMPLETIONS
+    )

     if request.app.state.config.STT_ENGINE == "":
         request.app.state.faster_whisper_model = set_faster_whisper_model(

@@ -291,6 +305,9 @@ async def update_audio_config(
             "AZURE_LOCALES": request.app.state.config.AUDIO_STT_AZURE_LOCALES,
             "AZURE_BASE_URL": request.app.state.config.AUDIO_STT_AZURE_BASE_URL,
             "AZURE_MAX_SPEAKERS": request.app.state.config.AUDIO_STT_AZURE_MAX_SPEAKERS,
+            "MISTRAL_API_KEY": request.app.state.config.AUDIO_STT_MISTRAL_API_KEY,
+            "MISTRAL_API_BASE_URL": request.app.state.config.AUDIO_STT_MISTRAL_API_BASE_URL,
+            "MISTRAL_USE_CHAT_COMPLETIONS": request.app.state.config.AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS,
         },
     }

@@ -347,23 +364,17 @@ async def speech(request: Request, user=Depends(get_verified_user)):
             **(request.app.state.config.TTS_OPENAI_PARAMS or {}),
         }

+        headers = {
+            "Content-Type": "application/json",
+            "Authorization": f"Bearer {request.app.state.config.TTS_OPENAI_API_KEY}",
+        }
+        if ENABLE_FORWARD_USER_INFO_HEADERS:
+            headers = include_user_info_headers(headers, user)
+
         r = await session.post(
             url=f"{request.app.state.config.TTS_OPENAI_API_BASE_URL}/audio/speech",
             json=payload,
-            headers={
-                "Content-Type": "application/json",
-                "Authorization": f"Bearer {request.app.state.config.TTS_OPENAI_API_KEY}",
-                **(
-                    {
-                        "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
-                        "X-OpenWebUI-User-Id": user.id,
-                        "X-OpenWebUI-User-Email": user.email,
-                        "X-OpenWebUI-User-Role": user.role,
-                    }
-                    if ENABLE_FORWARD_USER_INFO_HEADERS
-                    else {}
-                ),
-            },
+            headers=headers,
             ssl=AIOHTTP_CLIENT_SESSION_SSL,
         )

@@ -553,7 +564,7 @@ async def speech(request: Request, user=Depends(get_verified_user)):
     return FileResponse(file_path)


-def transcription_handler(request, file_path, metadata):
+def transcription_handler(request, file_path, metadata, user=None):
     filename = os.path.basename(file_path)
     file_dir = os.path.dirname(file_path)
     id = filename.split(".")[0]

@@ -604,11 +615,15 @@ def transcription_handler(request, file_path, metadata):
         if language:
             payload["language"] = language

+        headers = {
+            "Authorization": f"Bearer {request.app.state.config.STT_OPENAI_API_KEY}"
+        }
+        if user and ENABLE_FORWARD_USER_INFO_HEADERS:
+            headers = include_user_info_headers(headers, user)
+
         r = requests.post(
             url=f"{request.app.state.config.STT_OPENAI_API_BASE_URL}/audio/transcriptions",
-            headers={
-                "Authorization": f"Bearer {request.app.state.config.STT_OPENAI_API_KEY}"
-            },
+            headers=headers,
             files={"file": (filename, open(file_path, "rb"))},
             data=payload,
         )

@@ -829,8 +844,190 @@ def transcription_handler(request, file_path, metadata):
                 detail=detail if detail else "Open WebUI: Server Connection Error",
             )

+    elif request.app.state.config.STT_ENGINE == "mistral":
+        # Check file exists
+        if not os.path.exists(file_path):
+            raise HTTPException(status_code=400, detail="Audio file not found")

 def transcribe(request: Request, file_path: str, metadata: Optional[dict] = None):

+        # Check file size
+        file_size = os.path.getsize(file_path)
+        if file_size > MAX_FILE_SIZE:
+            raise HTTPException(
+                status_code=400,
+                detail=f"File size exceeds limit of {MAX_FILE_SIZE_MB}MB",
+            )
+
+        api_key = request.app.state.config.AUDIO_STT_MISTRAL_API_KEY
+        api_base_url = (
+            request.app.state.config.AUDIO_STT_MISTRAL_API_BASE_URL
+            or "https://api.mistral.ai/v1"
+        )
+        use_chat_completions = (
+            request.app.state.config.AUDIO_STT_MISTRAL_USE_CHAT_COMPLETIONS
+        )
+
+        if not api_key:
+            raise HTTPException(
+                status_code=400,
+                detail="Mistral API key is required for Mistral STT",
+            )
+
+        r = None
+        try:
+            # Use voxtral-mini-latest as the default model for transcription
+            model = request.app.state.config.STT_MODEL or "voxtral-mini-latest"
+
+            log.info(
+                f"Mistral STT - model: {model}, "
+                f"method: {'chat_completions' if use_chat_completions else 'transcriptions'}"
+            )
+
+            if use_chat_completions:
+                # Use chat completions API with audio input
+                # This method requires mp3 or wav format
+                audio_file_to_use = file_path
+
+                if is_audio_conversion_required(file_path):
+                    log.debug("Converting audio to mp3 for chat completions API")
|
||||
converted_path = convert_audio_to_mp3(file_path)
|
||||
if converted_path:
|
||||
audio_file_to_use = converted_path
|
||||
else:
|
||||
log.error("Audio conversion failed")
|
||||
raise HTTPException(
|
||||
status_code=500,
|
||||
detail="Audio conversion failed. Chat completions API requires mp3 or wav format.",
|
||||
)
|
||||
|
||||
# Read and encode audio file as base64
|
||||
with open(audio_file_to_use, "rb") as audio_file:
|
||||
audio_base64 = base64.b64encode(audio_file.read()).decode("utf-8")
|
||||
|
||||
# Prepare chat completions request
|
||||
url = f"{api_base_url}/chat/completions"
|
||||
|
||||
# Add language instruction if specified
|
||||
language = metadata.get("language", None) if metadata else None
|
||||
if language:
|
||||
text_instruction = f"Transcribe this audio exactly as spoken in {language}. Do not translate it."
|
||||
else:
|
||||
text_instruction = "Transcribe this audio exactly as spoken in its original language. Do not translate it to another language."
|
||||
|
||||
payload = {
|
||||
"model": model,
|
||||
"messages": [
|
||||
{
|
||||
"role": "user",
|
||||
"content": [
|
||||
{
|
||||
"type": "input_audio",
|
||||
"input_audio": audio_base64,
|
||||
},
|
||||
{"type": "text", "text": text_instruction},
|
||||
],
|
||||
}
|
||||
],
|
||||
}
|
||||
|
||||
r = requests.post(
|
||||
url=url,
|
||||
json=payload,
|
||||
headers={
|
||||
"Authorization": f"Bearer {api_key}",
|
||||
"Content-Type": "application/json",
|
||||
},
|
||||
)
|
||||
|
||||
r.raise_for_status()
|
||||
response = r.json()
|
||||
|
||||
# Extract transcript from chat completion response
|
||||
transcript = (
|
||||
response.get("choices", [{}])[0]
|
||||
.get("message", {})
|
||||
.get("content", "")
|
||||
.strip()
|
||||
)
|
||||
if not transcript:
|
||||
raise ValueError("Empty transcript in response")
|
||||
|
||||
data = {"text": transcript}
|
||||
|
||||
else:
|
||||
# Use dedicated transcriptions API
|
||||
url = f"{api_base_url}/audio/transcriptions"
|
||||
|
||||
# Determine the MIME type
|
||||
mime_type, _ = mimetypes.guess_type(file_path)
|
||||
if not mime_type:
|
||||
mime_type = "audio/webm"
|
||||
|
||||
# Use context manager to ensure file is properly closed
|
||||
with open(file_path, "rb") as audio_file:
|
||||
files = {"file": (filename, audio_file, mime_type)}
|
||||
data_form = {"model": model}
|
||||
|
||||
# Add language if specified in metadata
|
||||
language = metadata.get("language", None) if metadata else None
|
||||
if language:
|
||||
data_form["language"] = language
|
||||
|
||||
r = requests.post(
|
||||
url=url,
|
||||
files=files,
|
||||
data=data_form,
|
||||
headers={
|
||||
"Authorization": f"Bearer {api_key}",
|
||||
},
|
||||
)
|
||||
|
||||
r.raise_for_status()
|
||||
response = r.json()
|
||||
|
||||
# Extract transcript from response
|
||||
transcript = response.get("text", "").strip()
|
||||
if not transcript:
|
||||
raise ValueError("Empty transcript in response")
|
||||
|
||||
data = {"text": transcript}
|
||||
|
||||
# Save transcript to json file (consistent with other providers)
|
||||
transcript_file = f"{file_dir}/{id}.json"
|
||||
with open(transcript_file, "w") as f:
|
||||
json.dump(data, f)
|
||||
|
||||
log.debug(data)
|
||||
return data
|
||||
|
||||
except ValueError as e:
|
||||
log.exception("Error parsing Mistral response")
|
||||
raise HTTPException(
|
||||
status_code=500,
|
||||
detail=f"Failed to parse Mistral response: {str(e)}",
|
||||
)
|
||||
except requests.exceptions.RequestException as e:
|
||||
log.exception(e)
|
||||
detail = None
|
||||
|
||||
try:
|
||||
if r is not None and r.status_code != 200:
|
||||
res = r.json()
|
||||
if "error" in res:
|
||||
detail = f"External: {res['error'].get('message', '')}"
|
||||
else:
|
||||
detail = f"External: {r.text}"
|
||||
except Exception:
|
||||
detail = f"External: {e}"
|
||||
|
||||
raise HTTPException(
|
||||
status_code=getattr(r, "status_code", 500) if r else 500,
|
||||
detail=detail if detail else "Open WebUI: Server Connection Error",
|
||||
)
|
||||
|
||||
|
||||
def transcribe(
|
||||
request: Request, file_path: str, metadata: Optional[dict] = None, user=None
|
||||
):
|
||||
log.info(f"transcribe: {file_path} {metadata}")
|
||||
|
||||
if is_audio_conversion_required(file_path):
|
||||
|
|
@ -857,7 +1054,9 @@ def transcribe(request: Request, file_path: str, metadata: Optional[dict] = None
|
|||
with ThreadPoolExecutor() as executor:
|
||||
# Submit tasks for each chunk_path
|
||||
futures = [
|
||||
executor.submit(transcription_handler, request, chunk_path, metadata)
|
||||
executor.submit(
|
||||
transcription_handler, request, chunk_path, metadata, user
|
||||
)
|
||||
for chunk_path in chunk_paths
|
||||
]
|
||||
# Gather results as they complete
|
||||
|
|
@ -992,7 +1191,7 @@ def transcription(
|
|||
if language:
|
||||
metadata = {"language": language}
|
||||
|
||||
result = transcribe(request, file_path, metadata)
|
||||
result = transcribe(request, file_path, metadata, user)
|
||||
|
||||
return {
|
||||
**result,
|
||||
|
|
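The chat-completions branch of the new Mistral STT engine digs the transcript out of a nested response dict and treats an empty result as an error. A standalone sketch of that extraction (the function name is illustrative; the `.get()` chain and the empty-transcript check are taken from the hunk above):

```python
def extract_transcript(response: dict) -> str:
    # Each lookup falls back to an empty container when a key is absent,
    # so a malformed payload yields "" rather than a KeyError.
    transcript = (
        response.get("choices", [{}])[0]
        .get("message", {})
        .get("content", "")
        .strip()
    )
    if not transcript:
        raise ValueError("Empty transcript in response")
    return transcript
```

Note the fallback only guards missing keys: an explicitly empty `choices` list would still raise `IndexError`.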
@@ -4,6 +4,7 @@ import time
 import datetime
 import logging
 from aiohttp import ClientSession
+import urllib
 
 from open_webui.models.auths import (
     AddUserForm,

@@ -35,12 +36,20 @@ from open_webui.env import (
 )
 from fastapi import APIRouter, Depends, HTTPException, Request, status
 from fastapi.responses import RedirectResponse, Response, JSONResponse
-from open_webui.config import OPENID_PROVIDER_URL, ENABLE_OAUTH_SIGNUP, ENABLE_LDAP
+from open_webui.config import (
+    OPENID_PROVIDER_URL,
+    ENABLE_OAUTH_SIGNUP,
+    ENABLE_LDAP,
+    ENABLE_PASSWORD_AUTH,
+)
 from pydantic import BaseModel
 
 from open_webui.utils.misc import parse_duration, validate_email_format
 from open_webui.utils.auth import (
+    validate_password,
+    verify_password,
     decode_token,
+    invalidate_token,
     create_api_key,
     create_token,
     get_admin_user,

@@ -50,7 +59,7 @@ from open_webui.utils.auth import (
     get_http_authorization_cred,
 )
 from open_webui.utils.webhook import post_webhook
-from open_webui.utils.access_control import get_permissions
+from open_webui.utils.access_control import get_permissions, has_permission
 
 from typing import Optional, List
 

@@ -169,13 +178,19 @@ async def update_password(
     if WEBUI_AUTH_TRUSTED_EMAIL_HEADER:
         raise HTTPException(400, detail=ERROR_MESSAGES.ACTION_PROHIBITED)
     if session_user:
-        user = Auths.authenticate_user(session_user.email, form_data.password)
+        user = Auths.authenticate_user(
+            session_user.email, lambda pw: verify_password(form_data.password, pw)
+        )
+
         if user:
+            try:
+                validate_password(form_data.password)
+            except Exception as e:
+                raise HTTPException(400, detail=str(e))
             hashed = get_password_hash(form_data.new_password)
             return Auths.update_user_password_by_id(user.id, hashed)
         else:
-            raise HTTPException(400, detail=ERROR_MESSAGES.INVALID_PASSWORD)
+            raise HTTPException(400, detail=ERROR_MESSAGES.INCORRECT_PASSWORD)
     else:
         raise HTTPException(400, detail=ERROR_MESSAGES.INVALID_CRED)

@@ -185,7 +200,17 @@ async def update_password(
 ############################
 @router.post("/ldap", response_model=SessionUserResponse)
 async def ldap_auth(request: Request, response: Response, form_data: LdapForm):
-    ENABLE_LDAP = request.app.state.config.ENABLE_LDAP
+    # Security checks FIRST - before loading any config
+    if not request.app.state.config.ENABLE_LDAP:
+        raise HTTPException(400, detail="LDAP authentication is not enabled")
+
+    if not ENABLE_PASSWORD_AUTH:
+        raise HTTPException(
+            status_code=status.HTTP_403_FORBIDDEN,
+            detail=ERROR_MESSAGES.ACTION_PROHIBITED,
+        )
+
+    # NOW load LDAP config variables
     LDAP_SERVER_LABEL = request.app.state.config.LDAP_SERVER_LABEL
     LDAP_SERVER_HOST = request.app.state.config.LDAP_SERVER_HOST
     LDAP_SERVER_PORT = request.app.state.config.LDAP_SERVER_PORT

@@ -206,9 +231,6 @@ async def ldap_auth(request: Request, response: Response, form_data: LdapForm):
         else "ALL"
     )
 
-    if not ENABLE_LDAP:
-        raise HTTPException(400, detail="LDAP authentication is not enabled")
-
     try:
         tls = Tls(
             validate=LDAP_VALIDATE_CERT,

@@ -463,6 +485,12 @@ async def ldap_auth(request: Request, response: Response, form_data: LdapForm):
 
 @router.post("/signin", response_model=SessionUserResponse)
 async def signin(request: Request, response: Response, form_data: SigninForm):
+    if not ENABLE_PASSWORD_AUTH:
+        raise HTTPException(
+            status_code=status.HTTP_403_FORBIDDEN,
+            detail=ERROR_MESSAGES.ACTION_PROHIBITED,
+        )
+
     if WEBUI_AUTH_TRUSTED_EMAIL_HEADER:
         if WEBUI_AUTH_TRUSTED_EMAIL_HEADER not in request.headers:
             raise HTTPException(400, detail=ERROR_MESSAGES.INVALID_TRUSTED_HEADER)

@@ -472,6 +500,10 @@ async def signin(request: Request, response: Response, form_data: SigninForm):
 
     if WEBUI_AUTH_TRUSTED_NAME_HEADER:
         name = request.headers.get(WEBUI_AUTH_TRUSTED_NAME_HEADER, email)
+        try:
+            name = urllib.parse.unquote(name, encoding="utf-8")
+        except Exception as e:
+            pass
 
     if not Users.get_user_by_email(email.lower()):
         await signup(

@@ -495,7 +527,9 @@ async def signin(request: Request, response: Response, form_data: SigninForm):
             admin_password = "admin"
 
             if Users.get_user_by_email(admin_email.lower()):
-                user = Auths.authenticate_user(admin_email.lower(), admin_password)
+                user = Auths.authenticate_user(
+                    admin_email.lower(), lambda pw: verify_password(admin_password, pw)
+                )
             else:
                 if Users.has_users():
                     raise HTTPException(400, detail=ERROR_MESSAGES.EXISTING_USERS)

@@ -506,7 +540,9 @@ async def signin(request: Request, response: Response, form_data: SigninForm):
                     SignupForm(email=admin_email, password=admin_password, name="User"),
                 )
 
-                user = Auths.authenticate_user(admin_email.lower(), admin_password)
+                user = Auths.authenticate_user(
+                    admin_email.lower(), lambda pw: verify_password(admin_password, pw)
+                )
         else:
             password_bytes = form_data.password.encode("utf-8")
             if len(password_bytes) > 72:

@@ -517,7 +553,9 @@ async def signin(request: Request, response: Response, form_data: SigninForm):
             # decode safely — ignore incomplete UTF-8 sequences
             form_data.password = password_bytes.decode("utf-8", errors="ignore")
 
-        user = Auths.authenticate_user(form_data.email.lower(), form_data.password)
+        user = Auths.authenticate_user(
+            form_data.email.lower(), lambda pw: verify_password(form_data.password, pw)
+        )
 
     if user:
 

@@ -599,16 +637,14 @@ async def signup(request: Request, response: Response, form_data: SignupForm):
         raise HTTPException(400, detail=ERROR_MESSAGES.EMAIL_TAKEN)
 
     try:
-        role = "admin" if not has_users else request.app.state.config.DEFAULT_USER_ROLE
-
-        # The password passed to bcrypt must be 72 bytes or fewer. If it is longer, it will be truncated before hashing.
-        if len(form_data.password.encode("utf-8")) > 72:
-            raise HTTPException(
-                status.HTTP_400_BAD_REQUEST,
-                detail=ERROR_MESSAGES.PASSWORD_TOO_LONG,
-            )
+        try:
+            validate_password(form_data.password)
+        except Exception as e:
+            raise HTTPException(400, detail=str(e))
+
         hashed = get_password_hash(form_data.password)
 
+        role = "admin" if not has_users else request.app.state.config.DEFAULT_USER_ROLE
         user = Auths.insert_new_auth(
             form_data.email.lower(),
             hashed,

@@ -664,6 +700,10 @@ async def signup(request: Request, response: Response, form_data: SignupForm):
         # Disable signup after the first user is created
         request.app.state.config.ENABLE_SIGNUP = False
 
+    default_group_id = getattr(request.app.state.config, "DEFAULT_GROUP_ID", "")
+    if default_group_id and default_group_id:
+        Groups.add_users_to_group(default_group_id, [user.id])
+
     return {
         "token": token,
         "token_type": "Bearer",

@@ -684,6 +724,19 @@ async def signup(request: Request, response: Response, form_data: SignupForm):
 
 @router.get("/signout")
 async def signout(request: Request, response: Response):
+
+    # get auth token from headers or cookies
+    token = None
+    auth_header = request.headers.get("Authorization")
+    if auth_header:
+        auth_cred = get_http_authorization_cred(auth_header)
+        token = auth_cred.credentials
+    else:
+        token = request.cookies.get("token")
+
+    if token:
+        await invalidate_token(request, token)
+
     response.delete_cookie("token")
     response.delete_cookie("oui-session")
     response.delete_cookie("oauth_id_token")

@@ -764,6 +817,11 @@ async def add_user(form_data: AddUserForm, user=Depends(get_admin_user)):
         raise HTTPException(400, detail=ERROR_MESSAGES.EMAIL_TAKEN)
 
     try:
+        try:
+            validate_password(form_data.password)
+        except Exception as e:
+            raise HTTPException(400, detail=str(e))
+
         hashed = get_password_hash(form_data.password)
         user = Auths.insert_new_auth(
             form_data.email.lower(),

@@ -835,10 +893,11 @@ async def get_admin_config(request: Request, user=Depends(get_admin_user)):
         "SHOW_ADMIN_DETAILS": request.app.state.config.SHOW_ADMIN_DETAILS,
         "WEBUI_URL": request.app.state.config.WEBUI_URL,
         "ENABLE_SIGNUP": request.app.state.config.ENABLE_SIGNUP,
-        "ENABLE_API_KEY": request.app.state.config.ENABLE_API_KEY,
-        "ENABLE_API_KEY_ENDPOINT_RESTRICTIONS": request.app.state.config.ENABLE_API_KEY_ENDPOINT_RESTRICTIONS,
-        "API_KEY_ALLOWED_ENDPOINTS": request.app.state.config.API_KEY_ALLOWED_ENDPOINTS,
+        "ENABLE_API_KEYS": request.app.state.config.ENABLE_API_KEYS,
+        "ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS": request.app.state.config.ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS,
+        "API_KEYS_ALLOWED_ENDPOINTS": request.app.state.config.API_KEYS_ALLOWED_ENDPOINTS,
         "DEFAULT_USER_ROLE": request.app.state.config.DEFAULT_USER_ROLE,
+        "DEFAULT_GROUP_ID": request.app.state.config.DEFAULT_GROUP_ID,
         "JWT_EXPIRES_IN": request.app.state.config.JWT_EXPIRES_IN,
         "ENABLE_COMMUNITY_SHARING": request.app.state.config.ENABLE_COMMUNITY_SHARING,
         "ENABLE_MESSAGE_RATING": request.app.state.config.ENABLE_MESSAGE_RATING,

@@ -855,10 +914,11 @@ class AdminConfig(BaseModel):
     SHOW_ADMIN_DETAILS: bool
     WEBUI_URL: str
     ENABLE_SIGNUP: bool
-    ENABLE_API_KEY: bool
-    ENABLE_API_KEY_ENDPOINT_RESTRICTIONS: bool
-    API_KEY_ALLOWED_ENDPOINTS: str
+    ENABLE_API_KEYS: bool
+    ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS: bool
+    API_KEYS_ALLOWED_ENDPOINTS: str
     DEFAULT_USER_ROLE: str
+    DEFAULT_GROUP_ID: str
     JWT_EXPIRES_IN: str
     ENABLE_COMMUNITY_SHARING: bool
     ENABLE_MESSAGE_RATING: bool

@@ -878,12 +938,12 @@ async def update_admin_config(
     request.app.state.config.WEBUI_URL = form_data.WEBUI_URL
     request.app.state.config.ENABLE_SIGNUP = form_data.ENABLE_SIGNUP
 
-    request.app.state.config.ENABLE_API_KEY = form_data.ENABLE_API_KEY
-    request.app.state.config.ENABLE_API_KEY_ENDPOINT_RESTRICTIONS = (
-        form_data.ENABLE_API_KEY_ENDPOINT_RESTRICTIONS
+    request.app.state.config.ENABLE_API_KEYS = form_data.ENABLE_API_KEYS
+    request.app.state.config.ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS = (
+        form_data.ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS
     )
-    request.app.state.config.API_KEY_ALLOWED_ENDPOINTS = (
-        form_data.API_KEY_ALLOWED_ENDPOINTS
+    request.app.state.config.API_KEYS_ALLOWED_ENDPOINTS = (
+        form_data.API_KEYS_ALLOWED_ENDPOINTS
     )
 
     request.app.state.config.ENABLE_CHANNELS = form_data.ENABLE_CHANNELS

@@ -892,6 +952,8 @@ async def update_admin_config(
     if form_data.DEFAULT_USER_ROLE in ["pending", "user", "admin"]:
         request.app.state.config.DEFAULT_USER_ROLE = form_data.DEFAULT_USER_ROLE
 
+    request.app.state.config.DEFAULT_GROUP_ID = form_data.DEFAULT_GROUP_ID
+
     pattern = r"^(-1|0|(-?\d+(\.\d+)?)(ms|s|m|h|d|w))$"
 
     # Check if the input string matches the pattern

@@ -918,10 +980,11 @@ async def update_admin_config(
         "SHOW_ADMIN_DETAILS": request.app.state.config.SHOW_ADMIN_DETAILS,
         "WEBUI_URL": request.app.state.config.WEBUI_URL,
         "ENABLE_SIGNUP": request.app.state.config.ENABLE_SIGNUP,
-        "ENABLE_API_KEY": request.app.state.config.ENABLE_API_KEY,
-        "ENABLE_API_KEY_ENDPOINT_RESTRICTIONS": request.app.state.config.ENABLE_API_KEY_ENDPOINT_RESTRICTIONS,
-        "API_KEY_ALLOWED_ENDPOINTS": request.app.state.config.API_KEY_ALLOWED_ENDPOINTS,
+        "ENABLE_API_KEYS": request.app.state.config.ENABLE_API_KEYS,
+        "ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS": request.app.state.config.ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS,
+        "API_KEYS_ALLOWED_ENDPOINTS": request.app.state.config.API_KEYS_ALLOWED_ENDPOINTS,
         "DEFAULT_USER_ROLE": request.app.state.config.DEFAULT_USER_ROLE,
+        "DEFAULT_GROUP_ID": request.app.state.config.DEFAULT_GROUP_ID,
         "JWT_EXPIRES_IN": request.app.state.config.JWT_EXPIRES_IN,
         "ENABLE_COMMUNITY_SHARING": request.app.state.config.ENABLE_COMMUNITY_SHARING,
         "ENABLE_MESSAGE_RATING": request.app.state.config.ENABLE_MESSAGE_RATING,

@@ -1045,9 +1108,11 @@ async def update_ldap_config(
 # create api key
 @router.post("/api_key", response_model=ApiKey)
 async def generate_api_key(request: Request, user=Depends(get_current_user)):
-    if not request.app.state.config.ENABLE_API_KEY:
+    if not request.app.state.config.ENABLE_API_KEYS or not has_permission(
+        user.id, "features.api_keys", request.app.state.config.USER_PERMISSIONS
+    ):
         raise HTTPException(
-            status.HTTP_403_FORBIDDEN,
+            status_code=status.HTTP_403_FORBIDDEN,
             detail=ERROR_MESSAGES.API_KEY_CREATION_NOT_ALLOWED,
         )
 
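Several `Auths.authenticate_user` call sites above switch from passing the plaintext password to passing a verification callback (`lambda pw: verify_password(form_data.password, pw)`), so the model layer compares against the stored hash without ever receiving the plaintext itself. A minimal sketch of the pattern (the class, the in-memory store, and the sha256 stand-in are illustrative; Open WebUI hashes with bcrypt):

```python
import hashlib
import hmac


def hash_password(password: str) -> str:
    # Illustrative stand-in for bcrypt hashing.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()


def verify_password(plain: str, hashed: str) -> bool:
    # Constant-time comparison of the candidate hash against the stored one.
    return hmac.compare_digest(hash_password(plain), hashed)


class Auths:
    # Hypothetical in-memory credential store.
    _store = {"ada@example.com": hash_password("s3cret")}

    @classmethod
    def authenticate_user(cls, email, verify):
        hashed = cls._store.get(email)
        # The callback receives only the stored hash, never the plaintext.
        if hashed is not None and verify(hashed):
            return email
        return None
```

Usage mirrors the diff: `Auths.authenticate_user(email, lambda pw: verify_password(candidate, pw))`.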
@@ -7,6 +7,7 @@ from open_webui.socket.main import get_event_emitter
 from open_webui.models.chats import (
     ChatForm,
     ChatImportForm,
+    ChatsImportForm,
     ChatResponse,
     Chats,
     ChatTitleIdResponse,

@@ -142,26 +143,15 @@ async def create_new_chat(form_data: ChatForm, user=Depends(get_verified_user)):
 
 
 ############################
-# ImportChat
+# ImportChats
 ############################
 
 
-@router.post("/import", response_model=Optional[ChatResponse])
-async def import_chat(form_data: ChatImportForm, user=Depends(get_verified_user)):
+@router.post("/import", response_model=list[ChatResponse])
+async def import_chats(form_data: ChatsImportForm, user=Depends(get_verified_user)):
     try:
-        chat = Chats.import_chat(user.id, form_data)
-        if chat:
-            tags = chat.meta.get("tags", [])
-            for tag_id in tags:
-                tag_id = tag_id.replace(" ", "_").lower()
-                tag_name = " ".join([word.capitalize() for word in tag_id.split("_")])
-                if (
-                    tag_id != "none"
-                    and Tags.get_tag_by_name_and_user_id(tag_name, user.id) is None
-                ):
-                    Tags.insert_new_tag(tag_name, user.id)
-
-            return ChatResponse(**chat.model_dump())
+        chats = Chats.import_chats(user.id, form_data.chats)
+        return chats
     except Exception as e:
         log.exception(e)
         raise HTTPException(

@@ -228,7 +218,7 @@ async def get_chat_list_by_folder_id(
     folder_id: str, page: Optional[int] = 1, user=Depends(get_verified_user)
 ):
     try:
-        limit = 60
+        limit = 10
         skip = (page - 1) * limit
 
         return [

@@ -658,19 +648,28 @@ async def clone_chat_by_id(
             "title": form_data.title if form_data.title else f"Clone of {chat.title}",
         }
 
-        chat = Chats.import_chat(
+        chats = Chats.import_chats(
             user.id,
-            ChatImportForm(
-                **{
-                    "chat": updated_chat,
-                    "meta": chat.meta,
-                    "pinned": chat.pinned,
-                    "folder_id": chat.folder_id,
-                }
-            ),
+            [
+                ChatImportForm(
+                    **{
+                        "chat": updated_chat,
+                        "meta": chat.meta,
+                        "pinned": chat.pinned,
+                        "folder_id": chat.folder_id,
+                    }
+                )
+            ],
         )
 
-        return ChatResponse(**chat.model_dump())
+        if chats:
+            chat = chats[0]
+            return ChatResponse(**chat.model_dump())
+        else:
+            raise HTTPException(
+                status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
+                detail=ERROR_MESSAGES.DEFAULT(),
+            )
     else:
         raise HTTPException(
             status_code=status.HTTP_401_UNAUTHORIZED, detail=ERROR_MESSAGES.DEFAULT()

@@ -698,18 +697,28 @@ async def clone_shared_chat_by_id(id: str, user=Depends(get_verified_user)):
             "title": f"Clone of {chat.title}",
         }
 
-        chat = Chats.import_chat(
+        chats = Chats.import_chats(
             user.id,
-            ChatImportForm(
-                **{
-                    "chat": updated_chat,
-                    "meta": chat.meta,
-                    "pinned": chat.pinned,
-                    "folder_id": chat.folder_id,
-                }
-            ),
+            [
+                ChatImportForm(
+                    **{
+                        "chat": updated_chat,
+                        "meta": chat.meta,
+                        "pinned": chat.pinned,
+                        "folder_id": chat.folder_id,
+                    }
+                )
+            ],
         )
-        return ChatResponse(**chat.model_dump())
+
+        if chats:
+            chat = chats[0]
+            return ChatResponse(**chat.model_dump())
+        else:
+            raise HTTPException(
+                status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
+                detail=ERROR_MESSAGES.DEFAULT(),
+            )
     else:
         raise HTTPException(
             status_code=status.HTTP_401_UNAUTHORIZED, detail=ERROR_MESSAGES.DEFAULT()
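The folder chat-list hunk drops the page size from 60 to 10 while keeping the usual 1-indexed skip/limit arithmetic. A small sketch of that paging (function name illustrative):

```python
def paginate(items: list, page: int = 1, limit: int = 10) -> list:
    # 1-indexed pages: page 1 returns items[0:limit],
    # page 2 returns items[limit:2*limit], and so on.
    page = max(1, page)
    skip = (page - 1) * limit
    return items[skip : skip + limit]
```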
@@ -144,6 +144,7 @@ class ToolServerConnection(BaseModel):
     path: str
     type: Optional[str] = "openapi"  # openapi, mcp
     auth_type: Optional[str]
+    headers: Optional[dict | str] = None
     key: Optional[str]
     config: Optional[dict]
 

@@ -229,7 +230,7 @@ async def verify_tool_servers_config(
                 log.debug(
                     f"Trying to fetch OAuth 2.1 discovery document from {discovery_url}"
                 )
-                async with aiohttp.ClientSession() as session:
+                async with aiohttp.ClientSession(trust_env=True) as session:
                     async with session.get(
                         discovery_url
                     ) as oauth_server_metadata_response:

@@ -270,18 +271,26 @@ async def verify_tool_servers_config(
             elif form_data.auth_type == "session":
                 token = request.state.token.credentials
             elif form_data.auth_type == "system_oauth":
+                oauth_token = None
                 try:
                     if request.cookies.get("oauth_session_id", None):
-                        token = await request.app.state.oauth_manager.get_oauth_token(
+                        oauth_token = await request.app.state.oauth_manager.get_oauth_token(
                             user.id,
                             request.cookies.get("oauth_session_id", None),
                         )
+
+                    if oauth_token:
+                        token = oauth_token.get("access_token", "")
                 except Exception as e:
                     pass
 
             if token:
                 headers = {"Authorization": f"Bearer {token}"}
 
+            if form_data.headers and isinstance(form_data.headers, dict):
+                if headers is None:
+                    headers = {}
+                headers.update(form_data.headers)
+
             await client.connect(form_data.url, headers=headers)
             specs = await client.list_tool_specs()
             return {

@@ -299,6 +308,7 @@ async def verify_tool_servers_config(
             await client.disconnect()
     else:  # openapi
         token = None
+        headers = None
         if form_data.auth_type == "bearer":
             token = form_data.key
         elif form_data.auth_type == "session":

@@ -306,15 +316,29 @@ async def verify_tool_servers_config(
         elif form_data.auth_type == "system_oauth":
             try:
                 if request.cookies.get("oauth_session_id", None):
-                    token = await request.app.state.oauth_manager.get_oauth_token(
-                        user.id,
-                        request.cookies.get("oauth_session_id", None),
+                    oauth_token = (
+                        await request.app.state.oauth_manager.get_oauth_token(
+                            user.id,
+                            request.cookies.get("oauth_session_id", None),
+                        )
                     )
+
+                    if oauth_token:
+                        token = oauth_token.get("access_token", "")
+
             except Exception as e:
                 pass
 
+        if token:
+            headers = {"Authorization": f"Bearer {token}"}
+
+        if form_data.headers and isinstance(form_data.headers, dict):
+            if headers is None:
+                headers = {}
+            headers.update(form_data.headers)
+
         url = get_tool_server_url(form_data.url, form_data.path)
-        return await get_tool_server_data(token, url)
+        return await get_tool_server_data(url, headers=headers)
     except HTTPException as e:
         raise e
     except Exception as e:

@@ -439,6 +463,7 @@ async def set_code_execution_config(
 ############################
 class ModelsConfigForm(BaseModel):
     DEFAULT_MODELS: Optional[str]
+    DEFAULT_PINNED_MODELS: Optional[str]
     MODEL_ORDER_LIST: Optional[list[str]]
 
 

@@ -446,6 +471,7 @@ class ModelsConfigForm(BaseModel):
 async def get_models_config(request: Request, user=Depends(get_admin_user)):
     return {
         "DEFAULT_MODELS": request.app.state.config.DEFAULT_MODELS,
+        "DEFAULT_PINNED_MODELS": request.app.state.config.DEFAULT_PINNED_MODELS,
         "MODEL_ORDER_LIST": request.app.state.config.MODEL_ORDER_LIST,
     }
 

@@ -455,9 +481,11 @@ async def set_models_config(
     request: Request, form_data: ModelsConfigForm, user=Depends(get_admin_user)
 ):
     request.app.state.config.DEFAULT_MODELS = form_data.DEFAULT_MODELS
+    request.app.state.config.DEFAULT_PINNED_MODELS = form_data.DEFAULT_PINNED_MODELS
     request.app.state.config.MODEL_ORDER_LIST = form_data.MODEL_ORDER_LIST
     return {
         "DEFAULT_MODELS": request.app.state.config.DEFAULT_MODELS,
+        "DEFAULT_PINNED_MODELS": request.app.state.config.DEFAULT_PINNED_MODELS,
         "MODEL_ORDER_LIST": request.app.state.config.MODEL_ORDER_LIST,
     }
 
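Both tool-server branches above now build request headers the same way: an optional bearer token seeds the dict, then any custom `headers` from the connection form are merged on top. A standalone sketch of that precedence (function name illustrative):

```python
def build_headers(token, custom_headers):
    headers = None
    if token:
        headers = {"Authorization": f"Bearer {token}"}
    # Custom per-connection headers are layered on top and can override
    # the Authorization header if they include one.
    if custom_headers and isinstance(custom_headers, dict):
        if headers is None:
            headers = {}
        headers.update(custom_headers)
    return headers
```

With neither a token nor custom headers, the result stays `None`, matching the `headers = None` initialization in the openapi branch.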
@@ -7,6 +7,8 @@ from open_webui.models.feedbacks import (
     FeedbackModel,
     FeedbackResponse,
     FeedbackForm,
+    FeedbackUserResponse,
+    FeedbackListResponse,
     Feedbacks,
 )
 

@@ -56,35 +58,10 @@ async def update_config(
     }
 
 
-class UserResponse(BaseModel):
-    id: str
-    name: str
-    email: str
-    role: str = "pending"
-
-    last_active_at: int  # timestamp in epoch
-    updated_at: int  # timestamp in epoch
-    created_at: int  # timestamp in epoch
-
-
-class FeedbackUserResponse(FeedbackResponse):
-    user: Optional[UserResponse] = None
-
-
-@router.get("/feedbacks/all", response_model=list[FeedbackUserResponse])
+@router.get("/feedbacks/all", response_model=list[FeedbackResponse])
 async def get_all_feedbacks(user=Depends(get_admin_user)):
     feedbacks = Feedbacks.get_all_feedbacks()
-
-    feedback_list = []
-    for feedback in feedbacks:
-        user = Users.get_user_by_id(feedback.user_id)
-        feedback_list.append(
-            FeedbackUserResponse(
-                **feedback.model_dump(),
-                user=UserResponse(**user.model_dump()) if user else None,
-            )
-        )
-    return feedback_list
+    return feedbacks
 
 
 @router.delete("/feedbacks/all")

@@ -111,6 +88,31 @@ async def delete_feedbacks(user=Depends(get_verified_user)):
     return success
 
 
+PAGE_ITEM_COUNT = 30
+
+
+@router.get("/feedbacks/list", response_model=FeedbackListResponse)
+async def get_feedbacks(
+    order_by: Optional[str] = None,
+    direction: Optional[str] = None,
+    page: Optional[int] = 1,
+    user=Depends(get_admin_user),
+):
+    limit = PAGE_ITEM_COUNT
+
+    page = max(1, page)
+    skip = (page - 1) * limit
+
+    filter = {}
+    if order_by:
+        filter["order_by"] = order_by
+    if direction:
+        filter["direction"] = direction
+
+    result = Feedbacks.get_feedback_items(filter=filter, skip=skip, limit=limit)
+    return result
+
+
 @router.post("/feedback", response_model=FeedbackModel)
 async def create_feedback(
     request: Request,
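The new `/feedbacks/list` endpoint assembles an optional sort filter plus skip/limit paging before querying. A sketch of that assembly (function name illustrative; `PAGE_ITEM_COUNT` mirrors the constant above):

```python
PAGE_ITEM_COUNT = 30


def build_feedback_query(order_by=None, direction=None, page=1):
    limit = PAGE_ITEM_COUNT
    page = max(1, page)  # clamp page to at least 1
    skip = (page - 1) * limit
    filter = {}
    if order_by:
        filter["order_by"] = order_by
    if direction:
        filter["direction"] = direction
    return filter, skip, limit
```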
@@ -102,7 +102,7 @@ def process_uploaded_file(request, file, file_path, file_item, file_metadata, us
         )
     ):
         file_path = Storage.get_file(file_path)
-        result = transcribe(request, file_path, file_metadata)
+        result = transcribe(request, file_path, file_metadata, user)
 
         process_file(
             request,
@@ -258,7 +258,10 @@ async def update_folder_is_expanded_by_id(
 
 @router.delete("/{id}")
 async def delete_folder_by_id(
-    request: Request, id: str, user=Depends(get_verified_user)
+    request: Request,
+    id: str,
+    delete_contents: Optional[bool] = True,
+    user=Depends(get_verified_user),
 ):
     if Chats.count_chats_by_folder_id_and_user_id(id, user.id):
         chat_delete_permission = has_permission(

@@ -277,8 +280,14 @@ async def delete_folder_by_id(
     if folder:
         try:
             folder_ids = Folders.delete_folder_by_id_and_user_id(id, user.id)
 
             for folder_id in folder_ids:
-                Chats.delete_chats_by_user_id_and_folder_id(user.id, folder_id)
+                if delete_contents:
+                    Chats.delete_chats_by_user_id_and_folder_id(user.id, folder_id)
+                else:
+                    Chats.move_chats_by_user_id_and_folder_id(
+                        user.id, folder_id, None
+                    )
 
             return True
         except Exception as e:
|
|||
|
|
@@ -31,11 +31,32 @@ router = APIRouter()
 
 
 @router.get("/", response_model=list[GroupResponse])
-async def get_groups(user=Depends(get_verified_user)):
+async def get_groups(share: Optional[bool] = None, user=Depends(get_verified_user)):
     if user.role == "admin":
-        return Groups.get_groups()
+        groups = Groups.get_groups()
     else:
-        return Groups.get_groups_by_member_id(user.id)
+        groups = Groups.get_groups_by_member_id(user.id)
+
+    group_list = []
+
+    for group in groups:
+        if share is not None:
+            # Check if the group has data and a config with share key
+            if (
+                group.data
+                and "share" in group.data.get("config", {})
+                and group.data["config"]["share"] != share
+            ):
+                continue
+
+        group_list.append(
+            GroupResponse(
+                **group.model_dump(),
+                member_count=Groups.get_group_member_count_by_id(group.id),
+            )
+        )
+
+    return group_list
 
 
 ############################

@@ -48,7 +69,10 @@ async def create_new_group(form_data: GroupForm, user=Depends(get_admin_user)):
     try:
         group = Groups.insert_new_group(user.id, form_data)
         if group:
-            return group
+            return GroupResponse(
+                **group.model_dump(),
+                member_count=Groups.get_group_member_count_by_id(group.id),
+            )
         else:
             raise HTTPException(
                 status_code=status.HTTP_400_BAD_REQUEST,

@@ -71,7 +95,10 @@ async def create_new_group(form_data: GroupForm, user=Depends(get_admin_user)):
 async def get_group_by_id(id: str, user=Depends(get_admin_user)):
     group = Groups.get_group_by_id(id)
     if group:
-        return group
+        return GroupResponse(
+            **group.model_dump(),
+            member_count=Groups.get_group_member_count_by_id(group.id),
+        )
     else:
         raise HTTPException(
             status_code=status.HTTP_401_UNAUTHORIZED,

@@ -89,12 +116,12 @@ async def update_group_by_id(
     id: str, form_data: GroupUpdateForm, user=Depends(get_admin_user)
 ):
     try:
         if form_data.user_ids:
             form_data.user_ids = Users.get_valid_user_ids(form_data.user_ids)
 
         group = Groups.update_group_by_id(id, form_data)
         if group:
-            return group
+            return GroupResponse(
+                **group.model_dump(),
+                member_count=Groups.get_group_member_count_by_id(group.id),
+            )
         else:
             raise HTTPException(
                 status_code=status.HTTP_400_BAD_REQUEST,

@@ -123,7 +150,10 @@ async def add_user_to_group(
 
     group = Groups.add_users_to_group(id, form_data.user_ids)
     if group:
-        return group
+        return GroupResponse(
+            **group.model_dump(),
+            member_count=Groups.get_group_member_count_by_id(group.id),
+        )
     else:
         raise HTTPException(
             status_code=status.HTTP_400_BAD_REQUEST,

@@ -144,7 +174,10 @@ async def remove_users_from_group(
     try:
         group = Groups.remove_users_from_group(id, form_data.user_ids)
         if group:
-            return group
+            return GroupResponse(
+                **group.model_dump(),
+                member_count=Groups.get_group_member_count_by_id(group.id),
+            )
         else:
             raise HTTPException(
                 status_code=status.HTTP_400_BAD_REQUEST,
File diff suppressed because it is too large
@@ -1,6 +1,7 @@
 from fastapi import APIRouter, Depends, HTTPException, Request
 from pydantic import BaseModel
 import logging
+import asyncio
 from typing import Optional
 
 from open_webui.models.memories import Memories, MemoryModel

@@ -17,7 +18,7 @@ router = APIRouter()
 
 @router.get("/ef")
 async def get_embeddings(request: Request):
-    return {"result": request.app.state.EMBEDDING_FUNCTION("hello world")}
+    return {"result": await request.app.state.EMBEDDING_FUNCTION("hello world")}
 
 
 ############################

@@ -51,15 +52,15 @@ async def add_memory(
 ):
     memory = Memories.insert_new_memory(user.id, form_data.content)
 
+    vector = await request.app.state.EMBEDDING_FUNCTION(memory.content, user=user)
+
     VECTOR_DB_CLIENT.upsert(
         collection_name=f"user-memory-{user.id}",
         items=[
             {
                 "id": memory.id,
                 "text": memory.content,
-                "vector": request.app.state.EMBEDDING_FUNCTION(
-                    memory.content, user=user
-                ),
+                "vector": vector,
                 "metadata": {"created_at": memory.created_at},
             }
         ],

@@ -86,9 +87,11 @@ async def query_memory(
     if not memories:
         raise HTTPException(status_code=404, detail="No memories found for user")
 
+    vector = await request.app.state.EMBEDDING_FUNCTION(form_data.content, user=user)
+
     results = VECTOR_DB_CLIENT.search(
         collection_name=f"user-memory-{user.id}",
-        vectors=[request.app.state.EMBEDDING_FUNCTION(form_data.content, user=user)],
+        vectors=[vector],
         limit=form_data.k,
     )
 

@@ -105,21 +108,28 @@ async def reset_memory_from_vector_db(
     VECTOR_DB_CLIENT.delete_collection(f"user-memory-{user.id}")
 
     memories = Memories.get_memories_by_user_id(user.id)
 
+    # Generate vectors in parallel
+    vectors = await asyncio.gather(
+        *[
+            request.app.state.EMBEDDING_FUNCTION(memory.content, user=user)
+            for memory in memories
+        ]
+    )
+
     VECTOR_DB_CLIENT.upsert(
         collection_name=f"user-memory-{user.id}",
         items=[
             {
                 "id": memory.id,
                 "text": memory.content,
-                "vector": request.app.state.EMBEDDING_FUNCTION(
-                    memory.content, user=user
-                ),
+                "vector": vectors[idx],
                 "metadata": {
                     "created_at": memory.created_at,
                     "updated_at": memory.updated_at,
                 },
             }
-            for memory in memories
+            for idx, memory in enumerate(memories)
        ],
     )
 

@@ -164,15 +174,15 @@ async def update_memory_by_id(
         raise HTTPException(status_code=404, detail="Memory not found")
 
     if form_data.content is not None:
+        vector = await request.app.state.EMBEDDING_FUNCTION(memory.content, user=user)
+
         VECTOR_DB_CLIENT.upsert(
             collection_name=f"user-memory-{user.id}",
             items=[
                 {
                     "id": memory.id,
                     "text": memory.content,
-                    "vector": request.app.state.EMBEDDING_FUNCTION(
-                        memory.content, user=user
-                    ),
+                    "vector": vector,
                     "metadata": {
                         "created_at": memory.created_at,
                         "updated_at": memory.updated_at,
@@ -9,7 +9,7 @@ from open_webui.models.models import (
     ModelForm,
     ModelModel,
     ModelResponse,
-    ModelUserResponse,
+    ModelListResponse,
     Models,
 )
 

@@ -35,7 +35,7 @@ log = logging.getLogger(__name__)
 router = APIRouter()
 
 
-def validate_model_id(model_id: str) -> bool:
+def is_valid_model_id(model_id: str) -> bool:
     return model_id and len(model_id) <= 256
 
 

@@ -44,12 +44,43 @@ def validate_model_id(model_id: str) -> bool:
 ###########################
 
 
-@router.get("/", response_model=list[ModelUserResponse])
-async def get_models(id: Optional[str] = None, user=Depends(get_verified_user)):
-    if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
-        return Models.get_models()
-    else:
-        return Models.get_models_by_user_id(user.id)
+PAGE_ITEM_COUNT = 30
+
+
+@router.get(
+    "/list", response_model=ModelListResponse
+)  # do NOT use "/" as path, conflicts with main.py
+async def get_models(
+    query: Optional[str] = None,
+    view_option: Optional[str] = None,
+    tag: Optional[str] = None,
+    order_by: Optional[str] = None,
+    direction: Optional[str] = None,
+    page: Optional[int] = 1,
+    user=Depends(get_verified_user),
+):
+
+    limit = PAGE_ITEM_COUNT
+
+    page = max(1, page)
+    skip = (page - 1) * limit
+
+    filter = {}
+    if query:
+        filter["query"] = query
+    if view_option:
+        filter["view_option"] = view_option
+    if tag:
+        filter["tag"] = tag
+    if order_by:
+        filter["order_by"] = order_by
+    if direction:
+        filter["direction"] = direction
+
+    if not user.role == "admin" or not BYPASS_ADMIN_ACCESS_CONTROL:
+        filter["user_id"] = user.id
+
+    return Models.search_models(user.id, filter=filter, skip=skip, limit=limit)
 
 
 ###########################

@@ -62,6 +93,30 @@ async def get_base_models(user=Depends(get_admin_user)):
     return Models.get_base_models()
 
 
+###########################
+# GetModelTags
+###########################
+
+
+@router.get("/tags", response_model=list[str])
+async def get_model_tags(user=Depends(get_verified_user)):
+    if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
+        models = Models.get_models()
+    else:
+        models = Models.get_models_by_user_id(user.id)
+
+    tags_set = set()
+    for model in models:
+        if model.meta:
+            meta = model.meta.model_dump()
+            for tag in meta.get("tags", []):
+                tags_set.add((tag.get("name")))
+
+    tags = [tag for tag in tags_set]
+    tags.sort()
+    return tags
+
+
 ############################
 # CreateNewModel
 ############################

@@ -88,7 +143,7 @@ async def create_new_model(
             detail=ERROR_MESSAGES.MODEL_ID_TAKEN,
         )
 
-    if not validate_model_id(form_data.id):
+    if not is_valid_model_id(form_data.id):
         raise HTTPException(
             status_code=status.HTTP_400_BAD_REQUEST,
             detail=ERROR_MESSAGES.MODEL_ID_TOO_LONG,

@@ -111,8 +166,19 @@ async def create_new_model(
 
 
 @router.get("/export", response_model=list[ModelModel])
-async def export_models(user=Depends(get_admin_user)):
-    return Models.get_models()
+async def export_models(request: Request, user=Depends(get_verified_user)):
+    if user.role != "admin" and not has_permission(
+        user.id, "workspace.models_export", request.app.state.config.USER_PERMISSIONS
+    ):
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail=ERROR_MESSAGES.UNAUTHORIZED,
+        )
+
+    if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
+        return Models.get_models()
+    else:
+        return Models.get_models_by_user_id(user.id)
 
 
 ############################

@@ -126,8 +192,17 @@ class ModelsImportForm(BaseModel):
 
 @router.post("/import", response_model=bool)
 async def import_models(
-    user: str = Depends(get_admin_user), form_data: ModelsImportForm = (...)
+    request: Request,
+    user=Depends(get_verified_user),
+    form_data: ModelsImportForm = (...),
 ):
+    if user.role != "admin" and not has_permission(
+        user.id, "workspace.models_import", request.app.state.config.USER_PERMISSIONS
+    ):
+        raise HTTPException(
+            status_code=status.HTTP_401_UNAUTHORIZED,
+            detail=ERROR_MESSAGES.UNAUTHORIZED,
+        )
     try:
         data = form_data.models
         if isinstance(data, list):

@@ -135,7 +210,7 @@ async def import_models(
             # Here, you can add logic to validate model_data if needed
             model_id = model_data.get("id")
 
-            if model_id and validate_model_id(model_id):
+            if model_id and is_valid_model_id(model_id):
                 existing_model = Models.get_model_by_id(model_id)
                 if existing_model:
                     # Update existing model

@@ -181,6 +256,10 @@ async def sync_models(
 ###########################
 
 
+class ModelIdForm(BaseModel):
+    id: str
+
+
 # Note: We're not using the typical url path param here, but instead using a query parameter to allow '/' in the id
 @router.get("/model", response_model=Optional[ModelResponse])
 async def get_model_by_id(id: str, user=Depends(get_verified_user)):

@@ -227,6 +306,7 @@ async def get_model_profile_image(id: str, user=Depends(get_verified_user)):
                 )
             except Exception as e:
                 pass
+
             return FileResponse(f"{STATIC_DIR}/favicon.png")
         else:
             return FileResponse(f"{STATIC_DIR}/favicon.png")

@@ -274,12 +354,10 @@ async def toggle_model_by_id(id: str, user=Depends(get_verified_user)):
 
 @router.post("/model/update", response_model=Optional[ModelModel])
 async def update_model_by_id(
-    id: str,
     form_data: ModelForm,
     user=Depends(get_verified_user),
 ):
-    model = Models.get_model_by_id(id)
-
+    model = Models.get_model_by_id(form_data.id)
     if not model:
         raise HTTPException(
             status_code=status.HTTP_401_UNAUTHORIZED,

@@ -296,7 +374,7 @@ async def update_model_by_id(
             detail=ERROR_MESSAGES.ACCESS_PROHIBITED,
         )
 
-    model = Models.update_model_by_id(id, form_data)
+    model = Models.update_model_by_id(form_data.id, ModelForm(**form_data.model_dump()))
     return model
 
 

@@ -305,9 +383,9 @@ async def update_model_by_id(
 ############################
 
 
-@router.delete("/model/delete", response_model=bool)
-async def delete_model_by_id(id: str, user=Depends(get_verified_user)):
-    model = Models.get_model_by_id(id)
+@router.post("/model/delete", response_model=bool)
+async def delete_model_by_id(form_data: ModelIdForm, user=Depends(get_verified_user)):
+    model = Models.get_model_by_id(form_data.id)
     if not model:
         raise HTTPException(
             status_code=status.HTTP_401_UNAUTHORIZED,

@@ -324,7 +402,7 @@ async def delete_model_by_id(id: str, user=Depends(get_verified_user)):
             detail=ERROR_MESSAGES.UNAUTHORIZED,
         )
 
-    result = Models.delete_model_by_id(id)
+    result = Models.delete_model_by_id(form_data.id)
     return result
 
 
@@ -16,8 +16,8 @@ from urllib.parse import urlparse
 import aiohttp
 from aiocache import cached
 import requests
-from urllib.parse import quote
 
+from open_webui.utils.headers import include_user_info_headers
 from open_webui.models.chats import Chats
 from open_webui.models.users import UserModel
 

@@ -82,22 +82,17 @@ async def send_get_request(url, key=None, user: UserModel = None):
     timeout = aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST)
     try:
         async with aiohttp.ClientSession(timeout=timeout, trust_env=True) as session:
+            headers = {
+                "Content-Type": "application/json",
+                **({"Authorization": f"Bearer {key}"} if key else {}),
+            }
+
+            if ENABLE_FORWARD_USER_INFO_HEADERS and user:
+                headers = include_user_info_headers(headers, user)
+
             async with session.get(
                 url,
-                headers={
-                    "Content-Type": "application/json",
-                    **({"Authorization": f"Bearer {key}"} if key else {}),
-                    **(
-                        {
-                            "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
-                            "X-OpenWebUI-User-Id": user.id,
-                            "X-OpenWebUI-User-Email": user.email,
-                            "X-OpenWebUI-User-Role": user.role,
-                        }
-                        if ENABLE_FORWARD_USER_INFO_HEADERS and user
-                        else {}
-                    ),
-                },
+                headers=headers,
                 ssl=AIOHTTP_CLIENT_SESSION_SSL,
             ) as response:
                 return await response.json()

@@ -133,28 +128,20 @@ async def send_post_request(
         trust_env=True, timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT)
     )
 
+    headers = {
+        "Content-Type": "application/json",
+        **({"Authorization": f"Bearer {key}"} if key else {}),
+    }
+
+    if ENABLE_FORWARD_USER_INFO_HEADERS and user:
+        headers = include_user_info_headers(headers, user)
+    if metadata and metadata.get("chat_id"):
+        headers["X-OpenWebUI-Chat-Id"] = metadata.get("chat_id")
+
     r = await session.post(
         url,
         data=payload,
-        headers={
-            "Content-Type": "application/json",
-            **({"Authorization": f"Bearer {key}"} if key else {}),
-            **(
-                {
-                    "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
-                    "X-OpenWebUI-User-Id": user.id,
-                    "X-OpenWebUI-User-Email": user.email,
-                    "X-OpenWebUI-User-Role": user.role,
-                    **(
-                        {"X-OpenWebUI-Chat-Id": metadata.get("chat_id")}
-                        if metadata and metadata.get("chat_id")
-                        else {}
-                    ),
-                }
-                if ENABLE_FORWARD_USER_INFO_HEADERS and user
-                else {}
-            ),
-        },
+        headers=headers,
         ssl=AIOHTTP_CLIENT_SESSION_SSL,
     )
 

@@ -246,21 +233,16 @@ async def verify_connection(
         timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST),
     ) as session:
         try:
+            headers = {
+                **({"Authorization": f"Bearer {key}"} if key else {}),
+            }
+
+            if ENABLE_FORWARD_USER_INFO_HEADERS and user:
+                headers = include_user_info_headers(headers, user)
+
             async with session.get(
                 f"{url}/api/version",
-                headers={
-                    **({"Authorization": f"Bearer {key}"} if key else {}),
-                    **(
-                        {
-                            "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
-                            "X-OpenWebUI-User-Id": user.id,
-                            "X-OpenWebUI-User-Email": user.email,
-                            "X-OpenWebUI-User-Role": user.role,
-                        }
-                        if ENABLE_FORWARD_USER_INFO_HEADERS and user
-                        else {}
-                    ),
-                },
+                headers=headers,
                 ssl=AIOHTTP_CLIENT_SESSION_SSL,
             ) as r:
                 if r.status != 200:

@@ -469,22 +451,17 @@ async def get_ollama_tags(
 
     r = None
     try:
+        headers = {
+            **({"Authorization": f"Bearer {key}"} if key else {}),
+        }
+
+        if ENABLE_FORWARD_USER_INFO_HEADERS and user:
+            headers = include_user_info_headers(headers, user)
+
         r = requests.request(
             method="GET",
             url=f"{url}/api/tags",
-            headers={
-                **({"Authorization": f"Bearer {key}"} if key else {}),
-                **(
-                    {
-                        "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
-                        "X-OpenWebUI-User-Id": user.id,
-                        "X-OpenWebUI-User-Email": user.email,
-                        "X-OpenWebUI-User-Role": user.role,
-                    }
-                    if ENABLE_FORWARD_USER_INFO_HEADERS and user
-                    else {}
-                ),
-            },
+            headers=headers,
         )
         r.raise_for_status()
 

@@ -838,23 +815,18 @@ async def copy_model(
     key = get_api_key(url_idx, url, request.app.state.config.OLLAMA_API_CONFIGS)
 
     try:
+        headers = {
+            "Content-Type": "application/json",
+            **({"Authorization": f"Bearer {key}"} if key else {}),
+        }
+
+        if ENABLE_FORWARD_USER_INFO_HEADERS and user:
+            headers = include_user_info_headers(headers, user)
+
         r = requests.request(
             method="POST",
             url=f"{url}/api/copy",
-            headers={
-                "Content-Type": "application/json",
-                **({"Authorization": f"Bearer {key}"} if key else {}),
-                **(
-                    {
-                        "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
-                        "X-OpenWebUI-User-Id": user.id,
-                        "X-OpenWebUI-User-Email": user.email,
-                        "X-OpenWebUI-User-Role": user.role,
-                    }
-                    if ENABLE_FORWARD_USER_INFO_HEADERS and user
-                    else {}
-                ),
-            },
+            headers=headers,
             data=form_data.model_dump_json(exclude_none=True).encode(),
         )
         r.raise_for_status()

@@ -908,24 +880,19 @@ async def delete_model(
     key = get_api_key(url_idx, url, request.app.state.config.OLLAMA_API_CONFIGS)
 
     try:
+        headers = {
+            "Content-Type": "application/json",
+            **({"Authorization": f"Bearer {key}"} if key else {}),
+        }
+
+        if ENABLE_FORWARD_USER_INFO_HEADERS and user:
+            headers = include_user_info_headers(headers, user)
+
         r = requests.request(
             method="DELETE",
             url=f"{url}/api/delete",
-            data=json.dumps(form_data).encode(),
-            headers={
-                "Content-Type": "application/json",
-                **({"Authorization": f"Bearer {key}"} if key else {}),
-                **(
-                    {
-                        "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
-                        "X-OpenWebUI-User-Id": user.id,
-                        "X-OpenWebUI-User-Email": user.email,
-                        "X-OpenWebUI-User-Role": user.role,
-                    }
-                    if ENABLE_FORWARD_USER_INFO_HEADERS and user
-                    else {}
-                ),
-            },
+            headers=headers,
+            data=form_data.model_dump_json(exclude_none=True).encode(),
         )
         r.raise_for_status()
 

@@ -973,24 +940,19 @@ async def show_model_info(
     key = get_api_key(url_idx, url, request.app.state.config.OLLAMA_API_CONFIGS)
 
     try:
+        headers = {
+            "Content-Type": "application/json",
+            **({"Authorization": f"Bearer {key}"} if key else {}),
+        }
+
+        if ENABLE_FORWARD_USER_INFO_HEADERS and user:
+            headers = include_user_info_headers(headers, user)
+
         r = requests.request(
             method="POST",
             url=f"{url}/api/show",
-            headers={
-                "Content-Type": "application/json",
-                **({"Authorization": f"Bearer {key}"} if key else {}),
-                **(
-                    {
-                        "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
-                        "X-OpenWebUI-User-Id": user.id,
-                        "X-OpenWebUI-User-Email": user.email,
-                        "X-OpenWebUI-User-Role": user.role,
-                    }
-                    if ENABLE_FORWARD_USER_INFO_HEADERS and user
-                    else {}
-                ),
-            },
-            data=json.dumps(form_data).encode(),
+            headers=headers,
+            data=form_data.model_dump_json(exclude_none=True).encode(),
         )
         r.raise_for_status()
 

@@ -1064,23 +1026,18 @@ async def embed(
     form_data.model = form_data.model.replace(f"{prefix_id}.", "")
 
     try:
+        headers = {
+            "Content-Type": "application/json",
+            **({"Authorization": f"Bearer {key}"} if key else {}),
+        }
+
+        if ENABLE_FORWARD_USER_INFO_HEADERS and user:
+            headers = include_user_info_headers(headers, user)
+
         r = requests.request(
             method="POST",
             url=f"{url}/api/embed",
-            headers={
-                "Content-Type": "application/json",
-                **({"Authorization": f"Bearer {key}"} if key else {}),
-                **(
-                    {
-                        "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
-                        "X-OpenWebUI-User-Id": user.id,
-                        "X-OpenWebUI-User-Email": user.email,
-                        "X-OpenWebUI-User-Role": user.role,
-                    }
-                    if ENABLE_FORWARD_USER_INFO_HEADERS and user
-                    else {}
-                ),
-            },
+            headers=headers,
             data=form_data.model_dump_json(exclude_none=True).encode(),
         )
         r.raise_for_status()

@@ -1151,23 +1108,18 @@ async def embeddings(
     form_data.model = form_data.model.replace(f"{prefix_id}.", "")
 
     try:
+        headers = {
+            "Content-Type": "application/json",
+            **({"Authorization": f"Bearer {key}"} if key else {}),
+        }
+
+        if ENABLE_FORWARD_USER_INFO_HEADERS and user:
+            headers = include_user_info_headers(headers, user)
+
         r = requests.request(
             method="POST",
             url=f"{url}/api/embeddings",
-            headers={
-                "Content-Type": "application/json",
-                **({"Authorization": f"Bearer {key}"} if key else {}),
-                **(
-                    {
-                        "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
-                        "X-OpenWebUI-User-Id": user.id,
-                        "X-OpenWebUI-User-Email": user.email,
-                        "X-OpenWebUI-User-Role": user.role,
-                    }
-                    if ENABLE_FORWARD_USER_INFO_HEADERS and user
-                    else {}
-                ),
-            },
+            headers=headers,
            data=form_data.model_dump_json(exclude_none=True).encode(),
         )
         r.raise_for_status()
@@ -7,7 +7,6 @@ from typing import Optional
 import aiohttp
 from aiocache import cached
 import requests
-from urllib.parse import quote
 
 from azure.identity import DefaultAzureCredential, get_bearer_token_provider
 

@@ -45,10 +44,12 @@ from open_webui.utils.payload import (
 )
 from open_webui.utils.misc import (
     convert_logit_bias_input_to_json,
+    stream_chunks_handler,
 )
 
 from open_webui.utils.auth import get_admin_user, get_verified_user
 from open_webui.utils.access_control import has_access
+from open_webui.utils.headers import include_user_info_headers
 
 
 log = logging.getLogger(__name__)

@@ -66,21 +67,16 @@ async def send_get_request(url, key=None, user: UserModel = None):
     timeout = aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST)
     try:
         async with aiohttp.ClientSession(timeout=timeout, trust_env=True) as session:
+            headers = {
+                **({"Authorization": f"Bearer {key}"} if key else {}),
+            }
+
+            if ENABLE_FORWARD_USER_INFO_HEADERS and user:
+                headers = include_user_info_headers(headers, user)
+
             async with session.get(
                 url,
-                headers={
-                    **({"Authorization": f"Bearer {key}"} if key else {}),
-                    **(
-                        {
-                            "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
-                            "X-OpenWebUI-User-Id": user.id,
-                            "X-OpenWebUI-User-Email": user.email,
-                            "X-OpenWebUI-User-Role": user.role,
-                        }
-                        if ENABLE_FORWARD_USER_INFO_HEADERS and user
-                        else {}
-                    ),
-                },
+                headers=headers,
                 ssl=AIOHTTP_CLIENT_SESSION_SSL,
             ) as response:
                 return await response.json()

@@ -140,23 +136,13 @@ async def get_headers_and_cookies(
             if "openrouter.ai" in url
             else {}
         ),
-        **(
-            {
-                "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
-                "X-OpenWebUI-User-Id": user.id,
-                "X-OpenWebUI-User-Email": user.email,
-                "X-OpenWebUI-User-Role": user.role,
-                **(
-                    {"X-OpenWebUI-Chat-Id": metadata.get("chat_id")}
-                    if metadata and metadata.get("chat_id")
-                    else {}
-                ),
-            }
-            if ENABLE_FORWARD_USER_INFO_HEADERS
-            else {}
-        ),
     }
 
+    if ENABLE_FORWARD_USER_INFO_HEADERS and user:
+        headers = include_user_info_headers(headers, user)
+        if metadata and metadata.get("chat_id"):
+            headers["X-OpenWebUI-Chat-Id"] = metadata.get("chat_id")
+
     token = None
     auth_type = config.get("auth_type")
 

@@ -501,50 +487,55 @@ async def get_all_models(request: Request, user: UserModel) -> dict[str, list]:
                 return response
         return None
 
-    def merge_models_lists(model_lists):
+    def is_supported_openai_models(model_id):
+        if any(
+            name in model_id
+            for name in [
+                "babbage",
+                "dall-e",
+                "davinci",
+                "embedding",
+                "tts",
+                "whisper",
+            ]
+        ):
+            return False
+        return True
+
+    def get_merged_models(model_lists):
         log.debug(f"merge_models_lists {model_lists}")
-        merged_list = []
+        models = {}
 
-        for idx, models in enumerate(model_lists):
-            if models is not None and "error" not in models:
+        for idx, model_list in enumerate(model_lists):
+            if model_list is not None and "error" not in model_list:
+                for model in model_list:
+                    model_id = model.get("id") or model.get("name")
 
-                merged_list.extend(
-                    [
-                        {
+                    if (
+                        "api.openai.com"
+                        in request.app.state.config.OPENAI_API_BASE_URLS[idx]
+                        and not is_supported_openai_models(model_id)
+                    ):
+                        # Skip unwanted OpenAI models
+                        continue
+
+                    if model_id and model_id not in models:
+                        models[model_id] = {
                             **model,
-                            "name": model.get("name", model["id"]),
+                            "name": model.get("name", model_id),
                             "owned_by": "openai",
                             "openai": model,
                             "connection_type": model.get("connection_type", "external"),
                             "urlIdx": idx,
                         }
-                        for model in models
-                        if (model.get("id") or model.get("name"))
-                        and (
-                            "api.openai.com"
-                            not in request.app.state.config.OPENAI_API_BASE_URLS[idx]
-                            or not any(
-                                name in model["id"]
-                                for name in [
-                                    "babbage",
-                                    "dall-e",
-                                    "davinci",
-                                    "embedding",
-                                    "tts",
-                                    "whisper",
-                                ]
-                            )
-                        )
-                    ]
-                )
 
-        return merged_list
+        return models
 
-    models = {"data": merge_models_lists(map(extract_data, responses))}
+    models = get_merged_models(map(extract_data, responses))
     log.debug(f"models: {models}")
 
-    request.app.state.OPENAI_MODELS = {model["id"]: model for model in models["data"]}
-    return models
+    request.app.state.OPENAI_MODELS = models
+    return {"data": list(models.values())}
 
 
 @router.get("/models")

@@ -757,6 +748,7 @@ def get_azure_allowed_params(api_version: str) -> set[str]:
         "response_format",
         "seed",
         "max_completion_tokens",
+        "reasoning_effort",
     }
 
     try:

@@ -947,7 +939,7 @@ async def generate_chat_completion(
         if "text/event-stream" in r.headers.get("Content-Type", ""):
             streaming = True
             return StreamingResponse(
-                r.content,
+                stream_chunks_handler(r.content),
                 status_code=r.status,
                 headers=dict(r.headers),
                 background=BackgroundTask(
@@ -48,8 +48,15 @@ async def get_prompt_list(user=Depends(get_verified_user)):
 async def create_new_prompt(
     request: Request, form_data: PromptForm, user=Depends(get_verified_user)
 ):
-    if user.role != "admin" and not has_permission(
-        user.id, "workspace.prompts", request.app.state.config.USER_PERMISSIONS
+    if user.role != "admin" and not (
+        has_permission(
+            user.id, "workspace.prompts", request.app.state.config.USER_PERMISSIONS
+        )
+        or has_permission(
+            user.id,
+            "workspace.prompts_import",
+            request.app.state.config.USER_PERMISSIONS,
+        )
     ):
         raise HTTPException(
             status_code=status.HTTP_401_UNAUTHORIZED,
@@ -32,7 +32,7 @@ from langchain.text_splitter import RecursiveCharacterTextSplitter, TokenTextSpl
from langchain_text_splitters import MarkdownHeaderTextSplitter
from langchain_core.documents import Document

from open_webui.models.files import FileModel, Files
from open_webui.models.files import FileModel, FileUpdateForm, Files
from open_webui.models.knowledge import Knowledges
from open_webui.storage.provider import Storage

@@ -64,6 +64,7 @@ from open_webui.retrieval.web.serply import search_serply
from open_webui.retrieval.web.serpstack import search_serpstack
from open_webui.retrieval.web.tavily import search_tavily
from open_webui.retrieval.web.bing import search_bing
from open_webui.retrieval.web.azure import search_azure
from open_webui.retrieval.web.exa import search_exa
from open_webui.retrieval.web.perplexity import search_perplexity
from open_webui.retrieval.web.sougou import search_sougou

@@ -430,6 +431,7 @@ async def get_rag_config(request: Request, user=Depends(get_admin_user)):
        "RAG_FULL_CONTEXT": request.app.state.config.RAG_FULL_CONTEXT,
        # Hybrid search settings
        "ENABLE_RAG_HYBRID_SEARCH": request.app.state.config.ENABLE_RAG_HYBRID_SEARCH,
        "ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS": request.app.state.config.ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS,
        "TOP_K_RERANKER": request.app.state.config.TOP_K_RERANKER,
        "RELEVANCE_THRESHOLD": request.app.state.config.RELEVANCE_THRESHOLD,
        "HYBRID_BM25_WEIGHT": request.app.state.config.HYBRID_BM25_WEIGHT,

@@ -465,6 +467,7 @@ async def get_rag_config(request: Request, user=Depends(get_admin_user)):
        "DOCLING_PICTURE_DESCRIPTION_API": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API,
        "DOCUMENT_INTELLIGENCE_ENDPOINT": request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT,
        "DOCUMENT_INTELLIGENCE_KEY": request.app.state.config.DOCUMENT_INTELLIGENCE_KEY,
        "MISTRAL_OCR_API_BASE_URL": request.app.state.config.MISTRAL_OCR_API_BASE_URL,
        "MISTRAL_OCR_API_KEY": request.app.state.config.MISTRAL_OCR_API_KEY,
        # MinerU settings
        "MINERU_API_MODE": request.app.state.config.MINERU_API_MODE,

@@ -527,6 +530,7 @@ async def get_rag_config(request: Request, user=Depends(get_admin_user)):
        "PERPLEXITY_API_KEY": request.app.state.config.PERPLEXITY_API_KEY,
        "PERPLEXITY_MODEL": request.app.state.config.PERPLEXITY_MODEL,
        "PERPLEXITY_SEARCH_CONTEXT_USAGE": request.app.state.config.PERPLEXITY_SEARCH_CONTEXT_USAGE,
        "PERPLEXITY_SEARCH_API_URL": request.app.state.config.PERPLEXITY_SEARCH_API_URL,
        "SOUGOU_API_SID": request.app.state.config.SOUGOU_API_SID,
        "SOUGOU_API_SK": request.app.state.config.SOUGOU_API_SK,
        "WEB_LOADER_ENGINE": request.app.state.config.WEB_LOADER_ENGINE,

@@ -584,6 +588,7 @@ class WebConfig(BaseModel):
    PERPLEXITY_API_KEY: Optional[str] = None
    PERPLEXITY_MODEL: Optional[str] = None
    PERPLEXITY_SEARCH_CONTEXT_USAGE: Optional[str] = None
    PERPLEXITY_SEARCH_API_URL: Optional[str] = None
    SOUGOU_API_SID: Optional[str] = None
    SOUGOU_API_SK: Optional[str] = None
    WEB_LOADER_ENGINE: Optional[str] = None

@@ -611,6 +616,7 @@ class ConfigForm(BaseModel):

    # Hybrid search settings
    ENABLE_RAG_HYBRID_SEARCH: Optional[bool] = None
    ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS: Optional[bool] = None
    TOP_K_RERANKER: Optional[int] = None
    RELEVANCE_THRESHOLD: Optional[float] = None
    HYBRID_BM25_WEIGHT: Optional[float] = None

@@ -650,6 +656,7 @@ class ConfigForm(BaseModel):
    DOCLING_PICTURE_DESCRIPTION_API: Optional[dict] = None
    DOCUMENT_INTELLIGENCE_ENDPOINT: Optional[str] = None
    DOCUMENT_INTELLIGENCE_KEY: Optional[str] = None
    MISTRAL_OCR_API_BASE_URL: Optional[str] = None
    MISTRAL_OCR_API_KEY: Optional[str] = None

    # MinerU settings

@@ -716,6 +723,11 @@ async def update_rag_config(
        if form_data.ENABLE_RAG_HYBRID_SEARCH is not None
        else request.app.state.config.ENABLE_RAG_HYBRID_SEARCH
    )
    request.app.state.config.ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS = (
        form_data.ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS
        if form_data.ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS is not None
        else request.app.state.config.ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS
    )

    request.app.state.config.TOP_K_RERANKER = (
        form_data.TOP_K_RERANKER

@@ -891,6 +903,12 @@ async def update_rag_config(
        if form_data.DOCUMENT_INTELLIGENCE_KEY is not None
        else request.app.state.config.DOCUMENT_INTELLIGENCE_KEY
    )

    request.app.state.config.MISTRAL_OCR_API_BASE_URL = (
        form_data.MISTRAL_OCR_API_BASE_URL
        if form_data.MISTRAL_OCR_API_BASE_URL is not None
        else request.app.state.config.MISTRAL_OCR_API_BASE_URL
    )
    request.app.state.config.MISTRAL_OCR_API_KEY = (
        form_data.MISTRAL_OCR_API_KEY
        if form_data.MISTRAL_OCR_API_KEY is not None

@@ -1100,6 +1118,9 @@ async def update_rag_config(
        request.app.state.config.PERPLEXITY_SEARCH_CONTEXT_USAGE = (
            form_data.web.PERPLEXITY_SEARCH_CONTEXT_USAGE
        )
        request.app.state.config.PERPLEXITY_SEARCH_API_URL = (
            form_data.web.PERPLEXITY_SEARCH_API_URL
        )
        request.app.state.config.SOUGOU_API_SID = form_data.web.SOUGOU_API_SID
        request.app.state.config.SOUGOU_API_SK = form_data.web.SOUGOU_API_SK

@@ -1182,6 +1203,7 @@ async def update_rag_config(
        "DOCLING_PICTURE_DESCRIPTION_API": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API,
        "DOCUMENT_INTELLIGENCE_ENDPOINT": request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT,
        "DOCUMENT_INTELLIGENCE_KEY": request.app.state.config.DOCUMENT_INTELLIGENCE_KEY,
        "MISTRAL_OCR_API_BASE_URL": request.app.state.config.MISTRAL_OCR_API_BASE_URL,
        "MISTRAL_OCR_API_KEY": request.app.state.config.MISTRAL_OCR_API_KEY,
        # MinerU settings
        "MINERU_API_MODE": request.app.state.config.MINERU_API_MODE,

@@ -1244,6 +1266,7 @@ async def update_rag_config(
        "PERPLEXITY_API_KEY": request.app.state.config.PERPLEXITY_API_KEY,
        "PERPLEXITY_MODEL": request.app.state.config.PERPLEXITY_MODEL,
        "PERPLEXITY_SEARCH_CONTEXT_USAGE": request.app.state.config.PERPLEXITY_SEARCH_CONTEXT_USAGE,
        "PERPLEXITY_SEARCH_API_URL": request.app.state.config.PERPLEXITY_SEARCH_API_URL,
        "SOUGOU_API_SID": request.app.state.config.SOUGOU_API_SID,
        "SOUGOU_API_SK": request.app.state.config.SOUGOU_API_SK,
        "WEB_LOADER_ENGINE": request.app.state.config.WEB_LOADER_ENGINE,

@@ -1444,10 +1467,13 @@ def save_docs_to_vector_db(
                ),
            )

            embeddings = embedding_function(
                list(map(lambda x: x.replace("\n", " "), texts)),
                prefix=RAG_EMBEDDING_CONTENT_PREFIX,
                user=user,
            # Run async embedding in sync context
            embeddings = asyncio.run(
                embedding_function(
                    list(map(lambda x: x.replace("\n", " "), texts)),
                    prefix=RAG_EMBEDDING_CONTENT_PREFIX,
                    user=user,
                )
            )
            log.info(f"embeddings generated {len(embeddings)} for {len(texts)} items")

@@ -1565,6 +1591,7 @@ def process_file(
                file_path = Storage.get_file(file_path)
                loader = Loader(
                    engine=request.app.state.config.CONTENT_EXTRACTION_ENGINE,
                    user=user,
                    DATALAB_MARKER_API_KEY=request.app.state.config.DATALAB_MARKER_API_KEY,
                    DATALAB_MARKER_API_BASE_URL=request.app.state.config.DATALAB_MARKER_API_BASE_URL,
                    DATALAB_MARKER_ADDITIONAL_CONFIG=request.app.state.config.DATALAB_MARKER_ADDITIONAL_CONFIG,

@@ -1597,6 +1624,7 @@ def process_file(
                    PDF_EXTRACT_IMAGES=request.app.state.config.PDF_EXTRACT_IMAGES,
                    DOCUMENT_INTELLIGENCE_ENDPOINT=request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT,
                    DOCUMENT_INTELLIGENCE_KEY=request.app.state.config.DOCUMENT_INTELLIGENCE_KEY,
                    MISTRAL_OCR_API_BASE_URL=request.app.state.config.MISTRAL_OCR_API_BASE_URL,
                    MISTRAL_OCR_API_KEY=request.app.state.config.MISTRAL_OCR_API_KEY,
                    MINERU_API_MODE=request.app.state.config.MINERU_API_MODE,
                    MINERU_API_URL=request.app.state.config.MINERU_API_URL,

@@ -1800,7 +1828,9 @@ def process_web(
    )


def search_web(request: Request, engine: str, query: str) -> list[SearchResult]:
def search_web(
    request: Request, engine: str, query: str, user=None
) -> list[SearchResult]:
    """Search the web using a search engine and return the results as a list of SearchResult objects.
    Will look for a search engine API key in environment variables in the following order:
    - SEARXNG_QUERY_URL

@@ -1839,6 +1869,8 @@ def search_web(request: Request, engine: str, query: str) -> list[SearchResult]:
                query,
                request.app.state.config.WEB_SEARCH_RESULT_COUNT,
                request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST,
                request.app.state.config.PERPLEXITY_SEARCH_API_URL,
                user,
            )
        else:
            raise Exception("No PERPLEXITY_API_KEY found in environment variables")

@@ -1875,6 +1907,7 @@ def search_web(request: Request, engine: str, query: str) -> list[SearchResult]:
                query,
                request.app.state.config.WEB_SEARCH_RESULT_COUNT,
                request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST,
                referer=request.app.state.config.WEBUI_URL,
            )
        else:
            raise Exception(

@@ -2015,6 +2048,24 @@ def search_web(request: Request, engine: str, query: str) -> list[SearchResult]:
            request.app.state.config.WEB_SEARCH_RESULT_COUNT,
            request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST,
        )
    elif engine == "azure":
        if (
            request.app.state.config.AZURE_AI_SEARCH_API_KEY
            and request.app.state.config.AZURE_AI_SEARCH_ENDPOINT
            and request.app.state.config.AZURE_AI_SEARCH_INDEX_NAME
        ):
            return search_azure(
                request.app.state.config.AZURE_AI_SEARCH_API_KEY,
                request.app.state.config.AZURE_AI_SEARCH_ENDPOINT,
                request.app.state.config.AZURE_AI_SEARCH_INDEX_NAME,
                query,
                request.app.state.config.WEB_SEARCH_RESULT_COUNT,
                request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST,
            )
        else:
            raise Exception(
                "AZURE_AI_SEARCH_API_KEY, AZURE_AI_SEARCH_ENDPOINT, and AZURE_AI_SEARCH_INDEX_NAME are required for Azure AI Search"
            )
    elif engine == "exa":
        return search_exa(
            request.app.state.config.EXA_API_KEY,

@@ -2057,11 +2108,13 @@ def search_web(request: Request, engine: str, query: str) -> list[SearchResult]:
        )
    elif engine == "external":
        return search_external(
            request,
            request.app.state.config.EXTERNAL_WEB_SEARCH_URL,
            request.app.state.config.EXTERNAL_WEB_SEARCH_API_KEY,
            query,
            request.app.state.config.WEB_SEARCH_RESULT_COUNT,
            request.app.state.config.WEB_SEARCH_DOMAIN_FILTER_LIST,
            user=user,
        )
    else:
        raise Exception("No search engine API key found in environment variables")

@@ -2086,6 +2139,7 @@ async def process_web_search(
                request,
                request.app.state.config.WEB_SEARCH_ENGINE,
                query,
                user,
            )
            for query in form_data.queries
        ]

@@ -2211,7 +2265,7 @@ class QueryDocForm(BaseModel):


@router.post("/query/doc")
def query_doc_handler(
async def query_doc_handler(
    request: Request,
    form_data: QueryDocForm,
    user=Depends(get_verified_user),

@@ -2224,7 +2278,7 @@ def query_doc_handler(
            collection_results[form_data.collection_name] = VECTOR_DB_CLIENT.get(
                collection_name=form_data.collection_name
            )
            return query_doc_with_hybrid_search(
            return await query_doc_with_hybrid_search(
                collection_name=form_data.collection_name,
                collection_result=collection_results[form_data.collection_name],
                query=form_data.query,

@@ -2234,8 +2288,8 @@ def query_doc_handler(
                k=form_data.k if form_data.k else request.app.state.config.TOP_K,
                reranking_function=(
                    (
                        lambda sentences: request.app.state.RERANKING_FUNCTION(
                            sentences, user=user
                        lambda query, documents: request.app.state.RERANKING_FUNCTION(
                            query, documents, user=user
                        )
                    )
                    if request.app.state.RERANKING_FUNCTION

@@ -2256,11 +2310,12 @@ def query_doc_handler(
                user=user,
            )
        else:
            query_embedding = await request.app.state.EMBEDDING_FUNCTION(
                form_data.query, prefix=RAG_EMBEDDING_QUERY_PREFIX, user=user
            )
            return query_doc(
                collection_name=form_data.collection_name,
                query_embedding=request.app.state.EMBEDDING_FUNCTION(
                    form_data.query, prefix=RAG_EMBEDDING_QUERY_PREFIX, user=user
                ),
                query_embedding=query_embedding,
                k=form_data.k if form_data.k else request.app.state.config.TOP_K,
                user=user,
            )

@@ -2280,10 +2335,11 @@ class QueryCollectionsForm(BaseModel):
    r: Optional[float] = None
    hybrid: Optional[bool] = None
    hybrid_bm25_weight: Optional[float] = None
    enable_enriched_texts: Optional[bool] = None


@router.post("/query/collection")
def query_collection_handler(
async def query_collection_handler(
    request: Request,
    form_data: QueryCollectionsForm,
    user=Depends(get_verified_user),

@@ -2292,7 +2348,7 @@ def query_collection_handler(
        if request.app.state.config.ENABLE_RAG_HYBRID_SEARCH and (
            form_data.hybrid is None or form_data.hybrid
        ):
            return query_collection_with_hybrid_search(
            return await query_collection_with_hybrid_search(
                collection_names=form_data.collection_names,
                queries=[form_data.query],
                embedding_function=lambda query, prefix: request.app.state.EMBEDDING_FUNCTION(

@@ -2301,8 +2357,8 @@ def query_collection_handler(
                k=form_data.k if form_data.k else request.app.state.config.TOP_K,
                reranking_function=(
                    (
                        lambda sentences: request.app.state.RERANKING_FUNCTION(
                            sentences, user=user
                        lambda query, documents: request.app.state.RERANKING_FUNCTION(
                            query, documents, user=user
                        )
                    )
                    if request.app.state.RERANKING_FUNCTION

@@ -2320,9 +2376,14 @@ def query_collection_handler(
                    if form_data.hybrid_bm25_weight
                    else request.app.state.config.HYBRID_BM25_WEIGHT
                ),
                enable_enriched_texts=(
                    form_data.enable_enriched_texts
                    if form_data.enable_enriched_texts is not None
                    else request.app.state.config.ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS
                ),
            )
        else:
            return query_collection(
            return await query_collection(
                collection_names=form_data.collection_names,
                queries=[form_data.query],
                embedding_function=lambda query, prefix: request.app.state.EMBEDDING_FUNCTION(

@@ -2404,7 +2465,7 @@ if ENV == "dev":
    @router.get("/ef/{text}")
    async def get_embeddings(request: Request, text: Optional[str] = "Hello World!"):
        return {
            "result": request.app.state.EMBEDDING_FUNCTION(
            "result": await request.app.state.EMBEDDING_FUNCTION(
                text, prefix=RAG_EMBEDDING_QUERY_PREFIX
            )
        }

@@ -2435,16 +2496,19 @@ def process_files_batch(
    """
    Process a batch of files and save them to the vector database.
    """
    results: List[BatchProcessFilesResult] = []
    errors: List[BatchProcessFilesResult] = []

    collection_name = form_data.collection_name

    file_results: List[BatchProcessFilesResult] = []
    file_errors: List[BatchProcessFilesResult] = []
    file_updates: List[FileUpdateForm] = []

    # Prepare all documents first
    all_docs: List[Document] = []

    for file in form_data.files:
        try:
            text_content = file.data.get("content", "")

            docs: List[Document] = [
                Document(
                    page_content=text_content.replace("<br/>", "\n"),

@@ -2458,16 +2522,21 @@ def process_files_batch(
                )
            ]

            hash = calculate_sha256_string(text_content)
            Files.update_file_hash_by_id(file.id, hash)
            Files.update_file_data_by_id(file.id, {"content": text_content})

            all_docs.extend(docs)
            results.append(BatchProcessFilesResult(file_id=file.id, status="prepared"))

            file_updates.append(
                FileUpdateForm(
                    hash=calculate_sha256_string(text_content),
                    data={"content": text_content},
                )
            )
            file_results.append(
                BatchProcessFilesResult(file_id=file.id, status="prepared")
            )

        except Exception as e:
            log.error(f"process_files_batch: Error processing file {file.id}: {str(e)}")
            errors.append(
            file_errors.append(
                BatchProcessFilesResult(file_id=file.id, status="failed", error=str(e))
            )

@@ -2483,20 +2552,18 @@ def process_files_batch(
        )

        # Update all files with collection name
        for result in results:
            Files.update_file_metadata_by_id(
                result.file_id, {"collection_name": collection_name}
            )
            result.status = "completed"
        for file_update, file_result in zip(file_updates, file_results):
            Files.update_file_by_id(id=file_result.file_id, form_data=file_update)
            file_result.status = "completed"

    except Exception as e:
        log.error(
            f"process_files_batch: Error saving documents to vector DB: {str(e)}"
        )
        for result in results:
            result.status = "failed"
            errors.append(
                BatchProcessFilesResult(file_id=result.file_id, error=str(e))
        for file_result in file_results:
            file_result.status = "failed"
            file_errors.append(
                BatchProcessFilesResult(file_id=file_result.file_id, error=str(e))
            )

    return BatchProcessFilesResponse(results=results, errors=errors)
    return BatchProcessFilesResponse(results=file_results, errors=file_errors)
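The `save_docs_to_vector_db` hunk above wraps the now-async embedding function in `asyncio.run()` so it can still be called from synchronous code. The pattern can be sketched in isolation as follows; the embedding function here is a trivial stand-in, not Open WebUI's real one.

```python
import asyncio

# Sketch of the "run async embedding in sync context" pattern from the diff:
# a synchronous caller drives an async embedding function via asyncio.run().
async def embedding_function(texts, prefix=None, user=None):
    # Stand-in: pretend each embedding is just the text length.
    # A real implementation would await an HTTP call or model runtime here.
    return [[float(len(t))] for t in texts]

def save_docs(texts):
    # asyncio.run() creates an event loop, runs the coroutine to completion,
    # and closes the loop. It must NOT be called from code that is already
    # running inside an event loop (it raises RuntimeError in that case).
    return asyncio.run(embedding_function(texts))

print(save_docs(["hello", "world!"]))  # [[5.0], [6.0]]
```

This is why the endpoint handlers in the same diff were converted to `async def` and use `await` directly instead: inside FastAPI's running loop, `asyncio.run()` would fail.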
@@ -256,15 +256,16 @@ def get_scim_auth(
        )

    # Check if SCIM is enabled
    scim_enabled = getattr(request.app.state, "SCIM_ENABLED", False)
    enable_scim = getattr(request.app.state, "ENABLE_SCIM", False)
    log.info(
        f"SCIM auth check - raw SCIM_ENABLED: {scim_enabled}, type: {type(scim_enabled)}"
        f"SCIM auth check - raw ENABLE_SCIM: {enable_scim}, type: {type(enable_scim)}"
    )

    # Handle both PersistentConfig and direct value
    if hasattr(scim_enabled, "value"):
        scim_enabled = scim_enabled.value
    log.info(f"SCIM enabled status after conversion: {scim_enabled}")
    if not scim_enabled:
    if hasattr(enable_scim, "value"):
        enable_scim = enable_scim.value

    if not enable_scim:
        raise HTTPException(
            status_code=status.HTTP_403_FORBIDDEN,
            detail="SCIM is not enabled",

@@ -348,8 +349,10 @@ def user_to_scim(user: UserModel, request: Request) -> SCIMUser:

def group_to_scim(group: GroupModel, request: Request) -> SCIMGroup:
    """Convert internal Group model to SCIM Group"""
    member_ids = Groups.get_group_user_ids_by_id(group.id)
    members = []
    for user_id in group.user_ids:

    for user_id in member_ids:
        user = Users.get_user_by_id(user_id)
        if user:
            members.append(

@@ -795,9 +798,11 @@ async def create_group(
    update_form = GroupUpdateForm(
        name=new_group.name,
        description=new_group.description,
        user_ids=member_ids,
    )

    Groups.update_group_by_id(new_group.id, update_form)
    Groups.set_group_user_ids_by_id(new_group.id, member_ids)

    new_group = Groups.get_group_by_id(new_group.id)

    return group_to_scim(new_group, request)

@@ -829,7 +834,7 @@ async def update_group(
    # Handle members if provided
    if group_data.members is not None:
        member_ids = [member.value for member in group_data.members]
        update_form.user_ids = member_ids
        Groups.set_group_user_ids_by_id(group_id, member_ids)

    # Update group
    updated_group = Groups.update_group_by_id(group_id, update_form)

@@ -862,7 +867,6 @@ async def patch_group(
    update_form = GroupUpdateForm(
        name=group.name,
        description=group.description,
        user_ids=group.user_ids.copy() if group.user_ids else [],
    )

    for operation in patch_data.Operations:

@@ -875,21 +879,22 @@ async def patch_group(
                update_form.name = value
            elif path == "members":
                # Replace all members
                update_form.user_ids = [member["value"] for member in value]
                Groups.set_group_user_ids_by_id(
                    group_id, [member["value"] for member in value]
                )

        elif op == "add":
            if path == "members":
                # Add members
                if isinstance(value, list):
                    for member in value:
                        if isinstance(member, dict) and "value" in member:
                            if member["value"] not in update_form.user_ids:
                                update_form.user_ids.append(member["value"])
                            Groups.add_users_to_group(group_id, [member["value"]])
        elif op == "remove":
            if path and path.startswith("members[value eq"):
                # Remove specific member
                member_id = path.split('"')[1]
                if member_id in update_form.user_ids:
                    update_form.user_ids.remove(member_id)
                Groups.remove_users_from_group(group_id, [member_id])

    # Update group
    updated_group = Groups.update_group_by_id(group_id, update_form)
@@ -33,6 +33,7 @@ from open_webui.config import (
    DEFAULT_AUTOCOMPLETE_GENERATION_PROMPT_TEMPLATE,
    DEFAULT_EMOJI_GENERATION_PROMPT_TEMPLATE,
    DEFAULT_MOA_GENERATION_PROMPT_TEMPLATE,
    DEFAULT_VOICE_MODE_PROMPT_TEMPLATE,
)
from open_webui.env import SRC_LOG_LEVELS

@@ -68,6 +69,7 @@ async def get_task_config(request: Request, user=Depends(get_verified_user)):
        "ENABLE_RETRIEVAL_QUERY_GENERATION": request.app.state.config.ENABLE_RETRIEVAL_QUERY_GENERATION,
        "QUERY_GENERATION_PROMPT_TEMPLATE": request.app.state.config.QUERY_GENERATION_PROMPT_TEMPLATE,
        "TOOLS_FUNCTION_CALLING_PROMPT_TEMPLATE": request.app.state.config.TOOLS_FUNCTION_CALLING_PROMPT_TEMPLATE,
        "VOICE_MODE_PROMPT_TEMPLATE": request.app.state.config.VOICE_MODE_PROMPT_TEMPLATE,
    }


@@ -87,6 +89,7 @@ class TaskConfigForm(BaseModel):
    ENABLE_RETRIEVAL_QUERY_GENERATION: bool
    QUERY_GENERATION_PROMPT_TEMPLATE: str
    TOOLS_FUNCTION_CALLING_PROMPT_TEMPLATE: str
    VOICE_MODE_PROMPT_TEMPLATE: Optional[str]


@router.post("/config/update")

@@ -136,6 +139,10 @@ async def update_task_config(
        form_data.TOOLS_FUNCTION_CALLING_PROMPT_TEMPLATE
    )

    request.app.state.config.VOICE_MODE_PROMPT_TEMPLATE = (
        form_data.VOICE_MODE_PROMPT_TEMPLATE
    )

    return {
        "TASK_MODEL": request.app.state.config.TASK_MODEL,
        "TASK_MODEL_EXTERNAL": request.app.state.config.TASK_MODEL_EXTERNAL,

@@ -152,6 +159,7 @@ async def update_task_config(
        "ENABLE_RETRIEVAL_QUERY_GENERATION": request.app.state.config.ENABLE_RETRIEVAL_QUERY_GENERATION,
        "QUERY_GENERATION_PROMPT_TEMPLATE": request.app.state.config.QUERY_GENERATION_PROMPT_TEMPLATE,
        "TOOLS_FUNCTION_CALLING_PROMPT_TEMPLATE": request.app.state.config.TOOLS_FUNCTION_CALLING_PROMPT_TEMPLATE,
        "VOICE_MODE_PROMPT_TEMPLATE": request.app.state.config.VOICE_MODE_PROMPT_TEMPLATE,
    }
@@ -247,9 +247,19 @@ async def load_tool_from_url(


@router.get("/export", response_model=list[ToolModel])
async def export_tools(user=Depends(get_admin_user)):
    tools = Tools.get_tools()
    return tools
async def export_tools(request: Request, user=Depends(get_verified_user)):
    if user.role != "admin" and not has_permission(
        user.id, "workspace.tools_export", request.app.state.config.USER_PERMISSIONS
    ):
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail=ERROR_MESSAGES.UNAUTHORIZED,
        )

    if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
        return Tools.get_tools()
    else:
        return Tools.get_tools_by_user_id(user.id, "read")


############################

@@ -263,8 +273,13 @@ async def create_new_tools(
    form_data: ToolForm,
    user=Depends(get_verified_user),
):
    if user.role != "admin" and not has_permission(
        user.id, "workspace.tools", request.app.state.config.USER_PERMISSIONS
    if user.role != "admin" and not (
        has_permission(
            user.id, "workspace.tools", request.app.state.config.USER_PERMISSIONS
        )
        or has_permission(
            user.id, "workspace.tools_import", request.app.state.config.USER_PERMISSIONS
        )
    ):
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
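The prompts and tools hunks above widen a single permission check into an admin-or-any-of-several-keys check. Stripped of the FastAPI plumbing, the shape of that check can be sketched like this; the flat dict store and function names are illustrative, whereas the real `has_permission` walks a nested permissions config.

```python
# Illustrative sketch of the widened permission check: a request passes if the
# user is an admin OR holds any of the listed permission keys.
PERMISSIONS = {
    "alice": {"workspace.tools"},
    "bob": {"workspace.tools_import"},
}

def has_permission(user_id: str, key: str) -> bool:
    # Flat lookup; the real helper resolves dotted keys in nested config.
    return key in PERMISSIONS.get(user_id, set())

def can_create_tool(user_id: str, role: str) -> bool:
    return role == "admin" or any(
        has_permission(user_id, key)
        for key in ("workspace.tools", "workspace.tools_import")
    )

print(can_create_tool("bob", "user"))    # True (holds tools_import)
print(can_create_tool("carol", "user"))  # False
```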
@@ -16,6 +16,7 @@ from open_webui.models.groups import Groups
from open_webui.models.chats import Chats
from open_webui.models.users import (
    UserModel,
    UserGroupIdsModel,
    UserListResponse,
    UserInfoListResponse,
    UserIdNameListResponse,

@@ -35,7 +36,12 @@ from open_webui.constants import ERROR_MESSAGES
from open_webui.env import SRC_LOG_LEVELS, STATIC_DIR


from open_webui.utils.auth import get_admin_user, get_password_hash, get_verified_user
from open_webui.utils.auth import (
    get_admin_user,
    get_password_hash,
    get_verified_user,
    validate_password,
)
from open_webui.utils.access_control import get_permissions, has_permission


@@ -91,7 +97,25 @@ async def get_users(
    if direction:
        filter["direction"] = direction

    return Users.get_users(filter=filter, skip=skip, limit=limit)
    result = Users.get_users(filter=filter, skip=skip, limit=limit)

    users = result["users"]
    total = result["total"]

    return {
        "users": [
            UserGroupIdsModel(
                **{
                    **user.model_dump(),
                    "group_ids": [
                        group.id for group in Groups.get_groups_by_member_id(user.id)
                    ],
                }
            )
            for user in users
        ],
        "total": total,
    }


@router.get("/all", response_model=UserInfoListResponse)

@@ -150,13 +174,24 @@ class WorkspacePermissions(BaseModel):
    knowledge: bool = False
    prompts: bool = False
    tools: bool = False
    models_import: bool = False
    models_export: bool = False
    prompts_import: bool = False
    prompts_export: bool = False
    tools_import: bool = False
    tools_export: bool = False


class SharingPermissions(BaseModel):
    public_models: bool = True
    public_knowledge: bool = True
    public_prompts: bool = True
    models: bool = False
    public_models: bool = False
    knowledge: bool = False
    public_knowledge: bool = False
    prompts: bool = False
    public_prompts: bool = False
    tools: bool = False
    public_tools: bool = True
    notes: bool = False
    public_notes: bool = True


@@ -183,6 +218,7 @@ class ChatPermissions(BaseModel):


class FeaturesPermissions(BaseModel):
    api_keys: bool = False
    direct_tool_servers: bool = False
    web_search: bool = True
    image_generation: bool = True

@@ -361,7 +397,7 @@ async def get_user_by_id(user_id: str, user=Depends(get_verified_user)):
    )


@router.get("/{user_id}/oauth/sessions", response_model=Optional[dict])
@router.get("/{user_id}/oauth/sessions")
async def get_user_oauth_sessions_by_id(user_id: str, user=Depends(get_admin_user)):
    sessions = OAuthSessions.get_sessions_by_user_id(user_id)
    if sessions and len(sessions) > 0:

@@ -471,8 +507,12 @@ async def update_user_by_id(
        )

    if form_data.password:
        try:
            validate_password(form_data.password)
        except Exception as e:
            raise HTTPException(400, detail=str(e))

        hashed = get_password_hash(form_data.password)
        log.debug(f"hashed: {hashed}")
        Auths.update_user_password_by_id(user_id, hashed)

    Auths.update_email_by_id(user_id, form_data.email.lower())
@@ -124,12 +124,3 @@ async def download_db(user=Depends(get_admin_user)):
        media_type="application/octet-stream",
        filename="webui.db",
    )


@router.get("/litellm/config")
async def download_litellm_config_yaml(user=Depends(get_admin_user)):
    return FileResponse(
        f"{DATA_DIR}/litellm/config.yaml",
        media_type="application/octet-stream",
        filename="config.yaml",
    )
@ -23,6 +23,7 @@ from open_webui.config import (
|
|||
)
|
||||
|
||||
from open_webui.env import (
|
||||
VERSION,
|
||||
ENABLE_WEBSOCKET_SUPPORT,
|
||||
WEBSOCKET_MANAGER,
|
||||
WEBSOCKET_REDIS_URL,
|
||||
|
|
@ -31,6 +32,11 @@ from open_webui.env import (
|
|||
WEBSOCKET_SENTINEL_PORT,
|
||||
WEBSOCKET_SENTINEL_HOSTS,
|
||||
REDIS_KEY_PREFIX,
|
||||
WEBSOCKET_REDIS_OPTIONS,
|
||||
WEBSOCKET_SERVER_PING_TIMEOUT,
|
||||
WEBSOCKET_SERVER_PING_INTERVAL,
|
||||
WEBSOCKET_SERVER_LOGGING,
|
||||
WEBSOCKET_SERVER_ENGINEIO_LOGGING,
|
||||
)
|
||||
from open_webui.utils.auth import decode_token
|
||||
from open_webui.socket.utils import RedisDict, RedisLock, YdocManager
|
||||
|
|
@ -52,30 +58,44 @@ log.setLevel(SRC_LOG_LEVELS["SOCKET"])
|
|||
|
||||
REDIS = None
|
||||
|
||||
# Configure CORS for Socket.IO
|
||||
SOCKETIO_CORS_ORIGINS = "*" if CORS_ALLOW_ORIGIN == ["*"] else CORS_ALLOW_ORIGIN
|
||||
|
||||
if WEBSOCKET_MANAGER == "redis":
|
||||
if WEBSOCKET_SENTINEL_HOSTS:
|
||||
mgr = socketio.AsyncRedisManager(
|
||||
get_sentinel_url_from_env(
|
||||
WEBSOCKET_REDIS_URL, WEBSOCKET_SENTINEL_HOSTS, WEBSOCKET_SENTINEL_PORT
|
||||
)
|
||||
),
|
||||
redis_options=WEBSOCKET_REDIS_OPTIONS,
|
||||
)
|
||||
else:
|
||||
mgr = socketio.AsyncRedisManager(WEBSOCKET_REDIS_URL)
|
||||
mgr = socketio.AsyncRedisManager(
|
||||
WEBSOCKET_REDIS_URL, redis_options=WEBSOCKET_REDIS_OPTIONS
|
||||
)
|
||||
sio = socketio.AsyncServer(
|
||||
cors_allowed_origins=CORS_ALLOW_ORIGIN,
|
||||
cors_allowed_origins=SOCKETIO_CORS_ORIGINS,
|
||||
async_mode="asgi",
|
||||
transports=(["websocket"] if ENABLE_WEBSOCKET_SUPPORT else ["polling"]),
|
||||
allow_upgrades=ENABLE_WEBSOCKET_SUPPORT,
|
||||
always_connect=True,
|
||||
client_manager=mgr,
|
||||
logger=WEBSOCKET_SERVER_LOGGING,
|
||||
ping_interval=WEBSOCKET_SERVER_PING_INTERVAL,
|
||||
ping_timeout=WEBSOCKET_SERVER_PING_TIMEOUT,
|
||||
engineio_logger=WEBSOCKET_SERVER_ENGINEIO_LOGGING,
|
||||
)
|
||||
else:
|
||||
sio = socketio.AsyncServer(
|
||||
cors_allowed_origins=CORS_ALLOW_ORIGIN,
|
||||
cors_allowed_origins=SOCKETIO_CORS_ORIGINS,
|
||||
async_mode="asgi",
|
||||
transports=(["websocket"] if ENABLE_WEBSOCKET_SUPPORT else ["polling"]),
|
||||
allow_upgrades=ENABLE_WEBSOCKET_SUPPORT,
|
||||
always_connect=True,
|
||||
logger=WEBSOCKET_SERVER_LOGGING,
|
||||
ping_interval=WEBSOCKET_SERVER_PING_INTERVAL,
|
||||
ping_timeout=WEBSOCKET_SERVER_PING_TIMEOUT,
|
||||
engineio_logger=WEBSOCKET_SERVER_ENGINEIO_LOGGING,
|
||||
)
|
||||
|
||||
|
||||
|
|
@@ -278,6 +298,8 @@ async def connect(sid, environ, auth):
     else:
         USER_POOL[user.id] = [sid]

+    await sio.enter_room(sid, f"user:{user.id}")
+

 @sio.on("user-join")
 async def user_join(sid, data):

@@ -300,6 +322,7 @@ async def user_join(sid, data):
     else:
         USER_POOL[user.id] = [sid]

+    await sio.enter_room(sid, f"user:{user.id}")
     # Join all the channels
     channels = Channels.get_channels_by_user_id(user.id)
     log.debug(f"{channels=}")
@@ -645,40 +668,24 @@ async def disconnect(sid):
 def get_event_emitter(request_info, update_db=True):
     async def __event_emitter__(event_data):
         user_id = request_info["user_id"]
-        chat_id = request_info["chat_id"]
-        message_id = request_info["message_id"]
-
-        session_ids = list(
-            set(
-                USER_POOL.get(user_id, [])
-                + (
-                    [request_info.get("session_id")]
-                    if request_info.get("session_id")
-                    else []
-                )
-            )
-        )
-
-        emit_tasks = [
-            sio.emit(
-                "events",
-                {
-                    "chat_id": chat_id,
-                    "message_id": message_id,
-                    "data": event_data,
-                },
-                to=session_id,
-            )
-            for session_id in session_ids
-        ]
-
-        await asyncio.gather(*emit_tasks)
+        chat_id = request_info.get("chat_id", None)
+        message_id = request_info.get("message_id", None)
+
+        await sio.emit(
+            "events",
+            {
+                "chat_id": chat_id,
+                "message_id": message_id,
+                "data": event_data,
+            },
+            room=f"user:{user_id}",
+        )
+
         if (
             update_db
             and message_id
            and not request_info.get("chat_id", "").startswith("local:")
         ):
             if "type" in event_data and event_data["type"] == "status":
                 Chats.add_message_status_to_chat_by_id_and_message_id(
                     request_info["chat_id"],
@@ -768,7 +775,14 @@ def get_event_emitter(request_info, update_db=True):
             },
         )

-    return __event_emitter__
+    if (
+        "user_id" in request_info
+        and "chat_id" in request_info
+        and "message_id" in request_info
+    ):
+        return __event_emitter__
+    else:
+        return None


 def get_event_call(request_info):
@@ -784,7 +798,14 @@ def get_event_call(request_info):
         )
         return response

-    return __event_caller__
+    if (
+        "session_id" in request_info
+        and "chat_id" in request_info
+        and "message_id" in request_info
+    ):
+        return __event_caller__
+    else:
+        return None


 get_event_caller = get_event_call
@@ -21,13 +21,18 @@ from typing import Optional, Union, List, Dict

 from opentelemetry import trace

+from open_webui.utils.access_control import has_permission
 from open_webui.models.users import Users

 from open_webui.constants import ERROR_MESSAGES

 from open_webui.env import (
+    ENABLE_PASSWORD_VALIDATION,
     OFFLINE_MODE,
     LICENSE_BLOB,
+    PASSWORD_VALIDATION_REGEX_PATTERN,
+    REDIS_KEY_PREFIX,
     pk,
     WEBUI_SECRET_KEY,
     TRUSTED_SIGNATURE_KEY,
@@ -159,6 +164,20 @@ def get_password_hash(password: str) -> str:
     return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt()).decode("utf-8")


+def validate_password(password: str) -> bool:
+    # The password passed to bcrypt must be 72 bytes or fewer. If it is longer, it will be truncated before hashing.
+    if len(password.encode("utf-8")) > 72:
+        raise Exception(
+            ERROR_MESSAGES.PASSWORD_TOO_LONG,
+        )
+
+    if ENABLE_PASSWORD_VALIDATION:
+        if not PASSWORD_VALIDATION_REGEX_PATTERN.match(password):
+            raise Exception(ERROR_MESSAGES.INVALID_PASSWORD())
+
+    return True
+
+
 def verify_password(plain_password: str, hashed_password: str) -> bool:
     """Verify a password against its hash"""
     return (
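The byte-length check above matters because bcrypt silently truncates input beyond 72 bytes, and multi-byte UTF-8 characters can push a short-looking password over that limit. A standalone sketch of the same check, with the project's `ERROR_MESSAGES` and configurable regex replaced by a plain `ValueError` and an assumed policy pattern:

```python
# Standalone sketch of the 72-byte bcrypt guard; PASSWORD_PATTERN is a
# hypothetical policy, not the project's configured regex.
import re

PASSWORD_PATTERN = re.compile(r"^(?=.*[A-Za-z])(?=.*\d).{8,}$")  # assumed policy


def validate_password(password: str) -> bool:
    # The check is on bytes, not characters: bcrypt hashes only the first 72 bytes.
    if len(password.encode("utf-8")) > 72:
        raise ValueError("password longer than 72 bytes")
    if not PASSWORD_PATTERN.match(password):
        raise ValueError("password does not meet policy")
    return True
```

Note that a 40-character password of two-byte characters is 80 bytes and fails the guard even though a character count would pass it.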
@@ -178,6 +197,9 @@ def create_token(data: dict, expires_delta: Union[timedelta, None] = None) -> str:
         expire = datetime.now(UTC) + expires_delta
         payload.update({"exp": expire})

+    jti = str(uuid.uuid4())
+    payload.update({"jti": jti})
+
     encoded_jwt = jwt.encode(payload, SESSION_SECRET, algorithm=ALGORITHM)
     return encoded_jwt
@@ -190,6 +212,43 @@ def decode_token(token: str) -> Optional[dict]:
     return None


+async def is_valid_token(request, decoded) -> bool:
+    # Require Redis to check revoked tokens
+    if request.app.state.redis:
+        jti = decoded.get("jti")
+
+        if jti:
+            revoked = await request.app.state.redis.get(
+                f"{REDIS_KEY_PREFIX}:auth:token:{jti}:revoked"
+            )
+            if revoked:
+                return False
+
+    return True
+
+
+async def invalidate_token(request, token):
+    decoded = decode_token(token)
+
+    # Require Redis to store revoked tokens
+    if request.app.state.redis:
+        jti = decoded.get("jti")
+        exp = decoded.get("exp")
+
+        if jti:
+            ttl = exp - int(
+                datetime.now(UTC).timestamp()
+            )  # Calculate time-to-live for the token
+
+            if ttl > 0:
+                # Store the revoked token in Redis with an expiration time
+                await request.app.state.redis.set(
+                    f"{REDIS_KEY_PREFIX}:auth:token:{jti}:revoked",
+                    "1",
+                    ex=ttl,
+                )
+
+
 def extract_token_from_auth_header(auth_header: str):
     return auth_header[len("Bearer ") :]
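The two helpers added above implement JWT revocation without storing every token: a revoked token's `jti` is flagged in Redis with a TTL equal to the token's remaining lifetime, so the flag expires exactly when the token itself would. A minimal synchronous sketch of the same scheme, with a dict-backed stand-in for Redis (`set(..., ex=ttl)` / `get`):

```python
# In-memory sketch of the jti-revocation scheme; FakeRedis stands in for the
# async Redis client used in the diff. All times are in seconds.
import time
import uuid


class FakeRedis:
    def __init__(self):
        self._store = {}

    def set(self, key, value, ex=None):
        expires = time.monotonic() + ex if ex is not None else None
        self._store[key] = (value, expires)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expires = item
        if expires is not None and time.monotonic() > expires:
            del self._store[key]  # lazy expiry, like Redis TTL
            return None
        return value


def invalidate(redis, decoded, now):
    # Remaining lifetime becomes the revocation flag's TTL.
    ttl = decoded["exp"] - now
    if decoded.get("jti") and ttl > 0:
        redis.set(f"auth:token:{decoded['jti']}:revoked", "1", ex=ttl)


def is_valid(redis, decoded):
    jti = decoded.get("jti")
    return not (jti and redis.get(f"auth:token:{jti}:revoked"))
```

Tokens issued before the `jti` claim existed simply have no flag to check, which is why the caller guards with `data.get("jti")` before consulting Redis.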
@@ -209,7 +268,7 @@ def get_http_authorization_cred(auth_header: Optional[str]):
     return None


-def get_current_user(
+async def get_current_user(
     request: Request,
     response: Response,
     background_tasks: BackgroundTasks,
@@ -228,30 +287,7 @@ def get_current_user(

     # auth by api key
     if token.startswith("sk-"):
-        if not request.state.enable_api_key:
-            raise HTTPException(
-                status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.API_KEY_NOT_ALLOWED
-            )
-
-        if request.app.state.config.ENABLE_API_KEY_ENDPOINT_RESTRICTIONS:
-            allowed_paths = [
-                path.strip()
-                for path in str(
-                    request.app.state.config.API_KEY_ALLOWED_ENDPOINTS
-                ).split(",")
-            ]
-
-            # Check if the request path matches any allowed endpoint.
-            if not any(
-                request.url.path == allowed
-                or request.url.path.startswith(allowed + "/")
-                for allowed in allowed_paths
-            ):
-                raise HTTPException(
-                    status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.API_KEY_NOT_ALLOWED
-                )
-
-        user = get_current_user_by_api_key(token)
+        user = get_current_user_by_api_key(request, token)

         # Add user info to current span
         current_span = trace.get_current_span()
@@ -264,7 +300,6 @@ def get_current_user(
         return user

     # auth by jwt token
-
     try:
         data = decode_token(token)
@@ -275,6 +310,12 @@ def get_current_user(
         )

     if data is not None and "id" in data:
+        if data.get("jti") and not await is_valid_token(request, data):
+            raise HTTPException(
+                status_code=status.HTTP_401_UNAUTHORIZED,
+                detail="Invalid token",
+            )
+
         user = Users.get_user_by_id(data["id"])
         if user is None:
             raise HTTPException(
@@ -327,7 +368,7 @@ def get_current_user(
     raise e


-def get_current_user_by_api_key(api_key: str):
+def get_current_user_by_api_key(request, api_key: str):
     user = Users.get_user_by_api_key(api_key)

     if user is None:
@@ -335,16 +376,28 @@ def get_current_user_by_api_key(request, api_key: str):
             status_code=status.HTTP_401_UNAUTHORIZED,
             detail=ERROR_MESSAGES.INVALID_TOKEN,
         )
-    else:
-        # Add user info to current span
-        current_span = trace.get_current_span()
-        if current_span:
-            current_span.set_attribute("client.user.id", user.id)
-            current_span.set_attribute("client.user.email", user.email)
-            current_span.set_attribute("client.user.role", user.role)
-            current_span.set_attribute("client.auth.type", "api_key")
-
-        Users.update_user_last_active_by_id(user.id)
+
+    if not request.state.enable_api_keys or (
+        user.role != "admin"
+        and not has_permission(
+            user.id,
+            "features.api_keys",
+            request.app.state.config.USER_PERMISSIONS,
+        )
+    ):
+        raise HTTPException(
+            status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.API_KEY_NOT_ALLOWED
+        )
+
+    # Add user info to current span
+    current_span = trace.get_current_span()
+    if current_span:
+        current_span.set_attribute("client.user.id", user.id)
+        current_span.set_attribute("client.user.email", user.email)
+        current_span.set_attribute("client.user.role", user.role)
+        current_span.set_attribute("client.auth.type", "api_key")
+
+    Users.update_user_last_active_by_id(user.id)

     return user
@@ -1,5 +1,5 @@
 from open_webui.routers.images import (
-    load_b64_image_data,
+    get_image_data,
     upload_image,
 )
@@ -16,13 +16,18 @@ from open_webui.routers.files import upload_file_handler
 import mimetypes
 import base64
 import io
+import re
+
+
+BASE64_IMAGE_URL_PREFIX = re.compile(r"data:image/\w+;base64,", re.IGNORECASE)
+MARKDOWN_IMAGE_URL_PATTERN = re.compile(r"!\[(.*?)\]\((.+?)\)", re.IGNORECASE)


 def get_image_url_from_base64(request, base64_image_string, metadata, user):
-    if "data:image/png;base64" in base64_image_string:
+    if BASE64_IMAGE_URL_PREFIX.match(base64_image_string):
         image_url = ""
         # Extract base64 image data from the line
-        image_data, content_type = load_b64_image_data(base64_image_string)
+        image_data, content_type = get_image_data(base64_image_string)
         if image_data is not None:
             image_url = upload_image(
                 request,
@@ -35,6 +40,19 @@ def get_image_url_from_base64(request, base64_image_string, metadata, user):
     return None


+def convert_markdown_base64_images(request, content: str, metadata, user):
+    def replace(match):
+        base64_string = match.group(2)
+        MIN_REPLACEMENT_URL_LENGTH = 1024
+        if len(base64_string) > MIN_REPLACEMENT_URL_LENGTH:
+            url = get_image_url_from_base64(request, base64_string, metadata, user)
+            if url:
+                return f"![{match.group(1)}]({url})"
+        return match.group(0)
+
+    return MARKDOWN_IMAGE_URL_PATTERN.sub(replace, content)
+
+
 def load_b64_audio_data(b64_str):
     try:
         if "," in b64_str:
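The converter above rewrites only markdown images whose URL is a large inline base64 payload, leaving ordinary links untouched. A self-contained demo of the two regexes with the uploader stubbed out (the `/cache/...` path is a hypothetical replacement URL):

```python
# Minimal demo of the markdown/base64 regexes; only the matching and
# length-threshold logic is exercised, the upload step is a stub.
import re

BASE64_IMAGE_URL_PREFIX = re.compile(r"data:image/\w+;base64,", re.IGNORECASE)
MARKDOWN_IMAGE_URL_PATTERN = re.compile(r"!\[(.*?)\]\((.+?)\)", re.IGNORECASE)


def convert(content, upload):
    def replace(match):
        url = match.group(2)
        # Leave short URLs (regular image links) alone; only large inline
        # base64 payloads are uploaded and replaced.
        if len(url) > 1024 and BASE64_IMAGE_URL_PREFIX.match(url):
            return f"![{match.group(1)}]({upload(url)})"
        return match.group(0)

    return MARKDOWN_IMAGE_URL_PATTERN.sub(replace, content)


big = "data:image/png;base64," + "A" * 2000
text = f"before ![chart]({big}) after ![logo](https://example.com/logo.png)"
out = convert(text, lambda url: "/cache/image/abc.png")
```

The non-greedy `(.+?)` in the URL group stops at the first `)`, which is safe here because base64 payloads contain no parentheses.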
11
backend/open_webui/utils/headers.py
Normal file

@@ -0,0 +1,11 @@
+from urllib.parse import quote
+
+
+def include_user_info_headers(headers, user):
+    return {
+        **headers,
+        "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
+        "X-OpenWebUI-User-Id": user.id,
+        "X-OpenWebUI-User-Email": user.email,
+        "X-OpenWebUI-User-Role": user.role,
+    }
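The new helper percent-encodes the user's display name with `quote(..., safe=" ")` so header values stay within the ASCII range HTTP headers expect, while literal spaces are preserved. A usage sketch (the `User` dataclass is a hypothetical stand-in for the project's user model):

```python
# Usage sketch for include_user_info_headers; User is a hypothetical
# stand-in for the project's user model.
from dataclasses import dataclass
from urllib.parse import quote


@dataclass
class User:
    id: str
    name: str
    email: str
    role: str


def include_user_info_headers(headers, user):
    return {
        **headers,  # existing headers are preserved, new ones merged in
        "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
        "X-OpenWebUI-User-Id": user.id,
        "X-OpenWebUI-User-Email": user.email,
        "X-OpenWebUI-User-Role": user.role,
    }


headers = include_user_info_headers(
    {"Content-Type": "application/json"},
    User(id="u1", name="José Smith", email="jose@example.com", role="user"),
)
```

Non-ASCII characters are UTF-8 percent-encoded, so "José Smith" becomes "Jos%C3%A9 Smith" while the space survives.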
@@ -2,6 +2,8 @@ import asyncio
 import json
 import logging
+import random
 import requests
+import aiohttp
 import urllib.parse
 import urllib.request
 from typing import Optional
@@ -91,6 +93,25 @@ def get_images(ws, prompt, client_id, base_url, api_key):
     return {"data": output_images}


+async def comfyui_upload_image(image_file_item, base_url, api_key):
+    url = f"{base_url}/api/upload/image"
+    headers = {}
+
+    if api_key:
+        headers["Authorization"] = f"Bearer {api_key}"
+
+    _, (filename, file_bytes, mime_type) = image_file_item
+
+    form = aiohttp.FormData()
+    form.add_field("image", file_bytes, filename=filename, content_type=mime_type)
+    form.add_field("type", "input")  # required by ComfyUI
+
+    async with aiohttp.ClientSession() as session:
+        async with session.post(url, data=form, headers=headers) as resp:
+            resp.raise_for_status()
+            return await resp.json()
+
+
 class ComfyUINodeInput(BaseModel):
     type: Optional[str] = None
     node_ids: list[str] = []
@@ -103,7 +124,7 @@ class ComfyUIWorkflow(BaseModel):
     nodes: list[ComfyUINodeInput]


-class ComfyUIGenerateImageForm(BaseModel):
+class ComfyUICreateImageForm(BaseModel):
     workflow: ComfyUIWorkflow

     prompt: str
@@ -116,8 +137,8 @@ class ComfyUICreateImageForm(BaseModel):
     seed: Optional[int] = None


-async def comfyui_generate_image(
-    model: str, payload: ComfyUIGenerateImageForm, client_id, base_url, api_key
+async def comfyui_create_image(
+    model: str, payload: ComfyUICreateImageForm, client_id, base_url, api_key
 ):
     ws_url = base_url.replace("http://", "ws://").replace("https://", "wss://")
     workflow = json.loads(payload.workflow.workflow)
@@ -191,3 +212,102 @@ async def comfyui_create_image(
     ws.close()

     return images
+
+
+class ComfyUIEditImageForm(BaseModel):
+    workflow: ComfyUIWorkflow
+
+    image: str | list[str]
+    prompt: str
+    width: Optional[int] = None
+    height: Optional[int] = None
+    n: Optional[int] = None
+
+    steps: Optional[int] = None
+    seed: Optional[int] = None
+
+
+async def comfyui_edit_image(
+    model: str, payload: ComfyUIEditImageForm, client_id, base_url, api_key
+):
+    ws_url = base_url.replace("http://", "ws://").replace("https://", "wss://")
+    workflow = json.loads(payload.workflow.workflow)
+
+    for node in payload.workflow.nodes:
+        if node.type:
+            if node.type == "model":
+                for node_id in node.node_ids:
+                    workflow[node_id]["inputs"][node.key] = model
+            elif node.type == "image":
+                if isinstance(payload.image, list):
+                    # check if multiple images are provided
+                    for idx, node_id in enumerate(node.node_ids):
+                        if idx < len(payload.image):
+                            workflow[node_id]["inputs"][node.key] = payload.image[idx]
+                else:
+                    for node_id in node.node_ids:
+                        workflow[node_id]["inputs"][node.key] = payload.image
+            elif node.type == "prompt":
+                for node_id in node.node_ids:
+                    workflow[node_id]["inputs"][
+                        node.key if node.key else "text"
+                    ] = payload.prompt
+            elif node.type == "negative_prompt":
+                for node_id in node.node_ids:
+                    workflow[node_id]["inputs"][
+                        node.key if node.key else "text"
+                    ] = payload.negative_prompt
+            elif node.type == "width":
+                for node_id in node.node_ids:
+                    workflow[node_id]["inputs"][
+                        node.key if node.key else "width"
+                    ] = payload.width
+            elif node.type == "height":
+                for node_id in node.node_ids:
+                    workflow[node_id]["inputs"][
+                        node.key if node.key else "height"
+                    ] = payload.height
+            elif node.type == "n":
+                for node_id in node.node_ids:
+                    workflow[node_id]["inputs"][
+                        node.key if node.key else "batch_size"
+                    ] = payload.n
+            elif node.type == "steps":
+                for node_id in node.node_ids:
+                    workflow[node_id]["inputs"][
+                        node.key if node.key else "steps"
+                    ] = payload.steps
+            elif node.type == "seed":
+                seed = (
+                    payload.seed
+                    if payload.seed
+                    else random.randint(0, 1125899906842624)
+                )
+                for node_id in node.node_ids:
+                    workflow[node_id]["inputs"][node.key] = seed
+        else:
+            for node_id in node.node_ids:
+                workflow[node_id]["inputs"][node.key] = node.value
+
+    try:
+        ws = websocket.WebSocket()
+        headers = {"Authorization": f"Bearer {api_key}"}
+        ws.connect(f"{ws_url}/ws?clientId={client_id}", header=headers)
+        log.info("WebSocket connection established.")
+    except Exception as e:
+        log.exception(f"Failed to connect to WebSocket server: {e}")
+        return None
+
+    try:
+        log.info("Sending workflow to WebSocket server.")
+        log.info(f"Workflow: {workflow}")
+        images = await asyncio.to_thread(
+            get_images, ws, workflow, client_id, base_url, api_key
+        )
+    except Exception as e:
+        log.exception(f"Error while receiving images: {e}")
+        images = None
+
+    ws.close()
+
+    return images
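The long `elif` chain in `comfyui_edit_image` is a mapping step: each `ComfyUINodeInput` names a node type, a target input key, and the workflow node ids to patch, and the loop writes the payload value into `workflow[node_id]["inputs"][key]`. A tiny self-contained demo of that pattern on a hypothetical two-node workflow:

```python
# Hypothetical two-node ComfyUI workflow showing how the mapping loop writes
# values into workflow[node_id]["inputs"][key]; node ids and class types are
# illustrative only.
import json

workflow = json.loads("""
{
  "3": {"class_type": "KSampler", "inputs": {"seed": 0, "steps": 20}},
  "6": {"class_type": "CLIPTextEncode", "inputs": {"text": ""}}
}
""")

# (type, key, node_ids) triples, mirroring ComfyUINodeInput
nodes = [
    ("prompt", "text", ["6"]),
    ("seed", "seed", ["3"]),
]

prompt, seed = "a watercolor fox", 12345
for node_type, key, node_ids in nodes:
    value = {"prompt": prompt, "seed": seed}[node_type]
    for node_id in node_ids:
        workflow[node_id]["inputs"][key] = value
```

Keying on node ids rather than positions is what lets one generic loop serve arbitrary user-supplied workflows.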
@@ -45,10 +45,10 @@ from open_webui.routers.retrieval import (
     SearchForm,
 )
 from open_webui.routers.images import (
     load_b64_image_data,
     image_generations,
-    GenerateImageForm,
     upload_image,
+    CreateImageForm,
+    image_edits,
+    EditImageForm,
 )
 from open_webui.routers.pipelines import (
     process_pipeline_inlet_filter,
@@ -58,7 +58,7 @@ from open_webui.routers.memories import query_memory, QueryMemoryForm

 from open_webui.utils.webhook import post_webhook
 from open_webui.utils.files import (
     get_audio_url_from_base64,
+    convert_markdown_base64_images,
     get_file_url_from_base64,
     get_image_url_from_base64,
 )
@@ -91,7 +91,7 @@ from open_webui.utils.misc import (
     convert_logit_bias_input_to_json,
     get_content_from_message,
 )
-from open_webui.utils.tools import get_tools
+from open_webui.utils.tools import get_tools, get_updated_tool_function
 from open_webui.utils.plugin import load_function_module_by_id
 from open_webui.utils.filter import (
     get_sorted_filter_ids,
@@ -104,6 +104,7 @@ from open_webui.utils.mcp.client import MCPClient

 from open_webui.config import (
     CACHE_DIR,
+    DEFAULT_VOICE_MODE_PROMPT_TEMPLATE,
     DEFAULT_TOOLS_FUNCTION_CALLING_PROMPT_TEMPLATE,
     DEFAULT_CODE_INTERPRETER_PROMPT,
     CODE_INTERPRETER_BLOCKED_MODULES,
@@ -111,6 +112,7 @@ from open_webui.config import (
 from open_webui.env import (
     SRC_LOG_LEVELS,
     GLOBAL_LOG_LEVEL,
+    ENABLE_CHAT_RESPONSE_BASE64_IMAGE_URL_CONVERSION,
     CHAT_RESPONSE_STREAM_DELTA_CHUNK_SIZE,
     CHAT_RESPONSE_MAX_TOOL_CALL_RETRIES,
     BYPASS_MODEL_ACCESS_CONTROL,
@@ -302,19 +304,27 @@ async def chat_completion_tools_handler(
 def get_tools_function_calling_payload(messages, task_model_id, content):
     user_message = get_last_user_message(messages)
+
+    if user_message and messages and messages[-1]["role"] == "user":
+        # Remove the last user message to avoid duplication
+        messages = messages[:-1]
+
     recent_messages = messages[-4:] if len(messages) > 4 else messages
     chat_history = "\n".join(
         f"{message['role'].upper()}: \"\"\"{get_content_from_message(message)}\"\"\""
         for message in recent_messages
     )

-    prompt = f"History:\n{chat_history}\nQuery: {user_message}"
+    prompt = (
+        f"History:\n{chat_history}\nQuery: {user_message}"
+        if chat_history
+        else f"Query: {user_message}"
+    )

     return {
         "model": task_model_id,
         "messages": [
             {"role": "system", "content": content},
-            {"role": "user", "content": f"Query: {prompt}"},
+            {"role": "user", "content": prompt},
         ],
         "stream": False,
         "metadata": {"task": str(TASKS.FUNCTION_CALLING)},
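The payload change above does two things: it drops the final user message from the history block so the query is not duplicated, and it omits the `History:` prefix entirely when there is no prior conversation. A standalone sketch of that assembly, with message content as plain strings rather than the project's content blocks:

```python
# Standalone sketch of the function-calling prompt assembly; messages carry
# plain string content here instead of the project's content extraction.
def build_prompt(messages):
    user_message = (
        messages[-1]["content"]
        if messages and messages[-1]["role"] == "user"
        else None
    )
    if user_message is not None:
        messages = messages[:-1]  # drop it so it is not repeated inside History

    recent = messages[-4:]  # keep at most the four most recent turns
    chat_history = "\n".join(
        f'{m["role"].upper()}: """{m["content"]}"""' for m in recent
    )
    return (
        f"History:\n{chat_history}\nQuery: {user_message}"
        if chat_history
        else f"Query: {user_message}"
    )
```

On the first turn the prompt collapses to just `Query: ...`, which avoids sending an empty `History:` block to the task model.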
@@ -718,9 +728,52 @@ async def chat_web_search_handler(
     return form_data


+def get_last_images(message_list):
+    images = []
+    for message in reversed(message_list):
+        images_flag = False
+        for file in message.get("files", []):
+            if file.get("type") == "image":
+                images.append(file.get("url"))
+                images_flag = True
+
+        if images_flag:
+            break
+
+    return images
+
+
+def get_image_urls(delta_images, request, metadata, user) -> list[str]:
+    if not isinstance(delta_images, list):
+        return []
+
+    image_urls = []
+    for img in delta_images:
+        if not isinstance(img, dict) or img.get("type") != "image_url":
+            continue
+
+        url = img.get("image_url", {}).get("url")
+        if not url:
+            continue
+
+        if url.startswith("data:image/png;base64"):
+            url = get_image_url_from_base64(request, url, metadata, user)
+
+        image_urls.append(url)
+
+    return image_urls
+
+
 async def chat_image_generation_handler(
     request: Request, form_data: dict, extra_params: dict, user
 ):
+    metadata = extra_params.get("__metadata__", {})
+    chat_id = metadata.get("chat_id", None)
+    if not chat_id:
+        return form_data
+
+    chat = Chats.get_chat_by_id_and_user_id(chat_id, user.id)
+
     __event_emitter__ = extra_params["__event_emitter__"]
     await __event_emitter__(
         {
@@ -729,87 +782,153 @@ async def chat_image_generation_handler(
         }
     )

-    messages = form_data["messages"]
-    user_message = get_last_user_message(messages)
+    messages_map = chat.chat.get("history", {}).get("messages", {})
+    message_id = chat.chat.get("history", {}).get("currentId")
+    message_list = get_message_list(messages_map, message_id)
+    user_message = get_last_user_message(message_list)

     prompt = user_message
-    negative_prompt = ""
-
-    if request.app.state.config.ENABLE_IMAGE_PROMPT_GENERATION:
-        try:
-            res = await generate_image_prompt(
-                request,
-                {
-                    "model": form_data["model"],
-                    "messages": messages,
-                },
-                user,
-            )
-
-            response = res["choices"][0]["message"]["content"]
-
-            try:
-                bracket_start = response.find("{")
-                bracket_end = response.rfind("}") + 1
-
-                if bracket_start == -1 or bracket_end == -1:
-                    raise Exception("No JSON object found in the response")
-
-                response = response[bracket_start:bracket_end]
-                response = json.loads(response)
-                prompt = response.get("prompt", [])
-            except Exception as e:
-                prompt = user_message
-
-        except Exception as e:
-            log.exception(e)
-            prompt = user_message
+    input_images = get_last_images(message_list)

     system_message_content = ""

-    try:
-        images = await image_generations(
-            request=request,
-            form_data=GenerateImageForm(**{"prompt": prompt}),
-            user=user,
-        )
-
-        await __event_emitter__(
-            {
-                "type": "status",
-                "data": {"description": "Image created", "done": True},
-            }
-        )
-
-        await __event_emitter__(
-            {
-                "type": "files",
-                "data": {
-                    "files": [
-                        {
-                            "type": "image",
-                            "url": image["url"],
-                        }
-                        for image in images
-                    ]
-                },
-            }
-        )
-
-        system_message_content = "<context>User is shown the generated image, tell the user that the image has been generated</context>"
-    except Exception as e:
-        log.exception(e)
-        await __event_emitter__(
-            {
-                "type": "status",
-                "data": {
-                    "description": f"An error occurred while generating an image",
-                    "done": True,
-                },
-            }
-        )
-
-        system_message_content = "<context>Unable to generate an image, tell the user that an error occurred</context>"
+    if len(input_images) > 0 and request.app.state.config.ENABLE_IMAGE_EDIT:
+        # Edit image(s)
+        try:
+            images = await image_edits(
+                request=request,
+                form_data=EditImageForm(**{"prompt": prompt, "image": input_images}),
+                user=user,
+            )
+
+            await __event_emitter__(
+                {
+                    "type": "status",
+                    "data": {"description": "Image created", "done": True},
+                }
+            )
+
+            await __event_emitter__(
+                {
+                    "type": "files",
+                    "data": {
+                        "files": [
+                            {
+                                "type": "image",
+                                "url": image["url"],
+                            }
+                            for image in images
+                        ]
+                    },
+                }
+            )
+
+            system_message_content = "<context>The requested image has been created and is now being shown to the user. Let them know that it has been generated.</context>"
+        except Exception as e:
+            log.debug(e)
+
+            error_message = ""
+            if isinstance(e, HTTPException):
+                if e.detail and isinstance(e.detail, dict):
+                    error_message = e.detail.get("message", str(e.detail))
+                else:
+                    error_message = str(e.detail)
+
+            await __event_emitter__(
+                {
+                    "type": "status",
+                    "data": {
+                        "description": f"An error occurred while generating an image",
+                        "done": True,
+                    },
+                }
+            )
+
+            system_message_content = f"<context>Image generation was attempted but failed. The system is currently unable to generate the image. Tell the user that an error occurred: {error_message}</context>"
+
+    else:
+        # Create image(s)
+        if request.app.state.config.ENABLE_IMAGE_PROMPT_GENERATION:
+            try:
+                res = await generate_image_prompt(
+                    request,
+                    {
+                        "model": form_data["model"],
+                        "messages": form_data["messages"],
+                    },
+                    user,
+                )
+
+                response = res["choices"][0]["message"]["content"]
+
+                try:
+                    bracket_start = response.find("{")
+                    bracket_end = response.rfind("}") + 1
+
+                    if bracket_start == -1 or bracket_end == -1:
+                        raise Exception("No JSON object found in the response")
+
+                    response = response[bracket_start:bracket_end]
+                    response = json.loads(response)
+                    prompt = response.get("prompt", [])
+                except Exception as e:
+                    prompt = user_message
+
+            except Exception as e:
+                log.exception(e)
+                prompt = user_message
+
+        try:
+            images = await image_generations(
+                request=request,
+                form_data=CreateImageForm(**{"prompt": prompt}),
+                user=user,
+            )
+
+            await __event_emitter__(
+                {
+                    "type": "status",
+                    "data": {"description": "Image created", "done": True},
+                }
+            )
+
+            await __event_emitter__(
+                {
+                    "type": "files",
+                    "data": {
+                        "files": [
+                            {
+                                "type": "image",
+                                "url": image["url"],
+                            }
+                            for image in images
+                        ]
+                    },
+                }
+            )
+
+            system_message_content = "<context>The requested image has been created and is now being shown to the user. Let them know that it has been generated.</context>"
+        except Exception as e:
+            log.debug(e)
+
+            error_message = ""
+            if isinstance(e, HTTPException):
+                if e.detail and isinstance(e.detail, dict):
+                    error_message = e.detail.get("message", str(e.detail))
+                else:
+                    error_message = str(e.detail)
+
+            await __event_emitter__(
+                {
+                    "type": "status",
+                    "data": {
+                        "description": f"An error occurred while generating an image",
+                        "done": True,
+                    },
+                }
+            )
+
+            system_message_content = f"<context>Image generation was attempted but failed. The system is currently unable to generate the image. Tell the user that an error occurred: {error_message}</context>"

     if system_message_content:
         form_data["messages"] = add_or_update_system_message(
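The edit-vs-create branch above is driven by `get_last_images`, which walks the chat history from newest to oldest and returns the image attachments of the most recent message that has any, then stops. A quick exercise of that function on plain dicts:

```python
# get_last_images as added in this diff, exercised on plain dict messages:
# it collects the image URLs of the most recent message that has any images,
# and does not keep scanning older messages once it found one.
def get_last_images(message_list):
    images = []
    for message in reversed(message_list):
        images_flag = False
        for file in message.get("files", []):
            if file.get("type") == "image":
                images.append(file.get("url"))
                images_flag = True

        if images_flag:
            break

    return images


history = [
    {"files": [{"type": "image", "url": "/cache/old.png"}]},
    {"files": []},
    {"files": [{"type": "image", "url": "/cache/a.png"},
               {"type": "image", "url": "/cache/b.png"}]},
]
```

Only the newest message's images are returned, so an earlier image in the chat does not silently become an edit input.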
@@ -874,37 +993,32 @@ async def chat_completion_files_handler(
     queries = [get_last_user_message(body["messages"])]

     try:
-        # Offload get_sources_from_items to a separate thread
-        loop = asyncio.get_running_loop()
-        with ThreadPoolExecutor() as executor:
-            sources = await loop.run_in_executor(
-                executor,
-                lambda: get_sources_from_items(
-                    request=request,
-                    items=files,
-                    queries=queries,
-                    embedding_function=lambda query, prefix: request.app.state.EMBEDDING_FUNCTION(
-                        query, prefix=prefix, user=user
-                    ),
-                    k=request.app.state.config.TOP_K,
-                    reranking_function=(
-                        (
-                            lambda sentences: request.app.state.RERANKING_FUNCTION(
-                                sentences, user=user
-                            )
-                        )
-                        if request.app.state.RERANKING_FUNCTION
-                        else None
-                    ),
-                    k_reranker=request.app.state.config.TOP_K_RERANKER,
-                    r=request.app.state.config.RELEVANCE_THRESHOLD,
-                    hybrid_bm25_weight=request.app.state.config.HYBRID_BM25_WEIGHT,
-                    hybrid_search=request.app.state.config.ENABLE_RAG_HYBRID_SEARCH,
-                    full_context=all_full_context
-                    or request.app.state.config.RAG_FULL_CONTEXT,
-                    user=user,
-                ),
-            )
+        # Directly await async get_sources_from_items (no thread needed - fully async now)
+        sources = await get_sources_from_items(
+            request=request,
+            items=files,
+            queries=queries,
+            embedding_function=lambda query, prefix: request.app.state.EMBEDDING_FUNCTION(
+                query, prefix=prefix, user=user
+            ),
+            k=request.app.state.config.TOP_K,
+            reranking_function=(
+                (
+                    lambda query, documents: request.app.state.RERANKING_FUNCTION(
+                        query, documents, user=user
+                    )
+                )
+                if request.app.state.RERANKING_FUNCTION
+                else None
+            ),
+            k_reranker=request.app.state.config.TOP_K_RERANKER,
+            r=request.app.state.config.RELEVANCE_THRESHOLD,
+            hybrid_bm25_weight=request.app.state.config.HYBRID_BM25_WEIGHT,
+            hybrid_search=request.app.state.config.ENABLE_RAG_HYBRID_SEARCH,
+            full_context=all_full_context
+            or request.app.state.config.RAG_FULL_CONTEXT,
+            user=user,
+        )
     except Exception as e:
         log.exception(e)
@@ -1011,7 +1125,7 @@ async def process_chat_payload(request, form_data, user, metadata, model):
         pass

     event_emitter = get_event_emitter(metadata)
-    event_call = get_event_call(metadata)
+    event_caller = get_event_call(metadata)

     oauth_token = None
     try:
@@ -1025,14 +1139,13 @@ async def process_chat_payload(request, form_data, user, metadata, model):

     extra_params = {
         "__event_emitter__": event_emitter,
-        "__event_call__": event_call,
+        "__event_call__": event_caller,
         "__user__": user.model_dump() if isinstance(user, UserModel) else {},
         "__metadata__": metadata,
-        "__oauth_token__": oauth_token,
         "__request__": request,
         "__model__": model,
+        "__oauth_token__": oauth_token,
     }

-    # Initialize events to store additional event to be sent to the client
     # Initialize contexts and citation
     if getattr(request.state, "direct", False) and hasattr(request.state, "model"):
@@ -1143,6 +1256,18 @@ async def process_chat_payload(request, form_data, user, metadata, model):

     features = form_data.pop("features", None)
     if features:
+        if "voice" in features and features["voice"]:
+            if request.app.state.config.VOICE_MODE_PROMPT_TEMPLATE != None:
+                if request.app.state.config.VOICE_MODE_PROMPT_TEMPLATE != "":
+                    template = request.app.state.config.VOICE_MODE_PROMPT_TEMPLATE
+            else:
+                template = DEFAULT_VOICE_MODE_PROMPT_TEMPLATE
+
+            form_data["messages"] = add_or_update_system_message(
+                template,
+                form_data["messages"],
+            )
+
         if "memory" in features and features["memory"]:
             form_data = await chat_memory_handler(
                 request, form_data, extra_params, user
@@ -1237,7 +1362,6 @@ async def process_chat_payload(request, form_data, user, metadata, model):
             continue

         auth_type = mcp_server_connection.get("auth_type", "")
-
         headers = {}
         if auth_type == "bearer":
             headers["Authorization"] = (
@@ -1273,6 +1397,11 @@ async def process_chat_payload(request, form_data, user, metadata, model):
             log.error(f"Error getting OAuth token: {e}")
             oauth_token = None

+        connection_headers = mcp_server_connection.get("headers", None)
+        if connection_headers and isinstance(connection_headers, dict):
+            for key, value in connection_headers.items():
+                headers[key] = value
+
         mcp_clients[server_id] = MCPClient()
         await mcp_clients[server_id].connect(
             url=mcp_server_connection.get("url", ""),
@@ -1307,6 +1436,17 @@ async def process_chat_payload(request, form_data, user, metadata, model):
                     }
                 except Exception as e:
                     log.debug(e)
+                    if event_emitter:
+                        await event_emitter(
+                            {
+                                "type": "chat:message:error",
+                                "data": {
+                                    "error": {
+                                        "content": f"Failed to connect to MCP server '{server_id}'"
+                                    }
+                                },
+                            }
+                        )
                     continue

         tools_dict = await get_tools(
@@ -1543,16 +1683,13 @@ async def process_chat_response(
             if not metadata.get("chat_id", "").startswith(
                 "local:"
             ):  # Only update titles and tags for non-temp chats
-                if (
-                    TASKS.TITLE_GENERATION in tasks
-                    and tasks[TASKS.TITLE_GENERATION]
-                ):
+                if TASKS.TITLE_GENERATION in tasks:
                     user_message = get_last_user_message(messages)
                     if user_message and len(user_message) > 100:
                         user_message = user_message[:100] + "..."

+                    title = None
+                    if tasks[TASKS.TITLE_GENERATION]:
                         res = await generate_title(
                             request,
                             {
@@ -1603,7 +1740,8 @@ async def process_chat_response(
                                     "data": title,
                                 }
                             )
-                        elif len(messages) == 2:
+
+                        if title == None and len(messages) == 2:
                             title = messages[0].get("content", user_message)

                             Chats.update_chat_title_by_id(metadata["chat_id"], title)
@@ -1939,9 +2077,11 @@ async def process_chat_response(
                     content = f"{content}{tool_calls_display_content}"

                 elif block["type"] == "reasoning":
-                    reasoning_display_content = "\n".join(
-                        (f"> {line}" if not line.startswith(">") else line)
-                        for line in block["content"].splitlines()
-                    )
+                    reasoning_display_content = html.escape(
+                        "\n".join(
+                            (f"> {line}" if not line.startswith(">") else line)
+                            for line in block["content"].splitlines()
+                        )
+                    )

                     reasoning_duration = block.get("duration", None)
@@ -2459,6 +2599,26 @@ async def process_chat_response(
                                                "arguments"
                                            ] += delta_arguments

+                                    image_urls = get_image_urls(
+                                        delta.get("images", []), request, metadata, user
+                                    )
+                                    if image_urls:
+                                        message_files = Chats.add_message_files_by_id_and_message_id(
+                                            metadata["chat_id"],
+                                            metadata["message_id"],
+                                            [
+                                                {"type": "image", "url": url}
+                                                for url in image_urls
+                                            ],
+                                        )
+
+                                        await event_emitter(
+                                            {
+                                                "type": "files",
+                                                "data": {"files": message_files},
+                                            }
+                                        )
+
                                     value = delta.get("content")

                                     reasoning_content = (
@@ -2517,6 +2677,11 @@ async def process_chat_response(
                                             }
                                         )

+                                    if ENABLE_CHAT_RESPONSE_BASE64_IMAGE_URL_CONVERSION:
+                                        value = convert_markdown_base64_images(
+                                            request, value, metadata, user
+                                        )
+
                                     content = f"{content}{value}"
                                     if not content_blocks:
                                         content_blocks.append(
@@ -2742,7 +2907,16 @@ async def process_chat_response(
                                     )

                                 else:
-                                    tool_function = tool["callable"]
+                                    tool_function = get_updated_tool_function(
+                                        function=tool["callable"],
+                                        extra_params={
+                                            "__messages__": form_data.get(
+                                                "messages", []
+                                            ),
+                                            "__files__": metadata.get("files", []),
+                                        },
+                                    )
+
                                     tool_result = await tool_function(
                                         **tool_function_params
                                     )
@@ -8,10 +8,11 @@ from datetime import timedelta
 from pathlib import Path
 from typing import Callable, Optional
 import json
+import aiohttp


 import collections.abc
-from open_webui.env import SRC_LOG_LEVELS
+from open_webui.env import SRC_LOG_LEVELS, CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE

 log = logging.getLogger(__name__)
 log.setLevel(SRC_LOG_LEVELS["MAIN"])
@@ -539,3 +540,68 @@ def extract_urls(text: str) -> list[str]:
         r"(https?://[^\s]+)", re.IGNORECASE
     )  # Matches http and https URLs
     return url_pattern.findall(text)
+
+
+def stream_chunks_handler(stream: aiohttp.StreamReader):
+    """
+    Handle stream response chunks, supporting large data chunks that exceed the original 16kb limit.
+    When a single line exceeds max_buffer_size, returns an empty JSON string {} and skips subsequent data
+    until encountering normally sized data.
+
+    :param stream: The stream reader to handle.
+    :return: An async generator that yields the stream data.
+    """
+
+    max_buffer_size = CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE
+    if max_buffer_size is None or max_buffer_size <= 0:
+        return stream
+
+    async def yield_safe_stream_chunks():
+        buffer = b""
+        skip_mode = False
+
+        async for data, _ in stream.iter_chunks():
+            if not data:
+                continue
+
+            # In skip_mode, if buffer already exceeds the limit, clear it (it's part of an oversized line)
+            if skip_mode and len(buffer) > max_buffer_size:
+                buffer = b""
+
+            lines = (buffer + data).split(b"\n")
+
+            # Process complete lines (except the last possibly incomplete fragment)
+            for i in range(len(lines) - 1):
+                line = lines[i]
+
+                if skip_mode:
+                    # Skip mode: check if current line is small enough to exit skip mode
+                    if len(line) <= max_buffer_size:
+                        skip_mode = False
+                        yield line
+                    else:
+                        yield b"data: {}"
+                else:
+                    # Normal mode: check if line exceeds limit
+                    if len(line) > max_buffer_size:
+                        skip_mode = True
+                        yield b"data: {}"
+                        log.info(f"Skip mode triggered, line size: {len(line)}")
+                    else:
+                        yield line
+
+            # Save the last incomplete fragment
+            buffer = lines[-1]
+
+            # Check if buffer exceeds limit
+            if not skip_mode and len(buffer) > max_buffer_size:
+                skip_mode = True
+                log.info(f"Skip mode triggered, buffer size: {len(buffer)}")
+                # Clear oversized buffer to prevent unlimited growth
+                buffer = b""
+
+        # Process remaining buffer data
+        if buffer and not skip_mode:
+            yield buffer
+
+    return yield_safe_stream_chunks()
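The line-splitting logic in `stream_chunks_handler` above can be exercised in isolation. The sketch below is a simplified, synchronous stand-in for the aiohttp stream handling (the chunk list and the 16-byte `max_buffer_size` are illustrative, not the real defaults): oversized lines are replaced with an empty SSE payload, partial lines are buffered across chunks, and normal lines pass through.

```python
def split_safe_lines(chunks, max_buffer_size=16):
    """Synchronous sketch of the skip-mode line splitter."""
    buffer = b""
    skip_mode = False
    out = []
    for data in chunks:
        # Drop buffered data that belongs to an oversized line
        if skip_mode and len(buffer) > max_buffer_size:
            buffer = b""
        lines = (buffer + data).split(b"\n")
        # Process complete lines; the last fragment may be incomplete
        for line in lines[:-1]:
            if skip_mode:
                if len(line) <= max_buffer_size:
                    skip_mode = False
                    out.append(line)
                else:
                    out.append(b"data: {}")
            else:
                if len(line) > max_buffer_size:
                    skip_mode = True
                    out.append(b"data: {}")
                else:
                    out.append(line)
        buffer = lines[-1]
        # An oversized partial line also triggers skip mode
        if not skip_mode and len(buffer) > max_buffer_size:
            skip_mode = True
            buffer = b""
    if buffer and not skip_mode:
        out.append(buffer)
    return out
```

A line split across two chunks is reassembled before the size check, which is why the buffer, not just each chunk, is compared against the limit.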
@@ -12,6 +12,7 @@ from open_webui.functions import get_function_models
 from open_webui.models.functions import Functions
 from open_webui.models.models import Models
+from open_webui.models.groups import Groups


 from open_webui.utils.plugin import (
@@ -356,6 +357,7 @@ def get_filtered_models(models, user):
         or (user.role == "admin" and not BYPASS_ADMIN_ACCESS_CONTROL)
     ) and not BYPASS_MODEL_ACCESS_CONTROL:
         filtered_models = []
+        user_group_ids = {group.id for group in Groups.get_groups_by_member_id(user.id)}
         for model in models:
             if model.get("arena"):
                 if has_access(
@@ -364,6 +366,7 @@ def get_filtered_models(models, user):
                     access_control=model.get("info", {})
                     .get("meta", {})
                     .get("access_control", {}),
+                    user_group_ids=user_group_ids,
                 ):
                     filtered_models.append(model)
                 continue
@@ -377,6 +380,7 @@ def get_filtered_models(models, user):
                     user.id,
                     type="read",
                     access_control=model_info.access_control,
+                    user_group_ids=user_group_ids,
                 )
             ):
                 filtered_models.append(model)
@@ -14,7 +14,7 @@ import fnmatch
 import time
 import secrets
 from cryptography.fernet import Fernet
-
+from typing import Literal

 import aiohttp
 from authlib.integrations.starlette_client import OAuth
@@ -42,6 +42,7 @@ from open_webui.config import (
     ENABLE_OAUTH_GROUP_MANAGEMENT,
     ENABLE_OAUTH_GROUP_CREATION,
     OAUTH_BLOCKED_GROUPS,
+    OAUTH_GROUPS_SEPARATOR,
     OAUTH_ROLES_CLAIM,
     OAUTH_SUB_CLAIM,
     OAUTH_GROUPS_CLAIM,
@@ -52,6 +53,7 @@ from open_webui.config import (
     OAUTH_ADMIN_ROLES,
     OAUTH_ALLOWED_DOMAINS,
     OAUTH_UPDATE_PICTURE_ON_LOGIN,
+    OAUTH_ACCESS_TOKEN_REQUEST_INCLUDE_CLIENT_ID,
     WEBHOOK_URL,
     JWT_EXPIRES_IN,
     AppConfig,
@@ -71,13 +73,20 @@ from open_webui.utils.auth import get_password_hash, create_token
 from open_webui.utils.webhook import post_webhook

 from mcp.shared.auth import (
-    OAuthClientMetadata,
+    OAuthClientMetadata as MCPOAuthClientMetadata,
     OAuthMetadata,
 )

 from authlib.oauth2.rfc6749.errors import OAuth2Error


+class OAuthClientMetadata(MCPOAuthClientMetadata):
+    token_endpoint_auth_method: Literal[
+        "none", "client_secret_basic", "client_secret_post"
+    ] = "client_secret_post"
+    pass
+
+
 class OAuthClientInformationFull(OAuthClientMetadata):
     issuer: Optional[str] = None  # URL of the OAuth server that issued this client
@@ -237,24 +246,33 @@ def get_parsed_and_base_url(server_url) -> tuple[urllib.parse.ParseResult, str]:
 def get_discovery_urls(server_url) -> list[str]:
     parsed, base_url = get_parsed_and_base_url(server_url)

-    urls = [
-        urllib.parse.urljoin(base_url, "/.well-known/oauth-authorization-server"),
-        urllib.parse.urljoin(base_url, "/.well-known/openid-configuration"),
-    ]
+    urls = []

     if parsed.path and parsed.path != "/":
-        urls.append(
-            urllib.parse.urljoin(
-                base_url,
-                f"/.well-known/oauth-authorization-server{parsed.path.rstrip('/')}",
-            )
-        )
-        urls.append(
-            urllib.parse.urljoin(
-                base_url, f"/.well-known/openid-configuration{parsed.path.rstrip('/')}"
-            )
-        )
+        # Generate discovery URLs based on https://modelcontextprotocol.io/specification/draft/basic/authorization#authorization-server-metadata-discovery
+        tenant = parsed.path.rstrip("/")
+        urls.extend(
+            [
+                urllib.parse.urljoin(
+                    base_url,
+                    f"/.well-known/oauth-authorization-server{tenant}",
+                ),
+                urllib.parse.urljoin(
+                    base_url, f"/.well-known/openid-configuration{tenant}"
+                ),
+                urllib.parse.urljoin(
+                    base_url, f"{tenant}/.well-known/openid-configuration"
+                ),
+            ]
+        )
+
+    urls.extend(
+        [
+            urllib.parse.urljoin(base_url, "/.well-known/oauth-authorization-server"),
+            urllib.parse.urljoin(base_url, "/.well-known/openid-configuration"),
+        ]
+    )

     return urls
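For a path-bearing issuer, the reordered list above tries tenant-scoped metadata endpoints before the root ones. A self-contained sketch using only the standard library (the issuer URL in the test is an invented example, and the URL parsing is inlined rather than using the module's `get_parsed_and_base_url` helper):

```python
import urllib.parse

def discovery_urls(server_url: str) -> list[str]:
    parsed = urllib.parse.urlparse(server_url)
    base_url = f"{parsed.scheme}://{parsed.netloc}"
    urls = []
    if parsed.path and parsed.path != "/":
        # Tenant-scoped discovery endpoints come first
        tenant = parsed.path.rstrip("/")
        urls.extend(
            [
                urllib.parse.urljoin(base_url, f"/.well-known/oauth-authorization-server{tenant}"),
                urllib.parse.urljoin(base_url, f"/.well-known/openid-configuration{tenant}"),
                urllib.parse.urljoin(base_url, f"{tenant}/.well-known/openid-configuration"),
            ]
        )
    # Root-level endpoints are still tried as a fallback
    urls.extend(
        [
            urllib.parse.urljoin(base_url, "/.well-known/oauth-authorization-server"),
            urllib.parse.urljoin(base_url, "/.well-known/openid-configuration"),
        ]
    )
    return urls
```

Note the third tenant form places `/.well-known/` after the path segment, which some OpenID providers use for path-based issuers.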
@@ -279,13 +297,12 @@ async def get_oauth_client_info_with_dynamic_client_registration(
         redirect_uris=[f"{redirect_base_url}/oauth/clients/{client_id}/callback"],
         grant_types=["authorization_code", "refresh_token"],
         response_types=["code"],
-        token_endpoint_auth_method="client_secret_post",
     )

     # Attempt to fetch OAuth server metadata to get registration endpoint & scopes
     discovery_urls = get_discovery_urls(oauth_server_url)
     for url in discovery_urls:
-        async with aiohttp.ClientSession() as session:
+        async with aiohttp.ClientSession(trust_env=True) as session:
            async with session.get(
                 url, ssl=AIOHTTP_CLIENT_SESSION_SSL
             ) as oauth_server_metadata_response:
@@ -302,6 +319,17 @@ async def get_oauth_client_info_with_dynamic_client_registration(
                     oauth_client_metadata.scope = " ".join(
                         oauth_server_metadata.scopes_supported
                     )
+
+                    if (
+                        oauth_server_metadata.token_endpoint_auth_methods_supported
+                        and oauth_client_metadata.token_endpoint_auth_method
+                        not in oauth_server_metadata.token_endpoint_auth_methods_supported
+                    ):
+                        # Pick the first supported method from the server
+                        oauth_client_metadata.token_endpoint_auth_method = oauth_server_metadata.token_endpoint_auth_methods_supported[
+                            0
+                        ]
+
                     break
             except Exception as e:
                 log.error(f"Error parsing OAuth metadata from {url}: {e}")
@@ -321,7 +349,7 @@ async def get_oauth_client_info_with_dynamic_client_registration(
     )

     # Perform dynamic client registration and return client info
-    async with aiohttp.ClientSession() as session:
+    async with aiohttp.ClientSession(trust_env=True) as session:
         async with session.post(
             registration_url, json=registration_data, ssl=AIOHTTP_CLIENT_SESSION_SSL
         ) as oauth_client_registration_response:
@@ -329,6 +357,13 @@ async def get_oauth_client_info_with_dynamic_client_registration(
             registration_response_json = (
                 await oauth_client_registration_response.json()
             )
+
+            # The mcp package requires optional unset values to be None. If an empty string is passed, it gets validated and fails.
+            # This replaces all empty strings with None.
+            registration_response_json = {
+                k: (None if v == "" else v)
+                for k, v in registration_response_json.items()
+            }
             oauth_client_info = OAuthClientInformationFull.model_validate(
                 {
                     **registration_response_json,
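The normalization step above can be shown in isolation: empty strings returned by a registration endpoint are mapped to `None` so that optional Pydantic fields validate as unset rather than failing format checks. The function name and sample payload below are illustrative.

```python
def normalize_registration_response(data: dict) -> dict:
    # Replace empty strings with None so optional fields validate as unset
    return {k: (None if v == "" else v) for k, v in data.items()}
```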
@@ -373,9 +408,20 @@ class OAuthClientManager:
             "name": client_id,
             "client_id": oauth_client_info.client_id,
             "client_secret": oauth_client_info.client_secret,
-            "client_kwargs": (
-                {"scope": oauth_client_info.scope} if oauth_client_info.scope else {}
-            ),
+            "client_kwargs": {
+                **(
+                    {"scope": oauth_client_info.scope}
+                    if oauth_client_info.scope
+                    else {}
+                ),
+                **(
+                    {
+                        "token_endpoint_auth_method": oauth_client_info.token_endpoint_auth_method
+                    }
+                    if oauth_client_info.token_endpoint_auth_method
+                    else {}
+                ),
+            },
             "server_metadata_url": (
                 oauth_client_info.issuer if oauth_client_info.issuer else None
             ),
@@ -689,16 +735,17 @@ class OAuthClientManager:
         error_message = None
         try:
             client_info = self.get_client_info(client_id)
-            token_params = {}
+
+            auth_params = {}
             if (
                 client_info
                 and hasattr(client_info, "client_id")
                 and hasattr(client_info, "client_secret")
             ):
-                token_params["client_id"] = client_info.client_id
-                token_params["client_secret"] = client_info.client_secret
+                auth_params["client_id"] = client_info.client_id
+                auth_params["client_secret"] = client_info.client_secret

-            token = await client.authorize_access_token(request, **token_params)
+            token = await client.authorize_access_token(request, **auth_params)
             if token:
                 try:
                     # Add timestamp for tracking
@@ -977,6 +1024,10 @@ class OAuthManager:
             for nested_claim in nested_claims:
                 claim_data = claim_data.get(nested_claim, {})

+            # Try flat claim structure as alternative
+            if not claim_data:
+                claim_data = user_data.get(oauth_claim, {})
+
             oauth_roles = []

             if isinstance(claim_data, list):
@@ -1035,7 +1086,11 @@ class OAuthManager:
             if isinstance(claim_data, list):
                 user_oauth_groups = claim_data
             elif isinstance(claim_data, str):
-                user_oauth_groups = [claim_data]
+                # Split by the configured separator if present
+                if OAUTH_GROUPS_SEPARATOR in claim_data:
+                    user_oauth_groups = claim_data.split(OAUTH_GROUPS_SEPARATOR)
+                else:
+                    user_oauth_groups = [claim_data]
             else:
                 user_oauth_groups = []
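The separator handling above normalizes a groups claim into a list whether the IdP sends an array, a delimited string, or nothing. A small sketch of that branch (the `,` separator and function name are assumptions for illustration; in Open WebUI the separator comes from `OAUTH_GROUPS_SEPARATOR`):

```python
OAUTH_GROUPS_SEPARATOR = ","  # assumed separator value for this sketch

def parse_group_claim(claim_data):
    # Lists pass through; strings are split on the separator when present
    if isinstance(claim_data, list):
        return claim_data
    if isinstance(claim_data, str):
        if OAUTH_GROUPS_SEPARATOR in claim_data:
            return claim_data.split(OAUTH_GROUPS_SEPARATOR)
        return [claim_data]
    return []
```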
@@ -1106,22 +1161,21 @@ class OAuthManager:
                         f"Removing user from group {group_model.name} as it is no longer in their oauth groups"
                     )

-                    user_ids = group_model.user_ids
-                    user_ids = [i for i in user_ids if i != user.id]
+                    Groups.remove_users_from_group(group_model.id, [user.id])

                     # In case a group is created, but perms are never assigned to the group by hitting "save"
                     group_permissions = group_model.permissions
                     if not group_permissions:
                         group_permissions = default_permissions

-                    update_form = GroupUpdateForm(
-                        name=group_model.name,
-                        description=group_model.description,
-                        permissions=group_permissions,
-                        user_ids=user_ids,
-                    )
                     Groups.update_group_by_id(
-                        id=group_model.id, form_data=update_form, overwrite=False
+                        id=group_model.id,
+                        form_data=GroupUpdateForm(
+                            name=group_model.name,
+                            description=group_model.description,
+                            permissions=group_permissions,
+                        ),
+                        overwrite=False,
                     )

             # Add user to new groups
@@ -1137,22 +1191,21 @@ class OAuthManager:
                         f"Adding user to group {group_model.name} as it was found in their oauth groups"
                     )

-                    user_ids = group_model.user_ids
-                    user_ids.append(user.id)
+                    Groups.add_users_to_group(group_model.id, [user.id])

                     # In case a group is created, but perms are never assigned to the group by hitting "save"
                     group_permissions = group_model.permissions
                     if not group_permissions:
                         group_permissions = default_permissions

-                    update_form = GroupUpdateForm(
-                        name=group_model.name,
-                        description=group_model.description,
-                        permissions=group_permissions,
-                        user_ids=user_ids,
-                    )
                     Groups.update_group_by_id(
-                        id=group_model.id, form_data=update_form, overwrite=False
+                        id=group_model.id,
+                        form_data=GroupUpdateForm(
+                            name=group_model.name,
+                            description=group_model.description,
+                            permissions=group_permissions,
+                        ),
+                        overwrite=False,
                     )

     async def _process_picture_url(
@@ -1219,8 +1272,18 @@ class OAuthManager:
         error_message = None
         try:
             client = self.get_client(provider)
+
+            auth_params = {}
+
+            if client:
+                if (
+                    hasattr(client, "client_id")
+                    and OAUTH_ACCESS_TOKEN_REQUEST_INCLUDE_CLIENT_ID
+                ):
+                    auth_params["client_id"] = client.client_id
+
             try:
-                token = await client.authorize_access_token(request)
+                token = await client.authorize_access_token(request, **auth_params)
             except Exception as e:
                 detailed_error = _build_oauth_callback_error_message(e)
                 log.warning(
@@ -208,20 +208,21 @@ def rag_template(template: str, context: str, query: str):
     if "[query]" in context:
         query_placeholder = "{{QUERY" + str(uuid.uuid4()) + "}}"
         template = template.replace("[query]", query_placeholder)
-        query_placeholders.append(query_placeholder)
+        query_placeholders.append((query_placeholder, "[query]"))

     if "{{QUERY}}" in context:
         query_placeholder = "{{QUERY" + str(uuid.uuid4()) + "}}"
         template = template.replace("{{QUERY}}", query_placeholder)
-        query_placeholders.append(query_placeholder)
+        query_placeholders.append((query_placeholder, "{{QUERY}}"))

     template = template.replace("[context]", context)
     template = template.replace("{{CONTEXT}}", context)

     template = template.replace("[query]", query)
     template = template.replace("{{QUERY}}", query)

-    for query_placeholder in query_placeholders:
-        template = template.replace(query_placeholder, query)
+    for query_placeholder, original_placeholder in query_placeholders:
+        template = template.replace(query_placeholder, original_placeholder)

     return template
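The tuple-based change to `rag_template` above restores protected tokens to their original placeholder literal instead of substituting the query into them. A behavioral sketch, simplified to the `[query]` form only (the function name and the template/context strings in the test are illustrative, not the project defaults):

```python
import uuid

def render_rag_template(template: str, context: str, query: str) -> str:
    query_placeholders = []
    if "[query]" in context:
        # Swap the template's own slot for a unique token before the
        # context (which also contains "[query]" text) is inserted
        token = "{{QUERY" + str(uuid.uuid4()) + "}}"
        template = template.replace("[query]", token)
        query_placeholders.append((token, "[query]"))

    template = template.replace("[context]", context)
    template = template.replace("[query]", query)

    # The fix: restore the original literal rather than substituting the query
    for token, literal in query_placeholders:
        template = template.replace(token, literal)
    return template
```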
@@ -99,6 +99,9 @@ def _build_meter_provider(resource: Resource) -> MeterProvider:
         View(
             instrument_name="webui.users.active",
         ),
+        View(
+            instrument_name="webui.users.active.today",
+        ),
     ]

     provider = MeterProvider(
@@ -159,6 +162,18 @@ def setup_metrics(app: FastAPI, resource: Resource) -> None:
         callbacks=[observe_active_users],
     )

+    def observe_users_active_today(
+        options: metrics.CallbackOptions,
+    ) -> Sequence[metrics.Observation]:
+        return [metrics.Observation(value=Users.get_num_users_active_today())]
+
+    meter.create_observable_gauge(
+        name="webui.users.active.today",
+        description="Number of users active since midnight today",
+        unit="users",
+        callbacks=[observe_users_active_today],
+    )
+
     # FastAPI middleware
     @app.middleware("http")
     async def _metrics_middleware(request: Request, call_next):
@@ -85,9 +85,26 @@ def get_async_tool_function_and_apply_extra_params(
     update_wrapper(new_function, function)
     new_function.__signature__ = new_sig

+    new_function.__function__ = function  # type: ignore
+    new_function.__extra_params__ = extra_params  # type: ignore
+
     return new_function


+def get_updated_tool_function(function: Callable, extra_params: dict):
+    # Get the original function and merge updated params
+    __function__ = getattr(function, "__function__", None)
+    __extra_params__ = getattr(function, "__extra_params__", None)
+
+    if __function__ is not None and __extra_params__ is not None:
+        return get_async_tool_function_and_apply_extra_params(
+            __function__,
+            {**__extra_params__, **extra_params},
+        )
+
+    return function
+
+
 async def get_tools(
     request: Request, tool_ids: list[str], user: UserModel, extra_params: dict
 ) -> dict[str, dict]:
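The stashed `__function__` and `__extra_params__` attributes above let a wrapped tool callable be re-wrapped later with merged parameters. A reduced, synchronous sketch of that rebind pattern (the real helpers are async and live in Open WebUI's tool utilities; these names and the sample tool are illustrative):

```python
from functools import update_wrapper

def apply_extra_params(function, extra_params: dict):
    # Bind extra_params and remember the original function for later re-binding
    def wrapped(**kwargs):
        return function(**{**extra_params, **kwargs})

    update_wrapper(wrapped, function)
    wrapped.__function__ = function
    wrapped.__extra_params__ = extra_params
    return wrapped

def updated_tool_function(function, extra_params: dict):
    # Re-wrap from the original callable, merging old and new params
    original = getattr(function, "__function__", None)
    params = getattr(function, "__extra_params__", None)
    if original is not None and params is not None:
        return apply_extra_params(original, {**params, **extra_params})
    return function
```

Re-wrapping from the stored original avoids stacking wrappers: each call to `updated_tool_function` produces a single layer with the merged parameter dict.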
@@ -138,7 +155,9 @@ async def get_tools(
             auth_type = tool_server_connection.get("auth_type", "bearer")

             cookies = {}
-            headers = {}
+            headers = {
+                "Content-Type": "application/json",
+            }

             if auth_type == "bearer":
                 headers["Authorization"] = (
@@ -160,7 +179,10 @@ async def get_tools(
                     f"Bearer {oauth_token.get('access_token', '')}"
                 )

-            headers["Content-Type"] = "application/json"
+            connection_headers = tool_server_connection.get("headers", None)
+            if connection_headers and isinstance(connection_headers, dict):
+                for key, value in connection_headers.items():
+                    headers[key] = value

             def make_tool_function(
                 function_name, tool_server_data, headers
@@ -215,14 +237,16 @@ async def get_tools(
                 module, _ = load_tool_module_by_id(tool_id)
                 request.app.state.TOOLS[tool_id] = module

-            extra_params["__id__"] = tool_id
+            __user__ = {
+                **extra_params["__user__"],
+            }

             # Set valves for the tool
             if hasattr(module, "valves") and hasattr(module, "Valves"):
                 valves = Tools.get_tool_valves_by_id(tool_id) or {}
                 module.valves = module.Valves(**valves)
             if hasattr(module, "UserValves"):
-                extra_params["__user__"]["valves"] = module.UserValves(  # type: ignore
+                __user__["valves"] = module.UserValves(  # type: ignore
                     **Tools.get_user_valves_by_id_and_user_id(tool_id, user.id)
                 )
@@ -244,7 +268,12 @@ async def get_tools(
                 function_name = spec["name"]
                 tool_function = getattr(module, function_name)
                 callable = get_async_tool_function_and_apply_extra_params(
-                    tool_function, extra_params
+                    tool_function,
+                    {
+                        **extra_params,
+                        "__id__": tool_id,
+                        "__user__": __user__,
+                    },
                 )

                 # TODO: Support Pydantic models as parameters
@@ -544,20 +573,21 @@ async def get_tool_servers(request: Request):
     return tool_servers


-async def get_tool_server_data(token: str, url: str) -> Dict[str, Any]:
-    headers = {
+async def get_tool_server_data(url: str, headers: Optional[dict]) -> Dict[str, Any]:
+    _headers = {
         "Accept": "application/json",
         "Content-Type": "application/json",
     }
-    if token:
-        headers["Authorization"] = f"Bearer {token}"
+
+    if headers:
+        _headers.update(headers)

     error = None
     try:
         timeout = aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_TOOL_SERVER_DATA)
         async with aiohttp.ClientSession(timeout=timeout, trust_env=True) as session:
             async with session.get(
-                url, headers=headers, ssl=AIOHTTP_CLIENT_SESSION_TOOL_SERVER_SSL
+                url, headers=_headers, ssl=AIOHTTP_CLIENT_SESSION_TOOL_SERVER_SSL
             ) as response:
                 if response.status != 200:
                     error_body = await response.json()
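The signature change above generalizes the old bearer-token parameter into an arbitrary header dict merged over defaults. The merge semantics can be shown on their own (function name is illustrative): defaults come first, and caller-supplied headers override them key by key.

```python
def merge_headers(headers=None):
    # Defaults first; caller-supplied headers override them
    _headers = {
        "Accept": "application/json",
        "Content-Type": "application/json",
    }
    if headers:
        _headers.update(headers)
    return _headers
```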
@@ -627,7 +657,10 @@ async def get_tool_servers_data(servers: List[Dict[str, Any]]) -> List[Dict[str,
             openapi_path = server.get("path", "openapi.json")
             spec_url = get_tool_server_url(server_url, openapi_path)
             # Fetch from URL
-            task = get_tool_server_data(token, spec_url)
+            task = get_tool_server_data(
+                spec_url,
+                {"Authorization": f"Bearer {token}"} if token else None,
+            )
         elif spec_type == "json" and server.get("spec", ""):
             # Use provided JSON spec
             spec_json = None
@@ -51,7 +51,7 @@ async def post_webhook(name: str, url: str, message: str, event_data: dict) -> b
     payload = {**event_data}

     log.debug(f"payload: {payload}")
-    async with aiohttp.ClientSession() as session:
+    async with aiohttp.ClientSession(trust_env=True) as session:
         async with session.post(url, json=payload) as r:
             r_text = await r.text()
             r.raise_for_status()
50 backend/requirements-min.txt Normal file
@@ -0,0 +1,50 @@
+# Minimal requirements for backend to run
+# WIP: use this as a reference to build a minimal docker image
+
+fastapi==0.118.0
+uvicorn[standard]==0.37.0
+pydantic==2.11.9
+python-multipart==0.0.20
+itsdangerous==2.2.0
+
+python-socketio==5.13.0
+python-jose==3.5.0
+cryptography
+bcrypt==5.0.0
+argon2-cffi==25.1.0
+PyJWT[crypto]==2.10.1
+authlib==1.6.5
+
+requests==2.32.5
+aiohttp==3.12.15
+async-timeout
+aiocache
+aiofiles
+starlette-compress==1.6.0
+httpx[socks,http2,zstd,cli,brotli]==0.28.1
+starsessions[redis]==2.2.1
+
+sqlalchemy==2.0.38
+alembic==1.14.0
+peewee==3.18.1
+peewee-migrate==1.12.2
+
+pycrdt==0.12.25
+redis
+
+APScheduler==3.10.4
+RestrictedPython==8.0
+
+loguru==0.7.3
+asgiref==3.8.1
+
+mcp==1.21.2
+openai
+
+langchain==0.3.27
+langchain-community==0.3.29
+fake-useragent==2.2.0
+
+chromadb==1.1.0
+black==25.9.0
+pydub
@@ -37,11 +37,11 @@ asgiref==3.8.1

 # AI libraries
 tiktoken
-mcp==1.14.1
+mcp==1.21.2

 openai
 anthropic
-google-genai==1.38.0
+google-genai==1.52.0
 google-generativeai==0.8.5

 langchain==0.3.27
@@ -49,6 +49,7 @@ langchain-community==0.3.29

 fake-useragent==2.2.0
 chromadb==1.1.0
+weaviate-client==4.17.0
 opensearch-py==2.8.0

 transformers
@@ -63,7 +64,8 @@ fpdf2==2.8.2
 pymdown-extensions==10.14.2
 docx2txt==0.8
 python-pptx==1.0.2
-unstructured==0.18.15
+unstructured==0.18.18
+msoffcrypto-tool==5.4.2
 nltk==3.9.1
 Markdown==3.9
 pypandoc==1.15
@@ -75,7 +77,6 @@ validators==0.35.0
 psutil
 sentencepiece
 soundfile==0.13.1
-azure-ai-documentintelligence==1.0.2

 pillow==11.3.0
 opencv-python-headless==4.11.0.86
@@ -85,7 +86,6 @@ rank-bm25==0.2.2
 onnxruntime==1.20.1
 faster-whisper==1.1.1

-
 black==25.9.0
 youtube-transcript-api==1.2.2
 pytube==15.0.0
@@ -93,6 +93,11 @@ pytube==15.0.0
 pydub
 ddgs==9.0.0

+azure-ai-documentintelligence==1.0.2
+azure-identity==1.25.0
+azure-storage-blob==12.24.1
+azure-search-documents==11.6.0
+
 ## Google Drive
 google-api-python-client
 google-auth-httplib2
@@ -101,10 +106,7 @@ google-auth-oauthlib
 googleapis-common-protos==1.70.0
 google-cloud-storage==2.19.0

-azure-identity==1.25.0
-azure-storage-blob==12.24.1
-
 ## Databases
 pymongo
 psycopg2-binary==2.9.10
 pgvector==0.4.1
@@ -57,22 +57,28 @@ We appreciate the community's interest in identifying potential vulnerabilities.
 > [!NOTE]
 > **Note**: If you believe you have found a security issue that
 >
-> 1. affects default configurations **or**
-> 2. represents a genuine bypass of intended security controls **or**
-> 3. works only with non-default configurations **but the configuration in question is likely to be used by production deployments** > **then we absolutely want to hear about it.** This policy is intended to filter configuration issues and deployment problems, not to discourage legitimate security research.
+> 1. affects default configurations, **or**
+> 2. represents a genuine bypass of intended security controls, **or**
+> 3. works only with non-default configurations, **but the configuration in question is likely to be used by production deployments**, **then we absolutely want to hear about it.** This policy is intended to filter configuration issues and deployment problems, not to discourage legitimate security research.

 8. **Threat Model Understanding Required**: Reports must demonstrate understanding of Open WebUI's self-hosted, authenticated, role-based access control architecture. Comparing Open WebUI to services with fundamentally different security models without acknowledging the architectural differences may result in report rejection.

 9. **CVSS Scoring Accuracy:** If you include a CVSS score with your report, it must accurately reflect the vulnerability according to CVSS methodology. Common errors include 1) rating PR:N (None) when authentication is required, 2) scoring hypothetical attack chains instead of the actual vulnerability, or 3) inflating severity without evidence. **We will adjust inaccurate CVSS scores.** Intentionally inflated scores may result in report rejection.

-> [!WARNING] > **Using CVE Precedents:** If you cite other CVEs to support your report, ensure they are **genuinely comparable** in vulnerability type, threat model, and attack vector. Citing CVEs from different product categories, different vulnerability classes or different deployment models will lead us to suspect the use of AI in your report.
+> [!WARNING]
+>
+> **Using CVE Precedents:** If you cite other CVEs to support your report, ensure they are **genuinely comparable** in vulnerability type, threat model, and attack vector. Citing CVEs from different product categories, different vulnerability classes or different deployment models will lead us to suspect the use of AI in your report.

-11. **Admin Actions Are Out of Scope:** Vulnerabilities that require an administrator to actively perform unsafe actions are **not considered valid vulnerabilities**. Admins have full system control and are expected to understand the security implications of their actions and configurations. This includes but is not limited to: adding malicious external servers (models, tools, webhooks), pasting untrusted code into Functions/Tools, or intentionally weakening security settings. **Reports requiring admin negligence or social engineering of admins may be rejected.**
-
-12. **AI report transparency:** Due to an extreme spike in AI-aided vulnerability reports **YOU MUST DISCLOSE if AI was used in any capacity** - whether for writing the report, generating the PoC, or identifying the vulnerability. If AI helped you in any way shape or form in the creation of the report, PoC or finding the vulnerability, you MUST disclose it.
-
-> [!NOTE]
-> AI-aided vulnerability reports **will not be rejected by us by default.** But:
+10. **Admin Actions Are Out of Scope:** Vulnerabilities that require an administrator to actively perform unsafe actions are **not considered valid vulnerabilities**. Admins have full system control and are expected to understand the security implications of their actions and configurations. This includes but is not limited to: adding malicious external servers (models, tools, webhooks), pasting untrusted code into Functions/Tools, or intentionally weakening security settings. **Reports requiring admin negligence or social engineering of admins may be rejected.**
+
+> [!NOTE]
+> Similar to rule "Default Configuration Testing": If you believe you have found a vulnerability that affects admins and is NOT caused by admin negligence or intentionally malicious actions,
+> **then we absolutely want to hear about it.** This policy is intended to filter social engineering attacks on admins, malicious plugins being deployed by admins and similar malicious actions, not to discourage legitimate security research.
+
+11. **AI report transparency:** Due to an extreme spike in AI-aided vulnerability reports **YOU MUST DISCLOSE if AI was used in any capacity** - whether for writing the report, generating the PoC, or identifying the vulnerability. If AI helped you in any way shape or form in the creation of the report, PoC or finding the vulnerability, you MUST disclose it.
+
+> [!NOTE]
+> AI-aided vulnerability reports **will not be rejected by us by default**. But:
>
|
||||
> - If we suspect you used AI (but you did not disclose it to us), we will be asking tough follow-up questions to validate your understanding of the reported vulnerability and Open WebUI itself.
|
||||
> - If we suspect you used AI (but you did not disclose it to us) **and** your report ends up being invalid/not a vulnerability/not reproducible, then you **may be banned** from reporting future vulnerabilities.
|
||||
|
|
@ -94,7 +100,7 @@ We appreciate the community's interest in identifying potential vulnerabilities.

If you want to report a vulnerability and can meet the outlined requirements, [open a vulnerability report here](https://github.com/open-webui/open-webui/security/advisories/new).
If you feel like you are not able to follow ALL outlined requirements for vulnerability-specific reasons, still report it; we will check every report either way.

-## Product Security And For Non-Vulnerability Security Concerns:
+## Product Security And For Non-Vulnerability Related Security Concerns:

If your concern does not meet the vulnerability requirements outlined above, is not a vulnerability, **but is still related to security concerns**, then use the following channels instead:
@ -121,4 +127,4 @@ For any other immediate concerns, please create an issue in our [issue tracker](

---

-_Last updated on **2025-10-17**._
+_Last updated on **2025-11-06**._
4 package-lock.json generated
@ -1,12 +1,12 @@
{
	"name": "open-webui",
-	"version": "0.6.34",
+	"version": "0.6.38",
	"lockfileVersion": 3,
	"requires": true,
	"packages": {
		"": {
			"name": "open-webui",
-			"version": "0.6.34",
+			"version": "0.6.38",
			"dependencies": {
				"@azure/msal-browser": "^4.5.0",
				"@codemirror/lang-javascript": "^6.2.2",
@ -1,6 +1,6 @@
{
	"name": "open-webui",
-	"version": "0.6.34",
+	"version": "0.6.38",
	"private": true,
	"scripts": {
		"dev": "npm run pyodide:fetch && vite dev --host",
@ -37,9 +37,6 @@ dependencies = [
    "pycrdt==0.12.25",
    "redis",

-    "PyMySQL==1.1.1",
-    "boto3==1.40.5",

    "APScheduler==3.10.4",
    "RestrictedPython==8.0",
@ -47,11 +44,11 @@ dependencies = [
    "asgiref==3.8.1",

    "tiktoken",
-    "mcp==1.14.1",
+    "mcp==1.21.2",

    "openai",
    "anthropic",
-    "google-genai==1.38.0",
+    "google-genai==1.52.0",
    "google-generativeai==0.8.5",

    "langchain==0.3.27",
@ -60,6 +57,8 @@ dependencies = [
    "fake-useragent==2.2.0",
    "chromadb==1.0.20",
    "opensearch-py==2.8.0",
+    "PyMySQL==1.1.1",
+    "boto3==1.40.5",

    "transformers",
    "sentence-transformers==5.1.1",
@ -73,7 +72,8 @@ dependencies = [
    "pymdown-extensions==10.14.2",
    "docx2txt==0.8",
    "python-pptx==1.0.2",
-    "unstructured==0.18.15",
+    "unstructured==0.18.18",
+    "msoffcrypto-tool==5.4.2",
    "nltk==3.9.1",
    "Markdown==3.9",
    "pypandoc==1.15",
@ -146,12 +146,14 @@ all = [
    "elasticsearch==9.1.0",

    "qdrant-client==1.14.3",
    "weaviate-client==4.17.0",
    "pymilvus==2.6.2",
    "pinecone==6.0.2",
    "oracledb==3.2.0",

    "colbert-ai==0.2.21",

    "firecrawl-py==4.5.0",
    "azure-search-documents==11.6.0",
]

[project.scripts]
29 src/app.css
@ -30,8 +30,33 @@
	font-display: swap;
}

+/* --app-text-scale is updated via the UI Scale slider (Interface.svelte) */
+:root {
+	--app-text-scale: 1;
+}
+
html {
	word-break: break-word;
+	/* font-size scales the entire document via the same UI control */
+	font-size: calc(1rem * var(--app-text-scale, 1));
}

+#sidebar-chat-item {
+	/* sidebar item sizing scales for the chat list entries */
+	min-height: calc(32px * var(--app-text-scale, 1));
+	padding-inline: calc(11px * var(--app-text-scale, 1));
+	padding-block: calc(6px * var(--app-text-scale, 1));
+}
+
+#sidebar-chat-item div[dir='auto'] {
+	/* chat title line height follows the text scale */
+	height: calc(20px * var(--app-text-scale, 1));
+	line-height: calc(20px * var(--app-text-scale, 1));
+}
+
+#sidebar-chat-item input {
+	/* editing state input height is kept in sync */
+	min-height: calc(20px * var(--app-text-scale, 1));
+}
+
code {

@ -129,8 +154,8 @@ li p {
}

::-webkit-scrollbar {
-	height: 0.4rem;
-	width: 0.4rem;
+	height: 0.45rem;
+	width: 0.45rem;
}

::-webkit-scrollbar-track {
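The app.css hunk above routes every scaled size through a single `--app-text-scale` custom property. As a hypothetical sketch (the slider wiring in Interface.svelte is not shown in this diff; the helper name and clamping range are illustrative), the slider handler only needs to update that one property:

```typescript
// Hypothetical sketch of the UI Scale slider handler. Only the
// --app-text-scale property name comes from the diff above; the
// function and the 0.5–2 clamping range are illustrative.
export function textScaleDeclaration(scale: number): string {
	// Clamp so extreme slider values cannot break the layout.
	const clamped = Math.min(2, Math.max(0.5, scale));
	return `--app-text-scale: ${clamped};`;
}

// In the browser this would be applied to the root element, e.g.:
// document.documentElement.style.setProperty('--app-text-scale', String(scale));
```

Because each rule multiplies its base size by `var(--app-text-scale, 1)`, a single property update rescales the root font size and the sidebar items together.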
112 src/app.html
@ -174,72 +174,72 @@
		</span> -->
	</div>
</body>
-</html>

<style type="text/css" nonce="">
	html {
		overflow-y: hidden !important;
		overscroll-behavior-y: none;
	}

	#splash-screen {
		background: #fff;
	}

	html.dark #splash-screen {
		background: #000;
	}

	html.her #splash-screen {
		background: #983724;
	}

	#logo-her {
		display: none;
	}

	#progress-background {
		display: none;
	}

	#progress-bar {
		display: none;
	}

	html.her #logo {
		display: none;
	}

	html.her #logo-her {
		display: block;
		filter: invert(1);
	}

	html.her #progress-background {
		display: block;
	}

	html.her #progress-bar {
-		display: none;
+		display: block;
	}

	@media (max-width: 24rem) {
		html.her #progress-background {
			display: none;
		}

		html.her #progress-bar {
			display: none;
		}
	}

	@keyframes pulse {
		50% {
			opacity: 0.65;
		}
	}

	.animate-pulse-fast {
		animation: pulse 1.5s cubic-bezier(0.4, 0, 0.6, 1) infinite;
	}
</style>
+</html>
@ -65,15 +65,7 @@ export const unarchiveAllChats = async (token: string) => {
	return res;
};

-export const importChat = async (
-	token: string,
-	chat: object,
-	meta: object | null,
-	pinned?: boolean,
-	folderId?: string | null,
-	createdAt: number | null = null,
-	updatedAt: number | null = null
-) => {
+export const importChats = async (token: string, chats: object[]) => {
	let error = null;

	const res = await fetch(`${WEBUI_API_BASE_URL}/chats/import`, {

@ -84,12 +76,7 @@ export const importChat = async (
			authorization: `Bearer ${token}`
		},
		body: JSON.stringify({
-			chat: chat,
-			meta: meta ?? {},
-			pinned: pinned,
-			folder_id: folderId,
-			created_at: createdAt ?? null,
-			updated_at: updatedAt ?? null
+			chats
		})
	})
		.then(async (res) => {
@ -93,6 +93,45 @@ export const getAllFeedbacks = async (token: string = '') => {
	return res;
};

export const getFeedbackItems = async (token: string = '', orderBy, direction, page) => {
	let error = null;

	const searchParams = new URLSearchParams();
	if (orderBy) searchParams.append('order_by', orderBy);
	if (direction) searchParams.append('direction', direction);
	if (page) searchParams.append('page', page.toString());

	const res = await fetch(
		`${WEBUI_API_BASE_URL}/evaluations/feedbacks/list?${searchParams.toString()}`,
		{
			method: 'GET',
			headers: {
				Accept: 'application/json',
				'Content-Type': 'application/json',
				authorization: `Bearer ${token}`
			}
		}
	)
		.then(async (res) => {
			if (!res.ok) throw await res.json();
			return res.json();
		})
		.then((json) => {
			return json;
		})
		.catch((err) => {
			error = err.detail;
			console.error(err);
			return null;
		});

	if (error) {
		throw error;
	}

	return res;
};

export const exportAllFeedbacks = async (token: string = '') => {
	let error = null;
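The new paginated endpoint assembles its query string from optional parameters. A standalone sketch of just that encoding step (the helper name is illustrative, not part of the diff):

```typescript
// Illustrative helper mirroring getFeedbackItems' query-string construction:
// each parameter is appended only when truthy, in a stable order.
export function feedbackListQuery(orderBy?: string, direction?: string, page?: number): string {
	const searchParams = new URLSearchParams();
	if (orderBy) searchParams.append('order_by', orderBy);
	if (direction) searchParams.append('direction', direction);
	if (page) searchParams.append('page', page.toString());
	return searchParams.toString();
}
```

`feedbackListQuery('updated_at', 'desc', 1)` yields `order_by=updated_at&direction=desc&page=1`; omitted parameters are simply left out of the URL, since `URLSearchParams` preserves insertion order.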
@ -239,10 +239,13 @@ export const updateFolderItemsById = async (token: string, id: string, items: Fo
	return res;
};

-export const deleteFolderById = async (token: string, id: string) => {
+export const deleteFolderById = async (token: string, id: string, deleteContents: boolean) => {
	let error = null;

-	const res = await fetch(`${WEBUI_API_BASE_URL}/folders/${id}`, {
+	const searchParams = new URLSearchParams();
+	searchParams.append('delete_contents', deleteContents ? 'true' : 'false');
+
+	const res = await fetch(`${WEBUI_API_BASE_URL}/folders/${id}?${searchParams.toString()}`, {
		method: 'DELETE',
		headers: {
			Accept: 'application/json',
@ -31,10 +31,15 @@ export const createNewGroup = async (token: string, group: object) => {
	return res;
};

-export const getGroups = async (token: string = '') => {
+export const getGroups = async (token: string = '', share?: boolean) => {
	let error = null;

-	const res = await fetch(`${WEBUI_API_BASE_URL}/groups/`, {
+	const searchParams = new URLSearchParams();
+	if (share !== undefined) {
+		searchParams.append('share', String(share));
+	}
+
+	const res = await fetch(`${WEBUI_API_BASE_URL}/groups/?${searchParams.toString()}`, {
		method: 'GET',
		headers: {
			Accept: 'application/json',
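Note the tri-state encoding in the hunk above: `share` is appended only when the caller actually passed a value, so `false` and "omitted" produce different URLs. A minimal standalone sketch (the helper name is illustrative):

```typescript
// Illustrative helper showing getGroups' optional boolean parameter encoding.
// `undefined` means "no filter"; true/false are sent explicitly.
export function groupsQuery(share?: boolean): string {
	const searchParams = new URLSearchParams();
	if (share !== undefined) {
		searchParams.append('share', String(share));
	}
	return searchParams.toString();
}
```

`groupsQuery(true)` gives `share=true`, `groupsQuery(false)` gives `share=false`, and `groupsQuery()` gives an empty string, letting the backend apply its default.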
@ -160,3 +165,73 @@ export const deleteGroupById = async (token: string, id: string) => {

	return res;
};

export const addUserToGroup = async (token: string, id: string, userIds: string[]) => {
	let error = null;

	const res = await fetch(`${WEBUI_API_BASE_URL}/groups/id/${id}/users/add`, {
		method: 'POST',
		headers: {
			Accept: 'application/json',
			'Content-Type': 'application/json',
			authorization: `Bearer ${token}`
		},
		body: JSON.stringify({
			user_ids: userIds
		})
	})
		.then(async (res) => {
			if (!res.ok) throw await res.json();
			return res.json();
		})
		.then((json) => {
			return json;
		})
		.catch((err) => {
			error = err.detail;

			console.error(err);
			return null;
		});

	if (error) {
		throw error;
	}

	return res;
};

export const removeUserFromGroup = async (token: string, id: string, userIds: string[]) => {
	let error = null;

	const res = await fetch(`${WEBUI_API_BASE_URL}/groups/id/${id}/users/remove`, {
		method: 'POST',
		headers: {
			Accept: 'application/json',
			'Content-Type': 'application/json',
			authorization: `Bearer ${token}`
		},
		body: JSON.stringify({
			user_ids: userIds
		})
	})
		.then(async (res) => {
			if (!res.ok) throw await res.json();
			return res.json();
		})
		.then((json) => {
			return json;
		})
		.catch((err) => {
			error = err.detail;

			console.error(err);
			return null;
		});

	if (error) {
		throw error;
	}

	return res;
};
@ -1401,6 +1401,33 @@ export const getChangelog = async () => {
	return res;
};

export const getVersion = async (token: string) => {
	let error = null;

	const res = await fetch(`${WEBUI_BASE_URL}/api/version`, {
		method: 'GET',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${token}`
		}
	})
		.then(async (res) => {
			if (!res.ok) throw await res.json();
			return res.json();
		})
		.catch((err) => {
			console.error(err);
			error = err;
			return null;
		});

	if (error) {
		throw error;
	}

	return res;
};

export const getVersionUpdates = async (token: string) => {
	let error = null;
@ -1,9 +1,68 @@
import { WEBUI_API_BASE_URL } from '$lib/constants';

-export const getModels = async (token: string = '') => {
+export const getModelItems = async (
+	token: string = '',
+	query,
+	viewOption,
+	selectedTag,
+	orderBy,
+	direction,
+	page
+) => {
	let error = null;

-	const res = await fetch(`${WEBUI_API_BASE_URL}/models/`, {
+	const searchParams = new URLSearchParams();
+	if (query) {
+		searchParams.append('query', query);
+	}
+	if (viewOption) {
+		searchParams.append('view_option', viewOption);
+	}
+	if (selectedTag) {
+		searchParams.append('tag', selectedTag);
+	}
+	if (orderBy) {
+		searchParams.append('order_by', orderBy);
+	}
+	if (direction) {
+		searchParams.append('direction', direction);
+	}
+	if (page) {
+		searchParams.append('page', page.toString());
+	}
+
+	const res = await fetch(`${WEBUI_API_BASE_URL}/models/list?${searchParams.toString()}`, {
		method: 'GET',
		headers: {
			Accept: 'application/json',
			'Content-Type': 'application/json',
			authorization: `Bearer ${token}`
		}
	})
		.then(async (res) => {
			if (!res.ok) throw await res.json();
			return res.json();
		})
		.then((json) => {
			return json;
		})
		.catch((err) => {
			error = err;
			console.error(err);
			return null;
		});

	if (error) {
		throw error;
	}

	return res;
};

export const getModelTags = async (token: string = '') => {
	let error = null;

	const res = await fetch(`${WEBUI_API_BASE_URL}/models/tags`, {
		method: 'GET',
		headers: {
			Accept: 'application/json',
@ -192,17 +251,14 @@ export const toggleModelById = async (token: string, id: string) => {
export const updateModelById = async (token: string, id: string, model: object) => {
	let error = null;

-	const searchParams = new URLSearchParams();
-	searchParams.append('id', id);
-
-	const res = await fetch(`${WEBUI_API_BASE_URL}/models/model/update?${searchParams.toString()}`, {
+	const res = await fetch(`${WEBUI_API_BASE_URL}/models/model/update`, {
		method: 'POST',
		headers: {
			Accept: 'application/json',
			'Content-Type': 'application/json',
			authorization: `Bearer ${token}`
		},
-		body: JSON.stringify(model)
+		body: JSON.stringify({ ...model, id })
	})
		.then(async (res) => {
			if (!res.ok) throw await res.json();
@ -228,16 +284,14 @@ export const updateModelById = async (token: string, id: string, model: object)
export const deleteModelById = async (token: string, id: string) => {
	let error = null;

-	const searchParams = new URLSearchParams();
-	searchParams.append('id', id);
-
-	const res = await fetch(`${WEBUI_API_BASE_URL}/models/model/delete?${searchParams.toString()}`, {
-		method: 'DELETE',
+	const res = await fetch(`${WEBUI_API_BASE_URL}/models/model/delete`, {
+		method: 'POST',
		headers: {
			Accept: 'application/json',
			'Content-Type': 'application/json',
			authorization: `Bearer ${token}`
-		}
+		},
+		body: JSON.stringify({ id })
	})
		.then(async (res) => {
			if (!res.ok) throw await res.json();
@ -179,39 +179,3 @@ export const downloadDatabase = async (token: string) => {
		throw error;
	}
};

-export const downloadLiteLLMConfig = async (token: string) => {
-	let error = null;
-
-	const res = await fetch(`${WEBUI_API_BASE_URL}/utils/litellm/config`, {
-		method: 'GET',
-		headers: {
-			'Content-Type': 'application/json',
-			Authorization: `Bearer ${token}`
-		}
-	})
-		.then(async (response) => {
-			if (!response.ok) {
-				throw await response.json();
-			}
-			return response.blob();
-		})
-		.then((blob) => {
-			const url = window.URL.createObjectURL(blob);
-			const a = document.createElement('a');
-			a.href = url;
-			a.download = 'config.yaml';
-			document.body.appendChild(a);
-			a.click();
-			window.URL.revokeObjectURL(url);
-		})
-		.catch((err) => {
-			console.error(err);
-			error = err.detail;
-			return null;
-		});
-
-	if (error) {
-		throw error;
-	}
-};
@ -426,7 +426,7 @@
						<div class="flex-1">
							<Tooltip
								content={$i18n.t(
-									'Enter additional headers in JSON format (e.g. {{\'{{"X-Custom-Header": "value"}}\'}})'
+									'Enter additional headers in JSON format (e.g. {"X-Custom-Header": "value"}'
								)}
							>
								<Textarea
@ -22,6 +22,7 @@
	import AccessControl from './workspace/common/AccessControl.svelte';
	import Spinner from '$lib/components/common/Spinner.svelte';
	import XMark from '$lib/components/icons/XMark.svelte';
+	import Textarea from './common/Textarea.svelte';

	export let onSubmit: Function = () => {};
	export let onDelete: Function = () => {};
@ -44,6 +45,7 @@

	let auth_type = 'bearer';
	let key = '';
+	let headers = '';

	let accessControl = {};
@ -110,6 +112,20 @@
		}
	}

+	if (headers) {
+		try {
+			let _headers = JSON.parse(headers);
+			if (typeof _headers !== 'object' || Array.isArray(_headers)) {
+				_headers = null;
+				throw new Error('Headers must be a valid JSON object');
+			}
+			headers = JSON.stringify(_headers, null, 2);
+		} catch (error) {
+			toast.error($i18n.t('Headers must be a valid JSON object'));
+			return;
+		}
+	}
+
	if (direct) {
		const res = await getToolServerData(
			auth_type === 'bearer' ? key : localStorage.token,
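The validation above only admits a JSON *object* and normalizes it to pretty-printed form. A pure, standalone sketch of the same rule (the helper name is illustrative; it additionally rejects the literal `null`, which a bare `typeof` check would classify as an object):

```typescript
// Illustrative pure version of the headers validation: returns the
// pretty-printed JSON object, or null when the input is not a plain object.
export function normalizeHeaders(headers: string): string | null {
	try {
		const parsed = JSON.parse(headers);
		if (typeof parsed !== 'object' || parsed === null || Array.isArray(parsed)) {
			return null; // the component surfaces this case as a toast error
		}
		return JSON.stringify(parsed, null, 2);
	} catch {
		return null; // malformed JSON is rejected the same way
	}
}
```

Arrays, scalars, `null`, and unparseable input all map to the same rejection path, matching the single "Headers must be a valid JSON object" error message in the component.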
@ -128,6 +144,7 @@
			path,
			type,
			auth_type,
+			headers: headers ? JSON.parse(headers) : undefined,
			key,
			config: {
				enable: enable,
@ -177,6 +194,7 @@
		if (data.path) path = data.path;

		if (data.auth_type) auth_type = data.auth_type;
+		if (data.headers) headers = JSON.stringify(data.headers, null, 2);
		if (data.key) key = data.key;

		if (data.info) {
@ -210,6 +228,7 @@
			path,

			auth_type,
+			headers: headers ? JSON.parse(headers) : undefined,
			key,

			info: {
@ -256,6 +275,20 @@
			}
		}

+		if (headers) {
+			try {
+				const _headers = JSON.parse(headers);
+				if (typeof _headers !== 'object' || Array.isArray(_headers)) {
+					throw new Error('Headers must be a valid JSON object');
+				}
+				headers = JSON.stringify(_headers, null, 2);
+			} catch (error) {
+				toast.error($i18n.t('Headers must be a valid JSON object'));
+				loading = false;
+				return;
+			}
+		}
+
		const connection = {
			type,
			url,
@ -265,9 +298,12 @@
			path,

			auth_type,
+			headers: headers ? JSON.parse(headers) : undefined,

			key,
			config: {
				enable: enable,

				access_control: accessControl
			},
			info: {
@ -313,6 +349,8 @@
		path = connection?.path ?? 'openapi.json';

		auth_type = connection?.auth_type ?? 'bearer';
+		headers = connection?.headers ? JSON.stringify(connection.headers, null, 2) : '';

		key = connection?.key ?? '';

		id = connection.info?.id ?? '';
@ -657,6 +695,33 @@
			</div>

+			{#if !direct}
+				<div class="flex gap-2 mt-2">
+					<div class="flex flex-col w-full">
+						<label
+							for="headers-input"
+							class={`mb-0.5 text-xs text-gray-500
+							${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100' : ''}`}
+							>{$i18n.t('Headers')}</label
+						>
+
+						<div class="flex-1">
+							<Tooltip
+								content={$i18n.t(
+									'Enter additional headers in JSON format (e.g. {"X-Custom-Header": "value"}'
+								)}
+							>
+								<Textarea
+									className="w-full text-sm outline-hidden"
+									bind:value={headers}
+									placeholder={$i18n.t('Enter additional headers in JSON format')}
+									required={false}
+									minSize={30}
+								/>
+							</Tooltip>
+						</div>
+					</div>
+				</div>
+			{/if}
+
			<hr class=" border-gray-100 dark:border-gray-700/10 my-2.5 w-full" />

			<div class="flex gap-2">
@ -33,7 +33,9 @@
	let feedbacks = [];

	onMount(async () => {
+		// TODO: feedbacks elo rating calculation should be done in the backend; remove below line later
		feedbacks = await getAllFeedbacks(localStorage.token);

		loaded = true;

		const containerElement = document.getElementById('users-tabs-container');
@ -117,7 +119,7 @@
			{#if selectedTab === 'leaderboard'}
				<Leaderboard {feedbacks} />
			{:else if selectedTab === 'feedbacks'}
-				<Feedbacks {feedbacks} />
+				<Feedbacks />
			{/if}
		</div>
	</div>
@ -10,7 +10,7 @@
	import { onMount, getContext } from 'svelte';
	const i18n = getContext('i18n');

-	import { deleteFeedbackById, exportAllFeedbacks, getAllFeedbacks } from '$lib/apis/evaluations';
+	import { deleteFeedbackById, exportAllFeedbacks, getFeedbackItems } from '$lib/apis/evaluations';

	import Tooltip from '$lib/components/common/Tooltip.svelte';
	import Download from '$lib/components/icons/Download.svelte';
@ -23,78 +23,25 @@

	import ChevronUp from '$lib/components/icons/ChevronUp.svelte';
	import ChevronDown from '$lib/components/icons/ChevronDown.svelte';
-	import { WEBUI_BASE_URL } from '$lib/constants';
+	import { WEBUI_API_BASE_URL, WEBUI_BASE_URL } from '$lib/constants';
	import { config } from '$lib/stores';
+	import Spinner from '$lib/components/common/Spinner.svelte';

-	export let feedbacks = [];
	let page = 1;
-	$: paginatedFeedbacks = sortedFeedbacks.slice((page - 1) * 10, page * 10);
+	let items = null;
+	let total = null;

	let orderBy: string = 'updated_at';
	let direction: 'asc' | 'desc' = 'desc';

-	type Feedback = {
-		id: string;
-		data: {
-			rating: number;
-			model_id: string;
-			sibling_model_ids: string[] | null;
-			reason: string;
-			comment: string;
-			tags: string[];
-		};
-		user: {
-			name: string;
-			profile_image_url: string;
-		};
-		updated_at: number;
-	};
-
-	type ModelStats = {
-		rating: number;
-		won: number;
-		lost: number;
-	};
-
-	function setSortKey(key: string) {
+	const setSortKey = (key) => {
		if (orderBy === key) {
			direction = direction === 'asc' ? 'desc' : 'asc';
		} else {
			orderBy = key;
-			if (key === 'user' || key === 'model_id') {
-				direction = 'asc';
-			} else {
-				direction = 'desc';
-			}
+			direction = 'asc';
		}
		page = 1;
-	}
-
-	$: sortedFeedbacks = [...feedbacks].sort((a, b) => {
-		let aVal, bVal;
-
-		switch (orderBy) {
-			case 'user':
-				aVal = a.user?.name || '';
-				bVal = b.user?.name || '';
-				return direction === 'asc' ? aVal.localeCompare(bVal) : bVal.localeCompare(aVal);
-			case 'model_id':
-				aVal = a.data.model_id || '';
-				bVal = b.data.model_id || '';
-				return direction === 'asc' ? aVal.localeCompare(bVal) : bVal.localeCompare(aVal);
-			case 'rating':
-				aVal = a.data.rating;
-				bVal = b.data.rating;
-				return direction === 'asc' ? aVal - bVal : bVal - aVal;
-			case 'updated_at':
-				aVal = a.updated_at;
-				bVal = b.updated_at;
-				return direction === 'asc' ? aVal - bVal : bVal - aVal;
-			default:
-				return 0;
-		}
-	});
+	};

	let showFeedbackModal = false;
	let selectedFeedback = null;
@ -115,13 +62,41 @@
	//
	//////////////////////

+	const getFeedbacks = async () => {
+		try {
+			const res = await getFeedbackItems(localStorage.token, orderBy, direction, page).catch(
+				(error) => {
+					toast.error(`${error}`);
+					return null;
+				}
+			);
+
+			if (res) {
+				items = res.items;
+				total = res.total;
+			}
+		} catch (err) {
+			console.error(err);
+		}
+	};
+
+	$: if (page) {
+		getFeedbacks();
+	}
+
+	$: if (orderBy && direction) {
+		getFeedbacks();
+	}
+
	const deleteFeedbackHandler = async (feedbackId: string) => {
		const response = await deleteFeedbackById(localStorage.token, feedbackId).catch((err) => {
			toast.error(err);
			return null;
		});
		if (response) {
-			feedbacks = feedbacks.filter((f) => f.id !== feedbackId);
			toast.success($i18n.t('Feedback deleted successfully'));
+			page = 1;
+			getFeedbacks();
		}
	};
@ -169,256 +144,266 @@
|
|||
|
||||
<FeedbackModal bind:show={showFeedbackModal} {selectedFeedback} onClose={closeFeedbackModal} />
|
||||
|
||||
<div class="mt-0.5 mb-1 gap-1 flex flex-row justify-between">
|
||||
<div class="flex md:self-center text-lg font-medium px-0.5">
|
||||
{$i18n.t('Feedback History')}
|
||||
{#if items === null || total === null}
|
||||
<div class="my-10">
|
||||
<Spinner className="size-5" />
|
||||
</div>
|
||||
{:else}
|
||||
<div class="mt-0.5 mb-1 gap-1 flex flex-row justify-between">
|
||||
<div class="flex items-center md:self-center text-xl font-medium px-0.5 gap-2 shrink-0">
|
||||
<div>
|
||||
{$i18n.t('Feedback History')}
|
||||
</div>
|
||||
|
||||
<div class="flex self-center w-[1px] h-6 mx-2.5 bg-gray-50 dark:bg-gray-850" />
|
||||
<div class="text-lg font-medium text-gray-500 dark:text-gray-500">
|
||||
{total}
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<span class="text-lg font-medium text-gray-500 dark:text-gray-300">{feedbacks.length}</span>
|
||||
{#if total > 0}
|
||||
<div>
|
||||
<Tooltip content={$i18n.t('Export')}>
|
||||
<button
|
||||
class=" p-2 rounded-xl hover:bg-gray-100 dark:bg-gray-900 dark:hover:bg-gray-850 transition font-medium text-sm flex items-center space-x-1"
|
||||
on:click={() => {
|
||||
exportHandler();
|
||||
}}
|
||||
>
|
||||
<Download className="size-3" />
|
||||
</button>
|
||||
</Tooltip>
|
||||
</div>
|
||||
{/if}
|
||||
</div>
|
||||
|
||||
{#if feedbacks.length > 0}
|
||||
<div>
|
||||
<Tooltip content={$i18n.t('Export')}>
|
||||
<button
|
||||
class=" p-2 rounded-xl hover:bg-gray-100 dark:bg-gray-900 dark:hover:bg-gray-850 transition font-medium text-sm flex items-center space-x-1"
|
||||
on:click={() => {
|
||||
exportHandler();
|
||||
}}
|
||||
>
|
||||
<Download className="size-3" />
|
||||
</button>
|
||||
</Tooltip>
|
||||
</div>
|
||||
{/if}
|
||||
</div>
|
||||
|
||||
<div class="scrollbar-hidden relative whitespace-nowrap overflow-x-auto max-w-full">
{#if (feedbacks ?? []).length === 0}
<div class="text-center text-xs text-gray-500 dark:text-gray-400 py-1">
{$i18n.t('No feedbacks found')}
</div>
{:else}
<table class="w-full text-sm text-left text-gray-500 dark:text-gray-400 table-auto max-w-full">
<thead class="text-xs text-gray-800 uppercase bg-transparent dark:text-gray-200">
<tr class=" border-b-[1.5px] border-gray-50 dark:border-gray-850">
<th
scope="col"
class="px-2.5 py-2 cursor-pointer select-none w-3"
on:click={() => setSortKey('user')}
>
<div class="flex gap-1.5 items-center justify-end">
{$i18n.t('User')}
{#if orderBy === 'user'}
<span class="font-normal">
{#if direction === 'asc'}
<div class="scrollbar-hidden relative whitespace-nowrap overflow-x-auto max-w-full">
{#if (items ?? []).length === 0}
<div class="text-center text-xs text-gray-500 dark:text-gray-400 py-1">
{$i18n.t('No feedbacks found')}
</div>
{:else}
<table
class="w-full text-sm text-left text-gray-500 dark:text-gray-400 table-auto max-w-full"
>
<thead class="text-xs text-gray-800 uppercase bg-transparent dark:text-gray-200">
<tr class=" border-b-[1.5px] border-gray-50 dark:border-gray-850">
<th
scope="col"
class="px-2.5 py-2 cursor-pointer select-none w-3"
on:click={() => setSortKey('user')}
>
<div class="flex gap-1.5 items-center justify-end">
{$i18n.t('User')}
{#if orderBy === 'user'}
<span class="font-normal">
{#if direction === 'asc'}
<ChevronUp className="size-2" />
{:else}
<ChevronDown className="size-2" />
{/if}
</span>
{:else}
<span class="invisible">
<ChevronUp className="size-2" />
{:else}
<ChevronDown className="size-2" />
{/if}
</span>
{:else}
<span class="invisible">
<ChevronUp className="size-2" />
</span>
{/if}
</div>
</th>

<th
scope="col"
class="px-2.5 py-2 cursor-pointer select-none"
on:click={() => setSortKey('model_id')}
>
<div class="flex gap-1.5 items-center">
{$i18n.t('Models')}
{#if orderBy === 'model_id'}
<span class="font-normal">
{#if direction === 'asc'}
<ChevronUp className="size-2" />
{:else}
<ChevronDown className="size-2" />
{/if}
</span>
{:else}
<span class="invisible">
<ChevronUp className="size-2" />
</span>
{/if}
</div>
</th>

<th
scope="col"
class="px-2.5 py-2 text-right cursor-pointer select-none w-fit"
on:click={() => setSortKey('rating')}
>
<div class="flex gap-1.5 items-center justify-end">
{$i18n.t('Result')}
{#if orderBy === 'rating'}
<span class="font-normal">
{#if direction === 'asc'}
<ChevronUp className="size-2" />
{:else}
<ChevronDown className="size-2" />
{/if}
</span>
{:else}
<span class="invisible">
<ChevronUp className="size-2" />
</span>
{/if}
</div>
</th>

<th
scope="col"
class="px-2.5 py-2 text-right cursor-pointer select-none w-0"
on:click={() => setSortKey('updated_at')}
>
<div class="flex gap-1.5 items-center justify-end">
{$i18n.t('Updated At')}
{#if orderBy === 'updated_at'}
<span class="font-normal">
{#if direction === 'asc'}
<ChevronUp className="size-2" />
{:else}
<ChevronDown className="size-2" />
{/if}
</span>
{:else}
<span class="invisible">
<ChevronUp className="size-2" />
</span>
{/if}
</div>
</th>

<th scope="col" class="px-2.5 py-2 text-right cursor-pointer select-none w-0"> </th>
</tr>
</thead>
<tbody class="">
{#each paginatedFeedbacks as feedback (feedback.id)}
<tr
class="bg-white dark:bg-gray-900 dark:border-gray-850 text-xs cursor-pointer hover:bg-gray-50 dark:hover:bg-gray-850/50 transition"
on:click={() => openFeedbackModal(feedback)}
>
<td class=" py-0.5 text-right font-semibold">
<div class="flex justify-center">
<Tooltip content={feedback?.user?.name}>
<div class="shrink-0">
<img
src={feedback?.user?.profile_image_url ?? `${WEBUI_BASE_URL}/user.png`}
alt={feedback?.user?.name}
class="size-5 rounded-full object-cover shrink-0"
/>
</div>
</Tooltip>
</span>
{/if}
</div>
</td>
</th>

<td class=" py-1 pl-3 flex flex-col">
<div class="flex flex-col items-start gap-0.5 h-full">
<div class="flex flex-col h-full">
{#if feedback.data?.sibling_model_ids}
<div class="font-semibold text-gray-600 dark:text-gray-400 flex-1">
{feedback.data?.model_id}
</div>

<Tooltip content={feedback.data.sibling_model_ids.join(', ')}>
<div class=" text-[0.65rem] text-gray-600 dark:text-gray-400 line-clamp-1">
{#if feedback.data.sibling_model_ids.length > 2}
<!-- {$i18n.t('and {{COUNT}} more')} -->
{feedback.data.sibling_model_ids.slice(0, 2).join(', ')}, {$i18n.t(
'and {{COUNT}} more',
{ COUNT: feedback.data.sibling_model_ids.length - 2 }
)}
{:else}
{feedback.data.sibling_model_ids.join(', ')}
{/if}
</div>
</Tooltip>
{:else}
<div
class=" text-sm font-medium text-gray-600 dark:text-gray-400 flex-1 py-1.5"
>
{feedback.data?.model_id}
</div>
{/if}
</div>
<th
scope="col"
class="px-2.5 py-2 cursor-pointer select-none"
on:click={() => setSortKey('model_id')}
>
<div class="flex gap-1.5 items-center">
{$i18n.t('Models')}
{#if orderBy === 'model_id'}
<span class="font-normal">
{#if direction === 'asc'}
<ChevronUp className="size-2" />
{:else}
<ChevronDown className="size-2" />
{/if}
</span>
{:else}
<span class="invisible">
<ChevronUp className="size-2" />
</span>
{/if}
</div>
</td>
</th>

{#if feedback?.data?.rating}
<td class="px-3 py-1 text-right font-medium text-gray-900 dark:text-white w-max">
<div class=" flex justify-end">
{#if feedback?.data?.rating.toString() === '1'}
<Badge type="info" content={$i18n.t('Won')} />
{:else if feedback?.data?.rating.toString() === '0'}
<Badge type="muted" content={$i18n.t('Draw')} />
{:else if feedback?.data?.rating.toString() === '-1'}
<Badge type="error" content={$i18n.t('Lost')} />
{/if}
<th
scope="col"
class="px-2.5 py-2 text-right cursor-pointer select-none w-fit"
on:click={() => setSortKey('rating')}
>
<div class="flex gap-1.5 items-center justify-end">
{$i18n.t('Result')}
{#if orderBy === 'rating'}
<span class="font-normal">
{#if direction === 'asc'}
<ChevronUp className="size-2" />
{:else}
<ChevronDown className="size-2" />
{/if}
</span>
{:else}
<span class="invisible">
<ChevronUp className="size-2" />
</span>
{/if}
</div>
</th>

<th
scope="col"
class="px-2.5 py-2 text-right cursor-pointer select-none w-0"
on:click={() => setSortKey('updated_at')}
>
<div class="flex gap-1.5 items-center justify-end">
{$i18n.t('Updated At')}
{#if orderBy === 'updated_at'}
<span class="font-normal">
{#if direction === 'asc'}
<ChevronUp className="size-2" />
{:else}
<ChevronDown className="size-2" />
{/if}
</span>
{:else}
<span class="invisible">
<ChevronUp className="size-2" />
</span>
{/if}
</div>
</th>

<th scope="col" class="px-2.5 py-2 text-right cursor-pointer select-none w-0"> </th>
</tr>
</thead>
<tbody class="">
{#each items as feedback (feedback.id)}
<tr
class="bg-white dark:bg-gray-900 dark:border-gray-850 text-xs cursor-pointer hover:bg-gray-50 dark:hover:bg-gray-850/50 transition"
on:click={() => openFeedbackModal(feedback)}
>
<td class=" py-0.5 text-right font-medium">
<div class="flex justify-center">
<Tooltip content={feedback?.user?.name}>
<div class="shrink-0">
<img
src={`${WEBUI_API_BASE_URL}/users/${feedback.user.id}/profile/image`}
alt={feedback?.user?.name}
class="size-5 rounded-full object-cover shrink-0"
/>
</div>
</Tooltip>
</div>
</td>
{/if}

<td class=" px-3 py-1 text-right font-medium">
{dayjs(feedback.updated_at * 1000).fromNow()}
</td>
<td class=" py-1 pl-3 flex flex-col">
<div class="flex flex-col items-start gap-0.5 h-full">
<div class="flex flex-col h-full">
{#if feedback.data?.sibling_model_ids}
<div class="font-medium text-gray-600 dark:text-gray-400 flex-1">
{feedback.data?.model_id}
</div>

<td class=" px-3 py-1 text-right font-semibold" on:click={(e) => e.stopPropagation()}>
<FeedbackMenu
on:delete={(e) => {
deleteFeedbackHandler(feedback.id);
}}
>
<button
class="self-center w-fit text-sm p-1.5 dark:text-gray-300 dark:hover:text-white hover:bg-black/5 dark:hover:bg-white/5 rounded-xl"
<Tooltip content={feedback.data.sibling_model_ids.join(', ')}>
<div class=" text-[0.65rem] text-gray-600 dark:text-gray-400 line-clamp-1">
{#if feedback.data.sibling_model_ids.length > 2}
<!-- {$i18n.t('and {{COUNT}} more')} -->
{feedback.data.sibling_model_ids.slice(0, 2).join(', ')}, {$i18n.t(
'and {{COUNT}} more',
{ COUNT: feedback.data.sibling_model_ids.length - 2 }
)}
{:else}
{feedback.data.sibling_model_ids.join(', ')}
{/if}
</div>
</Tooltip>
{:else}
<div
class=" text-sm font-medium text-gray-600 dark:text-gray-400 flex-1 py-1.5"
>
{feedback.data?.model_id}
</div>
{/if}
</div>
</div>
</td>

{#if feedback?.data?.rating}
<td class="px-3 py-1 text-right font-medium text-gray-900 dark:text-white w-max">
<div class=" flex justify-end">
{#if feedback?.data?.rating.toString() === '1'}
<Badge type="info" content={$i18n.t('Won')} />
{:else if feedback?.data?.rating.toString() === '0'}
<Badge type="muted" content={$i18n.t('Draw')} />
{:else if feedback?.data?.rating.toString() === '-1'}
<Badge type="error" content={$i18n.t('Lost')} />
{/if}
</div>
</td>
{/if}

<td class=" px-3 py-1 text-right font-medium">
{dayjs(feedback.updated_at * 1000).fromNow()}
</td>

<td class=" px-3 py-1 text-right font-medium" on:click={(e) => e.stopPropagation()}>
<FeedbackMenu
on:delete={(e) => {
deleteFeedbackHandler(feedback.id);
}}
>
<EllipsisHorizontal />
</button>
</FeedbackMenu>
</td>
</tr>
{/each}
</tbody>
</table>
{/if}
</div>

{#if feedbacks.length > 0 && $config?.features?.enable_community_sharing}
<div class=" flex flex-col justify-end w-full text-right gap-1">
<div class="line-clamp-1 text-gray-500 text-xs">
{$i18n.t('Help us create the best community leaderboard by sharing your feedback history!')}
</div>

<div class="flex space-x-1 ml-auto">
<Tooltip
content={$i18n.t(
'To protect your privacy, only ratings, model IDs, tags, and metadata are shared from your feedback—your chat logs remain private and are not included.'
)}
>
<button
class="flex text-xs items-center px-3 py-1.5 rounded-xl bg-gray-50 hover:bg-gray-100 dark:bg-gray-850 dark:hover:bg-gray-800 dark:text-gray-200 transition"
on:click={async () => {
shareHandler();
}}
>
<div class=" self-center mr-2 font-medium line-clamp-1">
{$i18n.t('Share to Open WebUI Community')}
</div>

<div class=" self-center">
<CloudArrowUp className="size-3" strokeWidth="3" />
</div>
</button>
</Tooltip>
</div>
<button
class="self-center w-fit text-sm p-1.5 dark:text-gray-300 dark:hover:text-white hover:bg-black/5 dark:hover:bg-white/5 rounded-xl"
>
<EllipsisHorizontal />
</button>
</FeedbackMenu>
</td>
</tr>
{/each}
</tbody>
</table>
{/if}
</div>
{/if}

{#if feedbacks.length > 10}
<Pagination bind:page count={feedbacks.length} perPage={10} />
{#if total > 0 && $config?.features?.enable_community_sharing}
<div class=" flex flex-col justify-end w-full text-right gap-1">
<div class="line-clamp-1 text-gray-500 text-xs">
{$i18n.t('Help us create the best community leaderboard by sharing your feedback history!')}
</div>

<div class="flex space-x-1 ml-auto">
<Tooltip
content={$i18n.t(
'To protect your privacy, only ratings, model IDs, tags, and metadata are shared from your feedback—your chat logs remain private and are not included.'
)}
>
<button
class="flex text-xs items-center px-3 py-1.5 rounded-xl bg-gray-50 hover:bg-gray-100 dark:bg-gray-850 dark:hover:bg-gray-800 dark:text-gray-200 transition"
on:click={async () => {
shareHandler();
}}
>
<div class=" self-center mr-2 font-medium line-clamp-1">
{$i18n.t('Share to Open WebUI Community')}
</div>

<div class=" self-center">
<CloudArrowUp className="size-3" strokeWidth="3" />
</div>
</button>
</Tooltip>
</div>
</div>
{/if}

{#if total > 30}
<Pagination bind:page count={total} perPage={30} />
{/if}
{/if}

@@ -10,7 +10,7 @@
import ChevronUp from '$lib/components/icons/ChevronUp.svelte';
import ChevronDown from '$lib/components/icons/ChevronDown.svelte';
import { WEBUI_BASE_URL } from '$lib/constants';
import { WEBUI_API_BASE_URL, WEBUI_BASE_URL } from '$lib/constants';

const i18n = getContext('i18n');

@@ -339,16 +339,14 @@
<div
class="pt-0.5 pb-1 gap-1 flex flex-col md:flex-row justify-between sticky top-0 z-10 bg-white dark:bg-gray-900"
>
<div class="flex md:self-center text-lg font-medium px-0.5 shrink-0 items-center">
<div class=" gap-1">
<div class="flex items-center md:self-center text-xl font-medium px-0.5 gap-2 shrink-0">
<div>
{$i18n.t('Leaderboard')}
</div>

<div class="flex self-center w-[1px] h-6 mx-2.5 bg-gray-50 dark:bg-gray-850" />

<span class="text-lg font-medium text-gray-500 dark:text-gray-300 mr-1.5"
>{rankedModels.length}</span
>
<div class="text-lg font-medium text-gray-500 dark:text-gray-500">
{rankedModels.length}
</div>
</div>

<div class=" flex space-x-2">

@@ -517,7 +515,7 @@
<div class="flex items-center gap-2">
<div class="shrink-0">
<img
src={model?.info?.meta?.profile_image_url ?? `${WEBUI_BASE_URL}/favicon.png`}
src={`${WEBUI_API_BASE_URL}/models/model/profile/image?id=${model.id}`}
alt={model.name}
class="size-5 rounded-full object-cover shrink-0"
/>

@@ -532,7 +530,7 @@
{model.rating}
</td>

<td class=" px-3 py-1.5 text-right font-semibold text-green-500">
<td class=" px-3 py-1.5 text-right font-medium text-green-500">
<div class=" w-10">
{#if model.stats.won === '-'}
-

@@ -545,7 +543,7 @@
</div>
</td>

<td class="px-3 py-1.5 text-right font-semibold text-red-500">
<td class="px-3 py-1.5 text-right font-medium text-red-500">
<div class=" w-10">
{#if model.stats.lost === '-'}
-

@@ -158,6 +158,7 @@
if (res) {
toast.success($i18n.t('Function deleted successfully'));
functions = functions.filter((f) => f.id !== func.id);

_functions.set(await getFunctions(localStorage.token));
models.set(

@@ -50,6 +50,9 @@
let STT_AZURE_BASE_URL = '';
let STT_AZURE_MAX_SPEAKERS = '';
let STT_DEEPGRAM_API_KEY = '';
let STT_MISTRAL_API_KEY = '';
let STT_MISTRAL_API_BASE_URL = '';
let STT_MISTRAL_USE_CHAT_COMPLETIONS = false;

let STT_WHISPER_MODEL_LOADING = false;

@@ -135,7 +138,10 @@
AZURE_REGION: STT_AZURE_REGION,
AZURE_LOCALES: STT_AZURE_LOCALES,
AZURE_BASE_URL: STT_AZURE_BASE_URL,
AZURE_MAX_SPEAKERS: STT_AZURE_MAX_SPEAKERS
AZURE_MAX_SPEAKERS: STT_AZURE_MAX_SPEAKERS,
MISTRAL_API_KEY: STT_MISTRAL_API_KEY,
MISTRAL_API_BASE_URL: STT_MISTRAL_API_BASE_URL,
MISTRAL_USE_CHAT_COMPLETIONS: STT_MISTRAL_USE_CHAT_COMPLETIONS
}
});

@@ -184,6 +190,9 @@
STT_AZURE_BASE_URL = res.stt.AZURE_BASE_URL;
STT_AZURE_MAX_SPEAKERS = res.stt.AZURE_MAX_SPEAKERS;
STT_DEEPGRAM_API_KEY = res.stt.DEEPGRAM_API_KEY;
STT_MISTRAL_API_KEY = res.stt.MISTRAL_API_KEY;
STT_MISTRAL_API_BASE_URL = res.stt.MISTRAL_API_BASE_URL;
STT_MISTRAL_USE_CHAT_COMPLETIONS = res.stt.MISTRAL_USE_CHAT_COMPLETIONS;
}

await getVoices();

@@ -201,7 +210,7 @@
<div class=" space-y-3 overflow-y-scroll scrollbar-hidden h-full">
<div class="flex flex-col gap-3">
<div>
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Speech-to-Text')}</div>
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Speech-to-Text')}</div>

<hr class=" border-gray-100 dark:border-gray-850 my-2" />

@@ -235,6 +244,7 @@
<option value="web">{$i18n.t('Web API')}</option>
<option value="deepgram">{$i18n.t('Deepgram')}</option>
<option value="azure">{$i18n.t('Azure AI Speech')}</option>
<option value="mistral">{$i18n.t('MistralAI')}</option>
</select>
</div>
</div>

@@ -367,6 +377,67 @@
</div>
</div>
</div>
{:else if STT_ENGINE === 'mistral'}
<div>
<div class="mt-1 flex gap-2 mb-1">
<input
class="flex-1 w-full bg-transparent outline-hidden"
placeholder={$i18n.t('API Base URL')}
bind:value={STT_MISTRAL_API_BASE_URL}
required
/>

<SensitiveInput placeholder={$i18n.t('API Key')} bind:value={STT_MISTRAL_API_KEY} />
</div>
</div>

<hr class="border-gray-100 dark:border-gray-850 my-2" />

<div>
<div class=" mb-1.5 text-xs font-medium">{$i18n.t('STT Model')}</div>
<div class="flex w-full">
<div class="flex-1">
<input
class="w-full rounded-lg py-2 px-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden"
bind:value={STT_MODEL}
placeholder="voxtral-mini-latest"
/>
</div>
</div>
<div class="mt-2 mb-1 text-xs text-gray-400 dark:text-gray-500">
{$i18n.t('Leave empty to use the default model (voxtral-mini-latest).')}
<a
class=" hover:underline dark:text-gray-200 text-gray-800"
href="https://docs.mistral.ai/capabilities/audio_transcription"
target="_blank"
>
{$i18n.t('Learn more about Voxtral transcription.')}
</a>
</div>
</div>

<hr class="border-gray-100 dark:border-gray-850 my-2" />

<div>
<div class="flex items-center justify-between mb-2">
<div class="text-xs font-medium">{$i18n.t('Use Chat Completions API')}</div>
<label class="relative inline-flex items-center cursor-pointer">
<input
type="checkbox"
bind:checked={STT_MISTRAL_USE_CHAT_COMPLETIONS}
class="sr-only peer"
/>
<div
class="w-9 h-5 bg-gray-200 peer-focus:outline-none peer-focus:ring-2 peer-focus:ring-blue-300 dark:peer-focus:ring-blue-800 rounded-full peer dark:bg-gray-700 peer-checked:after:translate-x-full peer-checked:after:border-white after:content-[''] after:absolute after:top-[2px] after:left-[2px] after:bg-white after:border-gray-300 after:border after:rounded-full after:h-4 after:w-4 after:transition-all dark:border-gray-600 peer-checked:bg-blue-600"
></div>
</label>
</div>
<div class="text-xs text-gray-400 dark:text-gray-500">
{$i18n.t(
'Use /v1/chat/completions endpoint instead of /v1/audio/transcriptions for potentially better accuracy.'
)}
</div>
</div>
{:else if STT_ENGINE === ''}
<div>
<div class=" mb-1.5 text-xs font-medium">{$i18n.t('STT Model')}</div>

@@ -427,7 +498,7 @@
</div>

<div>
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Text-to-Speech')}</div>
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Text-to-Speech')}</div>

<hr class=" border-gray-100 dark:border-gray-850 my-2" />

@@ -477,12 +548,7 @@
{:else if TTS_ENGINE === 'elevenlabs'}
<div>
<div class="mt-1 flex gap-2 mb-1">
<input
class="flex-1 w-full rounded-lg py-2 pl-4 text-sm bg-gray-50 dark:text-gray-300 dark:bg-gray-850 outline-hidden"
placeholder={$i18n.t('API Key')}
bind:value={TTS_API_KEY}
required
/>
<SensitiveInput placeholder={$i18n.t('API Key')} bind:value={TTS_API_KEY} required />
</div>
</div>
{:else if TTS_ENGINE === 'azure'}

@@ -41,7 +41,7 @@
{#if config}
<div>
<div class="mb-3.5">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>

<hr class=" border-gray-100 dark:border-gray-850 my-2" />

@@ -164,7 +164,7 @@
</div>

<div class="mb-3.5">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Code Interpreter')}</div>
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Code Interpreter')}</div>

<hr class=" border-gray-100 dark:border-gray-850 my-2" />

@@ -219,7 +219,7 @@
<div class=" overflow-y-scroll scrollbar-hidden h-full">
{#if ENABLE_OPENAI_API !== null && ENABLE_OLLAMA_API !== null && connectionsConfig !== null}
<div class="mb-3.5">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>

<hr class=" border-gray-100 dark:border-gray-850 my-2" />

@@ -2,7 +2,7 @@
import fileSaver from 'file-saver';
const { saveAs } = fileSaver;

import { downloadDatabase, downloadLiteLLMConfig } from '$lib/apis/utils';
import { downloadDatabase } from '$lib/apis/utils';
import { onMount, getContext } from 'svelte';
import { config, user } from '$lib/stores';
import { toast } from 'svelte-sonner';

@@ -212,6 +212,27 @@
await embeddingModelUpdateHandler();
}

if (RAGConfig.DOCLING_PARAMS) {
try {
JSON.parse(RAGConfig.DOCLING_PARAMS);
} catch (e) {
toast.error(
$i18n.t('Invalid JSON format in {{NAME}}', {
NAME: $i18n.t('Docling Parameters')
})
);
return;
}
}
if (RAGConfig.MINERU_PARAMS) {
try {
JSON.parse(RAGConfig.MINERU_PARAMS);
} catch (e) {
toast.error($i18n.t('Invalid JSON format in MinerU Parameters'));
return;
}
}

const res = await updateRAGConfig(localStorage.token, {
...RAGConfig,
ALLOWED_FILE_EXTENSIONS: RAGConfig.ALLOWED_FILE_EXTENSIONS.split(',')

@@ -220,7 +241,17 @@
DOCLING_PICTURE_DESCRIPTION_LOCAL: JSON.parse(
RAGConfig.DOCLING_PICTURE_DESCRIPTION_LOCAL || '{}'
),
DOCLING_PICTURE_DESCRIPTION_API: JSON.parse(RAGConfig.DOCLING_PICTURE_DESCRIPTION_API || '{}')
DOCLING_PICTURE_DESCRIPTION_API: JSON.parse(
RAGConfig.DOCLING_PICTURE_DESCRIPTION_API || '{}'
),
DOCLING_PARAMS:
typeof RAGConfig.DOCLING_PARAMS === 'string' && RAGConfig.DOCLING_PARAMS.trim() !== ''
? JSON.parse(RAGConfig.DOCLING_PARAMS)
: {},
MINERU_PARAMS:
typeof RAGConfig.MINERU_PARAMS === 'string' && RAGConfig.MINERU_PARAMS.trim() !== ''
? JSON.parse(RAGConfig.MINERU_PARAMS)
: {}
});
dispatch('save');
};

@@ -260,6 +291,15 @@
null,
2
);
config.DOCLING_PARAMS =
typeof config.DOCLING_PARAMS === 'object'
? JSON.stringify(config.DOCLING_PARAMS ?? {}, null, 2)
: config.DOCLING_PARAMS;

config.MINERU_PARAMS =
typeof config.MINERU_PARAMS === 'object'
? JSON.stringify(config.MINERU_PARAMS ?? {}, null, 2)
: config.MINERU_PARAMS;

RAGConfig = config;
});

@@ -317,7 +357,7 @@
<div class=" space-y-2.5 overflow-y-scroll scrollbar-hidden h-full pr-1.5">
<div class="">
<div class="mb-3">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>

<hr class=" border-gray-100 dark:border-gray-850 my-2" />

@@ -717,18 +757,18 @@
{/if}
{/if}

<div class="flex justify-between w-full mt-2">
<div class="self-center text-xs font-medium">
<Tooltip content={''} placement="top-start">
<div class="flex flex-col gap-2 mt-2">
<div class=" flex flex-col w-full justify-between">
<div class=" mb-1 text-xs font-medium">
{$i18n.t('Parameters')}
</Tooltip>
</div>
<div class="">
<Textarea
bind:value={RAGConfig.DOCLING_PARAMS}
placeholder={$i18n.t('Enter additional parameters in JSON format')}
minSize={100}
/>
</div>
<div class="flex w-full items-center relative">
<Textarea
bind:value={RAGConfig.DOCLING_PARAMS}
placeholder={$i18n.t('Enter additional parameters in JSON format')}
minSize={100}
/>
</div>
</div>
</div>
{:else if RAGConfig.CONTENT_EXTRACTION_ENGINE === 'document_intelligence'}

@@ -746,6 +786,11 @@
</div>
{:else if RAGConfig.CONTENT_EXTRACTION_ENGINE === 'mistral_ocr'}
<div class="my-0.5 flex gap-2 pr-2">
<input
class="flex-1 w-full text-sm bg-transparent outline-hidden"
placeholder={$i18n.t('Enter Mistral API Base URL')}
bind:value={RAGConfig.MISTRAL_OCR_API_BASE_URL}
/>
<SensitiveInput
placeholder={$i18n.t('Enter Mistral API Key')}
bind:value={RAGConfig.MISTRAL_OCR_API_KEY}

@@ -798,12 +843,13 @@
<SensitiveInput
placeholder={$i18n.t('Enter MinerU API Key')}
bind:value={RAGConfig.MINERU_API_KEY}
required={false}
/>
</div>

<!-- Parameters -->
<div class="flex justify-between w-full mt-2">
<div class="self-center text-xs font-medium">
<div class="flex flex-col justify-between w-full mt-2">
<div class="text-xs font-medium">
<Tooltip
content={$i18n.t(
'Advanced parameters for MinerU parsing (enable_ocr, enable_formula, enable_table, language, model_version, page_ranges)'

@@ -813,22 +859,9 @@
{$i18n.t('Parameters')}
</Tooltip>
</div>
<div class="">
<div class="mt-1.5">
<Textarea
value={typeof RAGConfig.MINERU_PARAMS === 'object' &&
RAGConfig.MINERU_PARAMS !== null &&
Object.keys(RAGConfig.MINERU_PARAMS).length > 0
? JSON.stringify(RAGConfig.MINERU_PARAMS, null, 2)
: ''}
on:input={(e) => {
try {
const value = e.target.value.trim();
RAGConfig.MINERU_PARAMS = value ? JSON.parse(value) : {};
} catch (err) {
// Keep the string value if JSON is invalid (user is still typing)
RAGConfig.MINERU_PARAMS = e.target.value;
}
}}
bind:value={RAGConfig.MINERU_PARAMS}
placeholder={`{\n "enable_ocr": false,\n "enable_formula": true,\n "enable_table": true,\n "language": "en",\n "model_version": "pipeline",\n "page_ranges": ""\n}`}
minSize={100}
/>

@@ -914,7 +947,7 @@

{#if !RAGConfig.BYPASS_EMBEDDING_AND_RETRIEVAL}
<div class="mb-3">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Embedding')}</div>
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Embedding')}</div>

<hr class=" border-gray-100 dark:border-gray-850 my-2" />

@@ -1089,7 +1122,7 @@
</div>

<div class="mb-3">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Retrieval')}</div>
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Retrieval')}</div>

<hr class=" border-gray-100 dark:border-gray-850 my-2" />

@@ -1119,6 +1152,21 @@
</div>

{#if RAGConfig.ENABLE_RAG_HYBRID_SEARCH === true}
<div class="mb-2.5 flex w-full justify-between">
<div class="self-center text-xs font-medium">
{$i18n.t('Enrich Hybrid Search Text')}
</div>
<div class="flex items-center relative">
<Tooltip
content={$i18n.t(
'Adds filenames, titles, sections, and snippets into the BM25 text to improve lexical recall.'
)}
>
<Switch bind:state={RAGConfig.ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS} />
</Tooltip>
</div>
</div>

<div class=" mb-2.5 flex flex-col w-full justify-between">
<div class="flex w-full justify-between">
<div class=" self-center text-xs font-medium">

@@ -1332,7 +1380,7 @@
{/if}

<div class="mb-3">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Files')}</div>
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Files')}</div>

<hr class=" border-gray-100 dark:border-gray-850 my-2" />

@@ -1444,7 +1492,7 @@
</div>

<div class="mb-3">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Integration')}</div>
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Integration')}</div>

<hr class=" border-gray-100 dark:border-gray-850 my-2" />

@@ -1464,7 +1512,7 @@
</div>

<div class="mb-3">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('Danger Zone')}</div>
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Danger Zone')}</div>

<hr class=" border-gray-100 dark:border-gray-850 my-2" />

@@ -104,7 +104,7 @@
{#if evaluationConfig !== null}
<div class="">
<div class="mb-3">
<div class=" mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>

<hr class=" border-gray-100 dark:border-gray-850 my-2" />

@@ -119,7 +119,7 @@

{#if evaluationConfig.ENABLE_EVALUATION_ARENA_MODELS}
|
||||
<div class="mb-3">
|
||||
<div class=" mb-2.5 text-base font-medium flex justify-between items-center">
|
||||
<div class=" mt-0.5 mb-2.5 text-base font-medium flex justify-between items-center">
|
||||
<div>
|
||||
{$i18n.t('Manage')}
|
||||
</div>
|
||||
|
|
|
|||
|
|
@@ -5,6 +5,7 @@

 import Cog6 from '$lib/components/icons/Cog6.svelte';
 import ArenaModelModal from './ArenaModelModal.svelte';
+import { WEBUI_API_BASE_URL } from '$lib/constants';
 export let model;

 let showModel = false;
@@ -27,7 +28,7 @@
 <div class="flex flex-col flex-1">
 <div class="flex gap-2.5 items-center">
 <img
-src={model.meta.profile_image_url}
+src={`${WEBUI_API_BASE_URL}/models/model/profile/image?id=${model.id}`}
 alt={model.name}
 class="size-8 rounded-full object-cover shrink-0"
 />
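The hunk above swaps the arena model avatar from an embedded `profile_image_url` in the model metadata to a server endpoint keyed by model id. A sketch of building such a URL — the endpoint path is taken from the diff, but the `encodeURIComponent` guard is an assumption added here, not something the diff shows:

```typescript
// Build the profile-image endpoint URL from the diff. encodeURIComponent is
// an added precaution for ids containing '/', '?', spaces, etc.
function modelProfileImageUrl(apiBaseUrl: string, modelId: string): string {
	return `${apiBaseUrl}/models/model/profile/image?id=${encodeURIComponent(modelId)}`;
}
```

Serving the image from an endpoint keeps large base64 blobs out of the model list payload and lets the browser cache avatars per id.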
@@ -10,6 +10,7 @@
 updateLdapConfig,
 updateLdapServer
 } from '$lib/apis/auths';
+import { getGroups } from '$lib/apis/groups';
 import SensitiveInput from '$lib/components/common/SensitiveInput.svelte';
 import Switch from '$lib/components/common/Switch.svelte';
 import Tooltip from '$lib/components/common/Tooltip.svelte';
@@ -32,6 +33,7 @@

 let adminConfig = null;
 let webhookUrl = '';
+let groups = [];

 // LDAP
 let ENABLE_LDAP = false;
@@ -104,6 +106,9 @@
 })(),
 (async () => {
 LDAP_SERVER = await getLdapServer(localStorage.token);
 })(),
+(async () => {
+groups = await getGroups(localStorage.token);
+})()
 ]);
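The hunk above appends one more async IIFE to an existing `Promise.all`, so the groups list is fetched concurrently with the other admin settings rather than sequentially. The pattern in isolation — the fetcher parameters here are stand-ins, not the real API helpers:

```typescript
// Each async IIFE starts immediately; Promise.all resolves once every
// assignment has completed, and one failing fetch rejects the whole batch.
async function loadSettings(
	fetchLdapServer: () => Promise<string>,
	fetchGroups: () => Promise<string[]>
): Promise<{ ldapServer: string; groups: string[] }> {
	let ldapServer = '';
	let groups: string[] = [];
	await Promise.all([
		(async () => {
			ldapServer = await fetchLdapServer();
		})(),
		(async () => {
			groups = await fetchGroups();
		})()
	]);
	return { ldapServer, groups };
}
```

Total wait time is roughly the slowest request instead of the sum of all of them, which matters as more settings calls accumulate on this page.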
@@ -118,11 +123,11 @@
 updateHandler();
 }}
 >
-<div class="mt-0.5 space-y-3 overflow-y-scroll scrollbar-hidden h-full">
+<div class="space-y-3 overflow-y-scroll scrollbar-hidden h-full">
 {#if adminConfig !== null}
-<div class="">
-<div class=" mb-2.5 text-base font-medium">{$i18n.t('General')}</div>
+<div class="mb-3.5">
+<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('General')}</div>

 <hr class=" border-gray-100 dark:border-gray-850 my-2" />
@@ -280,7 +285,7 @@
 </div>

 <div class="mb-3">
-<div class=" mb-2.5 text-base font-medium">{$i18n.t('Authentication')}</div>
+<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Authentication')}</div>

 <hr class=" border-gray-100 dark:border-gray-850 my-2" />
@@ -299,6 +304,22 @@
 </div>
 </div>

+<div class=" mb-2.5 flex w-full justify-between">
+<div class=" self-center text-xs font-medium">{$i18n.t('Default Group')}</div>
+<div class="flex items-center relative">
+<select
+class="dark:bg-gray-900 w-fit pr-8 rounded-sm px-2 text-xs bg-transparent outline-hidden text-right"
+bind:value={adminConfig.DEFAULT_GROUP_ID}
+placeholder={$i18n.t('Select a group')}
+>
+<option value={''}>None</option>
+{#each groups as group}
+<option value={group.id}>{group.name}</option>
+{/each}
+</select>
+</div>
+</div>
+
 <div class=" mb-2.5 flex w-full justify-between pr-2">
 <div class=" self-center text-xs font-medium">{$i18n.t('Enable New Sign Ups')}</div>
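The added block binds `adminConfig.DEFAULT_GROUP_ID` to a select over the fetched groups, using the empty string for the "None" option. A hedged sketch of resolving that stored id back to a group record — the helper and its null-for-unknown behavior are illustrative assumptions, not Open WebUI code:

```typescript
// Hypothetical lookup for a stored DEFAULT_GROUP_ID-style value.
// '' (the "None" option) and ids of since-deleted groups both yield null.
interface Group {
	id: string;
	name: string;
}

function resolveDefaultGroup(defaultGroupId: string, groups: Group[]): Group | null {
	if (!defaultGroupId) return null;
	return groups.find((g) => g.id === defaultGroupId) ?? null;
}
```

Treating an unknown id the same as "None" keeps the setting safe if the selected group is later removed.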
@@ -338,31 +359,31 @@
 </div>

 <div class="mb-2.5 flex w-full justify-between pr-2">
-<div class=" self-center text-xs font-medium">{$i18n.t('Enable API Key')}</div>
+<div class=" self-center text-xs font-medium">{$i18n.t('Enable API Keys')}</div>

-<Switch bind:state={adminConfig.ENABLE_API_KEY} />
+<Switch bind:state={adminConfig.ENABLE_API_KEYS} />
 </div>

-{#if adminConfig?.ENABLE_API_KEY}
+{#if adminConfig?.ENABLE_API_KEYS}
 <div class="mb-2.5 flex w-full justify-between pr-2">
 <div class=" self-center text-xs font-medium">
 {$i18n.t('API Key Endpoint Restrictions')}
 </div>

-<Switch bind:state={adminConfig.ENABLE_API_KEY_ENDPOINT_RESTRICTIONS} />
+<Switch bind:state={adminConfig.ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS} />
 </div>

-{#if adminConfig?.ENABLE_API_KEY_ENDPOINT_RESTRICTIONS}
-<div class=" flex w-full flex-col pr-2">
+{#if adminConfig?.ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS}
+<div class=" flex w-full flex-col pr-2 mb-2.5">
 <div class=" text-xs font-medium">
 {$i18n.t('Allowed Endpoints')}
 </div>

 <input
-class="w-full mt-1 rounded-lg text-sm dark:text-gray-300 bg-transparent outline-hidden"
+class="w-full mt-1 text-sm dark:text-gray-300 bg-transparent outline-hidden"
 type="text"
 placeholder={`e.g.) /api/v1/messages, /api/v1/channels`}
-bind:value={adminConfig.API_KEY_ALLOWED_ENDPOINTS}
+bind:value={adminConfig.API_KEYS_ALLOWED_ENDPOINTS}
 />

 <div class="mt-2 text-xs text-gray-400 dark:text-gray-500">
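The "Allowed Endpoints" input takes a comma-separated list of paths (`/api/v1/messages, /api/v1/channels` in the placeholder). A sketch of how an API-key request path could be checked against such a list — the trimming and prefix-match rules here are assumptions for illustration, not the server's documented matching behavior:

```typescript
// Parse the comma-separated setting and test a request path against it.
// Assumption: entries match exactly or as path prefixes, after trimming.
function isEndpointAllowed(allowedEndpoints: string, path: string): boolean {
	const allowed = allowedEndpoints
		.split(',')
		.map((e) => e.trim())
		.filter((e) => e.length > 0);
	if (allowed.length === 0) return false;
	return allowed.some((prefix) => path === prefix || path.startsWith(prefix + '/'));
}
```

Requiring the `/` boundary in the prefix check keeps `/api/v1/chat` from accidentally authorizing `/api/v1/chats`.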
@@ -637,7 +658,7 @@
 </div>

 <div class="mb-3">
-<div class=" mb-2.5 text-base font-medium">{$i18n.t('Features')}</div>
+<div class=" mt-0.5 mb-2.5 text-base font-medium">{$i18n.t('Features')}</div>

 <hr class=" border-gray-100 dark:border-gray-850 my-2" />