added version 0.6.38

Diego 2025-11-25 09:41:24 +01:00
commit 890acf8f88
165 changed files with 4984 additions and 3440 deletions

View file

@@ -141,6 +141,9 @@ jobs:
          platform=${{ matrix.platform }}
          echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
+      - name: Delete huge unnecessary tools folder
+        run: rm -rf /opt/hostedtoolcache
      - name: Checkout repository
        uses: actions/checkout@v5

@@ -243,6 +246,9 @@ jobs:
          platform=${{ matrix.platform }}
          echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
+      - name: Delete huge unnecessary tools folder
+        run: rm -rf /opt/hostedtoolcache
      - name: Checkout repository
        uses: actions/checkout@v5

View file

@@ -5,6 +5,104 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.6.38] - 2025-11-24
### Fixed
- 🔍 Hybrid search now works reliably again, resolving a regression introduced by recent changes.
- 🛠️ Tool server saving now handles errors gracefully, preventing failed saves from impacting the UI.
- 🔐 SSO/OIDC handling was fixed to improve login reliability and better handle edge cases.
## [0.6.37] - 2025-11-24
### Added
- 🔐 Granular sharing permissions are now available with two-tiered control separating group sharing from public sharing, allowing administrators to independently configure whether users can share workspace items with groups or make them publicly accessible, with separate permission toggles for models, knowledge bases, prompts, tools, and notes, configurable via "USER_PERMISSIONS_WORKSPACE_MODELS_ALLOW_SHARING", "USER_PERMISSIONS_WORKSPACE_MODELS_ALLOW_PUBLIC_SHARING", and corresponding environment variables for other workspace item types, while groups can now be configured to opt-out of sharing via the "Allow Group Sharing" setting. [Commit](https://github.com/open-webui/open-webui/commit/7be750bcbb40da91912a0a66b7ab791effdcc3b6), [Commit](https://github.com/open-webui/open-webui/commit/f69e37a8507d6d57382d6670641b367f3127f90a)
- 🔐 Password policy enforcement is now available with configurable validation rules, allowing administrators to require specific password complexity requirements via "ENABLE_PASSWORD_VALIDATION" and "PASSWORD_VALIDATION_REGEX_PATTERN" environment variables, with the default pattern requiring a minimum of 8 characters including uppercase, lowercase, digit, and special character (see the sketch after this list). [#17794](https://github.com/open-webui/open-webui/pull/17794)
- 🔐 Granular import and export permissions are now available for workspace items, introducing six separate permission toggles for models, prompts, and tools that are disabled by default for enhanced security. [#19242](https://github.com/open-webui/open-webui/pull/19242)
- 👥 Default group assignment is now available for new users, allowing administrators to automatically assign newly registered users to a specified group for streamlined access control to models, prompts, and tools, particularly useful for organizations with group-based model access policies. [#19325](https://github.com/open-webui/open-webui/pull/19325), [#17842](https://github.com/open-webui/open-webui/issues/17842)
- 🔒 Password-based authentication can now be fully disabled via "ENABLE_PASSWORD_AUTH" environment variable, enforcing SSO-only authentication and preventing password login fallback when SSO is configured. [#19113](https://github.com/open-webui/open-webui/pull/19113)
- 🖼️ Large stream chunk handling was implemented to support models that generate images directly in their output responses, with configurable buffer size via "CHAT_STREAM_RESPONSE_CHUNK_MAX_BUFFER_SIZE" environment variable, resolving compatibility issues with models like Gemini 2.5 Flash Image. [#18884](https://github.com/open-webui/open-webui/pull/18884), [#17626](https://github.com/open-webui/open-webui/issues/17626)
- 🖼️ Streaming response middleware now handles images in delta updates with automatic base64 conversion, enabling proper display of images from models using the "choices[0].delta.images.image_url" format such as Gemini 2.5 Flash Image Preview on OpenRouter. [#19073](https://github.com/open-webui/open-webui/pull/19073), [#19019](https://github.com/open-webui/open-webui/issues/19019)
- 📈 Model list API performance was optimized by pre-fetching user group memberships and removing profile image URLs from response payloads, significantly reducing both database queries and payload size for instances with large model lists, with profile images now served dynamically via dedicated endpoints. [#19097](https://github.com/open-webui/open-webui/pull/19097), [#18950](https://github.com/open-webui/open-webui/issues/18950)
- ⏩ Batch file processing performance was improved by reducing database queries by 67% while ensuring data consistency between vector and relational databases. [#18953](https://github.com/open-webui/open-webui/pull/18953)
- 🚀 Chat import performance was dramatically improved by replacing individual per-chat API requests with a bulk import endpoint, reducing import time by up to 95% for large chat collections and providing user feedback via toast notifications displaying the number of successfully imported chats. [#17861](https://github.com/open-webui/open-webui/pull/17861)
- ⚡ Socket event broadcasting performance was optimized by implementing user-specific rooms, significantly reducing server overhead particularly for users with multiple concurrent sessions. [#18996](https://github.com/open-webui/open-webui/pull/18996)
- 🗄️ Weaviate is now supported as a vector database option, providing an additional choice for RAG document storage alongside existing ChromaDB, Milvus, Qdrant, and OpenSearch integrations. [#14747](https://github.com/open-webui/open-webui/pull/14747)
- 🗄️ PostgreSQL pgvector now supports HNSW index types and large dimensional embeddings exceeding 2000 dimensions through automatic halfvec type selection, with configurable index methods via "PGVECTOR_INDEX_METHOD", "PGVECTOR_HNSW_M", "PGVECTOR_HNSW_EF_CONSTRUCTION", and "PGVECTOR_IVFFLAT_LISTS" environment variables. [#19158](https://github.com/open-webui/open-webui/pull/19158), [#16890](https://github.com/open-webui/open-webui/issues/16890)
- 🔍 Azure AI Search is now supported as a web search provider, enabling integration with Azure's cognitive search services via "AZURE_AI_SEARCH_API_KEY", "AZURE_AI_SEARCH_ENDPOINT", and "AZURE_AI_SEARCH_INDEX_NAME" configuration. [#19104](https://github.com/open-webui/open-webui/pull/19104)
- ⚡ External embedding generation now processes API requests in parallel instead of sequential batches, reducing document processing time by 10-50x when using OpenAI, Azure OpenAI, or Ollama embedding providers, with large PDFs now processing in seconds instead of minutes. [#19296](https://github.com/open-webui/open-webui/pull/19296)
- 💨 Base64 image conversion is now available for markdown content in chat responses, automatically uploading embedded images exceeding 1KB and replacing them with file URLs to reduce payload size and resource consumption, configurable via "REPLACE_IMAGE_URLS_IN_CHAT_RESPONSE" environment variable. [#19076](https://github.com/open-webui/open-webui/pull/19076)
- 🎨 OpenAI image generation now supports additional API parameters including quality settings for GPT Image 1, configurable via "IMAGES_OPENAI_API_PARAMS" environment variable or through the admin interface, enabling cost-effective image generation with low, medium, or high quality options. [#19228](https://github.com/open-webui/open-webui/issues/19228)
- 🖼️ Image editing can now be independently enabled or disabled via admin settings, allowing administrators to control whether sequential image prompts trigger image editing or new image generation, configurable via "ENABLE_IMAGE_EDIT" environment variable. [#19284](https://github.com/open-webui/open-webui/issues/19284)
- 🔐 SSRF protection was implemented with a configurable URL blocklist that prevents access to cloud metadata endpoints and private networks, with default protections for AWS, Google Cloud, Azure, and Alibaba Cloud metadata services, customizable via "WEB_FETCH_FILTER_LIST" environment variable. [#19201](https://github.com/open-webui/open-webui/pull/19201)
- ⚡ Workspace models page now supports server-side pagination, dramatically improving load times and usability for instances with large numbers of workspace models.
- 🔍 Hybrid search now indexes file metadata including filenames, titles, headings, sources, and snippets alongside document content, enabling keyword queries to surface documents where search terms appear only in metadata, configurable via "ENABLE_RAG_HYBRID_SEARCH_ENRICHED_TEXTS" environment variable. [#19095](https://github.com/open-webui/open-webui/pull/19095)
- 📂 Knowledge base upload page now supports folder drag-and-drop with recursive directory handling, enabling batch uploads of entire directory structures instead of requiring individual file selection. [#19320](https://github.com/open-webui/open-webui/pull/19320)
- 🤖 Model cloning is now available in admin settings, allowing administrators to quickly create workspace models based on existing base models through a "Clone" option in the model dropdown menu. [#17937](https://github.com/open-webui/open-webui/pull/17937)
- 🎨 UI scale adjustment is now available in interface settings, allowing users to increase the size of the entire interface from 1.0x to 1.5x for improved accessibility and readability, particularly beneficial for users with visual impairments. [#19186](https://github.com/open-webui/open-webui/pull/19186)
- 📌 Default pinned models can now be configured by administrators for all new users, mirroring the behavior of default models where admin-configured defaults apply only to users who haven't customized their pinned models, configurable via "DEFAULT_PINNED_MODELS" environment variable. [#19273](https://github.com/open-webui/open-webui/pull/19273)
- 🎙️ Text-to-Speech and Speech-to-Text services now receive user information headers when "ENABLE_FORWARD_USER_INFO_HEADERS" is enabled, allowing external TTS and STT providers to implement user-specific personalization, rate limiting, and usage tracking. [#19323](https://github.com/open-webui/open-webui/pull/19323), [#19312](https://github.com/open-webui/open-webui/issues/19312)
- 🎙️ Voice mode now supports custom system prompts via "VOICE_MODE_PROMPT_TEMPLATE" configuration, allowing administrators to control response style and behavior for voice interactions. [#18607](https://github.com/open-webui/open-webui/pull/18607)
- 🔧 WebSocket and Redis configuration options are now available including debug logging controls, custom ping timeout and interval settings, and arbitrary Redis connection options via "WEBSOCKET_SERVER_LOGGING", "WEBSOCKET_SERVER_ENGINEIO_LOGGING", "WEBSOCKET_SERVER_PING_TIMEOUT", "WEBSOCKET_SERVER_PING_INTERVAL", and "WEBSOCKET_REDIS_OPTIONS" environment variables. [#19091](https://github.com/open-webui/open-webui/pull/19091)
- 🔧 MCP OAuth dynamic client registration now automatically detects and uses the appropriate token endpoint authentication method from server-supported options, enabling compatibility with OAuth servers that only support "client_secret_basic" instead of "client_secret_post". [#19193](https://github.com/open-webui/open-webui/issues/19193)
- 🔧 Custom headers can now be configured for remote MCP and OpenAPI tool server connections, enabling integration with services that require additional authentication headers. [#18918](https://github.com/open-webui/open-webui/issues/18918)
- 🔍 Perplexity Search now supports custom API endpoints via "PERPLEXITY_SEARCH_API_URL" configuration and automatically forwards user information headers to enable personalized search experiences. [#19147](https://github.com/open-webui/open-webui/pull/19147)
- 🔍 User information headers can now be optionally forwarded to external web search engines when "ENABLE_FORWARD_USER_INFO_HEADERS" is enabled. [#19043](https://github.com/open-webui/open-webui/pull/19043)
- 📊 Daily active user metric is now available for monitoring, tracking unique users active since midnight UTC via the "webui.users.active.today" Prometheus gauge. [#19236](https://github.com/open-webui/open-webui/pull/19236), [#19234](https://github.com/open-webui/open-webui/issues/19234)
- 📊 Audit log file path is now configurable via "AUDIT_LOGS_FILE_PATH" environment variable, enabling storage in separate volumes or custom locations. [#19173](https://github.com/open-webui/open-webui/pull/19173)
- 🎨 Sidebar collapse states for model lists and group information are now persistent across page refreshes, remembering user preferences through browser-based storage. [#19159](https://github.com/open-webui/open-webui/issues/19159)
- 🎨 Background image display was enhanced with semi-transparent overlays for navbar and sidebar, creating a seamless and visually cohesive design across the entire interface. [#19157](https://github.com/open-webui/open-webui/issues/19157)
- 📋 Tables in chat messages now include a copy button that appears on hover, enabling quick copying of table content alongside the existing CSV export functionality. [#19162](https://github.com/open-webui/open-webui/issues/19162)
- 📝 Notes can now be created directly via the "/notes/new" URL endpoint with optional title and content query parameters, enabling faster note creation through bookmarks and shortcuts. [#19195](https://github.com/open-webui/open-webui/issues/19195)
- 🏷️ Tag suggestions are now context-aware, displaying only relevant tags when creating or editing models versus chat conversations, preventing confusion between model and chat tags. [#19135](https://github.com/open-webui/open-webui/issues/19135)
- ✍️ Prompt autocompletion is now available independently of the rich text input setting, improving accessibility to the feature. [#19150](https://github.com/open-webui/open-webui/issues/19150)
- 🔄 Various improvements were implemented across the frontend and backend to enhance performance, stability, and security.
- 🌐 Translations for Simplified Chinese, Traditional Chinese, Portuguese (Brazil), Catalan, Spanish (Spain), Finnish, Irish, Farsi, Swedish, Danish, German, Korean, and Thai were improved and expanded.
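The password policy entry above maps to a simple runtime check. A minimal sketch, assuming the documented default of at least 8 characters with uppercase, lowercase, digit, and special character; the exact regex shipped as the "PASSWORD_VALIDATION_REGEX_PATTERN" default may differ:

```python
import re

# Illustrative pattern matching the documented default policy; the actual
# shipped default regex may differ.
PASSWORD_PATTERN = r"^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[^A-Za-z0-9]).{8,}$"

def is_valid_password(password: str) -> bool:
    # re.match anchors at the start; the pattern's $ anchors the end.
    return re.match(PASSWORD_PATTERN, password) is not None

print(is_valid_password("Str0ng!Pass"))  # True
print(is_valid_password("weakpass"))     # False
```

Each requirement lives in its own lookahead group, so the policy can be relaxed by dropping a single `(?=...)` clause.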
### Fixed
- 🤖 Model update functionality now works correctly, resolving a database parameter binding error that prevented saving changes to model configurations via the Save & Update button. [#19335](https://github.com/open-webui/open-webui/issues/19335)
- 🖼️ Multiple input images for image editing and generation are now correctly passed as an array using the "image[]" parameter syntax, enabling proper multi-image reference functionality with models like GPT Image 1. [#19339](https://github.com/open-webui/open-webui/issues/19339)
- 📱 PWA installations on iOS now properly refresh after server container restarts, resolving freezing issues by automatically unregistering service workers when version or deployment changes are detected. [#19316](https://github.com/open-webui/open-webui/pull/19316)
- 🗄️ S3 Vectors collection detection now correctly handles buckets with more than 2000 indexes by using direct index lookup instead of paginated list scanning, improving performance by approximately 8x and enabling RAG queries to work reliably at scale. [#19238](https://github.com/open-webui/open-webui/pull/19238), [#19233](https://github.com/open-webui/open-webui/issues/19233)
- 📈 Feedback retrieval performance was optimized by eliminating N+1 query patterns through database joins, adding server-side pagination and sorting, significantly reducing database load for instances with large feedback datasets. [#17976](https://github.com/open-webui/open-webui/pull/17976)
- 🔍 Chat search now works correctly with PostgreSQL when chat data contains null bytes, with comprehensive sanitization preventing null bytes during data writes, cleaning existing data on read, and stripping null bytes during search queries to ensure reliable search functionality. [#15616](https://github.com/open-webui/open-webui/issues/15616)
- 🔍 Hybrid search with reranking now correctly handles attribute validation, preventing errors when collection results lack expected structure. [#19025](https://github.com/open-webui/open-webui/pull/19025), [#17046](https://github.com/open-webui/open-webui/issues/17046)
- 🔎 Reranking functionality now works correctly after recent refactoring, resolving crashes caused by incorrect function argument handling. [#19270](https://github.com/open-webui/open-webui/pull/19270)
- 🤖 Azure OpenAI models now support the "reasoning_effort" parameter, enabling proper configuration of reasoning capabilities for models like GPT-5.1 which default to no reasoning without this setting. [#19290](https://github.com/open-webui/open-webui/issues/19290)
- 🤖 Models with very long IDs can now be deleted correctly, resolving URL length limitations that previously prevented management operations on such models. [#18230](https://github.com/open-webui/open-webui/pull/18230)
- 🤖 Model-level streaming settings now correctly apply to API requests, ensuring "Stream Chat Response" toggle properly controls the streaming parameter. [#19154](https://github.com/open-webui/open-webui/issues/19154)
- 🖼️ Image editing configuration now correctly preserves independent OpenAI API endpoints and keys, preventing them from being overwritten by image generation settings. [#19003](https://github.com/open-webui/open-webui/issues/19003)
- 🎨 Gemini image edit settings now display correctly in the admin panel, fixing an incorrect configuration key reference that prevented proper rendering of edit options. [#19200](https://github.com/open-webui/open-webui/pull/19200)
- 🖌️ Image generation settings menu now loads correctly, resolving validation errors with AUTOMATIC1111 API authentication parameters. [#19187](https://github.com/open-webui/open-webui/issues/19187), [#19246](https://github.com/open-webui/open-webui/issues/19246)
- 📅 Date formatting in chat search and admin user chat search now correctly respects the "DEFAULT_LOCALE" environment variable, displaying dates according to the configured locale instead of always using MM/DD/YYYY format. [#19305](https://github.com/open-webui/open-webui/pull/19305), [#19020](https://github.com/open-webui/open-webui/issues/19020)
- 📝 RAG template query placeholder escaping logic was corrected to prevent unintended replacements of context values when query placeholders appear in retrieved content. [#19102](https://github.com/open-webui/open-webui/pull/19102), [#19101](https://github.com/open-webui/open-webui/issues/19101)
- 📄 RAG template prompt duplication was eliminated by removing redundant user query section from the default template. [#19099](https://github.com/open-webui/open-webui/pull/19099), [#19098](https://github.com/open-webui/open-webui/issues/19098)
- 📋 MinerU local mode configuration no longer incorrectly requires an API key, allowing proper use of local content extraction without external API credentials. [#19258](https://github.com/open-webui/open-webui/issues/19258)
- 📊 Excel file uploads now work correctly with the addition of the missing msoffcrypto-tool dependency, resolving import errors introduced by the unstructured package upgrade. [#19153](https://github.com/open-webui/open-webui/issues/19153)
- 📑 Docling parameters now properly handle JSON serialization, preventing exceptions and ensuring configuration changes are saved correctly. [#19072](https://github.com/open-webui/open-webui/pull/19072)
- 🛠️ UserValves configuration now correctly isolates settings per tool, preventing configuration contamination when multiple tools with UserValves are used simultaneously. [#19185](https://github.com/open-webui/open-webui/pull/19185), [#15569](https://github.com/open-webui/open-webui/issues/15569)
- 🔧 Tool selection prompt now correctly handles user messages without duplication, removing redundant query prefixes and improving prompt clarity. [#19122](https://github.com/open-webui/open-webui/pull/19122), [#19121](https://github.com/open-webui/open-webui/issues/19121)
- 📝 Notes chat feature now correctly submits messages to the completions endpoint, resolving errors that prevented AI model interactions. [#19079](https://github.com/open-webui/open-webui/pull/19079)
- 📝 Note PDF downloads now sanitize HTML content using DOMPurify before rendering, preventing potential DOM-based XSS attacks from malicious content in notes. [Commit](https://github.com/open-webui/open-webui/commit/03cc6ce8eb5c055115406e2304fbf7e3338b8dce)
- 📁 Archived chats now have their folder associations automatically removed to prevent unintended deletion when their previous folder is deleted. [#14578](https://github.com/open-webui/open-webui/issues/14578)
- 🔐 ElevenLabs API key is now properly obfuscated in the admin settings page, preventing plain text exposure of sensitive credentials. [#19262](https://github.com/open-webui/open-webui/pull/19262), [#19260](https://github.com/open-webui/open-webui/issues/19260)
- 🔧 MCP OAuth server metadata discovery now follows the correct specification order, ensuring proper authentication flow compliance. [#19244](https://github.com/open-webui/open-webui/pull/19244)
- 🔒 API key endpoint restrictions now properly enforce access controls for all endpoints including SCIM, preventing unintended access when "API_KEY_ALLOWED_ENDPOINTS" is configured. [#19168](https://github.com/open-webui/open-webui/issues/19168)
- 🔓 OAuth role claim parsing now supports both flat and nested claim structures, enabling compatibility with OAuth providers that deliver claims as direct properties on the user object rather than nested structures (a schematic sketch follows this list). [#19286](https://github.com/open-webui/open-webui/pull/19286)
- 🔑 OAuth MCP server verification now correctly extracts the access token value for authorization headers instead of sending the entire token dictionary. [#19149](https://github.com/open-webui/open-webui/pull/19149), [#19148](https://github.com/open-webui/open-webui/issues/19148)
- ⚙️ OAuth dynamic client registration now correctly converts empty strings to None for optional fields, preventing validation failures in MCP package integration. [#19144](https://github.com/open-webui/open-webui/pull/19144), [#19129](https://github.com/open-webui/open-webui/issues/19129)
- 🔐 OIDC authentication now correctly passes client credentials in access token requests, ensuring compatibility with providers that require these parameters per RFC 6749. [#19132](https://github.com/open-webui/open-webui/pull/19132), [#19131](https://github.com/open-webui/open-webui/issues/19131)
- 🔗 OAuth client creation now respects configured token endpoint authentication methods instead of defaulting to basic authentication, preventing failures with servers that don't support basic auth. [#19165](https://github.com/open-webui/open-webui/pull/19165)
- 📋 Text copied from chat responses in Chrome now pastes without background formatting, improving readability when pasting into word processors. [#19083](https://github.com/open-webui/open-webui/issues/19083)
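To make the flat-versus-nested OAuth role claim fix above concrete, here is a schematic sketch; the function name and payload shapes are hypothetical, not Open WebUI's actual implementation:

```python
def extract_roles(user_data: dict, claim_path: str = "roles") -> list:
    """Resolve a role claim that may be flat or nested (hypothetical shapes)."""
    # Flat: {"roles": ["admin", "user"]}
    if claim_path in user_data:
        return user_data.get(claim_path) or []
    # Nested dotted path, e.g. "realm_access.roles" -> {"realm_access": {"roles": [...]}}
    value = user_data
    for part in claim_path.split("."):
        if not isinstance(value, dict):
            return []
        value = value.get(part)
    return value if isinstance(value, list) else []

print(extract_roles({"roles": ["admin"]}))                                          # ['admin']
print(extract_roles({"realm_access": {"roles": ["user"]}}, "realm_access.roles"))  # ['user']
```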
### Changed
- 🗄️ Group membership data storage was refactored from JSON arrays to a dedicated relational database table, significantly improving query performance and scalability for instances with large numbers of users and groups, while API responses now return member counts instead of full user ID arrays. [#19239](https://github.com/open-webui/open-webui/pull/19239)
- 📄 MinerU parameter handling was refactored to pass parameters directly to the API, improving flexibility and fixing VLM backend configuration. [#19105](https://github.com/open-webui/open-webui/pull/19105), [#18446](https://github.com/open-webui/open-webui/discussions/18446)
- 🔐 API key creation is now controlled by granular user and group permissions, with the "ENABLE_API_KEY" environment variable renamed to "ENABLE_API_KEYS" and disabled by default, requiring explicit configuration at both the global and user permission levels, while related environment variables "ENABLE_API_KEY_ENDPOINT_RESTRICTIONS" and "API_KEY_ALLOWED_ENDPOINTS" were renamed to "ENABLE_API_KEYS_ENDPOINT_RESTRICTIONS" and "API_KEYS_ALLOWED_ENDPOINTS" respectively. [#18336](https://github.com/open-webui/open-webui/pull/18336)
## [0.6.36] - 2025-11-07
### Added

View file

@@ -31,32 +31,44 @@ For more information, be sure to check out our [Open WebUI Documentation](https:
- 🛡️ **Granular Permissions and User Groups**: By allowing administrators to create detailed user roles and permissions, we ensure a secure user environment. This granularity not only enhances security but also allows for customized user experiences, fostering a sense of ownership and responsibility amongst users.
+- 🔄 **SCIM 2.0 Support**: Enterprise-grade user and group provisioning through SCIM 2.0 protocol, enabling seamless integration with identity providers like Okta, Azure AD, and Google Workspace for automated user lifecycle management.
- 📱 **Responsive Design**: Enjoy a seamless experience across Desktop PC, Laptop, and Mobile devices.
- 📱 **Progressive Web App (PWA) for Mobile**: Enjoy a native app-like experience on your mobile device with our PWA, providing offline access on localhost and a seamless user interface.
- ✒️🔢 **Full Markdown and LaTeX Support**: Elevate your LLM experience with comprehensive Markdown and LaTeX capabilities for enriched interaction.
-- 🎤📹 **Hands-Free Voice/Video Call**: Experience seamless communication with integrated hands-free voice and video call features, allowing for a more dynamic and interactive chat environment.
+- 🎤📹 **Hands-Free Voice/Video Call**: Experience seamless communication with integrated hands-free voice and video call features using multiple Speech-to-Text providers (Local Whisper, OpenAI, Deepgram, Azure) and Text-to-Speech engines (Azure, ElevenLabs, OpenAI, Transformers, WebAPI), allowing for dynamic and interactive chat environments.
- 🛠️ **Model Builder**: Easily create Ollama models via the Web UI. Create and add custom characters/agents, customize chat elements, and import models effortlessly through [Open WebUI Community](https://openwebui.com/) integration.
- 🐍 **Native Python Function Calling Tool**: Enhance your LLMs with built-in code editor support in the tools workspace. Bring Your Own Function (BYOF) by simply adding your pure Python functions, enabling seamless integration with LLMs.
-- 📚 **Local RAG Integration**: Dive into the future of chat interactions with groundbreaking Retrieval Augmented Generation (RAG) support. This feature seamlessly integrates document interactions into your chat experience. You can load documents directly into the chat or add files to your document library, effortlessly accessing them using the `#` command before a query.
-- 🔍 **Web Search for RAG**: Perform web searches using providers like `SearXNG`, `Google PSE`, `Brave Search`, `serpstack`, `serper`, `Serply`, `DuckDuckGo`, `TavilySearch`, `SearchApi` and `Bing` and inject the results directly into your chat experience.
+- 💾 **Persistent Artifact Storage**: Built-in key-value storage API for artifacts, enabling features like journals, trackers, leaderboards, and collaborative tools with both personal and shared data scopes across sessions.
+- 📚 **Local RAG Integration**: Dive into the future of chat interactions with groundbreaking Retrieval Augmented Generation (RAG) support using your choice of 9 vector databases and multiple content extraction engines (Tika, Docling, Document Intelligence, Mistral OCR, External loaders). Load documents directly into chat or add files to your document library, effortlessly accessing them using the `#` command before a query.
+- 🔍 **Web Search for RAG**: Perform web searches using 15+ providers including `SearXNG`, `Google PSE`, `Brave Search`, `Kagi`, `Mojeek`, `Tavily`, `Perplexity`, `serpstack`, `serper`, `Serply`, `DuckDuckGo`, `SearchApi`, `SerpApi`, `Bing`, `Jina`, `Exa`, `Sougou`, `Azure AI Search`, and `Ollama Cloud`, injecting results directly into your chat experience.
- 🌐 **Web Browsing Capability**: Seamlessly integrate websites into your chat experience using the `#` command followed by a URL. This feature allows you to incorporate web content directly into your conversations, enhancing the richness and depth of your interactions.
-- 🎨 **Image Generation Integration**: Seamlessly incorporate image generation capabilities using options such as AUTOMATIC1111 API or ComfyUI (local), and OpenAI's DALL-E (external), enriching your chat experience with dynamic visual content.
+- 🎨 **Image Generation & Editing Integration**: Create and edit images using multiple engines including OpenAI's DALL-E, Gemini, ComfyUI (local), and AUTOMATIC1111 (local), with support for both generation and prompt-based editing workflows.
- ⚙️ **Many Models Conversations**: Effortlessly engage with various models simultaneously, harnessing their unique strengths for optimal responses. Enhance your experience by leveraging a diverse set of models in parallel.
- 🔐 **Role-Based Access Control (RBAC)**: Ensure secure access with restricted permissions; only authorized individuals can access your Ollama, and exclusive model creation/pulling rights are reserved for administrators.
+- 🗄️ **Flexible Database & Storage Options**: Choose from SQLite (with optional encryption), PostgreSQL, or configure cloud storage backends (S3, Google Cloud Storage, Azure Blob Storage) for scalable deployments.
+- 🔍 **Advanced Vector Database Support**: Select from 9 vector database options including ChromaDB, PGVector, Qdrant, Milvus, Elasticsearch, OpenSearch, Pinecone, S3Vector, and Oracle 23ai for optimal RAG performance.
+- 🔐 **Enterprise Authentication**: Full support for LDAP/Active Directory integration, SCIM 2.0 automated provisioning, and SSO via trusted headers alongside OAuth providers. Enterprise-grade user and group provisioning through SCIM 2.0 protocol, enabling seamless integration with identity providers like Okta, Azure AD, and Google Workspace for automated user lifecycle management.
+- ☁️ **Cloud-Native Integration**: Native support for Google Drive and OneDrive/SharePoint file picking, enabling seamless document import from enterprise cloud storage.
+- 📊 **Production Observability**: Built-in OpenTelemetry support for traces, metrics, and logs, enabling comprehensive monitoring with your existing observability stack.
+- ⚖️ **Horizontal Scalability**: Redis-backed session management and WebSocket support for multi-worker and multi-node deployments behind load balancers.
- 🌐🌍 **Multilingual Support**: Experience Open WebUI in your preferred language with our internationalization (i18n) support. Join us in expanding our supported languages! We're actively seeking contributors!
- 🧩 **Pipelines, Open WebUI Plugin Support**: Seamlessly integrate custom logic and Python libraries into Open WebUI using [Pipelines Plugin Framework](https://github.com/open-webui/pipelines). Launch your Pipelines instance, set the OpenAI URL to the Pipelines URL, and explore endless possibilities. [Examples](https://github.com/open-webui/pipelines/tree/main/examples) include **Function Calling**, User **Rate Limiting** to control access, **Usage Monitoring** with tools like Langfuse, **Live Translation with LibreTranslate** for multilingual support, **Toxic Message Filtering** and much more.

View file

@@ -620,6 +620,11 @@ OAUTH_UPDATE_PICTURE_ON_LOGIN = PersistentConfig(
    os.environ.get("OAUTH_UPDATE_PICTURE_ON_LOGIN", "False").lower() == "true",
)

+OAUTH_ACCESS_TOKEN_REQUEST_INCLUDE_CLIENT_ID = (
+    os.environ.get("OAUTH_ACCESS_TOKEN_REQUEST_INCLUDE_CLIENT_ID", "False").lower()
+    == "true"
+)
+
def load_oauth_providers():
    OAUTH_PROVIDERS.clear()
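As context for the new flag: some OAuth providers expect client credentials in the token request body (RFC 6749 §2.3.1) rather than HTTP Basic auth. A hedged sketch of how such a flag might gate the request body; the function and parameter names are illustrative, not Open WebUI's actual call site:

```python
def build_token_request_data(code: str, redirect_uri: str, client_id: str,
                             client_secret: str, include_client_id: bool) -> dict:
    # Standard authorization-code exchange fields (RFC 6749 §4.1.3).
    data = {
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
    }
    if include_client_id:
        # Some providers require credentials in the body instead of Basic auth.
        data["client_id"] = client_id
        data["client_secret"] = client_secret
    return data
```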
@@ -2539,6 +2544,12 @@ DOCLING_SERVER_URL = PersistentConfig(
    os.getenv("DOCLING_SERVER_URL", "http://docling:5001"),
)

+DOCLING_API_KEY = PersistentConfig(
+    "DOCLING_API_KEY",
+    "rag.docling_api_key",
+    os.getenv("DOCLING_API_KEY", ""),
+)
+
docling_params = os.getenv("DOCLING_PARAMS", "")
try:
    docling_params = json.loads(docling_params)

@@ -2551,88 +2562,6 @@ DOCLING_PARAMS = PersistentConfig(
    docling_params,
)

-DOCLING_DO_OCR = PersistentConfig(
-    "DOCLING_DO_OCR",
-    "rag.docling_do_ocr",
-    os.getenv("DOCLING_DO_OCR", "True").lower() == "true",
-)
-DOCLING_FORCE_OCR = PersistentConfig(
-    "DOCLING_FORCE_OCR",
-    "rag.docling_force_ocr",
-    os.getenv("DOCLING_FORCE_OCR", "False").lower() == "true",
-)
-DOCLING_OCR_ENGINE = PersistentConfig(
-    "DOCLING_OCR_ENGINE",
-    "rag.docling_ocr_engine",
-    os.getenv("DOCLING_OCR_ENGINE", "tesseract"),
-)
-DOCLING_OCR_LANG = PersistentConfig(
-    "DOCLING_OCR_LANG",
-    "rag.docling_ocr_lang",
-    os.getenv("DOCLING_OCR_LANG", "eng,fra,deu,spa"),
-)
-DOCLING_PDF_BACKEND = PersistentConfig(
-    "DOCLING_PDF_BACKEND",
-    "rag.docling_pdf_backend",
-    os.getenv("DOCLING_PDF_BACKEND", "dlparse_v4"),
-)
-DOCLING_TABLE_MODE = PersistentConfig(
-    "DOCLING_TABLE_MODE",
-    "rag.docling_table_mode",
-    os.getenv("DOCLING_TABLE_MODE", "accurate"),
-)
-DOCLING_PIPELINE = PersistentConfig(
-    "DOCLING_PIPELINE",
-    "rag.docling_pipeline",
-    os.getenv("DOCLING_PIPELINE", "standard"),
-)
-DOCLING_DO_PICTURE_DESCRIPTION = PersistentConfig(
-    "DOCLING_DO_PICTURE_DESCRIPTION",
-    "rag.docling_do_picture_description",
-    os.getenv("DOCLING_DO_PICTURE_DESCRIPTION", "False").lower() == "true",
-)
-DOCLING_PICTURE_DESCRIPTION_MODE = PersistentConfig(
-    "DOCLING_PICTURE_DESCRIPTION_MODE",
-    "rag.docling_picture_description_mode",
-    os.getenv("DOCLING_PICTURE_DESCRIPTION_MODE", ""),
-)
-docling_picture_description_local = os.getenv("DOCLING_PICTURE_DESCRIPTION_LOCAL", "")
-try:
-    docling_picture_description_local = json.loads(docling_picture_description_local)
-except json.JSONDecodeError:
-    docling_picture_description_local = {}
-DOCLING_PICTURE_DESCRIPTION_LOCAL = PersistentConfig(
-    "DOCLING_PICTURE_DESCRIPTION_LOCAL",
-    "rag.docling_picture_description_local",
-    docling_picture_description_local,
-)
-docling_picture_description_api = os.getenv("DOCLING_PICTURE_DESCRIPTION_API", "")
-try:
-    docling_picture_description_api = json.loads(docling_picture_description_api)
-except json.JSONDecodeError:
-    docling_picture_description_api = {}
-DOCLING_PICTURE_DESCRIPTION_API = PersistentConfig(
-    "DOCLING_PICTURE_DESCRIPTION_API",
-    "rag.docling_picture_description_api",
-    docling_picture_description_api,
-)
-
DOCUMENT_INTELLIGENCE_ENDPOINT = PersistentConfig(
    "DOCUMENT_INTELLIGENCE_ENDPOINT",
    "rag.document_intelligence_endpoint",

@@ -2790,6 +2719,12 @@ RAG_EMBEDDING_BATCH_SIZE = PersistentConfig(
    ),
)

+ENABLE_ASYNC_EMBEDDING = PersistentConfig(
+    "ENABLE_ASYNC_EMBEDDING",
+    "rag.enable_async_embedding",
+    os.environ.get("ENABLE_ASYNC_EMBEDDING", "True").lower() == "true",
+)
+
RAG_EMBEDDING_QUERY_PREFIX = os.environ.get("RAG_EMBEDDING_QUERY_PREFIX", None)
RAG_EMBEDDING_CONTENT_PREFIX = os.environ.get("RAG_EMBEDDING_CONTENT_PREFIX", None)
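ENABLE_ASYNC_EMBEDDING pairs with the parallel embedding work noted in the changelog. A minimal sketch of the underlying technique, with embed_batch() standing in for a real provider call (not the actual Open WebUI function):

```python
import asyncio

async def embed_batch(batch: list[str]) -> list[list[float]]:
    # Stand-in for a real embedding API request.
    await asyncio.sleep(0.1)
    return [[0.0] * 3 for _ in batch]

async def embed_all(batches: list[list[str]]) -> list:
    # All batch requests are in flight concurrently, so total latency is
    # roughly one request instead of the sum of all requests; that gap is
    # where the changelog's 10-50x speedups for large documents come from.
    return await asyncio.gather(*(embed_batch(b) for b in batches))

results = asyncio.run(embed_all([["chunk one"], ["chunk two"], ["chunk three"]]))
```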

View file

@@ -230,6 +230,7 @@ from open_webui.config import (
    RAG_RERANKING_MODEL_TRUST_REMOTE_CODE,
    RAG_EMBEDDING_ENGINE,
    RAG_EMBEDDING_BATCH_SIZE,
+    ENABLE_ASYNC_EMBEDDING,
    RAG_TOP_K,
    RAG_TOP_K_RERANKER,
    RAG_RELEVANCE_THRESHOLD,

@@ -268,18 +269,8 @@ from open_webui.config import (
    EXTERNAL_DOCUMENT_LOADER_API_KEY,
    TIKA_SERVER_URL,
    DOCLING_SERVER_URL,
+    DOCLING_API_KEY,
    DOCLING_PARAMS,
-    DOCLING_DO_OCR,
-    DOCLING_FORCE_OCR,
-    DOCLING_OCR_ENGINE,
-    DOCLING_OCR_LANG,
-    DOCLING_PDF_BACKEND,
-    DOCLING_TABLE_MODE,
-    DOCLING_PIPELINE,
-    DOCLING_DO_PICTURE_DESCRIPTION,
-    DOCLING_PICTURE_DESCRIPTION_MODE,
-    DOCLING_PICTURE_DESCRIPTION_LOCAL,
-    DOCLING_PICTURE_DESCRIPTION_API,
    DOCUMENT_INTELLIGENCE_ENDPOINT,
    DOCUMENT_INTELLIGENCE_KEY,
    MISTRAL_OCR_API_BASE_URL,

@@ -875,18 +866,8 @@ app.state.config.EXTERNAL_DOCUMENT_LOADER_URL = EXTERNAL_DOCUMENT_LOADER_URL
app.state.config.EXTERNAL_DOCUMENT_LOADER_API_KEY = EXTERNAL_DOCUMENT_LOADER_API_KEY
app.state.config.TIKA_SERVER_URL = TIKA_SERVER_URL
app.state.config.DOCLING_SERVER_URL = DOCLING_SERVER_URL
+app.state.config.DOCLING_API_KEY = DOCLING_API_KEY
app.state.config.DOCLING_PARAMS = DOCLING_PARAMS
-app.state.config.DOCLING_DO_OCR = DOCLING_DO_OCR
-app.state.config.DOCLING_FORCE_OCR = DOCLING_FORCE_OCR
-app.state.config.DOCLING_OCR_ENGINE = DOCLING_OCR_ENGINE
-app.state.config.DOCLING_OCR_LANG = DOCLING_OCR_LANG
-app.state.config.DOCLING_PDF_BACKEND = DOCLING_PDF_BACKEND
-app.state.config.DOCLING_TABLE_MODE = DOCLING_TABLE_MODE
-app.state.config.DOCLING_PIPELINE = DOCLING_PIPELINE
-app.state.config.DOCLING_DO_PICTURE_DESCRIPTION = DOCLING_DO_PICTURE_DESCRIPTION
-app.state.config.DOCLING_PICTURE_DESCRIPTION_MODE = DOCLING_PICTURE_DESCRIPTION_MODE
-app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL = DOCLING_PICTURE_DESCRIPTION_LOCAL
-app.state.config.DOCLING_PICTURE_DESCRIPTION_API = DOCLING_PICTURE_DESCRIPTION_API
app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT = DOCUMENT_INTELLIGENCE_ENDPOINT
app.state.config.DOCUMENT_INTELLIGENCE_KEY = DOCUMENT_INTELLIGENCE_KEY
app.state.config.MISTRAL_OCR_API_BASE_URL = MISTRAL_OCR_API_BASE_URL

@@ -905,6 +886,7 @@ app.state.config.CHUNK_OVERLAP = CHUNK_OVERLAP
app.state.config.RAG_EMBEDDING_ENGINE = RAG_EMBEDDING_ENGINE
app.state.config.RAG_EMBEDDING_MODEL = RAG_EMBEDDING_MODEL
app.state.config.RAG_EMBEDDING_BATCH_SIZE = RAG_EMBEDDING_BATCH_SIZE
+app.state.config.ENABLE_ASYNC_EMBEDDING = ENABLE_ASYNC_EMBEDDING
app.state.config.RAG_RERANKING_ENGINE = RAG_RERANKING_ENGINE
app.state.config.RAG_RERANKING_MODEL = RAG_RERANKING_MODEL

View file

@@ -19,7 +19,7 @@ log.setLevel(SRC_LOG_LEVELS["MODELS"])
class Auth(Base):
    __tablename__ = "auth"

-    id = Column(String, primary_key=True)
+    id = Column(String, primary_key=True, unique=True)
    email = Column(String)
    password = Column(Text)
    active = Column(Boolean)

View file

@@ -19,7 +19,7 @@ from sqlalchemy.sql import exists
class Channel(Base):
    __tablename__ = "channel"

-    id = Column(Text, primary_key=True)
+    id = Column(Text, primary_key=True, unique=True)
    user_id = Column(Text)
    type = Column(Text, nullable=True)

View file

@@ -26,7 +26,7 @@ log.setLevel(SRC_LOG_LEVELS["MODELS"])
class Chat(Base):
    __tablename__ = "chat"

-    id = Column(String, primary_key=True)
+    id = Column(String, primary_key=True, unique=True)
    user_id = Column(String)
    title = Column(Text)
    chat = Column(JSON)

@@ -92,6 +92,10 @@ class ChatImportForm(ChatForm):
    updated_at: Optional[int] = None

+class ChatsImportForm(BaseModel):
+    chats: list[ChatImportForm]
+
class ChatTitleMessagesForm(BaseModel):
    title: str
    messages: list[dict]

@@ -123,6 +127,43 @@ class ChatTitleIdResponse(BaseModel):
class ChatTable:
+    def _clean_null_bytes(self, obj):
+        """
+        Recursively remove actual null bytes (\x00) and unicode escape \\u0000
+        from strings inside dict/list structures.
+        Safe for JSON objects.
+        """
+        if isinstance(obj, str):
+            return obj.replace("\x00", "").replace("\u0000", "")
+        elif isinstance(obj, dict):
+            return {k: self._clean_null_bytes(v) for k, v in obj.items()}
+        elif isinstance(obj, list):
+            return [self._clean_null_bytes(v) for v in obj]
+        return obj
+
+    def _sanitize_chat_row(self, chat_item):
+        """
+        Clean a Chat SQLAlchemy model's title + chat JSON,
+        and return True if anything changed.
+        """
+        changed = False
+
+        # Clean title
+        if chat_item.title:
+            cleaned = self._clean_null_bytes(chat_item.title)
+            if cleaned != chat_item.title:
+                chat_item.title = cleaned
+                changed = True
+
+        # Clean JSON
+        if chat_item.chat:
+            cleaned = self._clean_null_bytes(chat_item.chat)
+            if cleaned != chat_item.chat:
+                chat_item.chat = cleaned
+                changed = True
+
+        return changed
+
    def insert_new_chat(self, user_id: str, form_data: ChatForm) -> Optional[ChatModel]:
        with get_db() as db:
            id = str(uuid.uuid4())

@@ -130,68 +171,76 @@ class ChatTable:
                **{
                    "id": id,
                    "user_id": user_id,
-                    "title": (
+                    "title": self._clean_null_bytes(
                        form_data.chat["title"]
                        if "title" in form_data.chat
                        else "New Chat"
                    ),
-                    "chat": form_data.chat,
+                    "chat": self._clean_null_bytes(form_data.chat),
                    "folder_id": form_data.folder_id,
                    "created_at": int(time.time()),
                    "updated_at": int(time.time()),
                }
            )
-            result = Chat(**chat.model_dump())
-            db.add(result)
+            chat_item = Chat(**chat.model_dump())
+            db.add(chat_item)
            db.commit()
-            db.refresh(result)
-            return ChatModel.model_validate(result) if result else None
+            db.refresh(chat_item)
+            return ChatModel.model_validate(chat_item) if chat_item else None

-    def import_chat(
-        self, user_id: str, form_data: ChatImportForm
-    ) -> Optional[ChatModel]:
-        with get_db() as db:
-            id = str(uuid.uuid4())
-            chat = ChatModel(
-                **{
-                    "id": id,
-                    "user_id": user_id,
-                    "title": (
-                        form_data.chat["title"]
-                        if "title" in form_data.chat
-                        else "New Chat"
-                    ),
-                    "chat": form_data.chat,
-                    "meta": form_data.meta,
-                    "pinned": form_data.pinned,
-                    "folder_id": form_data.folder_id,
-                    "created_at": (
-                        form_data.created_at
-                        if form_data.created_at
-                        else int(time.time())
-                    ),
-                    "updated_at": (
-                        form_data.updated_at
-                        if form_data.updated_at
-                        else int(time.time())
-                    ),
-                }
-            )
-            result = Chat(**chat.model_dump())
-            db.add(result)
-            db.commit()
-            db.refresh(result)
-            return ChatModel.model_validate(result) if result else None
+    def _chat_import_form_to_chat_model(
+        self, user_id: str, form_data: ChatImportForm
+    ) -> ChatModel:
+        id = str(uuid.uuid4())
+        chat = ChatModel(
+            **{
+                "id": id,
+                "user_id": user_id,
+                "title": self._clean_null_bytes(
+                    form_data.chat["title"] if "title" in form_data.chat else "New Chat"
+                ),
+                "chat": self._clean_null_bytes(form_data.chat),
+                "meta": form_data.meta,
+                "pinned": form_data.pinned,
+                "folder_id": form_data.folder_id,
+                "created_at": (
+                    form_data.created_at if form_data.created_at else int(time.time())
+                ),
+                "updated_at": (
+                    form_data.updated_at if form_data.updated_at else int(time.time())
+                ),
+            }
+        )
+        return chat
+
+    def import_chats(
+        self, user_id: str, chat_import_forms: list[ChatImportForm]
+    ) -> list[ChatModel]:
+        with get_db() as db:
+            chats = []
+            for form_data in chat_import_forms:
+                chat = self._chat_import_form_to_chat_model(user_id, form_data)
+                chats.append(Chat(**chat.model_dump()))
+
+            db.add_all(chats)
+            db.commit()
+            return [ChatModel.model_validate(chat) for chat in chats]

    def update_chat_by_id(self, id: str, chat: dict) -> Optional[ChatModel]:
        try:
            with get_db() as db:
                chat_item = db.get(Chat, id)
-                chat_item.chat = chat
-                chat_item.title = chat["title"] if "title" in chat else "New Chat"
+                chat_item.chat = self._clean_null_bytes(chat)
+                chat_item.title = (
+                    self._clean_null_bytes(chat["title"])
+                    if "title" in chat
+                    else "New Chat"
+                )
                chat_item.updated_at = int(time.time())
                db.commit()
                db.refresh(chat_item)

@@ -426,6 +475,7 @@ class ChatTable:
        with get_db() as db:
            chat = db.get(Chat, id)
            chat.archived = not chat.archived
+            chat.folder_id = None
            chat.updated_at = int(time.time())
            db.commit()
            db.refresh(chat)

@@ -582,8 +632,15 @@ class ChatTable:
    def get_chat_by_id(self, id: str) -> Optional[ChatModel]:
        try:
            with get_db() as db:
-                chat = db.get(Chat, id)
-                return ChatModel.model_validate(chat)
+                chat_item = db.get(Chat, id)
+                if chat_item is None:
+                    return None
+
+                if self._sanitize_chat_row(chat_item):
+                    db.commit()
+                    db.refresh(chat_item)
+
+                return ChatModel.model_validate(chat_item)
        except Exception:
            return None

@@ -788,24 +845,30 @@ class ChatTable:
            elif dialect_name == "postgresql":
                # PostgreSQL doesn't allow null bytes in text. We filter those out by checking
                # the JSON representation for \u0000 before attempting text extraction
-                postgres_content_sql = (
-                    "EXISTS ("
-                    "    SELECT 1 "
-                    "    FROM json_array_elements(Chat.chat->'messages') AS message "
-                    "    WHERE message->'content' IS NOT NULL "
-                    "    AND (message->'content')::text NOT LIKE '%\\u0000%' "
-                    "    AND LOWER(message->>'content') LIKE '%' || :content_key || '%'"
-                    ")"
-                )
-                postgres_content_clause = text(postgres_content_sql)
-
-                # Also filter out chats with null bytes in title
-                query = query.filter(text("Chat.title::text NOT LIKE '%\\x00%'"))
+                # Safety filter: JSON field must not contain \u0000
+                query = query.filter(text("Chat.chat::text NOT LIKE '%\\\\u0000%'"))
+
+                # Safety filter: title must not contain actual null bytes
+                query = query.filter(text("Chat.title::text NOT LIKE '%\\x00%'"))
+
+                postgres_content_sql = """
+                    EXISTS (
+                        SELECT 1
+                        FROM json_array_elements(Chat.chat->'messages') AS message
+                        WHERE json_typeof(message->'content') = 'string'
+                        AND LOWER(message->>'content') LIKE '%' || :content_key || '%'
+                    )
+                """
+                postgres_content_clause = text(postgres_content_sql)

                query = query.filter(
                    or_(
                        Chat.title.ilike(bindparam("title_key")),
                        postgres_content_clause,
-                    ).params(title_key=f"%{search_text}%", content_key=search_text)
-                )
+                    )
+                ).params(title_key=f"%{search_text}%", content_key=search_text.lower())

@@ -1080,6 +1143,20 @@ class ChatTable:
        except Exception:
            return False

+    def move_chats_by_user_id_and_folder_id(
+        self, user_id: str, folder_id: str, new_folder_id: Optional[str]
+    ) -> bool:
+        try:
+            with get_db() as db:
+                db.query(Chat).filter_by(user_id=user_id, folder_id=folder_id).update(
+                    {"folder_id": new_folder_id}
+                )
+                db.commit()
+                return True
+        except Exception:
+            return False
+
    def delete_shared_chats_by_user_id(self, user_id: str) -> bool:
        try:
            with get_db() as db:
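A usage sketch for the bulk import path added above, assuming the module exports a ChatTable() singleton named Chats (the codebase's usual pattern) and that ChatImportForm's remaining fields carry defaults:

```python
# Build many import forms, then persist them with one INSERT batch and one
# commit instead of one round-trip per chat.
forms = [
    ChatImportForm(chat={"title": f"Imported chat {i}", "messages": []})
    for i in range(1000)
]
imported = Chats.import_chats(user_id="user-123", chat_import_forms=forms)
print(f"Imported {len(imported)} chats")
```

Collapsing N per-chat transactions into one db.add_all() plus a single commit is what drives the changelog's reported up-to-95% import speedup.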

View file

@@ -21,7 +21,7 @@ log.setLevel(SRC_LOG_LEVELS["MODELS"])
class Feedback(Base):
    __tablename__ = "feedback"

-    id = Column(Text, primary_key=True)
+    id = Column(Text, primary_key=True, unique=True)
    user_id = Column(Text)
    version = Column(BigInteger, default=0)
    type = Column(Text)

View file

@@ -17,7 +17,7 @@ log.setLevel(SRC_LOG_LEVELS["MODELS"])
class File(Base):
    __tablename__ = "file"

-    id = Column(String, primary_key=True)
+    id = Column(String, primary_key=True, unique=True)
    user_id = Column(String)
    hash = Column(Text, nullable=True)

View file

@@ -23,7 +23,7 @@ log.setLevel(SRC_LOG_LEVELS["MODELS"])
class Folder(Base):
    __tablename__ = "folder"

-    id = Column(Text, primary_key=True)
+    id = Column(Text, primary_key=True, unique=True)
    parent_id = Column(Text, nullable=True)
    user_id = Column(Text)
    name = Column(Text)

View file

@@ -19,7 +19,7 @@ log.setLevel(SRC_LOG_LEVELS["MODELS"])
class Function(Base):
    __tablename__ = "function"

-    id = Column(String, primary_key=True)
+    id = Column(String, primary_key=True, unique=True)
    user_id = Column(String)
    name = Column(Text)
    type = Column(Text)

View file

@@ -14,7 +14,7 @@ from sqlalchemy import BigInteger, Column, String, Text
class Memory(Base):
    __tablename__ = "memory"

-    id = Column(String, primary_key=True)
+    id = Column(String, primary_key=True, unique=True)
    user_id = Column(String)
    content = Column(Text)
    updated_at = Column(BigInteger)

View file

@@ -20,7 +20,7 @@ from sqlalchemy.sql import exists
class MessageReaction(Base):
    __tablename__ = "message_reaction"

-    id = Column(Text, primary_key=True)
+    id = Column(Text, primary_key=True, unique=True)
    user_id = Column(Text)
    message_id = Column(Text)
    name = Column(Text)

View file

@ -6,12 +6,12 @@ from open_webui.internal.db import Base, JSONField, get_db
from open_webui.env import SRC_LOG_LEVELS from open_webui.env import SRC_LOG_LEVELS
from open_webui.models.groups import Groups from open_webui.models.groups import Groups
from open_webui.models.users import Users, UserResponse from open_webui.models.users import User, UserModel, Users, UserResponse
from pydantic import BaseModel, ConfigDict from pydantic import BaseModel, ConfigDict
from sqlalchemy import or_, and_, func from sqlalchemy import String, cast, or_, and_, func
from sqlalchemy.dialects import postgresql, sqlite from sqlalchemy.dialects import postgresql, sqlite
from sqlalchemy import BigInteger, Column, Text, JSON, Boolean from sqlalchemy import BigInteger, Column, Text, JSON, Boolean
@ -133,6 +133,11 @@ class ModelResponse(ModelModel):
pass pass
class ModelListResponse(BaseModel):
items: list[ModelUserResponse]
total: int
class ModelForm(BaseModel): class ModelForm(BaseModel):
id: str id: str
base_model_id: Optional[str] = None base_model_id: Optional[str] = None
@ -215,6 +220,89 @@ class ModelsTable:
or has_access(user_id, permission, model.access_control, user_group_ids) or has_access(user_id, permission, model.access_control, user_group_ids)
] ]
def search_models(
self, user_id: str, filter: dict = {}, skip: int = 0, limit: int = 30
) -> ModelListResponse:
with get_db() as db:
# Join GroupMember so we can order by group_id when requested
query = db.query(Model, User).outerjoin(User, User.id == Model.user_id)
query = query.filter(Model.base_model_id != None)
if filter:
query_key = filter.get("query")
if query_key:
query = query.filter(
or_(
Model.name.ilike(f"%{query_key}%"),
Model.base_model_id.ilike(f"%{query_key}%"),
)
)
if filter.get("user_id"):
query = query.filter(Model.user_id == filter.get("user_id"))
view_option = filter.get("view_option")
if view_option == "created":
query = query.filter(Model.user_id == user_id)
elif view_option == "shared":
query = query.filter(Model.user_id != user_id)
tag = filter.get("tag")
if tag:
# TODO: This is a simple implementation and should be improved for performance
like_pattern = f'%"{tag.lower()}"%' # `"tag"` inside JSON array
meta_text = func.lower(cast(Model.meta, String))
query = query.filter(meta_text.like(like_pattern))
order_by = filter.get("order_by")
direction = filter.get("direction")
if order_by == "name":
if direction == "asc":
query = query.order_by(Model.name.asc())
else:
query = query.order_by(Model.name.desc())
elif order_by == "created_at":
if direction == "asc":
query = query.order_by(Model.created_at.asc())
else:
query = query.order_by(Model.created_at.desc())
elif order_by == "updated_at":
if direction == "asc":
query = query.order_by(Model.updated_at.asc())
else:
query = query.order_by(Model.updated_at.desc())
else:
query = query.order_by(Model.created_at.desc())
# Count BEFORE pagination
total = query.count()
if skip:
query = query.offset(skip)
if limit:
query = query.limit(limit)
items = query.all()
models = []
for model, user in items:
models.append(
ModelUserResponse(
**ModelModel.model_validate(model).model_dump(),
user=(
UserResponse(**UserModel.model_validate(user).model_dump())
if user
else None
),
)
)
return ModelListResponse(items=models, total=total)
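Note for readers: search_models uses plain offset pagination, and total is counted before skip/limit are applied so callers can still render a full page count. A minimal usage sketch (the user id and filter values are hypothetical):

# Fetch page 2 (30 per page) of models tagged "vision", newest first.
page = Models.search_models(
    user_id="admin-user-id",  # placeholder
    filter={"query": "llama", "tag": "vision", "order_by": "created_at", "direction": "desc"},
    skip=30,
    limit=30,
)
print(page.total, [m.name for m in page.items])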
def get_model_by_id(self, id: str) -> Optional[ModelModel]: def get_model_by_id(self, id: str) -> Optional[ModelModel]:
try: try:
with get_db() as db: with get_db() as db:

View file

@@ -11,8 +11,8 @@ from open_webui.utils.misc import throttle
from pydantic import BaseModel, ConfigDict from pydantic import BaseModel, ConfigDict
from sqlalchemy import BigInteger, Column, String, Text, Date from sqlalchemy import BigInteger, Column, String, Text, Date, exists, select
from sqlalchemy import or_ from sqlalchemy import or_, case
import datetime import datetime
@@ -227,9 +227,7 @@ class UsersTable:
) -> dict: ) -> dict:
with get_db() as db: with get_db() as db:
# Join GroupMember so we can order by group_id when requested # Group ordering now uses a correlated EXISTS (below); no join needed
query = db.query(User).outerjoin( query = db.query(User)
GroupMember, GroupMember.user_id == User.id
)
if filter: if filter:
query_key = filter.get("query") query_key = filter.get("query")
@@ -247,17 +245,28 @@ class UsersTable:
if order_by and order_by.startswith("group_id:"): if order_by and order_by.startswith("group_id:"):
group_id = order_by.split(":", 1)[1] group_id = order_by.split(":", 1)[1]
if direction == "asc": # Subquery that checks if the user belongs to the group
query = query.order_by((GroupMember.group_id == group_id).asc()) membership_exists = exists(
else: select(GroupMember.id).where(
query = query.order_by( GroupMember.user_id == User.id,
(GroupMember.group_id == group_id).desc() GroupMember.group_id == group_id,
) )
)
# CASE: user in group → 1, user not in group → 0
group_sort = case((membership_exists, 1), else_=0)
if direction == "asc":
query = query.order_by(group_sort.asc(), User.name.asc())
else:
query = query.order_by(group_sort.desc(), User.name.asc())
elif order_by == "name": elif order_by == "name":
if direction == "asc": if direction == "asc":
query = query.order_by(User.name.asc()) query = query.order_by(User.name.asc())
else: else:
query = query.order_by(User.name.desc()) query = query.order_by(User.name.desc())
elif order_by == "email": elif order_by == "email":
if direction == "asc": if direction == "asc":
query = query.order_by(User.email.asc()) query = query.order_by(User.email.asc())
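The motivation for this change: the old outer join emitted one row per group membership, which inflated count() and skewed offset pagination, while the correlated EXISTS keeps exactly one row per user and the CASE turns membership into a 0/1 sort key. The same pattern in isolation (User and GroupMember as defined in this file; the group id is a placeholder):

from sqlalchemy import case, exists, select

membership = exists(
    select(GroupMember.id).where(
        GroupMember.user_id == User.id,
        GroupMember.group_id == "group-123",  # placeholder
    )
)
group_sort = case((membership, 1), else_=0)  # member -> 1, non-member -> 0
query = db.query(User).order_by(group_sort.desc(), User.name.asc())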
@@ -291,11 +300,13 @@ class UsersTable:
query = query.order_by(User.created_at.desc()) query = query.order_by(User.created_at.desc())
# Count BEFORE pagination # Count BEFORE pagination
query = query.distinct(User.id)
total = query.count() total = query.count()
if skip: # correct pagination logic
if skip is not None:
query = query.offset(skip) query = query.offset(skip)
if limit: if limit is not None:
query = query.limit(limit) query = query.limit(limit)
users = query.all() users = query.all()

View file

@@ -132,8 +132,9 @@ class TikaLoader:
class DoclingLoader: class DoclingLoader:
def __init__(self, url, file_path=None, mime_type=None, params=None): def __init__(self, url, api_key=None, file_path=None, mime_type=None, params=None):
self.url = url.rstrip("/") self.url = url.rstrip("/")
self.api_key = api_key
self.file_path = file_path self.file_path = file_path
self.mime_type = mime_type self.mime_type = mime_type
@@ -141,6 +142,10 @@ class DoclingLoader:
def load(self) -> list[Document]: def load(self) -> list[Document]:
with open(self.file_path, "rb") as f: with open(self.file_path, "rb") as f:
headers = {}
if self.api_key:
headers["Authorization"] = f"Bearer {self.api_key}"
files = { files = {
"files": ( "files": (
self.file_path, self.file_path,
@@ -149,60 +154,15 @@ class DoclingLoader:
) )
} }
params = {"image_export_mode": "placeholder"} r = requests.post(
f"{self.url}/v1/convert/file",
if self.params: files=files,
if self.params.get("do_picture_description"): data={
params["do_picture_description"] = self.params.get( "image_export_mode": "placeholder",
"do_picture_description" **self.params,
) },
headers=headers,
picture_description_mode = self.params.get( )
"picture_description_mode", ""
).lower()
if picture_description_mode == "local" and self.params.get(
"picture_description_local", {}
):
params["picture_description_local"] = json.dumps(
self.params.get("picture_description_local", {})
)
elif picture_description_mode == "api" and self.params.get(
"picture_description_api", {}
):
params["picture_description_api"] = json.dumps(
self.params.get("picture_description_api", {})
)
params["do_ocr"] = self.params.get("do_ocr")
params["force_ocr"] = self.params.get("force_ocr")
if (
self.params.get("do_ocr")
and self.params.get("ocr_engine")
and self.params.get("ocr_lang")
):
params["ocr_engine"] = self.params.get("ocr_engine")
params["ocr_lang"] = [
lang.strip()
for lang in self.params.get("ocr_lang").split(",")
if lang.strip()
]
if self.params.get("pdf_backend"):
params["pdf_backend"] = self.params.get("pdf_backend")
if self.params.get("table_mode"):
params["table_mode"] = self.params.get("table_mode")
if self.params.get("pipeline"):
params["pipeline"] = self.params.get("pipeline")
endpoint = f"{self.url}/v1/convert/file"
r = requests.post(endpoint, files=files, data=params)
if r.ok: if r.ok:
result = r.json() result = r.json()
document_data = result.get("document", {}) document_data = result.get("document", {})
@@ -211,7 +171,6 @@ class DoclingLoader:
metadata = {"Content-Type": self.mime_type} if self.mime_type else {} metadata = {"Content-Type": self.mime_type} if self.mime_type else {}
log.debug("Docling extracted text: %s", text) log.debug("Docling extracted text: %s", text)
return [Document(page_content=text, metadata=metadata)] return [Document(page_content=text, metadata=metadata)]
else: else:
error_msg = f"Error calling Docling API: {r.reason}" error_msg = f"Error calling Docling API: {r.reason}"
@@ -340,6 +299,7 @@ class Loader:
loader = DoclingLoader( loader = DoclingLoader(
url=self.kwargs.get("DOCLING_SERVER_URL"), url=self.kwargs.get("DOCLING_SERVER_URL"),
api_key=self.kwargs.get("DOCLING_API_KEY", None),
file_path=file_path, file_path=file_path,
mime_type=file_content_type, mime_type=file_content_type,
params=params, params=params,
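The simplified loader folds the caller-supplied params straight into the form body and adds optional bearer auth. A standalone sketch of the same call (server URL, API key, and file are placeholders; the endpoint and field names are taken from the diff):

import requests

headers = {"Authorization": "Bearer <DOCLING_API_KEY>"}  # only sent when a key is configured
with open("report.pdf", "rb") as f:
    r = requests.post(
        "http://docling:5001/v1/convert/file",
        files={"files": ("report.pdf", f, "application/pdf")},
        data={"image_export_mode": "placeholder", "do_ocr": "true"},
        headers=headers,
    )
r.raise_for_status()
document = r.json().get("document", {})  # extracted text lives in this object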

View file

@@ -6,6 +6,7 @@ from urllib.parse import quote
from open_webui.env import ENABLE_FORWARD_USER_INFO_HEADERS, SRC_LOG_LEVELS from open_webui.env import ENABLE_FORWARD_USER_INFO_HEADERS, SRC_LOG_LEVELS
from open_webui.retrieval.models.base_reranker import BaseReranker from open_webui.retrieval.models.base_reranker import BaseReranker
from open_webui.utils.headers import include_user_info_headers
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
@@ -40,22 +41,17 @@ class ExternalReranker(BaseReranker):
log.info(f"ExternalReranker:predict:model {self.model}") log.info(f"ExternalReranker:predict:model {self.model}")
log.info(f"ExternalReranker:predict:query {query}") log.info(f"ExternalReranker:predict:query {query}")
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {self.api_key}",
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user:
headers = include_user_info_headers(headers, user)
r = requests.post( r = requests.post(
f"{self.url}", f"{self.url}",
headers={ headers=headers,
"Content-Type": "application/json",
"Authorization": f"Bearer {self.api_key}",
**(
{
"X-OpenWebUI-User-Name": quote(user.name, safe=" "),
"X-OpenWebUI-User-Id": user.id,
"X-OpenWebUI-User-Email": user.email,
"X-OpenWebUI-User-Role": user.role,
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user
else {}
),
},
json=payload, json=payload,
) )
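The inline X-OpenWebUI-* header block is replaced by a shared helper. Inferred from the code it replaces, the helper plausibly looks like this (a sketch, not its actual source):

from urllib.parse import quote

def include_user_info_headers(headers: dict, user) -> dict:
    headers.update(
        {
            "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
            "X-OpenWebUI-User-Id": user.id,
            "X-OpenWebUI-User-Email": user.email,
            "X-OpenWebUI-User-Role": user.role,
        }
    )
    return headers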

View file

@@ -1,8 +1,10 @@
import logging import logging
import os import os
from typing import Optional, Union from typing import Awaitable, Optional, Union
import requests import requests
import aiohttp
import asyncio
import hashlib import hashlib
from concurrent.futures import ThreadPoolExecutor from concurrent.futures import ThreadPoolExecutor
import time import time
@@ -27,6 +29,7 @@ from open_webui.models.notes import Notes
from open_webui.retrieval.vector.main import GetResult from open_webui.retrieval.vector.main import GetResult
from open_webui.utils.access_control import has_access from open_webui.utils.access_control import has_access
from open_webui.utils.headers import include_user_info_headers
from open_webui.utils.misc import get_message_list from open_webui.utils.misc import get_message_list
from open_webui.retrieval.web.utils import get_web_loader from open_webui.retrieval.web.utils import get_web_loader
@@ -88,14 +91,29 @@ class VectorSearchRetriever(BaseRetriever):
top_k: int top_k: int
def _get_relevant_documents( def _get_relevant_documents(
self, query: str, *, run_manager: CallbackManagerForRetrieverRun
) -> list[Document]:
"""Get documents relevant to a query.
Args:
query: String to find relevant documents for.
run_manager: The callback handler to use.
Returns:
List of relevant documents.
"""
return []
async def _aget_relevant_documents(
self, self,
query: str, query: str,
*, *,
run_manager: CallbackManagerForRetrieverRun, run_manager: CallbackManagerForRetrieverRun,
) -> list[Document]: ) -> list[Document]:
embedding = await self.embedding_function(query, RAG_EMBEDDING_QUERY_PREFIX)
result = VECTOR_DB_CLIENT.search( result = VECTOR_DB_CLIENT.search(
collection_name=self.collection_name, collection_name=self.collection_name,
vectors=[self.embedding_function(query, RAG_EMBEDDING_QUERY_PREFIX)], vectors=[embedding],
limit=self.top_k, limit=self.top_k,
) )
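With the synchronous _get_relevant_documents now returning an empty list, the retriever only produces results through LangChain's async path, e.g.:

# Callers must use the async API; a plain .invoke() would yield no documents.
docs = await retriever.ainvoke("example query")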
@@ -186,7 +204,7 @@ def get_enriched_texts(collection_result: GetResult) -> list[str]:
return enriched_texts return enriched_texts
def query_doc_with_hybrid_search( async def query_doc_with_hybrid_search(
collection_name: str, collection_name: str,
collection_result: GetResult, collection_result: GetResult,
query: str, query: str,
@@ -262,7 +280,7 @@ def query_doc_with_hybrid_search(
base_compressor=compressor, base_retriever=ensemble_retriever base_compressor=compressor, base_retriever=ensemble_retriever
) )
result = compression_retriever.invoke(query) result = await compression_retriever.ainvoke(query)
distances = [d.metadata.get("score") for d in result] distances = [d.metadata.get("score") for d in result]
documents = [d.page_content for d in result] documents = [d.page_content for d in result]
@@ -381,7 +399,7 @@ def get_all_items_from_collections(collection_names: list[str]) -> dict:
return merge_get_results(results) return merge_get_results(results)
def query_collection( async def query_collection(
collection_names: list[str], collection_names: list[str],
queries: list[str], queries: list[str],
embedding_function, embedding_function,
@@ -406,7 +424,9 @@ def query_collection(
return None, e return None, e
# Generate all query embeddings (in one call) # Generate all query embeddings (in one call)
query_embeddings = embedding_function(queries, prefix=RAG_EMBEDDING_QUERY_PREFIX) query_embeddings = await embedding_function(
queries, prefix=RAG_EMBEDDING_QUERY_PREFIX
)
log.debug( log.debug(
f"query_collection: processing {len(queries)} queries across {len(collection_names)} collections" f"query_collection: processing {len(queries)} queries across {len(collection_names)} collections"
) )
@@ -433,7 +453,7 @@ def query_collection(
return merge_and_sort_query_results(results, k=k) return merge_and_sort_query_results(results, k=k)
def query_collection_with_hybrid_search( async def query_collection_with_hybrid_search(
collection_names: list[str], collection_names: list[str],
queries: list[str], queries: list[str],
embedding_function, embedding_function,
@@ -465,9 +485,9 @@ def query_collection_with_hybrid_search(
f"Starting hybrid search for {len(queries)} queries in {len(collection_names)} collections..." f"Starting hybrid search for {len(queries)} queries in {len(collection_names)} collections..."
) )
def process_query(collection_name, query): async def process_query(collection_name, query):
try: try:
result = query_doc_with_hybrid_search( result = await query_doc_with_hybrid_search(
collection_name=collection_name, collection_name=collection_name,
collection_result=collection_results[collection_name], collection_result=collection_results[collection_name],
query=query, query=query,
@@ -487,15 +507,16 @@ def query_collection_with_hybrid_search(
# Prepare tasks for all collections and queries # Prepare tasks for all collections and queries
# Avoid running any tasks for collections that failed to fetch data (have assigned None) # Avoid running any tasks for collections that failed to fetch data (have assigned None)
tasks = [ tasks = [
(cn, q) (collection_name, query)
for cn in collection_names for collection_name in collection_names
if collection_results[cn] is not None if collection_results[collection_name] is not None
for q in queries for query in queries
] ]
with ThreadPoolExecutor() as executor: # Run all queries in parallel using asyncio.gather
future_results = [executor.submit(process_query, cn, q) for cn, q in tasks] task_results = await asyncio.gather(
task_results = [future.result() for future in future_results] *[process_query(collection_name, query) for collection_name, query in tasks]
)
for result, err in task_results: for result, err in task_results:
if err is not None: if err is not None:
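The ThreadPoolExecutor fan-out becomes a single asyncio.gather over coroutines, which keeps all work on the event loop now that the embedding and rerank calls are awaitable. The shape of that pattern, runnable on its own:

import asyncio

async def process_query(collection_name: str, query: str):
    await asyncio.sleep(0)  # stands in for the awaited search work
    return f"{collection_name}/{query}", None  # same (result, err) contract as above

async def main():
    tasks = [("docs", "alpha"), ("docs", "beta"), ("wiki", "alpha")]
    for result, err in await asyncio.gather(*(process_query(cn, q) for cn, q in tasks)):
        assert err is None

asyncio.run(main())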
@@ -511,6 +532,248 @@
return merge_and_sort_query_results(results, k=k) return merge_and_sort_query_results(results, k=k)
def generate_openai_batch_embeddings(
model: str,
texts: list[str],
url: str = "https://api.openai.com/v1",
key: str = "",
prefix: str = None,
user: UserModel = None,
) -> Optional[list[list[float]]]:
try:
log.debug(
f"generate_openai_batch_embeddings:model {model} batch size: {len(texts)}"
)
json_data = {"input": texts, "model": model}
if isinstance(RAG_EMBEDDING_PREFIX_FIELD_NAME, str) and isinstance(prefix, str):
json_data[RAG_EMBEDDING_PREFIX_FIELD_NAME] = prefix
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {key}",
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user:
headers = include_user_info_headers(headers, user)
r = requests.post(
f"{url}/embeddings",
headers=headers,
json=json_data,
)
r.raise_for_status()
data = r.json()
if "data" in data:
return [elem["embedding"] for elem in data["data"]]
else:
raise "Something went wrong :/"
except Exception as e:
log.exception(f"Error generating openai batch embeddings: {e}")
return None
async def agenerate_openai_batch_embeddings(
model: str,
texts: list[str],
url: str = "https://api.openai.com/v1",
key: str = "",
prefix: str = None,
user: UserModel = None,
) -> Optional[list[list[float]]]:
try:
log.debug(
f"agenerate_openai_batch_embeddings:model {model} batch size: {len(texts)}"
)
form_data = {"input": texts, "model": model}
if isinstance(RAG_EMBEDDING_PREFIX_FIELD_NAME, str) and isinstance(prefix, str):
form_data[RAG_EMBEDDING_PREFIX_FIELD_NAME] = prefix
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {key}",
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user:
headers = include_user_info_headers(headers, user)
async with aiohttp.ClientSession(trust_env=True) as session:
async with session.post(
f"{url}/embeddings", headers=headers, json=form_data
) as r:
r.raise_for_status()
data = await r.json()
if "data" in data:
return [item["embedding"] for item in data["data"]]
else:
raise Exception("Something went wrong :/")
except Exception as e:
log.exception(f"Error generating openai batch embeddings: {e}")
return None
def generate_azure_openai_batch_embeddings(
model: str,
texts: list[str],
url: str,
key: str = "",
version: str = "",
prefix: str = None,
user: UserModel = None,
) -> Optional[list[list[float]]]:
try:
log.debug(
f"generate_azure_openai_batch_embeddings:deployment {model} batch size: {len(texts)}"
)
json_data = {"input": texts}
if isinstance(RAG_EMBEDDING_PREFIX_FIELD_NAME, str) and isinstance(prefix, str):
json_data[RAG_EMBEDDING_PREFIX_FIELD_NAME] = prefix
url = f"{url}/openai/deployments/{model}/embeddings?api-version={version}"
for _ in range(5):
headers = {
"Content-Type": "application/json",
"api-key": key,
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user:
headers = include_user_info_headers(headers, user)
r = requests.post(
url,
headers=headers,
json=json_data,
)
if r.status_code == 429:
retry = float(r.headers.get("Retry-After", "1"))
time.sleep(retry)
continue
r.raise_for_status()
data = r.json()
if "data" in data:
return [elem["embedding"] for elem in data["data"]]
else:
raise Exception("Something went wrong :/")
return None
except Exception as e:
log.exception(f"Error generating azure openai batch embeddings: {e}")
return None
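The Azure path retries up to five times when the service answers 429, sleeping for the advertised Retry-After. The same loop as a standalone helper (a sketch, not library code):

import time
import requests

def post_with_backoff(url: str, max_attempts: int = 5, **kwargs):
    for _ in range(max_attempts):
        r = requests.post(url, **kwargs)
        if r.status_code == 429:
            # Honor the server's throttle hint, defaulting to one second.
            time.sleep(float(r.headers.get("Retry-After", "1")))
            continue
        r.raise_for_status()
        return r
    return None  # gave up after repeated throttling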
async def agenerate_azure_openai_batch_embeddings(
model: str,
texts: list[str],
url: str,
key: str = "",
version: str = "",
prefix: str = None,
user: UserModel = None,
) -> Optional[list[list[float]]]:
try:
log.debug(
f"agenerate_azure_openai_batch_embeddings:deployment {model} batch size: {len(texts)}"
)
form_data = {"input": texts}
if isinstance(RAG_EMBEDDING_PREFIX_FIELD_NAME, str) and isinstance(prefix, str):
form_data[RAG_EMBEDDING_PREFIX_FIELD_NAME] = prefix
full_url = f"{url}/openai/deployments/{model}/embeddings?api-version={version}"
headers = {
"Content-Type": "application/json",
"api-key": key,
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user:
headers = include_user_info_headers(headers, user)
async with aiohttp.ClientSession(trust_env=True) as session:
async with session.post(full_url, headers=headers, json=form_data) as r:
r.raise_for_status()
data = await r.json()
if "data" in data:
return [item["embedding"] for item in data["data"]]
else:
raise Exception("Something went wrong :/")
except Exception as e:
log.exception(f"Error generating azure openai batch embeddings: {e}")
return None
def generate_ollama_batch_embeddings(
model: str,
texts: list[str],
url: str,
key: str = "",
prefix: str = None,
user: UserModel = None,
) -> Optional[list[list[float]]]:
try:
log.debug(
f"generate_ollama_batch_embeddings:model {model} batch size: {len(texts)}"
)
json_data = {"input": texts, "model": model}
if isinstance(RAG_EMBEDDING_PREFIX_FIELD_NAME, str) and isinstance(prefix, str):
json_data[RAG_EMBEDDING_PREFIX_FIELD_NAME] = prefix
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {key}",
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user:
headers = include_user_info_headers(headers, user)
r = requests.post(
f"{url}/api/embed",
headers=headers,
json=json_data,
)
r.raise_for_status()
data = r.json()
if "embeddings" in data:
return data["embeddings"]
else:
raise "Something went wrong :/"
except Exception as e:
log.exception(f"Error generating ollama batch embeddings: {e}")
return None
async def agenerate_ollama_batch_embeddings(
model: str,
texts: list[str],
url: str,
key: str = "",
prefix: str = None,
user: UserModel = None,
) -> Optional[list[list[float]]]:
try:
log.debug(
f"agenerate_ollama_batch_embeddings:model {model} batch size: {len(texts)}"
)
form_data = {"input": texts, "model": model}
if isinstance(RAG_EMBEDDING_PREFIX_FIELD_NAME, str) and isinstance(prefix, str):
form_data[RAG_EMBEDDING_PREFIX_FIELD_NAME] = prefix
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {key}",
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user:
headers = include_user_info_headers(headers, user)
async with aiohttp.ClientSession(trust_env=True) as session:
async with session.post(
f"{url}/api/embed", headers=headers, json=form_data
) as r:
r.raise_for_status()
data = await r.json()
if "embeddings" in data:
return data["embeddings"]
else:
raise Exception("Something went wrong :/")
except Exception as e:
log.exception(f"Error generating ollama batch embeddings: {e}")
return None
def get_embedding_function( def get_embedding_function(
embedding_engine, embedding_engine,
embedding_model, embedding_model,
@@ -519,13 +782,24 @@ def get_embedding_function(
key, key,
embedding_batch_size, embedding_batch_size,
azure_api_version=None, azure_api_version=None,
): enable_async=True,
) -> Awaitable:
if embedding_engine == "": if embedding_engine == "":
return lambda query, prefix=None, user=None: embedding_function.encode( # Sentence transformers: CPU-bound sync operation
query, **({"prompt": prefix} if prefix else {}) async def async_embedding_function(query, prefix=None, user=None):
).tolist() return await asyncio.to_thread(
(
lambda query, prefix=None: embedding_function.encode(
query, **({"prompt": prefix} if prefix else {})
).tolist()
),
query,
prefix,
)
return async_embedding_function
elif embedding_engine in ["ollama", "openai", "azure_openai"]: elif embedding_engine in ["ollama", "openai", "azure_openai"]:
func = lambda query, prefix=None, user=None: generate_embeddings( embedding_function = lambda query, prefix=None, user=None: generate_embeddings(
engine=embedding_engine, engine=embedding_engine,
model=embedding_model, model=embedding_model,
text=query, text=query,
@@ -536,29 +810,100 @@ def get_embedding_function(
azure_api_version=azure_api_version, azure_api_version=azure_api_version,
) )
def generate_multiple(query, prefix, user, func): async def async_embedding_function(query, prefix=None, user=None):
if isinstance(query, list): if isinstance(query, list):
embeddings = [] # Create batches
for i in range(0, len(query), embedding_batch_size): batches = [
batch_embeddings = func( query[i : i + embedding_batch_size]
query[i : i + embedding_batch_size], for i in range(0, len(query), embedding_batch_size)
prefix=prefix, ]
user=user,
)
if enable_async:
log.debug(
f"generate_multiple_async: Processing {len(batches)} batches in parallel"
)
# Execute all batches in parallel
tasks = [
embedding_function(batch, prefix=prefix, user=user)
for batch in batches
]
batch_results = await asyncio.gather(*tasks)
else:
log.debug(
f"generate_multiple_async: Processing {len(batches)} batches sequentially"
)
batch_results = []
for batch in batches:
batch_results.append(
await embedding_function(batch, prefix=prefix, user=user)
)
# Flatten results
embeddings = []
for batch_embeddings in batch_results:
if isinstance(batch_embeddings, list): if isinstance(batch_embeddings, list):
embeddings.extend(batch_embeddings) embeddings.extend(batch_embeddings)
log.debug(
f"generate_multiple_async: Generated {len(embeddings)} embeddings from {len(batches)} parallel batches"
)
return embeddings return embeddings
else: else:
return func(query, prefix, user) return await embedding_function(query, prefix, user)
return lambda query, prefix=None, user=None: generate_multiple( return async_embedding_function
query, prefix, user, func
)
else: else:
raise ValueError(f"Unknown embedding engine: {embedding_engine}") raise ValueError(f"Unknown embedding engine: {embedding_engine}")
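Whatever the engine, the factory now returns a coroutine function: sentence-transformers work is pushed off the loop with asyncio.to_thread, and remote engines batch through asyncio.gather when enable_async is on. A usage sketch (embedding_function below is whatever get_embedding_function returned; inputs are placeholders):

# Single string -> one vector; list -> list of vectors, split into batches and,
# with enable_async=True, embedded in parallel via asyncio.gather.
vector = await embedding_function("hello world", prefix=None, user=None)
vectors = await embedding_function(["chunk one", "chunk two"], prefix=None, user=None)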
async def generate_embeddings(
engine: str,
model: str,
text: Union[str, list[str]],
prefix: Union[str, None] = None,
**kwargs,
):
url = kwargs.get("url", "")
key = kwargs.get("key", "")
user = kwargs.get("user")
if prefix is not None and RAG_EMBEDDING_PREFIX_FIELD_NAME is None:
if isinstance(text, list):
text = [f"{prefix}{text_element}" for text_element in text]
else:
text = f"{prefix}{text}"
if engine == "ollama":
embeddings = await agenerate_ollama_batch_embeddings(
**{
"model": model,
"texts": text if isinstance(text, list) else [text],
"url": url,
"key": key,
"prefix": prefix,
"user": user,
}
)
return embeddings[0] if isinstance(text, str) else embeddings
elif engine == "openai":
embeddings = await agenerate_openai_batch_embeddings(
model, text if isinstance(text, list) else [text], url, key, prefix, user
)
return embeddings[0] if isinstance(text, str) else embeddings
elif engine == "azure_openai":
azure_api_version = kwargs.get("azure_api_version", "")
embeddings = await agenerate_azure_openai_batch_embeddings(
model,
text if isinstance(text, list) else [text],
url,
key,
azure_api_version,
prefix,
user,
)
return embeddings[0] if isinstance(text, str) else embeddings
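generate_embeddings mirrors the input shape: a single string yields one vector, a list yields a list, and when no dedicated prefix field is configured the prefix is concatenated onto the text instead. For example (the Ollama URL and model are placeholders):

vec = await generate_embeddings(
    engine="ollama", model="nomic-embed-text",
    text="hello", url="http://localhost:11434", key="",
)
vecs = await generate_embeddings(
    engine="ollama", model="nomic-embed-text",
    text=["a", "b"], url="http://localhost:11434", key="",
)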
def get_reranking_function(reranking_engine, reranking_model, reranking_function): def get_reranking_function(reranking_engine, reranking_model, reranking_function):
if reranking_function is None: if reranking_function is None:
return None return None
@@ -572,7 +917,7 @@ def get_reranking_function(reranking_engine, reranking_model, reranking_function
) )
def get_sources_from_items( async def get_sources_from_items(
request, request,
items, items,
queries, queries,
@@ -800,7 +1145,7 @@ def get_sources_from_items(
query_result = None # Initialize to None query_result = None # Initialize to None
if hybrid_search: if hybrid_search:
try: try:
query_result = query_collection_with_hybrid_search( query_result = await query_collection_with_hybrid_search(
collection_names=collection_names, collection_names=collection_names,
queries=queries, queries=queries,
embedding_function=embedding_function, embedding_function=embedding_function,
@@ -818,7 +1163,7 @@ def get_sources_from_items(
# fallback to non-hybrid search # fallback to non-hybrid search
if not hybrid_search and query_result is None: if not hybrid_search and query_result is None:
query_result = query_collection( query_result = await query_collection(
collection_names=collection_names, collection_names=collection_names,
queries=queries, queries=queries,
embedding_function=embedding_function, embedding_function=embedding_function,
@@ -894,199 +1239,6 @@ def get_model_path(model: str, update_model: bool = False):
return model return model
def generate_openai_batch_embeddings(
model: str,
texts: list[str],
url: str = "https://api.openai.com/v1",
key: str = "",
prefix: str = None,
user: UserModel = None,
) -> Optional[list[list[float]]]:
try:
log.debug(
f"generate_openai_batch_embeddings:model {model} batch size: {len(texts)}"
)
json_data = {"input": texts, "model": model}
if isinstance(RAG_EMBEDDING_PREFIX_FIELD_NAME, str) and isinstance(prefix, str):
json_data[RAG_EMBEDDING_PREFIX_FIELD_NAME] = prefix
r = requests.post(
f"{url}/embeddings",
headers={
"Content-Type": "application/json",
"Authorization": f"Bearer {key}",
**(
{
"X-OpenWebUI-User-Name": quote(user.name, safe=" "),
"X-OpenWebUI-User-Id": user.id,
"X-OpenWebUI-User-Email": user.email,
"X-OpenWebUI-User-Role": user.role,
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user
else {}
),
},
json=json_data,
)
r.raise_for_status()
data = r.json()
if "data" in data:
return [elem["embedding"] for elem in data["data"]]
else:
raise "Something went wrong :/"
except Exception as e:
log.exception(f"Error generating openai batch embeddings: {e}")
return None
def generate_azure_openai_batch_embeddings(
model: str,
texts: list[str],
url: str,
key: str = "",
version: str = "",
prefix: str = None,
user: UserModel = None,
) -> Optional[list[list[float]]]:
try:
log.debug(
f"generate_azure_openai_batch_embeddings:deployment {model} batch size: {len(texts)}"
)
json_data = {"input": texts}
if isinstance(RAG_EMBEDDING_PREFIX_FIELD_NAME, str) and isinstance(prefix, str):
json_data[RAG_EMBEDDING_PREFIX_FIELD_NAME] = prefix
url = f"{url}/openai/deployments/{model}/embeddings?api-version={version}"
for _ in range(5):
r = requests.post(
url,
headers={
"Content-Type": "application/json",
"api-key": key,
**(
{
"X-OpenWebUI-User-Name": quote(user.name, safe=" "),
"X-OpenWebUI-User-Id": user.id,
"X-OpenWebUI-User-Email": user.email,
"X-OpenWebUI-User-Role": user.role,
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user
else {}
),
},
json=json_data,
)
if r.status_code == 429:
retry = float(r.headers.get("Retry-After", "1"))
time.sleep(retry)
continue
r.raise_for_status()
data = r.json()
if "data" in data:
return [elem["embedding"] for elem in data["data"]]
else:
raise Exception("Something went wrong :/")
return None
except Exception as e:
log.exception(f"Error generating azure openai batch embeddings: {e}")
return None
def generate_ollama_batch_embeddings(
model: str,
texts: list[str],
url: str,
key: str = "",
prefix: str = None,
user: UserModel = None,
) -> Optional[list[list[float]]]:
try:
log.debug(
f"generate_ollama_batch_embeddings:model {model} batch size: {len(texts)}"
)
json_data = {"input": texts, "model": model}
if isinstance(RAG_EMBEDDING_PREFIX_FIELD_NAME, str) and isinstance(prefix, str):
json_data[RAG_EMBEDDING_PREFIX_FIELD_NAME] = prefix
r = requests.post(
f"{url}/api/embed",
headers={
"Content-Type": "application/json",
"Authorization": f"Bearer {key}",
**(
{
"X-OpenWebUI-User-Name": quote(user.name, safe=" "),
"X-OpenWebUI-User-Id": user.id,
"X-OpenWebUI-User-Email": user.email,
"X-OpenWebUI-User-Role": user.role,
}
if ENABLE_FORWARD_USER_INFO_HEADERS
else {}
),
},
json=json_data,
)
r.raise_for_status()
data = r.json()
if "embeddings" in data:
return data["embeddings"]
else:
raise "Something went wrong :/"
except Exception as e:
log.exception(f"Error generating ollama batch embeddings: {e}")
return None
def generate_embeddings(
engine: str,
model: str,
text: Union[str, list[str]],
prefix: Union[str, None] = None,
**kwargs,
):
url = kwargs.get("url", "")
key = kwargs.get("key", "")
user = kwargs.get("user")
if prefix is not None and RAG_EMBEDDING_PREFIX_FIELD_NAME is None:
if isinstance(text, list):
text = [f"{prefix}{text_element}" for text_element in text]
else:
text = f"{prefix}{text}"
if engine == "ollama":
embeddings = generate_ollama_batch_embeddings(
**{
"model": model,
"texts": text if isinstance(text, list) else [text],
"url": url,
"key": key,
"prefix": prefix,
"user": user,
}
)
return embeddings[0] if isinstance(text, str) else embeddings
elif engine == "openai":
embeddings = generate_openai_batch_embeddings(
model, text if isinstance(text, list) else [text], url, key, prefix, user
)
return embeddings[0] if isinstance(text, str) else embeddings
elif engine == "azure_openai":
azure_api_version = kwargs.get("azure_api_version", "")
embeddings = generate_azure_openai_batch_embeddings(
model,
text if isinstance(text, list) else [text],
url,
key,
azure_api_version,
prefix,
user,
)
return embeddings[0] if isinstance(text, str) else embeddings
import operator import operator
from typing import Optional, Sequence from typing import Optional, Sequence
@@ -1109,6 +1261,25 @@ class RerankCompressor(BaseDocumentCompressor):
documents: Sequence[Document], documents: Sequence[Document],
query: str, query: str,
callbacks: Optional[Callbacks] = None, callbacks: Optional[Callbacks] = None,
) -> Sequence[Document]:
"""Compress retrieved documents given the query context.
Args:
documents: The retrieved documents.
query: The query context.
callbacks: Optional callbacks to run during compression.
Returns:
The compressed documents.
"""
return []
async def acompress_documents(
self,
documents: Sequence[Document],
query: str,
callbacks: Optional[Callbacks] = None,
) -> Sequence[Document]: ) -> Sequence[Document]:
reranking = self.reranking_function is not None reranking = self.reranking_function is not None
@@ -1118,8 +1289,10 @@ class RerankCompressor(BaseDocumentCompressor):
else: else:
from sentence_transformers import util from sentence_transformers import util
query_embedding = self.embedding_function(query, RAG_EMBEDDING_QUERY_PREFIX) query_embedding = await self.embedding_function(
document_embedding = self.embedding_function( query, RAG_EMBEDDING_QUERY_PREFIX
)
document_embedding = await self.embedding_function(
[doc.page_content for doc in documents], RAG_EMBEDDING_CONTENT_PREFIX [doc.page_content for doc in documents], RAG_EMBEDDING_CONTENT_PREFIX
) )
scores = util.cos_sim(query_embedding, document_embedding)[0] scores = util.cos_sim(query_embedding, document_embedding)[0]

View file

@@ -10,22 +10,27 @@ from open_webui.retrieval.vector.main import (
GetResult, GetResult,
) )
from open_webui.retrieval.vector.utils import process_metadata from open_webui.retrieval.vector.utils import process_metadata
from open_webui.config import WEAVIATE_HTTP_HOST, WEAVIATE_HTTP_PORT, WEAVIATE_GRPC_PORT, WEAVIATE_API_KEY from open_webui.config import (
WEAVIATE_HTTP_HOST,
WEAVIATE_HTTP_PORT,
WEAVIATE_GRPC_PORT,
WEAVIATE_API_KEY,
)
def _convert_uuids_to_strings(obj: Any) -> Any: def _convert_uuids_to_strings(obj: Any) -> Any:
""" """
Recursively convert UUID objects to strings in nested data structures. Recursively convert UUID objects to strings in nested data structures.
This function handles: This function handles:
- UUID objects -> string - UUID objects -> string
- Dictionaries with UUID values - Dictionaries with UUID values
- Lists/Tuples with UUID values - Lists/Tuples with UUID values
- Nested combinations of the above - Nested combinations of the above
Args: Args:
obj: Any object that might contain UUIDs obj: Any object that might contain UUIDs
Returns: Returns:
The same object structure with UUIDs converted to strings The same object structure with UUIDs converted to strings
""" """
@@ -41,23 +46,23 @@ def _convert_uuids_to_strings(obj: Any) -> Any:
return obj return obj
class WeaviateClient(VectorDBBase): class WeaviateClient(VectorDBBase):
def __init__(self): def __init__(self):
self.url = WEAVIATE_HTTP_HOST self.url = WEAVIATE_HTTP_HOST
try: try:
# Build connection parameters # Build connection parameters
connection_params = { connection_params = {
"host": WEAVIATE_HTTP_HOST, "host": WEAVIATE_HTTP_HOST,
"port": WEAVIATE_HTTP_PORT, "port": WEAVIATE_HTTP_PORT,
"grpc_port": WEAVIATE_GRPC_PORT, "grpc_port": WEAVIATE_GRPC_PORT,
} }
# Only add auth_credentials if WEAVIATE_API_KEY exists and is not empty # Only add auth_credentials if WEAVIATE_API_KEY exists and is not empty
if WEAVIATE_API_KEY: if WEAVIATE_API_KEY:
connection_params["auth_credentials"] = weaviate.classes.init.Auth.api_key(WEAVIATE_API_KEY) connection_params["auth_credentials"] = (
weaviate.classes.init.Auth.api_key(WEAVIATE_API_KEY)
)
self.client = weaviate.connect_to_local(**connection_params) self.client = weaviate.connect_to_local(**connection_params)
self.client.connect() self.client.connect()
except Exception as e: except Exception as e:
@@ -73,16 +78,18 @@ class WeaviateClient(VectorDBBase):
# The name can only contain letters, numbers, and the underscore (_) character. Spaces are not allowed. # The name can only contain letters, numbers, and the underscore (_) character. Spaces are not allowed.
# Replace hyphens with underscores and keep only alphanumeric characters # Replace hyphens with underscores and keep only alphanumeric characters
name = re.sub(r'[^a-zA-Z0-9_]', '', collection_name.replace("-", "_")) name = re.sub(r"[^a-zA-Z0-9_]", "", collection_name.replace("-", "_"))
name = name.strip("_") name = name.strip("_")
if not name: if not name:
raise ValueError("Could not sanitize collection name to be a valid Weaviate class name") raise ValueError(
"Could not sanitize collection name to be a valid Weaviate class name"
)
# Ensure it starts with a letter and is capitalized # Ensure it starts with a letter and is capitalized
if not name[0].isalpha(): if not name[0].isalpha():
name = "C" + name name = "C" + name
return name[0].upper() + name[1:] return name[0].upper() + name[1:]
def has_collection(self, collection_name: str) -> bool: def has_collection(self, collection_name: str) -> bool:
@@ -99,8 +106,10 @@ class WeaviateClient(VectorDBBase):
name=collection_name, name=collection_name,
vector_config=weaviate.classes.config.Configure.Vectors.self_provided(), vector_config=weaviate.classes.config.Configure.Vectors.self_provided(),
properties=[ properties=[
weaviate.classes.config.Property(name="text", data_type=weaviate.classes.config.DataType.TEXT), weaviate.classes.config.Property(
] name="text", data_type=weaviate.classes.config.DataType.TEXT
),
],
) )
def insert(self, collection_name: str, items: List[VectorItem]) -> None: def insert(self, collection_name: str, items: List[VectorItem]) -> None:
@@ -109,21 +118,21 @@ class WeaviateClient(VectorDBBase):
self._create_collection(sane_collection_name) self._create_collection(sane_collection_name)
collection = self.client.collections.get(sane_collection_name) collection = self.client.collections.get(sane_collection_name)
with collection.batch.fixed_size(batch_size=100) as batch: with collection.batch.fixed_size(batch_size=100) as batch:
for item in items: for item in items:
item_uuid = str(uuid.uuid4()) if not item["id"] else str(item["id"]) item_uuid = str(uuid.uuid4()) if not item["id"] else str(item["id"])
properties = {"text": item["text"]} properties = {"text": item["text"]}
if item["metadata"]: if item["metadata"]:
clean_metadata = _convert_uuids_to_strings(process_metadata(item["metadata"])) clean_metadata = _convert_uuids_to_strings(
process_metadata(item["metadata"])
)
clean_metadata.pop("text", None) clean_metadata.pop("text", None)
properties.update(clean_metadata) properties.update(clean_metadata)
batch.add_object( batch.add_object(
properties=properties, properties=properties, uuid=item_uuid, vector=item["vector"]
uuid=item_uuid,
vector=item["vector"]
) )
def upsert(self, collection_name: str, items: List[VectorItem]) -> None: def upsert(self, collection_name: str, items: List[VectorItem]) -> None:
@@ -132,21 +141,21 @@ class WeaviateClient(VectorDBBase):
self._create_collection(sane_collection_name) self._create_collection(sane_collection_name)
collection = self.client.collections.get(sane_collection_name) collection = self.client.collections.get(sane_collection_name)
with collection.batch.fixed_size(batch_size=100) as batch: with collection.batch.fixed_size(batch_size=100) as batch:
for item in items: for item in items:
item_uuid = str(item["id"]) if item["id"] else None item_uuid = str(item["id"]) if item["id"] else None
properties = {"text": item["text"]} properties = {"text": item["text"]}
if item["metadata"]: if item["metadata"]:
clean_metadata = _convert_uuids_to_strings(process_metadata(item["metadata"])) clean_metadata = _convert_uuids_to_strings(
process_metadata(item["metadata"])
)
clean_metadata.pop("text", None) clean_metadata.pop("text", None)
properties.update(clean_metadata) properties.update(clean_metadata)
batch.add_object( batch.add_object(
properties=properties, properties=properties, uuid=item_uuid, vector=item["vector"]
uuid=item_uuid,
vector=item["vector"]
) )
def search( def search(
@@ -157,9 +166,14 @@ class WeaviateClient(VectorDBBase):
return None return None
collection = self.client.collections.get(sane_collection_name) collection = self.client.collections.get(sane_collection_name)
result_ids, result_documents, result_metadatas, result_distances = [], [], [], [] result_ids, result_documents, result_metadatas, result_distances = (
[],
[],
[],
[],
)
for vector_embedding in vectors: for vector_embedding in vectors:
try: try:
response = collection.query.near_vector( response = collection.query.near_vector(
@@ -167,21 +181,28 @@ class WeaviateClient(VectorDBBase):
limit=limit, limit=limit,
return_metadata=weaviate.classes.query.MetadataQuery(distance=True), return_metadata=weaviate.classes.query.MetadataQuery(distance=True),
) )
ids = [str(obj.uuid) for obj in response.objects] ids = [str(obj.uuid) for obj in response.objects]
documents = [] documents = []
metadatas = [] metadatas = []
distances = [] distances = []
for obj in response.objects: for obj in response.objects:
properties = dict(obj.properties) if obj.properties else {} properties = dict(obj.properties) if obj.properties else {}
documents.append(properties.pop("text", "")) documents.append(properties.pop("text", ""))
metadatas.append(_convert_uuids_to_strings(properties)) metadatas.append(_convert_uuids_to_strings(properties))
# Weaviate has cosine distance, 2 (worst) -> 0 (best). Re-ordering to 0 -> 1 # Weaviate has cosine distance, 2 (worst) -> 0 (best). Re-ordering to 0 -> 1
raw_distances = [obj.metadata.distance if obj.metadata and obj.metadata.distance else 2.0 for obj in response.objects] raw_distances = [
(
obj.metadata.distance
if obj.metadata and obj.metadata.distance
else 2.0
)
for obj in response.objects
]
distances = [(2 - dist) / 2 for dist in raw_distances] distances = [(2 - dist) / 2 for dist in raw_distances]
result_ids.append(ids) result_ids.append(ids)
result_documents.append(documents) result_documents.append(documents)
result_metadatas.append(metadatas) result_metadatas.append(metadatas)
@@ -191,7 +212,7 @@ class WeaviateClient(VectorDBBase):
result_documents.append([]) result_documents.append([])
result_metadatas.append([]) result_metadatas.append([])
result_distances.append([]) result_distances.append([])
return SearchResult( return SearchResult(
**{ **{
"ids": result_ids, "ids": result_ids,
@@ -209,16 +230,26 @@ class WeaviateClient(VectorDBBase):
return None return None
collection = self.client.collections.get(sane_collection_name) collection = self.client.collections.get(sane_collection_name)
weaviate_filter = None weaviate_filter = None
if filter: if filter:
for key, value in filter.items(): for key, value in filter.items():
prop_filter = weaviate.classes.query.Filter.by_property(name=key).equal(value) prop_filter = weaviate.classes.query.Filter.by_property(name=key).equal(
weaviate_filter = prop_filter if weaviate_filter is None else weaviate.classes.query.Filter.all_of([weaviate_filter, prop_filter]) value
)
weaviate_filter = (
prop_filter
if weaviate_filter is None
else weaviate.classes.query.Filter.all_of(
[weaviate_filter, prop_filter]
)
)
try: try:
response = collection.query.fetch_objects(filters=weaviate_filter, limit=limit) response = collection.query.fetch_objects(
filters=weaviate_filter, limit=limit
)
ids = [str(obj.uuid) for obj in response.objects] ids = [str(obj.uuid) for obj in response.objects]
documents = [] documents = []
metadatas = [] metadatas = []
@@ -252,10 +283,10 @@ class WeaviateClient(VectorDBBase):
properties = dict(item.properties) if item.properties else {} properties = dict(item.properties) if item.properties else {}
documents.append(properties.pop("text", "")) documents.append(properties.pop("text", ""))
metadatas.append(_convert_uuids_to_strings(properties)) metadatas.append(_convert_uuids_to_strings(properties))
if not ids: if not ids:
return None return None
return GetResult( return GetResult(
**{ **{
"ids": [ids], "ids": [ids],
@@ -285,9 +316,17 @@ class WeaviateClient(VectorDBBase):
elif filter: elif filter:
weaviate_filter = None weaviate_filter = None
for key, value in filter.items(): for key, value in filter.items():
prop_filter = weaviate.classes.query.Filter.by_property(name=key).equal(value) prop_filter = weaviate.classes.query.Filter.by_property(
weaviate_filter = prop_filter if weaviate_filter is None else weaviate.classes.query.Filter.all_of([weaviate_filter, prop_filter]) name=key
).equal(value)
weaviate_filter = (
prop_filter
if weaviate_filter is None
else weaviate.classes.query.Filter.all_of(
[weaviate_filter, prop_filter]
)
)
if weaviate_filter: if weaviate_filter:
collection.data.delete_many(where=weaviate_filter) collection.data.delete_many(where=weaviate_filter)
except Exception: except Exception:
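Worth noting in the search path above: Weaviate reports cosine distance in [0, 2] with 0 best, and the client rescales it into a 0-to-1 similarity. A worked check:

raw_distances = [0.0, 1.0, 2.0]  # identical, orthogonal, opposite
similarities = [(2 - d) / 2 for d in raw_distances]
assert similarities == [1.0, 0.5, 0.0]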

View file

@@ -5,7 +5,8 @@ from urllib.parse import urlparse
from pydantic import BaseModel from pydantic import BaseModel
from open_webui.retrieval.web.utils import is_string_allowed, resolve_hostname from open_webui.retrieval.web.utils import resolve_hostname
from open_webui.utils.misc import is_string_allowed
def get_filtered_results(results, filter_list): def get_filtered_results(results, filter_list):

View file

@@ -42,7 +42,7 @@ from open_webui.config import (
WEB_FETCH_FILTER_LIST, WEB_FETCH_FILTER_LIST,
) )
from open_webui.env import SRC_LOG_LEVELS from open_webui.env import SRC_LOG_LEVELS
from open_webui.utils.misc import is_string_allowed
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
log.setLevel(SRC_LOG_LEVELS["RAG"]) log.setLevel(SRC_LOG_LEVELS["RAG"])
@@ -59,39 +59,6 @@ def resolve_hostname(hostname):
return ipv4_addresses, ipv6_addresses return ipv4_addresses, ipv6_addresses
def get_allow_block_lists(filter_list):
allow_list = []
block_list = []
if filter_list:
for d in filter_list:
if d.startswith("!"):
# Domains starting with "!" → blocked
block_list.append(d[1:])
else:
# Domains starting without "!" → allowed
allow_list.append(d)
return allow_list, block_list
def is_string_allowed(string: str, filter_list: Optional[list[str]] = None) -> bool:
if not filter_list:
return True
allow_list, block_list = get_allow_block_lists(filter_list)
# If allow list is non-empty, require domain to match one of them
if allow_list:
if not any(string.endswith(allowed) for allowed in allow_list):
return False
# Block list always removes matches
if any(string.endswith(blocked) for blocked in block_list):
return False
return True
def validate_url(url: Union[str, Sequence[str]]): def validate_url(url: Union[str, Sequence[str]]):
if isinstance(url, str): if isinstance(url, str):
if isinstance(validators.url(url), validators.ValidationError): if isinstance(validators.url(url), validators.ValidationError):
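The helper removed above now lives in open_webui.utils.misc (imported by both files in this section). Its semantics are unchanged: entries prefixed with "!" block, everything else allows, and a non-empty allow list is exclusive. For example:

filter_list = ["example.com", "!evil.example.com"]
assert is_string_allowed("docs.example.com", filter_list)      # matches the allow list
assert not is_string_allowed("evil.example.com", filter_list)  # the block entry wins
assert is_string_allowed("anything.org", None)                 # no filter -> allow all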

View file

@@ -16,7 +16,6 @@ import aiohttp
import aiofiles import aiofiles
import requests import requests
import mimetypes import mimetypes
from urllib.parse import urljoin, quote
from fastapi import ( from fastapi import (
Depends, Depends,
@@ -1026,7 +1025,9 @@ def transcription_handler(request, file_path, metadata, user=None):
) )
def transcribe(request: Request, file_path: str, metadata: Optional[dict] = None, user=None): def transcribe(
request: Request, file_path: str, metadata: Optional[dict] = None, user=None
):
log.info(f"transcribe: {file_path} {metadata}") log.info(f"transcribe: {file_path} {metadata}")
if is_audio_conversion_required(file_path): if is_audio_conversion_required(file_path):
@@ -1053,7 +1054,9 @@ def transcribe(request: Request, file_path: str, metadata: Optional[dict] = None
with ThreadPoolExecutor() as executor: with ThreadPoolExecutor() as executor:
# Submit tasks for each chunk_path # Submit tasks for each chunk_path
futures = [ futures = [
executor.submit(transcription_handler, request, chunk_path, metadata, user) executor.submit(
transcription_handler, request, chunk_path, metadata, user
)
for chunk_path in chunk_paths for chunk_path in chunk_paths
] ]
# Gather results as they complete # Gather results as they complete

View file

@@ -4,6 +4,7 @@ import time
import datetime import datetime
import logging import logging
from aiohttp import ClientSession from aiohttp import ClientSession
import urllib
from open_webui.models.auths import ( from open_webui.models.auths import (
AddUserForm, AddUserForm,
@@ -499,6 +500,10 @@ async def signin(request: Request, response: Response, form_data: SigninForm):
if WEBUI_AUTH_TRUSTED_NAME_HEADER: if WEBUI_AUTH_TRUSTED_NAME_HEADER:
name = request.headers.get(WEBUI_AUTH_TRUSTED_NAME_HEADER, email) name = request.headers.get(WEBUI_AUTH_TRUSTED_NAME_HEADER, email)
try:
name = urllib.parse.unquote(name, encoding="utf-8")
except Exception as e:
pass
if not Users.get_user_by_email(email.lower()): if not Users.get_user_by_email(email.lower()):
await signup( await signup(
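Trusted-name headers arriving from a reverse proxy are often percent-encoded; the unquote above restores the original UTF-8 name and silently falls back when decoding fails. For example:

import urllib.parse

assert urllib.parse.unquote("Jos%C3%A9%20Garc%C3%ADa", encoding="utf-8") == "José García"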
@@ -694,11 +699,11 @@ async def signup(request: Request, response: Response, form_data: SignupForm):
if not has_users: if not has_users:
# Disable signup after the first user is created # Disable signup after the first user is created
request.app.state.config.ENABLE_SIGNUP = False request.app.state.config.ENABLE_SIGNUP = False
default_group_id = getattr(request.app.state.config, 'DEFAULT_GROUP_ID', "") default_group_id = getattr(request.app.state.config, "DEFAULT_GROUP_ID", "")
if default_group_id and default_group_id: if default_group_id and default_group_id:
Groups.add_users_to_group(default_group_id, [user.id]) Groups.add_users_to_group(default_group_id, [user.id])
return { return {
"token": token, "token": token,
"token_type": "Bearer", "token_type": "Bearer",
@@ -928,7 +933,7 @@ class AdminConfig(BaseModel):
@router.post("/admin/config") @router.post("/admin/config")
async def update_admin_config( async def update_admin_config(
request: Request, form_data: AdminConfig, user=Depends(get_admin_user) request: Request, form_data: AdminConfig, user=Depends(get_admin_user)
): ):
request.app.state.config.SHOW_ADMIN_DETAILS = form_data.SHOW_ADMIN_DETAILS request.app.state.config.SHOW_ADMIN_DETAILS = form_data.SHOW_ADMIN_DETAILS
request.app.state.config.WEBUI_URL = form_data.WEBUI_URL request.app.state.config.WEBUI_URL = form_data.WEBUI_URL
request.app.state.config.ENABLE_SIGNUP = form_data.ENABLE_SIGNUP request.app.state.config.ENABLE_SIGNUP = form_data.ENABLE_SIGNUP

View file

@@ -7,6 +7,7 @@ from open_webui.socket.main import get_event_emitter
from open_webui.models.chats import ( from open_webui.models.chats import (
ChatForm, ChatForm,
ChatImportForm, ChatImportForm,
ChatsImportForm,
ChatResponse, ChatResponse,
Chats, Chats,
ChatTitleIdResponse, ChatTitleIdResponse,
@@ -142,26 +143,15 @@ async def create_new_chat(form_data: ChatForm, user=Depends(get_verified_user)):
############################ ############################
# ImportChat # ImportChats
############################ ############################
@router.post("/import", response_model=Optional[ChatResponse]) @router.post("/import", response_model=list[ChatResponse])
async def import_chat(form_data: ChatImportForm, user=Depends(get_verified_user)): async def import_chats(form_data: ChatsImportForm, user=Depends(get_verified_user)):
try: try:
chat = Chats.import_chat(user.id, form_data) chats = Chats.import_chats(user.id, form_data.chats)
if chat: return chats
tags = chat.meta.get("tags", [])
for tag_id in tags:
tag_id = tag_id.replace(" ", "_").lower()
tag_name = " ".join([word.capitalize() for word in tag_id.split("_")])
if (
tag_id != "none"
and Tags.get_tag_by_name_and_user_id(tag_name, user.id) is None
):
Tags.insert_new_tag(tag_name, user.id)
return ChatResponse(**chat.model_dump())
except Exception as e: except Exception as e:
log.exception(e) log.exception(e)
raise HTTPException( raise HTTPException(
@@ -228,7 +218,7 @@ async def get_chat_list_by_folder_id(
folder_id: str, page: Optional[int] = 1, user=Depends(get_verified_user) folder_id: str, page: Optional[int] = 1, user=Depends(get_verified_user)
): ):
try: try:
limit = 60 limit = 10
skip = (page - 1) * limit skip = (page - 1) * limit
return [ return [
@@ -658,19 +648,28 @@ async def clone_chat_by_id(
"title": form_data.title if form_data.title else f"Clone of {chat.title}", "title": form_data.title if form_data.title else f"Clone of {chat.title}",
} }
chat = Chats.import_chat( chats = Chats.import_chats(
user.id, user.id,
ChatImportForm( [
**{ ChatImportForm(
"chat": updated_chat, **{
"meta": chat.meta, "chat": updated_chat,
"pinned": chat.pinned, "meta": chat.meta,
"folder_id": chat.folder_id, "pinned": chat.pinned,
} "folder_id": chat.folder_id,
), }
)
],
) )
return ChatResponse(**chat.model_dump()) if chats:
chat = chats[0]
return ChatResponse(**chat.model_dump())
else:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=ERROR_MESSAGES.DEFAULT(),
)
else: else:
raise HTTPException( raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED, detail=ERROR_MESSAGES.DEFAULT() status_code=status.HTTP_401_UNAUTHORIZED, detail=ERROR_MESSAGES.DEFAULT()
@@ -698,18 +697,28 @@ async def clone_shared_chat_by_id(id: str, user=Depends(get_verified_user)):
"title": f"Clone of {chat.title}", "title": f"Clone of {chat.title}",
} }
chat = Chats.import_chat( chats = Chats.import_chats(
user.id, user.id,
ChatImportForm( [
**{ ChatImportForm(
"chat": updated_chat, **{
"meta": chat.meta, "chat": updated_chat,
"pinned": chat.pinned, "meta": chat.meta,
"folder_id": chat.folder_id, "pinned": chat.pinned,
} "folder_id": chat.folder_id,
), }
)
],
) )
return ChatResponse(**chat.model_dump())
if chats:
chat = chats[0]
return ChatResponse(**chat.model_dump())
else:
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=ERROR_MESSAGES.DEFAULT(),
)
else: else:
raise HTTPException( raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED, detail=ERROR_MESSAGES.DEFAULT() status_code=status.HTTP_401_UNAUTHORIZED, detail=ERROR_MESSAGES.DEFAULT()
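The import endpoint now takes a list and returns a list, and both clone handlers go through the same bulk path. A hedged client-side sketch (host, token, chat body, and the /api/v1 route prefix are assumptions, not confirmed by this diff):

import requests

payload = {"chats": [{"chat": {"title": "Imported chat"}, "meta": {}, "pinned": False, "folder_id": None}]}
r = requests.post(
    "http://localhost:8080/api/v1/chats/import",
    headers={"Authorization": "Bearer <token>"},
    json=payload,
)
imported = r.json()  # list of ChatResponse objects, one per imported chat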

View file

@@ -258,7 +258,10 @@ async def update_folder_is_expanded_by_id(
@router.delete("/{id}") @router.delete("/{id}")
async def delete_folder_by_id( async def delete_folder_by_id(
request: Request, id: str, user=Depends(get_verified_user) request: Request,
id: str,
delete_contents: Optional[bool] = True,
user=Depends(get_verified_user),
): ):
if Chats.count_chats_by_folder_id_and_user_id(id, user.id): if Chats.count_chats_by_folder_id_and_user_id(id, user.id):
chat_delete_permission = has_permission( chat_delete_permission = has_permission(
@@ -277,8 +280,14 @@ async def delete_folder_by_id(
if folder: if folder:
try: try:
folder_ids = Folders.delete_folder_by_id_and_user_id(id, user.id) folder_ids = Folders.delete_folder_by_id_and_user_id(id, user.id)
for folder_id in folder_ids: for folder_id in folder_ids:
Chats.delete_chats_by_user_id_and_folder_id(user.id, folder_id) if delete_contents:
Chats.delete_chats_by_user_id_and_folder_id(user.id, folder_id)
else:
Chats.move_chats_by_user_id_and_folder_id(
user.id, folder_id, None
)
return True return True
except Exception as e: except Exception as e:
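delete_contents defaults to True, preserving the old destructive behavior; passing false moves the folder's chats out instead of deleting them. A sketch of both calls (host, ids, token, and the /api/v1 prefix are assumptions):

import requests

base = "http://localhost:8080/api/v1/folders"
headers = {"Authorization": "Bearer <token>"}
requests.delete(f"{base}/<folder-id>", headers=headers)                         # delete chats too
requests.delete(f"{base}/<folder-id>?delete_contents=false", headers=headers)   # keep the chats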

View file

@@ -549,9 +549,7 @@ async def image_generations(
if ENABLE_FORWARD_USER_INFO_HEADERS: if ENABLE_FORWARD_USER_INFO_HEADERS:
headers = include_user_info_headers(headers, user) headers = include_user_info_headers(headers, user)
url = ( url = f"{request.app.state.config.IMAGES_OPENAI_API_BASE_URL}/images/generations"
f"{request.app.state.config.IMAGES_OPENAI_API_BASE_URL}/images/generations",
)
if request.app.state.config.IMAGES_OPENAI_API_VERSION: if request.app.state.config.IMAGES_OPENAI_API_VERSION:
url = f"{url}?api-version={request.app.state.config.IMAGES_OPENAI_API_VERSION}" url = f"{url}?api-version={request.app.state.config.IMAGES_OPENAI_API_VERSION}"
@@ -838,13 +836,13 @@ async def image_edits(
except Exception as e: except Exception as e:
raise HTTPException(status_code=400, detail=ERROR_MESSAGES.DEFAULT(e)) raise HTTPException(status_code=400, detail=ERROR_MESSAGES.DEFAULT(e))
def get_image_file_item(base64_string): def get_image_file_item(base64_string, param_name="image"):
data = base64_string data = base64_string
header, encoded = data.split(",", 1) header, encoded = data.split(",", 1)
mime_type = header.split(";")[0].lstrip("data:") mime_type = header.split(";")[0].lstrip("data:")
image_data = base64.b64decode(encoded) image_data = base64.b64decode(encoded)
return ( return (
"image", param_name,
( (
f"{uuid.uuid4()}.png", f"{uuid.uuid4()}.png",
io.BytesIO(image_data), io.BytesIO(image_data),
@@ -879,7 +877,7 @@ async def image_edits(
files = [get_image_file_item(form_data.image)] files = [get_image_file_item(form_data.image)]
elif isinstance(form_data.image, list): elif isinstance(form_data.image, list):
for img in form_data.image: for img in form_data.image:
files.append(get_image_file_item(img)) files.append(get_image_file_item(img, "image[]"))
url_search_params = "" url_search_params = ""
if request.app.state.config.IMAGES_EDIT_OPENAI_API_VERSION: if request.app.state.config.IMAGES_EDIT_OPENAI_API_VERSION:
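With several input images, each one is now sent as a repeated image[] multipart field rather than a single image field. Roughly what the outgoing body looks like with requests (the byte contents are placeholders):

import io

first_image_bytes = b"<png bytes>"   # placeholder
second_image_bytes = b"<png bytes>"  # placeholder
files = [
    ("image[]", ("a.png", io.BytesIO(first_image_bytes), "image/png")),
    ("image[]", ("b.png", io.BytesIO(second_image_bytes), "image/png")),
]
# requests.post(url, files=files, data=...) repeats the field once per image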

View file

@@ -1,6 +1,7 @@
from typing import List, Optional from typing import List, Optional
from pydantic import BaseModel from pydantic import BaseModel
from fastapi import APIRouter, Depends, HTTPException, status, Request, Query from fastapi import APIRouter, Depends, HTTPException, status, Request, Query
from fastapi.concurrency import run_in_threadpool
import logging import logging
from open_webui.models.knowledge import ( from open_webui.models.knowledge import (
@@ -223,7 +224,8 @@ async def reindex_knowledge_files(request: Request, user=Depends(get_verified_us
failed_files = [] failed_files = []
for file in files: for file in files:
try: try:
process_file( await run_in_threadpool(
process_file,
request, request,
ProcessFileForm( ProcessFileForm(
file_id=file.id, collection_name=knowledge_base.id file_id=file.id, collection_name=knowledge_base.id
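process_file is synchronous and can block for a long time per file; run_in_threadpool hands it to Starlette's worker pool so the event loop keeps serving requests during a reindex. The general pattern, self-contained (the blocking function here is a stand-in):

import time
from fastapi.concurrency import run_in_threadpool

def blocking_function(path: str) -> int:
    time.sleep(1)  # stand-in for slow parsing/embedding work
    return len(path)

async def handler():
    # Runs on a worker thread; the coroutine resumes with its return value.
    return await run_in_threadpool(blocking_function, "/tmp/file.pdf")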

View file

@@ -1,6 +1,7 @@
from fastapi import APIRouter, Depends, HTTPException, Request from fastapi import APIRouter, Depends, HTTPException, Request
from pydantic import BaseModel from pydantic import BaseModel
import logging import logging
import asyncio
from typing import Optional from typing import Optional
from open_webui.models.memories import Memories, MemoryModel from open_webui.models.memories import Memories, MemoryModel
@@ -17,7 +18,7 @@ router = APIRouter()
@router.get("/ef") @router.get("/ef")
async def get_embeddings(request: Request): async def get_embeddings(request: Request):
return {"result": request.app.state.EMBEDDING_FUNCTION("hello world")} return {"result": await request.app.state.EMBEDDING_FUNCTION("hello world")}
############################ ############################
@@ -51,15 +52,15 @@ async def add_memory(
): ):
memory = Memories.insert_new_memory(user.id, form_data.content) memory = Memories.insert_new_memory(user.id, form_data.content)
vector = await request.app.state.EMBEDDING_FUNCTION(memory.content, user=user)
VECTOR_DB_CLIENT.upsert( VECTOR_DB_CLIENT.upsert(
collection_name=f"user-memory-{user.id}", collection_name=f"user-memory-{user.id}",
items=[ items=[
{ {
"id": memory.id, "id": memory.id,
"text": memory.content, "text": memory.content,
"vector": request.app.state.EMBEDDING_FUNCTION( "vector": vector,
memory.content, user=user
),
"metadata": {"created_at": memory.created_at}, "metadata": {"created_at": memory.created_at},
} }
], ],
@ -86,9 +87,11 @@ async def query_memory(
if not memories: if not memories:
raise HTTPException(status_code=404, detail="No memories found for user") raise HTTPException(status_code=404, detail="No memories found for user")
vector = await request.app.state.EMBEDDING_FUNCTION(form_data.content, user=user)
results = VECTOR_DB_CLIENT.search( results = VECTOR_DB_CLIENT.search(
collection_name=f"user-memory-{user.id}", collection_name=f"user-memory-{user.id}",
vectors=[request.app.state.EMBEDDING_FUNCTION(form_data.content, user=user)], vectors=[vector],
limit=form_data.k, limit=form_data.k,
) )
@ -105,21 +108,28 @@ async def reset_memory_from_vector_db(
VECTOR_DB_CLIENT.delete_collection(f"user-memory-{user.id}") VECTOR_DB_CLIENT.delete_collection(f"user-memory-{user.id}")
memories = Memories.get_memories_by_user_id(user.id) memories = Memories.get_memories_by_user_id(user.id)
# Generate vectors in parallel
vectors = await asyncio.gather(
*[
request.app.state.EMBEDDING_FUNCTION(memory.content, user=user)
for memory in memories
]
)
VECTOR_DB_CLIENT.upsert( VECTOR_DB_CLIENT.upsert(
collection_name=f"user-memory-{user.id}", collection_name=f"user-memory-{user.id}",
items=[ items=[
{ {
"id": memory.id, "id": memory.id,
"text": memory.content, "text": memory.content,
"vector": request.app.state.EMBEDDING_FUNCTION( "vector": vectors[idx],
memory.content, user=user
),
"metadata": { "metadata": {
"created_at": memory.created_at, "created_at": memory.created_at,
"updated_at": memory.updated_at, "updated_at": memory.updated_at,
}, },
} }
for memory in memories for idx, memory in enumerate(memories)
], ],
) )
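Because EMBEDDING_FUNCTION is now a coroutine, the reset path can fan out one embedding call per memory and await them together. asyncio.gather preserves the order of its arguments, which is why vectors[idx] lines up with enumerate(memories). A self-contained sketch, with embed_text as a stand-in for the app's embedding function:

import asyncio

async def embed_text(text: str) -> list[float]:
    await asyncio.sleep(0.01)  # stands in for a network round trip
    return [float(len(text))]  # toy vector

async def embed_all(texts: list[str]) -> list[list[float]]:
    # gather returns results in the same order the awaitables were passed in
    return await asyncio.gather(*[embed_text(t) for t in texts])

vectors = asyncio.run(embed_all(["alpha", "beta", "gamma"]))
assert len(vectors) == 3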
@ -164,15 +174,15 @@ async def update_memory_by_id(
raise HTTPException(status_code=404, detail="Memory not found") raise HTTPException(status_code=404, detail="Memory not found")
if form_data.content is not None: if form_data.content is not None:
vector = await request.app.state.EMBEDDING_FUNCTION(memory.content, user=user)
VECTOR_DB_CLIENT.upsert( VECTOR_DB_CLIENT.upsert(
collection_name=f"user-memory-{user.id}", collection_name=f"user-memory-{user.id}",
items=[ items=[
{ {
"id": memory.id, "id": memory.id,
"text": memory.content, "text": memory.content,
"vector": request.app.state.EMBEDDING_FUNCTION( "vector": vector,
memory.content, user=user
),
"metadata": { "metadata": {
"created_at": memory.created_at, "created_at": memory.created_at,
"updated_at": memory.updated_at, "updated_at": memory.updated_at,

View file

@ -9,7 +9,7 @@ from open_webui.models.models import (
ModelForm, ModelForm,
ModelModel, ModelModel,
ModelResponse, ModelResponse,
ModelUserResponse, ModelListResponse,
Models, Models,
) )
@ -44,14 +44,43 @@ def is_valid_model_id(model_id: str) -> bool:
########################### ###########################
PAGE_ITEM_COUNT = 30
@router.get( @router.get(
"/list", response_model=list[ModelUserResponse] "/list", response_model=ModelListResponse
) # do NOT use "/" as path, conflicts with main.py ) # do NOT use "/" as path, conflicts with main.py
async def get_models(id: Optional[str] = None, user=Depends(get_verified_user)): async def get_models(
if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL: query: Optional[str] = None,
return Models.get_models() view_option: Optional[str] = None,
else: tag: Optional[str] = None,
return Models.get_models_by_user_id(user.id) order_by: Optional[str] = None,
direction: Optional[str] = None,
page: Optional[int] = 1,
user=Depends(get_verified_user),
):
limit = PAGE_ITEM_COUNT
page = max(1, page)
skip = (page - 1) * limit
filter = {}
if query:
filter["query"] = query
if view_option:
filter["view_option"] = view_option
if tag:
filter["tag"] = tag
if order_by:
filter["order_by"] = order_by
if direction:
filter["direction"] = direction
if not user.role == "admin" or not BYPASS_ADMIN_ACCESS_CONTROL:
filter["user_id"] = user.id
return Models.search_models(user.id, filter=filter, skip=skip, limit=limit)
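The skip/limit arithmetic is plain offset pagination: with PAGE_ITEM_COUNT = 30, page n starts at row (n - 1) * 30 + 1, and max(1, page) guards against zero or negative page numbers. For instance:

PAGE_ITEM_COUNT = 30

def page_window(page: int) -> tuple[int, int]:
    page = max(1, page)  # page=0 or a negative page falls back to page 1
    skip = (page - 1) * PAGE_ITEM_COUNT
    return skip, PAGE_ITEM_COUNT

assert page_window(1) == (0, 30)   # rows 1-30
assert page_window(3) == (60, 30)  # rows 61-90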
########################### ###########################
@ -64,6 +93,30 @@ async def get_base_models(user=Depends(get_admin_user)):
return Models.get_base_models() return Models.get_base_models()
###########################
# GetModelTags
###########################
@router.get("/tags", response_model=list[str])
async def get_model_tags(user=Depends(get_verified_user)):
if user.role == "admin" and BYPASS_ADMIN_ACCESS_CONTROL:
models = Models.get_models()
else:
models = Models.get_models_by_user_id(user.id)
tags_set = set()
for model in models:
if model.meta:
meta = model.meta.model_dump()
for tag in meta.get("tags", []):
tags_set.add((tag.get("name")))
tags = [tag for tag in tags_set]
tags.sort()
return tags
############################ ############################
# CreateNewModel # CreateNewModel
############################ ############################

View file

@ -16,8 +16,8 @@ from urllib.parse import urlparse
import aiohttp import aiohttp
from aiocache import cached from aiocache import cached
import requests import requests
from urllib.parse import quote
from open_webui.utils.headers import include_user_info_headers
from open_webui.models.chats import Chats from open_webui.models.chats import Chats
from open_webui.models.users import UserModel from open_webui.models.users import UserModel
@ -82,22 +82,17 @@ async def send_get_request(url, key=None, user: UserModel = None):
timeout = aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST) timeout = aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST)
try: try:
async with aiohttp.ClientSession(timeout=timeout, trust_env=True) as session: async with aiohttp.ClientSession(timeout=timeout, trust_env=True) as session:
headers = {
"Content-Type": "application/json",
**({"Authorization": f"Bearer {key}"} if key else {}),
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user:
headers = include_user_info_headers(headers, user)
async with session.get( async with session.get(
url, url,
headers={ headers=headers,
"Content-Type": "application/json",
**({"Authorization": f"Bearer {key}"} if key else {}),
**(
{
"X-OpenWebUI-User-Name": quote(user.name, safe=" "),
"X-OpenWebUI-User-Id": user.id,
"X-OpenWebUI-User-Email": user.email,
"X-OpenWebUI-User-Role": user.role,
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user
else {}
),
},
ssl=AIOHTTP_CLIENT_SESSION_SSL, ssl=AIOHTTP_CLIENT_SESSION_SSL,
) as response: ) as response:
return await response.json() return await response.json()
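The repeated inline header blocks across ollama.py and openai.py collapse into the shared include_user_info_headers helper. Its canonical implementation lives in open_webui/utils/headers.py; the shape below is inferred purely from the inline code it replaces (quote keeps non-ASCII display names header-safe):

from dataclasses import dataclass
from urllib.parse import quote

@dataclass
class User:  # minimal stand-in for open_webui.models.users.UserModel
    name: str
    id: str
    email: str
    role: str

def include_user_info_headers(headers: dict, user: User) -> dict:
    # Sketch reconstructed from the removed inline dict, not the canonical source.
    headers.update(
        {
            "X-OpenWebUI-User-Name": quote(user.name, safe=" "),
            "X-OpenWebUI-User-Id": user.id,
            "X-OpenWebUI-User-Email": user.email,
            "X-OpenWebUI-User-Role": user.role,
        }
    )
    return headers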
@ -133,28 +128,20 @@ async def send_post_request(
trust_env=True, timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT) trust_env=True, timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT)
) )
headers = {
"Content-Type": "application/json",
**({"Authorization": f"Bearer {key}"} if key else {}),
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user:
headers = include_user_info_headers(headers, user)
if metadata and metadata.get("chat_id"):
headers["X-OpenWebUI-Chat-Id"] = metadata.get("chat_id")
r = await session.post( r = await session.post(
url, url,
data=payload, data=payload,
headers={ headers=headers,
"Content-Type": "application/json",
**({"Authorization": f"Bearer {key}"} if key else {}),
**(
{
"X-OpenWebUI-User-Name": quote(user.name, safe=" "),
"X-OpenWebUI-User-Id": user.id,
"X-OpenWebUI-User-Email": user.email,
"X-OpenWebUI-User-Role": user.role,
**(
{"X-OpenWebUI-Chat-Id": metadata.get("chat_id")}
if metadata and metadata.get("chat_id")
else {}
),
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user
else {}
),
},
ssl=AIOHTTP_CLIENT_SESSION_SSL, ssl=AIOHTTP_CLIENT_SESSION_SSL,
) )
@ -246,21 +233,16 @@ async def verify_connection(
timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST), timeout=aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST),
) as session: ) as session:
try: try:
headers = {
**({"Authorization": f"Bearer {key}"} if key else {}),
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user:
headers = include_user_info_headers(headers, user)
async with session.get( async with session.get(
f"{url}/api/version", f"{url}/api/version",
headers={ headers=headers,
**({"Authorization": f"Bearer {key}"} if key else {}),
**(
{
"X-OpenWebUI-User-Name": quote(user.name, safe=" "),
"X-OpenWebUI-User-Id": user.id,
"X-OpenWebUI-User-Email": user.email,
"X-OpenWebUI-User-Role": user.role,
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user
else {}
),
},
ssl=AIOHTTP_CLIENT_SESSION_SSL, ssl=AIOHTTP_CLIENT_SESSION_SSL,
) as r: ) as r:
if r.status != 200: if r.status != 200:
@ -469,22 +451,17 @@ async def get_ollama_tags(
r = None r = None
try: try:
headers = {
**({"Authorization": f"Bearer {key}"} if key else {}),
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user:
headers = include_user_info_headers(headers, user)
r = requests.request( r = requests.request(
method="GET", method="GET",
url=f"{url}/api/tags", url=f"{url}/api/tags",
headers={ headers=headers,
**({"Authorization": f"Bearer {key}"} if key else {}),
**(
{
"X-OpenWebUI-User-Name": quote(user.name, safe=" "),
"X-OpenWebUI-User-Id": user.id,
"X-OpenWebUI-User-Email": user.email,
"X-OpenWebUI-User-Role": user.role,
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user
else {}
),
},
) )
r.raise_for_status() r.raise_for_status()
@ -838,23 +815,18 @@ async def copy_model(
key = get_api_key(url_idx, url, request.app.state.config.OLLAMA_API_CONFIGS) key = get_api_key(url_idx, url, request.app.state.config.OLLAMA_API_CONFIGS)
try: try:
headers = {
"Content-Type": "application/json",
**({"Authorization": f"Bearer {key}"} if key else {}),
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user:
headers = include_user_info_headers(headers, user)
r = requests.request( r = requests.request(
method="POST", method="POST",
url=f"{url}/api/copy", url=f"{url}/api/copy",
headers={ headers=headers,
"Content-Type": "application/json",
**({"Authorization": f"Bearer {key}"} if key else {}),
**(
{
"X-OpenWebUI-User-Name": quote(user.name, safe=" "),
"X-OpenWebUI-User-Id": user.id,
"X-OpenWebUI-User-Email": user.email,
"X-OpenWebUI-User-Role": user.role,
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user
else {}
),
},
data=form_data.model_dump_json(exclude_none=True).encode(), data=form_data.model_dump_json(exclude_none=True).encode(),
) )
r.raise_for_status() r.raise_for_status()
@ -908,24 +880,19 @@ async def delete_model(
key = get_api_key(url_idx, url, request.app.state.config.OLLAMA_API_CONFIGS) key = get_api_key(url_idx, url, request.app.state.config.OLLAMA_API_CONFIGS)
try: try:
headers = {
"Content-Type": "application/json",
**({"Authorization": f"Bearer {key}"} if key else {}),
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user:
headers = include_user_info_headers(headers, user)
r = requests.request( r = requests.request(
method="DELETE", method="DELETE",
url=f"{url}/api/delete", url=f"{url}/api/delete",
data=json.dumps(form_data).encode(), headers=headers,
headers={ data=form_data.model_dump_json(exclude_none=True).encode(),
"Content-Type": "application/json",
**({"Authorization": f"Bearer {key}"} if key else {}),
**(
{
"X-OpenWebUI-User-Name": quote(user.name, safe=" "),
"X-OpenWebUI-User-Id": user.id,
"X-OpenWebUI-User-Email": user.email,
"X-OpenWebUI-User-Role": user.role,
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user
else {}
),
},
) )
r.raise_for_status() r.raise_for_status()
@ -973,24 +940,19 @@ async def show_model_info(
key = get_api_key(url_idx, url, request.app.state.config.OLLAMA_API_CONFIGS) key = get_api_key(url_idx, url, request.app.state.config.OLLAMA_API_CONFIGS)
try: try:
headers = {
"Content-Type": "application/json",
**({"Authorization": f"Bearer {key}"} if key else {}),
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user:
headers = include_user_info_headers(headers, user)
r = requests.request( r = requests.request(
method="POST", method="POST",
url=f"{url}/api/show", url=f"{url}/api/show",
headers={ headers=headers,
"Content-Type": "application/json", data=form_data.model_dump_json(exclude_none=True).encode(),
**({"Authorization": f"Bearer {key}"} if key else {}),
**(
{
"X-OpenWebUI-User-Name": quote(user.name, safe=" "),
"X-OpenWebUI-User-Id": user.id,
"X-OpenWebUI-User-Email": user.email,
"X-OpenWebUI-User-Role": user.role,
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user
else {}
),
},
data=json.dumps(form_data).encode(),
) )
r.raise_for_status() r.raise_for_status()
@ -1064,23 +1026,18 @@ async def embed(
form_data.model = form_data.model.replace(f"{prefix_id}.", "") form_data.model = form_data.model.replace(f"{prefix_id}.", "")
try: try:
headers = {
"Content-Type": "application/json",
**({"Authorization": f"Bearer {key}"} if key else {}),
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user:
headers = include_user_info_headers(headers, user)
r = requests.request( r = requests.request(
method="POST", method="POST",
url=f"{url}/api/embed", url=f"{url}/api/embed",
headers={ headers=headers,
"Content-Type": "application/json",
**({"Authorization": f"Bearer {key}"} if key else {}),
**(
{
"X-OpenWebUI-User-Name": quote(user.name, safe=" "),
"X-OpenWebUI-User-Id": user.id,
"X-OpenWebUI-User-Email": user.email,
"X-OpenWebUI-User-Role": user.role,
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user
else {}
),
},
data=form_data.model_dump_json(exclude_none=True).encode(), data=form_data.model_dump_json(exclude_none=True).encode(),
) )
r.raise_for_status() r.raise_for_status()
@ -1151,23 +1108,18 @@ async def embeddings(
form_data.model = form_data.model.replace(f"{prefix_id}.", "") form_data.model = form_data.model.replace(f"{prefix_id}.", "")
try: try:
headers = {
"Content-Type": "application/json",
**({"Authorization": f"Bearer {key}"} if key else {}),
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user:
headers = include_user_info_headers(headers, user)
r = requests.request( r = requests.request(
method="POST", method="POST",
url=f"{url}/api/embeddings", url=f"{url}/api/embeddings",
headers={ headers=headers,
"Content-Type": "application/json",
**({"Authorization": f"Bearer {key}"} if key else {}),
**(
{
"X-OpenWebUI-User-Name": quote(user.name, safe=" "),
"X-OpenWebUI-User-Id": user.id,
"X-OpenWebUI-User-Email": user.email,
"X-OpenWebUI-User-Role": user.role,
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user
else {}
),
},
data=form_data.model_dump_json(exclude_none=True).encode(), data=form_data.model_dump_json(exclude_none=True).encode(),
) )
r.raise_for_status() r.raise_for_status()
View file
@ -7,7 +7,6 @@ from typing import Optional
import aiohttp import aiohttp
from aiocache import cached from aiocache import cached
import requests import requests
from urllib.parse import quote
from azure.identity import DefaultAzureCredential, get_bearer_token_provider from azure.identity import DefaultAzureCredential, get_bearer_token_provider
@ -50,6 +49,7 @@ from open_webui.utils.misc import (
from open_webui.utils.auth import get_admin_user, get_verified_user from open_webui.utils.auth import get_admin_user, get_verified_user
from open_webui.utils.access_control import has_access from open_webui.utils.access_control import has_access
from open_webui.utils.headers import include_user_info_headers
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
@ -67,21 +67,16 @@ async def send_get_request(url, key=None, user: UserModel = None):
timeout = aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST) timeout = aiohttp.ClientTimeout(total=AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST)
try: try:
async with aiohttp.ClientSession(timeout=timeout, trust_env=True) as session: async with aiohttp.ClientSession(timeout=timeout, trust_env=True) as session:
headers = {
**({"Authorization": f"Bearer {key}"} if key else {}),
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user:
headers = include_user_info_headers(headers, user)
async with session.get( async with session.get(
url, url,
headers={ headers=headers,
**({"Authorization": f"Bearer {key}"} if key else {}),
**(
{
"X-OpenWebUI-User-Name": quote(user.name, safe=" "),
"X-OpenWebUI-User-Id": user.id,
"X-OpenWebUI-User-Email": user.email,
"X-OpenWebUI-User-Role": user.role,
}
if ENABLE_FORWARD_USER_INFO_HEADERS and user
else {}
),
},
ssl=AIOHTTP_CLIENT_SESSION_SSL, ssl=AIOHTTP_CLIENT_SESSION_SSL,
) as response: ) as response:
return await response.json() return await response.json()
@ -141,23 +136,13 @@ async def get_headers_and_cookies(
if "openrouter.ai" in url if "openrouter.ai" in url
else {} else {}
), ),
**(
{
"X-OpenWebUI-User-Name": quote(user.name, safe=" "),
"X-OpenWebUI-User-Id": user.id,
"X-OpenWebUI-User-Email": user.email,
"X-OpenWebUI-User-Role": user.role,
**(
{"X-OpenWebUI-Chat-Id": metadata.get("chat_id")}
if metadata and metadata.get("chat_id")
else {}
),
}
if ENABLE_FORWARD_USER_INFO_HEADERS
else {}
),
} }
if ENABLE_FORWARD_USER_INFO_HEADERS and user:
headers = include_user_info_headers(headers, user)
if metadata and metadata.get("chat_id"):
headers["X-OpenWebUI-Chat-Id"] = metadata.get("chat_id")
token = None token = None
auth_type = config.get("auth_type") auth_type = config.get("auth_type")
View file
@ -241,13 +241,14 @@ class SearchForm(BaseModel):
async def get_status(request: Request): async def get_status(request: Request):
return { return {
"status": True, "status": True,
"chunk_size": request.app.state.config.CHUNK_SIZE, "CHUNK_SIZE": request.app.state.config.CHUNK_SIZE,
"chunk_overlap": request.app.state.config.CHUNK_OVERLAP, "CHUNK_OVERLAP": request.app.state.config.CHUNK_OVERLAP,
"template": request.app.state.config.RAG_TEMPLATE, "RAG_TEMPLATE": request.app.state.config.RAG_TEMPLATE,
"embedding_engine": request.app.state.config.RAG_EMBEDDING_ENGINE, "RAG_EMBEDDING_ENGINE": request.app.state.config.RAG_EMBEDDING_ENGINE,
"embedding_model": request.app.state.config.RAG_EMBEDDING_MODEL, "RAG_EMBEDDING_MODEL": request.app.state.config.RAG_EMBEDDING_MODEL,
"reranking_model": request.app.state.config.RAG_RERANKING_MODEL, "RAG_RERANKING_MODEL": request.app.state.config.RAG_RERANKING_MODEL,
"embedding_batch_size": request.app.state.config.RAG_EMBEDDING_BATCH_SIZE, "RAG_EMBEDDING_BATCH_SIZE": request.app.state.config.RAG_EMBEDDING_BATCH_SIZE,
"ENABLE_ASYNC_EMBEDDING": request.app.state.config.ENABLE_ASYNC_EMBEDDING,
} }
@ -255,9 +256,10 @@ async def get_status(request: Request):
async def get_embedding_config(request: Request, user=Depends(get_admin_user)): async def get_embedding_config(request: Request, user=Depends(get_admin_user)):
return { return {
"status": True, "status": True,
"embedding_engine": request.app.state.config.RAG_EMBEDDING_ENGINE, "RAG_EMBEDDING_ENGINE": request.app.state.config.RAG_EMBEDDING_ENGINE,
"embedding_model": request.app.state.config.RAG_EMBEDDING_MODEL, "RAG_EMBEDDING_MODEL": request.app.state.config.RAG_EMBEDDING_MODEL,
"embedding_batch_size": request.app.state.config.RAG_EMBEDDING_BATCH_SIZE, "RAG_EMBEDDING_BATCH_SIZE": request.app.state.config.RAG_EMBEDDING_BATCH_SIZE,
"ENABLE_ASYNC_EMBEDDING": request.app.state.config.ENABLE_ASYNC_EMBEDDING,
"openai_config": { "openai_config": {
"url": request.app.state.config.RAG_OPENAI_API_BASE_URL, "url": request.app.state.config.RAG_OPENAI_API_BASE_URL,
"key": request.app.state.config.RAG_OPENAI_API_KEY, "key": request.app.state.config.RAG_OPENAI_API_KEY,
@ -294,18 +296,13 @@ class EmbeddingModelUpdateForm(BaseModel):
openai_config: Optional[OpenAIConfigForm] = None openai_config: Optional[OpenAIConfigForm] = None
ollama_config: Optional[OllamaConfigForm] = None ollama_config: Optional[OllamaConfigForm] = None
azure_openai_config: Optional[AzureOpenAIConfigForm] = None azure_openai_config: Optional[AzureOpenAIConfigForm] = None
embedding_engine: str RAG_EMBEDDING_ENGINE: str
embedding_model: str RAG_EMBEDDING_MODEL: str
embedding_batch_size: Optional[int] = 1 RAG_EMBEDDING_BATCH_SIZE: Optional[int] = 1
ENABLE_ASYNC_EMBEDDING: Optional[bool] = True
@router.post("/embedding/update") def unload_embedding_model(request: Request):
async def update_embedding_config(
request: Request, form_data: EmbeddingModelUpdateForm, user=Depends(get_admin_user)
):
log.info(
f"Updating embedding model: {request.app.state.config.RAG_EMBEDDING_MODEL} to {form_data.embedding_model}"
)
if request.app.state.config.RAG_EMBEDDING_ENGINE == "": if request.app.state.config.RAG_EMBEDDING_ENGINE == "":
# unloads current internal embedding model and clears VRAM cache # unloads current internal embedding model and clears VRAM cache
request.app.state.ef = None request.app.state.ef = None
@ -318,9 +315,25 @@ async def update_embedding_config(
if torch.cuda.is_available(): if torch.cuda.is_available():
torch.cuda.empty_cache() torch.cuda.empty_cache()
@router.post("/embedding/update")
async def update_embedding_config(
request: Request, form_data: EmbeddingModelUpdateForm, user=Depends(get_admin_user)
):
log.info(
f"Updating embedding model: {request.app.state.config.RAG_EMBEDDING_MODEL} to {form_data.RAG_EMBEDDING_MODEL}"
)
unload_embedding_model(request)
try: try:
request.app.state.config.RAG_EMBEDDING_ENGINE = form_data.embedding_engine request.app.state.config.RAG_EMBEDDING_ENGINE = form_data.RAG_EMBEDDING_ENGINE
request.app.state.config.RAG_EMBEDDING_MODEL = form_data.embedding_model request.app.state.config.RAG_EMBEDDING_MODEL = form_data.RAG_EMBEDDING_MODEL
request.app.state.config.RAG_EMBEDDING_BATCH_SIZE = (
form_data.RAG_EMBEDDING_BATCH_SIZE
)
request.app.state.config.ENABLE_ASYNC_EMBEDDING = (
form_data.ENABLE_ASYNC_EMBEDDING
)
if request.app.state.config.RAG_EMBEDDING_ENGINE in [ if request.app.state.config.RAG_EMBEDDING_ENGINE in [
"ollama", "ollama",
@ -354,10 +367,6 @@ async def update_embedding_config(
form_data.azure_openai_config.version form_data.azure_openai_config.version
) )
request.app.state.config.RAG_EMBEDDING_BATCH_SIZE = (
form_data.embedding_batch_size
)
request.app.state.ef = get_ef( request.app.state.ef = get_ef(
request.app.state.config.RAG_EMBEDDING_ENGINE, request.app.state.config.RAG_EMBEDDING_ENGINE,
request.app.state.config.RAG_EMBEDDING_MODEL, request.app.state.config.RAG_EMBEDDING_MODEL,
@ -391,13 +400,15 @@ async def update_embedding_config(
if request.app.state.config.RAG_EMBEDDING_ENGINE == "azure_openai" if request.app.state.config.RAG_EMBEDDING_ENGINE == "azure_openai"
else None else None
), ),
enable_async=request.app.state.config.ENABLE_ASYNC_EMBEDDING,
) )
return { return {
"status": True, "status": True,
"embedding_engine": request.app.state.config.RAG_EMBEDDING_ENGINE, "RAG_EMBEDDING_ENGINE": request.app.state.config.RAG_EMBEDDING_ENGINE,
"embedding_model": request.app.state.config.RAG_EMBEDDING_MODEL, "RAG_EMBEDDING_MODEL": request.app.state.config.RAG_EMBEDDING_MODEL,
"embedding_batch_size": request.app.state.config.RAG_EMBEDDING_BATCH_SIZE, "RAG_EMBEDDING_BATCH_SIZE": request.app.state.config.RAG_EMBEDDING_BATCH_SIZE,
"ENABLE_ASYNC_EMBEDDING": request.app.state.config.ENABLE_ASYNC_EMBEDDING,
"openai_config": { "openai_config": {
"url": request.app.state.config.RAG_OPENAI_API_BASE_URL, "url": request.app.state.config.RAG_OPENAI_API_BASE_URL,
"key": request.app.state.config.RAG_OPENAI_API_KEY, "key": request.app.state.config.RAG_OPENAI_API_KEY,
@ -453,18 +464,8 @@ async def get_rag_config(request: Request, user=Depends(get_admin_user)):
"EXTERNAL_DOCUMENT_LOADER_API_KEY": request.app.state.config.EXTERNAL_DOCUMENT_LOADER_API_KEY, "EXTERNAL_DOCUMENT_LOADER_API_KEY": request.app.state.config.EXTERNAL_DOCUMENT_LOADER_API_KEY,
"TIKA_SERVER_URL": request.app.state.config.TIKA_SERVER_URL, "TIKA_SERVER_URL": request.app.state.config.TIKA_SERVER_URL,
"DOCLING_SERVER_URL": request.app.state.config.DOCLING_SERVER_URL, "DOCLING_SERVER_URL": request.app.state.config.DOCLING_SERVER_URL,
"DOCLING_API_KEY": request.app.state.config.DOCLING_API_KEY,
"DOCLING_PARAMS": request.app.state.config.DOCLING_PARAMS, "DOCLING_PARAMS": request.app.state.config.DOCLING_PARAMS,
"DOCLING_DO_OCR": request.app.state.config.DOCLING_DO_OCR,
"DOCLING_FORCE_OCR": request.app.state.config.DOCLING_FORCE_OCR,
"DOCLING_OCR_ENGINE": request.app.state.config.DOCLING_OCR_ENGINE,
"DOCLING_OCR_LANG": request.app.state.config.DOCLING_OCR_LANG,
"DOCLING_PDF_BACKEND": request.app.state.config.DOCLING_PDF_BACKEND,
"DOCLING_TABLE_MODE": request.app.state.config.DOCLING_TABLE_MODE,
"DOCLING_PIPELINE": request.app.state.config.DOCLING_PIPELINE,
"DOCLING_DO_PICTURE_DESCRIPTION": request.app.state.config.DOCLING_DO_PICTURE_DESCRIPTION,
"DOCLING_PICTURE_DESCRIPTION_MODE": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_MODE,
"DOCLING_PICTURE_DESCRIPTION_LOCAL": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL,
"DOCLING_PICTURE_DESCRIPTION_API": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API,
"DOCUMENT_INTELLIGENCE_ENDPOINT": request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT, "DOCUMENT_INTELLIGENCE_ENDPOINT": request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT,
"DOCUMENT_INTELLIGENCE_KEY": request.app.state.config.DOCUMENT_INTELLIGENCE_KEY, "DOCUMENT_INTELLIGENCE_KEY": request.app.state.config.DOCUMENT_INTELLIGENCE_KEY,
"MISTRAL_OCR_API_BASE_URL": request.app.state.config.MISTRAL_OCR_API_BASE_URL, "MISTRAL_OCR_API_BASE_URL": request.app.state.config.MISTRAL_OCR_API_BASE_URL,
@ -642,18 +643,8 @@ class ConfigForm(BaseModel):
TIKA_SERVER_URL: Optional[str] = None TIKA_SERVER_URL: Optional[str] = None
DOCLING_SERVER_URL: Optional[str] = None DOCLING_SERVER_URL: Optional[str] = None
DOCLING_API_KEY: Optional[str] = None
DOCLING_PARAMS: Optional[dict] = None DOCLING_PARAMS: Optional[dict] = None
DOCLING_DO_OCR: Optional[bool] = None
DOCLING_FORCE_OCR: Optional[bool] = None
DOCLING_OCR_ENGINE: Optional[str] = None
DOCLING_OCR_LANG: Optional[str] = None
DOCLING_PDF_BACKEND: Optional[str] = None
DOCLING_TABLE_MODE: Optional[str] = None
DOCLING_PIPELINE: Optional[str] = None
DOCLING_DO_PICTURE_DESCRIPTION: Optional[bool] = None
DOCLING_PICTURE_DESCRIPTION_MODE: Optional[str] = None
DOCLING_PICTURE_DESCRIPTION_LOCAL: Optional[dict] = None
DOCLING_PICTURE_DESCRIPTION_API: Optional[dict] = None
DOCUMENT_INTELLIGENCE_ENDPOINT: Optional[str] = None DOCUMENT_INTELLIGENCE_ENDPOINT: Optional[str] = None
DOCUMENT_INTELLIGENCE_KEY: Optional[str] = None DOCUMENT_INTELLIGENCE_KEY: Optional[str] = None
MISTRAL_OCR_API_BASE_URL: Optional[str] = None MISTRAL_OCR_API_BASE_URL: Optional[str] = None
@ -831,68 +822,16 @@ async def update_rag_config(
if form_data.DOCLING_SERVER_URL is not None if form_data.DOCLING_SERVER_URL is not None
else request.app.state.config.DOCLING_SERVER_URL else request.app.state.config.DOCLING_SERVER_URL
) )
request.app.state.config.DOCLING_API_KEY = (
form_data.DOCLING_API_KEY
if form_data.DOCLING_API_KEY is not None
else request.app.state.config.DOCLING_API_KEY
)
request.app.state.config.DOCLING_PARAMS = ( request.app.state.config.DOCLING_PARAMS = (
form_data.DOCLING_PARAMS form_data.DOCLING_PARAMS
if form_data.DOCLING_PARAMS is not None if form_data.DOCLING_PARAMS is not None
else request.app.state.config.DOCLING_PARAMS else request.app.state.config.DOCLING_PARAMS
) )
request.app.state.config.DOCLING_DO_OCR = (
form_data.DOCLING_DO_OCR
if form_data.DOCLING_DO_OCR is not None
else request.app.state.config.DOCLING_DO_OCR
)
request.app.state.config.DOCLING_FORCE_OCR = (
form_data.DOCLING_FORCE_OCR
if form_data.DOCLING_FORCE_OCR is not None
else request.app.state.config.DOCLING_FORCE_OCR
)
request.app.state.config.DOCLING_OCR_ENGINE = (
form_data.DOCLING_OCR_ENGINE
if form_data.DOCLING_OCR_ENGINE is not None
else request.app.state.config.DOCLING_OCR_ENGINE
)
request.app.state.config.DOCLING_OCR_LANG = (
form_data.DOCLING_OCR_LANG
if form_data.DOCLING_OCR_LANG is not None
else request.app.state.config.DOCLING_OCR_LANG
)
request.app.state.config.DOCLING_PDF_BACKEND = (
form_data.DOCLING_PDF_BACKEND
if form_data.DOCLING_PDF_BACKEND is not None
else request.app.state.config.DOCLING_PDF_BACKEND
)
request.app.state.config.DOCLING_TABLE_MODE = (
form_data.DOCLING_TABLE_MODE
if form_data.DOCLING_TABLE_MODE is not None
else request.app.state.config.DOCLING_TABLE_MODE
)
request.app.state.config.DOCLING_PIPELINE = (
form_data.DOCLING_PIPELINE
if form_data.DOCLING_PIPELINE is not None
else request.app.state.config.DOCLING_PIPELINE
)
request.app.state.config.DOCLING_DO_PICTURE_DESCRIPTION = (
form_data.DOCLING_DO_PICTURE_DESCRIPTION
if form_data.DOCLING_DO_PICTURE_DESCRIPTION is not None
else request.app.state.config.DOCLING_DO_PICTURE_DESCRIPTION
)
request.app.state.config.DOCLING_PICTURE_DESCRIPTION_MODE = (
form_data.DOCLING_PICTURE_DESCRIPTION_MODE
if form_data.DOCLING_PICTURE_DESCRIPTION_MODE is not None
else request.app.state.config.DOCLING_PICTURE_DESCRIPTION_MODE
)
request.app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL = (
form_data.DOCLING_PICTURE_DESCRIPTION_LOCAL
if form_data.DOCLING_PICTURE_DESCRIPTION_LOCAL is not None
else request.app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL
)
request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API = (
form_data.DOCLING_PICTURE_DESCRIPTION_API
if form_data.DOCLING_PICTURE_DESCRIPTION_API is not None
else request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API
)
request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT = ( request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT = (
form_data.DOCUMENT_INTELLIGENCE_ENDPOINT form_data.DOCUMENT_INTELLIGENCE_ENDPOINT
if form_data.DOCUMENT_INTELLIGENCE_ENDPOINT is not None if form_data.DOCUMENT_INTELLIGENCE_ENDPOINT is not None
@ -1189,18 +1128,8 @@ async def update_rag_config(
"EXTERNAL_DOCUMENT_LOADER_API_KEY": request.app.state.config.EXTERNAL_DOCUMENT_LOADER_API_KEY, "EXTERNAL_DOCUMENT_LOADER_API_KEY": request.app.state.config.EXTERNAL_DOCUMENT_LOADER_API_KEY,
"TIKA_SERVER_URL": request.app.state.config.TIKA_SERVER_URL, "TIKA_SERVER_URL": request.app.state.config.TIKA_SERVER_URL,
"DOCLING_SERVER_URL": request.app.state.config.DOCLING_SERVER_URL, "DOCLING_SERVER_URL": request.app.state.config.DOCLING_SERVER_URL,
"DOCLING_API_KEY": request.app.state.config.DOCLING_API_KEY,
"DOCLING_PARAMS": request.app.state.config.DOCLING_PARAMS, "DOCLING_PARAMS": request.app.state.config.DOCLING_PARAMS,
"DOCLING_DO_OCR": request.app.state.config.DOCLING_DO_OCR,
"DOCLING_FORCE_OCR": request.app.state.config.DOCLING_FORCE_OCR,
"DOCLING_OCR_ENGINE": request.app.state.config.DOCLING_OCR_ENGINE,
"DOCLING_OCR_LANG": request.app.state.config.DOCLING_OCR_LANG,
"DOCLING_PDF_BACKEND": request.app.state.config.DOCLING_PDF_BACKEND,
"DOCLING_TABLE_MODE": request.app.state.config.DOCLING_TABLE_MODE,
"DOCLING_PIPELINE": request.app.state.config.DOCLING_PIPELINE,
"DOCLING_DO_PICTURE_DESCRIPTION": request.app.state.config.DOCLING_DO_PICTURE_DESCRIPTION,
"DOCLING_PICTURE_DESCRIPTION_MODE": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_MODE,
"DOCLING_PICTURE_DESCRIPTION_LOCAL": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL,
"DOCLING_PICTURE_DESCRIPTION_API": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API,
"DOCUMENT_INTELLIGENCE_ENDPOINT": request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT, "DOCUMENT_INTELLIGENCE_ENDPOINT": request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT,
"DOCUMENT_INTELLIGENCE_KEY": request.app.state.config.DOCUMENT_INTELLIGENCE_KEY, "DOCUMENT_INTELLIGENCE_KEY": request.app.state.config.DOCUMENT_INTELLIGENCE_KEY,
"MISTRAL_OCR_API_BASE_URL": request.app.state.config.MISTRAL_OCR_API_BASE_URL, "MISTRAL_OCR_API_BASE_URL": request.app.state.config.MISTRAL_OCR_API_BASE_URL,
@ -1467,10 +1396,13 @@ def save_docs_to_vector_db(
), ),
) )
embeddings = embedding_function( # Run async embedding in sync context
list(map(lambda x: x.replace("\n", " "), texts)), embeddings = asyncio.run(
prefix=RAG_EMBEDDING_CONTENT_PREFIX, embedding_function(
user=user, list(map(lambda x: x.replace("\n", " "), texts)),
prefix=RAG_EMBEDDING_CONTENT_PREFIX,
user=user,
)
) )
log.info(f"embeddings generated {len(embeddings)} for {len(texts)} items") log.info(f"embeddings generated {len(embeddings)} for {len(texts)} items")
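asyncio.run spins up a fresh event loop to drive the now-async embedding function from synchronous code. That is safe only when no loop is already running in the calling thread, which holds here as long as save_docs_to_vector_db is dispatched to a worker thread (as the reindex hunk above does for process_file); called from a running loop it raises RuntimeError. A minimal illustration of the constraint:

import asyncio

async def embed(texts: list[str]) -> list[list[float]]:
    return [[float(len(t))] for t in texts]

def sync_entrypoint(texts: list[str]) -> list[list[float]]:
    # Fine from a plain (worker) thread; raises RuntimeError if this thread
    # already has a running event loop.
    return asyncio.run(embed(texts))

print(sync_entrypoint(["hello", "world"]))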
@ -1604,20 +1536,8 @@ def process_file(
EXTERNAL_DOCUMENT_LOADER_API_KEY=request.app.state.config.EXTERNAL_DOCUMENT_LOADER_API_KEY, EXTERNAL_DOCUMENT_LOADER_API_KEY=request.app.state.config.EXTERNAL_DOCUMENT_LOADER_API_KEY,
TIKA_SERVER_URL=request.app.state.config.TIKA_SERVER_URL, TIKA_SERVER_URL=request.app.state.config.TIKA_SERVER_URL,
DOCLING_SERVER_URL=request.app.state.config.DOCLING_SERVER_URL, DOCLING_SERVER_URL=request.app.state.config.DOCLING_SERVER_URL,
DOCLING_PARAMS={ DOCLING_API_KEY=request.app.state.config.DOCLING_API_KEY,
"do_ocr": request.app.state.config.DOCLING_DO_OCR, DOCLING_PARAMS=request.app.state.config.DOCLING_PARAMS,
"force_ocr": request.app.state.config.DOCLING_FORCE_OCR,
"ocr_engine": request.app.state.config.DOCLING_OCR_ENGINE,
"ocr_lang": request.app.state.config.DOCLING_OCR_LANG,
"pdf_backend": request.app.state.config.DOCLING_PDF_BACKEND,
"table_mode": request.app.state.config.DOCLING_TABLE_MODE,
"pipeline": request.app.state.config.DOCLING_PIPELINE,
"do_picture_description": request.app.state.config.DOCLING_DO_PICTURE_DESCRIPTION,
"picture_description_mode": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_MODE,
"picture_description_local": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_LOCAL,
"picture_description_api": request.app.state.config.DOCLING_PICTURE_DESCRIPTION_API,
**request.app.state.config.DOCLING_PARAMS,
},
PDF_EXTRACT_IMAGES=request.app.state.config.PDF_EXTRACT_IMAGES, PDF_EXTRACT_IMAGES=request.app.state.config.PDF_EXTRACT_IMAGES,
DOCUMENT_INTELLIGENCE_ENDPOINT=request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT, DOCUMENT_INTELLIGENCE_ENDPOINT=request.app.state.config.DOCUMENT_INTELLIGENCE_ENDPOINT,
DOCUMENT_INTELLIGENCE_KEY=request.app.state.config.DOCUMENT_INTELLIGENCE_KEY, DOCUMENT_INTELLIGENCE_KEY=request.app.state.config.DOCUMENT_INTELLIGENCE_KEY,
@ -2262,7 +2182,7 @@ class QueryDocForm(BaseModel):
@router.post("/query/doc") @router.post("/query/doc")
def query_doc_handler( async def query_doc_handler(
request: Request, request: Request,
form_data: QueryDocForm, form_data: QueryDocForm,
user=Depends(get_verified_user), user=Depends(get_verified_user),
@ -2275,7 +2195,7 @@ def query_doc_handler(
collection_results[form_data.collection_name] = VECTOR_DB_CLIENT.get( collection_results[form_data.collection_name] = VECTOR_DB_CLIENT.get(
collection_name=form_data.collection_name collection_name=form_data.collection_name
) )
return query_doc_with_hybrid_search( return await query_doc_with_hybrid_search(
collection_name=form_data.collection_name, collection_name=form_data.collection_name,
collection_result=collection_results[form_data.collection_name], collection_result=collection_results[form_data.collection_name],
query=form_data.query, query=form_data.query,
@ -2285,8 +2205,8 @@ def query_doc_handler(
k=form_data.k if form_data.k else request.app.state.config.TOP_K, k=form_data.k if form_data.k else request.app.state.config.TOP_K,
reranking_function=( reranking_function=(
( (
lambda sentences: request.app.state.RERANKING_FUNCTION( lambda query, documents: request.app.state.RERANKING_FUNCTION(
sentences, user=user query, documents, user=user
) )
) )
if request.app.state.RERANKING_FUNCTION if request.app.state.RERANKING_FUNCTION
@ -2307,11 +2227,12 @@ def query_doc_handler(
user=user, user=user,
) )
else: else:
query_embedding = await request.app.state.EMBEDDING_FUNCTION(
form_data.query, prefix=RAG_EMBEDDING_QUERY_PREFIX, user=user
)
return query_doc( return query_doc(
collection_name=form_data.collection_name, collection_name=form_data.collection_name,
query_embedding=request.app.state.EMBEDDING_FUNCTION( query_embedding=query_embedding,
form_data.query, prefix=RAG_EMBEDDING_QUERY_PREFIX, user=user
),
k=form_data.k if form_data.k else request.app.state.config.TOP_K, k=form_data.k if form_data.k else request.app.state.config.TOP_K,
user=user, user=user,
) )
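Both query handlers now pass the reranker the raw query and the candidate documents as separate arguments instead of pre-paired sentences. A hypothetical stand-in consistent with how the lambdas above call it (the score-list return type is an assumption, not confirmed by this diff):

def rerank(query: str, documents: list[str], user=None) -> list[float]:
    # Hypothetical scorer: counts query terms shared with each document.
    q = set(query.lower().split())
    return [float(len(q & set(d.lower().split()))) for d in documents]

scores = rerank("open webui search", ["webui search docs", "unrelated text"])
assert scores[0] > scores[1]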
@ -2335,7 +2256,7 @@ class QueryCollectionsForm(BaseModel):
@router.post("/query/collection") @router.post("/query/collection")
def query_collection_handler( async def query_collection_handler(
request: Request, request: Request,
form_data: QueryCollectionsForm, form_data: QueryCollectionsForm,
user=Depends(get_verified_user), user=Depends(get_verified_user),
@ -2344,7 +2265,7 @@ def query_collection_handler(
if request.app.state.config.ENABLE_RAG_HYBRID_SEARCH and ( if request.app.state.config.ENABLE_RAG_HYBRID_SEARCH and (
form_data.hybrid is None or form_data.hybrid form_data.hybrid is None or form_data.hybrid
): ):
return query_collection_with_hybrid_search( return await query_collection_with_hybrid_search(
collection_names=form_data.collection_names, collection_names=form_data.collection_names,
queries=[form_data.query], queries=[form_data.query],
embedding_function=lambda query, prefix: request.app.state.EMBEDDING_FUNCTION( embedding_function=lambda query, prefix: request.app.state.EMBEDDING_FUNCTION(
@ -2379,7 +2300,7 @@ def query_collection_handler(
), ),
) )
else: else:
return query_collection( return await query_collection(
collection_names=form_data.collection_names, collection_names=form_data.collection_names,
queries=[form_data.query], queries=[form_data.query],
embedding_function=lambda query, prefix: request.app.state.EMBEDDING_FUNCTION( embedding_function=lambda query, prefix: request.app.state.EMBEDDING_FUNCTION(
@ -2461,7 +2382,7 @@ if ENV == "dev":
@router.get("/ef/{text}") @router.get("/ef/{text}")
async def get_embeddings(request: Request, text: Optional[str] = "Hello World!"): async def get_embeddings(request: Request, text: Optional[str] = "Hello World!"):
return { return {
"result": request.app.state.EMBEDDING_FUNCTION( "result": await request.app.state.EMBEDDING_FUNCTION(
text, prefix=RAG_EMBEDDING_QUERY_PREFIX text, prefix=RAG_EMBEDDING_QUERY_PREFIX
) )
} }
View file
@ -377,10 +377,13 @@ def get_current_user_by_api_key(request, api_key: str):
detail=ERROR_MESSAGES.INVALID_TOKEN, detail=ERROR_MESSAGES.INVALID_TOKEN,
) )
if not request.state.enable_api_keys or not has_permission( if not request.state.enable_api_keys or (
user.id, user.role != "admin"
"features.api_keys", and not has_permission(
request.app.state.config.USER_PERMISSIONS, user.id,
"features.api_keys",
request.app.state.config.USER_PERMISSIONS,
)
): ):
raise HTTPException( raise HTTPException(
status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.API_KEY_NOT_ALLOWED status.HTTP_403_FORBIDDEN, detail=ERROR_MESSAGES.API_KEY_NOT_ALLOWED
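Rewritten as a positive condition, the new check reads: API keys must be enabled globally, and non-admins additionally need the features.api_keys permission, while admins are exempt from the per-user check. Equivalently, by De Morgan:

def api_key_allowed(enable_api_keys: bool, role: str, has_perm: bool) -> bool:
    # reject = (not enable_api_keys) or (role != "admin" and not has_perm)
    return enable_api_keys and (role == "admin" or has_perm)

assert api_key_allowed(True, "admin", False) is True    # admins bypass the permission
assert api_key_allowed(True, "user", False) is False
assert api_key_allowed(False, "admin", True) is False   # disabled feature blocks everyone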
View file
@ -24,6 +24,7 @@ from fastapi.responses import HTMLResponse
from starlette.responses import Response, StreamingResponse, JSONResponse from starlette.responses import Response, StreamingResponse, JSONResponse
from open_webui.utils.misc import is_string_allowed
from open_webui.models.oauth_sessions import OAuthSessions from open_webui.models.oauth_sessions import OAuthSessions
from open_webui.models.chats import Chats from open_webui.models.chats import Chats
from open_webui.models.folders import Folders from open_webui.models.folders import Folders
@ -993,37 +994,32 @@ async def chat_completion_files_handler(
queries = [get_last_user_message(body["messages"])] queries = [get_last_user_message(body["messages"])]
try: try:
# Offload get_sources_from_items to a separate thread # Directly await async get_sources_from_items (no thread needed - fully async now)
loop = asyncio.get_running_loop() sources = await get_sources_from_items(
with ThreadPoolExecutor() as executor: request=request,
sources = await loop.run_in_executor( items=files,
executor, queries=queries,
lambda: get_sources_from_items( embedding_function=lambda query, prefix: request.app.state.EMBEDDING_FUNCTION(
request=request, query, prefix=prefix, user=user
items=files, ),
queries=queries, k=request.app.state.config.TOP_K,
embedding_function=lambda query, prefix: request.app.state.EMBEDDING_FUNCTION( reranking_function=(
query, prefix=prefix, user=user (
), lambda query, documents: request.app.state.RERANKING_FUNCTION(
k=request.app.state.config.TOP_K, query, documents, user=user
reranking_function=( )
( )
lambda query, documents: request.app.state.RERANKING_FUNCTION( if request.app.state.RERANKING_FUNCTION
query, documents, user=user else None
) ),
) k_reranker=request.app.state.config.TOP_K_RERANKER,
if request.app.state.RERANKING_FUNCTION r=request.app.state.config.RELEVANCE_THRESHOLD,
else None hybrid_bm25_weight=request.app.state.config.HYBRID_BM25_WEIGHT,
), hybrid_search=request.app.state.config.ENABLE_RAG_HYBRID_SEARCH,
k_reranker=request.app.state.config.TOP_K_RERANKER, full_context=all_full_context
r=request.app.state.config.RELEVANCE_THRESHOLD, or request.app.state.config.RAG_FULL_CONTEXT,
hybrid_bm25_weight=request.app.state.config.HYBRID_BM25_WEIGHT, user=user,
hybrid_search=request.app.state.config.ENABLE_RAG_HYBRID_SEARCH, )
full_context=all_full_context
or request.app.state.config.RAG_FULL_CONTEXT,
user=user,
),
)
except Exception as e: except Exception as e:
log.exception(e) log.exception(e)
@ -1130,7 +1126,7 @@ async def process_chat_payload(request, form_data, user, metadata, model):
pass pass
event_emitter = get_event_emitter(metadata) event_emitter = get_event_emitter(metadata)
event_call = get_event_call(metadata) event_caller = get_event_call(metadata)
oauth_token = None oauth_token = None
try: try:
@ -1144,14 +1140,13 @@ async def process_chat_payload(request, form_data, user, metadata, model):
extra_params = { extra_params = {
"__event_emitter__": event_emitter, "__event_emitter__": event_emitter,
"__event_call__": event_call, "__event_call__": event_caller,
"__user__": user.model_dump() if isinstance(user, UserModel) else {}, "__user__": user.model_dump() if isinstance(user, UserModel) else {},
"__metadata__": metadata, "__metadata__": metadata,
"__oauth_token__": oauth_token,
"__request__": request, "__request__": request,
"__model__": model, "__model__": model,
"__oauth_token__": oauth_token,
} }
# Initialize events to store additional event to be sent to the client # Initialize events to store additional event to be sent to the client
# Initialize contexts and citation # Initialize contexts and citation
if getattr(request.state, "direct", False) and hasattr(request.state, "model"): if getattr(request.state, "direct", False) and hasattr(request.state, "model"):
@ -1414,6 +1409,9 @@ async def process_chat_payload(request, form_data, user, metadata, model):
headers=headers if headers else None, headers=headers if headers else None,
) )
function_name_filter_list = mcp_server_connection.get(
"function_name_filter_list", None
)
tool_specs = await mcp_clients[server_id].list_tool_specs() tool_specs = await mcp_clients[server_id].list_tool_specs()
for tool_spec in tool_specs: for tool_spec in tool_specs:
@ -1426,6 +1424,15 @@ async def process_chat_payload(request, form_data, user, metadata, model):
return tool_function return tool_function
if function_name_filter_list and isinstance(
function_name_filter_list, list
):
if not is_string_allowed(
tool_spec["name"], function_name_filter_list
):
# Skip this function
continue
tool_function = make_tool_function( tool_function = make_tool_function(
mcp_clients[server_id], tool_spec["name"] mcp_clients[server_id], tool_spec["name"]
) )
@ -1466,6 +1473,7 @@ async def process_chat_payload(request, form_data, user, metadata, model):
"__files__": metadata.get("files", []), "__files__": metadata.get("files", []),
}, },
) )
if mcp_tools_dict: if mcp_tools_dict:
tools_dict = {**tools_dict, **mcp_tools_dict} tools_dict = {**tools_dict, **mcp_tools_dict}
View file
@ -27,6 +27,45 @@ def deep_update(d, u):
return d return d
def get_allow_block_lists(filter_list):
allow_list = []
block_list = []
if filter_list:
for d in filter_list:
if d.startswith("!"):
# Domains starting with "!" → blocked
block_list.append(d[1:])
else:
# Domains starting without "!" → allowed
allow_list.append(d)
return allow_list, block_list
def is_string_allowed(string: str, filter_list: Optional[list[str]] = None) -> bool:
"""
Checks if a string is allowed based on the provided filter list.
:param string: The string to check (e.g., domain or hostname).
:param filter_list: List of allowed/blocked strings. Strings starting with "!" are blocked.
:return: True if the string is allowed, False otherwise.
"""
if not filter_list:
return True
allow_list, block_list = get_allow_block_lists(filter_list)
# If allow list is non-empty, require domain to match one of them
if allow_list:
if not any(string.endswith(allowed) for allowed in allow_list):
return False
# Block list always removes matches
if any(string.endswith(blocked) for blocked in block_list):
return False
return True
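Note that matching uses str.endswith, i.e. suffix matching, so an allow entry of "web" admits any name ending in "web". A few illustrative calls against the definition above:

assert is_string_allowed("search_web", ["search_web", "!delete_db"]) is True
assert is_string_allowed("delete_db", ["!delete_db"]) is False    # "!" entries block
assert is_string_allowed("anything", []) is True                  # empty filter allows all
assert is_string_allowed("my_search_web", ["web"]) is True        # suffix match, not equality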
def get_message_list(messages_map, message_id): def get_message_list(messages_map, message_id):
""" """
Reconstructs a list of messages in order up to the specified message_id. Reconstructs a list of messages in order up to the specified message_id.
View file
@ -53,6 +53,7 @@ from open_webui.config import (
OAUTH_ADMIN_ROLES, OAUTH_ADMIN_ROLES,
OAUTH_ALLOWED_DOMAINS, OAUTH_ALLOWED_DOMAINS,
OAUTH_UPDATE_PICTURE_ON_LOGIN, OAUTH_UPDATE_PICTURE_ON_LOGIN,
OAUTH_ACCESS_TOKEN_REQUEST_INCLUDE_CLIENT_ID,
WEBHOOK_URL, WEBHOOK_URL,
JWT_EXPIRES_IN, JWT_EXPIRES_IN,
AppConfig, AppConfig,
@ -1273,11 +1274,13 @@ class OAuthManager:
client = self.get_client(provider) client = self.get_client(provider)
auth_params = {} auth_params = {}
if client: if client:
if hasattr(client, "client_id"): if (
hasattr(client, "client_id")
and OAUTH_ACCESS_TOKEN_REQUEST_INCLUDE_CLIENT_ID
):
auth_params["client_id"] = client.client_id auth_params["client_id"] = client.client_id
if hasattr(client, "client_secret"):
auth_params["client_secret"] = client.client_secret
try: try:
token = await client.authorize_access_token(request, **auth_params) token = await client.authorize_access_token(request, **auth_params)
View file
@ -34,6 +34,7 @@ from langchain_core.utils.function_calling import (
) )
from open_webui.utils.misc import is_string_allowed
from open_webui.models.tools import Tools from open_webui.models.tools import Tools
from open_webui.models.users import UserModel from open_webui.models.users import UserModel
from open_webui.utils.plugin import load_tool_module_by_id from open_webui.utils.plugin import load_tool_module_by_id
@ -149,8 +150,20 @@ async def get_tools(
) )
specs = tool_server_data.get("specs", []) specs = tool_server_data.get("specs", [])
function_name_filter_list = tool_server_connection.get(
"function_name_filter_list", None
)
for spec in specs: for spec in specs:
function_name = spec["name"] function_name = spec["name"]
if function_name_filter_list and isinstance(
function_name_filter_list, list
):
if not is_string_allowed(
function_name, function_name_filter_list
):
# Skip this function
continue
auth_type = tool_server_connection.get("auth_type", "bearer") auth_type = tool_server_connection.get("auth_type", "bearer")
View file
@ -41,7 +41,7 @@ mcp==1.21.2
openai openai
anthropic anthropic
google-genai==1.38.0 google-genai==1.52.0
google-generativeai==0.8.5 google-generativeai==0.8.5
langchain==0.3.27 langchain==0.3.27
@ -106,6 +106,7 @@ google-auth-oauthlib
googleapis-common-protos==1.70.0 googleapis-common-protos==1.70.0
google-cloud-storage==2.19.0 google-cloud-storage==2.19.0
## Databases
pymongo pymongo
psycopg2-binary==2.9.10 psycopg2-binary==2.9.10
pgvector==0.4.1 pgvector==0.4.1
package-lock.json generated
View file
@ -1,12 +1,12 @@
{ {
"name": "open-webui", "name": "open-webui",
"version": "0.6.36", "version": "0.6.38",
"lockfileVersion": 3, "lockfileVersion": 3,
"requires": true, "requires": true,
"packages": { "packages": {
"": { "": {
"name": "open-webui", "name": "open-webui",
"version": "0.6.36", "version": "0.6.38",
"dependencies": { "dependencies": {
"@azure/msal-browser": "^4.5.0", "@azure/msal-browser": "^4.5.0",
"@codemirror/lang-javascript": "^6.2.2", "@codemirror/lang-javascript": "^6.2.2",
View file
@ -1,6 +1,6 @@
{ {
"name": "open-webui", "name": "open-webui",
"version": "0.6.36", "version": "0.6.38",
"private": true, "private": true,
"scripts": { "scripts": {
"dev": "npm run pyodide:fetch && vite dev --host", "dev": "npm run pyodide:fetch && vite dev --host",
View file
@ -37,9 +37,6 @@ dependencies = [
"pycrdt==0.12.25", "pycrdt==0.12.25",
"redis", "redis",
"PyMySQL==1.1.1",
"boto3==1.40.5",
"APScheduler==3.10.4", "APScheduler==3.10.4",
"RestrictedPython==8.0", "RestrictedPython==8.0",
@ -51,7 +48,7 @@ dependencies = [
"openai", "openai",
"anthropic", "anthropic",
"google-genai==1.38.0", "google-genai==1.52.0",
"google-generativeai==0.8.5", "google-generativeai==0.8.5",
"langchain==0.3.27", "langchain==0.3.27",
@ -60,6 +57,8 @@ dependencies = [
"fake-useragent==2.2.0", "fake-useragent==2.2.0",
"chromadb==1.0.20", "chromadb==1.0.20",
"opensearch-py==2.8.0", "opensearch-py==2.8.0",
"PyMySQL==1.1.1",
"boto3==1.40.5",
"transformers", "transformers",
"sentence-transformers==5.1.1", "sentence-transformers==5.1.1",
View file
@ -174,72 +174,72 @@
</span> --> </span> -->
</div> </div>
</body> </body>
</html>
<style type="text/css" nonce=""> <style type="text/css" nonce="">
html { html {
overflow-y: hidden !important; overflow-y: hidden !important;
overscroll-behavior-y: none; overscroll-behavior-y: none;
} }
#splash-screen { #splash-screen {
background: #fff; background: #fff;
} }
html.dark #splash-screen { html.dark #splash-screen {
background: #000; background: #000;
} }
html.her #splash-screen { html.her #splash-screen {
background: #983724; background: #983724;
} }
#logo-her { #logo-her {
display: none; display: none;
} }
#progress-background { #progress-background {
display: none; display: none;
} }
#progress-bar { #progress-bar {
display: none; display: none;
} }
html.her #logo { html.her #logo {
display: none; display: none;
} }
html.her #logo-her { html.her #logo-her {
display: block; display: block;
filter: invert(1); filter: invert(1);
} }
html.her #progress-background { html.her #progress-background {
display: block; display: block;
} }
html.her #progress-bar { html.her #progress-bar {
display: block; display: block;
} }
@media (max-width: 24rem) { @media (max-width: 24rem) {
html.her #progress-background { html.her #progress-background {
display: none; display: none;
} }
html.her #progress-bar { html.her #progress-bar {
display: none; display: none;
} }
} }
@keyframes pulse { @keyframes pulse {
50% { 50% {
opacity: 0.65; opacity: 0.65;
} }
} }
.animate-pulse-fast { .animate-pulse-fast {
animation: pulse 1.5s cubic-bezier(0.4, 0, 0.6, 1) infinite; animation: pulse 1.5s cubic-bezier(0.4, 0, 0.6, 1) infinite;
} }
</style> </style>
</html>
View file
@ -65,15 +65,7 @@ export const unarchiveAllChats = async (token: string) => {
return res; return res;
}; };
export const importChat = async ( export const importChats = async (token: string, chats: object[]) => {
token: string,
chat: object,
meta: object | null,
pinned?: boolean,
folderId?: string | null,
createdAt: number | null = null,
updatedAt: number | null = null
) => {
let error = null; let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/chats/import`, { const res = await fetch(`${WEBUI_API_BASE_URL}/chats/import`, {
@ -84,12 +76,7 @@ export const importChat = async (
authorization: `Bearer ${token}` authorization: `Bearer ${token}`
}, },
body: JSON.stringify({ body: JSON.stringify({
chat: chat, chats
meta: meta ?? {},
pinned: pinned,
folder_id: folderId,
created_at: createdAt ?? null,
updated_at: updatedAt ?? null
}) })
}) })
.then(async (res) => { .then(async (res) => {
View file
@ -239,10 +239,13 @@ export const updateFolderItemsById = async (token: string, id: string, items: Fo
return res; return res;
}; };
export const deleteFolderById = async (token: string, id: string) => { export const deleteFolderById = async (token: string, id: string, deleteContents: boolean) => {
let error = null; let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/folders/${id}`, { const searchParams = new URLSearchParams();
searchParams.append('delete_contents', deleteContents ? 'true' : 'false');
const res = await fetch(`${WEBUI_API_BASE_URL}/folders/${id}?${searchParams.toString()}`, {
method: 'DELETE', method: 'DELETE',
headers: { headers: {
Accept: 'application/json', Accept: 'application/json',
View file
@ -2,7 +2,7 @@ import { WEBUI_API_BASE_URL } from '$lib/constants';
import { models, config, type Model } from '$lib/stores'; import { models, config, type Model } from '$lib/stores';
import { get } from 'svelte/store'; import { get } from 'svelte/store';
export const getModels = async (token: string = ''): Promise<Model[] | null> => { export const getModels = async (token: string = ''): Promise<Model[] | null> => {
const lang = get(config)?.default_locale || 'de'; const lang = get(config)?.default_locale || 'en';
try { try {
const res = await fetch(`${WEBUI_API_BASE_URL}/models/`, { const res = await fetch(`${WEBUI_API_BASE_URL}/models/`, {
method: 'GET', method: 'GET',
@ -26,11 +26,70 @@ export const getModels = async (token: string = ''): Promise<Model[] | null> =>
} }
}; };
export const getModelItems = async (token: string = '') => { export const getModelItems = async (
const lang = get(config)?.default_locale || 'de'; token: string = '',
query,
viewOption,
selectedTag,
orderBy,
direction,
page
) => {
const lang = get(config)?.default_locale || 'en';
let error = null; let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/models/list`, { const searchParams = new URLSearchParams();
if (query) {
searchParams.append('query', query);
}
if (viewOption) {
searchParams.append('view_option', viewOption);
}
if (selectedTag) {
searchParams.append('tag', selectedTag);
}
if (orderBy) {
searchParams.append('order_by', orderBy);
}
if (direction) {
searchParams.append('direction', direction);
}
if (page) {
searchParams.append('page', page.toString());
}
const res = await fetch(`${WEBUI_API_BASE_URL}/models/list?${searchParams.toString()}`, {
method: 'GET',
headers: {
Accept: 'application/json',
'Content-Type': 'application/json',
authorization: `Bearer ${token}`
}
})
.then(async (res) => {
if (!res.ok) throw await res.json();
return res.json();
})
.then((json) => {
return json;
})
.catch((err) => {
error = err;
console.error(err);
return null;
});
if (error) {
throw error;
}
return res;
};
export const getModelTags = async (token: string = '') => {
let error = null;
const res = await fetch(`${WEBUI_API_BASE_URL}/models/tags`, {
method: 'GET', method: 'GET',
headers: { headers: {
Accept: 'application/json', Accept: 'application/json',
View file
@ -47,6 +47,7 @@
let key = ''; let key = '';
let headers = ''; let headers = '';
let functionNameFilterList = [];
let accessControl = {}; let accessControl = {};
let id = ''; let id = '';
@ -284,6 +285,7 @@
headers = JSON.stringify(_headers, null, 2); headers = JSON.stringify(_headers, null, 2);
} catch (error) { } catch (error) {
toast.error($i18n.t('Headers must be a valid JSON object')); toast.error($i18n.t('Headers must be a valid JSON object'));
loading = false;
return; return;
} }
} }
@ -302,7 +304,7 @@
key, key,
config: { config: {
enable: enable, enable: enable,
function_name_filter_list: functionNameFilterList,
access_control: accessControl access_control: accessControl
}, },
info: { info: {
@ -332,9 +334,11 @@
id = ''; id = '';
name = ''; name = '';
description = ''; description = '';
oauthClientInfo = null; oauthClientInfo = null;
enable = true; enable = true;
functionNameFilterList = [];
accessControl = null; accessControl = null;
}; };
@ -358,6 +362,7 @@
oauthClientInfo = connection.info?.oauth_client_info ?? null; oauthClientInfo = connection.info?.oauth_client_info ?? null;
enable = connection.config?.enable ?? true; enable = connection.config?.enable ?? true;
functionNameFilterList = connection.config?.function_name_filter_list ?? [];
accessControl = connection.config?.access_control ?? null; accessControl = connection.config?.access_control ?? null;
} }
}; };
@ -792,6 +797,25 @@
</div> </div>
</div> </div>
<div class="flex flex-col w-full mt-2">
<label
for="function-name-filter-list"
class={`mb-1 text-xs ${($settings?.highContrastMode ?? false) ? 'text-gray-800 dark:text-gray-100 placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700 text-gray-500'}`}
>{$i18n.t('Function Name Filter List')}</label
>
<div class="flex-1">
<input
id="function-name-filter-list"
class={`w-full text-sm bg-transparent ${($settings?.highContrastMode ?? false) ? 'placeholder:text-gray-700 dark:placeholder:text-gray-100' : 'outline-hidden placeholder:text-gray-300 dark:placeholder:text-gray-700'}`}
type="text"
bind:value={functionNameFilterList}
placeholder={$i18n.t('Enter function name filter list (e.g. func1, !func2)')}
autocomplete="off"
/>
</div>
</div>
<hr class=" border-gray-100 dark:border-gray-700/10 my-2.5 w-full" />
<div class="my-2 -mx-2">

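The new filter input above stores the list as a single comma-separated string, and its placeholder ("func1, !func2") suggests that bare names allow a function while a `!` prefix excludes it. The server-side semantics are not part of this diff, so the following interpretation is only an assumption, sketched for illustration:

```ts
// Hypothetical interpreter for a "func1, !func2" style filter list:
// an empty list allows everything, "!" entries deny, bare entries allow.
const isFunctionAllowed = (filterList: string, name: string): boolean => {
	const entries = filterList
		.split(',')
		.map((entry) => entry.trim())
		.filter((entry) => entry !== '');

	if (entries.length === 0) return true; // no filter configured
	if (entries.includes(`!${name}`)) return false; // explicit exclusion

	const allowList = entries.filter((entry) => !entry.startsWith('!'));
	return allowList.length === 0 || allowList.includes(name);
};
```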
View file

@ -288,7 +288,7 @@
class="bg-white dark:bg-gray-900 dark:border-gray-850 text-xs cursor-pointer hover:bg-gray-50 dark:hover:bg-gray-850/50 transition" class="bg-white dark:bg-gray-900 dark:border-gray-850 text-xs cursor-pointer hover:bg-gray-50 dark:hover:bg-gray-850/50 transition"
on:click={() => openFeedbackModal(feedback)} on:click={() => openFeedbackModal(feedback)}
> >
<td class=" py-0.5 text-right font-semibold"> <td class=" py-0.5 text-right font-medium">
<div class="flex justify-center"> <div class="flex justify-center">
<Tooltip content={feedback?.user?.name}> <Tooltip content={feedback?.user?.name}>
<div class="shrink-0"> <div class="shrink-0">
@ -306,7 +306,7 @@
<div class="flex flex-col items-start gap-0.5 h-full"> <div class="flex flex-col items-start gap-0.5 h-full">
<div class="flex flex-col h-full"> <div class="flex flex-col h-full">
{#if feedback.data?.sibling_model_ids} {#if feedback.data?.sibling_model_ids}
<div class="font-semibold text-gray-600 dark:text-gray-400 flex-1"> <div class="font-medium text-gray-600 dark:text-gray-400 flex-1">
{feedback.data?.model_id} {feedback.data?.model_id}
</div> </div>
@ -352,7 +352,7 @@
{dayjs(feedback.updated_at * 1000).fromNow()}
</td>
-<td class=" px-3 py-1 text-right font-semibold" on:click={(e) => e.stopPropagation()}>
+<td class=" px-3 py-1 text-right font-medium" on:click={(e) => e.stopPropagation()}>
<FeedbackMenu
on:delete={(e) => {
deleteFeedbackHandler(feedback.id);

View file

@ -530,7 +530,7 @@
{model.rating}
</td>
-<td class=" px-3 py-1.5 text-right font-semibold text-green-500">
+<td class=" px-3 py-1.5 text-right font-medium text-green-500">
<div class=" w-10">
{#if model.stats.won === '-'}
-
@ -543,7 +543,7 @@
</div>
</td>
-<td class="px-3 py-1.5 text-right font-semibold text-red-500">
+<td class="px-3 py-1.5 text-right font-medium text-red-500">
<div class=" w-10">
{#if model.stats.lost === '-'}
-

View file

@ -38,9 +38,11 @@
let showResetUploadDirConfirm = false;
let showReindexConfirm = false;

-let embeddingEngine = '';
-let embeddingModel = '';
-let embeddingBatchSize = 1;
+let RAG_EMBEDDING_ENGINE = '';
+let RAG_EMBEDDING_MODEL = '';
+let RAG_EMBEDDING_BATCH_SIZE = 1;
+let ENABLE_ASYNC_EMBEDDING = true;

let rerankingModel = '';
let OpenAIUrl = '';
@ -64,7 +66,7 @@
let RAGConfig = null;

const embeddingModelUpdateHandler = async () => {
-if (embeddingEngine === '' && embeddingModel.split('/').length - 1 > 1) {
+if (RAG_EMBEDDING_ENGINE === '' && RAG_EMBEDDING_MODEL.split('/').length - 1 > 1) {
toast.error(
$i18n.t(
'Model filesystem path detected. Model shortname is required for update, cannot continue.'
@ -72,7 +74,7 @@
);
return;
}

-if (embeddingEngine === 'ollama' && embeddingModel === '') {
+if (RAG_EMBEDDING_ENGINE === 'ollama' && RAG_EMBEDDING_MODEL === '') {
toast.error(
$i18n.t(
'Model filesystem path detected. Model shortname is required for update, cannot continue.'
@ -81,7 +83,7 @@
return;
}

-if (embeddingEngine === 'openai' && embeddingModel === '') {
+if (RAG_EMBEDDING_ENGINE === 'openai' && RAG_EMBEDDING_MODEL === '') {
toast.error(
$i18n.t(
'Model filesystem path detected. Model shortname is required for update, cannot continue.'
@ -91,20 +93,26 @@
}

if (
-embeddingEngine === 'azure_openai' &&
+RAG_EMBEDDING_ENGINE === 'azure_openai' &&
(AzureOpenAIKey === '' || AzureOpenAIUrl === '' || AzureOpenAIVersion === '')
) {
toast.error($i18n.t('OpenAI URL/Key required.'));
return;
}

-console.debug('Update embedding model attempt:', embeddingModel);
+console.debug('Update embedding model attempt:', {
+RAG_EMBEDDING_ENGINE,
+RAG_EMBEDDING_MODEL,
+RAG_EMBEDDING_BATCH_SIZE,
+ENABLE_ASYNC_EMBEDDING
+});

updateEmbeddingModelLoading = true;
const res = await updateEmbeddingConfig(localStorage.token, {
-embedding_engine: embeddingEngine,
-embedding_model: embeddingModel,
-embedding_batch_size: embeddingBatchSize,
+RAG_EMBEDDING_ENGINE: RAG_EMBEDDING_ENGINE,
+RAG_EMBEDDING_MODEL: RAG_EMBEDDING_MODEL,
+RAG_EMBEDDING_BATCH_SIZE: RAG_EMBEDDING_BATCH_SIZE,
+ENABLE_ASYNC_EMBEDDING: ENABLE_ASYNC_EMBEDDING,
ollama_config: {
key: OllamaKey,
url: OllamaUrl
@ -151,26 +159,6 @@
toast.error($i18n.t('Docling Server URL required.'));
return;
}
if (
RAGConfig.CONTENT_EXTRACTION_ENGINE === 'docling' &&
RAGConfig.DOCLING_DO_OCR &&
((RAGConfig.DOCLING_OCR_ENGINE === '' && RAGConfig.DOCLING_OCR_LANG !== '') ||
(RAGConfig.DOCLING_OCR_ENGINE !== '' && RAGConfig.DOCLING_OCR_LANG === ''))
) {
toast.error(
$i18n.t('Both Docling OCR Engine and Language(s) must be provided or both left empty.')
);
return;
}
if (
RAGConfig.CONTENT_EXTRACTION_ENGINE === 'docling' &&
RAGConfig.DOCLING_DO_OCR === false &&
RAGConfig.DOCLING_FORCE_OCR === true
) {
toast.error($i18n.t('In order to force OCR, performing OCR must be enabled.'));
return;
}
if (
RAGConfig.CONTENT_EXTRACTION_ENGINE === 'datalab_marker' &&
RAGConfig.DATALAB_MARKER_ADDITIONAL_CONFIG &&
@ -238,12 +226,6 @@
ALLOWED_FILE_EXTENSIONS: RAGConfig.ALLOWED_FILE_EXTENSIONS.split(',')
.map((ext) => ext.trim())
.filter((ext) => ext !== ''),
DOCLING_PICTURE_DESCRIPTION_LOCAL: JSON.parse(
RAGConfig.DOCLING_PICTURE_DESCRIPTION_LOCAL || '{}'
),
DOCLING_PICTURE_DESCRIPTION_API: JSON.parse(
RAGConfig.DOCLING_PICTURE_DESCRIPTION_API || '{}'
),
DOCLING_PARAMS:
typeof RAGConfig.DOCLING_PARAMS === 'string' && RAGConfig.DOCLING_PARAMS.trim() !== ''
? JSON.parse(RAGConfig.DOCLING_PARAMS)
@ -260,9 +242,10 @@
const embeddingConfig = await getEmbeddingConfig(localStorage.token);

if (embeddingConfig) {
-embeddingEngine = embeddingConfig.embedding_engine;
-embeddingModel = embeddingConfig.embedding_model;
-embeddingBatchSize = embeddingConfig.embedding_batch_size ?? 1;
+RAG_EMBEDDING_ENGINE = embeddingConfig.RAG_EMBEDDING_ENGINE;
+RAG_EMBEDDING_MODEL = embeddingConfig.RAG_EMBEDDING_MODEL;
+RAG_EMBEDDING_BATCH_SIZE = embeddingConfig.RAG_EMBEDDING_BATCH_SIZE ?? 1;
+ENABLE_ASYNC_EMBEDDING = embeddingConfig.ENABLE_ASYNC_EMBEDDING ?? true;

OpenAIKey = embeddingConfig.openai_config.key;
OpenAIUrl = embeddingConfig.openai_config.url;
@ -281,16 +264,6 @@
const config = await getRAGConfig(localStorage.token);
config.ALLOWED_FILE_EXTENSIONS = (config?.ALLOWED_FILE_EXTENSIONS ?? []).join(', ');
config.DOCLING_PICTURE_DESCRIPTION_LOCAL = JSON.stringify(
config.DOCLING_PICTURE_DESCRIPTION_LOCAL ?? {},
null,
2
);
config.DOCLING_PICTURE_DESCRIPTION_API = JSON.stringify(
config.DOCLING_PICTURE_DESCRIPTION_API ?? {},
null,
2
);
config.DOCLING_PARAMS =
typeof config.DOCLING_PARAMS === 'object'
? JSON.stringify(config.DOCLING_PARAMS ?? {}, null, 2)
@ -589,174 +562,19 @@
</div>
</div>
{:else if RAGConfig.CONTENT_EXTRACTION_ENGINE === 'docling'}
-<div class="flex w-full mt-1">
+<div class="my-0.5 flex gap-2 pr-2">
<input
class="flex-1 w-full text-sm bg-transparent outline-hidden"
placeholder={$i18n.t('Enter Docling Server URL')}
bind:value={RAGConfig.DOCLING_SERVER_URL}
/>
<SensitiveInput
placeholder={$i18n.t('Enter Docling API Key')}
bind:value={RAGConfig.DOCLING_API_KEY}
required={false}
/>
</div>
<div class="flex w-full mt-2">
<div class="flex-1 flex justify-between">
<div class=" self-center text-xs font-medium">
{$i18n.t('Perform OCR')}
</div>
<div class="flex items-center relative">
<Switch bind:state={RAGConfig.DOCLING_DO_OCR} />
</div>
</div>
</div>
{#if RAGConfig.DOCLING_DO_OCR}
<div class="flex w-full mt-2">
<input
class="flex-1 w-full text-sm bg-transparent outline-hidden"
placeholder={$i18n.t('Enter Docling OCR Engine')}
bind:value={RAGConfig.DOCLING_OCR_ENGINE}
/>
<input
class="flex-1 w-full text-sm bg-transparent outline-hidden"
placeholder={$i18n.t('Enter Docling OCR Language(s)')}
bind:value={RAGConfig.DOCLING_OCR_LANG}
/>
</div>
{/if}
<div class="flex w-full mt-2">
<div class="flex-1 flex justify-between">
<div class=" self-center text-xs font-medium">
{$i18n.t('Force OCR')}
</div>
<div class="flex items-center relative">
<Switch bind:state={RAGConfig.DOCLING_FORCE_OCR} />
</div>
</div>
</div>
<div class="flex justify-between w-full mt-2">
<div class="self-center text-xs font-medium">
<Tooltip content={''} placement="top-start">
{$i18n.t('PDF Backend')}
</Tooltip>
</div>
<div class="">
<select
class="dark:bg-gray-900 w-fit pr-8 rounded-sm px-2 text-xs bg-transparent outline-hidden text-right"
bind:value={RAGConfig.DOCLING_PDF_BACKEND}
>
<option value="pypdfium2">{$i18n.t('pypdfium2')}</option>
<option value="dlparse_v1">{$i18n.t('dlparse_v1')}</option>
<option value="dlparse_v2">{$i18n.t('dlparse_v2')}</option>
<option value="dlparse_v4">{$i18n.t('dlparse_v4')}</option>
</select>
</div>
</div>
<div class="flex justify-between w-full mt-2">
<div class="self-center text-xs font-medium">
<Tooltip content={''} placement="top-start">
{$i18n.t('Table Mode')}
</Tooltip>
</div>
<div class="">
<select
class="dark:bg-gray-900 w-fit pr-8 rounded-sm px-2 text-xs bg-transparent outline-hidden text-right"
bind:value={RAGConfig.DOCLING_TABLE_MODE}
>
<option value="fast">{$i18n.t('fast')}</option>
<option value="accurate">{$i18n.t('accurate')}</option>
</select>
</div>
</div>
<div class="flex justify-between w-full mt-2">
<div class="self-center text-xs font-medium">
<Tooltip content={''} placement="top-start">
{$i18n.t('Pipeline')}
</Tooltip>
</div>
<div class="">
<select
class="dark:bg-gray-900 w-fit pr-8 rounded-sm px-2 text-xs bg-transparent outline-hidden text-right"
bind:value={RAGConfig.DOCLING_PIPELINE}
>
<option value="standard">{$i18n.t('standard')}</option>
<option value="vlm">{$i18n.t('vlm')}</option>
</select>
</div>
</div>
<div class="flex w-full mt-2">
<div class="flex-1 flex justify-between">
<div class=" self-center text-xs font-medium">
{$i18n.t('Describe Pictures in Documents')}
</div>
<div class="flex items-center relative">
<Switch bind:state={RAGConfig.DOCLING_DO_PICTURE_DESCRIPTION} />
</div>
</div>
</div>
{#if RAGConfig.DOCLING_DO_PICTURE_DESCRIPTION}
<div class="flex justify-between w-full mt-2">
<div class="self-center text-xs font-medium">
<Tooltip content={''} placement="top-start">
{$i18n.t('Picture Description Mode')}
</Tooltip>
</div>
<div class="">
<select
class="dark:bg-gray-900 w-fit pr-8 rounded-sm px-2 text-xs bg-transparent outline-hidden text-right"
bind:value={RAGConfig.DOCLING_PICTURE_DESCRIPTION_MODE}
>
<option value="">{$i18n.t('Default')}</option>
<option value="local">{$i18n.t('Local')}</option>
<option value="api">{$i18n.t('API')}</option>
</select>
</div>
</div>
{#if RAGConfig.DOCLING_PICTURE_DESCRIPTION_MODE === 'local'}
<div class="flex flex-col gap-2 mt-2">
<div class=" flex flex-col w-full justify-between">
<div class=" mb-1 text-xs font-medium">
{$i18n.t('Picture Description Local Config')}
</div>
<div class="flex w-full items-center relative">
<Tooltip
content={$i18n.t(
'Options for running a local vision-language model in the picture description. The parameters refer to a model hosted on Hugging Face. This parameter is mutually exclusive with picture_description_api.'
)}
placement="top-start"
className="w-full"
>
<Textarea
bind:value={RAGConfig.DOCLING_PICTURE_DESCRIPTION_LOCAL}
placeholder={$i18n.t('Enter Config in JSON format')}
/>
</Tooltip>
</div>
</div>
</div>
{:else if RAGConfig.DOCLING_PICTURE_DESCRIPTION_MODE === 'api'}
<div class="flex flex-col gap-2 mt-2">
<div class=" flex flex-col w-full justify-between">
<div class=" mb-1 text-xs font-medium">
{$i18n.t('Picture Description API Config')}
</div>
<div class="flex w-full items-center relative">
<Tooltip
content={$i18n.t(
'API details for using a vision-language model in the picture description. This parameter is mutually exclusive with picture_description_local.'
)}
placement="top-start"
className="w-full"
>
<Textarea
bind:value={RAGConfig.DOCLING_PICTURE_DESCRIPTION_API}
placeholder={$i18n.t('Enter Config in JSON format')}
/>
</Tooltip>
</div>
</div>
</div>
{/if}
{/if}
<div class="flex flex-col gap-2 mt-2"> <div class="flex flex-col gap-2 mt-2">
<div class=" flex flex-col w-full justify-between"> <div class=" flex flex-col w-full justify-between">
<div class=" mb-1 text-xs font-medium"> <div class=" mb-1 text-xs font-medium">
@ -959,17 +777,17 @@
<div class="flex items-center relative"> <div class="flex items-center relative">
<select <select
class="dark:bg-gray-900 w-fit pr-8 rounded-sm px-2 p-1 text-xs bg-transparent outline-hidden text-right" class="dark:bg-gray-900 w-fit pr-8 rounded-sm px-2 p-1 text-xs bg-transparent outline-hidden text-right"
bind:value={embeddingEngine} bind:value={RAG_EMBEDDING_ENGINE}
placeholder={$i18n.t('Select an embedding model engine')} placeholder={$i18n.t('Select an embedding model engine')}
on:change={(e) => { on:change={(e) => {
if (e.target.value === 'ollama') { if (e.target.value === 'ollama') {
embeddingModel = ''; RAG_EMBEDDING_MODEL = '';
} else if (e.target.value === 'openai') { } else if (e.target.value === 'openai') {
embeddingModel = 'text-embedding-3-small'; RAG_EMBEDDING_MODEL = 'text-embedding-3-small';
} else if (e.target.value === 'azure_openai') { } else if (e.target.value === 'azure_openai') {
embeddingModel = 'text-embedding-3-small'; RAG_EMBEDDING_MODEL = 'text-embedding-3-small';
} else if (e.target.value === '') { } else if (e.target.value === '') {
embeddingModel = 'sentence-transformers/all-MiniLM-L6-v2'; RAG_EMBEDDING_MODEL = 'sentence-transformers/all-MiniLM-L6-v2';
} }
}} }}
> >
@ -981,7 +799,7 @@
</div>
</div>

-{#if embeddingEngine === 'openai'}
+{#if RAG_EMBEDDING_ENGINE === 'openai'}
<div class="my-0.5 flex gap-2 pr-2">
<input
class="flex-1 w-full text-sm bg-transparent outline-hidden"
@ -996,7 +814,7 @@
required={false}
/>
</div>
-{:else if embeddingEngine === 'ollama'}
+{:else if RAG_EMBEDDING_ENGINE === 'ollama'}
<div class="my-0.5 flex gap-2 pr-2">
<input
class="flex-1 w-full text-sm bg-transparent outline-hidden"
@ -1011,7 +829,7 @@
required={false}
/>
</div>
-{:else if embeddingEngine === 'azure_openai'}
+{:else if RAG_EMBEDDING_ENGINE === 'azure_openai'}
<div class="my-0.5 flex flex-col gap-2 pr-2 w-full">
<div class="flex gap-2">
<input
@ -1038,12 +856,12 @@
<div class=" mb-1 text-xs font-medium">{$i18n.t('Embedding Model')}</div> <div class=" mb-1 text-xs font-medium">{$i18n.t('Embedding Model')}</div>
<div class=""> <div class="">
{#if embeddingEngine === 'ollama'} {#if RAG_EMBEDDING_ENGINE === 'ollama'}
<div class="flex w-full"> <div class="flex w-full">
<div class="flex-1 mr-2"> <div class="flex-1 mr-2">
<input <input
class="flex-1 w-full text-sm bg-transparent outline-hidden" class="flex-1 w-full text-sm bg-transparent outline-hidden"
bind:value={embeddingModel} bind:value={RAG_EMBEDDING_MODEL}
placeholder={$i18n.t('Set embedding model')} placeholder={$i18n.t('Set embedding model')}
required required
/> />
@ -1055,13 +873,13 @@
<input
class="flex-1 w-full text-sm bg-transparent outline-hidden"
placeholder={$i18n.t('Set embedding model (e.g. {{model}})', {
-model: embeddingModel.slice(-40)
+model: RAG_EMBEDDING_MODEL.slice(-40)
})}
-bind:value={embeddingModel}
+bind:value={RAG_EMBEDDING_MODEL}
/>
</div>
-{#if embeddingEngine === ''}
+{#if RAG_EMBEDDING_ENGINE === ''}
<button
class="px-2.5 bg-transparent text-gray-800 dark:bg-transparent dark:text-gray-100 rounded-lg transition"
on:click={() => {
@ -1101,7 +919,7 @@
</div>
</div>

-{#if embeddingEngine === 'ollama' || embeddingEngine === 'openai' || embeddingEngine === 'azure_openai'}
+{#if RAG_EMBEDDING_ENGINE === 'ollama' || RAG_EMBEDDING_ENGINE === 'openai' || RAG_EMBEDDING_ENGINE === 'azure_openai'}
<div class=" mb-2.5 flex w-full justify-between">
<div class=" self-center text-xs font-medium">
{$i18n.t('Embedding Batch Size')}
@ -1109,7 +927,7 @@
<div class=""> <div class="">
<input <input
bind:value={embeddingBatchSize} bind:value={RAG_EMBEDDING_BATCH_SIZE}
type="number" type="number"
class=" bg-transparent text-center w-14 outline-none" class=" bg-transparent text-center w-14 outline-none"
min="-2" min="-2"
@ -1118,6 +936,22 @@
/>
</div>
</div>
<div class=" mb-2.5 flex w-full justify-between">
<div class="self-center text-xs font-medium">
<Tooltip
content={$i18n.t(
'Runs embedding tasks concurrently to speed up processing. Turn off if rate limits become an issue.'
)}
placement="top-start"
>
{$i18n.t('Async Embedding Processing')}
</Tooltip>
</div>
<div class="flex items-center relative">
<Switch bind:state={ENABLE_ASYNC_EMBEDDING} />
</div>
</div>
{/if}
</div>

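Taken together, the embedding settings now round-trip under uppercase config keys. A sketch of the request body `updateEmbeddingConfig` sends after this change — the field names are taken from the diff above, while the engine value and the `openai_config` block (visible only in the read path here) are illustrative assumptions:

```ts
const res = await updateEmbeddingConfig(localStorage.token, {
	RAG_EMBEDDING_ENGINE: 'openai', // '', 'ollama', 'openai' or 'azure_openai'
	RAG_EMBEDDING_MODEL: 'text-embedding-3-small',
	RAG_EMBEDDING_BATCH_SIZE: 1,
	ENABLE_ASYNC_EMBEDDING: true, // new: run embedding requests concurrently
	ollama_config: { key: OllamaKey, url: OllamaUrl },
	openai_config: { key: OpenAIKey, url: OpenAIUrl }
});
```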
View file

@ -312,7 +312,7 @@
bind:value={adminConfig.DEFAULT_GROUP_ID}
placeholder={$i18n.t('Select a group')}
>
-<option value={""}>None</option>
+<option value={''}>None</option>
{#each groups as group}
<option value={group.id}>{group.name}</option>
{/each}

View file

@ -21,6 +21,7 @@
import Textarea from '$lib/components/common/Textarea.svelte';
import Spinner from '$lib/components/common/Spinner.svelte';
import Banners from './Interface/Banners.svelte';
import PromptSuggestions from '$lib/components/workspace/Models/PromptSuggestions.svelte';
import LangPicker from './LangPicker.svelte';
@ -486,196 +487,13 @@
<div class=" self-center text-xs"> <div class=" self-center text-xs">
{$i18n.t('Default Prompt Suggestions')} {$i18n.t('Default Prompt Suggestions')}
</div> </div>
<PromptSuggestions bind:promptSuggestions />
<button {#if promptSuggestions.length > 0}
class="p-1 px-3 text-xs flex rounded-sm transition" <div class="text-xs text-left w-full mt-2">
type="button" {$i18n.t('Adjusting these settings will apply changes universally to all users.')}
on:click={() => {
if (promptSuggestions.length === 0 || promptSuggestions.at(-1).content !== '') {
promptSuggestions = [...promptSuggestions, { content: '', title: ['', ''] }];
}
}}
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 20 20"
fill="currentColor"
class="w-4 h-4"
>
<path
d="M10.75 4.75a.75.75 0 00-1.5 0v4.5h-4.5a.75.75 0 000 1.5h4.5v4.5a.75.75 0 001.5 0v-4.5h4.5a.75.75 0 000-1.5h-4.5v-4.5z"
/>
</svg>
</button>
</div>
<div class="grid lg:grid-cols-2 flex-col gap-1.5">
{#each promptSuggestions as prompt, promptIdx}
<div
class=" flex border rounded-xl border-gray-50 dark:border-none dark:bg-gray-850 py-1.5"
>
<div class="flex flex-col flex-1 pl-1">
<div class="py-1 gap-1">
<input
class="px-3 text-sm font-medium w-full bg-transparent outline-hidden"
placeholder={$i18n.t('Title (e.g. Tell me a fun fact)')}
bind:value={prompt.title[0]}
/>
<input
class="px-3 text-xs w-full bg-transparent outline-hidden text-gray-600 dark:text-gray-400"
placeholder={$i18n.t('Subtitle (e.g. about the Roman Empire)')}
bind:value={prompt.title[1]}
/>
</div>
<hr class="border-gray-50 dark:border-gray-850 my-1" />
<textarea
class="px-3 py-1.5 text-xs w-full bg-transparent outline-hidden resize-none"
placeholder={$i18n.t(
'Prompt (e.g. Tell me a fun fact about the Roman Empire)'
)}
rows="3"
bind:value={prompt.content}
/>
</div>
<div class="">
<button
class="p-3"
type="button"
on:click={() => {
promptSuggestions.splice(promptIdx, 1);
promptSuggestions = promptSuggestions;
}}
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 20 20"
fill="currentColor"
class="w-4 h-4"
>
<path
d="M6.28 5.22a.75.75 0 00-1.06 1.06L8.94 10l-3.72 3.72a.75.75 0 101.06 1.06L10 11.06l3.72 3.72a.75.75 0 101.06-1.06L11.06 10l3.72-3.72a.75.75 0 00-1.06-1.06L10 8.94 6.28 5.22z"
/>
</svg>
</button>
</div>
</div>
{/each}
</div>
{#if promptSuggestions.length > 0}
<div class="text-xs text-left w-full mt-2">
{$i18n.t('Adjusting these settings will apply changes universally to all users.')}
</div>
{/if}
<div class="flex items-center justify-end space-x-2 mt-2">
<input
id="prompt-suggestions-import-input"
type="file"
accept=".json"
hidden
on:change={(e) => {
const files = e.target.files;
if (!files || files.length === 0) {
return;
}
console.log(files);
let reader = new FileReader();
reader.onload = async (event) => {
try {
let suggestions = JSON.parse(event.target.result);
suggestions = suggestions.map((s) => {
if (typeof s.title === 'string') {
s.title = [s.title, ''];
} else if (!Array.isArray(s.title)) {
s.title = ['', ''];
}
return s;
});
promptSuggestions = [...promptSuggestions, ...suggestions];
} catch (error) {
toast.error($i18n.t('Invalid JSON file'));
return;
}
};
reader.readAsText(files[0]);
e.target.value = ''; // Reset the input value
}}
/>
<button
class="flex text-xs items-center space-x-1 px-3 py-1.5 rounded-xl bg-gray-50 hover:bg-gray-100 dark:bg-gray-800 dark:hover:bg-gray-700 dark:text-gray-200 transition"
type="button"
on:click={() => {
const input = document.getElementById('prompt-suggestions-import-input');
if (input) {
input.click();
}
}}
>
<div class=" self-center mr-2 font-medium line-clamp-1">
{$i18n.t('Import Prompt Suggestions')}
</div>
<div class=" self-center">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
fill="currentColor"
class="w-3.5 h-3.5"
>
<path
fill-rule="evenodd"
d="M4 2a1.5 1.5 0 0 0-1.5 1.5v9A1.5 1.5 0 0 0 4 14h8a1.5 1.5 0 0 0 1.5-1.5V6.621a1.5 1.5 0 0 0-.44-1.06L9.94 2.439A1.5 1.5 0 0 0 8.878 2H4Zm4 9.5a.75.75 0 0 1-.75-.75V8.06l-.72.72a.75.75 0 0 1-1.06-1.06l2-2a.75.75 0 0 1 1.06 0l2 2a.75.75 0 1 1-1.06 1.06l-.72-.72v2.69a.75.75 0 0 1-.75.75Z"
clip-rule="evenodd"
/>
</svg>
</div>
</button>
{#if promptSuggestions.length}
<button
class="flex text-xs items-center space-x-1 px-3 py-1.5 rounded-xl bg-gray-50 hover:bg-gray-100 dark:bg-gray-800 dark:hover:bg-gray-700 dark:text-gray-200 transition"
type="button"
on:click={async () => {
let blob = new Blob([JSON.stringify(promptSuggestions)], {
type: 'application/json'
});
saveAs(blob, `prompt-suggestions-export-${Date.now()}.json`);
}}
>
<div class=" self-center mr-2 font-medium line-clamp-1">
{$i18n.t('Export Prompt Suggestions')} ({promptSuggestions.length})
</div>
<div class=" self-center">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
fill="currentColor"
class="w-3.5 h-3.5"
>
<path
fill-rule="evenodd"
d="M4 2a1.5 1.5 0 0 0-1.5 1.5v9A1.5 1.5 0 0 0 4 14h8a1.5 1.5 0 0 0 1.5-1.5V6.621a1.5 1.5 0 0 0-.44-1.06L9.94 2.439A1.5 1.5 0 0 0 8.878 2H4Zm4 3.5a.75.75 0 0 1 .75.75v2.69l.72-.72a.75.75 0 1 1 1.06 1.06l-2 2a.75.75 0 0 1-1.06 0l-2-2a.75.75 0 0 1 1.06-1.06l.72.72V6.25A.75.75 0 0 1 8 5.5Z"
clip-rule="evenodd"
/>
</svg>
</div>
</button>
{/if}
</div>
</div>
{/if}
</div>
</div>

View file

@ -38,6 +38,7 @@
import EyeSlash from '$lib/components/icons/EyeSlash.svelte';
import Eye from '$lib/components/icons/Eye.svelte';
import { WEBUI_API_BASE_URL, WEBUI_BASE_URL } from '$lib/constants';
import { goto } from '$app/navigation';

let shiftKey = false;
@ -200,6 +201,16 @@
}
};
const cloneHandler = async (model) => {
sessionStorage.model = JSON.stringify({
...model,
base_model_id: model.id,
id: `${model.id}-clone`,
name: `${model.name} (Clone)`
});
goto('/workspace/models/create');
};
const exportModelHandler = async (model) => {
let blob = new Blob([JSON.stringify([model])], {
type: 'application/json'
@ -419,6 +430,9 @@
copyLinkHandler={() => {
copyLinkHandler(model);
}}
cloneHandler={() => {
cloneHandler(model);
}}
onClose={() => {}}
>
<button

View file

@ -25,6 +25,7 @@
export let exportHandler: Function;
export let hideHandler: Function;
export let copyLinkHandler: Function;
export let cloneHandler: Function;
export let onClose: Function;
@ -114,6 +115,17 @@
<div class="flex items-center">{$i18n.t('Copy Link')}</div> <div class="flex items-center">{$i18n.t('Copy Link')}</div>
</DropdownMenu.Item> </DropdownMenu.Item>
<DropdownMenu.Item
class="flex gap-2 items-center px-3 py-1.5 text-sm font-medium cursor-pointer hover:bg-gray-50 dark:hover:bg-gray-800 rounded-md"
on:click={() => {
cloneHandler();
}}
>
<DocumentDuplicate />
<div class="flex items-center">{$i18n.t('Clone')}</div>
</DropdownMenu.Item>
<DropdownMenu.Item
class="flex gap-2 items-center px-3 py-1.5 text-sm font-medium cursor-pointer hover:bg-gray-50 dark:hover:bg-gray-800 rounded-md"
on:click={() => {

View file

@ -224,7 +224,7 @@
<div class="overflow-y-scroll scrollbar-hidden h-full"> <div class="overflow-y-scroll scrollbar-hidden h-full">
{#if PIPELINES_LIST !== null} {#if PIPELINES_LIST !== null}
<div class="flex w-full justify-between mb-2"> <div class="flex w-full justify-between mb-2">
<div class=" self-center text-sm font-semibold"> <div class=" self-center text-sm font-medium">
{$i18n.t('Manage Pipelines')} {$i18n.t('Manage Pipelines')}
</div> </div>
</div> </div>
@ -410,7 +410,7 @@
</div>
<div class="mt-2 text-xs text-gray-500">
-<span class=" font-semibold dark:text-gray-200">{$i18n.t('Warning:')}</span>
+<span class=" font-medium dark:text-gray-200">{$i18n.t('Warning:')}</span>
{$i18n.t('Pipelines are a plugin system with arbitrary code execution —')}
<span class=" font-medium dark:text-gray-400"
>{$i18n.t("don't fetch random pipelines from sources you don't trust.")}</span
@ -423,7 +423,7 @@
{#if pipelines !== null}
{#if pipelines.length > 0}
<div class="flex w-full justify-between mb-2">
-<div class=" self-center text-sm font-semibold">
+<div class=" self-center text-sm font-medium">
{$i18n.t('Pipelines Valves')}
</div>
</div>

View file

@ -169,7 +169,7 @@
</div>
{:else}
<div>
-<div class=" flex items-center gap-3 justify-between text-xs uppercase px-1 font-semibold">
+<div class=" flex items-center gap-3 justify-between text-xs uppercase px-1 font-medium">
<div class="w-full basis-3/5">{$i18n.t('Group')}</div>
<div class="w-full basis-2/5 text-right">{$i18n.t('Users')}</div>

View file

@ -100,6 +100,7 @@
<div class="flex-1"> <div class="flex-1">
<button <button
class="text-xs bg-transparent hover:underline cursor-pointer" class="text-xs bg-transparent hover:underline cursor-pointer"
type="button"
on:click={() => onDelete()} on:click={() => onDelete()}
> >
{$i18n.t('Delete')} {$i18n.t('Delete')}

View file

@ -2,37 +2,56 @@
import { getContext } from 'svelte';
const i18n = getContext('i18n');

import dayjs from 'dayjs';
import relativeTime from 'dayjs/plugin/relativeTime';
import localizedFormat from 'dayjs/plugin/localizedFormat';
dayjs.extend(relativeTime);
dayjs.extend(localizedFormat);

import { getUsers } from '$lib/apis/users';
import { toast } from 'svelte-sonner';
import { addUserToGroup, removeUserFromGroup } from '$lib/apis/groups';
import { WEBUI_API_BASE_URL } from '$lib/constants';

import Tooltip from '$lib/components/common/Tooltip.svelte';
import Checkbox from '$lib/components/common/Checkbox.svelte';
import Badge from '$lib/components/common/Badge.svelte';
import Search from '$lib/components/icons/Search.svelte';
import Pagination from '$lib/components/common/Pagination.svelte';
import ChevronDown from '$lib/components/icons/ChevronDown.svelte';
import ChevronUp from '$lib/components/icons/ChevronUp.svelte';
import Spinner from '$lib/components/common/Spinner.svelte';

export let groupId: string;
export let userCount = 0;

-let users = [];
-let total = 0;
+let users = null;
+let total = null;
let query = '';

let orderBy = `group_id:${groupId}`; // default sort key
let direction = 'desc'; // default sort order

let page = 1;
const setSortKey = (key) => {
if (orderBy === key) {
direction = direction === 'asc' ? 'desc' : 'asc';
} else {
orderBy = key;
direction = 'asc';
}
};
const getUserList = async () => {
try {
-const res = await getUsers(
-localStorage.token,
-query,
-`group_id:${groupId}`,
-null,
-page
-).catch((error) => {
-toast.error(`${error}`);
-return null;
-});
+const res = await getUsers(localStorage.token, query, orderBy, direction, page).catch(
+(error) => {
+toast.error(`${error}`);
+return null;
+}
+);

if (res) {
users = res.users;
@ -60,11 +79,7 @@
getUserList();
};

-$: if (page) {
-getUserList();
-}
-$: if (query !== null) {
-getUserList();
-}
+$: if (page !== null && query !== null && orderBy !== null && direction !== null) {
+getUserList();
+}
@ -87,42 +102,169 @@
</div>
</div>

<div class="flex-1 overflow-y-auto scrollbar-hidden">
<div class="flex flex-col gap-2.5">
{#if users.length > 0}
{#each users as user, userIdx (user.id)}
<div class="flex flex-row items-center gap-3 w-full text-sm">
<div class="flex items-center">
<Checkbox
state={(user?.group_ids ?? []).includes(groupId) ? 'checked' : 'unchecked'}
on:change={(e) => {
toggleMember(user.id, e.detail);
}}
/>
</div>
<div class="flex w-full items-center justify-between overflow-hidden">
<Tooltip content={user.email} placement="top-start">
<div class="flex">
<div class=" font-medium self-center truncate">{user.name}</div>
</div>
</Tooltip>
{#if (user?.group_ids ?? []).includes(groupId)}
<Badge type="success" content="member" />
{/if}
</div>
</div>
{/each}
{:else}
<div class="text-gray-500 text-xs text-center py-2 px-10">
{$i18n.t('No users were found.')}
</div>
{/if}
</div>
</div>

{#if users === null || total === null}
<div class="my-10">
<Spinner className="size-5" />
</div>
{:else}
{#if users.length > 0}
<div class="scrollbar-hidden relative whitespace-nowrap overflow-x-auto max-w-full">
<table
class="w-full text-sm text-left text-gray-500 dark:text-gray-400 table-auto max-w-full"
>
<thead class="text-xs text-gray-800 uppercase bg-transparent dark:text-gray-200">
<tr class=" border-b-[1.5px] border-gray-50 dark:border-gray-850">
<th
scope="col"
class="px-2.5 py-2 cursor-pointer text-left w-8"
on:click={() => setSortKey(`group_id:${groupId}`)}
>
<div class="flex gap-1.5 items-center">
{$i18n.t('MBR')}
{#if orderBy === `group_id:${groupId}`}
<span class="font-normal"
>{#if direction === 'asc'}
<ChevronUp className="size-2" />
{:else}
<ChevronDown className="size-2" />
{/if}
</span>
{:else}
<span class="invisible">
<ChevronUp className="size-2" />
</span>
{/if}
</div>
</th>
<th
scope="col"
class="px-2.5 py-2 cursor-pointer select-none"
on:click={() => setSortKey('role')}
>
<div class="flex gap-1.5 items-center">
{$i18n.t('Role')}
{#if orderBy === 'role'}
<span class="font-normal"
>{#if direction === 'asc'}
<ChevronUp className="size-2" />
{:else}
<ChevronDown className="size-2" />
{/if}
</span>
{:else}
<span class="invisible">
<ChevronUp className="size-2" />
</span>
{/if}
</div>
</th>
<th
scope="col"
class="px-2.5 py-2 cursor-pointer select-none"
on:click={() => setSortKey('name')}
>
<div class="flex gap-1.5 items-center">
{$i18n.t('Name')}
{#if orderBy === 'name'}
<span class="font-normal"
>{#if direction === 'asc'}
<ChevronUp className="size-2" />
{:else}
<ChevronDown className="size-2" />
{/if}
</span>
{:else}
<span class="invisible">
<ChevronUp className="size-2" />
</span>
{/if}
</div>
</th>
<th
scope="col"
class="px-2.5 py-2 cursor-pointer select-none"
on:click={() => setSortKey('last_active_at')}
>
<div class="flex gap-1.5 items-center">
{$i18n.t('Last Active')}
{#if orderBy === 'last_active_at'}
<span class="font-normal"
>{#if direction === 'asc'}
<ChevronUp className="size-2" />
{:else}
<ChevronDown className="size-2" />
{/if}
</span>
{:else}
<span class="invisible">
<ChevronUp className="size-2" />
</span>
{/if}
</div>
</th>
</tr>
</thead>
<tbody class="">
{#each users as user, userIdx (user?.id ?? userIdx)}
<tr class="bg-white dark:bg-gray-900 dark:border-gray-850 text-xs">
<td class=" px-3 py-1 w-8">
<div class="flex w-full justify-center">
<Checkbox
state={(user?.group_ids ?? []).includes(groupId) ? 'checked' : 'unchecked'}
on:change={(e) => {
toggleMember(user.id, e.detail);
}}
/>
</div>
</td>
<td class="px-3 py-1 min-w-[7rem] w-28">
<div class=" translate-y-0.5">
<Badge
type={user.role === 'admin'
? 'info'
: user.role === 'user'
? 'success'
: 'muted'}
content={$i18n.t(user.role)}
/>
</div>
</td>
<td class="px-3 py-1 font-medium text-gray-900 dark:text-white max-w-48">
<Tooltip content={user.email} placement="top-start">
<div class="flex items-center">
<img
class="rounded-full w-6 h-6 object-cover mr-2.5 flex-shrink-0"
src={`${WEBUI_API_BASE_URL}/users/${user.id}/profile/image`}
alt="user"
/>
<div class="font-medium truncate">{user.name}</div>
</div>
</Tooltip>
</td>
<td class=" px-3 py-1">
{dayjs(user.last_active_at * 1000).fromNow()}
</td>
</tr>
{/each}
</tbody>
</table>
</div>
{:else}
<div class="text-gray-500 text-xs text-center py-2 px-10">
{$i18n.t('No users were found.')}
</div>
{/if}
{#if total > 30}
<Pagination bind:page count={total} perPage={30} />
{/if}
{/if}
</div>

View file

@ -53,7 +53,7 @@
placement="right" placement="right"
> >
<img <img
src={`${WEBUI_API_BASE_URL}/models/model/profile/image?id=${model.id}&lang=${$i18n.language}`} src={`${WEBUI_API_BASE_URL}/models/model/profile/image?id=${model?.id}&lang=${$i18n.language}`}
class=" size-[2.7rem] rounded-full border-[1px] border-gray-100 dark:border-none" class=" size-[2.7rem] rounded-full border-[1px] border-gray-100 dark:border-none"
alt="logo" alt="logo"
draggable="false" draggable="false"

View file

@ -468,7 +468,7 @@
}

if ($settings.audio?.tts?.engine === 'browser-kokoro') {
-const blob = await $TTSWorker
+const url = await $TTSWorker
.generate({
text: content,
voice: $settings?.audio?.tts?.voice ?? $config?.audio?.tts?.voice
@ -478,8 +478,8 @@
toast.error(`${error}`);
});

-if (blob) {
-audioCache.set(content, new Audio(blob));
+if (url) {
+audioCache.set(content, new Audio(url));
}
} else if ($config.audio.tts.engine !== '') {
const res = await synthesizeOpenAISpeech(

View file

@ -168,9 +168,9 @@
title={$i18n.t('Content')}
></iframe>
{:else}
-<pre class="text-sm dark:text-gray-400 whitespace-pre-line">
-{document.document}
-</pre>
+<pre class="text-sm dark:text-gray-400 whitespace-pre-line">{document.document
+.trim()
+.replace(/\n\n+/g, '\n\n')}</pre>
{/if}
</div>
</div>

View file

@ -5,9 +5,12 @@
import markedExtension from '$lib/utils/marked/extension';
import markedKatexExtension from '$lib/utils/marked/katex-extension';
import { disableSingleTilde } from '$lib/utils/marked/strikethrough-extension';
import { mentionExtension } from '$lib/utils/marked/mention-extension';
import MarkdownTokens from './Markdown/MarkdownTokens.svelte';
import footnoteExtension from '$lib/utils/marked/footnote-extension';
import citationExtension from '$lib/utils/marked/citation-extension';

export let id = '';
export let content;
@ -38,6 +41,9 @@
marked.use(markedKatexExtension(options));
marked.use(markedExtension(options));
marked.use(citationExtension(options));
marked.use(footnoteExtension(options));
marked.use(disableSingleTilde);
marked.use({
extensions: [mentionExtension({ triggerChar: '@' }), mentionExtension({ triggerChar: '#' })]
});
@ -45,7 +51,7 @@
$: (async () => {
if (content) {
tokens = marked.lexer(
-replaceTokens(processResponseContent(content), sourceIds, model?.name, $user?.name)
+replaceTokens(processResponseContent(content), model?.name, $user?.name)
);
}
})();
@ -59,6 +65,7 @@
{save}
{preview}
{editCodeBlock}
{sourceIds}
{topPadding}
{onTaskClick}
{onSourceClick}

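For orientation on the `citation` and `footnote` tokens consumed below: marked extensions register an inline tokenizer that emits typed tokens, which the Svelte token components can then switch on. A minimal sketch of what a citation tokenizer along these lines could look like, assuming `[1]` / `[1, 3]` markers with 1-based ids (matching the `token.ids[0] - 1` indexing used later); the actual `citation-extension.ts` in this commit may differ:

```ts
import { marked } from 'marked';

// Hypothetical inline tokenizer for "[1]" or "[1, 3]" citation markers.
// It emits { type: 'citation', ids: [...] } tokens for the components above.
const citationExtension = {
	name: 'citation',
	level: 'inline' as const,
	// Tell marked where a citation token could start.
	start(src: string) {
		return src.match(/\[\d/)?.index;
	},
	tokenizer(src: string) {
		const match = /^\[(\d+(?:\s*,\s*\d+)*)\]/.exec(src);
		if (match) {
			return {
				type: 'citation' as const,
				raw: match[0],
				ids: match[1].split(',').map((id) => Number(id.trim())) // 1-based
			};
		}
		// Returning undefined lets other tokenizers handle the text.
	}
};

marked.use({ extensions: [citationExtension] });
```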
View file

@ -3,14 +3,11 @@
import type { Token } from 'marked';
import { WEBUI_BASE_URL } from '$lib/constants';
-import Source from './Source.svelte';
import { settings } from '$lib/stores';

export let id: string;
export let token: Token;
-export let onSourceClick: Function = () => {};

let html: string | null = null;

$: if (token.type === 'html' && token?.text) {
@ -129,8 +126,6 @@
}}
></iframe>
{/if}
-{:else if token.text.includes(`<source_id`)}
-<Source {id} {token} onClick={onSourceClick} />
{:else if token.text.trim().match(/^<br\s*\/?>$/i)}
<br />
{:else}

View file

@ -17,10 +17,12 @@
import TextToken from './MarkdownInlineTokens/TextToken.svelte';
import CodespanToken from './MarkdownInlineTokens/CodespanToken.svelte';
import MentionToken from './MarkdownInlineTokens/MentionToken.svelte';
import SourceToken from './SourceToken.svelte';

export let id: string;
export let done = true;
export let tokens: Token[];
export let sourceIds = [];
export let onSourceClick: Function = () => {};
</script>
@ -68,6 +70,17 @@
></iframe>
{:else if token.type === 'mention'}
<MentionToken {token} />
{:else if token.type === 'footnote'}
{@html DOMPurify.sanitize(
`<sup class="footnote-ref footnote-ref-text">${token.escapedText}</sup>`
) || ''}
{:else if token.type === 'citation'}
<SourceToken {id} {token} {sourceIds} onClick={onSourceClick} />
<!-- {#if token.ids && token.ids.length > 0}
{#each token.ids as sourceId}
<Source id={sourceId - 1} title={sourceIds[sourceId - 1]} onClick={onSourceClick} />
{/each}
{/if} -->
{:else if token.type === 'text'}
<TextToken {token} {done} />
{/if}

View file

@ -21,7 +21,6 @@
import Tooltip from '$lib/components/common/Tooltip.svelte';
import Download from '$lib/components/icons/Download.svelte';
-import Source from './Source.svelte';
import HtmlToken from './HTMLToken.svelte';
import Clipboard from '$lib/components/icons/Clipboard.svelte';
@ -29,6 +28,7 @@
export let tokens: Token[];
export let top = true;
export let attributes = {};
export let sourceIds = [];

export let done = true;
@ -96,6 +96,7 @@
id={`${id}-${tokenIdx}-h`}
tokens={token.tokens}
{done}
{sourceIds}
{onSourceClick}
/>
</svelte:element>
@ -147,6 +148,7 @@
id={`${id}-${tokenIdx}-header-${headerIdx}`}
tokens={header.tokens}
{done}
{sourceIds}
{onSourceClick}
/>
</div>
@ -172,6 +174,7 @@
id={`${id}-${tokenIdx}-row-${rowIdx}-${cellIdx}`}
tokens={cell.tokens}
{done}
{sourceIds}
{onSourceClick}
/>
</div>
@ -221,6 +224,7 @@
{done}
{editCodeBlock}
{onTaskClick}
{sourceIds}
{onSourceClick}
/>
</blockquote>
@ -255,6 +259,7 @@
{done}
{editCodeBlock}
{onTaskClick}
{sourceIds}
{onSourceClick}
/>
</li>
@ -289,6 +294,7 @@
{done}
{editCodeBlock}
{onTaskClick}
{sourceIds}
{onSourceClick}
/>
</div>
@ -300,6 +306,7 @@
{done}
{editCodeBlock}
{onTaskClick}
{sourceIds}
{onSourceClick}
/>
{/if}
@ -323,6 +330,7 @@
{done}
{editCodeBlock}
{onTaskClick}
{sourceIds}
{onSourceClick}
/>
</div>
@ -348,6 +356,7 @@
id={`${id}-${tokenIdx}-p`}
tokens={token.tokens ?? []}
{done}
{sourceIds}
{onSourceClick}
/>
</p>
@ -359,6 +368,7 @@
id={`${id}-${tokenIdx}-t`}
tokens={token.tokens}
{done}
{sourceIds}
{onSourceClick}
/>
{:else}
@ -370,6 +380,7 @@
id={`${id}-${tokenIdx}-p`}
tokens={token.tokens ?? []}
{done}
{sourceIds}
{onSourceClick}
/>
{:else}

View file

@ -1,23 +1,10 @@
<script lang="ts"> <script lang="ts">
export let id; export let id;
export let token;
export let title: string = 'N/A';
export let onClick: Function = () => {}; export let onClick: Function = () => {};
let attributes: Record<string, string | undefined> = {};
function extractAttributes(input: string): Record<string, string> {
const regex = /(\w+)="([^"]*)"/g;
let match;
let attrs: Record<string, string> = {};
// Loop through all matches and populate the attributes object
while ((match = regex.exec(input)) !== null) {
attrs[match[1]] = match[2];
}
return attrs;
}
// Helper function to return only the domain from a URL
function getDomain(url: string): string {
const domain = url.replace('http://', '').replace('https://', '').split(/[/?#]/)[0];
@ -44,23 +31,17 @@
}
return title;
};

-$: attributes = extractAttributes(token.text);
</script>

-{#if attributes.title !== 'N/A'}
+{#if title !== 'N/A'}
<button
-class="text-xs font-medium w-fit translate-y-[2px] px-2 py-0.5 dark:bg-white/5 dark:text-white/60 dark:hover:text-white bg-gray-50 text-black/60 hover:text-black transition rounded-lg"
+class="text-[10px] w-fit translate-y-[2px] px-2 py-0.5 dark:bg-white/5 dark:text-white/80 dark:hover:text-white bg-gray-50 text-black/80 hover:text-black transition rounded-xl"
on:click={() => {
-onClick(id, attributes.data);
+onClick(id);
}}
>
<span class="line-clamp-1">
-{getDisplayTitle(
-decodeURIComponent(attributes.title)
-? formattedTitle(decodeURIComponent(attributes.title))
-: ''
-)}
+{getDisplayTitle(formattedTitle(decodeURIComponent(title)))}
</span>
</button>
{/if}

View file

@ -0,0 +1,74 @@
<script lang="ts">
import { LinkPreview } from 'bits-ui';
import Source from './Source.svelte';
export let id;
export let token;
export let sourceIds = [];
export let onClick: Function = () => {};
let containerElement;
let openPreview = false;
// Helper function to return only the domain from a URL
function getDomain(url: string): string {
const domain = url.replace('http://', '').replace('https://', '').split(/[/?#]/)[0];
if (domain.startsWith('www.')) {
return domain.slice(4);
}
return domain;
}
// Helper function to check if text is a URL and return the domain
function formattedTitle(title: string): string {
if (title.startsWith('http')) {
return getDomain(title);
}
return title;
}
const getDisplayTitle = (title: string) => {
if (!title) return 'N/A';
if (title.length > 30) {
return title.slice(0, 15) + '...' + title.slice(-10);
}
return title;
};
</script>
{#if (token?.ids ?? []).length == 1}
<Source id={token.ids[0] - 1} title={sourceIds[token.ids[0] - 1]} {onClick} />
{:else}
<LinkPreview.Root openDelay={0} bind:open={openPreview}>
<LinkPreview.Trigger>
<button
class="text-[10px] w-fit translate-y-[2px] px-2 py-0.5 dark:bg-white/5 dark:text-white/80 dark:hover:text-white bg-gray-50 text-black/80 hover:text-black transition rounded-xl"
on:click={() => {
openPreview = !openPreview;
}}
>
<span class="line-clamp-1">
{getDisplayTitle(formattedTitle(decodeURIComponent(sourceIds[token.ids[0] - 1])))}
<span class="dark:text-white/50 text-black/50">+{(token?.ids ?? []).length - 1}</span>
</span>
</button>
</LinkPreview.Trigger>
<LinkPreview.Content
class="z-[999]"
align="start"
strategy="fixed"
sideOffset={6}
el={containerElement}
>
<div class="bg-gray-50 dark:bg-gray-850 rounded-xl p-1 cursor-pointer">
{#each token.ids as sourceId}
<div class="">
<Source id={sourceId - 1} title={sourceIds[sourceId - 1]} {onClick} />
</div>
{/each}
</div>
</LinkPreview.Content>
</LinkPreview.Root>
{/if}

View file

@ -279,7 +279,7 @@
}

for (const [idx, sentence] of messageContentParts.entries()) {
-const blob = await $TTSWorker
+const url = await $TTSWorker
.generate({
text: sentence,
voice: $settings?.audio?.tts?.voice ?? $config?.audio?.tts?.voice
@ -292,8 +292,7 @@
loadingSpeech = false;
});

-if (blob) {
-const url = URL.createObjectURL(blob);
+if (url && speaking) {
$audioQueue.enqueue(url);
loadingSpeech = false;
}
@ -314,7 +313,7 @@
loadingSpeech = false;
});

-if (res) {
+if (res && speaking) {
const blob = await res.blob();
const url = URL.createObjectURL(blob);
@ -627,7 +626,7 @@
>
<div class={`shrink-0 ltr:mr-3 rtl:ml-3 hidden @lg:flex mt-1 `}>
<ProfileImage
-src={`${WEBUI_API_BASE_URL}/models/model/profile/image?id=${model.id}&lang=${$i18n.language}`}
+src={`${WEBUI_API_BASE_URL}/models/model/profile/image?id=${model?.id}&lang=${$i18n.language}`}
className={'size-8 assistant-message-profile-image'}
/>
</div>
@ -797,11 +796,11 @@
onTaskClick={async (e) => {
console.log(e);
}}
-onSourceClick={async (id, idx) => {
-console.log(id, idx);
+onSourceClick={async (id) => {
+console.log(id);
if (citationsElement) {
-citationsElement?.showSourceModal(idx - 1);
+citationsElement?.showSourceModal(id);
}
}}
onAddMessages={({ modelId, parentId, messages }) => {

View file

@ -6,7 +6,7 @@
import { models, settings } from '$lib/stores';
import { user as _user } from '$lib/stores';
import { copyToClipboard as _copyToClipboard, formatDate } from '$lib/utils';
-import { WEBUI_BASE_URL } from '$lib/constants';
+import { WEBUI_API_BASE_URL, WEBUI_BASE_URL } from '$lib/constants';

import Name from './Name.svelte';
import ProfileImage from './ProfileImage.svelte';

View file

@ -7,11 +7,18 @@
import { getTimeRange } from '$lib/utils';
import ChevronUp from '$lib/components/icons/ChevronUp.svelte';
import ChevronDown from '$lib/components/icons/ChevronDown.svelte';
import Loader from '$lib/components/common/Loader.svelte';
import Spinner from '$lib/components/common/Spinner.svelte';

dayjs.extend(localizedFormat);

export let chats = [];
export let chatListLoading = false;
export let allChatsLoaded = false;
export let loadHandler: Function = null;

let chatList = null;

const init = async () => {
@ -158,19 +165,19 @@
</a>
{/each}

-<!-- {#if !allChatsLoaded && loadHandler}
+{#if !allChatsLoaded && loadHandler}
<Loader
on:visible={(e) => {
if (!chatListLoading) {
loadHandler();
}
}}
>
<div class="w-full flex justify-center py-1 text-xs animate-pulse items-center gap-2">
<Spinner className=" size-4" />
<div class=" ">Loading...</div>
</div>
</Loader>
-{/if} -->
+{/if}
</div>
{/if}

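The previously commented-out infinite-scroll block is live again: `Loader` acts as a sentinel that fires a `visible` event, and the `chatListLoading` guard prevents duplicate fetches while a page is in flight. A sketch of the visibility detection such a sentinel presumably uses (the real Loader.svelte is not part of this diff):

```ts
// Hypothetical sentinel helper built on IntersectionObserver: invokes the
// callback each time the element scrolls into view, returns a cleanup fn.
const observeVisibility = (el: HTMLElement, onVisible: () => void): (() => void) => {
	const observer = new IntersectionObserver((entries) => {
		if (entries.some((entry) => entry.isIntersecting)) {
			onVisible();
		}
	});
	observer.observe(el);
	return () => observer.disconnect(); // call on component teardown
};
```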
View file

@ -13,11 +13,38 @@
let selectedTab = 'chats';

let page = 1;
let chats = null;
let chatListLoading = false;
let allChatsLoaded = false;
const loadChats = async () => {
chatListLoading = true;
page += 1;
let newChatList = [];
newChatList = await getChatListByFolderId(localStorage.token, folder.id, page).catch(
(error) => {
console.error(error);
return [];
}
);
// once the bottom of the list has been reached (no results) there is no need to continue querying
allChatsLoaded = newChatList.length === 0;
chats = [...chats, ...newChatList];
chatListLoading = false;
};
const setChatList = async () => {
chats = null;
page = 1;
allChatsLoaded = false;
chatListLoading = false;

if (folder && folder.id) {
const res = await getChatListByFolderId(localStorage.token, folder.id, page);
@ -71,7 +98,7 @@
<FolderKnowledge />
{:else if selectedTab === 'chats'}
{#if chats !== null}
-<ChatList {chats} />
+<ChatList {chats} {chatListLoading} {allChatsLoaded} loadHandler={loadChats} />
{:else}
<div class="py-10">
<Spinner />

View file

@ -31,6 +31,7 @@
let showFolderModal = false;
let showDeleteConfirm = false;
let deleteFolderContents = true;
const updateHandler = async ({ name, meta, data }) => {
if (name === '') {
@ -98,10 +99,12 @@
};
const deleteHandler = async () => {
-const res = await deleteFolderById(localStorage.token, folder.id).catch((error) => {
-toast.error(`${error}`);
-return null;
-});
+const res = await deleteFolderById(localStorage.token, folder.id, deleteFolderContents).catch(
+(error) => {
+toast.error(`${error}`);
+return null;
+}
+);

if (res) {
toast.success($i18n.t('Folder deleted successfully'));
@ -141,15 +144,22 @@
deleteHandler();
}}
>
-<div class=" text-sm text-gray-700 dark:text-gray-300 flex-1 line-clamp-3">
-{@html DOMPurify.sanitize(
-$i18n.t(
-'This will delete <strong>{{NAME}}</strong> and <strong>all its contents</strong>.',
-{
-NAME: folder.name
-}
-)
-)}
-</div>
+<div class=" text-sm text-gray-700 dark:text-gray-300 flex-1 line-clamp-3 mb-2">
+<!-- {$i18n.t('This will delete <strong>{{NAME}}</strong> and <strong>all its contents</strong>.', {
+NAME: folders[folderId].name
+})} -->
+{$i18n.t(`Are you sure you want to delete "{{NAME}}"?`, {
+NAME: folders[folderId].name
+})}
+</div>
+<div class="flex items-center gap-1.5">
+<input type="checkbox" bind:checked={deleteFolderContents} />
+<div class="text-xs text-gray-500">
+{$i18n.t('Delete all contents inside this folder')}
+</div>
+</div>
</DeleteConfirmDialog> </DeleteConfirmDialog>
@ -173,7 +183,7 @@
</button> </button>
</EmojiPicker> </EmojiPicker>
<div class="text-3xl"> <div class="text-3xl line-clamp-1">
{folder.name} {folder.name}
</div> </div>
</div> </div>

==== next file ====

@@ -16,8 +16,8 @@
 	deleteAllChats,
 	getAllChats,
 	getChatList,
-	importChat,
-	getPinnedChatList
+	getPinnedChatList,
+	importChats
 } from '$lib/apis/chats';
 import { getImportOrigin, convertOpenAIChats } from '$lib/utils';
 import { onMount, getContext } from 'svelte';
@@ -52,7 +52,7 @@
 				console.log('Unable to import chats:', error);
 			}
 		}

-		importChats(chats);
+		importChatsHandler(chats);
 	};

 	if (importFiles.length > 0) {
@@ -60,24 +60,34 @@
 		}
 	}

-	const importChats = async (_chats) => {
-		for (const chat of _chats) {
-			console.log(chat);
-			if (chat.chat) {
-				await importChat(
-					localStorage.token,
-					chat.chat,
-					chat.meta ?? {},
-					false,
-					null,
-					chat?.created_at ?? null,
-					chat?.updated_at ?? null
-				);
-			} else {
-				// Legacy format
-				await importChat(localStorage.token, chat, {}, false, null);
-			}
+	const importChatsHandler = async (_chats) => {
+		const res = await importChats(
+			localStorage.token,
+			_chats.map((chat) => {
+				if (chat.chat) {
+					return {
+						chat: chat.chat,
+						meta: chat.meta ?? {},
+						pinned: false,
+						folder_id: chat?.folder_id ?? null,
+						created_at: chat?.created_at ?? null,
+						updated_at: chat?.updated_at ?? null
+					};
+				} else {
+					// Legacy format
+					return {
+						chat: chat,
+						meta: {},
+						pinned: false,
+						folder_id: null,
+						created_at: chat?.created_at ?? null,
+						updated_at: chat?.updated_at ?? null
+					};
+				}
+			})
+		);
+
+		if (res) {
+			toast.success(`Successfully imported ${res.length} chats.`);
+		}
 	}

 	currentChatPage.set(1);
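Editor's note: the rewrite above replaces a loop of per-chat `importChat` calls with a single batched `importChats` request, normalizing both the current export shape (`{ chat, meta, ... }`) and legacy bare-chat exports into one payload. A sketch of just that normalization step; the `ImportItem` type is an assumption for illustration, not the project's actual typings:

	// Assumed payload shape for the batched import endpoint.
	type ImportItem = {
		chat: object;
		meta: object;
		pinned: boolean;
		folder_id: string | null;
		created_at: number | null;
		updated_at: number | null;
	};

	// Normalize a mixed list of exports (current and legacy) into ImportItem[].
	const toImportItems = (raw: any[]): ImportItem[] =>
		raw.map((entry) => ({
			chat: entry.chat ?? entry, // legacy exports are the bare chat object
			meta: entry.chat ? (entry.meta ?? {}) : {},
			pinned: false,
			folder_id: entry.chat ? (entry.folder_id ?? null) : null,
			created_at: entry?.created_at ?? null,
			updated_at: entry?.updated_at ?? null
		}));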

==== next file ====

@@ -117,13 +117,13 @@
 		<slot />
 	</DropdownMenu.Trigger>

 	<DropdownMenu.Content
-		class="max-w-full w-80 bg-gray-50 dark:bg-gray-850 rounded-lg z-9999 shadow-lg dark:text-white"
+		class="max-w-full w-80 border border-gray-100 dark:border-gray-800 bg-white dark:bg-gray-850 rounded-3xl z-9999 shadow-lg dark:text-white"
 		sideOffset={8}
 		{side}
 		{align}
 		transition={flyAndScale}
 	>
-		<div class="mb-1 px-3 pt-2 pb-2">
+		<div class="mb-1 px-4 pt-2.5 pb-2">
 			<input
 				type="text"
 				class="w-full text-sm bg-transparent outline-hidden"

==== next file ====

@@ -4,7 +4,7 @@
 	import dayjs from 'dayjs';
 	import localizedFormat from 'dayjs/plugin/localizedFormat';
-	import calendar from 'dayjs/plugin/calendar'
+	import calendar from 'dayjs/plugin/calendar';

 	dayjs.extend(localizedFormat);
 	dayjs.extend(calendar);
@@ -244,14 +244,16 @@
 	<div class="basis-2/5 flex items-center justify-end">
 		<div class="hidden sm:flex text-gray-500 dark:text-gray-400 text-xs">
-			{$i18n.t(dayjs(chat?.updated_at * 1000).calendar(null, {
-				sameDay: '[Today]',
-				nextDay: '[Tomorrow]',
-				nextWeek: 'dddd',
-				lastDay: '[Yesterday]',
-				lastWeek: '[Last] dddd',
-				sameElse: 'L' // use localized format, otherwise dayjs.calendar() defaults to DD/MM/YYYY
-			}))}
+			{$i18n.t(
+				dayjs(chat?.updated_at * 1000).calendar(null, {
+					sameDay: '[Today]',
+					nextDay: '[Tomorrow]',
+					nextWeek: 'dddd',
+					lastDay: '[Yesterday]',
+					lastWeek: '[Last] dddd',
+					sameElse: 'L' // use localized format, otherwise dayjs.calendar() defaults to DD/MM/YYYY
+				})
+			)}
 		</div>

 		<div class="flex justify-end pl-2.5 text-gray-600 dark:text-gray-300">

==== next file ====

@@ -389,14 +389,16 @@
 			</div>

 			<div class=" pl-3 shrink-0 text-gray-500 dark:text-gray-400 text-xs">
-				{$i18n.t(dayjs(chat?.updated_at * 1000).calendar(null, {
-					sameDay: '[Today]',
-					nextDay: '[Tomorrow]',
-					nextWeek: 'dddd',
-					lastDay: '[Yesterday]',
-					lastWeek: '[Last] dddd',
-					sameElse: 'L' // use localized format, otherwise dayjs.calendar() defaults to DD/MM/YYYY
-				}))}
+				{$i18n.t(
+					dayjs(chat?.updated_at * 1000).calendar(null, {
+						sameDay: '[Today]',
+						nextDay: '[Tomorrow]',
+						nextWeek: 'dddd',
+						lastDay: '[Yesterday]',
+						lastWeek: '[Last] dddd',
+						sameElse: 'L' // use localized format, otherwise dayjs.calendar() defaults to DD/MM/YYYY
+					})
+				)}
 			</div>
 		</a>
 	{/each}

==== next file ====

@@ -38,7 +38,7 @@
 	toggleChatPinnedStatusById,
 	getChatById,
 	updateChatFolderIdById,
-	importChat
+	importChats
 } from '$lib/apis/chats';
 import { createNewFolder, getFolders, updateFolderParentIdById } from '$lib/apis/folders';
 import { WEBUI_API_BASE_URL, WEBUI_BASE_URL } from '$lib/constants';
@@ -85,6 +85,10 @@
 	let newFolderId = null;

+	$: if ($selectedFolder) {
+		initFolders();
+	}
+
 	const initFolders = async () => {
 		const folderList = await getFolders(localStorage.token).catch((error) => {
 			toast.error(`${error}`);
@@ -227,15 +231,16 @@
 		for (const item of items) {
 			console.log(item);
 			if (item.chat) {
-				await importChat(
-					localStorage.token,
-					item.chat,
-					item?.meta ?? {},
-					pinned,
-					folderId,
-					item?.created_at ?? null,
-					item?.updated_at ?? null
-				);
+				await importChats(localStorage.token, [
+					{
+						chat: item.chat,
+						meta: item?.meta ?? {},
+						pinned: pinned,
+						folder_id: folderId,
+						created_at: item?.created_at ?? null,
+						updated_at: item?.updated_at ?? null
+					}
+				]);
 			}
 		}
@@ -713,7 +718,7 @@
 	bind:this={navElement}
 	id="sidebar"
 	class="h-screen max-h-[100dvh] min-h-screen select-none {$showSidebar
-		? 'bg-gray-50/70 dark:bg-gray-950/70 z-50'
+		? `${$mobile ? 'bg-gray-50 dark:bg-gray-950' : 'bg-gray-50/70 dark:bg-gray-950/70'} z-50`
 		: ' bg-transparent z-0 '} {$isApp
 		? `ml-[4.5rem] md:ml-0 `
 		: ' transition-all duration-300 '} shrink-0 text-gray-900 dark:text-gray-200 text-sm fixed top-0 left-0 overflow-x-hidden
@@ -999,15 +1004,16 @@
 			return null;
 		});

 		if (!chat && item) {
-			chat = await importChat(
-				localStorage.token,
-				item.chat,
-				item?.meta ?? {},
-				false,
-				null,
-				item?.created_at ?? null,
-				item?.updated_at ?? null
-			);
+			chat = await importChats(localStorage.token, [
+				{
+					chat: item.chat,
+					meta: item?.meta ?? {},
+					pinned: false,
+					folder_id: null,
+					created_at: item?.created_at ?? null,
+					updated_at: item?.updated_at ?? null
+				}
+			]);
 		}

 		if (chat) {
@@ -1064,15 +1070,16 @@
 			return null;
 		});

 		if (!chat && item) {
-			chat = await importChat(
-				localStorage.token,
-				item.chat,
-				item?.meta ?? {},
-				false,
-				null,
-				item?.created_at ?? null,
-				item?.updated_at ?? null
-			);
+			chat = await importChats(localStorage.token, [
+				{
+					chat: item.chat,
+					meta: item?.meta ?? {},
+					pinned: false,
+					folder_id: null,
+					created_at: item?.created_at ?? null,
+					updated_at: item?.updated_at ?? null
+				}
+			]);
 		}

 		if (chat) {

==== next file ====

@@ -3,6 +3,7 @@
 	const dispatch = createEventDispatcher();

 	import RecursiveFolder from './RecursiveFolder.svelte';
+	import { chatId, selectedFolder } from '$lib/stores';

 	export let folderRegistry = {};
@@ -27,6 +28,16 @@
 			folderRegistry[e.originFolderId]?.setFolderItems();
 		}
 	};
+
+	const loadFolderItems = () => {
+		for (const folderId of Object.keys(folders)) {
+			folderRegistry[folderId]?.setFolderItems();
+		}
+	};
+
+	$: if (folders || ($selectedFolder && $chatId)) {
+		loadFolderItems();
+	}
 </script>

 {#each folderList as folderId (folderId)}
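Editor's note: the `$:` block added above re-runs whenever any value it references changes, so `loadFolderItems` fires on a `folders` reassignment and also when the `$selectedFolder` or `$chatId` stores update. A reduced component excerpt showing the same dependency rule (store and variable names here are illustrative only):

	<!-- Sketch of a Svelte <script> block; `$:` only works inside a .svelte component. -->
	<script lang="ts">
		import { writable } from 'svelte/store';

		const selected = writable<string | null>(null);
		let folders: Record<string, unknown> = {};

		const refresh = () => {
			console.log('refreshing', Object.keys(folders).length, 'folders');
		};

		// Re-executes on init and whenever `folders` is reassigned or $selected changes.
		$: if (folders || $selected) {
			refresh();
		}
	</script>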

==== next file ====

@@ -24,8 +24,8 @@
 	getChatById,
 	getChatsByFolderId,
 	getChatListByFolderId,
-	importChat,
-	updateChatFolderIdById
+	updateChatFolderIdById,
+	importChats
 } from '$lib/apis/chats';

 import ChevronDown from '../../icons/ChevronDown.svelte';
@@ -52,6 +52,8 @@
 	export let className = '';

+	export let deleteFolderContents = true;
+
 	export let parentDragged = false;

 	export let onDelete = (e) => {};
@@ -152,15 +154,16 @@
 			return null;
 		});

 		if (!chat && item) {
-			chat = await importChat(
-				localStorage.token,
-				item.chat,
-				item?.meta ?? {},
-				false,
-				null,
-				item?.created_at ?? null,
-				item?.updated_at ?? null
-			).catch((error) => {
+			chat = await importChats(localStorage.token, [
+				{
+					chat: item.chat,
+					meta: item?.meta ?? {},
+					pinned: false,
+					folder_id: null,
+					created_at: item?.created_at ?? null,
+					updated_at: item?.updated_at ?? null
+				}
+			]).catch((error) => {
 				toast.error(`${error}`);
 				return null;
 			});
@@ -287,10 +290,12 @@
 	let showDeleteConfirm = false;

 	const deleteHandler = async () => {
-		const res = await deleteFolderById(localStorage.token, folderId).catch((error) => {
-			toast.error(`${error}`);
-			return null;
-		});
+		const res = await deleteFolderById(localStorage.token, folderId, deleteFolderContents).catch(
+			(error) => {
+				toast.error(`${error}`);
+				return null;
+			}
+		);

 		if (res) {
 			toast.success($i18n.t('Folder deleted successfully'));
@@ -369,17 +374,6 @@
 			toast.error(`${error}`);
 			return [];
 		});
-
-		if ($selectedFolder?.id === folderId) {
-			const folder = await getFolderById(localStorage.token, folderId).catch((error) => {
-				toast.error(`${error}`);
-				return null;
-			});
-
-			if (folder) {
-				await selectedFolder.set(folder);
-			}
-		}
 	} else {
 		chats = null;
 	}
@@ -429,12 +423,22 @@
 		deleteHandler();
 	}}
 >
-	<div class=" text-sm text-gray-700 dark:text-gray-300 flex-1 line-clamp-3">
-		{@html DOMPurify.sanitize(
-			$i18n.t('This will delete <strong>{{NAME}}</strong> and <strong>all its contents</strong>.', {
-				NAME: folders[folderId].name
-			})
-		)}
+	<div class=" text-sm text-gray-700 dark:text-gray-300 flex-1 line-clamp-3 mb-2">
+		<!-- {$i18n.t('This will delete <strong>{{NAME}}</strong> and <strong>all its contents</strong>.', {
+			NAME: folders[folderId].name
+		})} -->
+
+		{$i18n.t(`Are you sure you want to delete "{{NAME}}"?`, {
+			NAME: folders[folderId].name
+		})}
+	</div>
+
+	<div class="flex items-center gap-1.5">
+		<input type="checkbox" bind:checked={deleteFolderContents} />
+
+		<div class="text-xs text-gray-500">
+			{$i18n.t('Delete all contents inside this folder')}
+		</div>
 	</div>
 </DeleteConfirmDialog>
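Editor's note: both delete dialogs now thread the `deleteFolderContents` checkbox through to `deleteFolderById`, so the backend can either cascade-delete the folder's chats or keep them. The client helper presumably just forwards the flag; a sketch under that assumption (the endpoint path and query parameter name are guesses, not taken from the actual `$lib/apis/folders` module):

	// Hypothetical shape of the updated client helper; the real one lives in
	// $lib/apis/folders and may differ in path and parameter naming.
	export const deleteFolderById = async (
		token: string,
		id: string,
		deleteContents: boolean = true
	) => {
		const res = await fetch(`/api/v1/folders/${id}?delete_contents=${deleteContents}`, {
			method: 'DELETE',
			headers: { authorization: `Bearer ${token}` }
		});
		if (!res.ok) throw await res.json();
		return res.json();
	};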

==== next file ====

@@ -711,7 +711,7 @@
 	bind:show={showAccessControlModal}
 	bind:accessControl={knowledge.access_control}
 	share={$user?.permissions?.sharing?.knowledge || $user?.role === 'admin'}
 	sharePublic={$user?.permissions?.sharing?.public_knowledge || $user?.role === 'admin'}
 	onChange={() => {
 		changeDebounceHandler();
 	}}
@@ -724,7 +724,7 @@
 	<div class="w-full">
 		<input
 			type="text"
-			class="text-left w-full font-semibold text-2xl font-primary bg-transparent outline-hidden"
+			class="text-left w-full font-medium text-2xl font-primary bg-transparent outline-hidden"
 			bind:value={knowledge.name}
 			placeholder={$i18n.t('Knowledge Name')}
 			on:input={() => {

==== next file ====

@@ -57,7 +57,7 @@
 <div class="shrink-0 w-full flex justify-between items-center">
 	<div class="w-full">
 		<input
-			class="w-full text-3xl font-semibold bg-transparent outline-hidden"
+			class="w-full text-3xl font-medium bg-transparent outline-hidden"
 			type="text"
 			bind:value={name}
 			placeholder={$i18n.t('Title')}

==== next file ====

@@ -17,6 +17,7 @@
 	createNewModel,
 	deleteModelById,
 	getModelItems as getWorkspaceModels,
+	getModelTags,
 	toggleModelById,
 	updateModelById
 } from '$lib/apis/models';
@@ -42,6 +43,7 @@
 	import Eye from '../icons/Eye.svelte';
 	import ViewSelector from './common/ViewSelector.svelte';
 	import TagSelector from './common/TagSelector.svelte';
+	import Pagination from '../common/Pagination.svelte';

 	let shiftKey = false;
@@ -51,41 +53,61 @@
 	let loaded = false;

-	let models = [];
-	let tags = [];
-	let viewOption = '';
-	let selectedTag = '';
-	let filteredModels = [];
-	let selectedModel = null;
 	let showModelDeleteConfirm = false;
-	let group_ids = [];
+	let selectedModel = null;
+
+	let groupIds = [];

-	$: if (models && query !== undefined && selectedTag !== undefined && viewOption !== undefined) {
-		setFilteredModels();
-	}
+	let tags = [];
+	let selectedTag = '';

-	const setFilteredModels = async () => {
-		filteredModels = models.filter((m) => {
-			if (query === '' && selectedTag === '' && viewOption === '') return true;
-			const lowerQuery = query.toLowerCase();
-			return (
-				((m.name || '').toLowerCase().includes(lowerQuery) ||
-					(m.user?.name || '').toLowerCase().includes(lowerQuery) || // Search by user name
-					(m.user?.email || '').toLowerCase().includes(lowerQuery)) && // Search by user email
-				(selectedTag === '' ||
-					m?.meta?.tags?.some((tag) => tag.name.toLowerCase() === selectedTag.toLowerCase())) &&
-				(viewOption === '' ||
-					(viewOption === 'created' && m.user_id === $user?.id) ||
-					(viewOption === 'shared' && m.user_id !== $user?.id))
-			);
-		});
-	};
-
 	let query = '';
+	let viewOption = '';
+
+	let page = 1;
+
+	let models = null;
+	let total = null;
+
+	$: if (
+		page !== undefined &&
+		query !== undefined &&
+		selectedTag !== undefined &&
+		viewOption !== undefined
+	) {
+		getModelList();
+	}
+
+	const getModelList = async () => {
+		try {
+			const res = await getWorkspaceModels(
+				localStorage.token,
+				query,
+				viewOption,
+				selectedTag,
+				null,
+				null,
+				page
+			).catch((error) => {
+				toast.error(`${error}`);
+				return null;
+			});
+
+			if (res) {
+				models = res.items;
+				total = res.total;
+
+				// get tags
+				tags = await getModelTags(localStorage.token).catch((error) => {
+					toast.error(`${error}`);
+					return [];
+				});
+			}
+		} catch (err) {
+			console.error(err);
+		}
+	};

 	const deleteModelHandler = async (model) => {
 		const res = await deleteModelById(localStorage.token, model.id).catch((e) => {
 			toast.error(`${e}`);
@@ -94,6 +116,9 @@
 		if (res) {
 			toast.success($i18n.t(`Deleted {{name}}`, { name: model.id }));
+
+			page = 1;
+			getModelList();
 		}

 		await _models.set(
@@ -102,7 +127,6 @@
 				$config?.features?.enable_direct_connections && ($settings?.directConnections ?? null)
 			)
 		);
-		models = await getWorkspaceModels(localStorage.token);
 	};

 	const cloneModelHandler = async (model) => {
@@ -149,6 +173,9 @@
 				status: model.meta.hidden ? 'hidden' : 'visible'
 			})
 		);
+
+		page = 1;
+		getModelList();
 	}

 	await _models.set(
@@ -157,7 +184,6 @@
 			$config?.features?.enable_direct_connections && ($settings?.directConnections ?? null)
 		)
 	);
-	models = await getWorkspaceModels(localStorage.token);
 };

 const copyLinkHandler = async (model) => {
@@ -185,26 +211,15 @@
 	saveAs(blob, `${model.id}-${Date.now()}.json`);
 };

-	const setTags = () => {
-		if (models) {
-			tags = models
-				.filter((model) => !(model?.meta?.hidden ?? false))
-				.flatMap((model) => model?.meta?.tags ?? [])
-				.map((tag) => tag.name);
-
-			// Remove duplicates and sort
-			tags = Array.from(new Set(tags)).sort((a, b) => a.localeCompare(b));
-		}
-	};
-
 	onMount(async () => {
 		viewOption = localStorage.workspaceViewOption ?? '';

+		page = 1;
+		await getModelList();

-		models = await getWorkspaceModels(localStorage.token);
 		let groups = await getGroups(localStorage.token);
-		group_ids = groups.map((group) => group.id);
+		groupIds = groups.map((group) => group.id);

-		setTags();
 		loaded = true;

 		const onKeyDown = (event) => {
@@ -264,8 +279,9 @@
 			let reader = new FileReader();

 			reader.onload = async (event) => {
+				let savedModels = [];
 				try {
-					let savedModels = JSON.parse(event.target.result);
+					savedModels = JSON.parse(event.target.result);
 					console.log(savedModels);
 				} catch (e) {
 					toast.error($i18n.t('Invalid JSON file'));
@@ -276,16 +292,19 @@
 					if (model?.info ?? false) {
 						if ($_models.find((m) => m.id === model.id)) {
 							await updateModelById(localStorage.token, model.id, model.info).catch((error) => {
+								toast.error(`${error}`);
 								return null;
 							});
 						} else {
 							await createNewModel(localStorage.token, model.info).catch((error) => {
+								toast.error(`${error}`);
 								return null;
 							});
 						}
 					} else {
 						if (model?.id && model?.name) {
 							await createNewModel(localStorage.token, model).catch((error) => {
+								toast.error(`${error}`);
 								return null;
 							});
 						}
@@ -298,7 +317,9 @@
 						$config?.features?.enable_direct_connections && ($settings?.directConnections ?? null)
 					)
 				);
-				models = await getWorkspaceModels(localStorage.token);
+
+				page = 1;
+				getModelList();
 			};

 			reader.readAsText(importFiles[0]);
@@ -311,7 +332,7 @@
 			</div>

 			<div class="text-lg font-medium text-gray-500 dark:text-gray-500">
-				{filteredModels.length}
+				{total}
 			</div>
 		</div>
@@ -329,7 +350,7 @@
 				</button>
 			{/if}

-			{#if models.length && ($user?.role === 'admin' || $user?.permissions?.workspace?.models_export)}
+			{#if total && ($user?.role === 'admin' || $user?.permissions?.workspace?.models_export)}
 				<button
 					class="flex text-xs items-center space-x-1 px-3 py-1.5 rounded-xl bg-gray-50 hover:bg-gray-100 dark:bg-gray-850 dark:hover:bg-gray-800 dark:text-gray-200 transition"
 					on:click={async () => {
@@ -399,9 +420,7 @@
 					bind:value={viewOption}
 					onChange={async (value) => {
 						localStorage.workspaceViewOption = value;
 						await tick();
-
-						setTags();
 					}}
 				/>
@@ -416,9 +435,9 @@
 			</div>
 		</div>

-		{#if (filteredModels ?? []).length !== 0}
+		{#if (models ?? []).length !== 0}
 			<div class=" px-3 my-2 gap-1 lg:gap-2 grid lg:grid-cols-2" id="model-list">
-				{#each filteredModels as model (model.id)}
+				{#each models as model (model.id)}
 					<!-- svelte-ignore a11y_no_static_element_interactions -->
 					<!-- svelte-ignore a11y_click_events_have_key_events -->
 					<div
@@ -428,7 +447,7 @@
 						if (
 							$user?.role === 'admin' ||
 							model.user_id === $user?.id ||
-							model.access_control.write.group_ids.some((wg) => group_ids.includes(wg))
+							model.access_control.write.group_ids.some((wg) => groupIds.includes(wg))
 						) {
 							goto(`/workspace/models/edit?id=${encodeURIComponent(model.id)}`);
 						}
@@ -457,7 +476,7 @@
 					<div class="flex items-center justify-between w-full">
 						<Tooltip content={getTranslatedLabel(model.name, langCode)} className=" w-fit" placement="top-start">
 							<a
-								class=" font-semibold line-clamp-1 hover:underline capitalize"
+								class=" font-medium line-clamp-1 hover:underline capitalize"
 								href={`/?models=${encodeURIComponent(model.id)}`}
 							>
 								{getTranslatedLabel(model.name, langCode)}
@@ -610,6 +629,10 @@
 					</div>
 				{/each}
 			</div>
+
+			{#if total > 30}
+				<Pagination bind:page count={total} perPage={30} />
+			{/if}
 		{:else}
 			<div class=" w-full h-full flex flex-col justify-center items-center my-16 mb-24">
 				<div class="max-w-md text-center">
@@ -635,7 +658,7 @@
 					target="_blank"
 				>
 					<div class=" self-center">
-						<div class=" font-semibold line-clamp-1">{$i18n.t('Discover a model')}</div>
+						<div class=" font-medium line-clamp-1">{$i18n.t('Discover a model')}</div>
 						<div class=" text-sm line-clamp-1">
 							{$i18n.t('Discover, download, and explore model presets')}
 						</div>
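Editor's note: this file moves the workspace models page from client-side filtering (`setFilteredModels` over a fully loaded array) to server-side search and pagination. Every change to `page`, `query`, `selectedTag`, or `viewOption` re-triggers `getModelList`, which expects an `{ items, total }` envelope and feeds `total` to the `Pagination` component at 30 items per page. A sketch of that response contract; the field names come from the diff, everything else is assumed:

	// Assumed response envelope for the paginated getModelItems endpoint.
	type PagedModels<T = Record<string, unknown>> = {
		items: T[]; // the models for the requested page (up to 30)
		total: number; // total matches across all pages; drives the Pagination control
	};

	// Last page index for a given total, mirroring the `total > 30` guard in the markup.
	const pageCount = (total: number, perPage = 30): number =>
		Math.max(1, Math.ceil(total / perPage));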

==== next file ====

@@ -25,7 +25,7 @@
 {#if actions.length > 0}
 	<div>
 		<div class="flex w-full justify-between mb-1">
-			<div class=" self-center text-sm font-semibold">{$i18n.t('Actions')}</div>
+			<div class=" self-center text-sm font-medium">{$i18n.t('Actions')}</div>
 		</div>

 		<div class="flex flex-col">

==== next file ====

@@ -57,7 +57,7 @@
 <div>
 	<div class="flex w-full justify-between mb-1">
-		<div class=" self-center text-sm font-semibold">{$i18n.t('Capabilities')}</div>
+		<div class=" self-center text-sm font-medium">{$i18n.t('Capabilities')}</div>
 	</div>

 	<div class="flex items-center mt-2 flex-wrap">
 		{#each Object.keys(capabilityLabels) as capability}

==== next file ====

@@ -27,7 +27,7 @@
 <div>
 	<div class="flex w-full justify-between mb-1">
-		<div class=" self-center text-sm font-semibold">{$i18n.t('Default Features')}</div>
+		<div class=" self-center text-sm font-medium">{$i18n.t('Default Features')}</div>
 	</div>

 	<div class="flex items-center mt-2 flex-wrap">
 		{#each availableFeatures as feature}

==== next file ====

@@ -24,7 +24,7 @@
 <div>
 	<div class="flex w-full justify-between mb-1">
-		<div class=" self-center text-sm font-semibold">{$i18n.t('Default Filters')}</div>
+		<div class=" self-center text-sm font-medium">{$i18n.t('Default Filters')}</div>
 	</div>

 	<div class="flex flex-col">

==== next file ====

@@ -25,7 +25,7 @@
 {#if filters.length > 0}
 	<div>
 		<div class="flex w-full justify-between mb-1">
-			<div class=" self-center text-sm font-semibold">{$i18n.t('Filters')}</div>
+			<div class=" self-center text-sm font-medium">{$i18n.t('Filters')}</div>
 		</div>

 		<!-- TODO: Filer order matters -->

==== next file ====

@@ -157,7 +157,7 @@
 	<slot name="label">
 		<div class="mb-2">
 			<div class="flex w-full justify-between mb-1">
-				<div class=" self-center text-sm font-semibold">
+				<div class=" self-center text-sm font-medium">
 					{$i18n.t('Knowledge')}
 				</div>
 			</div>

==== next file ====

@@ -23,6 +23,7 @@
 	import PencilSolid from '$lib/components/icons/PencilSolid.svelte';
 	import DefaultFiltersSelector from './DefaultFiltersSelector.svelte';
 	import DefaultFeatures from './DefaultFeatures.svelte';
+	import PromptSuggestions from './PromptSuggestions.svelte';

 	const i18n = getContext('i18n');
@@ -637,7 +638,7 @@
 	<div class="flex-1">
 		<div class="flex items-center">
 			<input
-				class="text-3xl font-semibold w-full bg-transparent outline-hidden"
+				class="text-3xl font-medium w-full bg-transparent outline-hidden"
 				placeholder={$i18n.t('Model Name')}
 				bind:value={titleTranslations[langCode]}
 				required
@@ -668,7 +669,7 @@
 	{#if preset}
 		<div class="my-1">
-			<div class=" text-sm font-semibold mb-1">{$i18n.t('Base Model (From)')}</div>
+			<div class=" text-sm font-medium mb-1">{$i18n.t('Base Model (From)')}</div>

 			<div>
 				<select
@@ -693,7 +694,7 @@
 	<div class="my-1">
 		<div class="mb-1 flex w-full justify-between items-center">
-			<div class=" self-center text-sm font-semibold">{$i18n.t('Description')}</div>
+			<div class=" self-center text-sm font-medium">{$i18n.t('Description')}</div>

 			<button
 				class="p-1 text-xs flex rounded-sm transition"
@@ -758,12 +759,12 @@
 	<div class="my-2">
 		<div class="flex w-full justify-between">
-			<div class=" self-center text-sm font-semibold">{$i18n.t('Model Params')}</div>
+			<div class=" self-center text-sm font-medium">{$i18n.t('Model Params')}</div>
 		</div>

 		<div class="mt-2">
 			<div class="my-1">
-				<div class=" text-xs font-semibold mb-2">{$i18n.t('System Prompt')}</div>
+				<div class=" text-xs font-medium mb-2">{$i18n.t('System Prompt')}</div>
 				<div>
 					<Textarea
 						className=" text-sm w-full bg-transparent outline-hidden resize-none overflow-y-hidden "
@@ -777,7 +778,7 @@
 			</div>

 			<div class="flex w-full justify-between">
-				<div class=" self-center text-xs font-semibold">
+				<div class=" self-center text-xs font-medium">
 					{$i18n.t('Advanced Params')}
 				</div>
@@ -804,13 +805,13 @@
 			</div>
 		</div>

-		<hr class=" border-gray-100 dark:border-gray-850 my-1" />
+		<hr class=" border-gray-100 dark:border-gray-850 my-2" />

 		<div class="my-2">
 			<div class="flex w-full justify-between items-center">
 				<div class="flex w-full justify-between items-center">
-					<div class=" self-center text-sm font-semibold">
-						{$i18n.t('Prompt suggestions')}
+					<div class=" self-center text-sm font-medium">
+						{$i18n.t('Prompts')}
 					</div>

 					<button
@@ -818,7 +819,7 @@
 						type="button"
 						on:click={() => {
 							if ((info?.meta?.suggestion_prompts ?? null) === null) {
-								info.meta.suggestion_prompts = [{ content: JSON.stringify(createEmptyTranslations()) }];
+								info.meta.suggestion_prompts = [{ content: JSON.stringify(createEmptyTranslations()), title: ['', ''] }];
 							} else {
 								info.meta.suggestion_prompts = null;
 							}
@@ -836,6 +837,7 @@
 						<button
 							class="p-1 px-2 text-xs flex rounded-sm transition"
 							type="button"
+							aria-label={$i18n.t('Add prompt suggestion')}
 							on:click={() => {
 								if (
 									info.meta.suggestion_prompts.length === 0 ||
@@ -843,7 +845,7 @@
 								) {
 									info.meta.suggestion_prompts = [
 										...info.meta.suggestion_prompts,
-										{ content: JSON.stringify(createEmptyTranslations()) }
+										{ content: JSON.stringify(createEmptyTranslations()), title: ['', ''] }
 									];
 								}
 							}}
@@ -971,7 +973,7 @@
 	<div class="my-2 text-gray-300 dark:text-gray-700">
 		<div class="flex w-full justify-between mb-2">
-			<div class=" self-center text-sm font-semibold">{$i18n.t('JSON Preview')}</div>
+			<div class=" self-center text-sm font-medium">{$i18n.t('JSON Preview')}</div>

 			<button
 				class="p-1 px-3 text-xs flex rounded-sm transition"

==== next file ====

@@ -0,0 +1,218 @@
<script lang="ts">
import { getContext } from 'svelte';
import { saveAs } from 'file-saver';
import { toast } from 'svelte-sonner';
const i18n = getContext('i18n');
export let promptSuggestions = [];
let _promptSuggestions = [];
const setPromptSuggestions = () => {
_promptSuggestions = promptSuggestions.map((s) => {
if (typeof s.title === 'string') {
s.title = [s.title, ''];
} else if (!Array.isArray(s.title)) {
s.title = ['', ''];
}
return s;
});
};
$: if (promptSuggestions) {
setPromptSuggestions();
}
</script>
<div class=" space-y-3">
<div class="flex w-full justify-between mb-2">
<div class=" self-center text-xs">
{$i18n.t('Default Prompt Suggestions')}
</div>
<button
class="p-1 px-3 text-xs flex rounded-sm transition"
type="button"
on:click={() => {
if (promptSuggestions.length === 0 || promptSuggestions.at(-1).content !== '') {
promptSuggestions = [...promptSuggestions, { content: '', title: ['', ''] }];
}
}}
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 20 20"
fill="currentColor"
class="w-4 h-4"
>
<path
d="M10.75 4.75a.75.75 0 00-1.5 0v4.5h-4.5a.75.75 0 000 1.5h4.5v4.5a.75.75 0 001.5 0v-4.5h4.5a.75.75 0 000-1.5h-4.5v-4.5z"
/>
</svg>
</button>
</div>
{#if _promptSuggestions.length > 0}
<div class="grid lg:grid-cols-2 flex-col gap-2">
{#each _promptSuggestions as prompt, promptIdx}
<div
class=" flex border rounded-3xl border-gray-100 dark:border-gray-800 dark:bg-gray-850 py-1.5"
>
<div class="flex flex-col flex-1 pl-1">
<div class="py-1 gap-1">
<input
class="px-3 text-sm font-medium w-full bg-transparent outline-hidden"
placeholder={$i18n.t('Title (e.g. Tell me a fun fact)')}
bind:value={prompt.title[0]}
/>
<input
class="px-3 text-xs w-full bg-transparent outline-hidden text-gray-600 dark:text-gray-400"
placeholder={$i18n.t('Subtitle (e.g. about the Roman Empire)')}
bind:value={prompt.title[1]}
/>
</div>
<hr class="border-gray-50 dark:border-gray-850 my-0.5" />
<textarea
class="px-3 py-1.5 text-xs w-full bg-transparent outline-hidden resize-none"
placeholder={$i18n.t('Prompt (e.g. Tell me a fun fact about the Roman Empire)')}
rows="4"
bind:value={prompt.content}
/>
</div>
<div class="">
<button
class="p-3"
type="button"
on:click={() => {
promptSuggestions.splice(promptIdx, 1);
promptSuggestions = promptSuggestions;
}}
>
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 20 20"
fill="currentColor"
class="w-4 h-4"
>
<path
d="M6.28 5.22a.75.75 0 00-1.06 1.06L8.94 10l-3.72 3.72a.75.75 0 101.06 1.06L10 11.06l3.72 3.72a.75.75 0 101.06-1.06L11.06 10l3.72-3.72a.75.75 0 00-1.06-1.06L10 8.94 6.28 5.22z"
/>
</svg>
</button>
</div>
</div>
{/each}
</div>
{:else}
<div class="text-xs text-center w-full py-2">{$i18n.t('No suggestion prompts')}</div>
{/if}
<div class="flex items-center justify-end space-x-2 mt-2">
<input
id="prompt-suggestions-import-input"
type="file"
accept=".json"
hidden
on:change={(e) => {
const files = e.target.files;
if (!files || files.length === 0) {
return;
}
console.log(files);
let reader = new FileReader();
reader.onload = async (event) => {
try {
let suggestions = JSON.parse(event.target.result);
suggestions = suggestions.map((s) => {
if (typeof s.title === 'string') {
s.title = [s.title, ''];
} else if (!Array.isArray(s.title)) {
s.title = ['', ''];
}
return s;
});
promptSuggestions = [...promptSuggestions, ...suggestions];
} catch (error) {
toast.error($i18n.t('Invalid JSON file'));
return;
}
};
reader.readAsText(files[0]);
e.target.value = ''; // Reset the input value
}}
/>
<button
class="flex text-xs items-center space-x-1 px-3 py-1.5 rounded-xl bg-gray-50 hover:bg-gray-100 dark:bg-gray-800 dark:hover:bg-gray-700 dark:text-gray-200 transition"
type="button"
on:click={() => {
const input = document.getElementById('prompt-suggestions-import-input');
if (input) {
input.click();
}
}}
>
<div class=" self-center mr-2 font-medium line-clamp-1">
{$i18n.t('Import Prompt Suggestions')}
</div>
<div class=" self-center">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
fill="currentColor"
class="w-3.5 h-3.5"
>
<path
fill-rule="evenodd"
d="M4 2a1.5 1.5 0 0 0-1.5 1.5v9A1.5 1.5 0 0 0 4 14h8a1.5 1.5 0 0 0 1.5-1.5V6.621a1.5 1.5 0 0 0-.44-1.06L9.94 2.439A1.5 1.5 0 0 0 8.878 2H4Zm4 9.5a.75.75 0 0 1-.75-.75V8.06l-.72.72a.75.75 0 0 1-1.06-1.06l2-2a.75.75 0 0 1 1.06 0l2 2a.75.75 0 1 1-1.06 1.06l-.72-.72v2.69a.75.75 0 0 1-.75.75Z"
clip-rule="evenodd"
/>
</svg>
</div>
</button>
{#if promptSuggestions.length}
<button
class="flex text-xs items-center space-x-1 px-3 py-1.5 rounded-xl bg-gray-50 hover:bg-gray-100 dark:bg-gray-800 dark:hover:bg-gray-700 dark:text-gray-200 transition"
type="button"
on:click={async () => {
let blob = new Blob([JSON.stringify(promptSuggestions)], {
type: 'application/json'
});
saveAs(blob, `prompt-suggestions-export-${Date.now()}.json`);
}}
>
<div class=" self-center mr-2 font-medium line-clamp-1">
{$i18n.t('Export Prompt Suggestions')} ({promptSuggestions.length})
</div>
<div class=" self-center">
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16"
fill="currentColor"
class="w-3.5 h-3.5"
>
<path
fill-rule="evenodd"
d="M4 2a1.5 1.5 0 0 0-1.5 1.5v9A1.5 1.5 0 0 0 4 14h8a1.5 1.5 0 0 0 1.5-1.5V6.621a1.5 1.5 0 0 0-.44-1.06L9.94 2.439A1.5 1.5 0 0 0 8.878 2H4Zm4 3.5a.75.75 0 0 1 .75.75v2.69l.72-.72a.75.75 0 1 1 1.06 1.06l-2 2a.75.75 0 0 1-1.06 0l-2-2a.75.75 0 0 1 1.06-1.06l.72.72V6.25A.75.75 0 0 1 8 5.5Z"
clip-rule="evenodd"
/>
</svg>
</div>
</button>
{/if}
</div>
</div>
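Editor's note: the new component above normalizes `title` into a `[title, subtitle]` pair in both `setPromptSuggestions` and the JSON import path, so suggestions saved by older builds (where `title` was a plain string, or absent) keep rendering. The same normalization in isolation, with an assumed `Suggestion` type for illustration:

	// Normalize legacy suggestion titles: plain string -> [title, subtitle] tuple.
	type Suggestion = { content: string; title?: string | [string, string] };

	const normalizeTitle = (s: Suggestion): { content: string; title: [string, string] } => {
		if (typeof s.title === 'string') return { ...s, title: [s.title, ''] };
		if (!Array.isArray(s.title)) return { ...s, title: ['', ''] };
		return { ...s, title: s.title };
	};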

==== next file ====

@@ -25,7 +25,7 @@
 <div>
 	<div class="flex w-full justify-between mb-1">
-		<div class=" self-center text-sm font-semibold">{$i18n.t('Tools')}</div>
+		<div class=" self-center text-sm font-medium">{$i18n.t('Tools')}</div>
 	</div>

 	<div class=" text-xs dark:text-gray-500">

==== next file ====

@@ -172,7 +172,7 @@
 	}}
 >
 	<div class=" text-sm text-gray-500 truncate">
-		{$i18n.t('This will delete')} <span class=" font-semibold">{deletePrompt.command}</span>.
+		{$i18n.t('This will delete')} <span class=" font-medium">{deletePrompt.command}</span>.
 	</div>
 </DeleteConfirmDialog>
@@ -417,7 +417,7 @@
 			target="_blank"
 		>
 			<div class=" self-center">
-				<div class=" font-semibold line-clamp-1">{$i18n.t('Discover a prompt')}</div>
+				<div class=" font-medium line-clamp-1">{$i18n.t('Discover a prompt')}</div>
 				<div class=" text-sm line-clamp-1">
 					{$i18n.t('Discover, download, and explore custom prompts')}
 				</div>

==== next file ====

@@ -147,7 +147,7 @@
 	<div class="my-2">
 		<div class="flex w-full justify-between">
-			<div class=" self-center text-sm font-semibold">{$i18n.t('Prompt Content')}</div>
+			<div class=" self-center text-sm font-medium">{$i18n.t('Prompt Content')}</div>
 		</div>

 		<div class="mt-2">

==== next file ====

@@ -513,7 +513,7 @@
 			target="_blank"
 		>
 			<div class=" self-center">
-				<div class=" font-semibold line-clamp-1">{$i18n.t('Discover a tool')}</div>
+				<div class=" font-medium line-clamp-1">{$i18n.t('Discover a tool')}</div>
 				<div class=" text-sm line-clamp-1">
 					{$i18n.t('Discover, download, and explore custom tools')}
 				</div>
@@ -536,7 +536,7 @@
 	}}
 >
 	<div class=" text-sm text-gray-500 truncate">
-		{$i18n.t('This will delete')} <span class=" font-semibold">{selectedTool.name}</span>.
+		{$i18n.t('This will delete')} <span class=" font-medium">{selectedTool.name}</span>.
 	</div>
 </DeleteConfirmDialog>

==== next file ====

@@ -63,7 +63,7 @@
 <div class=" rounded-lg flex flex-col gap-2">
 	<div class="">
-		<div class=" text-sm font-semibold mb-1.5">{$i18n.t('Visibility')}</div>
+		<div class=" text-sm font-medium mb-1.5">{$i18n.t('Visibility')}</div>

 		<div class="flex gap-2.5 items-center mb-1">
 			<div>
@@ -150,7 +150,7 @@
 <div>
 	<div class="">
 		<div class="flex justify-between mb-1.5">
-			<div class="text-sm font-semibold">
+			<div class="text-sm font-medium">
 				{$i18n.t('Groups')}
 			</div>
 		</div>

==== next file ====

@@ -27,10 +27,15 @@
 		class="relative w-full flex items-center gap-0.5 px-2.5 py-1.5 rounded-xl "
 		aria-label={placeholder}
 	>
-		<Select.Value
-			class="inline-flex h-input px-0.5 w-full outline-hidden bg-transparent truncate placeholder-gray-400 focus:outline-hidden capitalize"
-			{placeholder}
-		/>
+		<div
+			class="inline-flex h-input px-0.5 w-full outline-hidden bg-transparent truncate placeholder-gray-400 focus:outline-hidden capitalize"
+		>
+			{#if value}
+				{value}
+			{:else}
+				{placeholder}
+			{/if}
+		</div>

 		{#if value}
 			<button

==== next file ====

@@ -101,6 +101,6 @@ import 'dayjs/locale/yo';
 import 'dayjs/locale/zh';
 import 'dayjs/locale/zh-tw';
 import 'dayjs/locale/et';
-import 'dayjs/locale/en-gb'
+import 'dayjs/locale/en-gb';

 export default dayjs;

==== next file ====

@@ -95,6 +95,7 @@
 	"Allow Continue Response": "",
 	"Allow Delete Messages": "",
 	"Allow File Upload": "",
+	"Allow Group Sharing": "",
 	"Allow Multiple Models in Chat": "",
 	"Allow non-local voices": "",
 	"Allow Rate Response": "",
@@ -144,6 +145,7 @@
 	"Archived Chats": "الأرشيف المحادثات",
 	"archived-chat-export": "",
 	"Are you sure you want to clear all memories? This action cannot be undone.": "",
+	"Are you sure you want to delete \"{{NAME}}\"?": "",
 	"Are you sure you want to delete this channel?": "",
 	"Are you sure you want to delete this message?": "",
 	"Are you sure you want to unarchive all archived chats?": "",
@@ -388,12 +390,14 @@
 	"Default description enabled": "",
 	"Default Features": "",
 	"Default Filters": "",
+	"Default Group": "",
 	"Default mode works with a wider range of models by calling tools once before execution. Native mode leverages the model's built-in tool-calling capabilities, but requires the model to inherently support this feature.": "",
 	"Default Model": "النموذج الافتراضي",
 	"Default model updated": "الإفتراضي تحديث الموديل",
 	"Default Models": "",
 	"Default permissions": "",
 	"Default permissions updated successfully": "",
+	"Default Pinned Models": "",
 	"Default Prompt Suggestions": "الإفتراضي Prompt الاقتراحات",
 	"Default to 389 or 636 if TLS is enabled": "",
 	"Default to ALL": "",
@@ -402,6 +406,7 @@
 	"Delete": "حذف",
 	"Delete a model": "حذف الموديل",
 	"Delete All Chats": "حذف جميع الدردشات",
+	"Delete all contents inside this folder": "",
 	"Delete All Models": "",
 	"Delete Chat": "حذف المحادثه.",
 	"Delete chat?": "",
@@ -730,6 +735,7 @@
 	"Features": "",
 	"Features Permissions": "",
 	"February": "فبراير",
+	"Feedback deleted successfully": "",
 	"Feedback Details": "",
 	"Feedback History": "",
 	"Feedbacks": "",
@@ -744,6 +750,7 @@
 	"File size should not exceed {{maxSize}} MB.": "",
 	"File Upload": "",
 	"File uploaded successfully": "",
+	"File uploaded!": "",
 	"Files": "",
 	"Filter": "",
 	"Filter is now globally disabled": "",
@@ -857,6 +864,7 @@
 	"Image Compression": "",
 	"Image Compression Height": "",
 	"Image Compression Width": "",
+	"Image Edit": "",
 	"Image Edit Engine": "",
 	"Image Generation": "",
 	"Image Generation Engine": "محرك توليد الصور",
@@ -941,6 +949,7 @@
 	"Knowledge Name": "",
 	"Knowledge Public Sharing": "",
 	"Knowledge reset successfully.": "",
+	"Knowledge Sharing": "",
 	"Knowledge updated successfully": "",
 	"Kokoro.js (Browser)": "",
 	"Kokoro.js Dtype": "",
@@ -1006,6 +1015,7 @@
 	"Max Upload Size": "",
 	"Maximum of 3 models can be downloaded simultaneously. Please try again later.": "يمكن تنزيل 3 نماذج كحد أقصى في وقت واحد. الرجاء معاودة المحاولة في وقت لاحق.",
 	"May": "مايو",
+	"MBR": "",
 	"MCP": "",
 	"MCP support is experimental and its specification changes often, which can lead to incompatibilities. OpenAPI specification support is directly maintained by the Open WebUI team, making it the more reliable option for compatibility.": "",
 	"Medium": "",
@@ -1062,6 +1072,7 @@
 	"Models configuration saved successfully": "",
 	"Models imported successfully": "",
 	"Models Public Sharing": "",
+	"Models Sharing": "",
 	"Mojeek Search API Key": "",
 	"More": "المزيد",
 	"More Concise": "",
@@ -1128,6 +1139,7 @@
 	"Note: If you set a minimum score, the search will only return documents with a score greater than or equal to the minimum score.": "ملاحظة: إذا قمت بتعيين الحد الأدنى من النقاط، فلن يؤدي البحث إلا إلى إرجاع المستندات التي لها نقاط أكبر من أو تساوي الحد الأدنى من النقاط.",
 	"Notes": "",
 	"Notes Public Sharing": "",
+	"Notes Sharing": "",
 	"Notification Sound": "",
 	"Notification Webhook": "",
 	"Notifications": "إشعارات",
@@ -1269,11 +1281,11 @@
 	"Prompt Autocompletion": "",
 	"Prompt Content": "محتوى عاجل",
 	"Prompt created successfully": "",
-	"Prompt suggestions": "اقتراحات سريعة",
 	"Prompt updated successfully": "",
 	"Prompts": "مطالبات",
 	"Prompts Access": "",
 	"Prompts Public Sharing": "",
+	"Prompts Sharing": "",
 	"Provider Type": "",
 	"Public": "",
 	"Pull \"{{searchValue}}\" from Ollama.com": "Ollama.com \"{{searchValue}}\" أسحب من ",
@@ -1461,6 +1473,7 @@
 	"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
 	"Sets the size of the context window used to generate the next token.": "",
 	"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "",
+	"Setting": "",
 	"Settings": "الاعدادات",
 	"Settings saved successfully!": "تم حفظ الاعدادات بنجاح",
 	"Share": "كشاركة",
@@ -1641,6 +1654,7 @@
 	"Tools Function Calling Prompt": "",
 	"Tools have a function calling system that allows arbitrary code execution.": "",
 	"Tools Public Sharing": "",
+	"Tools Sharing": "",
 	"Top K": "Top K",
 	"Top K Reranker": "",
 	"Transformers": "",
@@ -1688,6 +1702,7 @@
 	"Upload Pipeline": "",
 	"Upload Progress": "جاري التحميل",
 	"Upload Progress: {{uploadedFiles}}/{{totalFiles}} ({{percentage}}%)": "",
+	"Uploading file...": "",
 	"URL": "",
 	"URL is required": "",
 	"URL Mode": "رابط الموديل",
@@ -1765,7 +1780,6 @@
 	"Workspace": "مساحة العمل",
 	"Workspace Permissions": "",
 	"Write": "",
-	"Write a prompt suggestion (e.g. Who are you?)": "اكتب اقتراحًا سريعًا (على سبيل المثال، من أنت؟)",
 	"Write a summary in 50 words that summarizes {{topic}}.": "اكتب ملخصًا في 50 كلمة يلخص [الموضوع أو الكلمة الرئيسية]",
 	"Write something...": "",
 	"Write your model system prompt content here\ne.g.) You are Mario from Super Mario Bros, acting as an assistant.": "اكتب محتوى مطالبة النظام (system prompt) لنموذجك هنا\nعلى سبيل المثال: أنت ماريو من Super Mario Bros وتتصرف كمساعد.",

==== next file ====

@ -95,6 +95,7 @@
"Allow Continue Response": "", "Allow Continue Response": "",
"Allow Delete Messages": "", "Allow Delete Messages": "",
"Allow File Upload": "السماح بتحميل الملفات", "Allow File Upload": "السماح بتحميل الملفات",
"Allow Group Sharing": "",
"Allow Multiple Models in Chat": "", "Allow Multiple Models in Chat": "",
"Allow non-local voices": "السماح بالأصوات غير المحلية", "Allow non-local voices": "السماح بالأصوات غير المحلية",
"Allow Rate Response": "", "Allow Rate Response": "",
@ -144,6 +145,7 @@
"Archived Chats": "المحادثات المؤرشفة", "Archived Chats": "المحادثات المؤرشفة",
"archived-chat-export": "تصدير المحادثات المؤرشفة", "archived-chat-export": "تصدير المحادثات المؤرشفة",
"Are you sure you want to clear all memories? This action cannot be undone.": "هل أنت متأكد من رغبتك في مسح جميع الذكريات؟ لا يمكن التراجع عن هذا الإجراء.", "Are you sure you want to clear all memories? This action cannot be undone.": "هل أنت متأكد من رغبتك في مسح جميع الذكريات؟ لا يمكن التراجع عن هذا الإجراء.",
"Are you sure you want to delete \"{{NAME}}\"?": "",
"Are you sure you want to delete this channel?": "هل أنت متأكد من رغبتك في حذف هذه القناة؟", "Are you sure you want to delete this channel?": "هل أنت متأكد من رغبتك في حذف هذه القناة؟",
"Are you sure you want to delete this message?": "هل أنت متأكد من رغبتك في حذف هذه الرسالة؟", "Are you sure you want to delete this message?": "هل أنت متأكد من رغبتك في حذف هذه الرسالة؟",
"Are you sure you want to unarchive all archived chats?": "هل أنت متأكد من رغبتك في إلغاء أرشفة جميع المحادثات المؤرشفة؟", "Are you sure you want to unarchive all archived chats?": "هل أنت متأكد من رغبتك في إلغاء أرشفة جميع المحادثات المؤرشفة؟",
@ -388,12 +390,14 @@
"Default description enabled": "", "Default description enabled": "",
"Default Features": "", "Default Features": "",
"Default Filters": "", "Default Filters": "",
"Default Group": "",
"Default mode works with a wider range of models by calling tools once before execution. Native mode leverages the model's built-in tool-calling capabilities, but requires the model to inherently support this feature.": "الوضع الافتراضي يعمل مع مجموعة أوسع من النماذج من خلال استدعاء الأدوات مرة واحدة قبل التنفيذ. أما الوضع الأصلي فيستخدم قدرات استدعاء الأدوات المدمجة في النموذج، لكنه يتطلب دعمًا داخليًا لهذه الميزة.", "Default mode works with a wider range of models by calling tools once before execution. Native mode leverages the model's built-in tool-calling capabilities, but requires the model to inherently support this feature.": "الوضع الافتراضي يعمل مع مجموعة أوسع من النماذج من خلال استدعاء الأدوات مرة واحدة قبل التنفيذ. أما الوضع الأصلي فيستخدم قدرات استدعاء الأدوات المدمجة في النموذج، لكنه يتطلب دعمًا داخليًا لهذه الميزة.",
"Default Model": "النموذج الافتراضي", "Default Model": "النموذج الافتراضي",
"Default model updated": "الإفتراضي تحديث الموديل", "Default model updated": "الإفتراضي تحديث الموديل",
"Default Models": "النماذج الافتراضية", "Default Models": "النماذج الافتراضية",
"Default permissions": "الأذونات الافتراضية", "Default permissions": "الأذونات الافتراضية",
"Default permissions updated successfully": "تم تحديث الأذونات الافتراضية بنجاح", "Default permissions updated successfully": "تم تحديث الأذونات الافتراضية بنجاح",
"Default Pinned Models": "",
"Default Prompt Suggestions": "الإفتراضي Prompt الاقتراحات", "Default Prompt Suggestions": "الإفتراضي Prompt الاقتراحات",
"Default to 389 or 636 if TLS is enabled": "الافتراضي هو 389 أو 636 إذا تم تمكين TLS", "Default to 389 or 636 if TLS is enabled": "الافتراضي هو 389 أو 636 إذا تم تمكين TLS",
"Default to ALL": "الافتراضي هو الكل", "Default to ALL": "الافتراضي هو الكل",
@ -402,6 +406,7 @@
"Delete": "حذف", "Delete": "حذف",
"Delete a model": "حذف الموديل", "Delete a model": "حذف الموديل",
"Delete All Chats": "حذف جميع الدردشات", "Delete All Chats": "حذف جميع الدردشات",
"Delete all contents inside this folder": "",
"Delete All Models": "حذف جميع النماذج", "Delete All Models": "حذف جميع النماذج",
"Delete Chat": "حذف المحادثه.", "Delete Chat": "حذف المحادثه.",
"Delete chat?": "هل تريد حذف المحادثة؟", "Delete chat?": "هل تريد حذف المحادثة؟",
@ -730,6 +735,7 @@
"Features": "الميزات", "Features": "الميزات",
"Features Permissions": "أذونات الميزات", "Features Permissions": "أذونات الميزات",
"February": "فبراير", "February": "فبراير",
"Feedback deleted successfully": "",
"Feedback Details": "", "Feedback Details": "",
"Feedback History": "سجل الملاحظات", "Feedback History": "سجل الملاحظات",
"Feedbacks": "الملاحظات", "Feedbacks": "الملاحظات",
@ -744,6 +750,7 @@
"File size should not exceed {{maxSize}} MB.": "يجب ألا يتجاوز حجم الملف {{maxSize}} ميغابايت.", "File size should not exceed {{maxSize}} MB.": "يجب ألا يتجاوز حجم الملف {{maxSize}} ميغابايت.",
"File Upload": "", "File Upload": "",
"File uploaded successfully": "تم رفع الملف بنجاح", "File uploaded successfully": "تم رفع الملف بنجاح",
"File uploaded!": "",
"Files": "الملفات", "Files": "الملفات",
"Filter": "", "Filter": "",
"Filter is now globally disabled": "تم الآن تعطيل الفلتر على مستوى النظام", "Filter is now globally disabled": "تم الآن تعطيل الفلتر على مستوى النظام",
@ -857,6 +864,7 @@
"Image Compression": "ضغط الصور", "Image Compression": "ضغط الصور",
"Image Compression Height": "", "Image Compression Height": "",
"Image Compression Width": "", "Image Compression Width": "",
"Image Edit": "",
"Image Edit Engine": "", "Image Edit Engine": "",
"Image Generation": "توليد الصور", "Image Generation": "توليد الصور",
"Image Generation Engine": "محرك توليد الصور", "Image Generation Engine": "محرك توليد الصور",
@ -941,6 +949,7 @@
"Knowledge Name": "", "Knowledge Name": "",
"Knowledge Public Sharing": "", "Knowledge Public Sharing": "",
"Knowledge reset successfully.": "تم إعادة تعيين المعرفة بنجاح.", "Knowledge reset successfully.": "تم إعادة تعيين المعرفة بنجاح.",
"Knowledge Sharing": "",
"Knowledge updated successfully": "تم تحديث المعرفة بنجاح", "Knowledge updated successfully": "تم تحديث المعرفة بنجاح",
"Kokoro.js (Browser)": "Kokoro.js (المتصفح)", "Kokoro.js (Browser)": "Kokoro.js (المتصفح)",
"Kokoro.js Dtype": "نوع بيانات Kokoro.js", "Kokoro.js Dtype": "نوع بيانات Kokoro.js",
@ -1006,6 +1015,7 @@
"Max Upload Size": "الحد الأقصى لحجم الملف المرفوع", "Max Upload Size": "الحد الأقصى لحجم الملف المرفوع",
"Maximum of 3 models can be downloaded simultaneously. Please try again later.": "يمكن تنزيل 3 نماذج كحد أقصى في وقت واحد. الرجاء معاودة المحاولة في وقت لاحق.", "Maximum of 3 models can be downloaded simultaneously. Please try again later.": "يمكن تنزيل 3 نماذج كحد أقصى في وقت واحد. الرجاء معاودة المحاولة في وقت لاحق.",
"May": "مايو", "May": "مايو",
"MBR": "",
"MCP": "", "MCP": "",
"MCP support is experimental and its specification changes often, which can lead to incompatibilities. OpenAPI specification support is directly maintained by the Open WebUI team, making it the more reliable option for compatibility.": "", "MCP support is experimental and its specification changes often, which can lead to incompatibilities. OpenAPI specification support is directly maintained by the Open WebUI team, making it the more reliable option for compatibility.": "",
"Medium": "", "Medium": "",
@ -1062,6 +1072,7 @@
"Models configuration saved successfully": "تم حفظ إعدادات النماذج بنجاح", "Models configuration saved successfully": "تم حفظ إعدادات النماذج بنجاح",
"Models imported successfully": "", "Models imported successfully": "",
"Models Public Sharing": "", "Models Public Sharing": "",
"Models Sharing": "",
"Mojeek Search API Key": "مفتاح API لـ Mojeek Search", "Mojeek Search API Key": "مفتاح API لـ Mojeek Search",
"More": "المزيد", "More": "المزيد",
"More Concise": "", "More Concise": "",
@ -1128,6 +1139,7 @@
"Note: If you set a minimum score, the search will only return documents with a score greater than or equal to the minimum score.": "ملاحظة: إذا قمت بتعيين الحد الأدنى من النقاط، فلن يؤدي البحث إلا إلى إرجاع المستندات التي لها نقاط أكبر من أو تساوي الحد الأدنى من النقاط.", "Note: If you set a minimum score, the search will only return documents with a score greater than or equal to the minimum score.": "ملاحظة: إذا قمت بتعيين الحد الأدنى من النقاط، فلن يؤدي البحث إلا إلى إرجاع المستندات التي لها نقاط أكبر من أو تساوي الحد الأدنى من النقاط.",
"Notes": "ملاحظات", "Notes": "ملاحظات",
"Notes Public Sharing": "", "Notes Public Sharing": "",
"Notes Sharing": "",
"Notification Sound": "صوت الإشعارات", "Notification Sound": "صوت الإشعارات",
"Notification Webhook": "رابط Webhook للإشعارات", "Notification Webhook": "رابط Webhook للإشعارات",
"Notifications": "إشعارات", "Notifications": "إشعارات",
@ -1269,11 +1281,11 @@
"Prompt Autocompletion": "", "Prompt Autocompletion": "",
"Prompt Content": "محتوى عاجل", "Prompt Content": "محتوى عاجل",
"Prompt created successfully": "تم إنشاء التوجيه بنجاح", "Prompt created successfully": "تم إنشاء التوجيه بنجاح",
"Prompt suggestions": "اقتراحات سريعة",
"Prompt updated successfully": "تم تحديث التوجيه بنجاح", "Prompt updated successfully": "تم تحديث التوجيه بنجاح",
"Prompts": "مطالبات", "Prompts": "مطالبات",
"Prompts Access": "الوصول إلى التوجيهات", "Prompts Access": "الوصول إلى التوجيهات",
"Prompts Public Sharing": "", "Prompts Public Sharing": "",
"Prompts Sharing": "",
"Provider Type": "", "Provider Type": "",
"Public": "", "Public": "",
"Pull \"{{searchValue}}\" from Ollama.com": "Ollama.com \"{{searchValue}}\" أسحب من ", "Pull \"{{searchValue}}\" from Ollama.com": "Ollama.com \"{{searchValue}}\" أسحب من ",
@ -1461,6 +1473,7 @@
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "يحدد البذرة العشوائية لاستخدامها في التوليد. تعيين قيمة معينة يجعل النموذج ينتج نفس النص لنفس التوجيه.", "Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "يحدد البذرة العشوائية لاستخدامها في التوليد. تعيين قيمة معينة يجعل النموذج ينتج نفس النص لنفس التوجيه.",
"Sets the size of the context window used to generate the next token.": "يحدد حجم نافذة السياق المستخدمة لتوليد الرمز التالي.", "Sets the size of the context window used to generate the next token.": "يحدد حجم نافذة السياق المستخدمة لتوليد الرمز التالي.",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "يحدد تسلسلات الإيقاف. عند مواجهتها، سيتوقف النموذج عن التوليد ويُرجع النتيجة. يمكن تحديد أنماط توقف متعددة داخل ملف النموذج.", "Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "يحدد تسلسلات الإيقاف. عند مواجهتها، سيتوقف النموذج عن التوليد ويُرجع النتيجة. يمكن تحديد أنماط توقف متعددة داخل ملف النموذج.",
"Setting": "",
"Settings": "الاعدادات", "Settings": "الاعدادات",
"Settings saved successfully!": "تم حفظ الاعدادات بنجاح", "Settings saved successfully!": "تم حفظ الاعدادات بنجاح",
"Share": "كشاركة", "Share": "كشاركة",
@ -1641,6 +1654,7 @@
"Tools Function Calling Prompt": "توجيه استدعاء وظائف الأدوات", "Tools Function Calling Prompt": "توجيه استدعاء وظائف الأدوات",
"Tools have a function calling system that allows arbitrary code execution.": "تحتوي الأدوات على نظام لاستدعاء الوظائف يتيح تنفيذ كود برمجي مخصص.", "Tools have a function calling system that allows arbitrary code execution.": "تحتوي الأدوات على نظام لاستدعاء الوظائف يتيح تنفيذ كود برمجي مخصص.",
"Tools Public Sharing": "", "Tools Public Sharing": "",
"Tools Sharing": "",
"Top K": "Top K", "Top K": "Top K",
"Top K Reranker": "", "Top K Reranker": "",
"Transformers": "Transformers", "Transformers": "Transformers",
@ -1688,6 +1702,7 @@
"Upload Pipeline": "رفع خط المعالجة", "Upload Pipeline": "رفع خط المعالجة",
"Upload Progress": "جاري التحميل", "Upload Progress": "جاري التحميل",
"Upload Progress: {{uploadedFiles}}/{{totalFiles}} ({{percentage}}%)": "", "Upload Progress: {{uploadedFiles}}/{{totalFiles}} ({{percentage}}%)": "",
"Uploading file...": "",
"URL": "الرابط", "URL": "الرابط",
"URL is required": "", "URL is required": "",
"URL Mode": "رابط الموديل", "URL Mode": "رابط الموديل",
@ -1765,7 +1780,6 @@
"Workspace": "مساحة العمل", "Workspace": "مساحة العمل",
"Workspace Permissions": "صلاحيات مساحة العمل", "Workspace Permissions": "صلاحيات مساحة العمل",
"Write": "كتابة", "Write": "كتابة",
"Write a prompt suggestion (e.g. Who are you?)": "اكتب اقتراحًا سريعًا (على سبيل المثال، من أنت؟)",
"Write a summary in 50 words that summarizes {{topic}}.": "اكتب ملخصًا في 50 كلمة يلخص [الموضوع أو الكلمة الرئيسية]", "Write a summary in 50 words that summarizes {{topic}}.": "اكتب ملخصًا في 50 كلمة يلخص [الموضوع أو الكلمة الرئيسية]",
"Write something...": "اكتب شيئًا...", "Write something...": "اكتب شيئًا...",
"Write your model system prompt content here\ne.g.) You are Mario from Super Mario Bros, acting as an assistant.": "اكتب محتوى مطالبة النظام (system prompt) لنموذجك هنا\nعلى سبيل المثال: أنت ماريو من Super Mario Bros وتتصرف كمساعد.", "Write your model system prompt content here\ne.g.) You are Mario from Super Mario Bros, acting as an assistant.": "اكتب محتوى مطالبة النظام (system prompt) لنموذجك هنا\nعلى سبيل المثال: أنت ماريو من Super Mario Bros وتتصرف كمساعد.",

View file

@@ -95,6 +95,7 @@
"Allow Continue Response": "",
"Allow Delete Messages": "",
"Allow File Upload": "Разреши качване на файлове",
"Allow Group Sharing": "",
"Allow Multiple Models in Chat": "", "Allow Multiple Models in Chat": "",
"Allow non-local voices": "Разреши нелокални гласове", "Allow non-local voices": "Разреши нелокални гласове",
"Allow Rate Response": "", "Allow Rate Response": "",
@ -144,6 +145,7 @@
"Archived Chats": "Архивирани Чатове", "Archived Chats": "Архивирани Чатове",
"archived-chat-export": "експорт-на-архивирани-чатове", "archived-chat-export": "експорт-на-архивирани-чатове",
"Are you sure you want to clear all memories? This action cannot be undone.": "Сигурни ли сте, че исткате да изчистите всички спомени? Това е необратимо.", "Are you sure you want to clear all memories? This action cannot be undone.": "Сигурни ли сте, че исткате да изчистите всички спомени? Това е необратимо.",
"Are you sure you want to delete \"{{NAME}}\"?": "",
"Are you sure you want to delete this channel?": "Сигурни ли сте, че искате да изтриете този канал?", "Are you sure you want to delete this channel?": "Сигурни ли сте, че искате да изтриете този канал?",
"Are you sure you want to delete this message?": "Сигурни ли сте, че искате да изтриете това съобщение?", "Are you sure you want to delete this message?": "Сигурни ли сте, че искате да изтриете това съобщение?",
"Are you sure you want to unarchive all archived chats?": "Сигурни ли сте, че искате да разархивирате всички архивирани чатове?", "Are you sure you want to unarchive all archived chats?": "Сигурни ли сте, че искате да разархивирате всички архивирани чатове?",
@ -388,12 +390,14 @@
"Default description enabled": "", "Default description enabled": "",
"Default Features": "", "Default Features": "",
"Default Filters": "", "Default Filters": "",
"Default Group": "",
"Default mode works with a wider range of models by calling tools once before execution. Native mode leverages the model's built-in tool-calling capabilities, but requires the model to inherently support this feature.": "Режимът по подразбиране работи с по-широк набор от модели, като извиква инструменти веднъж преди изпълнение. Нативният режим използва вградените възможности за извикване на инструменти на модела, но изисква моделът да поддържа тази функция по същество.", "Default mode works with a wider range of models by calling tools once before execution. Native mode leverages the model's built-in tool-calling capabilities, but requires the model to inherently support this feature.": "Режимът по подразбиране работи с по-широк набор от модели, като извиква инструменти веднъж преди изпълнение. Нативният режим използва вградените възможности за извикване на инструменти на модела, но изисква моделът да поддържа тази функция по същество.",
"Default Model": "Модел по подразбиране", "Default Model": "Модел по подразбиране",
"Default model updated": "Моделът по подразбиране е обновен", "Default model updated": "Моделът по подразбиране е обновен",
"Default Models": "Модели по подразбиране", "Default Models": "Модели по подразбиране",
"Default permissions": "Разрешения по подразбиране", "Default permissions": "Разрешения по подразбиране",
"Default permissions updated successfully": "Разрешенията по подразбиране са успешно актуализирани", "Default permissions updated successfully": "Разрешенията по подразбиране са успешно актуализирани",
"Default Pinned Models": "",
"Default Prompt Suggestions": "Промпт Предложения по подразбиране", "Default Prompt Suggestions": "Промпт Предложения по подразбиране",
"Default to 389 or 636 if TLS is enabled": "По подразбиране 389 или 636, ако TLS е активиран", "Default to 389 or 636 if TLS is enabled": "По подразбиране 389 или 636, ако TLS е активиран",
"Default to ALL": "По подразбиране за ВСИЧКИ", "Default to ALL": "По подразбиране за ВСИЧКИ",
@ -402,6 +406,7 @@
"Delete": "Изтриване", "Delete": "Изтриване",
"Delete a model": "Изтриване на модела", "Delete a model": "Изтриване на модела",
"Delete All Chats": "Изтриване на всички чатове", "Delete All Chats": "Изтриване на всички чатове",
"Delete all contents inside this folder": "",
"Delete All Models": "Изтриване на всички модели", "Delete All Models": "Изтриване на всички модели",
"Delete Chat": "Изтриване на Чат", "Delete Chat": "Изтриване на Чат",
"Delete chat?": "Изтриване на чата?", "Delete chat?": "Изтриване на чата?",
@ -730,6 +735,7 @@
"Features": "Функции", "Features": "Функции",
"Features Permissions": "Разрешения за функции", "Features Permissions": "Разрешения за функции",
"February": "Февруари", "February": "Февруари",
"Feedback deleted successfully": "",
"Feedback Details": "", "Feedback Details": "",
"Feedback History": "История на обратната връзка", "Feedback History": "История на обратната връзка",
"Feedbacks": "Обратни връзки", "Feedbacks": "Обратни връзки",
@ -744,6 +750,7 @@
"File size should not exceed {{maxSize}} MB.": "Размерът на файла не трябва да надвишава {{maxSize}} MB.", "File size should not exceed {{maxSize}} MB.": "Размерът на файла не трябва да надвишава {{maxSize}} MB.",
"File Upload": "", "File Upload": "",
"File uploaded successfully": "Файлът е качен успешно", "File uploaded successfully": "Файлът е качен успешно",
"File uploaded!": "",
"Files": "Файлове", "Files": "Файлове",
"Filter": "", "Filter": "",
"Filter is now globally disabled": "Филтърът вече е глобално деактивиран", "Filter is now globally disabled": "Филтърът вече е глобално деактивиран",
@ -857,6 +864,7 @@
"Image Compression": "Компресия на изображенията", "Image Compression": "Компресия на изображенията",
"Image Compression Height": "", "Image Compression Height": "",
"Image Compression Width": "", "Image Compression Width": "",
"Image Edit": "",
"Image Edit Engine": "", "Image Edit Engine": "",
"Image Generation": "Генериране на изображения", "Image Generation": "Генериране на изображения",
"Image Generation Engine": "Двигател за генериране на изображения", "Image Generation Engine": "Двигател за генериране на изображения",
@ -941,6 +949,7 @@
"Knowledge Name": "", "Knowledge Name": "",
"Knowledge Public Sharing": "", "Knowledge Public Sharing": "",
"Knowledge reset successfully.": "Знанието е нулирано успешно.", "Knowledge reset successfully.": "Знанието е нулирано успешно.",
"Knowledge Sharing": "",
"Knowledge updated successfully": "Знанието е актуализирано успешно", "Knowledge updated successfully": "Знанието е актуализирано успешно",
"Kokoro.js (Browser)": "Kokoro.js (Браузър)", "Kokoro.js (Browser)": "Kokoro.js (Браузър)",
"Kokoro.js Dtype": "Kokoro.js Dtype", "Kokoro.js Dtype": "Kokoro.js Dtype",
@ -1006,6 +1015,7 @@
"Max Upload Size": "Максимален размер на качване", "Max Upload Size": "Максимален размер на качване",
"Maximum of 3 models can be downloaded simultaneously. Please try again later.": "Максимум 3 модела могат да бъдат сваляни едновременно. Моля, опитайте отново по-късно.", "Maximum of 3 models can be downloaded simultaneously. Please try again later.": "Максимум 3 модела могат да бъдат сваляни едновременно. Моля, опитайте отново по-късно.",
"May": "Май", "May": "Май",
"MBR": "",
"MCP": "", "MCP": "",
"MCP support is experimental and its specification changes often, which can lead to incompatibilities. OpenAPI specification support is directly maintained by the Open WebUI team, making it the more reliable option for compatibility.": "", "MCP support is experimental and its specification changes often, which can lead to incompatibilities. OpenAPI specification support is directly maintained by the Open WebUI team, making it the more reliable option for compatibility.": "",
"Medium": "", "Medium": "",
@ -1062,6 +1072,7 @@
"Models configuration saved successfully": "Конфигурацията на моделите е запазена успешно", "Models configuration saved successfully": "Конфигурацията на моделите е запазена успешно",
"Models imported successfully": "", "Models imported successfully": "",
"Models Public Sharing": "Споделяне на моделите публично", "Models Public Sharing": "Споделяне на моделите публично",
"Models Sharing": "",
"Mojeek Search API Key": "API ключ за Mojeek Search", "Mojeek Search API Key": "API ключ за Mojeek Search",
"More": "Повече", "More": "Повече",
"More Concise": "", "More Concise": "",
@ -1128,6 +1139,7 @@
"Note: If you set a minimum score, the search will only return documents with a score greater than or equal to the minimum score.": "Забележка: Ако зададете минимален резултат, търсенето ще върне само документи с резултат, по-голям или равен на минималния резултат.", "Note: If you set a minimum score, the search will only return documents with a score greater than or equal to the minimum score.": "Забележка: Ако зададете минимален резултат, търсенето ще върне само документи с резултат, по-голям или равен на минималния резултат.",
"Notes": "Бележки", "Notes": "Бележки",
"Notes Public Sharing": "", "Notes Public Sharing": "",
"Notes Sharing": "",
"Notification Sound": "Звук за известия", "Notification Sound": "Звук за известия",
"Notification Webhook": "Webhook за известия", "Notification Webhook": "Webhook за известия",
"Notifications": "Известия", "Notifications": "Известия",
@ -1269,11 +1281,11 @@
"Prompt Autocompletion": "", "Prompt Autocompletion": "",
"Prompt Content": "Съдържание на промпта", "Prompt Content": "Съдържание на промпта",
"Prompt created successfully": "Промптът е създаден успешно", "Prompt created successfully": "Промптът е създаден успешно",
"Prompt suggestions": "Промпт предложения",
"Prompt updated successfully": "Промптът е актуализиран успешно", "Prompt updated successfully": "Промптът е актуализиран успешно",
"Prompts": "Промптове", "Prompts": "Промптове",
"Prompts Access": "Достъп до промптове", "Prompts Access": "Достъп до промптове",
"Prompts Public Sharing": "Публично споделяне на промптове", "Prompts Public Sharing": "Публично споделяне на промптове",
"Prompts Sharing": "",
"Provider Type": "", "Provider Type": "",
"Public": "Публично", "Public": "Публично",
"Pull \"{{searchValue}}\" from Ollama.com": "Извади \"{{searchValue}}\" от Ollama.com", "Pull \"{{searchValue}}\" from Ollama.com": "Извади \"{{searchValue}}\" от Ollama.com",
@ -1457,6 +1469,7 @@
"Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "", "Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt.": "",
"Sets the size of the context window used to generate the next token.": "", "Sets the size of the context window used to generate the next token.": "",
"Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "Задава последователностите за спиране, които да се използват. Когато се срещне този модел, LLM ще спре да генерира текст и ще се върне. Множество модели за спиране могат да бъдат зададени чрез определяне на множество отделни параметри за спиране в моделния файл.", "Sets the stop sequences to use. When this pattern is encountered, the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile.": "Задава последователностите за спиране, които да се използват. Когато се срещне този модел, LLM ще спре да генерира текст и ще се върне. Множество модели за спиране могат да бъдат зададени чрез определяне на множество отделни параметри за спиране в моделния файл.",
"Setting": "",
"Settings": "Настройки", "Settings": "Настройки",
"Settings saved successfully!": "Настройките са запазени успешно!", "Settings saved successfully!": "Настройките са запазени успешно!",
"Share": "Подели", "Share": "Подели",
@ -1637,6 +1650,7 @@
"Tools Function Calling Prompt": "Промпт за извикване на функциите на инструментите", "Tools Function Calling Prompt": "Промпт за извикване на функциите на инструментите",
"Tools have a function calling system that allows arbitrary code execution.": "Инструментите имат система за извикване на функции, която позволява произволно изпълнение на код.", "Tools have a function calling system that allows arbitrary code execution.": "Инструментите имат система за извикване на функции, която позволява произволно изпълнение на код.",
"Tools Public Sharing": "Публично споделяне на инструменти", "Tools Public Sharing": "Публично споделяне на инструменти",
"Tools Sharing": "",
"Top K": "Топ К", "Top K": "Топ К",
"Top K Reranker": "", "Top K Reranker": "",
"Transformers": "Трансформатори", "Transformers": "Трансформатори",
@ -1684,6 +1698,7 @@
"Upload Pipeline": "Качване на конвейер", "Upload Pipeline": "Качване на конвейер",
"Upload Progress": "Прогрес на качването", "Upload Progress": "Прогрес на качването",
"Upload Progress: {{uploadedFiles}}/{{totalFiles}} ({{percentage}}%)": "", "Upload Progress: {{uploadedFiles}}/{{totalFiles}} ({{percentage}}%)": "",
"Uploading file...": "",
"URL": "URL", "URL": "URL",
"URL is required": "", "URL is required": "",
"URL Mode": "URL режим", "URL Mode": "URL режим",
@ -1761,7 +1776,6 @@
"Workspace": "Работно пространство", "Workspace": "Работно пространство",
"Workspace Permissions": "Разрешения за работното пространство", "Workspace Permissions": "Разрешения за работното пространство",
"Write": "Напиши", "Write": "Напиши",
"Write a prompt suggestion (e.g. Who are you?)": "Напиши предложение за промпт (напр. Кой сте вие?)",
"Write a summary in 50 words that summarizes {{topic}}.": "Напиши описание в 50 думи, което обобщава [тема или ключова дума].", "Write a summary in 50 words that summarizes {{topic}}.": "Напиши описание в 50 думи, което обобщава [тема или ключова дума].",
"Write something...": "Напишете нещо...", "Write something...": "Напишете нещо...",
"Write your model system prompt content here\ne.g.) You are Mario from Super Mario Bros, acting as an assistant.": "Напишете тук съдържанието на системния prompt на вашия модел\nнапр.: Вие сте Марио от Super Mario Bros и действате като асистент.", "Write your model system prompt content here\ne.g.) You are Mario from Super Mario Bros, acting as an assistant.": "Напишете тук съдържанието на системния prompt на вашия модел\nнапр.: Вие сте Марио от Super Mario Bros и действате като асистент.",

Some files were not shown because too many files have changed in this diff.